Playing with Patchblocks Synth Modules — or: Lessons in User Experience

7,700 words • 40 minutes reading time

TL;DR: Detailing my frustrating user experience with a new Patchblocks synth module and the editor software that comes with it, along with many suggestions and ideas on how to fix the issues. I’m using Patchblocks as an example because the experience is very fresh, but my observations and advice apply to lots of products, both hardware and software. I think there are a number of generally useful takeaways in this post, particularly if you’re new to user interface/user experience design.


A few months ago, I bought a Patchblocks synth module. Here it is, next to its box, on my desk:

[Image img_5728: the Patchblocks module next to its box]

 

What’s Patchblocks about?

Patchblocks modules are small programmable boxes that can handle all kinds of sound/synth/audio generation and processing tasks. You can chain several of them together to build more complex setups. There is a cross-platform graphical user interface (GUI) application with which you can edit patches and upload them to the modules.

Patches remain on the modules even when they are disconnected from the host computer (or from their power supply). That is, you use the editor software to program the modules, and then you’re free to take them somewhere to make music or manipulate sound — no computer connection needed.

The concept is very similar to that of Arduino, but designed to produce or manipulate sound, to be chainable, and with two buttons and two knobs on each module to get your hands on.

If you, like me, enjoy playing both with programmable devices of this kind and with synthesizers (or sound and music in general), Patchblocks are a great idea. The hardware is nicely designed and quite affordable: you basically get a tiny but fully capable synth and audio processor for about 50 €.

 

NOTE

I was prompted to write this post when I had a very frustrating user experience trying to use a Patchblocks module and the editing software that goes with it.

I’d like to point out that I’m not trying to blame Patchblocks in particular for any of the things I write about in this post. They apply to many products and services.

In fact, Patchblocks is a small company run by people that I can relate to. I like what they are doing and have no reason or incentive whatsoever to point them out specifically for any particular problem.

It just so happened that I decided to write about typical user experience issues after I failed to get this product working properly, using it as an example.

Initially, I published this post before talking to Patchblocks about these issues. That was a bad move, and I apologise (though I would have written this post anyway). I notified the Patchblocks team about the post on 2017-02-17.

Also, the post was originally much more of a rant, written in a tone that was often inappropriate. I have rewritten it several times into what is now, I believe, mostly neutral and constructive criticism.

If you have any kind of feedback, I’d love to hear it. Please simply leave a comment below. Thanks for reading!

 

Diving Right in — and Hitting the Ground

Once unpacked, I wanted to get some sound out of the module right away. So I connected the USB cable to my Mac (for power supply) and plugged the (presumed) audio-out into my audio mixer using the supplied mini-jack cable — and got … no sound.

Hmm. I’m twisting the knobs and pushing the buttons. Do I just have to turn the volume up? (I realise that the hardware controls are programmable, so they may not do what I think they do, but I’m just fiddling around, trying to get sound.)

Aha! There are two mini-jack connectors on the board. Both are unlabeled, which is strange, because everything else on the board is labeled, down to the tiniest surface-mount components. (Why?)

So, which one is right? Are they both audio-out, or is one a line-in? Do they even carry audio signals at all?

At this point, I would already have needed to check a manual or look at videos or google for information. I hate checking manuals (as most people do). Frustration sets in. I’m even wondering if my patchblock is defective.

The only indication of the module’s health is its power LED, which is lit, so the circuit is apparently getting power. I conclude that it’s not totally dead. Everything else is left for me to guess. (By the way, my model is a Macaque 1.2.)

UI/UX (user interface/user experience) suggestion: Label the connectors to make it obvious where the audio comes out.

UI/UX suggestion: Add some kind of indicator (another LED?) to the device to signal to the user that the device is producing sound right now. In this way, users can be sure that they should be getting sound.

Then, if the indicator is on, but there is no sound, there must be something wrong with cabling, or the signal path outside of the device. Without such a guide, users are forced to do more in-depth troubleshooting.

I tried different combinations and kept twisting knobs and pushing buttons, but I never got any sound out of it.

My next guess was that maybe you first have to upload a sound patch to the device, via USB, using the free Patchblocks editor software. That would be a huge missed opportunity: why not upload a demo patch to the device as a factory setting, so people can play with it right away and have fun?

UI/UX suggestion: Provide a demo patch, so people can play with the patchblocks module without first needing to hook it up to the editor software.

At the time, I had lots of other things to do, and I wasn’t in the mood to install and set up new software. I had a module that apparently wasn’t working properly, and no obvious way to find out what was wrong.

I put the module back into its box and shelved it. That was about three months ago.

 

Trying … and Failing Again

Fast forward to February 13, 2017 — I unpacked my patchblocks synth again. This time, I downloaded the editor software and installed it. That was easy enough.

For macOS, it comes as a standard disk image that auto-mounts once the download finishes, with a symbolic link to the macOS Applications folder; you just drag the app onto it to copy it, and that’s the whole installation. This is a best practice on macOS.

A digression (it’s really minor): there are two readme-type text files included in the mounted installation disk image as well. From my experience, barely anyone reads them or even looks at them.

I took a quick peek; there didn’t appear to be anything in them that you’d have to read in order to launch or start using the app. (That is too often the case, which explains why people tend to ignore these files.)

UI/UX suggestion: If you provide readme-type files next to your installation files, they should contain stuff users absolutely need to read before installing or launching the app. (That text should be as concise as possible.) If they don’t, get rid of them. You can put any such info in the app itself, where users can look it up if they so desire.

UI/UX suggestion: The best option is if users don’t have to read anything before they can start using your app.

When I tried to launch the app, I got a warning message saying that launching was prevented because it’s an unverified application (meaning it was not signed with an Apple developer certificate).

This is mentioned on the download page, with a short note about having to unblock it to get it to launch. But it doesn’t say how.

I know how because I’ve done it many times before. But many users may not, so there’s another stumbling block. Those users are forced to find it out themselves. Why not provide this help?

UI/UX suggestion: Provide simple instructions on how to bypass Apple’s security lock and get the app to launch. All it takes is a few lines of text and maybe two screenshots as guidance.

(This is how: launch the System Preferences app, open the Security & Privacy pane and switch to the General tab. There’s a note saying that application XYZ was blocked from launching, next to a button labeled Open Anyway. Just click that button; that’s it.)

 

Exploring the Application’s User Interface

The app launched into a small window with just two huge buttons:

[Screenshot patchblocks-app-ui-1: the app’s start window with two buttons]

Ah! Nice and simple. But what do the buttons do?

The text labels are descriptive enough, even though they imply that a preset and a patch are two different things, which, it seems, they aren’t (more about this below). The illustrations, however, are weird.

The Load Preset button shows a stylised drawing of a patchblocks module. What’s that supposed to mean? It’s like putting an image of a car next to the button that starts the engine.

It seems to say: “I wanted to put some kind of image here, but I didn’t really want to think about what would be a good choice, so I just used… anything.”

The wrench on the right is more indicative of what’s behind that button, even though a wrench is something that you’d normally use to fasten or fix something. I wouldn’t associate a wrench with creating a patch.

Also, the wrench image is only ever used on this button. The related New Patch toolbar function in the patch editor interface (see below) uses a completely different icon. Again, it seems really arbitrary.

Maybe the drawings are just for fun. They’re not meant to be good, self-explanatory icons. Maybe they’re decorative. But it’s mentally distracting that I have to think about this, even just subliminally.

The question that settles it is: does the user interface benefit from these images at all?

These are the kinds of little things that start off thoughts in users’ heads, and if those thoughts add to the confusion instead of clarity, it hurts the experience. You’re distracting the user instead of guiding them.

As for this interface, I am pretty sure it would have been better to leave those two illustrations out entirely. If they’re mostly decorative or humorous, I’d expect that style to be repeated across all of the application, but in this case it isn’t.

(Yes, this is really sweating the details. These details can make the difference between a mediocre user interface and a good one.)

An alternative: Use a more generally accepted and understood pictogram or icon. For example, loading a preset would map to the notion of opening a document, whereas creating a patch would map to creating a document. (Those icon examples aren’t too great, either, but they’d be less startling.)

UI/UX lesson: If you want to add illustrative or descriptive imagery (icons, pictograms) to interface elements, make sure that they are meaningful, that they actually convey the intended meaning, and that using them is advantageous over not using them.

If you can’t find a well-matching image, it’s preferable to use no image at all. Purely decorative imagery has a tendency to increase visual clutter — the presence of too much (or too dense) visual information that makes it hard to maintain focus — and there is a good chance you’ll end up distracting or even confusing your users.

I’m still very unsure as to why these two buttons are there at all. There must be a fundamental difference between loading presets and editing patches, otherwise I wouldn’t have to make this choice at this point.

I have no idea what that difference could be, and there is no explanation. I’m left to guess.

Let’s try Load preset. It takes you to a second interface:

 

The Preset Loader

[Screenshot patchblocks-app-ui-2: the preset loader]

Okay! Clicking on the presets, I expected — hoped — finally to hear some sound out of the little device. But nothing happens.

There’s a Play button at the top. I guess I need to click it. (Why? Isn’t it obvious that, when you click on a preset, the first thing you’d want to do is listen to it?)

UI/UX lesson: Anticipate what the user is most likely trying to do, and make that as easy and obvious as possible. Avoid making the user do more than what absolutely needs to be done. In general, avoid making the user think.

Finally, I can hear sound!

It takes me a while until I realise that the sound is not actually coming out of the patchblocks device. It’s coming from the editor software and playing through my Mac’s audio-out. The module is still mute. Bummer! What am I doing wrong?

Also, why do I have to check my audio mixer to find out where the sound is coming from? Wouldn’t it be helpful if the interface made that explicit? If sound is coming out of both software as well as the module (I can’t tell), that may need pointing out as well, because it’s not that obvious.

Ok, so maybe I need to upload a preset (patch?) to the device first. If that’s so, why doesn’t the interface provide a hint about it, and instead lets me guess what I need to do?

UI/UX lesson: Provide explicit information to the user about any functionality that does not immediately explain itself.

A good method is to place yourself in the position of someone who sees your interface for the very first time and has no idea what’s going on. Would you know what you need to do? If not, change the interface, or be explicit by providing some kind of guide.

Right, there’s a Load to Block button.

Why load, not upload? Why is the synth module called a block here? The package says Synthesizer Module, but the application keeps talking about blocks. Are they the same thing? If not, what’s the difference? Why use different names? (I’ll get back to consistent terminology.)

And why does the icon show the outline of a patchblocks module with an arrow pointing downward, which I’d associate with download?

Anyway, I click it. I get an error message that the device could not be found. How so? It’s connected via USB, and the module’s (block’s?) power LED is on, so it’s not a trivial connection issue. What is it?

It would be helpful to get some kind of hint what to do about this issue. As it is, all I can do, once more, is guess. (Sure, I could find help elsewhere, but then you’re making me do work that I shouldn’t have to be doing.)

UI/UX suggestion: Provide guidance on how to properly connect the module with the host computer.

You’d think: “How hard can it be to connect a USB cable? Does that really need explanation?” But consider that some USB devices must be connected to the main USB hub and don’t work properly when they’re connected to, say, the USB connectors on a keyboard. Be explicit.

Why does this have to be so frustrating? It’s not so much that there are issues — these things happen. It’s that I’m not getting any help whatsoever; I’m left to figure these things out myself. If I need a manual to figure out even basic setup, something’s not right.

I try the classic fix-it-all: I unplug the patchblock from the USB and plug it back in. Nothing happens.

UI/UX suggestion: Provide some means for the user to tell if and when the device is properly connected to the host computer, and accessible to the editor software.

Without such a guide, users are forced to guess what’s going on. It could be faulty cabling. It could be a problem on the USB driver level. It could be a problem with a configuration setting. Basically, it could be anything.

(It’s easy to overlook the most likely reason: the user hasn’t actually plugged their module in. People make stupid mistakes all the time. We’re only human.)

When you’re leaving your users on their own to figure out the problem, you’re sending implicit messages about your attitude towards the design of your application:

  • We didn’t anticipate this sort of issue (because we did not think about it enough?), and
  • We didn’t perform any user testing to check for it, or
  • We did these things, but we didn’t bother to do anything about what we found.

The take-away is this: do you care about the experience your users will have with your product? (If you don’t care, what does this say about your business?)

Ok, now when I click on Load to Block, I get a Firmware updated! message. Apparently, the software recognised the patchblock after all — though I’m still only guessing, as there’s no way to tell.

That message is confusing, too. I’m not sure what kind of firmware it’s talking about. (By the way, are you sure that all of your users know what firmware even means? My hunch would be yes, but that could be an incorrect assumption.)

So did the app just run an update of the module’s basic firmware — i.e. its operating system? That was actually my initial thought. I was pretty sure that if it meant to say it uploaded a preset, it would have said so. Again, I’m left to guess what is really meant.

UI/UX lesson: Provide explicit feedback to the user, using straightforward language. (In particular, avoid assuming that your users already know your internal terminology. New users almost never do, and they cannot learn it by guessing, either.)

Instead of saying something technical and ambiguous like Firmware updated!, you could say Success! The patch/preset was uploaded to your patchblocks module. That would be so much more obvious, and it just takes a bit of thought and a few more words.

Clicking around in the preset loader UI, I find out that I get sound — out of my Mac, not out of the patchblocks module — for some of the presets, but nothing happens for some of the others. There’s no apparent reason why, and no indication to tell.

By now, I’m aware that the Patchblocks software has some major conceptual and design issues, as well as what appear to be significant bugs, so I’m kind of ignoring this issue for the moment. I realise that I’m already expecting things not to work by now. That is a pretty bad road to be going down as a new user.

When changing a preset, I was expecting to hear a different sound immediately. Instead, I had to click on the play button twice. Once, to turn off the previous preset. A second time, to turn on the newly selected preset.

That is, to check out a new preset, a user is forced to do three clicks where one would suffice. If there’s a legitimate reason for this behaviour, it’s definitely not obvious.

UI/UX suggestion: Auto-play a new preset on change. If there is a reason that presets have to be stopped and started after change, it’s not immediately clear, so it needs explaining.

You may have noticed in the screenshot that there is a preset selected, but that selection is not highlighted in the preset list. (This is probably simply a bug, but I’ll mention it for completeness’ sake.)

UI/UX lesson: If an interface element has a state, such as a selection, it’s a good idea to visually represent that state as feedback to the user, for instance by using a change of colour.

 

My Impression so Far

Meh. I haven’t even seen most of the application yet, and there was already so much to stumble over. A lot of things don’t look like they have been thought through very much, if at all.

Is this application even finished, or is it in a non-final stage of development? Did the developers perform any kind of user testing?

At this point, I’m frustrated enough to give up, but the fact that at least I got some interesting sound out keeps me going.

I click on the top right button that says Switch to editor. This takes me to…

 

The Patch Editor

[Screenshot patchblocks-app-ui-4: the patch editor]

Holy cow! That’s a big window. On first glance, there is a lot going on, but why are there all these sections with nothing in them?

The screenshot shows its contents after I loaded an example patch. Initially, there is no patch loaded, so you’ll see a big window that’s even emptier, leaving you wondering what’s going on.

UI/UX suggestion: On switching to the patch editor, pre-load a simple patch to make it obvious how to use the interface. One of the tutorials would appear to be a good choice.

Glancing around in the patch editor UI, most of it is sufficiently self-explanatory, but I’ve used similar graph-based patch-editing software before, so it takes me only a few minutes of playing around until I get it.

For other users, this experience may be very different. How do you find out if users will understand your interface? The best way is to test it as much as possible, with real users, preferably ones who haven’t seen it before and don’t know anything about it.

UI/UX lesson: Perform user testing to find out if a user interface will work.

Simply hoping that an interface will be successful is almost guaranteed to fail for all but the most elementary interfaces. (I’ve learned this the hard way.)

Improving the Patch Editor Interface

There are many things to improve upon in the patch editor, but they are rather minor compared to the other issues.

Reduce clutter

Right from the start, the user is presented with no less than seven sections or panels:

  • a toolbar,
  • a nodes browser,
  • the editing workspace,
  • an inspector,
  • an emulator,
  • a console,
  • and a help area.

While it becomes obvious rather quickly what each of them is for, a suggestion would be to hide/show editor panels based on the current editing context.

For example, there is no need to show the console until there is actually console output. Likewise, there is no need for the inspector or help while there is no active selection in the workspace.

As you can see in the above screenshot, many panels are completely empty, not only wasting space, but consuming a part of the user’s attention before it’s necessary.

Give the Window the Room it Needs

The patch editor window appears to open up to a fixed size, which is too small to see all of the workspace, so users have to pan around. While this can’t be avoided on smaller screens, an easy optimisation would be to let the window grab as much space as it can.

It’s a sensible default used by almost all productivity software — that is, anything that entails some notion of a document workspace. There’s nothing to lose. Users can always make the window smaller if they so desire.

I’d recommend not to go truly full-screen (i.e. make the window contents fill all of the screen, hiding the operating system chrome) — this would only be acceptable for immersive applications such as games.

Center All Windows on First Launch

All of the app’s windows initially open up somewhere near the top left corner of the screen, apparently with a fixed offset. On large monitors in particular, this is distracting. A much better choice for initial window placement is to simply center the windows onscreen.
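For what it’s worth, centering is trivial to compute once you know the screen and window dimensions. A minimal sketch in Python (the function name is mine, not any particular framework’s API):

    def centered_origin(screen_w, screen_h, win_w, win_h):
        # Split the leftover space evenly on both axes.
        return (screen_w - win_w) // 2, (screen_h - win_h) // 2

    # Example: a 1200 x 800 window on a 2560 x 1440 display opens at (680, 320).
    print(centered_origin(2560, 1440, 1200, 800))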

Use Self-Explanatory Icons

Looking at the toolbar, it’s not immediately clear what at least half of the buttons do. New Patch and Play are pretty obvious; Open and Save a little less so. (How many people nowadays still associate saving with the image of a 3.5″ disk? That technology disappeared about 15 to 20 years ago!)

Helpfully and thankfully, the developers provided tooltips that appear on hovering the mouse pointer over the icons. But why would you use icons that force your users to do work to find out what they mean? Doesn’t that kind of defy the point of having icons in the first place?

Suggestion: Use icons that explain themselves immediately.

Alternative suggestion: Use icons with text labels next to them, and allow users to hide the text labels once they have learned what the icons mean.

(By the way, you used icons with text labels in the preset loader, so why not in the patch editor? Not only would that be more consistent, it would also solve this issue.)

I can see why you’d remove the text labels in the patch editor’s toolbar: to save space. But you’re not very space-conscious about the rest of the window’s contents, and letting users guess what the icons mean — so you can save space in the toolbar — seems a bad trade-off.

A solution for the space problem would be to allow the toolbar to wrap into two — or more — rows, if the UI framework provides this. That’s not very elegant, either, but I think there’s a way to avoid the problem entirely:

Move Login Fields Somewhere Else

Why are there login fields in the toolbar? They’re not tool functions. Is it necessary for the user to see them all the time? Wouldn’t a user either be logged in or not? And if they don’t want to log in yet, does it help that they permanently see those login fields?

Suggestion: Move login fields to a separate dialog/window/panel, and replace the Login label with a login/logout button.

Not only would this save lots of space, it would also make the interface much cleaner. With their white background colour, the login fields are quite distracting. The high contrast causes them to get much more visual focus than what seems appropriate.

Strive for a More Native Look-and-Feel

The app is built using some cross-platform UI framework or environment (my guess would be Java) that feels somewhat alien. Personally, I’m probably overly sensitive to this; other users may not care much.

Anyway, this is tough to solve. It’s very challenging to build cross-platform GUI apps in such a way that they feel at home and elegant on each platform.

Once an app is built, there is no strong business case and thus very little incentive to change the foundation on which it was built — it would likely require a re-write of large parts of the code.

Maybe there was a good reason for choosing that foundation; maybe it was the only realistically available choice. There’s no point in arguing for or against any such choice now.

(I wrote a post about the problems with cross-platform UI layers and ideas about how to write a new platform-independent UI layer from scratch.)

Simplify Editing Text Frame Content

When editing a text frame in the workspace, the text turns into HTML.

I’d always prefer editing HTML over rich text, because HTML is more open and standardised. But there are no editing controls, and having to write HTML by hand is probably less than user-friendly:

  • Users have to know HTML.
  • They have to type HTML code manually, which is inefficient and inconvenient.
  • HTML is much richer than it needs to be — you’ll probably have to do quite a bit of post-processing of the input to filter out things that shouldn’t go into those text frames.

From the patches I’ve looked at, it appears that you’d need at most the following structural options for those text frames:

  • Plain body text
  • Two kinds of text emphasis (bold, italics)
  • One to three levels of headlines
  • Possibly bulleted/numbered lists
  • Links

If you add code sections, blockquotes and images to this, you have exactly the expressive power of Markdown. It’s a very simple text markup language that looks very much like regular plaintext, but can be transparently and efficiently converted to HTML.

(An implementation in PHP using only basic string handling and regular expressions — no external dependencies — takes about 40 kB of code.)

That HTML can then be styled using CSS, as before, but Markdown is much more approachable for your users, and you’ll probably need less code to handle it. Seems like a double-win.
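To illustrate how little machinery such a Markdown subset needs, here is a rough sketch in Python (rather than the PHP implementation mentioned above), using only regular expressions and basic string handling. It covers headings, emphasis, links and bulleted lists; it’s an illustration of the idea, not a complete or robust converter:

    import re

    def markdown_to_html(text):
        # Convert a tiny Markdown subset (headings, bold, italics, links,
        # bulleted lists, plain paragraphs) to HTML. A real converter also
        # handles escaping, nesting, numbered lists and many edge cases.
        out = []
        in_list = False

        def close_list():
            nonlocal in_list
            if in_list:
                out.append("</ul>")
                in_list = False

        for raw in text.splitlines():
            line = raw.strip()
            # Inline markup: bold, italics, links.
            line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
            line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
            line = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', line)
            # Block structure: headings, list items, paragraphs.
            heading = re.match(r"(#{1,3})\s+(.*)", line)
            if heading:
                close_list()
                level = len(heading.group(1))
                out.append(f"<h{level}>{heading.group(2)}</h{level}>")
            elif line.startswith("- "):
                if not in_list:
                    out.append("<ul>")
                    in_list = True
                out.append(f"<li>{line[2:]}</li>")
            elif line:
                close_list()
                out.append(f"<p>{line}</p>")
            else:
                close_list()
        close_list()
        return "\n".join(out)

    sample = "# Demo patch\n\nTurn *knob 1* to change the **pitch**.\n\n- button 1: trigger\n- button 2: hold"
    print(markdown_to_html(sample))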

Use a More Elegant Typeface

This is more a matter of taste than a UI issue in any sense, so I’ll just add it as a remark. There’s nothing wrong with geometric sans-serif typefaces such as Futura, but I think they look very awkward and clumsy when used for application interfaces.

I’d bet that most professional [user interface] designers would agree with me on that. Futura was designed in the 1920s — maybe that explains why it feels so out of place.

However, Patchblocks (the company) use Futura as a corporate typeface, and it’s generally a good idea to be consistent and use it everywhere.

 

Trying to Make Things Better, but Ultimately Making them Worse

Scanning the patch editor UI, I wonder what that strange button in the very top right of the toolbar is for. The icon hints at some kind of exchange.

Ok, the tooltip says it takes me back to the preset loader interface, and now I remember where I’ve seen it before.

So, there are actually three interfaces, in three separate windows:

  1. An initial small window to select between the low-complexity preset loader interface and the high-complexity patch editor.
  2. The preset loader.
  3. The patch editor.

Why?

Now that I’ve seen the patch editor, I realise that the preset loader doesn’t do anything that the patch editor doesn’t do as well; it just does less of it. Why do you need the preset loader, then? Isn’t it redundant?

Maybe I’m misguided, because until now, I’ve silently assumed that a preset and a patch are actually the same thing. At least there is nothing in the user interface that makes a difference explicit, so I have reason to believe my assumption is correct.

If patches and presets are the same thing, why use different names?

UI/UX lesson: Use consistent terminology. The same kind of thing should always be identified by the same name. If you use different names, but don’t make it explicit what the difference is (is there any?), you’re causing confusion.

Earlier, I stumbled over the apparent difference between a module and a block. It looks like they’re the same thing, so why not choose one name and use it consistently? The same applies to patches vs. presets.

Assuming that patches and presets are indeed the same thing, the application provides two separate interfaces that operate on the same object. And it’s not at all clear why, other than that one is simpler. It’s simpler because it only provides a subset of the possibilities of the other, but it still duplicates all of those possibilities.

Why would you choose to do that? My guess is that the designers of the Patchblocks app tried to make things easy by initially hiding complexity. While that’s generally a good idea, in this case, I think, it backfired: it made things worse.

When the app is launched, the user has to choose between the preset loader and the patch editor — but at this point, it’s not at all clear what the difference is. The user is forced to make a choice between either interface before they have the knowledge required to make that choice.

But even after seeing both interfaces, the difference isn’t obvious.

I may be overlooking something important, but if I’m not, there is no need for a separate preset loader at all. If that’s true, there’s no need for the initial two-button window either. Finally, there’s no need for a button to switch interfaces.

 

Good Riddance

UI/UX suggestion: Get rid of both the initial two-button chooser and the preset chooser interface, and instead just keep the patch editor.

The advantages are immediately obvious:

Users aren’t confused by initially having to choose between two separate interfaces before they even understand why those two separate interfaces exist. Win.

Instead of having a separate interface for patch loading, add a patch selector to the patch editor. (It already exists, by the way, but it’s hidden in the menu. More on this below.)

The single biggest missed opportunity is this: When selecting a patch (or preset), the patch would be loaded into the editing workspace. Users could immediately make a mental connection and see how the patch was built, and more quickly get an idea of how the patch editor works. Huge double-win.

You can get rid of an entire interface window — the preset loader — that confuses users and duplicates some parts of the patch editor interface. Much less confusion, much less code. Major double-win.

You can get rid of the now redundant two-button interface chooser window. Less confusion, less code. Mid-sized double-win.

You can now also get rid of the Switch Interfaces button in the patch editor. Again, less confusion, less code. Small after-dinner snack double-win.

 

Selecting Patches

It would seem that choosing patches is likely the most frequently-used operation in all of the application. It would have to be, if the separate preset loader interface was dropped.

However, to select a patch from within the patch editor, you have to click a menu — one of two menus, actually — containing sub-menus that contain the selectable patches.

This is bad in several ways:

  • Menus hide interface options — you have to click on a menu before you can see its contents.
  • Sub-menus make it worse — they hide content even deeper, and you need more clicks or mouse interactions to navigate the menu structure.
  • Having two separate menus implies that they contain fundamentally different items, but they are actually just two categories of patches — examples and tutorials.

(The last point looks like an attempt to avoid having a patch selector menu with two levels of nested sub-menus, but it’s still inconsistent and confusing.)

So, users are forced to dig through an opaque menu structure that effectively hides something they will likely need quite often. At the same time, the patch editor window permanently shows interface sections that seem much less important (login fields, console, help).

That doesn’t make any sense.

If we have seven panels in the patch editor already, another one wouldn’t hurt much. Why not add a patch selector panel, so patch selection is the first thing users see in the patch editor?

UI/UX suggestion: Move the patch selector out of the menu and into a new panel within the editor window. I imagine a full-height vertical panel at the left edge of the window would make the most sense.

To replicate the menu hierarchy, you could re-use the accordion-like control that you used for the nodes browser — it’s a good choice to present sub-selections.

 

The Workspace is Fun (The Good Parts)

There are a lot of very unfortunate choices in the application’s user interface. It’s time to mention what works well.

I really enjoyed the workspace. If you’ve ever used a graph-based interface like this before, you’ll be able to work with it almost immediately. The node-graph metaphor is already well established and very intuitive.

I spent quite some time just moving things around in the workspace, and looking at all the different node types and their parameters. It seems a lot of work has been put into this, and from what I can tell, the system is enormously flexible and powerful.

Other than suggesting to use Markdown (or something else that is simpler than HTML) to represent text frame contents, I can’t see how the workspace could be significantly improved.

I did come across a small bug in the graph rendering code: When moving a node very quickly past the edge of the workspace, the connecting line gets stuck outside of the screen, while the node keeps moving with the mouse pointer. (The graph is correctly redrawn once you let go of the node.)

Most patches come with quite a bit of text to read, and could be cleaned up a little. I don’t recall if the workspace provides a grid to which nodes and text frames could snap. Grids are a very efficient and intuitive way to help users clean up their graphical documents.
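Snapping itself is cheap to implement; the core of it is a single rounding step per coordinate. A quick sketch in Python (the names are mine):

    def snap(value, grid=16):
        # Round a coordinate to the nearest multiple of the grid spacing.
        return grid * round(value / grid)

    print(snap(37), snap(41))  # 37 snaps down to 32, 41 snaps up to 48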

 

Patchblocks Still Mute

I still didn’t get sound out of my Patchblocks module, and the patch editor did not help me find the problem either.

As I tried different things to get it to work, I came across two problems that eventually made me give up trying to use the app.

 

Showstopper Bug Number One

Every time I load a patch into the editor, I get this error message:

[Screenshot patchblocks-app-ui-3: the error message]

Ouch. This looks like a setup issue where the initial setting was bad, rather than an actual bug. It’s probably easy to fix if I look it up online. (Nevertheless, it shouldn’t happen.)

But that is only what I think is going on, based on my experience with software in general and a hunch about what the things the message mentions might be. Taken literally, I have no clue what the message text actually means.

What is a local block description? How is it different from the one used in this patch? What is it about the audio output that every single patch I load seems to have this problem?

What does it mean that these blocks won’t operate as expected? Does it mean that my module is broken? How? Can I fix it? Why blocks (plural) — can’t the app tell that there is only one module connected?

I don’t mind the issue itself so much as the fact that the error message is so utterly useless, it’s actually insulting. Why didn’t you even make an attempt at allowing a new user to understand what’s going on?

Not only do I not have a chance at solving the issue from reading that error message — it seems to imply that there is something more serious going on. This message is so confusing and irritating, it’s making me feel stupid.

That is you, the developer, making me do your work. It should be your responsibility to make error messages comprehensible to someone who has no prior technical knowledge of the internals of a system they are using for the very first time.

 

Intermission: Language Issues

There is a missing period at the end of the last sentence in the error message above. That’s nitpicking, of course. But I’ve only looked at part of the user interface and a small part of the website contents, and I saw at least a handful of spelling and punctuation mistakes.

Those mistakes leave a bad impression. I would never judge someone because they personally make language mistakes. (I make a lot of mistakes in a lot of areas, and I don’t like people judging me for any of them. Nobody does.)

But it’s something else if the content represents a business. Again, you’re sending implicit messages: “We’re not seeing these errors” (bad), or “We’re seeing them, but we don’t bother to fix them” (worse).

The basic message you’re repeatedly sending by not paying attention to any of this is that you don’t care. If you send that message even subconsciously, you’re hurting your business.

UI/UX lesson: It’s a good idea to get someone to proofread all of your text content. It’s best to have all text written by professional writers who know the language at a native-speaker level.

 

Showstopper Bug Number Two (Probably an Edge Case)

I came across another problem soon: The console would suddenly tell me something about having to re-run the app as root (or using sudo) to acknowledge some license restrictions.

Double-ouch! Why is there suddenly a license requirement, and why isn’t there a more user-friendly way to deal with it? (Do your users know how to run an app as root? Do they know how to use sudo?)

But wait — I’ve seen this message before. It comes from outside of the Patchblocks app.

A developer on macOS, who is almost certainly using Apple’s Developer Tools, will probably rarely see it, which is why I think this could have slipped through.

The message originates from said Developer Tools, and it’s only shown if a) you happen to have the Tools installed, b) you’ve installed an update to them, and c) you haven’t used them since that update.

That’s very unlikely for a macOS developer — or probably most people who have downloaded the Tools —, but it was the case on my system.

The fix is easy enough and takes about ten seconds: launch Xcode, and you’ll be asked to agree to the software license, which entails entering your login password.

What I don’t understand is why it would show up in the Patchblocks app’s console. This would mean that the app is internally running some tool — or accessing some system service — that happens to be part of the Dev Tools.

It would imply in turn that the Patchblocks app relies on the Tools being installed. That can’t be right, and there is no mention about it on the website (as far as I could tell from a glance), so this points to some other non-obvious issue.

Even though Patchblocks isn’t (directly) to blame, at this point, I had enough.

I quit the app and put the synth module back in the box. Instead of trying to fix any more problems or understand confusing user interfaces, I decided to write this very long post.

 

Why not Talk to the Developers?

Why did I not submit an issue report, but write a blog article instead? It’s a fair point. I’ll try to keep it brief:

It’s not my responsibility. Developers or businesses should ship products that work. Waiting for your users to point out and fix issues is an unhealthy business attitude that I reject (as should you).

It’s often futile, or I’m expected to invest additional work, for free. My experience with reporting issues is that they are ignored half of the time anyway, and businesses often expect me to do their work when dealing with issues, such as opening tickets on their issue trackers, or registering for and posting to some forum to get help.

I refuse. This is not an open-source or community effort; it’s a commercial product. I paid for it, and I expect it to work. I refuse to perform troubleshooting, debugging, testing or issue reporting before I can use a product that I paid for.

That said, singling out Patchblocks to make an example of these kinds of problems may seem really inappropriate. It is.

They are a small company that hasn’t been around for long. They probably don’t have the resources or the experience that larger companies have — and I have had much, much worse user experiences with much more expensive products from much larger companies.

If you work for Patchblocks and you’re reading this, let me restate that I don’t mean to pick out your business specifically, and I don’t mean to hurt your business. On the other hand, if you had provided me with a product that worked, I would not have written this post.

 

Conclusion

The product has serious problems in the user experience area.

As to the editor software, I’d have to say that it’s not production-ready. It appears to be in more of a beta stage of development.

I have no problem at all with using products that are in beta, but if so, I’d like to know ahead of time. There is no such information on their website (or it was too easy to overlook), so I was led to believe I was using a finished product.

As a customer and user-to-be, I feel somewhat betrayed, maybe disrespected, possibly both. The issues I encountered are significant enough that I could probably lawfully argue to return the product and request my money back.

It’s true that I did not actually pay any money for the editor software — it’s a free download. But the software is required to actually use Patchblocks modules, so I consider it part of the product that I paid for.

As to my module — maybe it really is defective, which could explain why I’m not getting any sound, or why it’s not reliably recognised by the editor app. Unfortunately, there is no obvious way to tell.

Assuming the module really is defective, how is it possible that faulty modules reach end users? Do you not perform hardware tests before shipping?

Could it be that I did something to break my module inadvertently? If so, should there not have been an obvious warning to prevent such accidents?

I wish I didn’t have to guess all these things all the time.

 

About the Author

I’m a designer and developer from Germany, with a focus on user interfaces and user experience. I’m professionally trained in Visual Communications and received a Diploma in that field from Bauhaus University in 2001.

I’ve spent six years abroad while at public schools — three years in Port Elizabeth, South Africa, and another three years in Brussels, Belgium. I currently live in Northwestern Germany.

I’m self-taught in writing, software development and hardware tinkering. I’m also a musician who plays keyboards, some drums and a little bass. My other areas of interest include chemistry, pharmacology, physics, psychology and philosophy.

Currently, I’m teaching myself various fields of programming, including 2D and 3D graphics, game development, user interfaces and operating systems.

Unwilling to submit to bloated and proprietary systems that try to lock in their users, I’m always looking for ways to make things more lightweight, efficient, open and accessible.

 

Edit history:

  • 2017-02-13, 18:40 CET — Initially published.
  • 2017-02-13, 19:50 CET — Updated to make it clear I’m not critical of Patchblocks specifically, but using their product platform to illustrate some points about user experience.
  • 2017-02-14, 13:10 CET — Updated again to stress some points and add some details.
  • 2017-02-14, 16:05 CET — Updated once again, re-writing some sections to tighten the focus, and adding some more observations.
  • 2017-02-14, 20:30 CET — Fixed some details, re-wrote most of the later parts of the post.
  • 2017-02-14, 22:00 CET — Another edit; trying to make many points clearer, but probably just adding words.
  • 2017-02-18, 00:00 CET — Almost completely rewritten. Changed from a rant to a more constructive criticism. Removed some entire sections that didn’t really belong here. Reduced from 8,000 to 6,500 words. (So far, anything between 25 and 30 hours went into this post. I wanted to spend a maximum of 1 to 2 hours on this, but for some reason I can’t stop working on it.)
  • 2017-02-18, 17:20 CET — Re-wrote large parts once again. Back up to 7,700 words (unfortunately); at least 30 hours of work in total. I think it’s done now.

Thoughts on writing native, cross-platform GUI applications

(3600 words; updated with some additional thoughts, and reworded a few sentences on 2017-01-09)

While turning various ideas for future side projects around in my head, I keep thinking about how to tackle the problem of writing independent, platform-agnostic applications with a graphical user interface (GUI).

Existing cross-platform GUI frameworks

There are already many ways to do this:

  • Use a cross-platform language/framework/environment that has some notion of GUI already built-in from the start. Java immediately comes to mind. (Are there any others?)
  • Use a toolkit/framework that is deliberately designed to create GUI applications that run across platforms. The two most widespread and popular ones appear to be:
    • Qt (on which the Linux desktop environment KDE is based, as well as a large number of more or less largish cross-platform GUI apps)
    • GTK+ (on which the Linux desktop environment GNOME is based, among many others, too)
    • Alternatives (likely incomplete): List of widget toolkits – I haven’t seen most of these in action; and those that I have at least had a glimpse of seem to be relatively niche solutions (which is not meant to be a judgement; they could still be great, but I know very little about them)
  • Use a (headless) web browser as a kind of shell or runtime and build the GUI using HTML, CSS and JavaScript. A popular example of this is Electron, with which the text editor Atom was built.

I have looked at all of these, and they all have their advantages and drawbacks.

Java

Java GUIs, for instance, have a bad reputation for being »ugly«, cluttered, and often sluggish. Having followed their progress over the last 10 to 15 years, I’d say that used to be mostly true (to my eyes), but it seems less true today. Still, even today I don’t know of any Java GUI application that feels particularly elegant to use, or particularly »at home« on any host platform or operating system. (I should add that, coming from macOS, I’m probably quite prejudiced – or spoiled, if you want to call it that.)

And, of course, that isn’t to say that they aren’t useful. Just to give an example: I’ve spent some time working with JetBrains’ line of IDEs, which are (to my knowledge) completely written in Java, using no native UI components, and they are really impressive and powerful tools. (While it looks like they put a whole lot of work into building good user interfaces, they are still so cluttered that I just don’t enjoy using their products as much as I like using, say, Sublime Text or Atom, despite being aware that IntelliJ IDEA eats both of these for breakfast when it comes to features.)

Qt and GTK+

Qt and GTK+ have similar issues, and Qt additionally used to have some problems with licensing and the question of who exactly »owns« or controls the technology. It looks like it’s fairly liberal to use now; as far as I can tell, it is dual-licensed, with both open-source (GPL/LGPL) and commercial options.

Java GUIs, Qt and GTK+ also seem to have another thing in common, which is their relative bloat, to put it disrespectfully. This may be an incorrect impression on my side, but from looking at real-world apps and code samples using these frameworks, they all seem somewhat heavy. They want to be as flexible and support as many features as possible (which makes complete sense), but this naturally comes at a price: they are all quite large and complex. Too large, and too complex, for my taste.

At a glance, the APIs are huge, and it appears you’ll have to invest a very substantial amount of time before you can write anything beyond the most trivial »toy« or »demo« apps. As a consequence, you become somewhat locked-in to using one framework or the other. It’s not very likely that you’ll end up being able to do the same thing in all of these frameworks, due to the effort required to get there. It’s very possible that I’m just lazy (or even misguided or misled), but this prospect has kept me very reluctant to seriously learn Qt, GTK+, Java GUIs, or anything similar. I guess I’m just too afraid that once I do decide on one and dive into it, I may find out it’s not what I was looking for after all, but by then I’ll have spent months or maybe years, and I may not be able to turn back. I’ll be sitting in a truck that’s so huge and heavy I wouldn’t know how to stop it anymore. (Is that a reasonable fear? I’d love to hear your feedback on this.)

The web browser as a GUI framework

The relatively recent possibility of using a browser engine to build a GUI on top of it is very exciting. For one, it’s extremely approachable (particularly if you have experience building web applications) and flexible. Basically, you can do anything that you can do on a web page (something that I think no native GUI framework can directly compete with). It’s also probably the easiest way to build a GUI app that runs natively today. But, some significant drawbacks of this approach are immediately obvious:

  • Performance. There is of course a very large overhead of running a GUI inside a browser. (Open a 2-megabyte text file in Atom to get an impression.) This overhead is likely to become smaller as browser engines become more efficient all the time, but these apps will probably never perform as well as an app that runs more natively on the host operating system. Sure, if performance is not that much of an issue, then this point is of little relevance. But I do remind myself that there is a reason we haven’t done this kind of thing 5 or 10 years ago. Browsers weren’t as capable. Running JavaScript code was orders of magnitude less efficient. Hardware was much slower. (There is an enormous amount of effort that went into speeding up all of these parts so much that we can now do this, which is absurdly wasteful if you think about it. On the other hand: now we can, so what should be keeping us?)
  • Size and memory requirement. Even writing a »Hello World« app in Electron requires running the browser shell and node.js, which takes up significant space on disk and uses significant memory. Let’s look at some numbers, taking Atom on macOS as a popular example:
    • The entire Atom app bundle takes about 260 MB (current release, 1.13.1).
    • The Electron framework alone (which contains a complete Chromium browser engine) is just over 100 MB.
    • Almost all of the remaining weight, namely about 150 MB (across ~5,600 files) resides in the Resources directory of the app bundle.
    • Of this 150 MB, the node.js executable is about 23 MB in size, and the node_modules directory (which is where practically all of Atom’s functionality resides) contains the majority of the files (~5,300 in total; about 23 MB as well).
    • For comparison, another popular text editor, Sublime Text 2 (release 2.0.2 on macOS), uses about 27 MB on disk, which is roughly one-tenth of Atom, or about the size of just node.js on its own.
    • You may not care about any of these numbers, and on most of today’s hardware, they probably don’t matter. About memory usage, I’m not so sure, but I haven’t checked that.
  • Openness of the code. This may be the most critical issue in case you intend to build an application that you want to sell to users. As all of the functionality is implemented using HTML, CSS and JavaScript, all of which is essentially human-readable plaintext, anyone can just take that code, read it, copy it, change it, do whatever they want. With compiled code, this is not possible, at least not for non-specialists (you would have to disassemble or decompile it, which requires much more advanced knowledge, experience and specialised tools). I’m not sure if there are ways to encrypt the code to protect it, but I don’t think that’s something the developers of Electron & Co. had foreseen. An Electron-based app is open-source by design, and of course that is not a bad thing at all, in and of itself. However, if you are an independent developer who has a vision of making some kind of income from selling their software (instead of just selling services around it), it’s probably not an option.
  • JavaScript. A lot of people have many bad things to say about JS. Personally, I think this is the least relevant criticism. There are many ways to get around the limitations and quirks of the language. For instance, you can write your code in TypeScript or CoffeeScript or Dart and have it automatically cross-compiled to JS. Also, JS is evolving very rapidly, and with standards such as ES6, JS is becoming increasingly developer-friendly.

 

Is there another way?

Having considered all of this, I’ve been thinking about how else to approach writing GUI apps. To set the stage, this is what my ideal solution would look like:

  • It is truly platform-agnostic, i.e. it makes no assumptions on what hardware, operating systems etc. it should run on. It should be relatively little work to get it to work on a different platform.
  • It is independent, i.e. it’s not controlled by a corporate entity, or by any body that controls it in a way that lets it coerce you into doing things that serve its own interests rather than those of its users or developers. This probably implies that it would have to be open-source.
  • It hits some kind of sweet-spot in trading off performance, flexibility, feature-richness, complexity and approachability (i.e. ease of use). It seems like you can’t have a solution that is maximally performant, flexible, approachable, has all the features you could think of, and is minimally complex, all at the same time. Acknowledging this, I’d immediately sacrifice feature-richness. Reducing features to a minimum would get rid of complexity and, consequently, probably improve both approachability and performance. It would not necessarily affect flexibility.
  • It’s compact and fast. Did I mention that I don’t like bloat? I’d like my GUI layer to be as lightweight (in terms of memory usage and executable size) and performant as possible, while still being pragmatic. I do actually want to build some useful real-world apps with it, so there is a lower bound to the minimalism — and one aspect of this experiment would be to find out exactly where that bound lies. (All of this is already implied in the previous point, but I’d like to stress it.)
  • It’s easy to use. You should be able to set it up and produce a working app in a few lines of code. Again, this is implied in the point above, but needs emphasis.
  • It allows me to keep my code closed. While I’m all for the ideals of free and open software, I don’t want to be forced to open-source my app, or to give it away for free. I think it’s important to be able to offer a product and get paid for it, in whatever appropriate way. People have to pay bills, and it’s naive to expect that someone will happily hop on and sponsor your work. However, this point refers to applications written with the GUI framework in question, not the framework itself. The framework itself should be free and open-source according to generally accepted standards.

An approach to a minimal GUI layer …

I believe that real-world usable GUIs can be built from a very limited set of very limited building blocks, in particular, if they can be combined or nested in suitable ways.

For instance, imagine a GUI framework that offers only four primitives:

  • some notion of a container (for positioning, i.e. layout),
  • a static text label,
  • and two types of controls, namely
    • a text-input field,
    • and some notion of a clickable surface, i.e. a button or switch.

You could probably implement the large majority of existing GUIs using just these four things, and they would not look or behave drastically differently. Most of the controls that are more complex than this could simply be implemented as nested combinations of the above. For example, a list of clickable items (such as a menu, select or similar controls) could be made from a container full of clickable surfaces.

(You could even get rid of the static text label as a separate primitive, and just define a surface that has some kind of displayable content and can optionally react to certain kinds of events. That is essentially the concept of a UI view in object-oriented UI frameworks such as macOS’s AppKit.)
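To make this a bit more concrete, here is a minimal sketch of the four primitives as plain data structures, written in C since that is the level the libraries discussed below operate at. Every type and field name is invented for illustration; nothing is tied to any particular library yet, and »rendering« is just a tree walk that prints the layout.

/* four_primitives.c - a toy model of the four primitives described above.
   All type and field names are invented. Build: cc four_primitives.c */
#include <stdio.h>

typedef enum { NODE_CONTAINER, NODE_LABEL, NODE_TEXT_INPUT, NODE_BUTTON } NodeKind;

typedef struct Node {
    NodeKind kind;
    int x, y, w, h;                  /* layout rectangle, relative to the parent container */
    const char *text;                /* label caption, input contents or button caption    */
    void (*on_click)(struct Node *); /* only used by clickable surfaces                    */
    struct Node *children[8];        /* only used by containers; fixed size to keep it simple */
    int child_count;
} Node;

static void quit_clicked(Node *self) { printf("clicked: %s\n", self->text); }

/* Walk the tree and print it - stands in for layout/drawing in this sketch. */
static void render(const Node *n, int depth) {
    printf("%*s[%d] %s (%d,%d %dx%d)\n", depth * 2, "", n->kind,
           n->text ? n->text : "", n->x, n->y, n->w, n->h);
    for (int i = 0; i < n->child_count; i++)
        render(n->children[i], depth + 1);
}

int main(void) {
    Node label  = { NODE_LABEL,      10, 10, 200, 20, "Name:", NULL, {0}, 0 };
    Node input  = { NODE_TEXT_INPUT, 10, 40, 200, 24, "",      NULL, {0}, 0 };
    Node button = { NODE_BUTTON,     10, 80, 100, 30, "Quit",  quit_clicked, {0}, 0 };
    /* A "menu" is nothing special: just a container full of clickable surfaces. */
    Node root   = { NODE_CONTAINER,   0,  0, 320, 240, NULL,   NULL, {&label, &input, &button}, 3 };

    render(&root, 0);
    button.on_click(&button);   /* what an event loop would do when the button is hit */
    return 0;
}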

… and implementing it

How would I go about implementing such an idea? I still need some kind of more fundamental framework or library that will handle the most low-level tasks for me, while abstracting them from the underlying platform.

At the moment, I am considering the following setup:

  • The SDL library (Simple DirectMedia Layer) as a foundation for supplying me with events and giving me screen surfaces to draw on. This library is mainly used in cross-platform games — and not just »toy« games; big ones! I’m not aware that it’s being used regularly to build GUI apps, but I don’t see a reason why it shouldn’t be. It is quite low-level, which I see as an advantage, because, from what I can tell, it is extremely efficient (i. e. fast), thanks to the low overhead. (Also, while it’s most often used with C or C++, there are bindings for many languages, including ones which liberate you from manual memory management, while still compiling to very fast code, such as Go.)
  • The Cairo library for 2D drawing. SDL only has very primitive drawing routines, which is deliberate, as actual drawing is mostly outside of its scope. Cairo offers all the drawing calls that you’d realistically ever need, but it’s still relatively compact, approachable, and from what I gather, also very fast. It’s quite popular as well; many applications and frameworks use it for 2D drawing under the hood. There is explicit support for using SDL together with Cairo. In addition, there is a lot of overlap between Cairo and SVG (Scalable Vector Graphics), which is extremely welcome, as SVG is an excellent open and cross-platform text-based format for representing 2D [vector] graphics. Having the option to easily translate Cairo drawing calls to SVG and back is sure to become useful.
  • The Pango library for rendering text. Pango in turn uses HarfBuzz, which is a library for drawing text shapes (glyphs). Rendering text is extremely demanding if you want to support even just a fraction of the writing systems used in the world today. I don’t even want to think of getting into all the intricacies involved, so it’s wonderful that there exists a library that abstracts this away. Cairo explicitly encourages using Pango for text rendering, as its own text capabilities are limited. (So, SDL plays nice with Cairo, Cairo plays nice with Pango — it would seem like all the really painful low-level stuff is taken care of.)
  • For apps that need it, 3D rendering would be handled using OpenGL. SDL basically assumes that you’ll be using OpenGL anyway, as it’s mainly used for games. The two work together very well from what I gather.

All of these libraries are open, free and independent in (almost) every sense, and while offering everything within their scope that you might need, they are sufficiently manageable that I believe it's possible to learn and understand their respective APIs completely within reasonable time. (This would be the advantage of using something that does one thing only instead of trying to do everything at once.) A minimal sketch of how these pieces fit together follows below.
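To give an idea of the layering: SDL opens a window and hands us a pixel surface, Cairo draws a »button« rectangle onto that surface, and Pango renders its caption. This is only a proof-of-concept sketch; it assumes the window surface uses a 32-bit pixel format compatible with Cairo's ARGB32 (usually the case on desktop systems, but real code should check), and error handling is largely omitted.

/* gui_stack_sketch.c - SDL2 window + cairo drawing + pango text, as a proof of concept.
   Build (one possible invocation):
     cc gui_stack_sketch.c $(pkg-config --cflags --libs sdl2 cairo pangocairo) -o sketch */
#include <SDL.h>
#include <cairo.h>
#include <pango/pangocairo.h>

int main(void) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window *win = SDL_CreateWindow("sketch", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Surface *surf = SDL_GetWindowSurface(win);

    /* Wrap SDL's pixel buffer in a cairo surface. Assumes a 32-bit window surface
       compatible with CAIRO_FORMAT_ARGB32 (typical on desktops; check in real code). */
    cairo_surface_t *cs = cairo_image_surface_create_for_data(
        (unsigned char *)surf->pixels, CAIRO_FORMAT_ARGB32, surf->w, surf->h, surf->pitch);
    cairo_t *cr = cairo_create(cs);

    cairo_set_source_rgb(cr, 0.93, 0.93, 0.93);      /* clear the background */
    cairo_paint(cr);
    cairo_set_source_rgb(cr, 0.20, 0.45, 0.85);      /* the "clickable surface" */
    cairo_rectangle(cr, 40, 40, 160, 48);
    cairo_fill(cr);

    PangoLayout *layout = pango_cairo_create_layout(cr);  /* the "label" on top of it */
    PangoFontDescription *font = pango_font_description_from_string("Sans 14");
    pango_layout_set_font_description(layout, font);
    pango_layout_set_text(layout, "Click me", -1);
    cairo_set_source_rgb(cr, 1, 1, 1);
    cairo_move_to(cr, 60, 52);
    pango_cairo_show_layout(cr, layout);

    SDL_UpdateWindowSurface(win);

    SDL_Event e;                       /* SDL supplies the events; quit on window close */
    int running = 1;
    while (running && SDL_WaitEvent(&e))
        if (e.type == SDL_QUIT) running = 0;

    pango_font_description_free(font);
    g_object_unref(layout);
    cairo_destroy(cr);
    cairo_surface_destroy(cs);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}

Hit-testing the clickable surfaces against mouse events from SDL, and re-drawing only what changed, would be the next steps on top of this.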

Should we go deeper?

I don’t think it would make sense to go more low-level than this. You could write your own 2D drawing code instead of using something like Cairo, but SDL doesn’t even know how to draw lines (it can only set pixels or fill rectangles), and you’d have to implement your own line-drawing code, which seems a bit crazy. Also, say, you only want to support English text in the UI and are fine with having only very limited typographical control, then you could do without Pango. You could even create your own simple bitmap (or even vector) font and draw text »manually«, foregoing Cairo as well, but then we’re seriously heading into crazyland.

Additional limitations and things to consider

(This section was added after a reader pointed out some of these points to me on Twitter; thank you!)

First of all, though I think it follows from the above, I’d like to make it clear that I don’t intend to replicate the breadth of something like GTK+ or Qt. That would be outright silly and of course completely unrealistic.

On the contrary — the original thought that got me started on this was »What would be the minimal scope of a GUI layer, written from scratch, so that you can build real-world-useful applications with it?«.

The kinds of applications I have in mind would probably be single-window apps, by which I mean that they would be contained in a single window of the host system. It would still be possible to have a notion of »window« (such as tool palettes) within that window surface, using the aforementioned container primitive, which may overlap other containers.

Handling multiple windows entails dealing with the windowing system of the host platform, which would take me away from my goal of platform-agnosticism. If SDL has support for managing multiple windows at the host system level, it’s not a problem, but even then this would be a very low-priority goal, and I may even decide not to support it by design. It would make things needlessly complicated, and encourage clutter by making it possible for an app to have lots of separate windows (which I personally think is a very bad UI design choice*). I don’t have any application concepts in mind that could not be built within a single window surface of the host platform.

* The only situation I can think of where having multiple windows actually makes sense is in a document-based application (say, a text editor) where you want to look at several documents side-by-side. However, this can be emulated by allowing content areas to be split and arranged next to each other within the same window.

Besides, complex document-based applications are not my design goal. I explicitly intend to keep things very simple. If you wanted to build the next OpenOffice or something like that, you’d go for a full-featured UI framework anyway. You’d need a wealth of other functionality that a simple system like the one I’m thinking of can’t — or doesn’t want to — offer. To reiterate, there’s no point in trying to imitate what’s already available in that area.

That said, a reader pointed out that my concept would be lacking in three important areas:

  • localisation/internationalisation
  • scriptability
  • accessibility

I thought about these points (which are very valid), and I believe basic localisation and internationalisation would be relatively straightforward to achieve. Static text (labels) can be run through a translation map, and we can have locale-aware formatters for certain variable things like datetimes, currencies, or numbers in general. As I’ll be using Pango to render text, I won’t have to deal with the complexities of handling international text. At least I hope so.
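As a sketch of how small that machinery can be (all names here are invented): a static table maps message keys to translated labels, and the standard C locale machinery produces a locale-aware date.

/* i18n_sketch.c - toy translation map plus a locale-aware date. Build: cc i18n_sketch.c */
#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static const char *table[][2] = {          /* msgid -> translated label */
    { "quit", "Beenden" },
    { "open", "Öffnen"  },
};

static const char *tr(const char *msgid) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i][0], msgid) == 0) return table[i][1];
    return msgid;                          /* fall back to the untranslated key */
}

int main(void) {
    setlocale(LC_ALL, "");                 /* honour the user's locale */
    char date[64];
    time_t now = time(NULL);
    strftime(date, sizeof date, "%x", localtime(&now));  /* locale-preferred date format */
    printf("%s | %s | %s\n", tr("quit"), tr("open"), date);
    return 0;
}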

On to scriptability. One way to achieve this would be to design the application in such a way that the UI performs calls to some kind of internal API or library that implements the actual functionality. This would be advantageous in other ways, as we could then uncouple this API from the application and expose it, so that it may be used via any other (non-graphical) interface, such as a command line. I don’t think it makes a lot of sense to have the GUI itself be scriptable. The point of scriptability would be to automate things, and »remote-controlling« parts of the GUI would be a needlessly roundabout and inefficient way to do that.
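A sketch of that split, with invented names: the »core« function below knows nothing about any GUI, and the same entry point can be driven from a button's click handler or from the command line.

/* core_split.c - one core function, two front ends (GUI handler and CLI). cc core_split.c */
#include <stdio.h>
#include <stdlib.h>

/* --- the application core: no UI knowledge, could live in its own library --- */
static int core_convert(int value) { return value * 2; }   /* stand-in for real work */

/* --- GUI side: what a button's click handler would call --- */
static void on_convert_clicked(int value_from_text_input) {
    printf("GUI result: %d\n", core_convert(value_from_text_input));
}

/* --- CLI side: the same core, driven from the command line (the "scripting" path) --- */
int main(int argc, char **argv) {
    if (argc > 1) {
        printf("%d\n", core_convert(atoi(argv[1])));   /* e.g. ./core_split 21 */
        return 0;
    }
    on_convert_clicked(21);   /* pretend the user clicked the button */
    return 0;
}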

Accessibility is a tough one, and I don't have a good idea of how to support it directly. On the other hand, if we have scriptability, then we can get accessibility, too: a user with accessibility requirements could use the application via its scripting API, which may be wrapped through some means that makes use of the host platform's accessibility support. However, if accessibility is a high priority, it makes more sense to use a full-featured UI framework that already supports it. (Again, I won't be competing with what these frameworks offer. If it were even realistically possible, I'd end up recreating something that resembles GTK+ or Qt, including all of their complexity. If that were where I was heading, I would be using GTK+ or Qt in the first place. You get the point.)

So far, so good

The above setup, I believe, would be a solid foundation to build a simple UI layer on, using just the primitives I mentioned. I imagine this would already take you quite a long way. Of course you’d be very far removed from the sophistication of GTK+, Qt, or even the macOS AppKit, but it would be a very interesting starting point.

If it turns out to be workable, then I can see lots of incentive to turn this into a framework to build apps with, always keeping in mind that the learning curve should be low.

As for actually building a GUI app using this setup: no, I haven’t written a single line of code yet. Right now, it’s just an idea that I’m exploring.

I’d love to hear your thoughts, and thanks for reading!

 

How to give up Twitter (but get all the data out first)

Why I want to give up Twitter

I have been maintaining a Twitter account that will become 10 years old this year (2017). It contains over 93,500 tweets as of today. Yes, ninety freaking thousand. That’s roughly 9,300 tweets per year, or an average of 25 tweets posted on every single day. Assuming that my average tweet comprises 100 characters, this is just under 9 megabytes of plain text, the equivalent of roughly two 1,000-page books. So much for a bit of statistics.

Despite still using it as some kind of notebook to quickly jot down ideas and thoughts or capture something that grabbed my attention, I have been increasingly unhappy with the way that Twitter has developed, and I have been wanting to give up using Twitter altogether for many years.

That has not happened (yet), despite quite a few earnest attempts to quit.

This article gives insight into my reasons for wanting to quit, and the (possibly long and winding) road to reaching that goal. (If it sounds like I'm talking about some kind of addiction, then that's probably not far from the truth.)

Part of the reason is that it is still the most convenient way to do the things I am using it for. But that is just laziness on my part; I'm still using it simply because it's there.

I have even stopped following people on Twitter, because I either get too upset by what I read, or too lost in it, which causes me to spend way too much time there, dramatically cutting into my productivity and creative output. Of course, cutting the conversation on one side does not exactly help keep it up, but there are other reasons.

There is no longer any dialogue on Twitter, it’s all broadcasting

The social aspect of Twitter (exchanging and discussing ideas, making connections) has been working less and less well for me — and this development began long before I started unfollowing everyone (which was actually more of a reaction to this development rather than the cause; if it were the latter, I’d be a fool to complain about it).

Twitter used to be a place where I could engage with others about stuff that I was interested in, and it was great fun for many years, but that is no more.

Most people who are still active on Twitter (and who regularly post high-quality and original content) use the service as a one-way broadcast channel. Unless you have some kind of active following or social network outside of Twitter, you will essentially be holding a monologue (which is what I’ve been doing for the last one or two years).

If that is so, I don’t need Twitter; I might as well publish my notes to any other platform. Ironically, I have been having much more engaging and interesting discussions on Facebook lately. (Why ironically? Because when I was still using Twitter very actively, Facebook was a place where people sent each other Farmville requests.)

Single-theme accounts probably still work

There are situations where Twitter still works well as an engagement platform. If you limit the content of your tweets to a very restricted set of themes or topics, you are still likely to find and build your audience, and there will be a fruitful exchange of ideas.

But I personally never wanted to tweet monothematically. On the contrary, I enjoy hopping wildly between the most diverse areas of interest to me, and I think this is something that only works on Twitter if, again, you already have a following outside of Twitter, such as when you are a celebrity or have made yourself a name in other ways — because then people tend to be more interested in the person than what the person says.

I’m no celebrity, and with every new tweet, it seems I am alienating a good part of the people that still follow me. I believe most of my followers are interested in either X or Y or Z (but not all of them), and since only about every 10th or 20th tweet of mine is about one specific and recurrent topic, my Twitter updates are not interesting to my followers most of the time, so eventually they leave — and I don’t blame them at all.

Content sharing in a post-Snowden world

There is another issue that not only pertains to Twitter. It’s the fact that when using a content sharing platform, I’m essentially supplying the companies running the service with loads of free data for them to perform mass-scale data-mining on. I understand full well that this is their business model, and it’s the price I’m paying while I’m using their service at no monetary cost.

But in a post-Snowden world where we know all this data could one day be used against us, no matter how well these companies protect this data (or claim to do so), I feel increasingly uncomfortable with posting any content to any service that is not fully under my own control.

(Yes, I’m aware I’m publishing this article on wordpress.com, which is just that very kind of service. Until about five years ago, I used to be all post-privacy. I used to have literally hundreds of online accounts, and I uploaded tons of data everywhere, and I liked it. Today, I think I may have made a mistake, and it will take a lot of time to undo all of this — if it can be undone, because once the data is out there, it’s out.)

So — Twitter is all but dead to me, and I seriously want to leave. What’s holding me up?

 

I want to keep my data — how do I do it?

To be able to move on to something else, I will have to make it inconvenient for myself to keep using Twitter, and the only way I believe I will achieve this is by deleting my account altogether. This is where the other reason comes in: there is too much data in this account that is important to me, and I couldn't stand losing it all. If I'm going to delete my account, I want to save my data first. All of it.

Downloading your tweet archive

It’s pretty straightforward to get all tweets out of an account. You can download a tweet archive that contains everything you have ever posted. It’s an official feature by Twitter. You request the archive download, and it usually takes about a day or two until you get a download link. The download is a zipped archive that contains a small web application shell to view the archive, while the archived tweet data itself is contained in a directory full of JSON files. That data is already very cleanly structured and complete; there is nothing left to ask for.

So where’s the problem? The archive only contains my own tweets. It does not comprise any of the interactions I have had with other people on Twitter. It does not contain fav or retweet counts, nor the IDs or screen names of people who faved or RT’d my tweets. It does not contain replies or threaded conversations, which naturally involve other people’s tweets — that is, content that is not my own. However, without this interaction data, a Twitter archive is only half of the story. I could download my tweet archive and delete my account, and all of this other half would be lost. Many people would be happy with that; I am not.

Using Favstar to get Fav and RT data

For a couple of years, I have been attacking this problem from different angles. For favs and RTs, one solution is Favstar. This non-free service keeps track of all favs and RTs for an account (including lists of who fav’d or RT’d), and it looks like it would be easy to get that data out, either using their API, or by scraping their web content. I’ve not explicitly attempted this, but it appears feasible. (The only reason I keep renewing my Favstar subscription is that I’m hoping I’ll get easy access to this data to accomplish my larger goal.)

Using web scraping to get conversations

For replies and conversations, the only viable solution I have come up with is to scrape Twitter’s public web content. The advantage is that it’s actually public, that is, you don’t even need to be logged in to access this data. All you need is a Twitter account’s user id or screen name, and a list of the tweet IDs for which you want to get the associated conversations (be it just a single reply or a long thread of back-and-forth replies). The tweet IDs are easily obtainable from your tweet archive, as each tweet is identified by its ID (which is unique across all of Twitter’s userbase). It would not be too challenging to hack a script that downloads the HTML for each tweet ID and then performs a bit of nested regular expression processing to extract the parts that I want.

(Yes, I know it’s not possible in computer science theory to actually parse HTML using regex, but this only applies to arbitrary HTML, but not if the nesting structure is already known and constant. I’ve written lots of concise and easy-to-understand code that correctly and efficiently extracts data from HTML and XML using nothing but regular expressions, so academics who keep repeating the old HTML-and-regex advice: please go back to your enterprise applications with your 2 megabyte XML parsing library behemoths if it makes you happy and gives you that nice smug feeling of the righteous who know best.)
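In PHP (which is what I use for scraping, see below) this would amount to a few calls to preg_match_all. As a language-neutral sketch, here is the same idea in C with POSIX regular expressions; the markup in the example is made up and only stands in for a page whose structure you already know.

/* regex_extract.c - pull fields out of HTML with a known, fixed structure.
   The markup below is invented for the example. Build: cc regex_extract.c */
#include <regex.h>
#include <stdio.h>

int main(void) {
    const char *html =
        "<div class=\"tweet\" data-tweet-id=\"123456789\">"
        "<p class=\"text\">Hello world</p></div>";

    /* One capture group for the id, one for the tweet text. */
    const char *pattern =
        "data-tweet-id=\"([0-9]+)\".*<p class=\"text\">([^<]*)</p>";

    regex_t re;
    regmatch_t m[3];
    if (regcomp(&re, pattern, REG_EXTENDED) != 0) return 1;

    if (regexec(&re, html, 3, m, 0) == 0) {
        printf("id:   %.*s\n", (int)(m[1].rm_eo - m[1].rm_so), html + m[1].rm_so);
        printf("text: %.*s\n", (int)(m[2].rm_eo - m[2].rm_so), html + m[2].rm_so);
    }
    regfree(&re);
    return 0;
}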

Flying under the radar

With web scraping, I'd take care to fly under the radar. I'd expect any service such as Twitter to have measures in place that detect large numbers of automated requests (as would be the case if I tried to download 90,000 HTML pages all at once). To avoid detection, it's best to design your requests in such a way that it would be hard to discern them from regular access to the public web content. This isn't hard to do, either. Usually, it appears sufficient to simulate the request headers that your web browser would transmit; in particular, the user agent, and, where applicable, a refer(r)er URI. (To see a full set of headers, take a look at the network pane of your browser's developer tools when you submit a particular request.)

To make automated requests appear non-automated, it is probably a good idea to randomise them across time. To achieve this, I simply space them out using a call to PHP's sleep() or usleep() (or something equivalent in your language of choice), with sufficiently random pauses between requests that it would be non-trivial to detect a pattern on the server side. (I use the cURL library for scraping. In PHP, you could simply use file_get_contents(URL), and you can set the user agent for URL downloads via a configuration option (an ini setting), but cURL is the easiest-to-use choice if you need to go beyond that.)
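Since PHP's curl_* functions are a thin wrapper around the same libcurl, here is the idea sketched in C: set a browser-like user agent (and, where applicable, a referer), fetch, and pause for a random number of seconds before the next request. The URLs and header values below are placeholders; in practice you would copy the real ones from your browser's network pane.

/* polite_fetch.c - fetch a list of URLs with browser-like headers and random pauses.
   Build: cc polite_fetch.c -lcurl   (requires the libcurl development headers) */
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const char *urls[] = {                       /* placeholders, not real endpoints */
        "https://example.com/status/1",
        "https://example.com/status/2",
    };
    srand((unsigned)time(NULL));
    curl_global_init(CURL_GLOBAL_DEFAULT);

    for (size_t i = 0; i < sizeof urls / sizeof urls[0]; i++) {
        CURL *h = curl_easy_init();
        if (!h) continue;
        curl_easy_setopt(h, CURLOPT_URL, urls[i]);
        /* Look like an ordinary browser request (copy real values from the network pane). */
        curl_easy_setopt(h, CURLOPT_USERAGENT,
                         "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) ...");
        curl_easy_setopt(h, CURLOPT_REFERER, "https://example.com/");
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
        if (curl_easy_perform(h) != CURLE_OK)    /* the body goes to stdout by default */
            fprintf(stderr, "request %zu failed\n", i);
        curl_easy_cleanup(h);

        /* Random pause of 2..10 seconds so the requests don't form an obvious pattern. */
        sleep(2 + rand() % 9);
    }
    curl_global_cleanup();
    return 0;
}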

Using Twitter’s API — or probably not

What about Twitter’s public API? Isn’t it the first place I should have looked? Well, after studying it for a while I came to the conclusion that it’s not extremely helpful for what I want to achieve. First of all, it uses an API key/token access mechanism involving their OAuth system — something that I couldn’t be bothered to deal with, despite the existence of various third-party libraries that handle all of the intricacies. (However, you can manually create API access tokens using their API explorer. These tokens expire after a while, so you’d have to keep renewing them.)

Still, once you have API access, you’re quite limited in what you can do. The API (understandably) limits the number of requests you are allowed to make in a certain timeframe, which is less of a problem, since you could just spread your requests out over time, but with 90,000 tweets, we are talking about a long time (days, weeks, possibly months).

The more serious restriction is that the API will by default only grant you access to the latest 3,200 tweets (at least this was the case when I last checked the documentation, years ago). Using the since_id and max_id parameters, you can tell the API the range of tweet IDs you want data for, and it appears that this might allow you to go beyond the latest-3,200-tweets limit, but I have not tested this sufficiently. All in all, even though it gives you the data in a very well-structured format, using the API did not seem to be an efficient approach at all, so I mostly gave up trying.

Using Twitter’s non-public (?) internal (?) REST interface

Only quite recently did I discover that there is a more direct way to access Twitter's content than full-page scraping. When you scroll down someone's timeline in the web UI, additional timeline items are loaded by the web application via background requests to a number of REST-like URLs.

Like the regular HTML content, and unlike Twitter's API, these URLs can be accessed publicly and do not require a login session. They return snippets of JSON code that contain up to 20 items (tweets, etc.) at once. For some reason, the items are not provided as structured data, but as pre-rendered HTML, so you'd still need to distill that HTML back into clean data.

The flying-under-the-radar advice applies here, too, because I assume that these URLs are not meant to be accessed from outside of Twitter’s web UI. However, and somewhat to my surprise, I was able to call these URLs directly via cURL (even without setting a referer header, if I recall correctly) and got the expected data back.
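Here is the general shape of that, again sketched in C with libcurl (my own working code is PHP). The endpoint URL is a placeholder: the idea is to copy the exact URL you see in the browser's network pane while scrolling a timeline, request it directly, and then run the pre-rendered HTML snippets inside the returned JSON through the same kind of regex extraction as above.

/* timeline_fetch.c - call one of the background timeline URLs directly and keep the
   JSON response in memory. Build: cc timeline_fetch.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf { char *data; size_t len; };

/* libcurl hands us the response in chunks; append them to a growing buffer. */
static size_t collect(char *chunk, size_t size, size_t nmemb, void *userdata) {
    struct buf *b = userdata;
    size_t n = size * nmemb;
    b->data = realloc(b->data, b->len + n + 1);
    memcpy(b->data + b->len, chunk, n);
    b->len += n;
    b->data[b->len] = '\0';
    return n;
}

int main(void) {
    /* Placeholder: substitute the exact URL observed in the browser's network pane. */
    const char *url = "https://example.com/i/timeline?max_position=...";
    struct buf b = { NULL, 0 };

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    curl_easy_setopt(h, CURLOPT_URL, url);
    curl_easy_setopt(h, CURLOPT_USERAGENT, "Mozilla/5.0 ...");  /* look like a browser */
    curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(h, CURLOPT_WRITEDATA, &b);
    curl_easy_perform(h);
    curl_easy_cleanup(h);
    curl_global_cleanup();

    /* b.data now holds the JSON payload; the embedded, pre-rendered HTML snippets can be
       pulled apart with the same regex approach as in the earlier example. */
    printf("received %zu bytes\n", b.len);
    free(b.data);
    return 0;
}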

I have some working PHP code that implements this for media timelines, but it’s still too hackish for publication — the screenshot I used as the article image shows a small part of this hack (it contains a stupid bug in the last line; do you see it?).

All in all, it looks like this is the most promising approach, and it’s the one I’m most likely to pursue further.

Using 3rd-party »backup« services

There are numerous third-party web services that promise to back up all of your Twitter data. I have looked at a number of them, and none actually keeps this promise. They simply store any tweets you post from the time you register with them, but they don't access older tweets at all, and none of them accesses anything other than the tweets themselves, so there is exactly zero benefit to using these services over Twitter's own archive feature. As such, I did not venture any deeper into this territory.

That’s all I have, so far

If you came to this post in the hope of finding a recipe or even a ready-to-use tool, I am sorry to disappoint you. This is still very much a work in progress. However, in case you’re trying to do the same thing I’ve described here, why don’t we join our efforts? In any case, I’d appreciate your feedback, and thanks for reading.

(By the way, I’m trying to do the same kind of thing for my Facebook account as well; and for my Flickr account, and… you know what I’m getting at. Any ideas or suggestions are highly welcome.)

Do We Live in a Simulation?

Thoughts on the simulation hypothesis and its implications, a comparison with observed properties of reality, and connections to concepts from cosmology, quantum physics, game design, philosophy, spirituality and the nature of consciousness

Title image: still frame from »Morphy's World – Mandelbulb 3D Fractal animation« by Arthur Stammet (on YouTube).

Part I: How Would We Experience a Simulation?

Observable effects of optimisations

Suppose that the reality we experience is a simulation, represented by a computational model running in some kind of system (a computer); and suppose further that only finite resources (energy, storage capacity) are available for computing this simulation. The operators of the simulation would then have an interest in making it as efficient as possible, that is, in keeping the computational cost as low as possible through suitable optimisations.

An obvious optimisation would be to dynamically adapt the fidelity of the simulation (its level of detail, or resolution) to the circumstances under which it is being observed. If we imagine that we (humans) are the players in such a simulation, through whom it can be consciously experienced at all, then the simulation would only have to render in sufficient detail those portions of the game world (reality) that are currently (local place, local time) being observed by players. The less attention a given region of spacetime receives from an observer (player), the less precise (the blurrier) the simulation can be without producing contradictions, i.e. while remaining consistent.

The analogy to virtual realities in computer games is obvious, for example in multiplayer open-world environments. Game engines for such games only ever compute the local slice of reality in which a player currently finds themselves. The less precisely parts of that local slice can be experienced by the player (e.g. because they are too far away), the less accurately they need to be rendered (level of detail).

This notion fits an interpretation of quantum theory according to which the properties of reality at the smallest scales appear to depend on the circumstances of their observation (observer effect, Consciousness Causes Collapse). Quantum uncertainty might accordingly be an artefact of a local simulation fidelity that is lower than the maximum possible.

Such a simulation would furthermore strive to represent, as often as possible, large blocks of reality with constant properties, rather than having to carry the computation down to the maximum resolution. This matches our experience of everyday macroscopic reality, in which objects appear static and homogeneous most of the time rather than indeterminate and fluctuating (noise). The further we move away from the smallest scales (i.e. the larger the assembly of matter we observe), the sharper (more defined) and more stable reality appears to us. (Example: to describe a sugar cube, it is sufficient for everyday purposes to determine its macroscopic physical and chemical properties, rather than trying to grasp it at the scale of its elementary particles.)

Instead of performing reality computations over and over, an efficiently designed system would reuse previously computed (intermediate) states (caching). This fits the observation that many patterns and forms in nature seem to repeat. (Rupert Sheldrake postulates the existence of a morphic field, through which the probability of a particular pattern or form appearing in nature increases the more often such a pattern has already appeared in the past. In other words, in nature we see structures with familiar patterns more often than completely new structures. From the perspective of a simulated reality implemented in software, this would correspond to reusing a previously computed and cached pattern.)

Quantised spacetime, and bits as atoms (in the original sense of the word)

If we further assume that all properties of the simulated reality are represented in some information store, then it is natural to conclude that the ultimate elementary particle of this reality is a memory cell with an information content of 1 bit (cf. It from bit). It would also follow that the fundamental (lowest) structure of reality (space and time) cannot be a continuum but must be quantised, since bits cannot be subdivided any further. (Another consequence would be that the total content of matter and energy in the universe is equivalent to the storage capacity that the simulating system provides for representing the simulation.)

The concept of quantised spacetime corresponds to an essential consequence of loop quantum gravity, currently the most developed alternative to string theory; both are candidates for unifying general relativity and quantum physics and for establishing a physical principle underlying everything (a Theory of Everything, TOE, sometimes called a »world formula«). In loop quantum gravity there are no units of space and time smaller than the Planck length (~10^-35 m) and the Planck time (~10^-43 s), which form the basic scaffolding of reality. Below these limits, space and time lose their meaning; physical reality thus effectively exists on a grid with a resolution of one Planck length in the spatial dimensions and one Planck time in the temporal dimension. (A more easily imaginable analogy is a monochrome bitmap display in which each picture element (pixel) can only be black or white; the displayed image reality emerges from the sum of these black or white pixels, whose number can in theory be arbitrarily large, but no finer subdivision or colour gradation is possible.)
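For reference, the Planck scales mentioned above are not arbitrary numbers; they follow from combining the fundamental constants (Newton's constant G, the reduced Planck constant ħ and the speed of light c):

\ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \qquad t_P = \ell_P / c = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s}

(the latter being on the order of the 10^-43 s quoted above).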

The level design of our reality, and the theory of everything

It is conceivable that the operators of a reality simulation would populate it with predefined content, much like level designers supply the content (3D models, textures, behaviour etc.) of a game environment. This would, in a sense, correspond to an act of creation in the creationist sense, in which some or all properties of reality suddenly became manifest at a particular point in time. Since any such constructed reality could have all the properties that are consistent with our (scientific) model of the world (including, seemingly paradoxically, the interpretation of observed phenomena that reality could not possibly be the result of an act of creation), this possibility cannot be ruled out.

Another, and more elegant because simpler, assumption is that the level design of our reality was not predefined but has been and is being continuously computed on the basis of a mathematical model (an algorithm), in analogy to computer games with parametrically or procedurally generated environments such as Elite (1984) or No Man's Sky (2016). Such an algorithm would be chosen to be comparatively simple so that it can be computed efficiently, but it would have to be capable of producing all the phenomena of reality. Promising candidates would therefore be algorithms that allow a particularly rich (detailed, complex) reality to emerge from the simplest possible (efficiently computable) mathematical constructs. Well-known examples that fit this description are cellular automata and fractals. Indeed, we frequently observe self-similar (fractal) structures in the universe at all scales, from which one could infer an underlying universal principle of form generation.

Examples of fractal structures that arise from the iterative computation of a single, very simple mathematical equation include the Mandelbrot set, known since the early 1980s, as well as newer variants derived from it in three dimensions which, depending on the choice of the governing parameters, can produce impressively complex spatial structures (see also here, here and here). (Remarkably, there is a strong resemblance between fractal three-dimensional structures of this kind and the typical appearance of certain geometric-abstract visions experienced by people under the influence of serotonergic hallucinogens such as LSD or DMT. I am not aware of any interpretation or explanation of this correspondence.)
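For concreteness, the iteration behind the Mandelbrot set really is that small: a point c of the complex plane belongs to the set if and only if the sequence

z_{n+1} = z_n^2 + c, \qquad z_0 = 0

remains bounded under iteration. The three-dimensional »Mandelbulb« variants mentioned above generalise this single rule to a different number system, which is where the additional spatial complexity comes from.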

The discovery of such a (mathematical) construct from which all properties of our reality can be derived would be equivalent to the discovery of a theory of everything. Our current physical model of the world, which is incomplete and not free of contradictions, is very likely only a limiting case, or a flawed interpretation, of such a far more fundamental principle. If a theory of everything is actually found in which numerical factors or coefficients (constants of nature) appear, then these constants could not be traced back to an even more fundamental principle and would therefore remain unexplainable, just as is currently the case for a large number of natural constants (although their sheer number is more likely a hint at the incompleteness or incorrectness of our theories). Such constants could, however, be understood by interpreting them as the parameters with which the reality simulation was started.

It is conceivable that the immeasurable richness and complexity of our universe can be traced back to a very simple principle, in which a balanced ground state (corresponding to minimal entropy) brought forth all aspects of reality through the introduction of an infinitesimally small perturbation. This idea is consistent with the currently favoured model of the origin of the universe (the Big Bang theory), in the course of which matter-antimatter pairs formed that should have annihilated each other immediately, so that no matter should have been able to come into existence at all. However, through a mechanism that is not yet precisely understood, a tiny surplus of matter formed, on the order of roughly one part in 30 million, or 0.000003 percent. This mechanism could be just such a perturbation, introduced into the simulation from the outside.

Bugs and glitches: errors in the Matrix

A reality simulation may well not be perfect. Even if both the hypothetical hardware and the software of the system it runs on were developed to the highest engineering standards, its algorithms (or even its entire code) were proven mathematically correct, and the system provided every conceivable error-correction and self-repair mechanism, it cannot be ruled out that errors would occasionally occur in the simulation anyway. This can be inferred from the experience that, despite our enormous progress in this area, we are virtually unable to build perfect (i.e. absolutely error-free) computer systems. The more complex a system, the greater the probability of errors. Even in environments of our own making with the very highest requirements for correctness, security and availability, for example in banking or in air traffic, computer errors occasionally occur.

A civilisation capable of simulating a complex reality that is indistinguishable from our experience of reality may have found ways to build systems in which such errors either practically never arise, or are corrected automatically before they become noticeable. Even so, such a simulation might, owing to its sheer complexity (measured by the amount of memory it manages), very occasionally produce faulty results that we can detect. Individual bit errors in the reality store would probably not be measurable, provided the memory cell does not remain permanently defective. But how would a transient bug that does not escape our notice manifest itself? We would observe isolated events that contradict our physical theories: individual outliers that we cannot explain, but that also do not repeat. Such isolated glitches might be the only potential hints that our reality is simulated. However, such events would only be noticed by chance, and since they would presumably occur rather rarely and would not be reproducible, in most cases it would be nearly impossible to show that they were not measurement errors or human error.

Things are different if we discover an inconsistency that leads repeatedly, i.e. reproducibly, to observations that do not agree with our physical model of the world. In such a case we usually feel compelled to adjust the model so that it can explain the deviating observations. Deviations of this kind have always been, and still are, the very engine of scientific progress. As long as we have not found a theory of everything, however, there is probably no way to tell whether an actual bug really is a bug, or whether the observations are part of the regular (i.e. correct) behaviour of reality.

Part II: The Role of Humans and the Question of the Purpose of Consciousness

What role do we play as conscious observers?

Since we (humans) evidently experience the same internally consistent reality independently of one another, the reality model must exist independently of our respective local experience. From the point of view of the simulation's operators, no conscious observer inside the simulation would be required, since the state of the simulation would be known exactly at any time; an external observer of the simulation would merely have to retrieve the desired information from the system's memory.

The fact that humans exist in this simulation and have the ability to observe it consciously, and, for example, to wonder whether we are inhabitants of a simulation, raises the question of what role we play in this system. (It is, of course, not known whether we are the only conscious observers that exist in this postulated simulation. So far we have no way of proving the presence of a conscious experience of reality in other living beings on our planet; at the very least, we cannot measure such an experience. Nor do we have any evidence so far for the existence of conscious life forms beyond Earth. Whether we are the simulation's only conscious observers from within cannot currently be answered.)

A view into the simulation from the inside

One possible idea is that we are conscious agents, or avatars, that give the operators a view into the simulation from a particular perspective. Interestingly, we (and the everyday world surrounding us) exist on a size scale that lies very roughly in the middle between the smallest structures (the Planck length, ~10^-35 m) and the largest (the entire universe, presumably at least about 10^27 m, if it is finite). This allows us to look in both directions and to have a more complete experience of reality than a hypothetical observer existing at one of the two extremes, although this argument is necessarily anthropocentric, and we can hardly imagine what the experience of a hypothetical conscious entity the size of an elementary particle, or of a galaxy, might be like.

In any case, the question arises why we are able to consciously observe our universe at all. Why does consciousness exist in the first place, and how can it be understood? Is consciousness an emergent property of our nervous system and material, i.e. fully explainable by physics? Is it immaterial, existing independently of, or even outside of, physical reality?

Matter and mind: a duality?

In the former case, consciousness would be a property or aspect of reality just like everything else we observe, and could be traced back to the underlying principle (the theory of everything, or the procedural reality-computation algorithm). In that case there would be no separation between mind and matter. In the latter case, it would be a phenomenon governed by something other than the part of the system that computes the material reality of the simulation. One could imagine the simulation consisting of two more or less independent components: a reality simulator and a consciousness simulator. In that case there would be a direct counterpart to the mind-matter duality.

The question of which view is correct cannot currently be answered. The mechanistic view is that only physically describable reality is actually real; everything else is either explainable by it or illusory. The problem with this view is that there is, so far, no physical model that satisfactorily explains the phenomena of consciousness. Various approaches, such as macroscopic quantum-mechanical interactions in neurons according to the Orchestrated Objective Reduction (Orch-OR) theory of Penrose and Hameroff, or attempts to describe consciousness as an electromagnetic field (McFadden, Pockett et al.), have so far not found general acceptance. Phenomena of consciousness cannot yet be measured directly; they are accessible practically only through subjective descriptions, which is why research in this area typically does not meet the requirements of scientific knowledge (testability, repeatability, etc.).

The existence of two independent simulation components would be one possible explanation for why the phenomena of consciousness resist scientific inquiry so stubbornly (provided, of course, that consciousness is actually real and not merely an illusion). It could mean that consciousness cannot, in principle, be investigated with scientific methods, because it exists outside of physical reality. However, since we (humans) are both part of physical reality and conscious, there must be an interface between these two components of the simulation. Apparently, the physical side of this interface is located in our brain.

Consciousness as a connection to the outside

What reason would the simulation's operators have to run a consciousness simulation in addition to the (material) reality simulation? One conceivable answer is that it constitutes a mechanism through which the operators of reality simulations can establish a communication channel that gives them access to the direct experience of the simulated reality by entities that are themselves part of that simulation. It is conceivable that such access to direct experience is particularly interesting or valuable to the operators. It would follow that simulation operators take a special interest in those simulations that bring forth entities (beings) with the necessary prerequisites for consciousness, i.e. a suitably developed brain, if that is indeed the consciousness interface. Perhaps our simulation is tuned, in line with the anthropic principle, towards the development of observers capable of consciousness; or perhaps the operators start a large number of simulations with more or less random initial conditions and then concern themselves exclusively, or especially, with those from which consciousness-capable observers emerge.

Perhaps these series of simulations are part of a research project; perhaps they merely serve the entertainment of their operators. In either case, through the postulated consciousness communication channel the operators could take part in our experience of reality; they could, in effect, watch us live our lives. Such a transmission might be far more revealing, or entertaining, than observing the simulation state from the outside. If communication is possible in one direction, it can be assumed that it also works in the other direction, i.e. that the simulation operators have the means to influence our state of consciousness and in particular our thoughts, convictions and actions, for example in order to trigger certain developments and thereby change the course of the simulation without having to manipulate the simulated reality itself. (It is, however, also conceivable that such influence, while possible, is either avoided or not permitted, e.g. because of ethical or even legal constraints, or simply the rules of the game.)

Are we alone?

So far I have assumed that the simulations are themselves run by conscious entities, i.e. that they are being observed in some form. It is conceivable instead that simulations are set in motion by some mechanism but are then left to themselves. (In that case, however, the question of why we have consciousness arises again. If there are no external observers, but consciousness is real, then the above hypothesis is evidently wrong.)

Perhaps we are part of an endless progression that brings forth conscious entities within a simulated reality who themselves sooner or later acquire the ability to run reality simulations, from which entities potentially emerge who in turn run reality simulations, and so on. This is one of the three possibilities in Bostrom's trilemma regarding the simulation hypothesis: if there are civilisations that develop the ability to run reality simulations, and actually do so, then such a nesting of simulations within simulations within simulations would be very likely (or at least the number of such simulations would be very large), and it would be very improbable that we, of all beings, should exist in a non-simulated reality.

Is there a primordial reality, or only simulations within simulations?

We have no way of telling whether our reality is real or simulated (unless we repeatedly discover glitches and identify them beyond doubt as such; see above). Indeed, the question arises whether an original, non-simulated reality exists at all. Our universe may be part of an infinite nesting of simulations. It would follow that no objective, material reality in the proper sense exists; on the other hand, without a primordial reality there would also be nothing to compute the initial simulation (and with it all the inner simulations). Each simulation level would then somehow have to arise out of itself. One resolution of this paradox, though hardly one that causes less headache, would be the idea that the nesting of simulations is recursive, i.e. that an inner simulation could be the environment for an outer one. In other words, the nesting may not be linear (with a beginning and an end) but may form a loop.

If this is true, and we were part of such a recursion, then we would already have to be running a reality simulation. One could further postulate that the loop consists of only one level, i.e. that we are simulating ourselves. At least on the face of it, that is not the case. (This thought has another implication, though, which I will come back to at the end of this text.)

Where do we stand?

We are in a pre-simulation stage of our development. There are, however, signs that we are on our way there. We are experiencing an exponentially growing ability to create virtual realities. If you look at the development of simulated realities in computers, it began very tentatively with the construction of the first stored-program digital computers (late 1940s) and reached a level towards the end of the 1970s that, while still extraordinarily primitive, became generally accessible thanks to the availability of cheap hardware and to cultural and economic factors (the video game boom). From that point on, development accelerated rapidly, and this acceleration continues to increase. Within only three decades there was a leap from abstract and very coarse representations to a quality that allows nearly photorealistic, immersive environments to be computed in three dimensions and in real time, with these computations drawing ever more precisely on the same physical models that describe our actual reality.

A kind of convergence is thus taking place: on one side, scientists are working out mathematical models that describe our real reality ever more accurately; on the other side, these models are being mapped ever more precisely into software in order to compute virtual realities that become ever less distinguishable from real reality. This development is accelerating, and it appears that progress in our ability to simulate reality has already overtaken our progress in modelling (i.e. understanding) real reality. In other words: our capacity to create virtual realities in which we reproduce the laws of the real world is developing faster than our knowledge of the laws of the real world.

Very probably, within the next two decades we will be able to test entire bodies of physical theory, and not just single, usually heavily simplified theories (example: the Millennium Simulation), by mapping them into software before checking them empirically against actual reality. In the medium term this will blur the boundary between outer reality and virtual realities. The accuracy of our physical model of the world will then be measured by the extent to which a virtual reality based on these bodies of theory is distinguishable from outer reality. It is conceivable that we will approach a theory of everything not so much by setting up mathematical constructs, but through the gradual, and possibly heuristic and generative, refinement of reality simulations.

Is everything entirely different? Are we looking at it from the wrong side?

Above, I asked what consciousness is and whether it is real at all, and if so, whether it is material and thus physically explainable, or immaterial and thus fundamentally inaccessible to scientific methods. A little later, the question arose whether a material reality exists at all if we find ourselves inside a possibly cyclic nesting of simulations.

Perhaps, in typically Western fashion, we are looking at these questions from the wrong side altogether.

The Upanishads, presumably written some 2,200 to 2,800 years ago, contain the concept of Māyā, according to which the outwardly experienceable (material) reality is not what it appears to us to be, but rather a kind of play or magic that deludes us. Māyā merely veils the actual reality, the hidden principles that are truly real. Perhaps the Hindus knew something a very long time ago that we are only now having to work out again?

The illusion of good and bad, and freedom from pain

Our thinking, our view of the world, is governed by an illusion: the illusion that we have to sort experiences, things, people, everything … into the categories »good« and »bad«. We do not do this consciously, but we do it incessantly, permanently.

Inside us sits a kind of mental bouncer who continuously sorts each new impression or experience (e.g. something we see or read, or the statements of a conversation we are following) into these two boxes. The »good« impressions we let through; we take them in, we accept them, we learn from them, we grow through them. »Good« impressions are linked to »good« feelings: joy, fun, contentment, pleasure.

The »bad« or »evil« impressions, on the other hand, are weeded out. They end up in a kind of mental dungeon into which we would rather not even glance. These impressions produce »bad« feelings in us, such as fear, anger, shame, guilt, grief. We do not want these impressions. We resist them. We fight them. »Bad« impressions cause us pain. We wish they did not exist at all, because then we would not have this pain. The wish remains unfulfilled, however, and we are convinced that a life without this pain is not possible.

I have good news. A life without this pain is possible, because dividing impressions into good and bad is a bug in our mental software. The categories »good« and »bad« do not actually exist. They are an illusion, a mental construction. Things are not »good« or »bad« in themselves. These are attributes that we ascribe to them, that we impose on them, because that is how we judge them.

Things, everything that happens; life, the universe and everything else, are not »good« or »bad«. They simply are. They exist.

The judgement »good«/»bad« stems from our own picture of the world, and that picture is a construct. It is our model of the world, which we have put together over the years in order to find our way around in it. It is a construction that has helped us, now and then, to cope with or assess certain situations. To that extent our world model is helpful, useful and healthy.

The problems, and with them the pain, arise when we forget that our world model is just that: a model (a construct of convictions and beliefs), and not reality. Instead of seeing reality as it is, we see it through the filters of our model.

That in itself is not a problem; we cannot do otherwise, that is simply how we humans, with our outstanding cognitive abilities, are wired. The problem is that at some point we are no longer aware of it. For most of us, that point lies quite early in childhood.

So we unconsciously and continuously compare our sensory impressions with the picture we have of reality. Impressions that agree with our model are taken in as »true/good«, while those that contradict it are discarded as »false/bad«. Our mental world model shapes our expectations. It veils our view and prevents us from being truly free towards the reality that surrounds us, and from welcoming it as it is. Because reality is. It is not x or y. It is.

None of the judgements we carry within us are properties of this reality. They are attributes that we ourselves assigned to it at some point and have been holding on to ever since.

Once we understand this, it becomes clear that we are not forced to maintain these judgements. We can detach ourselves from them, let them go. We no longer have to sort things into »good« and »bad«, or judge them at all. We can simply let them be and accept that they are the way they are.

That does not mean we have to resign ourselves, that everything has to stay the way it is. We are and remain intelligent, creative beings with the ability and the power to change our environment.

But it is now possible for us to live without the pain that arises when we move through a reality split into »good« and »bad«. We become more open, freer. Our consciousness can expand. We can allow ourselves once again to explore, in our thoughts, regions that we had forbidden ourselves because they carried the label »bad« in our mental model. We become calmer, more balanced.

Less can shake us, because being shaken implies experiencing something that our world model does not allow or did not anticipate.

Instead, we look at the world again like the small child we once were, which soaked up, wide-eyed, everything that happened »out there«. We no longer have to judge. We can be glad that we are here and that we get to experience all of this.

 

A Test for Homeopathy

A comment on my article Homöopathie ist Wahnsinn gave me an idea for how one could probably produce convincing evidence, or at least a strong indication, that the claimed or actually observed effects of homeopathy can be explained entirely by suggestion or autosuggestion.

Context and background (for those who haven't read that article): I am undecided (agnostic/questioning) about homeopathy. Obviously, enough people have success with it that it cannot simply be ignored. On the other hand, there is no conclusive explanation for a mechanism that does not contradict our current physical understanding of the world. If there is a real mechanism, it is not accessible to established methods of detection and verification, otherwise we would have observed it by now. Many conclude from this that no such mechanism exists. Another possible conclusion, however, is that our methods are not as all-encompassing or universal as we think. An important caveat is that our methods operate within our prevailing understanding of matter and energy, which may be incomplete.

The test I propose is this: a significant share (say, 50%) of all homeopathic preparations is produced from pure carrier material, i.e. sugar pellets, or water or another solvent. The carrier never comes into contact with the homeopathic source substance at any point. It is not processed or influenced in any other way either; it is packaged directly and put on sale. A code (hash) on the preparation or its batch points to a database recording which medicine is »empty«/»fake« and which was manufactured according to homeopathic procedures; for the duration of the study, say 20 years, this database is protected from any read access, i.e. sealed. Independent auditors have to ensure that the procedure is followed. The individual process steps have to be logged and ideally cryptographically signed, so that any manipulation can be ruled out completely after the fact.
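One way to make the sealing verifiable is a hash commitment per batch: the public code printed on the package is the hash of the batch ID, its true allocation, and a random nonce, while the allocation and nonce stay in the sealed database and are only revealed after the study. The following is only a minimal sketch in Python; the function names and record fields are my own invention for illustration, and SHA-256 is just one reasonable choice of hash:

    import hashlib
    import json
    import secrets

    def commit_batch(batch_id: str, is_verum: bool) -> tuple[str, dict]:
        """Create a tamper-evident commitment for one production batch.

        Returns the public code (printed on the package) and the secret
        record that goes into the sealed database.
        """
        # The random nonce prevents anyone from brute-forcing the allocation
        # by hashing both possible values ("verum"/"placebo") and comparing.
        nonce = secrets.token_hex(16)
        record = {"batch_id": batch_id, "is_verum": is_verum, "nonce": nonce}
        payload = json.dumps(record, sort_keys=True).encode()
        public_code = hashlib.sha256(payload).hexdigest()
        return public_code, record

    def verify_batch(public_code: str, record: dict) -> bool:
        """After unsealing, check that the revealed record matches the code."""
        payload = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest() == public_code

    # Example: produce one (hypothetical) batch, print the code that goes on
    # the package, and keep the record for the sealed database.
    code, secret_record = commit_batch("2025-0042", is_verum=False)
    print(code)                               # public code / audit log
    assert verify_batch(code, secret_record)  # done once the seal is lifted

Publishing the full list of public codes up front, signed and timestamped by the independent auditors, could additionally make it much harder to add, remove, or swap batches later without detection.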

Over the following 20 years, the preparations are put to use. For each case, the code is recorded together with the patients' feedback and further data such as objectively verifiable treatment outcomes. At no point is it known whether a patient received a homeopathic »verum« or a »placebo«. And since in most cases not a single molecule of the source substance remains detectable in the carrier, chemical analysis is of no use for telling the two apart after the fact either.

At the end of the study period, the allocation database is opened for the first time and matched against the treatment data. If no difference between the fake and the genuine homeopathic preparations is found, it should be considered established that homeopathy can be reduced entirely to suggestion. Otherwise, one would have to assume that a suggestion effect alone cannot be held responsible, and that homeopathy rests on a genuinely effective mechanism that merits further research.
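Once the database is unsealed, the comparison itself is conceptually simple. A toy sketch, again in Python with invented field names, and with a plain difference of means standing in for the proper statistical analysis a real study would need:

    from statistics import mean

    # Hypothetical unblinded records: one entry per treated case, with the
    # revealed allocation and some numeric outcome score from the study data.
    records = [
        {"batch_id": "2025-0042", "is_verum": False, "outcome": 6.1},
        {"batch_id": "2025-0043", "is_verum": True,  "outcome": 6.4},
        {"batch_id": "2025-0044", "is_verum": False, "outcome": 5.9},
        {"batch_id": "2025-0045", "is_verum": True,  "outcome": 6.0},
    ]

    verum   = [r["outcome"] for r in records if r["is_verum"]]
    placebo = [r["outcome"] for r in records if not r["is_verum"]]

    # If this difference is indistinguishable from zero (judged with proper
    # statistics, not a raw difference), suggestion alone would account for
    # the observed effects.
    print(mean(verum) - mean(placebo))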

In the former case, there would be a strong argument for paying far more attention to suggestion as a therapeutic mechanism than is currently the case. Taking the idea further, it would make sense not to restrict such a study to homeopathic preparations at all, but to extend it to medicines in general. We might discover that many non-homeopathic medicines have no effect beyond suggestion either.

In practice, carrying out such a study would be very difficult, because it would depend on the cooperation of everyone involved, in particular the manufacturers, for whom the result could be very damaging.

If such a study can ever be carried out, my prediction is that in many cases the difference between verum and placebo will be very small, i.e. that the effect is dominated by suggestion, for many homeopathic as well as allopathic preparations.

The conclusion would then be that a substantial share of the effects of medication is not based, as assumed, on mechanistic interactions at the biochemical level, but on suggestion, i.e. the influence of our consciousness.

On this view, both homeopathic and allopathic medicines would essentially only deliver a kind of trigger or impulse to the consciousness, under whose influence the actual healing change takes place. A trivial example of such an impulse is taste: if I take something and it tastes bitter or is otherwise unpleasant, a chain of associations is set off: bitter => medicine => medicine works => I feel better. (An experiment like the one described above could determine whether the bitter taste alone already has the same effect. For that, the placebo would have to produce exactly the same sensory impression as the verum without containing the actual active ingredient.)

Some of these thoughts were inspired by a book by Andrew Weil, originally published in the US under the title »The Natural Mind« (one of its German translations is »Das erweiterte Bewusstsein«). The German edition from AT-Verlag carries the somewhat misleading title »Drogen und höheres Bewußtsein«.

 

Recreational drugs are more dangerous than alcohol, or are they?

I’ve witnessed many times that when drug-prohibition proponents talk about the effects of psychoactive drugs, they will give some kind of very extreme example of a particular drug’s effects – in an effort to convince people how dangerous these substances are and that drug prohibition is therefore justified.

For example, for psychedelics and hallucinogens, the most extreme delusions would be taken as an example of »what the drug does«. For drugs with a sedative effect such as opiates, benzodiazepines or some dissociatives, you'd be shown someone barely responsive or completely passed out. The effects of amphetamines (including »bath salts«) are often exemplified by users who have been binging on high doses for an extended period, typically becoming psychotic and very, very unhealthy. (I could list many more examples, but I'm sure you get the idea.)

People will read these descriptions and think: hm, that’s really bad. I wouldn’t want that kind of thing to happen to me. Those drugs really are dangerous. They must remain prohibited.

But let me show you what it would look like if we took alcohol and did the same thing. Imagine someone chugging down a couple of bottles of wine or spirits in a short span of time. After about an hour or two, that person would become extremely uninhibited, display severe loss of motor control, slurred speech, incoherent thinking, possibly even violent behaviour. That person would become rather unpleasant. After some more time, the person would likely experience strong nausea and vomiting, and still later possibly pass out.

Of course, if you drink extreme amounts of alcohol, your behaviour becomes extreme. But that’s not how you would describe the general effects of alcohol to your friend. You’d say that you can drink a glass of wine, maybe two, and still remain a pretty nice person to have around. The effects of the alcohol would be noticeable, but things would still be pleasant for everyone involved.

You see, the same is true for practically all of the other drugs that many people think are prohibited for a reason.

Here is the thing: most, if not all, recreational drugs can absolutely be used in a way that is as harmless and pleasant as your occasional glass or two of wine at night. But this requires knowledge of the drug's properties and some experience with using it. Yes, we can argue about toxicity, dependence and addiction potential, and other risks. But the way these issues come into play depends very largely on how you use the drug. They are not an automatic property of the drug per se, even if 40 years of anti-drug propaganda have told you otherwise.

If you use alcohol in a bad way, you are likely to suffer severely negative health effects. Remember how it took you some time before you knew how to use alcohol in such a way that the experience was controlled and pleasant? Again, the same is true for illicit drugs. If you are going to use cannabis, amphetamines, cocaine, psilocybin mushrooms, LSD, heroin, etcetera, for the first time, you should be prepared. If you use more than you can handle, you are in for a challenge and possibly a bad time – just as you would be with alcohol.

I didn’t start seriously drinking alcohol at parties until I was 17. (Yeah, I was always a very late developer. By the way, I wasn’t particularly interested in alcohol. I hated the taste at the time, but I didn’t want to be the uncool kid. You know how this goes.) I often drank way too much, got sick and threw up, misbehaved and lost control, etcetera. I didn’t yet know how to use alcohol in a good way. Everyone who has ever touched alcohol can share similar stories. But I learned how to enjoy alcohol responsibly, as most people do. In the very same way, you can learn how to use any drug responsibly and safely.

Let’s not underestimate the potential risks associated with any substance that alters your state of mind/consciousness (which includes alcohol, nicotine and caffeine!). But let’s be honest when we talk about drug effects. After all, it is not the drug itself that is potentially dangerous, it is the behaviour of the person who uses the drug.

If we want people to be safe when using drugs – any drug – the best thing we can do is to educate them. But first, we need to educate ourselves. The more we know what we’re doing, the safer we are, the safer the people around us are, and the more pleasure can be had by everyone.