Social Media and Heisenberg

Many of you are familiar with Heisenberg’s Uncertainty Principle.  It basically states that, on a quantum level, the more accurately we measure one quantity, the less accurately we can measure others.  The most common pair cited is position and momentum (and momentum is mass times velocity, where velocity covers both speed and direction, by the way).  The most famous visualization of this principle, and some of its consequences (specifically the role of the observer in all this Uncertainty mess), is Schroedinger’s Cat.  Never intended to be run as an actual experiment, it puts an imaginary cat in a box with food and water (to keep it alive), plus a vial of instant-acting poison which will be released at a random, unpredictable time (Schroedinger describes a radioactive-decay-based trigger mechanism, but really any random trigger will work for purposes of the visualization).  You know the location of the cat with absolute certainty (it’s in the box), but without opening the box you can’t know whether it’s alive or dead.  Additionally, if it’s still alive when you open the box, you no longer know where it is, because it’ll take off at hyperspeed and hide in a pocket dimension for a while, as cats do.  Not exact, but useful for visualization.

The Uncertainty Principle only applies to quantum behaviors, but it can be used as a starting point to describe other behaviors of other, non-quantum, things.  In this case, I’ll use it as an analogy for different forms of security: physical safety, access control, privacy, and convenience.  Many of you already understand this, but I wanted to address it anyway to add my own perspective to the conversation.

Security is a really difficult goal to achieve.  The most secure computer in the world is the one that is never even built, with the second being the one that is never plugged in, even to power.  The most secure vault is one with no door, the most secure password is one never stored anywhere, even in the memory of its creator.

All of these things are effectively useless, though.  So we compromise slightly on the security in exchange for convenience.  We build, then plug our computers in, so we can turn them on and actually use them.  We build doors into our vaults so we can put things into them, and take them out later.  We create passwords that we can actually remember, or store them someplace where they can easily be retrieved.  Each of these compromises requires a lot of extra work to bring the security back up anywhere near what it was before the compromise, but too far and we lose all the convenience as well.  We can have absolute security or absolute convenience, but not 100% of both simultaneously.
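Since “stored someplace where they can easily be retrieved” will come up again later, here is a minimal sketch of what storing a password securely usually means in practice: keep only a salted, deliberately slow hash, so the stored value can be checked without the password itself being recoverable from it.  This is an illustrative example (the iteration count and salt size are assumptions, not a recommendation for any particular system):

```python
import hashlib
import hmac
import os

# A minimal sketch of the "stored securely" compromise: keep only a salted,
# deliberately slow hash, so a stolen copy doesn't reveal the password itself.
# The iteration count and salt size here are illustrative assumptions.

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest) suitable for storage."""
    if salt is None:
        salt = os.urandom(16)  # a random salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

The convenience cost is exactly the one described above: the deliberately slow hash makes each login check take longer, in exchange for the security of the stored copy being useless on its own.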

This is equally true online.  Social media has made staying in touch with friends and family much more convenient – just post updates about events in your life once, and everyone gets it automatically.  Much faster and easier than that yearly update “newsletter” your aunt sends to everyone in the family, and it can be much more detailed and interactive, too.  This is where privacy comes in.  Privacy is a form of security for your life choices and experiences. Since those status updates are stored on someone else’s servers, you’ve lost most of the privacy your aunt’s letters have – only your family even gets copies of them – in exchange for the convenience.

But at that point, convenience is a form of security for your ability to actually do the things you’d like to do in your life.  The lower the convenience of an activity, the more difficult it is to actually do that activity, and the less likely you are to successfully complete it.  Eventually, it becomes so inconvenient it isn’t even worth the attempt.

Picturing convenience as a form of security might be a bit difficult, so how about a scenario.  Let’s say you’re standing watch over a facility of some kind.  It doesn’t really matter what kind of facility – it could be a shopping mall or a military weapons depot – but whatever facility you’re guarding, someone wants inside to cause damage (rob the mall, blow up the weapons to prevent their use, etc.).  When you detect this person attempting to access the facility, and they don’t respond to verbal force (“Stop!”, “Stay back!”, and similar are generally very effective, as most assailants are trying to avoid detection, not kill everyone, and this is the required initial level of force when responding to threats), your responsibility is to step up the levels of force until they do respond.  Most of these levels require no special equipment, but eventually you get to hard controls (blunt weapons intended to disable the assailant and reduce their desire to cause harm).  If you, as the watch-stander, don’t happen to have any hard-control equipment on your person, your options are limited.  You could go get one from an armory – a secure location to keep such things when not actively in use – but in the time it would take to do so, the assailant would likely already be inside.  So you trade the security of keeping the equipment in the armory for the security of having it on hand when needed – that is, you check it out at the beginning of your shift, before you relieve the previous watch-stander, and then check it back in at the end of your shift, when the next watch-stander relieves you.  The same principle applies to weapons at the deadly-force level, which are strictly prohibited outside a combat zone unless the other levels of force have been unsuccessful.

It’s easy to see, in this scenario, how convenience is its own form of security.  But we can apply that to our other examples from before.  The computers we’ve built and plugged in can give us access to information we need to do (and thereby keep) our jobs.  The vaults we’ve added doors to allow us a way to place valuables beyond reach of unauthorized persons.  And the passwords we’ve stored for later reference (assuming we’ve stored them securely, of course) allow us to ensure we still have access to our own data.  This approach can be applied to all kinds of convenience to see where an increase provides additional security.  The big question is always what form of security we care most about.  Ranking various forms of security from most to least important will help us make good choices about which tools are best for which tasks.

So physical safety – the doorless vault, the unbuilt or unplugged computer, the person standing watch – is one form of security, and among the most obvious.  Access control – the combination on the vault door, the password on the computer, the watch stander’s request for ID – is another, also fairly obvious.  Privacy – being the only one with the password to data which is only available through the password’s use, a closed door with no surveillance tech inside, the watch stander only allowing certain people through at any given time – is another, albeit a tiny bit less obvious than the other two.  Convenience – the computer being built and plugged in, the vault having a door, knowing the password, the watch stander having the required response tools on their person while on duty – is the least obvious, but like privacy, no less important than the others.  At least, not in general.  But how to balance them?

Well, that comes back to which tool is best for a given task.  Each scenario has different requirements for which form of security is most important, which is second-most, and so forth.  That ranking will be different for each scenario, even if it does end up being very similar.  Which brings me back to social media.

Physical security, in this case, becomes about the data centers where your social activities are stored.  Access control is generally via username and password combinations (the username tells the system who you are, while the password helps ensure you actually are the person associated with that username), though many platforms have added additional layers to their access control, generally in the form of a semi-random code that changes frequently.  Both of these are considered the highest priorities, in no small part because they are among the simplest to implement, though neither is perfect in any case.  Privacy and convenience, however, muddy the waters a bit.  Platforms can prevent others from seeing your data, but then you lose the convenience of being able to say something once and have the whole world – or at least, the portion of it you care about – be able to see it.  They give you control over this part of the process by letting you arbitrarily group others, then control which groups see what information.
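That “semi-random code that changes frequently” is usually a time-based one-time password (TOTP).  A minimal sketch of the standard HOTP/TOTP construction (RFC 4226/6238) – the secret here is illustrative, and real platforms handle setup and storage for you:

```python
import hashlib
import hmac
import struct
import time

# A minimal sketch of a time-based one-time code (TOTP).  The server and the
# user's device share a secret; each derives the same short code from the
# current 30-second time step, so the code "changes frequently" on its own.

def totp(secret: bytes, for_time: float = None, interval: int = 30,
         digits: int = 6) -> str:
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // interval                # current time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The platform checks the submitted code against its own computation for the current (and usually adjacent) time step, so an attacker who has only your password still can’t produce a valid code.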

But there’s still the issue of your data being stored on their systems.  How do you address that?  One option is to trust that the platform’s owners and operators will not use the data you’ve supplied for anything beyond making sure your intended audience can see it.  Of course, that rarely happens, mostly because it’s hard to make enough money to keep your servers running that way.  So for many people, trust isn’t an option.  What then?  Well, you can choose not to use the platform itself at all.  That satisfies the privacy concern, but sacrifices convenience.  So maybe you set up your own server(s) to provide a similar platform.  Nothing wrong with that – you control the server, so you know the data won’t be used for anything nefarious.  But you still haven’t recovered your convenience, because your platform doesn’t have all of the same users as the platform you just left.  So now you have to break the problem up differently.  What information are you comfortable sharing with the entire world?  What information will you need to present carefully in order to get the most out of the convenience with minimal impact on privacy?  What info is so sensitive that the convenience isn’t more important than the privacy?  Then, you can start to use both systems – the public platform you don’t quite trust, and the private platform you trust implicitly – to their fullest potential.
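That decision process can be restated as a tiny routing table.  Everything here – the platform names, the sensitivity levels – is hypothetical, just the questions above written as code:

```python
# A hypothetical sketch of the split described above: route each post to a
# platform based on how sensitive it is.  All names here are illustrative.

ROUTES = {
    "public": "big-platform",    # fine for the whole world to see
    "careful": "big-platform",   # share, but word it deliberately
    "sensitive": "own-server",   # privacy outweighs the convenience
}

def choose_platform(sensitivity: str) -> str:
    return ROUTES[sensitivity]
```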

But much like Heisenberg’s observation that knowing everything about a given quantum particle within a given instant is impossible, getting 100% of all types of security at once is beyond our grasp.  Like scientists observing quantum interactions have to prioritize which properties of any given particle they’re interested in most, we have to prioritize our activities online by what is most important to gain from them.  Often, we don’t need to completely abandon any given platform, so much as temper our interactions thereon for what we expect the platform to do with the data we’re generating.

Choose Your Own Adventure: Computing Platforms!

The Great War

There’s a war on.  It’s been fought for decades, and there’s little hope for an end to the war any time in the foreseeable future.  Just as with any other war, it’s tallied up a cost beyond the average human mind’s ability to actually visualize.  Its weapons aren’t as easily recognized as being lethal, but there have been many casualties over the long years.  On the surface, there are only two contenders, but the reality, as always, is much more nuanced.  You’ve heard of this war, even if you know nothing about it.  It is the computing platforms war.

The exact nature of the conflict is intentionally obscured by all parties, because spin is the only way they can win or lose their battles.  Each competitor has arguments for why their platform is better than anyone else’s, and these arguments aren’t usually false, but they are frequently misleading.  The truth of the matter is that each platform is great at some things, and completely incapable of others, while being decent at everything else.  Which platform is actually “the best” depends on what you intend to use it for, and how.

What follows is not an exhaustive guide, but can be used as a starting point in making your own decisions about which platforms to use in which scenarios.  I currently plan to expand it as time permits, but feel free to provide your own thoughts in the comments.  Of course, let’s keep this civil.  I haven’t had reason to remove any comments yet in other posts on this blog, but I reserve the right to do so if necessary.

Right, all that out of the way, here’s a quick overview of what this war currently looks like.

The major players:

  • Desktop/Server Arena
    • Windows (Microsoft)
    • Mac (Apple)
    • Linux – which is actually several less-major players:
      • Ubuntu (Canonical)
      • Debian (Debian Project)
      • RHEL / CentOS (Red Hat / CentOS Project)
      • Chrome OS (Google)
  • Mobile Arena
    • iOS (Apple)
    • Android (Google)
    • Windows Phone (Microsoft)
    • Blackberry (RIM)

There is a bit of overlap between the arenas, as Android is technically a specific Linux “flavor”, and many mobile Windows devices actually run full versions of Windows, rather than Windows Phone, but overall, these are the main camps, roughly in order of market share in each arena.  Ranking is subject to change, of course, and may already be different than the numbers I used when listing them here.

The problem we face, as computer/device users, is that there are so many choices, and each of them is poised against the others in a battle for survival.  There are many smaller players on the field, and many others who have fallen for one reason or another in the past.  But let’s see what we can figure out about the players listed above, and try to determine what the relative strengths and weaknesses are of each.

Desktop/Server Platforms


Windows

Microsoft’s focus has long been businesses, and their systems are designed and built around that. They easily support a wide array of business tasks, and do what they can to make developing new software as easy as possible, though often at the cost of speed and simplicity of the architecture. The complex ways in which one piece of code relies on sometimes hundreds of others makes keeping each piece of software running properly a bit tricky – especially when several programs share the same pieces of common code, but each expects a different version of it.
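The shared-code problem described above (often nicknamed “DLL hell” on Windows) can be illustrated with a toy dependency check – every program, library, and version number here is made up:

```python
# A toy illustration of the shared-code conflict described above: only one
# copy of a shared library is installed, but different programs each expect
# a different version of it.  All names and versions are made up.

installed = {"shared.dll": "2.0"}  # the single system-wide copy

requirements = {
    "payroll_app": {"shared.dll": "1.0"},    # written against the old version
    "inventory_app": {"shared.dll": "2.0"},  # happy with the current one
}

def broken_programs(installed, requirements):
    """List programs whose required library versions aren't the ones installed."""
    return [app for app, deps in requirements.items()
            if any(installed.get(lib) != ver for lib, ver in deps.items())]
```

Only one version of the shared copy can win, so whichever program expected the other version breaks – which is why keeping everything running together gets tricky.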

Still, the business-friendly approach has made Windows PCs fairly ubiquitous in the business world, which improves the market share at home as well – because you’re more likely to use what you’re already familiar with.  That means the system has also developed great support for gaming, to give users further reason to have a computer at home in the first place (though the Internet did this far more effectively when it finally came along).

While the day-to-day functionality of the system isn’t terribly optimized for speed, the gaming functionality is – brutally so.  More games are released for the Windows PC platform than any other, even mobile ones. Granted, there are cross-platform games that support Windows as well as Mac and/or Linux, and there is a preponderance of web-based games, which have the browser as their platform, but Windows is still the gaming king when it comes to target platforms.  It even outperforms consoles, which I’m choosing not to cover here, mostly for space.  In short, if business and/or gaming are chief among your desired uses, Windows is probably a safe bet.

In the server realm, though, things start to look a little different.  Microsoft has improved greatly in the server market in recent years, as they’ve started adopting open technologies instead of simply creating their own from scratch.  They can integrate very tightly with Windows desktops, and even mobile Windows devices, giving them a bit of an edge in business environments where lots of systems are managed by a central team.  But if you’re looking to use them for much of anything else, Windows servers just can’t keep up with many of their competitors in the ability to do lots of things at once.  Also, Microsoft’s pricing has never been great for small budgets.  I’d recommend a couple of these for central management of other Windows systems in your company (and I only say a couple because you want some redundancy to prevent terrible things from happening if one of them goes down), but otherwise, there just isn’t enough bang for buck here on the server side.


Mac

Apple has been making computers since before anyone else figured out what the future of computing would actually look like. While they rarely venture into untested waters any longer (the iPod being their latest example of such a venture), they still emphasize ease of use throughout their systems. They control the hardware as well as the OS, so they can ensure everything fits together neatly and tightly, meaning things are (almost always) more stable than they might be otherwise.

Their attention to detail over the years has made them an ideal environment for multimedia tasks, so this is where most of the polish has gone. And it doesn’t matter, much, which type of media you’re working in – audio, video, photography, and illustration are all tasks Macs excel at, and not just because Adobe develops their Creative Suite for Mac first. Rather the opposite, in fact – Adobe focuses on the Mac first because new features can more easily be built, and expected to operate reliably across myriad systems. So if you’re working with multimedia, your best bet is a Mac system.

In the server realm, the differences between Mac and Linux offerings start to blur into each other. Mac OS has been built on a UNIX core (Darwin, whose BSD components derive largely from FreeBSD) since moving to version 10 (a.k.a. Mac OS X), meaning it has a large number of similarities to Linux, since the Linux core was basically a reimplementation of UNIX in the beginning. Things have evolved in different directions in some areas, but the fundamentals are still the same, meaning most software will run on either platform with little to no tweaking. Ultimately, this means Mac and Linux servers are essentially identical in ability and performance, so price will likely be your deciding factor, here.


Linux

For years, Linux was used only by trained technical personnel, mostly because the user interfaces were so difficult to learn and use. Part of the reason for this was the sheer number of interface options – several groups of people each working on their own interfaces, with only minimal collaboration between teams, despite (nearly) all of them being open source, and thus fair game for adopting code from each other. While this has the benefit of giving many more options for how to interact with your computer, so you can select whichever will work best for you and your own habits, it also slows development somewhat.

This has been changing more and more rapidly in the last decade or so, and current interfaces are quite modern. Linux systems are quite capable, and perform most tasks at nearly the same level as the other platforms, though they don’t quite match up. But they have spent decades being used by programmers and other tech experts for various purposes, and their strengths undoubtedly lie there. They also have the lowest price tag: the OS itself is free.

As a server system, Linux generally blows past the competition. The BSD network stack is a bit better, so BSD-based routing and switching equipment tends to fare better, but for most other server tasks, Linux leads the way, especially in virtualization. And they’re really your only option if you want to use 100% open software.

Mobile Platforms


iOS

Apple’s mobile device OS is essentially a minimalist version of OS X, with some extras for mobile-only functionality that the desktop and server versions don’t need/have. Not as optimized for multimedia work as the full system (mostly due to more limited space and power on smaller devices), it still handles such tasks well. The focus here is more on ease of use, and stability, than much else. Apps for iOS devices have a more rigorous approval process to be accepted to the App Store than the other platforms have, because Apple has more exacting standards for what they’ll endorse installing on their devices.

This stability comes at the cost of adoption lag. It takes Apple devices longer to incorporate new technologies than their competition. So if you’re looking for that nifty heart-rate monitor, or (until recently) near-field communication to, say, pay at the register by tapping your phone on the payment terminal, you have to wait longer to get it than you would elsewhere. Still, the extra wait is generally worth it – though it’s almost always a good idea, with any of the platform war players, to wait a bit after new releases to let the initial bugs get ironed out before buying.


Android

Easily the most flexible option here, Android devices allow phone makers to charge less for the software on their phones, lowering the overall price, and letting phone makers make up the difference with more, better, or just cooler hardware. Google doesn’t really have much to say about how the devices themselves are built, focusing instead on what they’re good at – the software. The Linux core means a lot of the hard work is already done for them.

Android apps are almost too easy to add to the Play Store (formerly the Android Marketplace), leading to tens or hundreds of apps to do the same thing, often in essentially the same way. This level of selection is similar to the spectrum of software options for Windows desktop systems – there are so many, finding the one that works exactly the way you want it to is a matter of simply trying a few out. It also has the same problems – finding apps of high enough quality, and which you trust not to be doing nefarious things behind your back, is pretty difficult in most cases.

Windows Phone

Microsoft has had a simplified version of Windows designed for portable devices for longer than anyone else listed here. They entered the market around the same time Palm was producing their first PDAs. So they have a great deal of experience in mobile systems technology. When the handful of manufacturers using Windows on their mobile devices decided to add cell phone tech to their PDAs (rebranding them as “smart phones” since more people wanted phones than wanted digital assistants), the Windows Phone OS was an easy tweak to the existing system. Relatively speaking, of course – adding any feature to an operating system is a complex and time-consuming task.

Boasting the greatest integration with Windows desktops and servers, as well as (in many cases) being able to run a lot of the same software (I wouldn’t load up World of Warcraft or Photoshop on a mobile device), these are an easy choice in a number of setups. However, low market share has limited the number of people writing apps for this mobile contender, so the selection isn’t as good as it is elsewhere, especially on the devices which can’t run the same software as the desktop version. Though Microsoft has actually done something pretty brilliant about that, by adding support for running mobile apps to their desktop systems. This strategy may not be enough to shift market share their way, but it is a leg up.


Blackberry

Research In Motion entered the mobile device arena around the time Microsoft and Palm devices started becoming phones. Their goal was to provide smart phones for enterprises, letting the other two companies provide for the consumer market. For years, they were the enterprise option of choice, taking a very similar approach to the one used by Apple – they made the hardware as well as the software, and everything was designed to be stable and easy to use. The ease with which enterprises could manage their mobile devices from a central location, in much the same way they’d already been managing desktop systems for years, made them the obvious choice.

RIM’s focus assured their position for a long time, but as their competitors caught on to the things they were doing in the enterprise space, and started providing the same options for their own products, they started losing ground fast. Today, most Blackberries are used by companies that have been using RIM for a while, and either can’t afford to switch (either due to raw costs or personnel costs) or are still under contract. The things that once set them apart no longer do. Also, as an enterprise-focused option, their app selection is very limited, though the apps available tend to be just as solid as the system itself.

Quick Reference

So in short, there are a few deciding factors that will make one choice or another better than the others for any given purpose, but general use is served equally well by all platforms. If you are doing one of the activities listed below, your choice is pretty easy, but otherwise, any of these systems will serve you well.

Desktop:
Business/Gaming – Windows
Multimedia – Mac OS
Programming – Linux

Server:
Integration/Management – Windows
Anything Else – Linux/Mac OS

Mobile:
Solid/Stable/Easy – iOS
Flexible/Cutting-edge – Android
Integrated – Windows Phone
Legacy – Blackberry

The secret that none of the contenders in the war will tell you is that there is a “right tool for the job”, and that you can have more than one tool in your toolbox. As soon as I can afford one, I’ll be adding a Mac to my own tools – Linux and Windows have been in my toolbox for years now. Don’t let yourself become a casualty in someone else’s war. Choose your own adventure, and the right platform for your needs.

Another Thrilling Episode of Trust Nothing Day

I strongly dislike April Fools’ Day.  I mean, sure, some of the pranks are amusing, but most are just vicious, and trying to do or say anything with any seriousness to it requires a lengthy disclaimer that you aren’t joking, this is legit.  And some have abused the day’s “celebrations” so much that even that can’t be trusted all the time, leaving everyone else in a state of skepticism so severe it becomes cynicism.  And I don’t like being a cynic.

I suppose all I really want is for everyone who chooses to participate to pick one joke for the entire day, make the effort to ensure it isn’t simple trolling, and deliver it well – then get on with the day like any other.  Let’s be tasteful, here.  Like we would with anything.

I guess I’ll just have to keep wishing…

The Good Doctor

I am ashamed of myself.  I have waited until just now, over twenty-five and a half years into my entire life, more than a third of the time I can expect to live, to start watching episodes of Classic Doctor Who.  My repentance is late in coming, perhaps, but thorough nonetheless.

I started my journey as any clueless wanderer ought – by asking Wikipedia to list off the episodes in correct order.  The explanatory material shocked me.  Episodes gone missing?  However could that have happened?  A policy to destroy old episodes of shows?  How barbaric.  I was appalled, of course.  A series popular enough to run for 26 seasons – yes, twenty-six of them – and the people responsible for its very creation and existence had a policy to destroy the older installments?  I knew there had to be a reason, but what reason could they possibly have had which would make any logical sense?  How could the destruction of such great material be justified?

As it turns out, though film and even broadcasting technologies weren’t exactly brand new, they were still, as the world transitioned from black-and-white to color, operating under the same constructs as the stage.  If you wanted to broadcast a story, the players would perform it for you, allow you to record it for that broadcast, and expect you to rehire them for subsequent rebroadcasts.  The ability of film to reduce the workload of everyone involved wasn’t entirely overlooked, however; time- and number-limited broadcasting licenses were usually attached to each piece, so it could be rebroadcast up to the set number of times within the set amount of time.  This time period was generally fairly short, amounting to only a couple of years.

When these licenses expired, the film copy was no longer of any use to the purchaser, since they no longer had the rights to use it, so these were destroyed to make space for other, frequently newer films.  If the originals were kept on tape instead of film – and many were – these tapes were erased and reused for other projects.  This had the added effect of reducing overall costs, as the amount of storage space required was kept low, and what space there was remained free of old projects which could no longer see a profit.

The idea that broadcast television material might serve a cultural purpose rather than simply a financial one eventually caught hold enough that preserving these older recordings became the policy, even when the rebroadcasting rights had expired.  There was, it had been determined, a cultural duty to preserve them.  From then on, the hunt for destroyed episodes was on – not just for Doctor Who, but for every series that had met with this unfortunate end.  Many such episodes had been sent overseas when broadcasting rights to them had been purchased there, though only copies were sent out; never originals.  Over the next several years, continuing to the present, most of the missing episodes returned, and Doctor Who is (among) the most completely recovered of such series.

Doctor Who is also peculiar in that it is the only series of that era for which every single episode has survived in at least an audio form – thanks mostly to viewers who didn’t have VCRs (this is before VHS/Betamax had their now-legendary war), and so had to accept merely recording the audio component during various broadcasts.  These audio versions are of course in varying states of quality and repair, but every episode’s audio still exists today, regardless of whether the video exists alongside it.

That bit of background absorbed, I then learned that each episode was generally considered merely part of a larger story, a “serial”.  Essentially, Classic Doctor Who is a collection of mini-series tied together only by the common character of the Doctor (though many other characters can be considered recurring at various points throughout).  I find this format to be fascinating, as it presents some interesting opportunities for storytelling.  Still, this format choice meant that a single missing episode would effectively ruin several adjacent episodes as well, at least to the point where a video version of the surrounding episodes would probably not be released until the missing one(s) were restored.

Armed with my list, I set to Netflix to watch them all in order.  And discovered that the streaming service, at least, offered only a small handful of the full 155 serials originally broadcast, and not the 1996 TV movie, which aired roughly 6.5 years after the last serial, and nearly 9 years before the introduction of the Ninth Doctor in the presently-airing series.  Still, the theme song had now been running through my head incessantly for at least a week by this point, so I dove into the earliest of these I could find – Doctor Who: The Aztecs, the sixth serial, which can be found in Season 1.  I then proceeded chronologically by broadcast date through the paltry selection until arriving at Season 16, which is composed of six serials, themselves tied more closely together in a single arc called The Key To Time (if you find a DVD by that title, you have the entire 16th season of Classic Doctor Who in your hands).  And discovered that four of the six stories were actually available, including one written by none other than Douglas Adams, of Hitchhiker fame.  Called The Pirate Planet, this is the serial which I have just finished.  Downright amazing, and perhaps surprisingly coherent by Adams’s standards.  Many of the concepts Adams brought to Doctor Who, especially if they never actually made it to the screen, were later reused in his published works.

The true tragedy, though, is that none of the serials available on Netflix have anything to do with the Daleks at all, despite the fact that Daleks are perhaps the true icons of the series – after the Tardis, of course.  Still, the fact that any of these classic episodes are available to begin with is satisfying, so I can’t complain too loudly for too long.

Have you met the good Doctor yet?  Have you braved the Classic series, or stayed safely in the confines of the modern version?  So long as you expect material from the 1960s through the late 1980s, I suspect you’ll enjoy the Classic episodes just as thoroughly – and gain a greater insight into what’s really going on here.  But you don’t have to take my word for it.