Things I’m Up To, 2nd Edition

Six years later…

What am I up to right now? Well, let’s take a look…


The collaborative writing project I mentioned before has … stalled. I haven’t actually heard from my collab partner in a couple of years, so I have no idea what’s up there. Which leaves me just my own projects to focus on.

I’ve had ideas for more novels and stories to write, but accomplished essentially nothing on any of them aside from writing the ideas down for later exploration. So, sadly, not much to report here.


I’m not doing the client work anymore, at least not really. I have full-time work with a company doing some pretty amazing things in the field of application infrastructure creation and management, so there’s not much time left for freelancing. It’s enjoyable stuff, so no complaints from me.

IDLX has taken a break for a while in favor of other projects I’ve come up with since. Top of that list is Calends, a library for handling arbitrary calendar systems and converting between them. It can handle dates some 140 billion years into the future or past, at resolutions smaller than Planck time (often described as the smallest physically meaningful unit of time). It’s mostly working already, but I’m having a small issue with the PHP extension that gives access to the shared library itself (DLL or shared object file), so that’s holding things up a bit. Once that’s ironed out, the next project is StoryLines, a mobile-first, offline-friendly web app for managing story elements relative to moments in time. I need Calends complete first, so that StoryLines can properly handle date/time inputs across the various real-world and constructed calendars it will be presented with.
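For the curious, the trick that makes “arbitrary calendar systems” tractable is converting everything through a single internal timestamp, rather than writing one converter per pair of calendars. This has nothing to do with Calends’ actual API (the names below are purely hypothetical); it’s just a minimal Python sketch of the pattern, using the well-known Julian Day Number as the intermediate:

```python
# A toy "common intermediate" converter: every calendar translates to and
# from a single internal value (here, the Julian Day Number), so adding a
# new calendar means writing two functions, not one per existing calendar.

def gregorian_to_jdn(year: int, month: int, day: int) -> int:
    # Fliegel-Van Flandern integer algorithm (proleptic Gregorian).
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

def jdn_to_gregorian(jdn: int) -> tuple:
    # Exact inverse of the above.
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return (year, month, day)

# 2000-01-01 is Julian Day Number 2451545.
print(gregorian_to_jdn(2000, 1, 1))  # → 2451545
```

Reaching 140-billion-year ranges at sub-Planck resolution obviously requires a much wider internal type than a machine integer, but the shape of the conversion stays the same.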

There are a handful of other projects, too, that I’ll start working on at various points, but I’ll come back to those later, when they start to actually happen.


Goodreads has been neglected a lot, but it’s mostly up to date at the moment. Most of the other stuff I’m reading takes the form of web comics. You can check what web comics I’m currently reading here, if you like.

In the world of roleplaying, I don’t have much to report other than that I’d love to assemble a team. I’m also commissioning artists to do a piece related to a setting I created, one that’s grown through idea-bouncing among friends and some fleshing out via playtest. If you’re interested in that kind of thing, hit me up!

I did not re-enlist. My hair appreciates this decision. There are some drawbacks, but I’m ok with them.


My marriage is still in the slow process of coming to a close. I have two wonderful children whom I adore to pieces, and I spend weekends with them as often as I can without depriving their mother of weekends with them entirely. It’s not an easy path we’re on, but things are better in all the ways that matter.

To Be Continued…


Darkness Within

This post was originally written back in August of 2015. I’m not entirely certain why it wasn’t posted then, but here it is now:

So it’s been a while since my last post. I figured it wouldn’t hurt too much to write another. This will be another along the lines of TMI, but probably with less TMI in it. I’ll let you know when I finish.

For those who aren’t yet aware, I’ve finally gotten in to get some meds for my depression and anxiety. I take them regularly, and have only been late three times, though one of those times was, admittedly, by about 14-16 hours. They help a ton. Most of the time, I’m perfectly stable, which is so ridiculously awesome I can’t even begin to explain it to those who’ve never experienced the depths that depression can plunge a person to. They have no basis for comparison.

At any rate, and as with any medication, the pills I take aren’t perfect. There are times when the depression overwhelms them, and the distance between myself and the surface of the world skyrockets. The three millimeter pothole I was walking past becomes a three kilometer chasm I’ve somehow fallen into, and the sides are made of something frictionless and smooth. There’s no way out, no way back to the surface, to the happiness I quickly forget I ever actually felt, or was ever even capable of feeling to begin with. My medication – with its ability to restore perspective to the landscape of my mind, to help me see this chasm is really still a tiny pothole, but I’ve simply reduced myself to the point it seems bigger than me – just isn’t doing the job in those moments.

It’s hard to think in these chasms. Hard to recall the times when you weren’t trapped by your own perceptions. Hard to take any action at all. It’s too much effort, and it probably wouldn’t do any good anyway. Your perspective is so twisted, so fundamentally changed, that you don’t recognize the ways out even when you see them.

And the few times you do recognize escape routes, they seem insurmountable. Because depression isn’t sadness. It’s defeat. It’s the kind of defeat that reinforces itself, that actually causes the brain to reward negative thinking more than positive thinking in many cases, keeping you defeated perpetually. “Every situation is unique,” it reminds you, then twists that into “what works for others can’t possibly work for me”. And so forth.

So if the meds stop working, for whatever reason, and for however long, how do you get out of these chasms on your own? Well, in my case, I’ll have to get back to you once I manage it again. At the moment, thanks to the emotional state’s self-reinforcing nature, I can’t remember anything that was effective in the past. Maybe writing things down helped? That seems to have been the case when I wrote TMI a couple years back. I dunno yet.

Honestly, this is where therapy usually comes in. Medication isn’t effective 100% of the time. It can’t be, without some way to carefully monitor the body and deliver more or less of any given substance exactly as the body requires it to operate normally. Generally, that’s the brain’s job, but the brain is malfunctioning, so we do the best we can to take over.

Therapy, too, isn’t 100% effective. I don’t recall the exact numbers (perhaps they’ll manifest in the comments), but neither approach, alone, is always enough. Using both together brings the success rate considerably higher – still not 100%, but close. A lot of this comes from being able to identify and process the underlying emotional triggers of the stronger attacks. Of course, it also comes in the form of tools to escape, methods which may even have been practiced beforehand (say, during one or more sessions) to be a ready, automatic response to situations that need them. There are likely other benefits I’m not aware of.

That’s largely because I haven’t had any therapy, yet. A bit of self-help in the form of a book called Feeling Good, but nothing involving a live professional. So I have to find my own way out in the meantime. Which is fine. I’ll be fine. I’ve found ways out before. I just need to keep with it until I find another.

The bit I’m wondering about now, though… What’s causing these episodes where the pills aren’t enough?


Patreon Launched!

A few of you may be aware that I have been putting together a Patreon campaign over the past couple of weeks. Well, today I’m finally ready to launch it! It’s bright and shiny and new and you can find it at – or just hop over to the right side of the page (or scroll below the post(s) if you’re viewing this on a phone or other narrow display) and click the “Support on Patreon” button!

So wait, what the heck is this “Patreon campaign” thing, anyway?

Well, it’s like Kickstarter in that patrons pledge a certain amount (which isn’t paid right away) to whatever campaigns they like, and can expect certain rewards for doing so at various pledge levels. It’s like GoFundMe in that the money doesn’t necessarily go toward a product or service the patron will be able to get or use after the campaign. And it’s like neither of those in that it’s recurring – that is, it’s not a one-time deal, but an ongoing project.

In my case, whenever I reach a progress checkpoint – a preselected point in the process of whichever project I’m tackling at a given moment – my patrons will be charged whatever they pledged soon after I let them know I did it. I’m planning to space these out approximately monthly, but chose to charge by checkpoint instead of month to give myself a little extra accountability – and to avoid charging my patrons for months where I didn’t actually accomplish anything related to the projects they’re paying me for.

So basically, it’s a paycheck for me, from everyone who wants to chip in on it.

So hop on over to my Patreon page and have a look through what I’m up to. And I’ll see my future patrons in the updates!


Hybrid Chat Monstrosity

This project probably won’t take shape for quite some time, but I wanted to get down the basic ideas for it now, while I’m thinking about them. And I figured, why not here, where others can chime in? I’m certain there’s a plethora of things I’m overlooking that will make this trickier to actually implement than I currently believe, but I also think it’ll be worth having in the end, so I’d like to at least try to resolve them. Also, maybe something like this already exists, and I just haven’t been able to find it. Anyway, here goes.

While spending some time on IRC (something I was getting back into after a few years of being away from it), a visitor started talking about features they missed from newer protocols. The conversation that followed was … less than productive, we’ll say … but once the dust settled, it left me wondering how I’d go about bringing those features to the community server the conversation took place on at the time. Various bridges either already exist, or are simple to construct, to allow using (for example) XMPP/Jingle clients to talk to IRC servers, or IRC clients to talk to XMPP/Jingle servers. But those bridges leave out numerous features of the protocols they bridge (often resorting to lowest common denominator), and require an extra piece of software be running, along with all the other drawbacks of proxy server interfaces. Even ZNC, the leading attempt to extend IRC’s feature set to include more modern capabilities, is itself a proxy (the specialized sort known as a bouncer). What I really wanted to see was a single server that spoke both protocols.

Here, too, there are a couple of options. Each revolves around a plugin of some sort or another that basically glues the secondary protocol onto the internal workings of the server itself, which is only actually designed to handle one of them. This works, in many cases, but the overall architectural differences between protocols, and between the servers designed to support them, make this approach just as hacky and incomplete as separate gateway/bouncer/proxy servers. Things will be left out, and users connecting through the plugin will have a limited experience compared to connecting to a network that speaks their protocol natively. So that further refined my goal – I wanted a server that spoke both protocols natively, and handled things internally in a way that allowed both to provide the full spectrum of their feature sets, without compromise. (And if I could emulate the features present in one but missing in the other in a clean way, so much the better.)

Not having found any such server, I set out to gain a solid enough understanding of the protocols to properly design such a project. I’m currently in the early stages of this process, so I fully expect a number of my design decisions to need changing as I learn more, but I feel I have a good enough grasp to get started. Feel free to comment if you have any blanks to fill, or corrections to make!

Base protocol support:

  • IRC
  • XMPP/Jingle

Potential expanded protocol support:

  • BitMessage
  • Ring (SIP + DHT)
  • Slack (and/or Mattermost)
  • Discord (this is a big maybe; the API is awkward)
  • Probably others

Basic architecture:

  • Users
    • Store identifier components individually so they can be reassembled into whatever pattern a given protocol expects.
    • Since all protocols besides IRC require registration of user credentials prior to being able to access the network, IRC services will be built in, allowing direct access to the common user data set. Channels can be set to only permit registered users access, but for maximum flexibility, any info required by other protocols that isn’t available from an unregistered IRC user will be filled with placeholder info indicating the user is unregistered.
  • Chatting
    • All messages sent to the server are translated into a protocol-agnostic format. Each protocol handler is responsible for translating between that format and its own protocol.
    • Content messages are stored for later retrieval by backlog requests.
    • Event messages (join/part/login/quit/etc) are also stored in the backlog.
    • Control messages are not stored, but rather handled directly by whichever subsystem they reference.
  • Group Chats
    • Group conversations, no matter their name in a given protocol, will be exposed via all protocols.
    • Private chats will generally be hidden from listings, and users without invitations (in whatever form those take in a given protocol) will be denied access if they attempt to join anyway. Protocols which don’t support joining unlisted chats will have them listed, but with indicators of privacy attached.
    • The name of a given group chat must be unique across the server. Though some protocols allow non-unique naming, others – IRC in particular – do not.
  • Direct Chats
    • Some protocols support a p2p chat style, bypassing the server entirely. If both users are connected via the same protocol, the system doesn’t have to do anything, but obviously, attempting to contact a user who isn’t connected using the same protocol in this fashion simply won’t work. In these cases, the server will need to act as a middle man, so there are a number of caveats to how it works in this mode:
      1. No messages are saved to the backlog.
      2. The server signals both sides to activate message encryption, so that even if a message leaked somehow, it wouldn’t be readable by any but the two clients.
      3. This means translation of certain message artifacts won’t be possible, so users would see these artifacts directly (mostly formatting codes, in this case).
      4. Most control messages would be disabled until p2p mode was ended, as the server shouldn’t be able to hear them anyway.
  • Advanced Features
    • Features not directly supported by all protocols should be emulated in those which don’t.
    • For certain features, such as voice and video chat, this amounts to a simple rejection of the request to start using said feature, though most protocols also support indications of when such features are not enabled by the other user.
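The identifier-components idea in the notes above can be sketched concretely. The field names here are hypothetical (nothing about this design is settled), but the pattern is: store the pieces once, and let each protocol handler assemble the form it expects:

```python
from dataclasses import dataclass

@dataclass
class UserID:
    # Identifier components stored individually, per the design notes above.
    nick: str      # short/display name (IRC nick)
    username: str  # account name (IRC ident, XMPP localpart)
    host: str      # originating server or domain

def to_irc(u: UserID) -> str:
    # IRC expects the nick!user@host "hostmask" form.
    return f"{u.nick}!{u.username}@{u.host}"

def to_xmpp(u: UserID) -> str:
    # XMPP expects a bare JID, localpart@domain (resource omitted here).
    return f"{u.username}@{u.host}"

u = UserID(nick="dan", username="danh", host="example.net")
print(to_irc(u))   # → dan!danh@example.net
print(to_xmpp(u))  # → danh@example.net
```

The protocol-agnostic message format would follow the same shape: one canonical structure internally, with each handler owning the translation to and from its own wire format.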

Obviously, as stated earlier, this is in the earliest stages of development, so there are many of the finer points I’m either missing entirely, or simply misunderstanding. But I still feel this can be done, and should. What are your thoughts?


Social Media and Heisenberg

Many of you are familiar with Heisenberg’s Uncertainty Principle.  It basically states that, on a quantum level, the more accurately we measure one quantity, the less accurately we can measure others.  The most common measures cited are location and velocity (oh, and velocity is both speed and direction, by the way).  The most famous visualization of this principle, and some of its consequences (specifically the role of the observer in all this Uncertainty mess), is Schroedinger’s Cat.  Never intended to be run as an actual experiment, it puts an imaginary cat in a box with food and water (to keep it alive), plus a vial of instant-acting poison which will be released at a random, unpredictable time (Schroedinger mentions a radioactive-decay-based trigger mechanism, but really any random trigger will work for purposes of the visualization).  You know the location of the cat with absolute certainty (it’s in the box), but without opening the box you can’t know whether it’s alive or dead.  Additionally, if it’s still alive when you open the box, you no longer know where it is, because it’ll take off at hyperspeed and hide in a pocket dimension for a while, as cats do.  Not exact, but useful for visualization.

The Uncertainty Principle only applies to quantum behaviors, but it can be used as a starting point to describe other behaviors of other, non-quantum, things.  In this case, I’ll use it as an analogy for different forms of security: physical safety, access control, privacy, and convenience.  Many of you already understand this, but I wanted to address it anyway to add my own perspective to the conversation.

Security is a really difficult goal to achieve.  The most secure computer in the world is the one that is never even built, with the second being the one that is never plugged in, even to power.  The most secure vault is one with no door, the most secure password is one never stored anywhere, even in the memory of its creator.

All of these things are effectively useless, though.  So we compromise slightly on the security in exchange for convenience.  We build, then plug our computers in, so we can turn them on and actually use them.  We build doors into our vaults so we can put things into them, and take them out later.  We create passwords that we can actually remember, or store them someplace where they can easily be retrieved.  Each of these compromises requires a lot of extra work to bring the security back up anywhere near what it was before the compromise, but too far and we lose all the convenience as well.  We can have absolute security or absolute convenience, but not 100% of both simultaneously.

This is equally true online.  Social media has made staying in touch with friends and family much more convenient – just post updates about events in your life once, and everyone gets it automatically.  Much faster and easier than that yearly update “newsletter” your aunt sends to everyone in the family, and it can be much more detailed and interactive, too.  This is where privacy comes in.  Privacy is a form of security for your life choices and experiences. Since those status updates are stored on someone else’s servers, you’ve lost most of the privacy your aunt’s letters have – only your family even gets copies of them – in exchange for the convenience.

But at that point, convenience is a form of security for your ability to actually do the things you’d like to do in your life.  The lower the convenience of an activity, the more difficult it is to actually do that activity, and the less likely you are to successfully complete it.  Eventually, it becomes so inconvenient it isn’t even worth the attempt.

Picturing convenience as a form of security might be a bit difficult, so how about a scenario.  Let’s say you’re standing watch over a facility of some kind.  It doesn’t really matter what kind of facility – it could be a shopping mall or a military weapons depot – but whatever facility you’re guarding, someone wants inside to cause damage (rob the mall, blow up the weapons to prevent their use, etc.).  When you detect this person attempting to access the facility, and they don’t respond to verbal force (“Stop!”, “Stay back!”, and similar are generally very effective, as most assailants are trying to avoid detection, not kill everyone, and this is the required initial level of force when responding to threats), your responsibility is to step up the levels of force until they do respond.

Most of these levels require no special equipment, but eventually you get to hard controls (blunt weapons intended to disable the assailant and reduce their desire to cause harm).  If you, as the watch-stander, don’t happen to have any hard-control equipment on your person, your options are limited.  You could go get some from an armory – a secure location to keep such things when not actively in use – but in the time it would take to do so, the assailant would likely already be inside.  So you trade the security of keeping the equipment in the armory for the security of having it on hand when needed – that is, you check it out at the beginning of your shift, before you relieve the previous watch-stander, and then check it back in at the end of your shift, when the next watch-stander relieves you.  The same principle applies to weapons at the deadly-force level, which are strictly prohibited outside a combat zone unless the other levels of force have been unsuccessful.

It’s easy to see, in this scenario, how convenience is its own form of security.  But we can apply that to our other examples from before.  The computers we’ve built and plugged in can give us access to information we need to do (and thereby keep) our jobs.  The vaults we’ve added doors to allow us a way to place valuables beyond reach of unauthorized persons.  And the passwords we’ve stored for later reference (assuming we’ve stored them securely, of course) allow us to ensure we still have access to our own data.  This approach can be applied to all kinds of convenience to see where an increase provides additional security.  The big question is always what form of security we care most about.  Ranking various forms of security from most to least important will help us make good choices about which tools are best for which tasks.

So physical safety – the doorless vault, the unbuilt or unplugged computer, the person standing watch – is one form of security, and among the most obvious.  Access control – the combination on the vault door, the password on the computer, the watch stander’s request for ID – is another, also fairly obvious.  Privacy – being the only one with the password to data which is only available through the password’s use, a closed door with no surveillance tech inside, the watch stander only allowing certain people through at any given time – is another, albeit a tiny bit less obvious than the other two.  Convenience – the computer being built and plugged in, the vault having a door, knowing the password, the watch stander having the required response tools on their person while on duty – is the least obvious, but like privacy, no less important than the others.  At least, not in general.  But how to balance them?

Well, that comes back to which tool is best for a given task.  Each scenario has different requirements for which form of security is most important, which is second-most, and so forth.  That ranking will be different for each scenario, even if it does end up being very similar.  Which brings me back to social media.

Physical security, in this case, becomes about the data centers where your social activities are stored.  Access control is generally via username and password combinations (the username tells the system who you are, while the password helps ensure you actually are the person associated with that username), though many platforms have added additional layers to their access control, generally in the form of a semi-random code that changes frequently.  Both of these are considered the highest priorities, in no small part because they are among the simplest to implement, though neither is perfect in any case.  Privacy and convenience, however, muddy the waters a bit.  Platforms can prevent others from seeing your data, but then you lose the convenience of being able to say something once and have the whole world – or at least, the portion of it you care about – be able to see it.  They give you control over this part of the process by letting you arbitrarily group others, then control which groups see what information.
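Those extra layers, the semi-random codes that change frequently, are almost always TOTP one-time passwords (RFC 6238).  The platform and your authenticator app share a secret at enrollment, then each independently derives the same short-lived code from the current time.  A minimal sketch:

```python
import hmac
import struct
from hashlib import sha1

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC the count of 30-second steps since the Unix epoch.
    counter = at // step
    mac = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    # RFC 4226 "dynamic truncation": the low nibble of the last byte
    # picks which 4 bytes of the MAC become the numeric code.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59s yields 94287082 (8 digits).
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
# In practice both sides call it with the current time:
#   totp(shared_secret, int(time.time()))
```

Since the code is derived rather than transmitted in advance, a stolen password alone isn’t enough to get in, which is exactly the extra access-control layer being described.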

But there’s still the issue of your data being stored on their systems.  How do you address that?  One option is to trust that the platform’s owners and operators will not use the data you’ve supplied for anything beyond making sure your intended audience can see it.  Of course, that rarely happens, mostly because it’s hard to make enough money to keep your servers running that way.  So for many people, trust isn’t an option.  What then?  Well, you can choose not to use the platform itself at all.  That satisfies the privacy concern, but sacrifices convenience.  So maybe you set up your own server(s) to provide a similar platform.  Nothing wrong with that – you control the server, so you know the data won’t be used for anything nefarious.  But you still haven’t recovered your convenience, because your platform doesn’t have all of the same users as the platform you just left.  So now you have to break the problem up differently.  What information are you comfortable sharing with the entire world?  What information will you need to present carefully in order to get the most of the convenience with minimal impact on privacy?  What info is so sensitive that the convenience isn’t more important than the privacy?  Then, you can start to use both systems – the public platform you don’t quite trust, and the private platform you trust implicitly – to their fullest potential.

But much like Heisenberg’s observation that knowing everything about a given quantum particle within a given instant is impossible, getting 100% of all types of security at once is beyond our grasp.  Like scientists observing quantum interactions have to prioritize which properties of any given particle they’re interested in most, we have to prioritize our activities online by what is most important to gain from them.  Often, we don’t need to completely abandon any given platform, so much as temper our interactions thereon for what we expect the platform to do with the data we’re generating.


Choose Your Own Adventure: Computing Platforms!

The Great War

There’s a war on.  It’s been fought for decades, and there’s little hope for an end to the war any time in the foreseeable future.  Just as with any other war, it’s tallied up a cost beyond the average human mind’s ability to actually visualize.  Its weapons aren’t as easily recognized as being lethal, but there have been many casualties over the long years.  On the surface, there are only two contenders, but the reality, as always, is much more nuanced.  You’ve heard of this war, even if you know nothing about it.  It is the computing platforms war.

The exact nature of the conflict is intentionally obscured by all parties, because spin is the only way they can win or lose their battles.  Each competitor has arguments for why their platform is better than anyone else’s, and these arguments aren’t usually false, but they are frequently misleading.  The truth of the matter is that each platform is great at some things, and completely incapable of others, while being decent at everything else.  Which platform is actually “the best” depends on what you intend to use it for, and how.

What follows is not an exhaustive guide, but can be used as a starting point in making your own decisions about which platforms to use in which scenarios.  I currently plan to expand it as time permits, but feel free to provide your own thoughts in the comments.  Of course, let’s keep this civil.  I haven’t had reason to remove any comments yet in other posts on this blog, but I reserve the right to do so if necessary.

Right, all that out of the way, here’s a quick overview of what this war currently looks like.

The major players:

  • Desktop/Server Arena
    • Windows (Microsoft)
    • Mac (Apple)
    • Linux – which is actually several less-major players:
      • Ubuntu (Canonical)
      • Debian (Debian Project)
      • RHEL / CentOS (Red Hat / CentOS Project)
      • Chrome OS (Google)
  • Mobile Arena
    • iOS (Apple)
    • Android (Google)
    • Windows Phone (Microsoft)
    • Blackberry (RIM)

There is a bit of overlap between the arenas, as Android is technically a specific Linux “flavor”, and many mobile Windows devices actually run full versions of Windows, rather than Windows Phone, but overall, these are the main camps, roughly in order of market share in each arena.  Ranking is subject to change, of course, and may already be different than the numbers I used when listing them here.

The problem we face, as computer/device users, is that there are so many choices, and each of them is poised against the others in a battle for survival.  There are many smaller players on the field, and many others who have fallen for one reason or another in the past.  But let’s see what we can figure out about the players listed above, and try to determine what the relative strengths and weaknesses are of each.

Desktop/Server Platforms


Microsoft’s focus has long been businesses, and their systems are designed and built around that.  They easily support a wide array of business tasks, and do what they can to make developing new software as easy as possible, though often at the cost of speed and simplicity of the architecture.  The complex ways in which one piece of code relies on sometimes hundreds of others makes the task of keeping each piece of software running properly a bit tricky, especially when one piece of software uses the same pieces of shared code as several other pieces of software, but all of them use different versions of that shared code.

Still, the business-friendly approach has made Windows PCs fairly ubiquitous in the business world, which improves the market share at home as well – because you’re more likely to use what you’re already familiar with.  That means the system has also developed great support for gaming, to give users further reason to have a computer at home in the first place (though the Internet did this far more effectively when it finally came along).

While the day-to-day functionality of the system isn’t terribly optimized for speed, the gaming functionality is – brutally so.  More games are released for the Windows PC platform than any other, even mobile ones.  Granted, there are cross-platform games that support Windows as well as Mac and/or Linux, and there is a preponderance of web-based games, which have the browser as their platform, but Windows is still the gaming king when it comes to target platforms.  It even outperforms consoles, which I’m choosing not to cover here, mostly for space.  In short, if business and/or gaming are chief among your desired uses, Windows is probably a safe bet.

In the server realm, though, things start to look a little different.  Microsoft has improved greatly in the server market in recent years, as they’ve started adopting open technologies instead of simply creating their own from scratch.  They can integrate very tightly with Windows desktops, and even mobile Windows devices, giving them a bit of an edge in business environments where lots of systems are managed by a central team.  But if you’re looking to use them for much of anything else, Windows servers just can’t keep up with many of their competitors in the ability to do lots of things at once.  Also, Microsoft’s pricing has never been great for small budgets.  I’d recommend a couple of these for central management of other Windows systems in your company (and I only say a couple because you want some redundancy to prevent terrible things from happening if one of them goes down), but otherwise, there just isn’t enough bang for buck here on the server side.


Apple has been making computers since before anyone else figured out what the future of computing would actually look like. While they rarely venture into untested waters any longer (the iPod being their latest example of such a venture), they still emphasize ease of use throughout their systems. They control the hardware as well as the OS, so they can ensure everything fits together neatly and tightly, meaning things are (almost always) more stable than they might be otherwise.

Their attention to detail over the years has made them an ideal environment for multimedia tasks, so this is where most of the polish has gone. And it doesn’t matter, much, which type of media you’re working in – audio, video, photography, and illustration are all tasks Macs excel at, and not just because Adobe develops their Creative Suite for Mac first. Rather the opposite, in fact – Adobe focuses on the Mac first because new features can more easily be built and expected to operate reliably across myriad systems. So if you’re working with multimedia, your best bet is a Mac system.

In the server realm, the differences between Mac and Linux offerings start to blur into each other. Mac OS has been built on a UNIX core (Darwin, which draws much of its userland from FreeBSD) since moving to version 10 (a.k.a. Mac OS X), meaning it has a large number of similarities to Linux, since the Linux kernel was basically a reimplementation of UNIX in the beginning. Things have evolved in different directions in some areas, but the fundamentals are still the same, meaning most software will run on either platform with little to no tweaking. Ultimately, this means Mac and Linux servers are essentially identical in ability and performance, so price will likely be your deciding factor, here.


For years, Linux was used only by trained technical personnel, mostly because the user interfaces were so difficult to learn and use. Part of the reason for this was the sheer number of interface options – several groups of people each working on their own interfaces, with only minimal collaboration between teams, despite (nearly) all of them being open source, and thus fair game for adopting code from each other. While this has the benefit of giving many more options for how to interact with your computer, so you can select whichever will work best for you and your own habits, it also slows development somewhat.

This has been changing more and more rapidly in the last decade or so, and current interfaces are quite modern. Linux systems are quite capable, performing most tasks at nearly the same level as the other platforms, even if they don’t quite match up. But they have spent decades being used by programmers and other tech experts for various purposes, and their strengths undoubtedly lie there. They also carry the lowest price tag: the OS itself is free.

As a server system, Linux generally blows past the competition. The BSD network stack is a bit better, so BSD-based routing and switching equipment tends to fare better, but for most other server tasks, Linux leads the way, especially in virtualization. And they’re really your only option if you want to use 100% open software.

Mobile Platforms


Apple’s mobile device OS is essentially a minimalist version of OS X, with some extras for mobile-only functionality that the desktop and server versions don’t need/have. Not as optimized for multimedia work as the full system (mostly due to more limited space and power on smaller devices), it still handles such tasks well. The focus here is more on ease of use, and stability, than much else. Apps for iOS devices have a more rigorous approval process to be accepted to the App Store than the other platforms have, because Apple has more exacting standards for what they’ll endorse installing on their devices.

This stability comes at the cost of adoption lag. It takes Apple devices longer to incorporate new technologies than their competition. So if you’re looking for that nifty heart-rate monitor, or (until recently) near-field communication to, say, pay at the register by tapping your phone on the payment terminal, you have to wait longer to get it than you would elsewhere. Still, the extra wait is generally worth it – though it’s almost always a good idea, with any of the platform war players, to wait a bit after new releases to let the initial bugs get ironed out before buying.


Easily the most flexible option here, Android devices allow phone makers to charge less for the software on their phones, lowering the overall price, and letting phone makers make up the difference with more, better, or just cooler hardware. Google doesn’t really have much to say about how the devices themselves are built, focusing instead on what they’re good at – the software. The Linux core means a lot of the hard work is already done for them.

Android apps are almost too easy to add to the Play Store (formerly the Android Marketplace), leading to tens or hundreds of apps to do the same thing, often in essentially the same way. This level of selection is similar to the spectrum of software options for Windows desktop systems – there are so many, finding the one that works exactly the way you want it to is a matter of simply trying a few out. It also has the same problems – finding apps of high enough quality, and which you trust not to be doing nefarious things behind your back, is pretty difficult in most cases.

Windows Phone

Microsoft has had a simplified version of Windows designed for portable devices for longer than anyone else listed here.  They entered the market around the same time Palm was producing their first PDAs, so they have a great deal of experience in mobile systems technology.  When the handful of manufacturers using Windows on their mobile devices decided to add cell phone tech to their PDAs (rebranding them as “smart phones”, since more people wanted phones than wanted digital assistants), the Windows Phone OS was an easy tweak to the existing system.  Relatively speaking, of course – adding any feature to an operating system is a complex and time-consuming task.

Boasting the greatest integration with Windows desktops and servers, as well as (in many cases) being able to run a lot of the same software (I wouldn’t load up World of Warcraft or Photoshop on a mobile device), these are an easy choice in a number of setups. However, low market share has limited the number of people writing apps for this mobile contender, so the selection isn’t as good as it is elsewhere, especially on the devices which can’t run the same software as the desktop version. Though Microsoft has actually done something pretty brilliant about that, by adding support for running mobile apps to their desktop systems. This strategy may not be enough to shift market share their way, but it is a leg up.


Research In Motion entered the mobile device arena around the time Microsoft and Palm devices started becoming phones. Their goal was to provide smart phones for enterprises, letting the other two companies provide for the consumer market. For years, they were the enterprise option of choice, taking a very similar approach to the one used by Apple – they made the hardware as well as the software, and everything was designed to be stable and easy to use. The ease with which enterprises could manage their mobile devices from a central location, in much the same way they’d already been managing desktop systems for years, made them the obvious choice.

RIM’s focus assured their position for a long time, but as their competitors caught on to the things they were doing in the enterprise space, and started providing the same options for their own products, they started losing ground fast. Today, most Blackberries are used by companies that have been using RIM for a while, and either can’t afford to switch (either due to raw costs or personnel costs) or are still under contract. The things that once set them apart no longer do. Also, as an enterprise-focused option, their app selection is very limited, though the apps available tend to be just as solid as the system itself.

Quick Reference

So in short, there are a few deciding factors that will make one choice or another better than the others for any given purpose, but general use is served equally well by all platforms. If you are doing one of the activities listed below, your choice is pretty easy, but otherwise, any of these systems will serve you well.

Business/Gaming – Windows
Multimedia – Mac OS
Programming – Linux

Integration/Management – Windows
Anything Else – Linux/Mac OS

Solid/Stable/Easy – iOS
Flexible/Cutting-edge – Android
Integrated – Windows Phone
Legacy – Blackberry

The secret that none of the contenders in the war will tell you is that there is a “right tool for the job”, and that you can have more than one tool in your toolbox. As soon as I can afford one, I’ll be adding a Mac to my own tools – Linux and Windows have been in my toolbox for years now. Don’t let yourself become a casualty in someone else’s war. Choose your own adventure, and the right platform for your needs.


Another Thrilling Episode of Trust Nothing Day

I strongly dislike April Fools’ Day.  I mean, sure, some of the pranks are amusing, but most are just vicious, and trying to do/say anything with any seriousness to it requires a lengthy disclaimer that you aren’t joking, this is legit.  And some have abused the day’s “celebrations” so much that even that can’t be trusted all the time, leaving everyone else in a state of skepticism so severe it becomes cynicism.  And I don’t like being a cynic.

I suppose all I really want is for everyone who chooses to participate to pick one joke for the entire day, make the effort to ensure it isn’t simple trolling, and deliver it well – then get on with the day like any other.  Let’s be tasteful, here.  Like we would with anything.

I guess I’ll just have to keep wishing…


Software Release Day!


It’s not a terribly common scenario.  Mostly, the dataset is pre-defined and static, and your designer can weave everything together into a thing of beauty and elegance, with absolute control over placement and flow.  Usually, that interface is then solidified, changing only after the design is discarded for something better, and then only in small ways.  But sometimes you have no idea what the dataset will look like from one day to the next, and it’s likely to change and shift regularly.  This is the scenario I found myself in with two completely unrelated projects, recently.

With one, the UI is designed to manage and maintain a complex gateway application, which itself relays incoming requests to myriad third-party services and locations, then presents the results in a unified format.  The settings for each of these third-party data sources include auth data, the exact composition of which is different for every one.  Anywhere from one to four values (in my experience so far) must be presented for a request to go through successfully, and hard-coding these defeats part of the design – we must support multiple sets of credentials for any given data source.  The answer here is a dynamically-generated form, defined by the same parts of the code that allow access to each particular third-party data source.

The other is much less complex.  It consists of what is essentially a survey.  Of course, things are complicated by the fact that the questions will change over time.  This could be handled by changing the underlying code every time – but the client is a non-profit, so the more they can do without having to pay for my time doing it, the better.  So again, the answer is a dynamic form.

But it gets trickier.  The form definition needs to be simple and take up as little space as possible, but it also needs to retain human-readability.  Perhaps the best candidate for this is JSON (JavaScript Object Notation), which most programming languages – not just JavaScript itself – can work with fairly easily.  So where’s the trickiness?  Well, in both cases the backend is forbidden from generating the form itself – for numerous other reasons, both are constrained to speaking JSON.
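As a rough illustration of the kind of definition I have in mind (the field names and structure here are hypothetical, not the library’s actual schema), a JSON form description might look like this – and any backend constrained to speaking JSON can emit it:

```javascript
// A hypothetical JSON form definition: an ordered list of fields,
// each with a type, a label, and any constraints the UI should enforce.
const formDefinition = JSON.parse(`[
  { "name": "username", "type": "text",     "label": "User Name", "required": true },
  { "name": "apiKey",   "type": "password", "label": "API Key",   "required": true },
  { "name": "region",   "type": "select",   "label": "Region",
    "options": ["us-east", "us-west", "eu"] },
  { "name": "sandbox",  "type": "checkbox", "label": "Sandbox mode", "required": false }
]`);

// The front-end can derive whatever it needs from the definition,
// e.g. the list of required field names for validation.
const required = formDefinition
  .filter((field) => field.required)
  .map((field) => field.name);

console.log(required); // ["username", "apiKey"]
```

The same shape serves both projects: the gateway would emit one definition per third-party data source, and the survey backend one per questionnaire revision.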

That means the front-end becomes responsible for the actual construction of the form.  Which in turn means DOM-manipulation.  OK, there are a number of ways to do that, so no huge deal, but there didn’t seem to already be a library to do it automatically – I’d have to implement it directly in the application both times.  And then I’d have to maintain both.  And what if I encountered yet another project that needed such functionality?  I resolved to build a library that could do what I needed, then simply include that library in both applications.
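Stripped of framework specifics, the core of such a library reduces to a single transformation: definition in, form out.  A minimal sketch of that idea (hypothetical code, not the released library – and real DOM manipulation would use document.createElement rather than strings, which I use here only to keep the example browser-free):

```javascript
// Walk a parsed JSON definition and build the corresponding form markup.
function renderForm(definition) {
  const fields = definition.map((field) => {
    if (field.type === "select") {
      const options = (field.options || [])
        .map((opt) => `<option value="${opt}">${opt}</option>`)
        .join("");
      return `<label>${field.label}<select name="${field.name}">${options}</select></label>`;
    }
    // text, password, checkbox, etc. all collapse to an <input> here
    return `<label>${field.label}<input type="${field.type}" name="${field.name}"${field.required ? " required" : ""}></label>`;
  });
  return `<form>${fields.join("")}</form>`;
}

const html = renderForm([
  { name: "username", type: "text", label: "User Name", required: true },
  { name: "region", type: "select", label: "Region", options: ["us-east", "eu"] },
]);
```

When the backend publishes a new definition, the front-end simply re-runs this transformation – no application code changes, which is the whole point for the non-profit client.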

Both applications are built with AngularJS, which has native support for DOM-manipulating libraries in what it calls directives.  Essentially, directives are a way to extend HTML by defining (and then handling) new elements and attributes.  For the validation-paranoid, attributes can be prefixed with x- or data-, which keeps most validators happy.  You can also specify such extended functionality with classes, which sometimes makes more sense if what you’re building is a presentational extension – or your validator is too ancient to allow x- or data- prefixed attributes.
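In markup, a directive-driven form then boils down to a single custom element or attribute (the names below are invented for illustration, not the library’s published API), with the data- prefix keeping strict validators content:

```html
<!-- Hypothetical usage: the directive reads the definition from scope
     and builds the form in place.  Both spellings are equivalent to
     AngularJS; the data- prefix just satisfies strict validators. -->
<dynamic-form definition="authFields"></dynamic-form>
<div data-dynamic-form definition="surveyFields"></div>
```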

So the natural result of all this is that I built the dynamic forms library as an AngularJS directive.  It is hosted on both GitHub and BitBucket, because GitHub is awesome for getting projects seen and worked on, and BitBucket is what we use at work, so it kind of made sense to put it there, too.  Both repositories have existed for several weeks, but I just now reached the point where the project is releasable, though probably only at a mature alpha or early beta level.  Which is why the release version is tagged as v0.0.0.  I don’t anticipate a large amount of involvement, because this is such an uncommon use case, but it’s good to mention the release so the project’s visibility goes up (even if only for the search engines…).

Either way, let me know what you think!



I don’t know if anyone will read this within a day, a week, or even a year of its posting, but I suspect that’s not the point.  It doesn’t matter if it ever gets read.  What matters is that I write it.

Those who know me personally probably already know that I am taking medication for migraines.  Most of those also know that the medication I was prescribed is an anti-depressant, this specific variation of which is sometimes used for its side effect of reducing the frequency and severity of chronic pain.  A few will be aware that this might be a good thing for me beyond handling the headaches, since I’ve shown several of the symptoms of depression off and on for many years.

Well, now I’m out, and can’t afford to get any more.  The medication itself isn’t particularly pricey, but the office visit to renew the refills is.

Now, before anyone pulls out the wallet and heads to PayPal or whatever, I’m not looking for help with this.  I just want to discuss a few things with myself, as it were.  Get my head back on straight.

See, the pills were beginning to lose effectiveness on the headaches, but they were actually doing wonders (apparently) on my depression-like symptoms.  (I’m trying to avoid self-diagnosis so as to not marginalize those who actually for-certain have depression.  Unless/until someone with the training to know says I have something, it’s only symptoms.)

At least, since the meds started effectively leaving my system, my mood has been less predictable and more negative.

I’ll spare anyone reading this the full details of my symptoms, but it does bring up a few other items I feel are important enough to put out there, even though they are of little importance otherwise.  These are facets of my life that I start to fixate on whenever my mood turns this direction, and if I can get the discussion on them out of my own head, it may do me some good.

First, my belief system.  It’s both very simple and very complicated, all at once.  The simple part: I believe that the sheer force of belief itself shapes reality.  It certainly shapes our actions, at least, and I don’t know many rational people who would argue that our actions have no effect on reality.  Then there is the effect of shared beliefs on how objective reality is perceived, and thus explained and explored.  Whether this cascade effect continues to the level espoused by spiritualism and religion is ultimately beside the point: those beliefs shape the actions of those who hold them, which in turn shapes the world we share with them.  That in itself is often enough.

The complexity, if that isn’t obvious already, comes from how those myriad beliefs interact, and how to determine which beliefs are most true at any given moment.  I tend toward treating them all as equally effective, since I don’t have sufficient data to know for sure in any case.

Second is a personal understanding of my own nature which, frankly, can only be interpreted as insanity given current knowledge and understanding of various scientific principles.  Probably schizophrenia, or one of its relatives.  I feel strongly about its truth, but my certainty doesn’t help in my attempt to defend myself.  I’ll leave this one at that.

Last is a facet of myself that isn’t widely known (mostly because it doesn’t really matter in the vast majority of situations), but which shapes my own thoughts and actions, sometimes in ways that make others uncomfortable.  This bit will probably make many people even more uncomfortable around me than they already normally are, but I think it’s beyond time I say it.

I am a practicing bisexual.

What does that mean?  It means that I love my wife, and we are as intimate as our bodies will allow.  But it also means that I am attracted to men just as much, and enjoy such intimacy with them as well.  This is not a surprise to my wife, who is wonderful beyond what I could possibly deserve.  We discussed the matter long ago, and decided that the main issue with extramarital intimacy wasn’t the sex itself, but rather the damage of trust.  So long as neither of us tries to hide a sexual relationship from the other, if it happens, it happens.

Now, this doesn’t mean we’re out sleeping around.  I won’t speak on her sex life (aside from the one she has with me) because that’s her story to tell or not as she pleases.  For my part, however, I don’t tend to find myself in situations where I could take advantage of this arrangement anyway.  That said, I have had a few exhausting nights with a member of my own sex.  And I enjoyed every minute just as much as I do the ones I share with my wife.  So, no, I’m not just “bicurious”.  I know I like playing both sides of the field.

That was as non-graphic as I could make it while still being clear about that bit of myself.  I suspect some – if not most – of the people reading this (assuming, again, that anyone will) will place as much distance between me and themselves as they can manage.  Hell, it’s even still legal for me to lose my job(s) over it.  (Well, not so much legal as not illegal, but in practice, they’re about the same…)  I accept that as something I cannot change.  But I feel the need to have it said is far more important than maintaining friendships or employment where this aspect of me justifies such reactions.  Indeed, if this is enough to end these relationships I’ve built even while being this person I’ve now admitted to being, then those relationships probably weren’t worth the time to cultivate in the first place.

I hope, though, that the relationships I have with others are strengthened by this knowledge, if they are affected at all.  That would be the best scenario for everyone, I believe.  It would certainly do a lot for my faith in people in general.

We will see, I am sure.

With all that written down, I am indeed feeling more sure of myself, as hoped, and could probably store this away someplace where it would never be read by anyone other than myself.  That wouldn’t be particularly honest of me, though.  Not after what I’ve written here.  So here you go, world.  I accept whatever damage this will do to my career(s), my friendships, and even my family, as I take full responsibility for it.

And who knows.  Maybe my fears are misplaced.


The Good Doctor

I am ashamed of myself.  I have waited until just now, over twenty-five and a half years into my entire life, more than a third of the time I can expect to live, to start watching episodes of Classic Doctor Who.  My repentance is late in coming, perhaps, but thorough nonetheless.

I started my journey as any clueless wanderer ought – by asking Wikipedia to list off the episodes in correct order.  The explanatory material shocked me.  Episodes gone missing?  However could that have happened?  A policy to destroy old episodes of shows?  How barbaric.  I was appalled, of course.  A series popular enough to run for 26 seasons – yes, twenty-six of them – and the people responsible for its very creation and existence had a policy to destroy the older installments?  I knew there had to be a reason, but what reason could they possibly have had which would make any logical sense?  How could the destruction of such great material be justified?

As it turns out, though film and even broadcasting technologies weren’t exactly brand new, they were still, as the world transitioned from black-and-white to color, operating under the same constructs as the stage.  If you wanted to broadcast a story, the players would perform it for you, allow you to record it for that broadcast, and expect you to rehire them for subsequent rebroadcasts.  The ability of film to reduce the workload of everyone involved wasn’t entirely overlooked, however; time- and number-limited broadcasting licenses were usually attached to each piece, so it could be rebroadcast up to the set number of times within the set amount of time.  This time period was generally fairly short, amounting to only a couple of years.

When these licenses expired, the film copy was no longer of any use to the purchaser, since they no longer had the rights to use it, so these were destroyed to make space for other, frequently newer films.  If the originals were kept on tape instead of film – and many were – these tapes were erased and reused for other projects.  This had the added effect of reducing overall costs, as the amount of storage space required was kept low, and what space there was remained free of old projects which could no longer see a profit.

The idea that broadcast television material might serve a cultural purpose rather than simply a financial one eventually caught hold enough that preserving these older recordings became the policy, even when the rebroadcasting rights had expired.  There was, it had been determined, a cultural duty to preserve them.  From then on, the hunt for destroyed episodes was on – not just for Doctor Who, but for every series that had met with this unfortunate end.  Many such episodes had been sent overseas when broadcasting rights to them had been purchased there, though only copies were sent out; never originals.  Over the next several years, continuing to the present, most of the missing episodes returned, and Doctor Who is (among) the most completely recovered of such series.

Doctor Who is also peculiar in that it is the only series of that era for which every single episode has survived in at least an audio form – thanks mostly to viewers who didn’t have VCRs (this is before VHS/Betamax had their now-legendary war), and so had to accept merely recording the audio component during various broadcasts.  These audio versions are of course in varying states of quality and repair, but every episode’s audio still exists today, regardless of whether the video exists alongside it.

That bit of background absorbed, I then learned that each episode was generally considered merely part of a larger story, a “serial”.  Essentially, Classic Doctor Who is a collection of mini-series tied together only by the common character of the Doctor (though many other characters can be considered recurring at various points throughout).  I find this format to be fascinating, as it presents some interesting opportunities for storytelling.  Still, this format choice meant that a single missing episode would effectively ruin several adjacent ones as well, at least to the point where a video version of the surrounding episodes would probably not be released until the missing one(s) were restored.

Armed with my list, I set to Netflix to watch them all in order.  And discovered that the streaming service, at least, offered only a small handful of the full 155 serials originally broadcast, and didn’t carry the 1996 TV movie which aired roughly 6.5 years after the last serial, and nearly 9 before the introduction of the Ninth Doctor in the presently-airing series.  Still, the theme song had now been running through my head incessantly for at least a week by this point, so I dove into the earliest of these I could find – Doctor Who: The Aztecs, the sixth serial, which can be found in Season 1.  I then proceeded chronologically by broadcast date through the paltry selection until arriving at Season 16, which is composed of six serials, themselves tied more closely together in a single arc called The Key To Time (if you find a DVD by that title, you have the entire 16th season of Classic Doctor Who in your hands).  And discovered that four of the six stories were actually available, including one written by none other than Douglas Adams, of Hitchhiker fame.  Called The Pirate Planet, this is the serial which I have just finished.  Downright amazing, and perhaps surprisingly coherent by Adams’s standards.  Many of the concepts Adams brought to Doctor Who, especially if they never actually made it to the screen, were later reused in his published works.

The true tragedy, though, is that none of the serials available on Netflix have anything to do with the Daleks at all, despite the fact that Daleks are perhaps the true icons of the series – after the Tardis, of course.  Still, the fact that any of these classic episodes are available to begin with is satisfying, so I can’t complain too loudly for too long.

Have you met the good Doctor yet?  Have you braved the Classic series, or stayed safely in the confines of the modern version?  So long as you expect material from the 1960s through the late 1980s, I suspect you’ll enjoy the Classic episodes just as thoroughly – and gain a greater insight into what’s really going on here.  But you don’t have to take my word for it.
