Commit SHAs as dates

I’ve been going through a pile of old bitquabit posts. While many of them hold up over time, the more technical ones frequently don’t: even when I was lucky and happened to get every technical detail right, and every technical recommendation I threw out held up over time (hint: this basically never happens), they were written for a time that, usually, has passed. Best practices for Mercurial in 2008 are very much not best practices now. But it’s a bit tricky: whether something I wrote is genuinely out-of-date has less to do with how much raw time has passed than with how much churn has happened in the project since then.

To that end, I was happy to see that some of the blogs I follow have started using Git commit SHAs to date their posts, alongside the calendrical date—serving as a kind of vector clock for the passionate. If you’re writing technical posts for an open-source project, this seems ideal to me: casual observers can go with the calendrical date, while people deeply involved in that arena or project can instead key off what has happened since the commit in question.

I’m not going to retrofit all my old posts, but it’s something I’ll keep in mind going forward.

Automating Hugo Deployments with Bitbucket Pipelines

As I mentioned in a recent post, I manage my blog using a static site generator. While this is great to a point—static site generators can handle effectively infinite traffic, they’re stupidly cheap to run, and I can use whatever editor I feel like—the downside is that I lose tons of features I used to have with dynamic blog engines. For example, while it’s almost true that I can use any editor I want, I don’t have a web-hosted editor like I would in WordPress or MovableType, and I likewise can’t trivially add any sort of dynamic content. Most of what I lose I can live without, but one that is genuinely annoying, and which has even bitten me in the past, is that I can’t publish without being on a computer that has both my SSH keys, and the publishing toolchain installed. Not only is that inconvenient; it means that publishing output can vary depending on which machine I use for a given publishing run.1

There’s a pretty easy fix for that: add continuous deployment. If it’s good enough for real software, it’s good enough for a personal blog. I can set up a single, consistent deployment environment on some server, drive all the deploys through that, and call it a day. The problems here being that a) setting up a continuous integration server is annoying, and b) I am lazy. There are cloud-hosted CI servers, but most of them either are overly complex, or are too expensive for me to justify using for my personal blog.

Enter Bitbucket. I’m already using them, since they’re by far and away the best Mercurial hosting game in town these days, and they recently2 added a new feature called Bitbucket Pipelines that fits all my requirements: cloud-hosted, free, easy-to-use, cheap, and it didn’t cost anything.3

And I’m glad I looked, because getting everything running turned out to be stupidly easy.

Step one: write the Dockerfile

Bitbucket Pipelines wants to base your deployment on a Docker image, so I had to write one. Thankfully, it’s so easy to make Docker images these days that pretty much everyone is making them—even when there is no conceivable reason why they should. So let’s set one up.

To deploy my blog, I need at least four things: Hugo, Pygments, rsync, and SSH. It took me a couple tries to get the Dockerfile just right (mostly because I straight-up forgot rsync and SSH on the first go), but the result is literally five lines, total:

FROM alpine:3.6

RUN apk add --no-cache bash git go libc-dev python py2-pip rsync openssh-client
RUN pip install pygments
RUN go get -u github.com/gohugoio/hugo

About the only thing remotely interesting here is that I’m using Alpine Linux, which I selected because it seemed to be what the cool kids were using these days, and because it was one of the smallest base Docker images I could find. I’m not honestly sure if bash is needed (I suspect /bin/sh would’ve been just fine), but I originally wrote my deployment script for bash, and I’m too lazy to figure out if I used any bashisms, so let’s just toss that in there anyway. What’s a paltry 34 MB between friends?

Tons of places host Docker images for free these days, and Bitbucket can use any of them; I kept it simple and pushed it to my Docker Hub account.4
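
For completeness, getting the image up there is just the standard build-and-push dance; a quick sketch, assuming you’re already logged in with docker login (the tag is the one the pipeline below will reference—substitute your own repository name):

$ docker build -t bpollack/blag-builder:latest .
$ docker push bpollack/blag-builder:latest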

Step two: write the build script

I actually already had a build script,5 so all I really had to do was tweak it slightly to be run on something other than my personal machine. The result’s genuinely not interesting, but for completeness, the functional part of it looks like this:


#!/bin/bash

# Normal boilerplate (see e.g. any of the usual bash "strict mode" guides)
set -euo pipefail

# Add $GOPATH to the path so Hugo will be present
export PATH=$(go env GOPATH)/bin:$PATH

# Build the site, then sync it to the web server
# (the destination below is a placeholder; point it at your own server and path)
hugo --cleanDestinationDir
rsync -av --delete public/ deploy@yourserver.example.com:/path/to/site/

Again, nothing interesting here. We’re at exactly ten lines, and even that only because I added some comments and some blank lines for readability. I called this file build and stored it unceremoniously in the root of my blog repository.

Step three: test it…if you feel like it

Since we’re going to deploy files to a real server in an automated fashion, the next step is to test everything.

Or not. It’s your server; I’m not gonna tell you what to do.

Myself, I decided to half-ass it a bit. Pipelines just launches your Docker image, copies your project into the container, sets your project to be the current directory, and begins running your script. I can do that:

$ docker run -it --volume=C:/Users/b/src/blag:/blag --entrypoint=/bin/bash bpollack/blag-builder:latest
$ cd /blag
$ ./build

The first line says to run a Docker container we built interactively (-i) on my terminal (-t), mount the Windows directory C:\Users\b\src\blag at /blag in the container, and then launch bash once the container is ready. In the next two lines, I demonstrate my amazing CS skills to change to the appropriate directory and run the script, proving that, even in this advanced day and age, I can still play the part of a computer.

This of course failed at the push step due to SSH keys not being set up (more on that in a second!), but otherwise seemed to work fine, so it’s good enough for me. Onwards!

Step four: create the pipeline

The pipeline spec is really simple: you give it a Docker image (which we just made), a condition of when to run (I’ll just have it run whenever there’s a new changeset, which is the default), and what steps to run when the condition is met (in our case, we need to run one single step, which is the build script we just wrote). So that file, in its entirety, is:

image: bpollack/blag-builder:latest

pipelines:
  default:
    - step:
        script:
          - ./build

Granted: being YAML, this looks like the result of an editor with broken indentation rules. But it’s at least pretty self-explanatory: we give it a Docker image (it defaults to using Docker Hub, which is great, because so did we), we give it one pipeline, called default, and give it the sole job of running a one-line script that calls our real build script, which we wrote together in the previous step after much struggle. Commit this as a file called bitbucket-pipelines.yml in the root of your repository and push.
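
For the record, since this is a Mercurial repository, that last bit looks something like the following (any client you like works just as well):

$ hg add bitbucket-pipelines.yml
$ hg commit -m "Add Bitbucket Pipelines config"
$ hg push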

Step five: add relevant SSH keys

Congratulations! If you did everything perfectly at this point, Bitbucket will create your pipeline, run the build, and it will fail!…because you don’t allow random people to push stuff to your server over SSH.6 Fair enough. For reasons I’m not honestly entirely clear about, Bitbucket won’t let you specify SSH keys to use for Pipelines until at least one pipeline exists. But now that we’ve got a pipeline—it’s the one that just failed—you’re good.

In your repository, click on the Settings tab, and then, under the Pipelines heading, there’s an entry called SSH Keys. Still with me? Good. These are SSH keys that will be loaded into your Docker container right before your script runs, and which will be used to push code to your server. I recommend following their advice, generating a key with them, and then adding that key to the ~/.ssh/authorized_keys file in the appropriate user account. You’ll also need to tell it what servers you’ll be using these keys with so that Bitbucket will detect if your server gets swapped out and can avoid deploying your precious secrets to some nefarious machine.
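
If it helps, the server-side half of that is the usual authorized_keys routine; here’s a minimal sketch, assuming a dedicated deploy user and that you’ve copied Bitbucket’s generated public key over as bitbucket-pipelines.pub (both names are placeholders):

# Run these on the web server, as the dedicated deploy user
$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ cat bitbucket-pipelines.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys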

(Incidentally, I recommend using those Bitbucket keys only with a heavily locked-down account that’s dedicated purely to handling the deploy, but how to do that is a bit outside the scope of this particular post.)

Step six: you were actually done at step five

That’s it; we’re done. You do need to either re-run the pipeline manually at this point or push a dummy changeset to make sure, but everything should honestly Just Work™.

That’s honestly it; a hair over twenty lines of code got you free continuous delivery. You can get more fancy at this point if you’d like (I’m probably going to make sure the pipeline runs only when certain bookmarks are moved, rather than on every push, for example), but that’s the fundamentals. Three short files, each ten lines or less.

  1. I briefly had what I guess could qualify as an outage when I accidentally ran a deploy on a machine that didn’t have Pygments installed—which promptly deleted every single code snippet on the site. Oops. [return]
  2. Relatively speaking; the feature went into beta in March 2016. [return]
  3. It’s not free-free, but you get 50 minutes of build time with the free account, and building my blog with Pipelines takes about 16 to 25 seconds, so I figure I’ll be fine for awhile. [return]
  4. I won’t stop you from using this image, but I really discourage you from doing so; I make zero guarantees I won’t do horrible things to it in the future. [return]
  5. Two, actually—one for Windows and one for Unix—but since the Windows Subsystem for Linux has stabilized, all the Windows one does is call the Unix one. [return]
  6. I sincerely hope. [return]

The Paradox of Apple Watch

When the Apple Watch first came out, my initial reaction was basically disgust. Everywhere I looked, I saw people already Krazy Glued to their phones, missing the world around them to live instead in the small mini-Matrix in their pocket. Now, Apple was proposing to add additional distractions right on our wrist, making it even easier to ignore real life and stay focused on a screen instead. Not only was the Apple Watch not for me; it was a sad commentary on how tech was ruining our lives.

Yet I kept seeing more and more friends of mine falling victim to the Apple Watch. They insisted it was actually great, that I was the crazy one, that it was the next revolution in tech, that they loved how it kept them in touch with everyone even more easily, etc., etc., etc. I’ve heard this song before, and while I doubted I’d agree, it became equally obvious that the Apple Watch wasn’t going anywhere. In the interest of making sure I could stay not just with it, but also hip, I bought one a few weeks ago. I figured I’d play with it for a couple weeks and return it, getting a nice blog post out of it about how I was right and the Apple Watch made my life worse.

But what I’ve found instead is something else: properly used, at least for me, the Apple Watch isn’t yet another distraction. Instead, it can allow me to stay informed, without constantly pulling me out of the moment. It’s actually freed me to leave my desk much more easily, without succumbing to staring at my phone instead. In other words, it’s had the exact opposite effect I anticipated.

The Problem with iPhone Notifications

Here’s my basic problem: I’m a manager. I have twelve direct reports spread across four disparate projects, plus I also provide management support to our Infrastructure project—you know, the one project at Khan Academy where even we have alerting and chatbots and whatnot to let you know when things have exploded. This means I have meetings constantly, and I’m pinged on Slack constantly, and I get an obscene volume of email. And each and every one of these constantly wants your attention, by default sending tons of notifications basically all over the place. Phone, computer, tablet, cyborg sitting next to you muttering about killing all humans, everywhere.

Some of these distractions I can easily disable while still doing my job. For example, since emails rarely require an immediate response, I turned off mail notifications completely, and only bother checking messages every hour or so. That’s socially acceptable, and keeps me available while also letting me get work done. I likewise killed notifications from tools like Trello, OneNote, Asana, and anything else that almost certainly could wait for a regularly scheduled check-in.

But Slack and meetings are trickier: while many Slack notifications can genuinely wait, many can’t, so I do need to actually read the notifications and make a decision on whether to respond. (I actually just ranted about this in detail if you’re bored.) My meetings likewise frequently shift radically during the day, so the fact I had been clear at 11 doesn’t mean I still am, nor does the fact I originally had an interview at 2 mean I still do.

I thus fell into this pattern where I’d get a buzz from Slack, take my phone out, read the alert, realize I had a pile of unread messages in some room or other, read through those, get distracted paging in context for the conversation, remember to recheck my calendar for any meeting changes, put the phone away, forget what I was doing, and then repeat. My spouse grumpily noticed that even on date nights, even when I was trying to stay in the moment and wasn’t honestly thinking about work, even when my phone was in Do Not Disturb mode and couldn’t have buzzed, I’d still sometimes mechanically take my phone out, look at the screen, and put it right back—just because I was so used to doing that motion during the day that it had become a habitual reflex.1

In this environment, adding the Watch seemed like a bad idea. I’d already cut down my notifications as far as I could; putting them on my wrist seemed like it’d make an existing problem even worse.

So I was quite surprised when exactly the opposite happened.

Enter Apple Watch

Here’s the thing: the Watch can’t actually do all that much—at least not in the way a smartphone can. It ultimately really does three things very well, and everything else very poorly:

  1. It’s a great way to track my jogs. That’s not why I bought it, but it turns out it’s great at it, and I use this feature a lot.
  2. It is indeed very good at giving you notifications, usually along with a small handful of possible response actions, if applicable.
  3. It is also quite good at taking certain kinds of very quick voice commands—basically the same subset Siri already handles well on the iPhone.

That’s it. Doing anything other than these is generally somewhere between painful and a genuine farce. Yeah, Todoist and other task lists exist on the Watch, but they fit maybe two to three things on the screen at once; you’d have to be a masochist to enjoy it. There’s a similar story with note-taking apps, like OneNote: yes, the app exists, and it honestly does the best it can with voice entry, but that gets old really quickly. Tools like Maps and Yelp are so limited that I’m forced to wonder why anyone bothered in the first place. And trying to read something long-form like an email on the Watch…I mean, yes, you technically can, but you’d have to be really desperate. Indeed, any use that requires reading or generating a substantial amount of information is either impossible or so difficult that I avoid it at all costs.

And…that weirdly turns out to be perfect. Fine; I can’t avoid real-time Slack and calendar notifications and do my job effectively, so they’re just going to be part of my life for now. But when I get them on the Watch, I glance down, make a snap decision on whether it requires me to do anything, and then either go back to doing what I was doing immediately (the overwhelming majority of the time), or, if the notification does require an immediate response, I walk back to my actual computer to handle it appropriately. In mere days, my habit of pulling my iPhone out of my pocket basically evaporated. Not only that; because I already try very hard to separate my work and personal devices, and because I was now responding to anything long-form on my work PC rather than my phone, I basically obliterated all of my media grazing habits overnight.2

The actual impact has been obvious to me: my work velocity increased, my iPhone battery lasts disturbingly longer, and I find myself much better able to focus whether we’re talking 1:1s with coworkers, or personal time with friends and family. Plus, I can now actually take a nice midday walk without having to stop every two minutes to check my phone. It’s honestly been an incredible win.

Mindful(ish) Notifications

I’ve been making a very deliberate effort for the last six months to pursue what I’ve been calling mindful computing—basically, trying to use technologies and develop habits that discourage distractions and that encourage and reward getting onto a computing device to do some specific action, and then putting the device away when you’re done.

I cannot quite say that the Apple Watch fits cleanly into this rubric. Indeed, as I noted, notifications are both one of the things it does best, and the explicit reason I ended up keeping it—and I don’t know that anyone would argue that seeking out a distraction-making device is a good example of mindful computing as I defined it.

But I do think that, properly used, the Apple Watch can be mindful-ish. If you are in a situation where you genuinely cannot fully avoid having some form of distracting notifications and still be effective, the Watch, specifically due to its incredibly limited abilities, can actually be an amazing compromise.

It’s one of the few recent technology purchases where I can say with a straight face that it meaningfully improved my quality of life. And while it didn’t do so in a fundamental way, and it may not be for everyone, I am surprisingly happy that I ended up ignoring my initial judgment and taking the plunge.

  1. There’s a valid question here of why these are on my phone this way in the first place; after all, if I’m at my PC, I could put the notifications there. And in truth, when I am sitting at my desk, I usually put my phone into Do Not Disturb mode for this exact reason. But one of the nice things about being remote is I can frequently attend meetings while taking a walk, or read through some emails or documents in the nearby park—but if I do that, then I do in fact need all these notifications on my phone in case I need to switch up my plans/head back to the house/get back to my laptop. [return]
  2. The unexpectedly positive impact of suddenly not reading reddit, Twitter, and the like anymore is a great topic for another day. [return]

Why I Hate Slack and You Should Too

Yeah, that’s right: there’s finally something I feel so negatively about that I’m unsatisfied hating it all by myself; I want you to hate it, too. So let’s talk about why Slack is destroying your life, piece by piece, and why you should get rid of it immediately before its trail of destruction widens any further—in other words, while you still have time to stop the deluge of mindless addiction that it’s already staple-gunned to your life.

1. It encourages use for both time-sensitive and time-insensitive communication

A Long Thyme Agoe, in the Days Before Slack, I had three different ways of being contacted, and they served three very different purposes, with radically different interrupt priorities. I had emails, which could wait; I had phone calls, which couldn’t; and I had the company IRC server, which was usually where I went to waste time by sharing links to things that either made me get very angry or made me laugh hysterically.1 In this system, the important, time-sensitive thing can interrupt me, and everything else can’t. That’s great for productivity and great for my sanity, and the people were happy and things were good.

Slack totally just trashed everything. It’s email and phone calls and cat pictures, all rolled into one. So sometimes Slack notifications are totally not time-sensitive (@here Hey I need coloring books for my niece, any suggestions? also she’s afraid of animals clowns food people and dinosaurs and also allergic to paper kthxbye!), and sometimes they require an immediate action (@here Dr. Poison just showed up and tl;dr maybe run for it idk?)—and until I’ve read the message, I have absolutely no idea whether it deserves my immediate attention. That order’s backwards and it makes me feel bad because it is bad.

This is actually a whole thing in psychology: if you give a mouse food every time they push a lever, they’ll eventually only push it when they’re hungry, but if you only give them food sometimes when they push a lever, then the “reward uncertainty” will actually cause them to push the lever more often.2 And hey! Here we are, all checking Slack 23,598 times a minute for each notification, because who knows, maybe this one matters. It’s all the pain of Vegas with none of the reward and somehow we’re still hooked.

So unlike before, now I get interrupted constantly, and I have to break my flow to figure out whether getting interrupted was worthwhile, and for some reason this is supposed to enhance business productivity.

Right. Sure. You go on being you, Slack.

2. It cannot be sanely ignored

“Okay, pea-brain,” you mutter, “so just turn off Slack notifications when you need to focus for awhile, and catch up later.”

I once thought as you did, but part of the reason you end up addicted to Slack is that catching up on what you’ve missed feels very similar to when you were back in college and were a day before the final and suddenly realized that your plan of not highlighting the book or taking notes all semester may’ve been a Bad Idea™. About the only way Slack bothers grouping information is by room3—and as anyone who’s been trapped in a heavily-used Slack system can tell you, the room names and descriptions are at best weak guidelines, so you can’t even necessarily prioritize what to catch up on even at that gross level of granularity.4 Nope: your only option is going to be to read the entire backlog, from start to finish, or else just accept that, at some distant point three months from now, you’re going to look like a complete idiot when you’re the only one who didn’t know that all employee blood was now going to be collected for occult purposes.5

Granted, this isn’t Slack’s fault per se, at least insofar as every chat system has this problem, but Slack’s attempt to become your One True Source of Everything, from scheduling to reminders to SharePoint replacement to company directory, means that a huge amount of information that previously would’ve been in emails ends up in Slack, and only in Slack. And that’s a very deliberate decision by Slack to make themselves utterly indispensable, so I feel very comfy screaming at them until I go hoarse.

3. It cannot be sanely organized

Okay fine, so you read through the whole backlog from your vacation, which took you barely even 70 hours, and have extracted the six actual to-do items from it, one of which involves something about pentagrams and goats that you’ll decipher later. Great. Mazel tov. Phase one complete.

Now what? Slack has no meaningful way to organize those six messages. There aren’t folders. There isn’t a meaningful “do later” pile. (There’s /remind, to be fair, but, as noted previously, that just generates more notifications, which we’re trying to avoid. Theoretically.) So you’re left with…what, exactly? Right-clicking on each individual message at the end of the chain, copying the link, and pasting that into some external to-do app? Which, of course, when you click back on the link, will require you to re-read at least some amount of unstructured backlog, including a bunch of unrelated garbage about reconfiguring CARP on the edge servers and something about epoll and multithreading and a panda birth video that just happens to be there, just to remind yourself what everyone said?

Welcome to hell. Population: all Slack users.

4. It’s proprietary and encourages lock-in

In an ideal world, I could circumvent a lot of these issues in any number of ways. For example, I’m still active in open-source sometimes, and the open-source equivalent of Slack is (usually) still IRC. But IRC, being a well-documented6 older system, has tons of different tools to extract data from it. If I want to be nerdy, I can yank individual messages from ERC straight into org mode, or write custom scripts for WeeChat, or use any of literally dozens of clients written in Ruby and Python and Io and Java and C# and thousands of other programming languages plus also JavaScript and do really bespoke things. And even if I don’t, the plethora of macOS and Windows clients means that an off-the-shelf or trivially customizable AppleScript or WSH solution is never far away.

But Slack is Slack, and Slack is Electron, and Electron is Chrome—Chrome surrounded by an unscriptable posterior that eats up 100 MB of RAM per channel, plus an extra 250 MB for each Giphy.7 And while I can almost script my way out of this hell, I really can’t. Not as a mortal end-user, anyway. To the extent I can do anything, I need to write directly against the Slack API, rather than using something commonplace like XMPP or IRC, so goodbye portability. And even if I’m willing and able to write against the proprietary API, a lot of the more interesting things you can do require being an organization admin, and require being enabled globally for the entire instance. So goodbye, personalized custom integration points, and hello, one-size-fits-zero webhooks. This is my life now.

5. Its version of Markdown is just broken

I’m going to use up an entire heading purely to say that making *foo* be bold and _foo_ be italic is covered in Leviticus 64:128 and explicitly punishable by stoning until death.

6. It encourages use for both business and personal applications

All this would be merely infuriating and drive me into a blind murderous rage if it were just something I dealt with at work, but oh no, now the fun groups I interact with are turning to Slack! That’s right: the same application and environment that makes a full-blown Dementor-style kiss with my attention span for work can now corner me in a back-alley when I just want to shoot the breeze with friends.

I glance at the Slack icon. I have nine unread messages. Neat. Are they from work? I should probably actually go read those and see which ones require I do something. Are they all the ex-employees of that one company I used to work for? It’s probably a bunch of political screaming about stochastically sentient Cheetos that somehow won the presidency, and I’m honestly a bit tired of reading about that at this point.8 But at any rate, I can’t know until I take my phone out and read the notification—and sometimes even then I can’t, since of course some of the people I talk to are on multiple Slack instances and have a habit of saying things like “@bmp did you look at this it’s really concerning?” which requires I actually load up the freaking client and find the instance and the message and finally learn to my utter horror that I shall never be given up, let down, or run around/deserted.

Give up and yield unto Cthulhu Slack, destroyer of focus

Stop using Slack. I hate it; you also should hate it. It’s distracting. It murders productivity. It destroys old tools. It exploits psychological needs in such a way that it kills your soul and hangs it up to dry over a lava pit, where the clothesline catches fire and your soul falls into the fire and somehow you’re not dead, just a zombie, forever, reading zombie notifications on your zombie iPhone and wondering whether “@here brains?” is a lunch invite or an insult until you read the backlog. Friends do not let friends use Slack. I have been utterly convincing and you should listen to me in my capacity as low-grade Internet celebrity and do what I say because mindlessly obeying authority is the right thing to do.

But realistically? We’re all still using Slack, because it’s there, and we have to, and it’s the best option according to our collective judgment, which I do have to point out may empirically be lacking at this point. So if we are stuck in Slack, then maybe, just maybe, we could start trying to restore Slack to a place where it’s genuinely for ephemeral ideas. Where it’s indeed the place for ad hoc conversations, but not a canonical store for their conclusions and action items. Where I don’t have to read the backlog when I come back from vacation, because anything actionable will at worst have been duplicated as an email or a Trello card or what have you. Where I can disable Slack notifications because I can know, with certainty, that any activity can wait until I’m back at my computer and actually want to spend time chatting on Slack.

In the meantime I’ll be right back because either the data center just exploded or someone posted a picture of a goat fainting and The Notification God must be placated.

  1. This function is now provided by reddit. [return]
  2. Aziz Ansari, Modern Romance (New York: Penguin Books, 2015), 59. Yeah, I could’ve given you a scientific paper, but this book is way less boring and made me stupidly happy I’m not in the dating pool anymore. [return]
  3. Slack honestly is trying to address this with threads, but the problem, which anyone who tried using a system like Wave or Zulip or something similar could tell you, is that the origami crane of organizing information neatly by topic runs basically head-on to the rabid bull of real-time chat and then everything falls apart, so these don’t actually get used effectively in practice. Hell, whether a conversation uses a thread or not in Slack in the first place—and whether a threaded conversation stays that way in Slack (thanks, “Also send to #channel” checkbox! may the fleas of a thousand camels infest your armpits!)—seems sufficiently random that I’d be comfy using it as the main entropy source for a digital slot machine. [return]
  4. They’re trying really hard to address this recently with concepts such as “All Unreads” and (very recently) “Important Messages,” but while these certainly make catching up go faster, they don’t actually resolve the issue unless you really trust how Slack’s deciding what’s important. Based on my experience, we’re very much not there yet. [return]
  5. Didn’t you see it? It was in #kitten-pics. You were @here-messaged, so that’s on you. Now roll up your sleeve and welcome in the Lord of Darkness, His Holiness Spirit Agnew. [return]
  6. I mean…as far as that goes, anyway. [return]
  7. I genuinely have no idea if this scales by channels, but since I’m in ten channels and wasting 1.2 GB, I’d honestly prefer to assume it’s by channel, rather than the alternative that Slack needs a gig of RAM just to run. Which it probably does. But let’s assume. [return]
  8. Not because they’re wrong, mind. I just can only handle so much ranting about a human/toupée hybrid before I start to zone out. [return]

JSON Feed with Hugo

Every couple of years months [checks wristwatch] weeks, we reinvent a file format for no particularly good reason. Don’t get me wrong; we come up with all kinds of reasons to justify what we’re doing—easier to read, better for the environment, It’s Got Electrolytes™—and sometimes, the new format does genuinely represent a meaningful or necessary improvement. But more often than not, we’re just reinventing things out of boredom and a nagging sense, deep down, that if we don’t keep changing everything constantly, normal people may grok that most of the reason programming is complicated and weird is because we put a lot of effort into making it that way.

So I was pretty psyched when JSON Feed came on the scene a couple weeks ago, because it’s pretty much the absolute rawest possible example of a file format that’s unrepentantly change for the sake of change. Literally every language I interact with has perfectly good tools, right in the standard library, for generating and consuming RSS and Atom. Until a few weeks ago, none had any tools for working with JSON Feed whatsoever because it didn’t even exist. But since, and I quote from the JSON Feed manifesto, “developers will often go out of their way to avoid XML,”1 JSON Feed is now a thing, and we’ve already entered the phase where every language I use has a pile of third-party libraries for the format, most of which will be unsupported going forward, and all of which have interesting quirks and bugs that no one fully understands yet. I thus figured it was high time to support JSON Feed on bitquabit.

There was unfortunately a caveat. Some time ago, I moved my blog over to Hugo, a static site generator, so that I wouldn’t have to spend time maintaining my own blog software. In general, that’s been brilliant, but whereas it’d have taken me about five minutes to add JSON Feed to my old blog, I had no idea how to add it to a Hugo site. The highest-ranked link on Google is just vague enough to make me think I should get it but not be able to, and I can say in retrospect that Hugo’s documentation on alternate output formats makes a ton of sense after you already know what’s going on—but not before.

So without further ado, here’s how you add JSON Feed to a Hugo site:

Add some magic to config.toml

We want to tell Hugo that there’s a thing called JSON Feed, which is a JSON file, and we want to assign it a file extension. That’s easy enough. In your config.toml, just slam the following lines at the end:

[outputFormats.jsonfeed]
  mediaType = "application/json"
  baseName = "feed"
  isPlainText = true

mediaType is the file’s MIME type, baseName is just the name of the file template before the extension2, and isPlainText tells Hugo that it shouldn’t do any HTML-related shenanigans. Whatever you slap after the . in outputFormats at the beginning, combined with the media type, defines the expected file extension, so everything we just wrote applies to files that end with .jsonfeed.json. Putting everything together, we’ve now told Hugo that feed.jsonfeed.json files are JSON Feed templates. So far, so good.

Next up, we tell it that we would like it to generate a JSON Feed if one exists. If you already have a section in your config.toml labeled [outputs] (you don’t by default), you’ll need to alter it, but otherwise you can just add this at the end:

  home = ["html", "jsonfeed", "rss"]

All that says is, “hey, when you’re generating my home page, in addition to HTML and RSS (which are defaults), also generate this "jsonfeed" thing,” which (conveniently) we just defined.

Add a template for the JSON Feed

We told Hugo that our JSON Feed templates would end in jsonfeed.json and that the base name would be feed, so go create a file called feed.jsonfeed.json in the root of your layouts/ directory and put this in it:

  "version": "",
  "title": "{{ .Site.Title }}",
  "home_page_url": {{ .Permalink | jsonify }},
  "feed_url": {{ with .OutputFormats.Get "jsonfeed" -}}
    {{- .Permalink | jsonify -}}
  {{- end }},
  "items": [
    {{ range $index, $entry := first 15 .Data.Pages }}
    {{- if $index }}, {{ end }}
      "id": {{ .Permalink | jsonify }},
      "url": {{ .Permalink | jsonify }},
      "title": {{ .Title | jsonify }},
      "date_published": {{ .Date.Format "2006-01-02T15:04:05Z07:00" | jsonify }},
      "content_html": {{ .Content | jsonify }}
    {{- end }}

Most of that’s boring if you’ve seen the JSON Feed format description, but a couple of things to point out:

  1. We’re programmatically grabbing the JSON Feed permalink, rather than hard-coding it. If you have multiple feeds on your site (e.g., one per category), that’ll help things work out.
  2. The {{ range $index, $entry := ... }} silliness is the only way in Go templates to handle fence posts. In this case, because JSON does not allow trailing commas, we need to prevent having an extra comma at the end, and the easiest way to do that is to inject a comma before every entry except the first. Caching the $index lets us easily do that (and taking advantage of 0 being falsy in Go templates makes the conditional short, too).
  3. Finally, the hyphens on some of the {{ ... }} injections delete preceding (if the hyphen is directly after the opening braces) and trailing (if it’s directly before the closing braces) whitespace, which mostly isn’t programmatically necessary here, but keeps the JSON looking clean. (There’s a quick way to sanity-check the result just below.)
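
Since a stray comma or a bit of mangled whitespace will quietly produce a feed that isn’t valid JSON, it’s worth a quick check after a local build. A minimal sketch, assuming Python is installed and that (given the baseName above) the generated feed lands at public/feed.json:

$ hugo
$ python -m json.tool public/feed.json > /dev/null && echo "feed.json parses cleanly"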

The last step is to tell the world about your new feed. On your main index page, just add

  href="{{ with .OutputFormats.Get "jsonfeed"  }}{{ .Permalink }}{{ end }}"
  rel="alternate" type="application/json" title="{{ .Site.Title }}" />

There shouldn’t be anything surprising there. We’re reusing the {{ with .OutputFormats.Get ... }} trick from earlier to avoid hard-coding the feed URL, and the rest is straightforward templating.

So there you have it: that’s all it takes to add JSON Feed to your Hugo blog. I look forward to the next entry, in which we can explore how to add YAML Feed, EDN Feed, and maybe some custom Microsoft-specific extensions to both of those as well.

  1. No one tell them what HTML is. I really do not want to see JHTML. At least, not more so than I already have it with React. [return]
  2. "index" would’ve been another fine choice, and in line with other Hugo templates; I just found "feed" clearer. [return]

Working remotely, coworking spaces, and mental health

This should be a hard blog post to write–after all, it’s the one where I openly admit I had an emotional breakdown and saw a mental health professional–but it’s actually easy. And it’s easy because it has a good ending: facing long odds and a frustrating situation, I ended up turning everything around and getting to a place where I love my job and I’m a happy person again.

But this is not one of those times where the journey was the fun part. No, I’d really have preferred to skip the journey entirely.

So this is the post I wish I’d read myself back when I decided to work remotely. If you don’t want to read the whole thing, I can even summarize it for you, right here: different people like different kinds of work environments; “working remotely” doesn’t have to mean “working from home”; and if you’re going to work remotely, you should find the work environment that’s the right fit for you.

I demand infinite cake

A bit over a year ago, I moved out of New York City. It’d been great for a decade, and I had tons of friends, but I hit a point where it was draining the life force out of me. Simple pleasures, like going for a hike or joining friends for a potluck dinner, ended up these huge logistics nightmares that took so much effort they stopped being enjoyable. Knowing you theoretically could see 209 different Broadway shows stops being exciting if simply bringing a turkey to your friends four miles away can trivially turn into a three-hour hell of cramped subways, traffic jams, or The Hunt for the Mythical Available Taxi. Meanwhile, as my spouse and I thought more and more of having kids, the reality of just how much raising a child in NYC costs was something we felt we couldn’t ignore anymore.

This all posed a bit of a problem: I happen to enjoy making money; I do this best by working in tech; and the two hottest markets for that are New York City, which I wanted to leave, and the Bay Area, which is arguably worse. Where we wanted to move was the Raleigh/Durham/Chapel Hill area of North Carolina, known as the Research Triangle, but while this area has tons of tech jobs, it doesn’t have some of the companies where I most wanted to work.

A decade ago, I would’ve had to pick which I cared more about. We’d either have stayed and dealt with NYC, or we’d have moved here anyway, and I’d have taken a job at one of the many great startups around here. But this was 2015, and there was a great way to get everything I wanted without compromising on anything: I could leave management, go back to being a developer, and join the hordes of programmers who worked remotely. I had quite a few friends who had taken remote dev jobs and were having a blast, and it’d let me move wherever I wanted and still work for whomever I wanted. So, almost before I knew it, I left my job working on-site as a manager in NYC, and began working remotely as a developer for Khan Academy, a company I’d wanted to work at for literally years.

Which is how I ended up having an emotional breakdown this past February.

The sun, the moon, and the stars

That’s not how it was supposed to work.1 Working remotely is supposed to be the best thing since sliced bread.2 If you listen to people like Jason Fried in Remote, it’s basically a cure-all for everything remotely wrong in modern office culture. Modern offices are noisy and chaotic; your home office will be serene and peaceful. Modern offices are plagued with interruptions; your home office allows you to ignore the outside world and focus narrowly on code. Your commute need no longer bookend your day, your coworkers’ illnesses need no longer presage your own, and you can even trivially work outside in an idyllic park surrounded by birds, nature, and psychotic hungry face-eating squirrels if the fancy strikes. Beyond these material benefits, a remote-friendly office has to change its work process in at least one key way that gives you a massive ancillary benefit: it must adopt asynchronous communication as the law of the land, which in turn means fewer meetings and a much easier time scheduling activities. Want to see your kids at a play? Want to ditch the play but go to a theme park at the same time slot? Want to check in with Nana at 2pm because it’s about time you played at least someone in Overwatch who’s at your skill level? Just work at a remote-friendly company, and all this can be yours.

So when I began my remote life, I had nothing but the highest expectations. And, to be honest, they were largely initially met. I not only got my serene and quiet office; it got some great new features, like the ability to make meals with long cooking times, or to customize my office exactly how I wanted it, even if that meant blasting 90s punk rock out my speakers at high volume. The flexible schedule was also indeed nice, and the mixture of that, plus the relative paucity of meetings, did initially drive up my developer productivity higher than it had been for years. It really did seem to be living up to the hype.

But then the cracks started to appear.

The dark side of remote

One of the first warning signs was that my “off days” began to get more common. Look: all developers have off days. I’ve even talked to developers from the ’60s and ’70s who used to get them, and they didn’t get excuses like “a white nationalist egg avatar tweeted ‘ideas’ at me” to blame it on. But I was used to having, at most, a couple of off days a month. Suddenly, I was getting one to two per week. My colleagues didn’t seem to notice, but I sure as hell did, and I had no idea what to do about it.

Then I gradually stopped taking advantage of the benefits remote work was supposed to provide. At the beginning, I’d routinely go for walks or hit the gym midday, I’d echo an on-site Khan Academy tradition and make fresh loaves of bread, I’d take breaks to meditate. I’d even occasionally work from nearby parks, face-eating squirrels be damned.3 But, gradually, that stopped, eventually hitting a point where, more often than not, I would go multiple days of barely leaving my apartment. I created an off-color dent in the rug in front of my computer because I was moving so little. It was seriously that bad.4

Partly causing this, and partly a result of it, my already limited new social sphere began to shrink. I quit going to meetups. I stopped attending workshops. It became entirely possible for me to have no meaningful in-person interaction with anyone other than my spouse for days at a time.

Things finally came to a head one February evening when I had an emotional breakdown. I wrapped up a completely normal and uneventful day at work, did my fifteen-foot commute from my office to my living room, and promptly found myself vomiting from stress, saying how much I hated–truly hated–my job, and crying as I realized how unhappy I was with my new life.

About those workplace interruptions

I calmed down, took a sick day, scheduled a therapist, and began trying to figure out what the hell was going on. I used to love being a developer, and working remotely was supposed to be the bee’s knees for that. Yet here I was, miserable. Many close friends of mine talked about how much their lives had improved since they began working from home, yet mine was falling apart. Clearly, the problem was me.

“Ah ha!”, I hear you say, because you have an Amazon Echo in your home. “This is the part where you say, ‘But lo, it was not me!‘”

Wrong. It really was me. The trick is to recognize that it wasn’t something wrong with me. It was just that I hadn’t been honest about who I was, and so I’d set myself up in a situation that was really, really caustic for my mental health.

A lot of people tend to regard “introvert” and “extrovert” as binary options, but the reality is that it’s actually a spectrum. Some people do lean heavily towards one end or the other, but many people have at least some aspects of both. For example, I personally recharge by spending time by myself, and I genuinely need “me time” to be a happy person, both of which are traditionally thought of as introvert tendencies. But I’ve also always been very social. That’s in fact why, while I do really enjoy software development, I’ve always been drawn a lot to the interpersonal side of the software process, like being a project lead or a manager: those roles still leverage a lot of my left-brain analytical muscles, but they also provide lots of social opportunities. This is also probably why I enjoy working in open offices: yeah, they’re honestly pretty awful to work in when I really need to be heads-down trying to fix a bug, but they also encourage a very collaborative environment that I’ve always loved when I’m in the early stages of a project.

Working from home might genuinely be the ideal environment for those closest to the introvert end of the spectrum, and I think those are the people who form angelic choirs of blog posts asking if you have met their lord and savior, the Fortress of Infinite Solitude, Home Office Edition. For them, the quiet work environment makes their jobs dramatically more enjoyable. But for me, it was the opposite: I’d gone from management (high social interaction) to software development (lower social interaction), and from working in an office (hundreds of people) to working from home (two cats), and expected that this would all be fine.

But of course it wasn’t fine. And guess what? There are tons of people out there for whom it wouldn’t have been fine. And if you’re at a similar place to me on the spectrum–maybe a developer who ends up gravitating to positions that involve a lot of interaction with the product or sales teams, or one who really enjoys doing lots of mentorship even though it slows you down–it probably won’t be fine for you, either. In fact, like me, you may find yourself being utterly miserable at a job that by all rights ought to be your one true calling.

But good news! Introversion and extroversion are a spectrum, and so ideal working environments are also a spectrum. In my case, while I may need less social time than some, I emphatically do need daily socializing time, and that led me to what ended up being the perfect solution for me: coworking.

The social benefits of coworking

When I first figured out what was really going on, I felt trapped. Working remote was awful for me, so I needed to stop, but that in turn meant leaving a job I knew I’d otherwise love, and that I’d wanted for a long time. So that stank. Thankfully, I’m annoyingly stubborn, so, rather than give up, I decided to reëvaluate something I’d immediately discounted when we initially moved down: getting an office.

On the surface, getting an office didn’t make any sense, which was a big part of why I rejected it. I don’t have clients, so I didn’t need an office for my professional image, and deliberately undoing some of the benefits of remote work (regaining a commute, losing workspace flexibility, avoiding interruptions) while not reaping the benefits (you’re not gonna suddenly have a spontaneous hallway discussion with coworkers about that project you’ve all been working on when your coworkers are located in another castle) seemed pretty ridiculous.

But when viewed through the lens of what I’d been suffering from, coworking–or at least, the right kind of coworking–might make a ton of sense, I realized. In particular, if I could find one that was both a good work environment, and also had a real sense of community, then I might find a way to turn things around and end up in a good place.

The bad news is that many coworking spaces emphatically do not fit this bill. It’s not that they’re bad; they’re just optimized for other types of people with different needs than I’ve got. For example, one of the first coworking spaces I looked at had food trucks (good) and a decent community (good), but its common area was insanely noisy (bad), and the private offices they provided to counterbalance that had no windows (very bad), weren’t near the communal space (really bad), were ridiculously expensive (tax-deductible), and could only be accessed by bribing the bridge troll with a fish (gross).5 If you need coworking primarily to get away from a distracting or noisy home environment, that might actually all be perfect, but it would’ve been the exact opposite of what I personally needed. Another I looked at was great…but would’ve easily resulted in an hour-long commute by car, which I rapidly determined was one of the few ways to improve upon a hellish New York commute.

I ended up lucking out. Right about when I was thinking I should give up, Loading Dock, with explicit goals of both having a community and being socially active (which gave it a strong alignment with Khan Academy’s culture), opened up very close to me, and to cut a long story short, it’s ended up turning my remote gig at Khan Academy into one of the best jobs I’ve ever had. And because of that, it’s improved my general mood, decreased my background stress level, and generally turned my move to North Carolina from something I regretted into something I’m loving.

Several sizes fit most

The thing with all of this is that it was the right move for me, and while I think it’d probably be the right move for a lot of people, I don’t think it’s the right move for everyone. Working situations are not a one-size-fits-all kind of thing, and I think the tech community can be surprisingly hostile to anything that isn’t a one-size-fits-all solution. When we code, we’re encouraged to find the One True Solution™, and I think that can make us overly biased to believe that when we’ve found the best solution for us–whether we’re talking Vim v. Emacs, C# v. Java, OpenBSD v. an insecure OS, etc.–we’ve by extension found the best solution for everyone. In the case of working remotely, I think those who hated traditional office environments and then found working from home to be amazing for them concluded that that was the One True Solution™ for happy workers. But the reality is that neither working from home, nor working from the modern open office, is best for everyone; there simply isn’t a single solution for work environments.

That’s the great thing about coworking. For the first time in my life, I got to pick my company and my job separately from my office. I missed that at first, and saw the options only as working on-site with a company or remote from my home. In my case, what I actually needed was something in between.

If you’re thinking of working remote, then think about what kind of working environment you’re happiest with before you take the job, and make sure you’ll have that environment available to you. Are you sad when a lot of your office is out sick, or are you relieved? Do you get uncomfortable when you’re in quiet environments for too long, or do you revel in them? Do you feel weirdly lonely when you’re in a noisy coffee shop, or do you feel energized? Use experiences like these to help you form an opinion of what will make you happiest, and then go search for an environment that’s close to what you’re looking for. It’ll help you avoid learning the lesson I did the hard way, and will instead let you enjoy your job from day one instead of day 200.

  1. Duh. [return]
  2. At least for developers, anyway. Bakers might have some ironic difficulty. [return]
  3. Here, talk to Dewey, he knows more about it than I do. [return]
  4. This is not only true; I have the move-out bill from the apartment building to prove it. [return]
  5. Okay, okay. One of those things isn’t true. Guybrush insisted I mention it as a red herring. [return]

Separate, support, serve

Yesterday, Microsoft continued down a path that they’ve been pursuing for awhile by providing even tighter ties between Windows and Linux–including allowing running unmodified Ubuntu binaries directly in Windows. Reactions were, to say the least, varied; many people were preparing for the apocalypse, others were excited about being able to use Unix tools more easily at work, and still others were just fascinated by how this was technically accomplished. These reactions mostly made sense to me.

One did not. Especially on sites like Hacker News, many responses were screaming that people needed to be scared, to remember Embrace, Extend, Extinguish, to run for the exits as quickly as possible.

I find this reaction frustrating and depressing, not because it’s offensive, but because it’s so obviously incorrect and intellectually lazy that it gives me a headache.

I want to do two things in this blog post: convince you that Embrace, Extend, Extinguish is a grossly invalid understanding of Microsoft’s strategy; and convince you that an alternative strategy of Separate, Support, Serve provides a much better lens to view the modern Microsoft.

The Death of the Three Es

I’m not going to try to persuade you that Microsoft isn’t evil–if you believe they are, you’re wrong, but I don’t honestly care–but I am going to explain to you that, even if Microsoft were still evil, they would still not be doing Embrace, Extend, Extinguish.

First, I want to quickly remind you what the computing landscape looked like when Microsoft was using that strategy. Windows ruled everywhere, in a way that’s almost impossible to imagine today. Virtually all desktops everywhere ran some flavor of Windows. Mac OS, while arguably more usable than Windows, was technically inferior, and had such an app shortage (especially in niche spaces) that it was largely irrelevant. This in turn meant that Windows also ruled most of the back office. Paired along with the Office monopoly, Microsoft really and truly had a total lock on the personal computing space. It was basically impossible to use a computer without interacting with at least one Windows device in the process.

In that epoch, Embrace, Extend, Extinguish made a hell of a lot of sense. The idea was simple: if Microsoft saw a technology that threatened Windows, they’d embrace it (make it available on Windows), extend it in such a way that the best way to use the threat was Windows-specific, and, once most uses of the technology were sufficiently tied exclusively to Windows, extinguish it.

When Microsoft was a monopoly, this was a superb strategy to protect that monopoly. If they saw a threat, then bringing the threat in-house and tying it to the Windows platform was a great way to ensure people couldn’t leave, even if they wanted to. In effect, your alternatives had a tendency to evaporate before you had a chance to use them.

But Microsoft is no longer a monopoly. Hell, in many key areas, they’re effectively a non-player. While Windows maintains a plurality in old-school personal computers, Windows Phone is basically a failed project, the cloud is all but Linux-only, and even the entire existence of the back office has been threatened by tools like Google Apps and other hosted solutions. They’ve even lost most kiosks to custom Android variants, and most developers to OS X. It’s now surprisingly rare that I do interact with a Microsoft system on a normal day, and I’m hardly unique in that.

This leaves us with two conclusions. First, empirically, Embrace, Extend, Extinguish failed; if it hadn’t, Windows would still be a monopoly. For Microsoft to be continuing this strategy, you have to believe they are not merely evil, but also unrecoverably stupid.

Second, it can’t work in an environment where Microsoft is an underdog. For nearly all shops out there, leaving Windows is honestly pretty trivial at this point; it’s adopting it that’d be an uphill battle. If I pick “Linux”, I can trivially integrate OpenBSD, Illumos, OS X, and any other Unix-like environment into my workflow with few or no issues. I can pick amongst AWS, GCE, Digital Ocean, and others for my hosting. I can pick virtually any language and database I want, use virtually any deployment tool, and migrate amongst all of these options with relative ease.

Windows is the odd one out. Adopting it not only means getting into a single-vendor solution, but also dealing with writing two sets of most deployment pieces, and dealing with licensing, and dealing with training my ops and IT teams on two radically different technology stacks. I’m going to need one hell of a value proposition to even think about it, and I still would likely turn it down to keep my ongoing maintenance costs sane.

Further, it’s a surprisingly hard environment for me to use as a developer these days even if I want to. If I grab a MacBook, I can write apps for iOS, Android, and Unix, all natively. If I grab a Windows laptop, I can’t target iOS at all, and I have to do any Unix development in a VM. This means that at Khan Academy, for example, I’d have to be insane to buy a Surface, even though I love the device; I’d end up spending all day in a virtual machine running Ubuntu. It’s not impossible to use Windows, but honestly, if I have to spend all day in a full-screen VMware session, why bother?

In that environment, the old Three Es just don’t apply. They were about locking me into Windows, but we’ve long since passed that point. The problem Microsoft now faces is one of staunching the bleeding, and that requires a radically different strategy.

The Facts on the Ground

So: if you’re Microsoft, and you’re facing a world where you’ve largely lost all the current fights; where you’re losing developers left and right; where the challenge isn’t keeping people from leaving, but getting them to knock on the door in the first place; what do you do?

There are a couple of strategies that Microsoft could take in this environment, but I want to assert two key facts before we get going.

First, it’s very unlikely that Microsoft can stage a meaningful comeback at the OS layer in mobile, cloud, or server rooms at this point. We’re all now at least as entrenched in iOS and Android on mobile, and Linux on servers, as we ever were in Microsoft PCs. So if Microsoft is going to remain relevant, they’re going to have to do it in a way that meaningfully separates going Microsoft from going Windows.

Second, even if somehow they gained a meaningful foothold in those markets, it’s very unlikely they’ll be anywhere near a monopoly player in the space. iOS, Android, and Linux are so firmly established, and so pervasive, that any conceivable world for now is one where Microsoft has to get along with the other players. In other words, Microsoft-specific solutions are going to be punished; they’ll need technologies common to everyone.

If you agree with those two facts, their current strategy falls out pretty cleanly.

A Way Forward

First, Microsoft has to enable me to even use Microsoft technology in the first place. If Microsoft keeps tying all Microsoft technology to Windows, then they lose. If I have to use Windows to use SQL Server, then I’ll go with PostgreSQL. If I have to use Windows to have a sane .NET server environment, then I’ll pick Java. To fix that, Microsoft needs to let me make those decisions separately.

That’s indeed the first phase of their strategy: separating Windows from the rest of their technologies. SQL Server is available on Linux not to encourage lock-in, but because they need you to be able to choose SQL Server even though you’ve got a Docker-based deployment infrastructure running on RedHat. .NET is getting great runtimes and development environments (Visual Studio Code) for Unix so that I can more reasonably look at Azure’s .NET offerings without also forcing my entire dev team to work on Windows. This strategy dramatically increases the chance of me paying Microsoft money, even though it won’t increase the chance I’ll use Windows.

Next, Microsoft needs to do the reverse: make it feasible for me to use Windows as a development environment again. That’s where the dramatically improved Unix support comes in: by building in an entire natively supported Ubuntu environment, and by having Visual Studio able to produce native Linux binaries, they’re making it feasible for me to realistically pick Windows even in a typical Unix-centric, cloud-focused development shop. Likewise, Visual Studio’s improved support for targeting iOS and Android, and Microsoft’s acquisition of Xamarin, go a long way toward enabling me to do something similar on the mobile front.

In both of these cases, while there may be an “embrace” component, the “extend” part is notably missing–and it should be. Microsoft can’t meaningfully extend iOS, Android, or Linux in a way that’d actually matter to anyone at this point; it has to just support them on their own terms. And in that environment, it’s not possible to extinguish things; if Microsoft woke up one day and announced Xamarin was dead and gone, people would grumpily rewrite their stuff in Swift and Java, not suddenly announce that they were Windows-exclusive.

Finally, Microsoft still needs to make money, and they can do that by selling software as a service (Azure, Office365, and so on), rather than off-the-shelf. That not only gives them a steady revenue stream independent of their Windows installed base–after all, a person using Office365 pays the same whether they’re on Windows, OS X, or a Chromebook–but also insulates them from any future platform changes. Does HoloLens take off? PlayStationVR? Oculus Rift? Will Microsoft catch the next wave? Who cares. As long as you’re using Microsoft products somewhere in your stack, they’ll be fine.

Separate, Support, Serve

It’s not as catchy as the original, and it certainly sounds a lot less ominous, but I think this can be summarized as the Three Ss: separate all of Microsoft’s offerings from Windows itself; support the reality of this heterogeneous world when on Windows; and be the company that serves as much content as possible from its data centers.

I’m not saying that Microsoft can’t still lock people in some way. Apple definitely tries to lock in its customers with iCloud and iOS, and its developers with Swift, for example. But I do hope that this has convinced you that Embrace, Extend, Extinguish is dead–and, with it, at least some of the FUD about Microsoft’s software.

Jobs once famously said that Microsoft didn’t need to lose for Apple to win. Today, I think it’s worth realizing the reverse: Microsoft doesn’t need you to lose for it to succeed.

Android, Project Fi, and Updates

Edit: Mere days after posting this (and unrelated to this post), Google publicly apologized for the Android 6 roll-out delay and pushed out Android 6.0.0 to Nexus 6 devices. They then followed that up extremely rapidly with the Android 6.0.1 update. I think this bodes incredibly well. Project Fi is still a very new service, and I’ve little doubt that Google has to work out some kinks on their end. For the moment, I’m going to take a step back, watch, and see if this new rapid update cycle is the new norm. If it is, I think I’ve found my ideal carrier and platform. But I still think that encouraging new users to stick to iOS until this update cycle is proven is probably the best course of action.

I want to make clear, right up front, that I am absolutely not an iOS apologist. I couldn’t wait for the first Android phones to come out, and I bought a Motorola Droid on launch day. I was excited about its better multitasking, about the keyboard, about the better integration with Google services, about the fact that I could use Java instead of Objective-C,1 about the much more open platform that wouldn’t restrict what I wanted to do. I was very sincerely excited.

But neither the hardware nor the software were quite ready at the time. I went through three Droids, suffering one (thankfully warranty-covered) hardware failure after another. After an initially promising update cycle (the Droid was upgraded to what I believe was Android 2.1 very quickly), I began to see that Google was having issues getting new versions of Android out on a sane schedule. So, after a couple of years of living on the Android train, I hopped off and grabbed an iPhone.

That didn’t mean I gave up on Android. If anything, I was pretty confident that Android, not iOS, would be the winner in the end anyway. Google would figure things out—and it wasn’t even just Google, after all, but a huge chunk of the telecom industry, all of whom had a vested interest in keeping Apple from dominating, helping them out. We’d seen this play out already with Microsoft and the PC makers versus Apple in the 90s; we knew how it would end, that Android would close the gaps and take over the industry. It was just a matter of time while Google got their operation running smoothly.

While that’s obviously not what happened, both Android software and hardware did markedly improve. There were even a lot of things that Android got first that were genuine usability wins: instant replies from notifications, assistants (Google Now), turn-by-turn directions, cross-application communication, automatic app updates, and more. I ended up buying a Nexus 7 as a tablet, and found that, at least as a developer, it fit my needs a lot better than an iPad ever did.

There was, however, one caveat: Android’s security story. Because Google couldn’t get updates out to its phones on a sane schedule, most Android phones had long-running unpatched security issues. If there’s one thing I think we’ve learned about security over the last few years, it’s that a team that patches early and often is going to be vastly better protected than one that doesn’t. This didn’t bother me too much on my Nexus 7—Google was better about pushing out updates for its tablets than its phones, and at any rate, side-loading the OS didn’t pose any major problems for me on a non-mission-critical tablet—but it kept me from returning to Android phones.

So when Project Fi was released, I signed up immediately. I figured I could finally, finally have my cake and eat it, too: Google generally kept Nexus devices up-to-date, and the Fi pricing model seemed like a huge improvement to me over what I’d been forced to do on the major carriers. What wasn’t to love? I could go back to Android and bid Verizon adieu at the same moment, a great double-win.

That is emphatically not what happened. First, security updates were slow to come out: whereas Apple virtually always has security issues patched well ahead of any disclosure window,2 Google seemed to struggle. When Stagefright came out, I had to wait, just like everyone else, for my patch. And when that patch happened, it was woefully incomplete, so then I got to wait again for a patch to the patch. And then when Android M shipped a month ago, Google left Nexus 6 users—all of whom own a phone that is just barely over a year old at this point—running Android 5.1.1. Yes, you can get M on Project Fi, but you have to side-load (which their support representatives are loudly and actively discouraging in their support forums), or you have to buy a new phone—the exact situation that exists on other carriers, and the exact situation I was trying to avoid.

This is ridiculous. Apple manages to push out updates to all carriers on the same day. Microsoft, which generally brings a vaguely Scooby Doo-like quality of competition to the smartphone landscape, manages to get updates out to all Lumia devices within at most a few days of each other, and also has a very simple system in which any Windows Phone user can opt-in to get Windows Update-style updates ahead of general availability. Meanwhile, on its own cell network, Google has…side-loading, which it’s discouraging.

This just shouldn’t be that hard. And yet, for Google, it clearly is.

So I give up. Apple can keep their products up-to-date across dozens of carriers; Google can’t even keep their own products up-to-date on their own cellular network. If they can’t even make that work, then I throw in the towel.

I suppose it’s possible that my next phone won’t run iOS, but the one thing I can guarantee you is that it’s not going to run Android.

  1. I am not trolling. Java, at the time, was a much more pleasant language to work in than Objective-C. You had garbage collection, a better dependency management story, better support resources, and a much larger collection of third-party libraries, and to top it all off, you had Eclipse or IntelliJ instead of a fairly early version of Xcode. Even if Android’s APIs might not’ve been the best I’d ever used, they were, at least in my opinion, just fine. [return]
  2. “Virtually” is a key word there; they had a couple minor vulnerabilities that were disclosed prior to patch. But we’re talking a couple issues patched after disclosure date versus a spate of major Android ones that stay unpatched for literally weeks or months after disclosure. There’s no contest. [return]

Genuine opinions, thoughtfully presented

When I was in high school, I used to do competitive speech.1 I didn’t really want to do competitive speech as such; what I wanted to do was competitive debate. After all, debate was way more fun: you got to argue, on purpose, about things with little actual consequence! And you got more points for being the best arguer! What’s not to love?

Sadly, my school didn’t have enough people to do both debate and speech; we had to pick one, and since the overwhelming majority of my fellow classmates wanted to do speech, we did speech. So I had to spend four lousy years getting good at public speaking, which hasn’t come up even a single time in real life when you exclude when I speak at events and conferences and at work and so on. Clearly I lost out.

There was, of course, an event that was kind of like debate at the speech meetups. The Internet is being amazingly unhelpful in telling me what it was actually called, but I believe it was “discussion.” To the best of my recollection, in discussion, you were given a topic to discuss, and then needed to demonstrate some finesse both in keeping the discussion going, and in having the group come to an agreement over the course of the event. You were actually penalized for arguing; you got points by drawing others into the discussion, helping them flesh out their points, and helping drive consensus.

Needless to say, I never did this event. Sounded like a thing for wimps.

Strong opinions, inconsistently supported

There’s a very, very old2 saying that many in our industry hold up on a regular basis as the ideal way for developers to have discussions: “strong opinions, weakly held.” It’s used almost as a mantra for how to be at once egoless and stay in motion. After all, if you have strong opinions, you’ll act on them, and if they’re weakly held, you’re highly amenable to changing your opinions in the face of more data. What’s not to like?

The truth is that I’ve been thinking about this a lot recently, and while I know that’s the spirit originally intended, and while I think there’s tons of value in what “strong opinions, weakly held” was originally trying to embody, I am increasingly convinced that its meaning has been coöpted by people who, like a teenage version of me, are really just pining for a good old round of high-school debate.

The original thought is one that I can pretty solidly get behind, at least in theory. As far as I can tell, the expression comes from Bob Johansen, via a post from Bob Sutton in 2006. According to that version,

Bob [Johansen] explained that weak opinions are problematic because people aren’t inspired to develop the best arguments possible for them, or to put forth the energy required to test them. Bob explained that it was just as important, however, to not be too attached to what you believe because, otherwise, it undermines your ability to “see” and “hear” evidence that clashes with your opinions

On the surface, this makes complete sense—and if this were how people actually took the expression, I doubt I’d have anything to add.

But in practice, that’s not what I actually see from people who claim to apply this rule. Instead, I see something that looks a lot more like this:

Developer A: This color is clearly a brownish gray, and everyone who disagrees with me is an idiot. Behold my strong opinion.

Developer B: It’s not a brownish gray; it’s taupe. Here’s the definition of taupe. You’re wrong. Repent before thou dost descend unto the Hellfire.

Developer A: Alas, I have seen the error of my ways. The color is clearly taupe, and everyone who disagrees with me is an idiot, including me from 15 seconds ago. Behold, ye, that my strong opinion was weakly held, and that I am now perfect in judgment.

This isn’t a “strong opinion, weakly held.” This is someone conveying their weak opinion in a forceful manner to the point of being an annoying little snot.3

And while this example is obviously hyperbole, I’ve seen real fights that are ever so barely removed from this extreme. Bower or Webpack? Gulp or Grunt? Go or Python? Java or Kotlin or Ceylon? I’ve seen all of these fought out tooth-and-nail by people who will, in a calm one-on-one discussion, prove surprisingly flexible, but who, in the heat of the moment, get so wrapped up in the “strong opinion” part of the quote that they forget they’re supposed to also be willing to change their minds.

In the most extreme version of this kind of attitude, you get debates over things like editors and personal workflows that don’t require buy-in from the entire team in the first place. I’d find these funnier if they weren’t so divisive, but you have whole friendships destroyed over things like Emacs v. Vim,4 or Firefox Developer Edition v. Chrome,5 or Windows v. OS X.6 It doesn’t even matter what your colleague picks in these situations, because you can still use whatever you want with no repercussions whatsoever. Their decision literally just does not impact you. And yet, these can be some of the most heated debates of any I’ve ever heard.

It’s not that these are invalid discussions to have as such, but rather that you’re likely to see them very forcefully carried out, by people who, to any onlookers, sure seem to be utterly convinced that they’re correct, and who seem to be allowing no margin for error. Even if they secretly, in their heart of hearts, are still holding their strong opinion weakly, the outward appearance they project is one of a strong opinion strongly held…on arguably flimsy data.

And that’s exactly not what you want.

Sincere opinions, frightfully restrained

To borrow a phrase from a different part of our industry, this kind of dialog has a chilling effect: when developers see people stridently defending fairly inconsequential opinions, they’re going to be very, very reluctant to take a stand on something that actually matters.

Most people, after all, do not enjoy conflict, and will avoid it if they have the chance. If you’re willing to go all-in on a fight over how many spaces go in a tab,7 then heaven help me if I try to have a debate with you on whether using Docker images or a custom apt server is the best way to handle our infrastructure deployment. I’d rather just keep my opinion to myself and watch what happens, because I already know you’re going to fight me forcefully with a random assortment of arbitrary facts. So I’ll just be flexible and do whatever you want. That’s the “weakly held” bit, at least, right? I’m halfway there.

I’ve seen brilliant developers, who have great ideas when I speak to them quietly one-on-one, completely clam up in meetings when they’re in environments that espouse this mantra. They’re genuinely afraid to speak up, not because they don’t believe in what they want to say, and not because they can’t defend it, but because they know that anyone who disagrees with them will do so with violent rhetorical force. They will have to very literally fight, even if “only” verbally, for what they believe in. At the end of the day, whether consciously or subconsciously, they conclude that they’d simply rather not fight.

And now you’ve lost a valuable opinion, in a context where the outcome—unlike the conclusion to how many spaces go in a tab—may actually make a difference on whether your team can be successful.

Genuine opinions, thoughtfully presented

So let’s revisit what the original intention was. We want people to actually form a real opinion, because if they’re just trying to look at every possible side of something, they’ll never actually do anything. But we want them to be flexible, because even a well-researched, sincere opinion might be surprisingly incorrect when additional facts are presented.

I’d therefore like to propose a new version of this classic quote: genuine opinions, thoughtfully presented.

The opinions should be genuine because you should actually believe them, not just parrot them for the sake of having an opinion. While there are times and places for a tornado of possibilities, you should never take a side of an argument that you don’t believe in.8 You should have thought through things at least long enough that you do actually believe what you’re about to say.

But as soon as you reach that place, you should thoughtfully present that opinion. It’s an opinion, after all, but it’s one you took time to create. Everyone makes mistakes, including you, but also—and this is important—the person you’re talking to. Even I, on rare occasions, have been known to make a mistake or two. If you don’t present your opinion, I’ll never have a chance to realize I was wrong, you’ll never have a chance to validate that you’re right, and we both lose out. And as long as you do it thoughtfully, I’ll be able to calmly discuss your opinion with you so that we can resolve our differences, regardless of which of us—if, indeed, either of us—is correct at the outset.

Calm discussion is the best form of debate

In the rush to have strong opinions, we lose track of the “weakly held” part. Even if we might genuinely be willing to change our opinions, presenting our own views so strongly that we appear to have our minds fully made up can prevent those with excellent, valuable opinions from speaking up.

I might or might not have enjoyed debate more than speech; I don’t know. What I do know is that I really badly wish I’d done discussion, even if only once, because magnifying those who are just learning how to present their genuine opinions is one of the best things I can do, both as a mentor, and as a colleague.

That’s my genuine opinion, thoughtfully presented. What’s yours?

  1. The organization that puts this on is the National Forensics League, which helpfully abbreviates its name to the NFL in most documents. So if you ever meet me and I casually mention I’ve won NFL meetups, don’t ask to see my Superbowl ring, because I’ve lent it out for the time being. [return]
  2. I mean, in industry terms. So, older than last week. [return]
  3. I’ve been told I name-call too much in my blog posts. Anyone who says that is a big fat nincompoop. [return]
  4. Emacs. [return]
  5. Firefox. [return]
  6. OS/2 Warp!, for PowerPC, little-endian, and anyone who says otherwise is a pinko commie. There. I think I’ve now fully offended everyone I can.9 [return]
  7. Seven. [return]
  8. Except when it comes to clothing fashions. I give up. If we all want to pretend that an un-tucked shirt with corduroys, a fedora, a used blazer, and US Keds is the apex of hip, rather than the completely expected outcome of a bunch of affluent extraterrestrials concluding that Goodwill represented our species’ best unified clothing line, then so be it. [return]
  9. No no, wait, hang on, I got it: and stick to just using WinOS2, since there are no good native apps for OS/2. There. Now I think I’m good. [return]

The More Things Change

React, if you’ve somehow missed it, is the new hotness in web programming. The idea is simple: each React component describes its view idempotently, in JavaScript. The view is rendered entirely based on a small amount of state the component keeps internally. Given the same state, a given component will always render identically. This in turn means that when data changes, React can apply just what changed to the browser’s DOM, saving it from having to re-render the entire page. In fact, the determination of whether to change anything at all can be made purely by consulting the component’s internal state. At its core, that’s why React is very fast.
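
To make that concrete, here’s a minimal sketch of my own (the component name and everything in it are invented, not pulled from any real codebase; it’s JSX, so assume the usual build step): the render output is purely a function of a small piece of internal state.

    import React from 'react';

    // A minimal sketch: render() reads only this.state, so given the same
    // state it always produces the same output, and React can diff away
    // everything that didn't change.
    class LikeButton extends React.Component {
      constructor(props) {
        super(props);
        this.state = { likes: 0 };
      }

      render() {
        // No side effects here: state in, view description out.
        return (
          <button onClick={() => this.setState(s => ({ likes: s.likes + 1 }))}>
            {this.state.likes} likes
          </button>
        );
      }
    }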

React by itself doesn’t actually solve how to propagate changes, though. For that, the most popular solution is another Facebook framework called Flux. In Flux, you have stores that contain data, and dispatchers that process actions and notify parties appropriately when a relevant action has been performed. This flow is unidirectional: user actions trigger a dispatcher action, the dispatcher updates the stores, the stores update the views, potentially causing them to re-render. You can see a nice diagram that probably conveys it better than my description. To build a real application, you usually combine lots of React views and lots of Flux dispatchers into a coherent whole, so each piece composes nicely with all the others.
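
To give a feel for the shape of it, here’s a hand-rolled approximation of that loop (my own toy names, deliberately not the actual flux library’s API): actions go to a dispatcher, the dispatcher updates a store, and the store notifies any views that subscribed.

    // A toy approximation of the Flux pattern, not the real flux library:
    // data flows one way, from actions through the dispatcher to stores to views.
    const callbacks = [];
    const dispatcher = {
      register(cb) { callbacks.push(cb); },                      // stores register here
      dispatch(action) { callbacks.forEach(cb => cb(action)); },
    };

    const todoStore = {
      todos: [],
      listeners: [],
      addChangeListener(fn) { this.listeners.push(fn); },        // views subscribe here
      emitChange() { this.listeners.forEach(fn => fn()); },
    };

    // The store updates itself in response to actions, then tells its views,
    // which re-render idempotently from the store's new state.
    dispatcher.register(action => {
      if (action.actionType === 'TODO_CREATE') {
        todoStore.todos.push(action.text);
        todoStore.emitChange();
      }
    });

    // A user gesture in a view becomes an action; the view never pokes the store.
    dispatcher.dispatch({ actionType: 'TODO_CREATE', text: 'write the post' });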

This sounds nice, but whenever I look at the code for websites that use Flux, I’ve just felt…well, weird. I felt like I’d seen this pattern before, like I’d heard stories warning that dragons1 lay that way, even though it was all “new,” but I couldn’t quite put my finger on why I felt that way.

Until today.

Base Principles

Indulge me in a thought experiment. Let’s say we’re not writing a program for the web, but rather for a resource-strapped graphical computing environment. Maybe one of those embedded microcontrollers that are so popular these days, something even less powerful than a Raspberry Pi. How would we design such a framework?

Well, we mentioned resource-strapped, so instead of keeping an off-screen buffer for every single widget like we do on OS X, we’ll instead just have widgets redraw themselves when we need them to. Because of that, we’ll need to mandate that a widget’s drawing code be idempotent. Further, because (again) we’re resource-strapped, we don’t want to lug around a big runtime or anything. To keep things simple, we’ll say that each widget has a single procedure associated with it, and we’ll give that procedure two arguments: an integer representing what action the user just did, and a pointer to some additional action-specific data (if applicable). That way, each widget can handle each message in the most efficient way possible. Next, because a user action might require us to redraw a widget, we’ll allow the widget to tell us when it needs to be redrawn. Because drawing is idempotent, we can minimize CPU usage by only redrawing whatever it says we have to redraw. Finally, for programmer sanity, we’ll allow these widgets to nest, and to pass custom messages to each other.

Congratulations! We’ve just designed Windows.

Specifically, Windows 1.0. Circa 1985.

Blasts from the Past

This is, really and truly, exactly how Windows used to be programmed all the way through at least Windows 7, and many modern Windows programs still work this way under the hood.2 Views would be drawn whenever asked via a WM_PAINT message. To be fast, WM_PAINT messages generally only used state stored locally in the view,3 and to keep them repeatable, they were forbidden from manipulating state. Because painting was separated from state changes, Windows could redraw only the part of the screen that actually needed to be redrawn. Each view had a function associated with it, called its WndProc,4 that took four parameters: the actual view getting updated; uMsg, which was the message type as an integer; and two parameters called wParam and lParam that contained data specific to the message.5 The WndProc would update any data stores in response to that message (frequently by sending off additional application-specific messages), and then, if applicable, the data stores could mark the relevant part of the screen as invalid, triggering a new paint cycle. Finally, for programmer sanity, you can combine lots of views inside other views. Virtually every widget you see in Windows — every list, every checkbox, every button, every menu — is actually its own little reusable view, each with its own little WndProc and its own messages.
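
Transliterated into JavaScript purely so it’s easy to set next to the Flux code below (the real thing is, of course, C against the Win32 API, and this little counter widget is invented), the shape looks like this:

    // Message numbers are the real Win32 values; everything else is a sketch.
    const WM_PAINT       = 0x000f;
    const WM_LBUTTONDOWN = 0x0201;

    const counterWidget = { clicks: 0, needsRepaint: false };

    function wndProc(widget, uMsg, wParam, lParam) {
      switch (uMsg) {
        case WM_LBUTTONDOWN:
          // A state change: update local state and mark the widget dirty
          // (InvalidateRect in real Windows), which triggers a later WM_PAINT.
          widget.clicks += 1;
          widget.needsRepaint = true;
          return 0;
        case WM_PAINT:
          // Idempotent drawing from local state only; no state changes here.
          console.log(`drawing: clicked ${widget.clicks} times`);
          widget.needsRepaint = false;
          return 0;
        default:
          // Everything else would go to DefWindowProc.
          return 0;
      }
    }

    wndProc(counterWidget, WM_LBUTTONDOWN, 0, 0);
    wndProc(counterWidget, WM_PAINT, 0, 0);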

This is exactly what Flux does. You can see this design very clearly in the official Flux repository’s example app. We can see the messages getting defined as an enum,6 just like you’d do in old-school Windows programming. We can see the giant switch statement over all the messages, and note that different data is extracted based on the message’s actionType. And, of course, you can see components rendering idempotently based off their small amount of internal state.
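
Set that next to a store callback written in the style of the example app (my own approximation, reusing the toy dispatcher and store from the earlier sketch, not code copied from the repository), and the family resemblance is hard to miss:

    // A message "enum" and a switch over actionType, in the style of the
    // official example app; different data comes out of the action depending
    // on its type, exactly as wParam/lParam carried message-specific data.
    const TodoConstants = {
      TODO_CREATE: 'TODO_CREATE',
      TODO_DESTROY: 'TODO_DESTROY',
    };

    dispatcher.register(action => {
      switch (action.actionType) {
        case TodoConstants.TODO_CREATE:
          todoStore.todos.push(action.text);
          todoStore.emitChange();     // views re-render from the store's state
          break;
        case TodoConstants.TODO_DESTROY:
          todoStore.todos.splice(action.index, 1);
          todoStore.emitChange();
          break;
        default:
          // ignore everything else, much like falling through to DefWindowProc
      }
    });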

Everything old is new again.

When the hurly-burly’s done

Flux clearly works. Not just that; it clearly scales: people and companies, not least among them Facebook itself, are writing huge, functioning applications in this style.

But that shouldn’t be surprising; after all, many companies wrote huge, functioning applications for old versions of Windows, too. Just because it works—and moreover, even if I grant that it works and it’s better than what we had before—doesn’t mean it’s the be-all, end-all of web development.

Much as Windows wasn’t locked into WndProc-oriented code forever, I’m sure we’re not going to stop here on web development. We’ll rediscover Interface Builder and Morphic. I’ll get my web-based take on VisualAge, Delphi, and VisualBasic. Even if Flux lives on under the hood, I won’t have to think in its terms day-to-day. I know that will happen, because it has happened before, many times, on many platforms. There’s a ton of historical precedent; I just need to wait.

But maybe, with the benefit of hindsight this time, and recognizing that we have just brought the mid-eighties to web development, we can get through this phase a little faster than the last time.

Here’s to hoping.

  1. Or daemons. [return]
  2. WinUI programs probably work this way technically at some level, but the lowest level of abstraction for WinUI is thankfully much higher. [return]
  3. Via GWLP_USERDATA, if you’re curious. [return]
  4. It’s called WndProc because Windows calls views windows. Indeed, Windows’ concept of “window” is just a little chunk of screen with its own WndProc managing it. OS X’s Quartz actually used to make the same call deep under the hood, though Cocoa obviously exposes a much higher level of abstraction. [return]
  5. In practice, for all but the simplest of messages, wParam was ignored, and lParam pointed at a struct that contained data specific to that message. This actually makes it even closer to the Flux model. [return]
  6. Or JavaScript’s version of them, anyway. [return]