Messages, Google Chat, and Signal

Google is about to try, yet again, to compete with iMessage, this time by supporting RCS (the successor to SMS/MMS) in their native texting app. As in their previous attempts, their solution isn’t end-to-end encrypted—because honestly, with their business model, how could it be? And as with Google’s previous attempts to unseat a proprietary Apple technology, I’m sure they’ll tout openness: they’ll say that this is a carrier standard while iMessage isn’t, and attempt to use that to put pressure on Apple to support it—never mind the inferior security and privacy that make the open standard a woefully…erm, substandard choice.

So here’s my suggestion to Apple: you’ve got a good story going right now: you have the more secure, more privacy-conscious platform. If you want to shut down Google’s iMessage competitors once and for all, while simultaneously advancing your privacy story for your own customers, why not have iMessage use Signal when the recipient doesn’t have an iOS device? Existing Apple users would be unaffected, and could still leverage the full suite of iMessage features they’re used to. Meanwhile, Android customers on WhatsApp or Signal would suddenly have secure communication with their iOS brethren, not only helping protect Android users, but also helping protect your own iOS users. And you’d be doing all of this while simultaneously robbing Google of the kind of deep data harvesting that they find so valuable.

I doubt Apple will actually do this in iOS 12, but it’d be amazingly wonderful to see: a simultaneous business win for them, and a privacy win for both iOS and Android users. I’ll keep my fingers crossed.

Moving and backing up Google Moving Images

For reasons that I’ll save for another blog post, I decided recently to ditch pretty much the entire Apple ecosystem I’d been using for the last decade. That’s meant gradually transitioning from macOS to Ubuntu, and from iOS to Android. Of course, to ditch iOS for Android required a new phone; after some research, I opted for a Google Pixel 2.

The Pixel 2’s been a great phone and has lots of interesting features, but one of the more esoteric features is called Moving Images. These are Google’s take on Apple’s Live Photos: when you take a photo, a very small amount of video is also recorded, yielding a kind of Harry Potter-like effect. In general, I don’t honestly care all that much about the video bits of these, but every once in a while you capture, by happenstance, a truly unique moment where a Live Photo or Moving Image is really special, and on those occasions, I’m incredibly thankful someone at Apple came up with this idea.

In general, I use Google Photos to manage my photo collection, in part because it hits a sweet spot on my convenience/safety metric: the web application and mobile clients are incredibly easy to use for day-to-day work, and keeping a local copy of all your photos is as trivial as clicking a checkbox in Google Drive and then downloading them with the Google Backup & Sync tool (or InSync or rclone on Linux). The ease of getting a local mirror of my Google Photos data is great not just for offline access, but also for both offsite backup (in case I ever lose access to my Google account) and trivial rich editing with The GIMP, Lightroom, darktable, Acorn, or any of the other heavier-duty photo editors when I want to. It’s genuinely been one of the better cloud/local hybrids I’ve used.

I was very happy with this setup until just a few days ago, when I made an annoying discovery: Moving Images are very difficult to back up. In fact, the only way I ultimately managed to get everything automatically backed up was to use a tool not from Google, but from Microsoft.

The lost 110 photographs

I honestly wouldn’t even have noticed there was a problem in the first place except that I realized that Backup & Sync failed for exactly 110 files—on all of my machines. macOS, Windows, whatever: it didn’t matter; those 110 files wouldn’t download. I could click “Retry All,” I could reinstall Backup & Sync, I could even utterly remove all the downloaded data and retry from absolute scratch, but those 110 files refused to budge. Google is Google, so there was no way for me to really reach out and get genuine tech support,1 but I did poke through their forums. And promptly felt my heart drop as I found three things very quickly:

  1. I was hardly the only one with this issue.
  2. The Google Drive team would move posts on this topic to the Google Photos forums, and the Google Photos team would move them to the Google Drive forums, because each team generally said it was the other’s problem. As far as I could tell, no matter which forum ultimately ended up being the thread’s home, nothing was resolved (see e.g. this thread, which ended up in the Drive forum).
  3. Many of the affected users mentioned Pixel phones.

This caused me to look at whether there was a pattern to what wasn’t getting downloaded, and I spotted the issue instantly: all 110 files started with MVIMG, the prefix for Moving Images. At that point, I found that there had been topics going back months about Moving Images not syncing properly (e.g. this post from early January). But the good news was that multiple people were saying that newer Moving Images were backing up properly, and it was trivial for me to verify that, indeed, more recent Moving Images I’d taken had downloaded, and some spot-checks showed happy little JPEGs all right where I wanted them to be on my local disk.

Okay, I thought to myself. That stinks, but it’s just those 110 photos; new ones are downloading just fine. So, worst-case, you download 110 photos by hand. Not the end of the world.

I went to sleep and didn’t think more about it.

The “moving” part of Moving Images is optional

It wasn’t until the next morning that I realized something was wrong. When I’d spot-checked more recent Moving Images to verify they had backed up, I of course hadn’t checked on the actual “Moving” part of the Moving Image; while Moving Images are technically JPEGs, the video is stored in such a way that nothing I’ve got can (currently) see it. That didn’t faze me too much, mind—chances were overwhelmingly high that someone else would reverse-engineer the format, and failing that, the chance the thing was just an MPEG concatenated to, or stored inside, a JPEG was extremely high. That’s well inside the realm of things I’ve reverse-engineered in the past. But it did mean that I hadn’t explicitly verified whether a video stream was present.

Over breakfast, a little detail I’d missed finally registered: the files were just too damn small. The Pixel 2 has a 12 megapixel camera. Photos it takes, even with really good compression, ought to be at least a couple megabytes by themselves; throw in video, and they should be at least 6-10 MB. Yet every file I was looking at was, tops, in the 4 to 5 MB range. That was simply insufficient to store both a high-resolution photo and a video stream. Something was up.

I picked one of the Moving Images at random. On my Pixel 2, and on the Google Photos website, it showed up as 6.4 MB; my local copy was only 3.4 MB. Another Moving Image showed the same pattern: 7.2 MB on Photos and on my phone, but only 3.7 MB locally. Indeed, a quick sanity check seemed to reveal that all the Moving Images had suffered the same fate. And it wasn’t limited to the official Backup & Sync tool, either: InSync and rclone both showed the exact same behavior. Yet downloading the pictures manually from the Google Photos website gave the original, larger image. The only conclusion I could reach: the Google Drive service itself was stripping out the Moving part of the Moving Image.2
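If you want to run the same sanity check on your own local mirror, something along these lines does the trick; the helper name and the 6 MB cutoff are my own guesses, based on the sizes above:

```shell
#!/bin/sh
# small_mvimgs DIR: list Moving Images under DIR that look too small to
# still contain their video stream. The 6 MB cutoff is a rough guess
# based on the file sizes discussed above (GNU find assumed).
small_mvimgs() {
  find "$1" -name 'MVIMG_*' -type f -size -6M
}
```

Pointing it at your synced photos folder (wherever InSync or rclone puts it) lists every candidate for closer inspection.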

API? What API?

My first thought was I’d just write my own backup client. After all, while the Drive integration was nice, all I really wanted was automatic offsite backup. While writing something myself wasn’t quite my first pick, I didn’t anticipate it’d be that hard, and since I could download the full, untrimmed files from the Photos website, I knew the raw files existed; it was just a matter of using the proper Google Photos API.

Except…well, there is no Photos API, as far as I can tell. The Picasa Web Albums API has been deprecated since Picasa was shut down in 2016, and Google doesn’t list a Photos API anywhere on its developer portal. In other words, the Drive API seemed to be the only official way to go. But I knew from InSync and rclone that the Drive API was exactly where the problem lay in the first place.

Okay, back to the drawing board.

Backup backup options

The second idea I had was to try another photo synchronization service. The raw data was obviously on the phone; I just needed something that could get it off. My first stop was Dropbox: I’d used it for years previously, I knew they had a nice Linux client, and I still used it actively.

Dropbox completely failed here, on two levels: first, it suffered the same trimming issue Google Photos did, so in a narrow sense, it obviously didn’t solve my problem. No biggie.

But Dropbox also failed because it has become downright slimy when it comes to letting you downgrade your account. When I was in Dropbox, I realized I’d fallen below the storage threshold for a free account, so I decided to cancel my paid membership. Dropbox made this incredibly difficult: first, when you click on “Change Plan,” your only option is to upgrade; there is no way to downgrade. You instead have to scroll to the very bottom of the window and click a tiny “Cancel” link. After that, you then have to choose to cancel three or four more times, being interrupted to be told why leaving’s a bad idea on screens where the default button keeps alternating between the “continuing closing my account” option and the “haha no actually I totally want to keep my account, thank you for asking” choice. It took me a couple of tries before I finally extricated myself. Never again, Dropbox. If you have to play that dirty to keep customers, then I’m definitely not sending any business your way.

My next thought was to see if someone had written a photo uploader for Upspin, but they haven’t, and that’s considerably more time than I’ve got right now, so that was it for that idea. I also thought about using Perkeep, since that does have an Android photo uploader, but my Perkeep installation is behind my firewall, and AT&T’s modem prevents my old OpenIKED-based VPN setup from working, so that route was also out.

The final tool I reached for before giving up was Microsoft OneDrive, and I was pleasantly surprised to find that OneDrive just worked. As far as I can tell, OneDrive uploads the unaltered original files, verbatim; if I copy the raw file off my phone via USB, the hashes match.
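The verification itself is trivial; a sketch, with an illustrative helper name:

```shell
#!/bin/sh
# same_file A B: report whether two copies of a file are byte-identical,
# by comparing their SHA-256 hashes.
same_file() {
  a=$(sha256sum "$1" | cut -d' ' -f1)
  b=$(sha256sum "$2" | cut -d' ' -f1)
  if [ "$a" = "$b" ]; then echo identical; else echo different; fi
}
```

Running that over a file pulled off the phone via USB and its OneDrive twin should print identical if the service really does store files verbatim.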

That said, while I have had very good experiences with OneDrive in the past, simply moving to OneDrive isn’t really an option for me right now: my family all heavily use Google Photos, and we make extensive use of shared albums. Getting everyone moved onto a new service just isn’t feasible, so I was going to have to find a way to make both OneDrive and Google Photos play together somehow.

Time for a short shell script.

The “solution”

I ended up putting together a process that is very gross, but does work: first, I have both Google Drive (via rclone) and OneDrive (via the excellent open-source onedrive client) syncing locally. I create a copy of the Google Photos folder structure in a different location, and then hardlink all of the photos from the InSync folder to the copy. Next, I look for any photos in the copy whose names start with MVIMG_. For each photo I find, I look for a corresponding, larger file in the Microsoft OneDrive camera roll, and, if I find one, move that image over to the new folder structure in place of the Google Drive one.
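The whole process can be sketched roughly like this; paths and the size heuristic are illustrative, and this is an approximation of what my script does rather than the script itself:

```shell
#!/bin/sh
# merge_moving_images SRC CAMERA DST:
# 1. mirror the Google Photos tree SRC at DST using hardlinks;
# 2. for every MVIMG_* file, swap in the larger OneDrive original
#    from CAMERA, if one exists. (GNU coreutils assumed.)
merge_moving_images() {
  src=$1; camera=$2; dst=$3

  # Hardlink copy: the mirrored tree costs essentially no disk space.
  cp -al "$src" "$dst"

  find "$dst" -name 'MVIMG_*' -type f | while IFS= read -r f; do
    orig="$camera/$(basename "$f")"
    if [ -f "$orig" ] && [ "$(wc -c <"$orig")" -gt "$(wc -c <"$f")" ]; then
      # Remove the hardlink first so we don't write through it and
      # clobber the synced copy, then drop in the full-size original.
      rm -f "$f"
      cp "$orig" "$f"
    fi
  done
}
```

Using hardlinks means the only real copies on disk are the OneDrive originals swapped in at the end; everything else still points at the synced files.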

It’s not ideal, and the resulting Ruby script is not exactly the best code I’ve ever written, but it does appear to work.

Moving forward

Currently, I’m in an unhappy place: I’m generally still using Google Photos, but I’ve also got camera shots going to OneDrive, and I have a gross Ruby script that tries to sanitize this mess. Further, I’m not actually fully confident that these larger files do in fact have the video information I need; I’ll need to learn more about the JPEG file format to figure out if my hunch is correct—and if so, to figure out how to extract the data.
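For what it’s worth: if the hunch is right that the video is simply an MPEG-4 stream concatenated onto the JPEG, then an MP4 “ftyp” box header should be findable somewhere past the image data, and splitting the file at that point would recover the video. To be clear, this is pure speculation (the format isn’t documented anywhere I can find), but the check is only a few lines:

```shell
#!/bin/sh
# extract_video FILE: look for an MP4 "ftyp" box inside FILE and, if
# found, split everything from that box onward into a .mp4 file.
# Speculative: assumes the video really is just appended to the JPEG.
extract_video() {
  offset=$(grep -abo ftyp "$1" | head -n1 | cut -d: -f1)
  [ -z "$offset" ] && { echo "no embedded video found"; return 1; }
  start=$((offset - 4))   # the box's 4-byte size field precedes "ftyp"
  dd if="$1" of="${1%.jpg}.mp4" bs=1 skip="$start" 2>/dev/null
  echo "wrote ${1%.jpg}.mp4 (video started at byte $start)"
}
```

If the hunch holds, the extracted file should open as an ordinary MP4; if grep finds nothing, the hunch is simply wrong.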

Meanwhile, I’m going to hope that Google either just makes a proper API for doing this, or fixes the Drive API to allow fetching the original files. But at least I don’t have to worry about losing any raw data in the meantime.


  1. This is, strictly speaking, in my particular case, a lie; I know enough people at Google that I can usually just play a game of telephone until I find someone who both works on a relevant team and cares enough to help resolve my problem. But a) normal people cannot do this, and b) this actually was not helpful this time around. [return]
  2. To be clear here, it’s possible that’s not quite what’s happening; it’s tricky for me to tell, since I haven’t yet reverse-engineered the file, and Google hasn’t (as far as I can tell) documented what they’re doing. But Photos/Drive editing the file between my phone and my machine means regardless that it’s not trustworthy as a backup option. [return]

Commit SHAs as dates

I’ve been going through a pile of old bitquabit posts. While many of them hold up over time, the more technical ones frequently don’t: even when I was lucky and happened to get every technical detail right, and every technical recommendation I threw out held up over time (hint: this basically never happens), they were written for a time that, usually, has passed. Best practices for Mercurial in 2008 are very much not best practices now. But it’s a bit tricky: whether something I wrote is genuinely out-of-date has less to do with how much raw time has passed than with how much churn the project has seen.

To that end, I was happy to see that some of the blogs I follow have started using Git commit SHAs to date their posts, alongside the calendrical date—serving as a kind of vector clock for the passionate. If you’re writing technical posts for an open-source project, this seems ideal to me: casual observers can go with the calendrical date, while people deeply involved in that arena or project can instead key off what has happened since the commit in question.

I’m not going to retrofit all my old posts, but it’s something I’ll keep in mind going forward.

Automating Hugo Deployments with Bitbucket Pipelines

As I mentioned in a recent post, I manage my blog using a static site generator. While this is great to a point—static site generators can handle effectively infinite traffic, they’re stupidly cheap to run, and I can use whatever editor I feel like—the downside is that I lose tons of features I used to have with dynamic blog engines. For example, while it’s almost true that I can use any editor I want, I don’t have a web-hosted editor like I would in WordPress or MovableType, and I likewise can’t trivially add any sort of dynamic content. Most of what I lose I can live without, but one that is genuinely annoying, and which has even bitten me in the past, is that I can’t publish without being on a computer that has both my SSH keys and the publishing toolchain installed. Not only is that inconvenient; it means that publishing output can vary depending on which machine I use for a given publishing run.1

There’s a pretty easy fix for that: add continuous deployment. If it’s good enough for real software, it’s good enough for a personal blog. I can set up a single, consistent deployment environment on some server, drive all the deploys through that, and call it a day. The problems here being that a) setting up a continuous integration server is annoying, and b) I am lazy. There are cloud-hosted CI servers, but most of them either are overly complex, or are too expensive for me to justify using for my personal blog.

Enter Bitbucket. I’m already using them, since they’re far and away the best Mercurial hosting game in town these days, and they recently2 added a new feature called Bitbucket Pipelines that fits all my requirements: cloud-hosted, free, easy to use, cheap, and it didn’t cost anything.3

And I’m glad I looked, because getting everything running turned out to be stupidly easy.

Step one: write the Dockerfile

Bitbucket Pipelines wants to base your deployment on a Docker image, so I had to write one. Thankfully, it’s so easy to make Docker images these days that pretty much everyone is making them—even when there is no conceivable reason why they should. So let’s set one up.

To deploy my blog, I need at least four things: Hugo, Pygments, rsync, and SSH. It took me a couple tries to get the Dockerfile just right (mostly because I straight-up forgot rsync and SSH on the first go), but the result is literally five lines, total:

FROM alpine:3.6

RUN apk add --no-cache bash git go libc-dev python py2-pip rsync openssh-client
RUN pip install pygments
RUN go get -u github.com/gohugoio/hugo

About the only thing remotely interesting here is that I’m using Alpine Linux, which I selected because it seemed to be what the cool kids were using these days, and because it was one of the smallest base Docker images I could find. I’m not honestly sure if bash is needed (I suspect /bin/sh would’ve been just fine), but I originally wrote my deployment script for bash, and I’m too lazy to figure out if I used any bashisms, so let’s just toss that in there anyway. What’s a paltry 34 MB between friends?

Tons of places host Docker images for free these days, and Bitbucket can use any of them; I kept it simple and pushed it to my Docker Hub account.4

Step two: write the build script

I actually already had a build script,5 so all I really had to do was tweak it slightly to be run on something other than my personal machine. The result’s genuinely not interesting, but for completeness, the functional part of it looks like this:

#!/bin/bash

# Normal boilerplate (see e.g. https://sipb.mit.edu/doc/safe-shell/)
set -euo pipefail
IFS=$'\n\t'

# Add $GOPATH to the path so Hugo will be present
export PATH=$(go env GOPATH)/bin:$PATH
hugo --cleanDestinationDir
rsync -av --delete public/ publisher@bitquabit.com:/var/www/blag/

Again, nothing interesting here. We’re at exactly ten lines, and even that only because I added some comments and some blank lines for readability. I called this file build and stored it unceremoniously in the root of my blog repository.

Step three: test it…if you feel like it

Since we’re going to deploy files to a real server in an automated fashion, the next step is to test everything.

Or not. It’s your server; I’m not gonna tell you what to do.

Myself, I decided to half-ass it a bit. Pipelines just launches your Docker image, copies your project into the container, sets your project to be the current directory, and begins running your script. I can do that:

$ docker run -it --volume=C:/Users/b/src/blag:/blag --entrypoint=/bin/bash bpollack/blag-builder:latest
$ cd /blag
$ ./build

The first line says to run a Docker container we built interactively (-i) on my terminal (-t), mount the Windows directory C:\Users\b\src\blag at /blag in the container, and then launch bash once the container is ready. In the next two lines, I demonstrate my amazing CS skills to change to the appropriate directory and run the script, proving that, even in this advanced day and age, I can still play the part of a computer.

This of course failed at the push step due to SSH keys not being set up (more on that in a second!), but otherwise seemed to work fine, so it’s good enough for me. Onwards!

Step four: create the pipeline

The pipeline spec is really simple: you give it a Docker image (which we just made), a condition of when to run (I’ll just have it run whenever there’s a new changeset, which is the default), and what steps to run when the condition is met (in our case, we need to run one single step, which is the build script we just wrote). So that file, in its entirety, is:

image: bpollack/blag-builder:latest

pipelines:
  default:
    - step:
        script:
          - ./build

Granted: being YAML, this looks like the result of an editor with broken indentation rules. But it’s at least pretty self-explanatory: we give it a Docker image (it defaults to using Docker Hub, which is great, because so did we), we give it one pipeline, called default, and give it the sole job of running a one-line script that calls our real build script, which we wrote together in the previous section after much struggle. Commit this as a file called bitbucket-pipelines.yml in the root of your repository and push.

Step five: add relevant SSH keys

Congratulations! If you did everything perfectly at this point, Bitbucket will create your pipeline, run the build, and it will fail!…because you don’t allow random people to push stuff to your server over SSH.6 Fair enough. For reasons I’m not honestly entirely clear about, Bitbucket won’t let you specify SSH keys to use for Pipelines until at least one pipeline exists. But now that we’ve got a pipeline—it’s the one that just failed—you’re good.

In your repository, click on the Settings tab, and then, under the Pipelines heading, there’s an entry called SSH Keys. Still with me? Good. These are SSH keys that will be loaded into your Docker container right before your script runs, and which will be used to push code to your server. I recommend following their advice, generating a key with them, and then adding that key to the ~/.ssh/authorized_keys file in the appropriate user account. You’ll also need to tell it what servers you’ll be using these keys with so that Bitbucket will detect if your server gets swapped out and can avoid deploying your precious secrets to some nefarious machine.

(Incidentally, I recommend using those Bitbucket keys only with a heavily locked-down account that’s dedicated purely to handling the deploy, but how to do that is a bit outside the scope of this particular post.)

Step six: you were actually done at step five

That’s it; we’re done. You do need to either re-run the pipeline manually at this point or push a dummy changeset to make sure, but everything should honestly Just Work™.

That’s honestly it; a hair over twenty lines of code got you free continuous delivery. You can get more fancy at this point if you’d like (I’m probably going to make sure the pipeline runs only when certain bookmarks are moved, rather than on every push, for example), but that’s the fundamentals. Three short files, each ten lines or less.
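As a concrete example of that fancier setup, limiting the pipeline to a single branch is just another stanza in the same YAML file; the branch name below is illustrative, and I haven’t verified exactly how Bitbucket maps Mercurial bookmarks onto it:

```yaml
image: bpollack/blag-builder:latest

pipelines:
  branches:
    published:
      - step:
          script:
            - ./build
```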


  1. I briefly had what I guess could qualify as an outage when I accidentally ran a deploy on a machine that didn’t have Pygments installed—which promptly deleted every single code snippet on the site. Oops. [return]
  2. Relatively speaking; the feature went into beta in March 2016. [return]
  3. It’s not free-free, but you get 50 minutes of build time with the free account, and building my blog with Pipelines takes about 16 to 25 seconds, so I figure I’ll be fine for a while. [return]
  4. I won’t stop you from using this image, but I really discourage you from doing so; I make zero guarantees I won’t do horrible things to it in the future. [return]
  5. Two, actually—one for Windows and one for Unix—but since the Windows Subsystem for Linux has stabilized, all the Windows one does is call the Unix one. [return]
  6. I sincerely hope. [return]

The Paradox of Apple Watch

When the Apple Watch first came out, my initial reaction was basically disgust. Everywhere I looked, I saw people already Krazy Glued to their phones, missing the world around them to live instead in the small mini-Matrix in their pocket. Now, Apple was proposing to add additional distractions right on our wrist, making it even easier to ignore real life and stay focused on a screen instead. Not only was the Apple Watch not for me; it was a sad commentary on how tech was ruining our lives.

Yet I kept seeing more and more friends of mine falling victim to the Apple Watch. They insisted it was actually great, that I was the crazy one, that it was the next revolution in tech, that they loved how it kept them in touch with everyone even more easily, etc., etc., etc. I’ve heard this song before, and while I doubted I’d agree, it became equally obvious that the Apple Watch wasn’t going anywhere. In the interest of making sure I could stay not just with it, but also hip, I bought one a few weeks ago. I figured I’d play with it for a couple weeks and return it, getting a nice blog post out of it about how I was right and the Apple Watch made my life worse.

But what I’ve found instead is something else: properly used, at least for me, the Apple Watch isn’t yet another distraction. Instead, it can allow me to stay informed, without constantly pulling me out of the moment. It’s actually freed me to leave my desk much more easily, without succumbing to staring at my phone instead. In other words, it’s had the exact opposite effect I anticipated.

The Problem with iPhone Notifications

Here’s my basic problem: I’m a manager. I have twelve direct reports spread across four disparate projects, plus I also provide management support to our Infrastructure project—you know, the one project at Khan Academy where even we have alerting and chatbots and whatnot to let you know when things have exploded. This means I have meetings constantly, and I’m pinged on Slack constantly, and I get an obscene volume of email. And each and every one of these constantly wants your attention, by default sending tons of notifications basically all over the place. Phone, computer, tablet, cyborg sitting next to you muttering about killing all humans, everywhere.

Some of these distractions I can easily disable while still doing my job. For example, since emails rarely require an immediate response, I turned off mail notifications completely, and only bother checking messages every hour or so. That’s socially acceptable, and keeps me available while also letting me get work done. I likewise killed notifications from tools like Trello, OneNote, Asana, and anything else that almost certainly could wait for a regularly scheduled check-in.

But Slack and meetings are trickier: while many Slack notifications can genuinely wait, many can’t, so I do need to actually read the notifications and make a decision on whether to respond. (I actually just ranted about this in detail if you’re bored.) My meetings likewise frequently shift radically during the day, so the fact I had been clear at 11 doesn’t mean I still am, nor does the fact I originally had an interview at 2 mean I still do.

I thus fell into this pattern where I’d get a buzz from Slack, take my phone out, read the alert, realize I had a pile of unread messages in some room or other, read through those, get distracted paging in context for the conversation, remember to recheck my calendar for any meeting changes, put the phone away, forget what I was doing, and then repeat. My spouse grumpily noticed that even on date nights, even when I was trying to stay in the moment and wasn’t honestly thinking about work, even when my phone was in Do Not Disturb mode and couldn’t have buzzed, I’d still sometimes mechanically take my phone out, look at the screen, and put it right back—just because I was so used to doing that motion during the day that it had become a habitual reflex.1

In this environment, adding the Watch seemed like a bad idea. I’d already cut down my notifications as far as I could; putting them on my wrist seemed like it’d make an existing problem even worse.

So I was quite surprised when exactly the opposite happened.

Enter Apple Watch

Here’s the thing: the Watch can’t actually do all that much—at least not in the way a smartphone can. It ultimately really does three things very well, and everything else very poorly:

  1. It’s a great way to track my jogs. That’s not why I bought it, but it turns out it’s great at it, and I use this feature a lot.
  2. It is indeed very good at giving you notifications, usually along with a small handful of possible response actions, if applicable.
  3. It is also quite good at taking certain kinds of very quick voice commands—basically the same subset Siri already handles well on the iPhone.

That’s it. Doing anything other than these is generally somewhere between painful and a genuine farce. Yeah, Todoist and other task lists exist on the Watch, but they fit maybe two to three things on the screen at once; you’d have to be a masochist to enjoy it. There’s a similar story with note-taking apps, like OneNote: yes, the app exists, and it honestly does the best it can with voice entry, but that gets old really quickly. Tools like Maps and Yelp are so limited that I’m forced to wonder why anyone bothered in the first place. And trying to read something long-form like an email on the Watch…I mean, yes, you technically can, but you’d have to be really desperate. Indeed, any use that requires reading or generating a substantial amount of information is either impossible or so difficult that I avoid it at all costs.

And…that weirdly turns out to be perfect. Fine; I can’t avoid real-time Slack and calendar notifications and do my job effectively, so they’re just going to be part of my life for now. But when I get them on the Watch, I glance down, make a snap decision on whether it requires me to do anything, and then either go back to doing what I was doing immediately (the overwhelming majority of the time), or, if the notification does require an immediate response, I walk back to my actual computer to handle it appropriately. In mere days, my habit of pulling my iPhone out of my pocket basically evaporated. Not only that; because I already try very hard to separate my work and personal devices, and because I was now responding to anything long-form on my work PC rather than my phone, I basically obliterated all of my media grazing habits overnight.2

The actual impact has been obvious to me: my work velocity increased, my iPhone battery lasts disturbingly longer, and I find myself much better able to focus whether we’re talking 1:1s with coworkers, or personal time with friends and family. Plus, I can now actually take a nice midday walk without having to stop every two minutes to check my phone. It’s honestly been an incredible win.

Mindful(ish) Notifications

I’ve been making a very deliberate effort for the last six months to pursue what I’ve been calling mindful computing—basically, trying to use technologies and develop habits that discourage distractions and that encourage and reward getting onto a computing device to do some specific action, and then putting the device away when you’re done.

I cannot quite say that the Apple Watch fits cleanly into this rubric. Indeed, as I noted, notifications are both one of the things it does best, and the explicit reason I ended up keeping it—and I don’t think anyone would argue that seeking out a distraction-making device is a good example of mindful computing as I defined it.

But I do think that, properly used, the Apple Watch can be mindful-ish. If you are in a situation where you genuinely cannot fully avoid having some form of distracting notifications and still be effective, the Watch, specifically due to its incredibly limited abilities, can actually be an amazing compromise.

It’s one of the few recent technology purchases where I can say with a straight face that it meaningfully improved my quality of life. And while it didn’t do so in a fundamental way, and it may not be for everyone, I am surprisingly happy that I ended up ignoring my initial judgment and taking the plunge.


  1. There’s a valid question here of why these are on my phone this way in the first place; after all, if I’m at my PC, I could put the notifications there. And in truth, when I am sitting at my desk, I usually put my phone into Do Not Disturb mode for this exact reason. But one of the nice things about being remote is I can frequently attend meetings while taking a walk, or read through some emails or documents in the nearby park—but if I do that, then I do in fact need all these notifications on my phone in case I need to switch up my plans/head back to the house/get back to my laptop. [return]
  2. The unexpectedly positive impact of suddenly not reading reddit, Twitter, and the like anymore is a great topic for another day. [return]

Why I Hate Slack and You Should Too

Yeah, that’s right: there’s finally something I feel so negatively about that I’m unsatisfied hating it all by myself; I want you to hate it, too. So let’s talk about why Slack is destroying your life, piece by piece, and why you should get rid of it immediately before its trail of destruction widens any further—in other words, while you still have time to stop the deluge of mindless addiction that it’s already staple-gunned to your life.

1. It encourages use for both time-sensitive and time-insensitive communication

A Long Thyme Agoe, in the Days Before Slack, I had three different ways of being contacted, and they served three very different purposes, with radically different interrupt priorities. I had emails, which could wait; I had phone calls, which couldn’t; and I had the company IRC server, which was usually where I went to waste time by sharing links to things that either made me get very angry or made me laugh hysterically.1 In this system, the important, time-sensitive thing can interrupt me, and everything else can’t. That’s great for productivity and great for my sanity, and the people were happy and things were good.

Slack totally just trashed everything. It’s email and phone calls and cat pictures, all rolled into one. So sometimes Slack notifications are totally not time-sensitive (@here Hey I need coloring books for my niece, any suggestions? also she’s afraid of animals clowns food people and dinosaurs and also allergic to paper kthxbye!), and sometimes they require an immediate action (@here Dr. Poison just showed up and tl;dr maybe run for it idk?)—and until I’ve read the message, I have absolutely no idea whether it deserves my immediate attention. That order’s backwards and it makes me feel bad because it is bad.

This is actually a whole thing in psychology: if you give a mouse food every time they push a lever, they’ll eventually only push it when they’re hungry, but if you only give them food sometimes when they push a lever, then the “reward uncertainty” will actually cause them to push the lever more often.2 And hey! Here we are, all checking Slack 23,598 times a minute for each notification, because who knows, maybe this one matters. It’s all the pain of Vegas with none of the reward and somehow we’re still hooked.

So unlike before, now I get interrupted constantly, and I have to break my flow to figure out whether getting interrupted was worthwhile, and for some reason this is supposed to enhance business productivity.

Right. Sure. You go on being you, Slack.

2. It cannot be sanely ignored

“Okay, pea-brain,” you mutter, “so just turn off Slack notifications when you need to focus for a while, and catch up later.”

I once thought as you did, but part of the reason you end up addicted to Slack is that catching up on what you’ve missed feels very similar to when you were back in college and were a day before the final and suddenly realized that your plan of not highlighting the book or taking notes all semester may’ve been a Bad Idea™. About the only way Slack bothers grouping information is by room3—and as anyone who’s been trapped in a heavily-used Slack system can tell you, the room names and descriptions are at best weak guidelines, so you can’t even necessarily prioritize what to catch up on even at that gross level of granularity.4 Nope: your only option is going to be to read the entire backlog, from start to finish, or else just accept that, at some distant point three months from now, you’re going to look like a complete idiot when you’re the only one who didn’t know that all employee blood was now going to be collected for occult purposes.5

Granted, this isn’t Slack’s fault per se, at least insofar as every chat system has this problem, but Slack’s attempt to become your One True Source of Everything, from scheduling to reminders to SharePoint replacement to company directory, means that a huge amount of information that previously would’ve been in emails ends up in Slack, and only in Slack. And that’s a very deliberate decision by Slack to make themselves utterly indispensable, so I feel very comfy screaming at them until I go hoarse.

3. It cannot be sanely organized

Okay fine, so you read through the whole backlog from your vacation, which took you barely even 70 hours, and have extracted the six actual to-do items from it, one of which involves something about pentagrams and goats that you’ll decipher later. Great. Mazel tov. Phase one complete.

Now what? Slack has no meaningful way to organize those six messages. There aren’t folders. There isn’t a meaningful “do later” pile. (There’s /remind, to be fair, but, as noted previously, that just generates more notifications, which we’re trying to avoid. Theoretically.) So you’re left with…what, exactly? Right-clicking on each individual message at the end of the chain, copying the link, and pasting that into some external to-do app? Which, of course, when you click back on the link, will require you to re-read at least some amount of unstructured backlog, including a bunch of unrelated garbage about reconfiguring CARP on the edge servers and something about epoll and multithreading and a panda birth video that just happens to be there, just to remind yourself what everyone said?

Welcome to hell. Population: all Slack users.

4. It’s proprietary and encourages lock-in

In an ideal world, I could circumvent a lot of these issues in any number of ways. For example, I’m still active in open-source sometimes, and the open-source equivalent of Slack is (usually) still IRC. But IRC, being a well-documented6 older system, has tons of different tools to extract data from it. If I want to be nerdy, I can yank individual messages from ERC straight into org mode, or write custom scripts for WeeChat, or use any of literally dozens of clients written in Ruby and Python and Io and Java and C# and thousands of other programming languages plus also JavaScript and do really bespoke things. And even if I don’t, the plethora of macOS and Windows clients means that an off-the-shelf or trivially customizable AppleScript or WSH solution is never far away.

But Slack is Slack, and Slack is Electron, and Electron is Chrome—Chrome surrounded by an unscriptable posterior that eats up 100 MB of RAM per channel, plus an extra 250 MB for each Giphy.7 And while I can almost script my way out of this hell, I really can’t. Not as a mortal end-user, anyway. To the extent I can do anything, I need to write directly against the Slack API, rather than using something commonplace like XMPP or IRC, so goodbye portability. And even if I’m willing and able to write against the proprietary API, a lot of the more interesting things you can do require being an organization admin, and require being enabled globally for the entire instance. So goodbye, personalized custom integration points, and hello, one-size-fits-zero webhooks. This is my life now.

5. Its version of Markdown is just broken

I’m going to use up an entire heading purely to say that making *foo* be bold and _foo_ be italic is covered in Leviticus 64:128 and explicitly punishable by stoning until death.

6. It encourages use for both business and personal applications

All this would be merely infuriating and drive me into a blind murderous rage if it were just something I dealt with at work, but oh no, now the fun groups I interact with are turning to Slack! That’s right: the same application and environment that makes a full-blown Dementor-style kiss with my attention span for work can now corner me in a back-alley when I just want to shoot the breeze with friends.

I glance at the Slack icon. I have nine unread messages. Neat. Are they from work? I should probably actually go read those and see which ones require I do something. Are they all the ex-employees of that one company I used to work for? It’s probably a bunch of political screaming about stochastically sentient Cheetos that somehow won the presidency, and I’m honestly a bit tired of reading about that at this point.8 But at any rate, I can’t know until I take my phone out and read the notification—and sometimes even then I can’t, since of course some of the people I talk to are on multiple Slack instances and have a habit of saying things like “@bmp did you look at this it’s really concerning?” which requires I actually load up the freaking client and find the instance and the message and finally learn to my utter horror that I shall never be given up, let down, or run around/deserted.

Give up and yield unto Cthulhu Slack, destroyer of focus

Stop using Slack. I hate it; you also should hate it. It’s distracting. It murders productivity. It destroys old tools. It exploits psychological needs in such a way that it kills your soul and hangs it up to dry over a lava pit, where the clothesline catches fire and your soul falls into the fire and somehow you’re not dead, just a zombie, forever, reading zombie notifications on your zombie iPhone and wondering whether “@here brains?” is a lunch invite or an insult until you read the backlog. Friends do not let friends use Slack. I have been utterly convincing and you should listen to me in my capacity as low-grade Internet celebrity and do what I say because mindlessly obeying authority is the right thing to do.

But realistically? We’re all still using Slack, because it’s there, and we have to, and it’s the best option according to our collective judgment, which I do have to point out may empirically be lacking at this point. So if we are stuck in Slack, then maybe, just maybe, we could start trying to restore Slack to a place where it’s genuinely for ephemeral ideas. Where it’s indeed the place for ad hoc conversations, but not a canonical store for their conclusions and action items. Where I don’t have to read the backlog when I come back from vacation, because anything actionable will at worst have been duplicated as an email or a Trello card or what have you. Where I can disable Slack notifications because I can know, with certainty, that any activity can wait until I’m back at my computer and actually want to spend time chatting on Slack.

In the meantime I’ll be right back because either the data center just exploded or someone posted a picture of a goat fainting and The Notification God must be placated.


  1. This function is now provided by reddit. [return]
  2. Aziz Ansari, Modern Romance (New York: Penguin Books, 2015), 59. Yeah, I could’ve given you a scientific paper, but this book is way less boring and made me stupidly happy I’m not in the dating pool anymore. [return]
  3. Slack honestly is trying to address this with threads, but the problem, which anyone who tried using a system like Wave or Zulip or something similar could tell you, is that the origami crane of organizing information neatly by topic runs basically head-on into the rabid bull of real-time chat and then everything falls apart, so these don’t actually get used effectively in practice. Hell, whether a conversation uses a thread or not in Slack in the first place—and whether a threaded conversation stays that way in Slack (thanks, “Also send to #channel” checkbox! may the fleas of a thousand camels infest your armpits!)—seems sufficiently random that I’d be comfy using it as the main entropy source for a digital slot machine. [return]
  4. They’re trying really hard to address this recently with concepts such as “All Unreads” and (very recently) “Important Messages,” but while these certainly make catching up go faster, they don’t actually resolve the issue unless you really trust how Slack’s deciding what’s important. Based on my experience, we’re very much not there yet. [return]
  5. Didn’t you see it? It was in #kitten-pics. You were @here-messaged, so that’s on you. Now roll up your sleeve and welcome in the Lord of Darkness, His Holiness Spirit Agnew. [return]
  6. I mean…as far as that goes, anyway. [return]
  7. I genuinely have no idea if this scales by channels, but since I’m in ten channels and wasting 1.2 GB, I’d honestly prefer to assume it’s by channel, rather than the alternative that Slack needs a gig of RAM just to run. Which it probably does. But let’s assume. [return]
  8. Not because they’re wrong, mind. I just can only handle so much ranting about a human/toupée hybrid before I start to zone out. [return]

JSON Feed with Hugo

Every couple of years months [checks wristwatch] weeks, we reinvent a file format for no particularly good reason. Don’t get me wrong; we come up with all kinds of reasons to justify what we’re doing—easier to read, better for the environment, It’s Got Electrolytes™—and sometimes, the new format does genuinely represent a meaningful or necessary improvement. But more often than not, we’re just reinventing things out of boredom and a nagging sense, deep down, that if we don’t keep changing everything constantly, normal people may grok that most of the reason programming is complicated and weird is because we put a lot of effort into making it that way.

So I was pretty psyched when JSON Feed came on the scene a couple weeks ago, because it’s pretty much the absolute rawest possible example of a file format that’s unrepentant change for the sake of change. Literally every language I interact with has perfectly good tools, right in the standard library, for generating and consuming RSS and Atom. Until a few weeks ago, none had any tools for working with JSON Feed whatsoever because it didn’t even exist. But since, and I quote from the JSON Feed manifesto, “developers will often go out of their way to avoid XML,”1 JSON Feed is now a thing, and we’ve already entered the phase where every language I use has a pile of third-party libraries for the format, most of which will be unsupported going forward, and all of which have interesting quirks and bugs that no one fully understands yet. I thus figured it was high time to support JSON Feed on bitquabit.

There was unfortunately a caveat. Some time ago, I moved my blog over to Hugo, a static site generator, so that I wouldn’t have to spend time maintaining my own blog software. In general, that’s been brilliant, but whereas it’d have taken me about five minutes to add JSON Feed to my old blog, I had no idea how to add it to a Hugo site. The highest-ranked link on Google is just vague enough to make me think I should get it but not be able to, and I can say in retrospect that Hugo’s documentation on alternate output formats makes a ton of sense after you already know what’s going on—but not before.

So without further ado, here’s how you add JSON Feed to a Hugo site:

Add some magic to config.toml

We want to tell Hugo that there’s a thing called JSON Feed, which is a JSON file, and we want to assign it a file extension. That’s easy enough. In your config.toml, just slam the following lines at the end:

[outputFormats.jsonfeed]
  mediaType = "application/json"
  baseName = "feed"
  isPlainText = true

mediaType is the file’s MIME type, baseName is just the name of the file template before the extension2, and isPlainText tells Hugo that it shouldn’t do any HTML-related shenanigans. Whatever you slap after the . in outputFormats at the beginning, combined with the media type, defines the expected file extension, so everything we just wrote applies to files that end with .jsonfeed.json. Putting everything together, we’ve now told Hugo that feed.jsonfeed.json files are JSON Feed templates. So far, so good.

Next up, we tell it that we would like it to generate a JSON Feed if one exists. If you already have a section in your config.toml labeled [outputs] (you don’t by default), you’ll need to alter it, but otherwise you can just add this at the end:

[outputs]
  home = ["html", "jsonfeed", "rss"]

All that says is, “hey, when you’re generating my home page, in addition to HTML and RSS (which are defaults), also generate this "jsonfeed" thing,” which (conveniently) we just defined.

Add a template for the JSON Feed

We told Hugo that our JSON Feed templates would end in jsonfeed.json and that the base name would be feed, so go create a file called feed.jsonfeed.json in the root of your layouts/ directory and put this in it:

{
  "version": "https://jsonfeed.org/version/1",
  "title": "{{ .Site.Title }}",
  "home_page_url": {{ .Permalink | jsonify }},
  "feed_url": {{ with .OutputFormats.Get "jsonfeed" -}}
    {{- .Permalink | jsonify -}}
  {{- end }},
  "items": [
    {{ range $index, $entry := first 15 .Data.Pages }}
    {{- if $index }}, {{ end }}
    {
      "id": {{ .Permalink | jsonify }},
      "url": {{ .Permalink | jsonify }},
      "title": {{ .Title | jsonify }},
      "date_published": {{ .Date.Format "2006-01-02T15:04:05Z07:00" | jsonify }},
      "content_html": {{ .Content | jsonify }}
    }
    {{- end }}
  ]
}

Most of that’s boring if you’ve seen the JSON Feed format description, but a few things to point out:

  1. We’re programmatically grabbing the JSON Feed permalink, rather than hard-coding it. If you have multiple feeds on your site (e.g., one per category), that’ll help things work out.
  2. The {{ range $index, $entry := ... }} silliness is the only way in Go templates to handle fence posts. In this case, because JSON does not allow trailing commas, we need to prevent having an extra comma at the end, and the easiest way to do that is to inject a comma before every entry except the first. Caching the $index lets us easily do that (and taking advantage of 0 being falsy in Go templates makes the conditional short, too).
  3. Finally, the hyphens on some of the {{ ... }} injections delete preceding (if directly after the opening braces) and trailing (if directly before the closing braces) whitespace, which mostly isn’t programmatically necessary here, but keeps the JSON looking clean.

The last step is to tell the world about your new feed. On your main index page, just add

<link
  href="{{ with .OutputFormats.Get "jsonfeed"  }}{{ .Permalink }}{{ end }}"
  rel="alternate" type="application/json" title="{{ .Site.Title }}" />

There shouldn’t be anything surprising there. We’re reusing the {{ with .OutputFormats.Get ... }} trick from earlier to avoid hard-coding the feed URL, and the rest is straightforward templating.

So there you have it: that’s all it takes to add JSON Feed to your Hugo blog. I look forward to the next entry, in which we can explore how to add YAML Feed, EDN Feed, and maybe some custom Microsoft-specific extensions to both of those as well.


  1. No one tell them what HTML is. I really do not want to see JHTML. At least, not more so than I already have it with React. [return]
  2. "index" would’ve been another fine choice, and in line with other Hugo templates; I just found "feed" clearer. [return]

Working remotely, coworking spaces, and mental health

This should be a hard blog post to write–after all, it’s the one where I openly admit I had an emotional breakdown and saw a mental health professional–but it’s actually easy. And it’s easy because it has a good ending: facing long odds and a frustrating situation, I ended up turning everything around and getting a place where I love my job and I’m a happy person again.

But this is not one of those times where the journey was the fun part. No, I’d really have preferred to skip the journey entirely.

So this is the post I wish I’d read myself back when I decided to work remotely. If you don’t want to read the whole thing, I can even summarize it for you, right here: different people like different kinds of work environments; “working remotely” doesn’t have to mean “working from home”; and if you’re going to work remotely, you should find the work environment that’s the right fit for you.

I demand infinite cake

A bit over a year ago, I moved out of New York City. It’d been great for a decade, and I had tons of friends, but I hit a point where it was draining the life force out of me. Simple pleasures, like going for a hike or joining friends for a potluck dinner, ended up these huge logistics nightmares that took so much effort they stopped being enjoyable. Knowing you theoretically could see 209 different Broadway shows stops being exciting when simply bringing a turkey to your friends four miles away can trivially turn into a three-hour hell of cramped subways, traffic jams, or The Hunt for the Mythical Available Taxi. Meanwhile, as my spouse and I thought more and more of having kids, the reality of just how much raising a child in NYC costs was something we felt we couldn’t ignore anymore.

This all posed a bit of a problem: I happen to enjoy making money; I do this best by working in tech; and the two hottest markets for that are New York City, which I wanted to leave, and the Bay Area, which is arguably worse. Where we wanted to move was the Raleigh/Durham/Chapel Hill area of North Carolina, known as the Research Triangle, but while this area has tons of tech jobs, it doesn’t have some of the companies where I most wanted to work.

A decade ago, I would’ve had to pick which I cared more about. We’d either have stayed and dealt with NYC, or we’d have moved here anyway, and I’d have taken a job at one of the many great startups around here. But this was 2015, and there was a great way to get everything I wanted without compromising on anything: I could leave management, go back to being a developer, and join the hordes of programmers who worked remotely. I had quite a few friends who had taken remote dev jobs and were having a blast, and it’d let me move wherever I wanted and still work for whomever I wanted. So, almost before I knew it, I left my job working on-site as a manager in NYC, and began working remotely as a developer for Khan Academy, a company I’d wanted to work at for literally years.

Which is how I ended up having an emotional breakdown this past February.

The sun, the moon, and the stars

That’s not how it was supposed to work.1 Working remotely is supposed to be the best thing since sliced bread.2 If you listen to people like Jason Fried in Remote, it’s basically a cure-all for everything remotely wrong in modern office culture. Modern offices are noisy and chaotic; your home office will be serene and peaceful. Modern offices are plagued with interruptions; your home office allows you to ignore the outside world and focus narrowly on code. Your commute need no longer bookend your day, your coworkers’ illnesses need no longer presage your own, and you can even trivially work outside in an idyllic park surrounded by birds, nature, and psychotic hungry face-eating squirrels if the fancy strikes. Beyond these material benefits, a remote-friendly office has to change its work process in at least one key way that gives you a massive ancillary benefit: it must adopt asynchronous communication as the law of the land, which in turn means fewer meetings and a much easier time scheduling activities. Want to see your kids in a play? Want to ditch the play but go to a theme park at the same time slot? Want to check in with Nana at 2pm because it’s about time you played at least someone in Overwatch who’s at your skill level? Just work at a remote-friendly company, and all this can be yours.

So when I began my remote life, I had nothing but the highest expectations. And, to be honest, they were largely initially met. I not only got my serene and quiet office; it got some great new features, like the ability to make meals with long cooking times, or to customize my office exactly how I wanted it, even if that meant blasting 90s punk rock out my speakers at high volume. The flexible schedule was also indeed nice, and the mixture of that, plus the relative paucity of meetings, did initially drive up my developer productivity higher than it had been for years. It really did seem to be living up to the hype.

But then the cracks started to appear.

The dark side of remote

One of the first warning signs was that my “off days” began to get more common. Look: all developers have off days. I’ve even talked to developers from the ’60s and ’70s who used to get them, and they didn’t get excuses like “a white nationalist egg avatar tweeted ‘ideas’ at me” to blame it on. But I was used to having, at most, a couple of off days a month. Suddenly, I was getting one to two per week. My colleagues didn’t seem to notice, but I sure as hell did, and I had no idea what to do about it.

Then I gradually stopped taking advantage of the benefits remote work was supposed to provide. At the beginning, I’d routinely go for walks or hit the gym midday, I’d echo an on-site Khan Academy tradition and make fresh loaves of bread, I’d take breaks to meditate. I’d even occasionally work from nearby parks, face-eating squirrels be damned.3 But, gradually, that stopped, eventually hitting a point where, more often than not, I would go days at a time barely leaving my apartment. I created an off-color dent in the rug in front of my computer because I was moving so little. It was seriously that bad.4

Partly causing this, and partly a result of it, my already limited new social sphere began to shrink. I quit going to meetups. I stopped attending workshops. It became entirely possible for me to have no meaningful in-person interaction with anyone other than my spouse for days at a time.

Things finally came to a head one February evening when I had an emotional breakdown. I wrapped up a completely normal and uneventful day at work, did my fifteen-foot commute from my office to my living room, and promptly found myself vomiting from stress, saying how much I hated–truly hated–my job, and crying as I realized how unhappy I was with my new life.

About those workplace interruptions

I calmed down, took a sick day, scheduled a therapist, and began trying to figure out what the hell was going on. I used to love being a developer, and working remotely was supposed to be the bee’s knees for that. Yet here I was, miserable. Many close friends of mine talked about how much their lives had improved since they began working from home, yet mine was falling apart. Clearly, the problem was me.

“Ah ha!”, I hear you say, because you have an Amazon Echo in your home. “This is the part where you say, ‘But lo, it was not me!‘”

Wrong. It really was me. The trick is to recognize that it wasn’t something wrong with me. It was just that I hadn’t been honest about who I was, and so I’d set myself up in a situation that was really, really caustic for my mental health.

A lot of people tend to regard “introvert” and “extrovert” as binary options, but the reality is that it’s actually a spectrum. Some people do lean heavily towards one end or the other, but many people have at least some aspects of both. For example, I personally recharge by spending time by myself, and I genuinely need “me time” to be a happy person, both of which are traditionally thought of as introvert tendencies. But I’ve also always been very social. That’s in fact why, while I do really enjoy software development, I’ve always been drawn a lot to the interpersonal side of the software process, like being a project lead or a manager: those roles still leverage a lot of my left-brain analytical muscles, but they also provide lots of social opportunities. This is also probably why I enjoy working in open offices: yeah, they’re honestly pretty awful to work in when I really need to be heads-down trying to fix a bug, but they also encourage a very collaborative environment that I’ve always loved when I’m in the early stages of a project.

Working from home might genuinely be the ideal environment for those closest to the introvert end of the spectrum, and I think those are the people who form angelic choirs of blog posts asking if you have met their lord and savior, the Fortress of Infinite Solitude, Home Office Edition. For them, the quiet work environment makes their jobs dramatically more enjoyable. But for me, it was the opposite: I’d gone from management (high social interaction) to software development (lower social interaction), and from working in an office (hundreds of people) to working from home (two cats), and expected that this would all be fine.

But of course it wasn’t fine. And guess what? There are tons of people out there for whom it wouldn’t have been fine. And if you’re at a similar place to me on the spectrum–maybe a developer who ends up gravitating to positions that involve a lot of interaction with the product or sales teams, or one who really enjoys doing lots of mentorship even though it slows you down–it probably won’t be fine for you, either. In fact, like me, you may find yourself being utterly miserable at a job that by all rights ought to be your one true calling.

But good news! Introversion and extroversion are a spectrum, and so ideal working environments are also a spectrum. In my case, while I may need less social time than some, I emphatically do need daily socializing time, and that led me to what ended up being the perfect solution for me: coworking.

The social benefits of coworking

When I first figured out what was really going on, I felt trapped. Working remote was awful for me, so I needed to stop, but that in turn meant leaving a job I knew I’d otherwise love, and that I’d wanted for a long time. So that stank. Thankfully, I’m annoyingly stubborn, so, rather than give up, I decided to reëvaluate something I’d immediately discounted when we initially moved down: getting an office.

On the surface, getting an office didn’t make any sense, which was a big part of why I rejected it. I don’t have clients, so I didn’t need an office for my professional image, and deliberately undoing some of the benefits of remote work (regaining a commute, losing workspace flexibility, avoiding interruptions) while not reaping the benefits (you’re not gonna suddenly have a spontaneous hallway discussion with coworkers about that project you’ve all been working on when your coworkers are located in another castle) seemed pretty ridiculous.

But when viewed through the lens of what I’d been suffering from, coworking–or at least, the right kind of coworking–might make a ton of sense, I realized. In particular, if I could find one that was both a good work environment, and also had a real sense of community, then I might find a way to turn things around and end up in a good place.

The bad news is that many coworking spaces emphatically do not fit this bill. It’s not that they’re bad; they’re just optimized for other types of people with different needs than I’ve got. For example, one of the first coworking spaces I looked at had food trucks (good) and a decent community (good), but its common area was insanely noisy (bad), and the private offices they provided to counterbalance that had no windows (very bad), weren’t near the communal space (really bad), were ridiculously expensive (tax-deductible), and could only be accessed by bribing the bridge troll with a fish (gross).5 If you need coworking primarily to get away from a distracting or noisy home environment, that might actually all be perfect, but it would’ve been the exact opposite of what I personally needed. Another I looked at was great…but would’ve easily resulted in an hour-long commute by car, which I rapidly determined was one of the few ways to improve upon a hellish New York commute.

I ended up lucking out. Right about when I was thinking I should give up, Loading Dock, with explicit goals of both having a community and being socially active (which gave it a strong alignment with Khan Academy’s culture), opened up very close to me, and to cut a long story short, it’s ended up turning my remote gig at Khan Academy into one of the best jobs I’ve ever had. And because of that, it’s improved my general mood, decreased my background stress level, and generally turned my move to North Carolina from something I regretted into something I’m loving.

Several sizes fit most

The thing with all of this is that it was the right move for me, and while I think it’d probably be the right move for a lot of people, I don’t think it’s the right move for everyone. Working situations are not a one-size-fits-all kind of thing, and I think the tech community can be surprisingly hostile to anything that isn’t a one-size-fits-all solution. When we code, we’re encouraged to find the One True Solution™, and I think that can make us overly biased to believe that when we’ve found the best solution for us–whether we’re talking Vim v. Emacs, C# v. Java, OpenBSD v. an insecure OS, etc.–we’ve by extension found the best solution for everyone. In the case of working remotely, I think those who hated traditional office environments and then found working from home to be amazing for them concluded that that was the One True Solution™ for happy workers. But the reality is that neither working from home, nor working from the modern open office, is best for everyone; there simply isn’t a single solution for work environments.

That’s the great thing about coworking. For the first time in my life, I got to pick my company and my job separately from my office. I missed that at first, and saw the options only as working on-site with a company or remote from my home. In my case, what I actually needed was something in between.

If you’re thinking of working remote, then think about what kind of working environment you’re happiest with before you take the job, and make sure you’ll have that environment available to you. Are you sad when a lot of your office is out sick, or are you relieved? Do you get uncomfortable when you’re in quiet environments for too long, or do you revel in them? Do you feel weirdly lonely when you’re in a noisy coffee shop, or do you feel energized? Use experiences like these to help you form an opinion of what will make you happiest, and then go search for an environment that’s close to what you’re looking for. It’ll help you avoid learning the lesson I did the hard way, and will let you enjoy your job from day one instead of day 200.


  1. Duh. [return]
  2. At least for developers, anyway. Bakers might have some ironic difficulty. [return]
  3. Here, talk to Dewey, he knows more about it than I do. [return]
  4. This is not only true; I have the move-out bill from the apartment building to prove it. [return]
  5. Okay, okay. One of those things isn’t true. Guybrush insisted I mention it as a red herring. [return]

Separate, support, serve

Yesterday, Microsoft continued down a path that they’ve been pursuing for a while by providing even tighter ties between Windows and Linux–including allowing running unmodified Ubuntu binaries directly in Windows. Reactions were, to say the least, varied; many people were preparing for the apocalypse, others were excited about being able to use Unix tools more easily at work, and still others were just fascinated by how this was technically accomplished. These reactions mostly made sense to me.

One did not. Especially on sites like Hacker News, many responses were screaming that people needed to be scared, to remember Embrace, Extend, Extinguish, to run for the exits as quickly as possible.

I find this reaction frustrating and depressing, not because it’s offensive, but because it’s so obviously incorrect and intellectually lazy that it gives me a headache.

I want to do two things in this blog post: convince you that Embrace, Extend, Extinguish is a grossly invalid understanding of Microsoft’s strategy; and convince you that an alternative strategy of Separate, Support, Serve provides a much better lens to view the modern Microsoft.

The Death of the Three Es

I’m not going to try to persuade you that Microsoft isn’t evil–if you believe they are, you’re wrong, but I don’t honestly care–but I am going to explain to you that, even if Microsoft were still evil, they would still not be doing Embrace, Extend, Extinguish.

First, I want to quickly remind you what the computing landscape looked like when Microsoft was using that strategy. Windows ruled everywhere, in a way that’s almost impossible to imagine today. Virtually all desktops everywhere ran some flavor of Windows. Mac OS, while arguably more usable than Windows, was technically inferior, and had such an app shortage (especially in niche spaces) that it was largely irrelevant. This in turn meant that Windows also ruled most of the back office. Paired with the Office monopoly, Microsoft really and truly had a total lock on the personal computing space. It was basically impossible to use a computer without interacting with at least one Windows device in the process.

In that epoch, Embrace, Extend, Extinguish made a hell of a lot of sense. The idea was simple: if Microsoft saw a technology that threatened Windows, they’d embrace it (make it available on Windows), extend it in such a way that the best way to use the threat was Windows-specific, and, once most uses of the technology were sufficiently tied exclusively to Windows, extinguish it.

When Microsoft was a monopoly, this was a superb strategy to protect that monopoly. If they saw a threat, then bringing the threat in-house and tying it to the Windows platform was a great way to ensure people couldn’t leave, even if they wanted to. In effect, your alternatives had a tendency to evaporate before you had a chance to use them.

But Microsoft is no longer a monopoly. Hell, in many key areas, they’re effectively a non-player. While Windows maintains a plurality in old-school personal computers, Windows Phone is basically a failed project, the cloud is all but Linux-only, and even the entire existence of the back office has been threatened by tools like Google Apps and other hosted solutions. They’ve even lost most kiosks to custom Android variants, and most developers to OS X. It’s now surprisingly rare that I interact with a Microsoft system on a normal day, and I’m hardly unique in that.

This leaves us with two conclusions. First, empirically, Embrace, Extend, Extinguish failed; if it hadn’t, Windows would still be a monopoly. For Microsoft to be continuing this strategy, you have to believe they were not merely evil, but also unrecoverably stupid.

Second, it can’t work in an environment where Microsoft is an underdog. For nearly all shops out there, leaving Windows is honestly pretty trivial at this point; it’s adopting it that’d be an uphill battle. If I pick “Linux”, I can trivially integrate OpenBSD, Illumos, OS X, and any other Unix-like environment into my workflow with few or no issues. I can pick amongst AWS, GCE, Digital Ocean, and others for my hosting. I can pick virtually any language and database I want, use virtually any deployment tool, and migrate amongst all of these options with relative ease.

Windows is the odd one out. Adopting it not only means getting into a single-vendor solution, but also dealing with writing two sets of most deployment pieces, and dealing with licensing, and dealing with training my ops and IT teams with two radically different technology stacks. I’m going to need one hell of a value proposition to even think about it, and I still would likely turn it down to keep my ongoing maintenance costs sane.

Further, it’s a surprisingly hard environment for me to use as a developer these days even if I want to. If I grab a MacBook, I can write apps for iOS, Android, and Unix, all natively. If I grab a Windows laptop, I can’t target iOS at all, and I have to do any Unix development in a VM. This means that at Khan Academy, for example, I’d have to be insane to buy a Surface, even though I love the device; I’d end up spending all day in a virtual machine running Ubuntu. It’s not impossible to use Windows, but honestly, if I have to spend all day in a full-screen VMware session, why bother?

In that environment, the old Three Es just don’t apply. They were about locking me into Windows, but we’ve long since passed that point. The problem Microsoft now faces is one of staunching the bleeding, and that requires a radically different strategy.

The Facts on the Ground

So: if you’re Microsoft, and you’re facing a world where you’ve largely lost all the current fights; where you’re losing developers left and right; where the challenge isn’t keeping people from leaving, but getting them to knock on the door in the first place; what do you do?

There are a couple of strategies that Microsoft could take in this environment, but I want to assert two key facts before we get going.

First, it’s very unlikely that Microsoft can stage a meaningful comeback at the OS layer in mobile, cloud, or server rooms at this point. We’re all now at least as entrenched in iOS and Android on mobile, and Linux on servers, as we ever were in Microsoft PCs. So if Microsoft is going to remain relevant, they’re going to have to do it in a way that meaningfully separates going Microsoft from going Windows.

Second, even if somehow they gained a meaningful foothold in those markets, it’s very unlikely they’ll be anywhere near a monopoly player in the space. iOS, Android, and Linux are so firmly established, and so pervasive, that any conceivable world for now is one where Microsoft has to get along with the other players. In other words, Microsoft-specific solutions are going to be punished; they’ll need technologies common to everyone.

If you agree with those two facts, their current strategy falls out pretty cleanly.

A Way Forward

First, Microsoft has to enable me to even use Microsoft technology in the first place. If Microsoft keeps tying all Microsoft technology to Windows, then they lose. If I have to use Windows to use SQL Server, then I’ll go with PostgreSQL. If I have to use Windows to have a sane .NET server environment, then I’ll pick Java. To fix that, Microsoft needs to let me make those decisions separately.

That’s indeed the first phase of their strategy: separating Windows from the rest of their technologies. SQL Server is available on Linux not to encourage lock-in, but because they need you to be able to choose SQL Server even though you’ve got a Docker-based deployment infrastructure running on Red Hat. .NET is getting great runtimes and development environments (Visual Studio Code) for Unix so that I can more reasonably look at Azure’s .NET offerings without also forcing my entire dev team to work on Windows. This strategy dramatically increases the chance of me paying Microsoft money, even though it won’t increase the chance I’ll use Windows.
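To make that concrete: “SQL Server in a Docker-based deployment” really does mean the database becomes just another container in your compose file, sitting next to whatever else your Linux stack runs. A minimal sketch, based on Microsoft’s published Linux container image (treat the image tag and password as illustrative placeholders, and check current documentation before using):

```yaml
# docker-compose.yml — SQL Server as an ordinary Linux container.
# ACCEPT_EULA and the SA password environment variable are required
# by Microsoft's image; the tag and password here are placeholders.
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "ChangeMe_Str0ng!"   # use a real secret in practice
    ports:
      - "1433:1433"   # standard SQL Server port
```

Nothing about that file cares what the host distribution is, which is exactly the point: choosing SQL Server no longer implies choosing Windows.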

Next, Microsoft needs to do the reverse: make it feasible for me to use Windows as a development environment again. That’s where the dramatically improved Unix support comes from: by building in an entire natively supported Ubuntu environment, by having Visual Studio be able to make native Linux binaries, they’re making it feasible for me to realistically pick Windows even in a typical Unix-centric cloud-focused development shop. Likewise, Visual Studio’s improved support for targeting iOS and Android, and Microsoft’s acquisition of Xamarin, will go a long way toward enabling me to do something similar on the mobile front.

In both of these cases, while there may be an “embrace” component, the “extend” part is notably missing–and it should be. Microsoft can’t meaningfully extend iOS, Android, or Linux in a way that’d actually matter to anyone at this point; it has to just support them on their own terms. And in that environment, it’s not possible to extinguish things; if Microsoft woke up one day and announced Xamarin was dead and gone, people would grumpily rewrite their stuff in Swift and Java, not suddenly announce that they were Windows-exclusive.

Finally, Microsoft still needs to make money, and they can do that by selling software as a service (Azure, Office 365, and so on), rather than off-the-shelf. That not only gives them a steady revenue stream independent of their Windows installed base–after all, a person using Office 365 pays the same whether they’re on Windows, OS X, or a Chromebook–but also insulates them from any future platform changes. Does HoloLens take off? PlayStationVR? Oculus Rift? Will Microsoft catch the next wave? Who cares. As long as you’re using Microsoft products somewhere in your stack, they’ll be fine.

Separate, Support, Serve

It’s not as catchy as the original, and it certainly sounds a lot less ominous, but I think this can be summarized as the Three Ss: separate all of Microsoft’s offerings from Windows itself; support the reality of this heterogeneous world when on Windows; and be the company that serves as much content as possible from its data centers.

I’m not saying that Microsoft can’t still lock people in some way. Apple definitely tries to lock in its customers with iCloud and iOS, and its developers with Swift, for example. But I do hope that this has convinced you that Embrace, Extend, Extinguish is dead–and, with it, at least some of the FUD about Microsoft’s software.

Jobs once famously said that Microsoft didn’t need to lose for Apple to win. Today, I think it’s worth realizing the reverse: Microsoft doesn’t need you to lose for it to succeed.

Android, Project Fi, and Updates

Edit: Mere days after posting this (and unrelated to this post), Google publicly apologized for the Android 6 roll-out delay and pushed out Android 6.0.0 to Nexus 6 devices. They then followed that up extremely rapidly with the Android 6.0.1 update. I think this bodes incredibly well. Project Fi is still a very new service, and I’ve little doubt that Google has to work out some kinks on their end. For the moment, I’m going to take a step back, watch, and see if this new rapid update cycle is the new norm. If it is, I think I’ve found my ideal carrier and platform. But I still think that encouraging new users to stick to iOS until this update cycle is proven is probably the best course of action.

I want to make clear, right up front, that I am absolutely not an iOS apologist. I couldn’t wait for the first Android phones to come out, and I bought a Motorola Droid on launch day. I was excited about its better multitasking, about the keyboard, about the better integration with Google services, about the fact that I could use Java instead of Objective-C,1 about the much more open platform that wouldn’t restrict what I wanted to do. I was very sincerely excited.

But neither the hardware nor the software were quite ready at the time. I went through three Droids, suffering one (thankfully warranty-covered) hardware failure after another. After an initially promising update cycle (the Droid was upgraded to what I believe was Android 2.1 very quickly), I began to see that Google was having issues getting new versions of Android out on a sane schedule. So, after a couple of years of living on the Android train, I hopped off and grabbed an iPhone.

That didn’t mean I gave up on Android. If anything, I was pretty confident that Android, not iOS, would be the winner in the end anyway. Google would figure things out—and it wasn’t even just Google, after all, but a huge chunk of the telecom industry, all of whom had a vested interest in keeping Apple from dominating, helping them out. We’d seen this play out already with Microsoft and the PC makers versus Apple in the 90s; we knew how it would end, that Android would close the gaps and take over the industry. It was just a matter of time while Google got their operation running smoothly.

While that’s obviously not what happened, both Android software and hardware did markedly improve. There were even a lot of things that Android got first that were genuine usability wins: instant replies from notifications, assistants (Google Now), turn-by-turn directions, cross-application communication, automatic app updates, and more. I ended up buying a Nexus 7 as a tablet, and found that, at least as a developer, it fit my needs a lot better than an iPad ever did.

There was, however, one caveat: Android’s security story. Because Google couldn’t get updates out to its phones on a sane schedule, most Android phones had long-running unpatched security issues. If there’s one thing I think we’ve learned about security over the last few years, it’s that a team that patches early and often is going to be vastly better protected than one that doesn’t. This didn’t bother me too much on my Nexus 7—Google was better about pushing out updates for its tablets than its phones, and at any rate, side-loading the OS didn’t pose any major problems for me on a non-mission-critical tablet—but it kept me from returning to Android phones.

So when Project Fi was released, I signed up immediately. I figured I could finally, finally have my cake and eat it, too: Google generally kept Nexus devices up-to-date, and the Fi pricing model seemed like a huge improvement to me over what I’d been forced to do on the major carriers. What wasn’t to love? I could go back to Android and bid Verizon adieu at the same moment, a great double-win.

That is emphatically not what happened. First, security updates were slow to come out: whereas Apple virtually always has security issues patched well ahead of any disclosure window,2 Google seemed to struggle. When Stagefright came out, I had to wait, just like everyone else, for my patch. And when that patch happened, it was woefully incomplete, so then I got to wait again for a patch to the patch. And then when Android M shipped a month ago, Google left Nexus 6 users—all of whom own a phone that is just barely over a year old at this point—running Android 5.1.1. Yes, you can get M on Project Fi, but you have to side-load (which their support representatives are loudly and actively discouraging in their support forums), or you have to buy a new phone—the exact situation that exists on other carriers, and the exact situation I was trying to avoid.

This is ridiculous. Apple manages to push out updates to all carriers on the same day. Microsoft, which generally brings a vaguely Scooby Doo-like quality of competition to the smartphone landscape, manages to get updates out to all Lumia devices within at most a few days of each other, and also has a very simple system in which any Windows Phone user can opt-in to get Windows Update-style updates ahead of general availability. Meanwhile, on its own cell network, Google has…side-loading, which it’s discouraging.

This just shouldn’t be that hard. And yet, for Google, it clearly is.

So I give up. Apple can keep their products up-to-date across dozens of carriers; Google can’t even keep their own products up-to-date on their own cellular network. If they can’t even make that work, then I throw in the towel.

I suppose it’s possible that my next phone won’t run iOS, but the one thing I can guarantee you is that it’s not going to run Android.


  1. I am not trolling. Java, at the time, was a much more pleasant language to work in than Objective-C. You had garbage collection, a better dependency management story, better support resources, and a much larger collection of third-party libraries, and to top it all off, you had Eclipse or IntelliJ instead of a fairly early version of Xcode. Even if Android’s APIs might not’ve been the best I’d ever used, they were, at least in my opinion, just fine. [return]
  2. “Virtually” is a key word there; they had a couple minor vulnerabilities that were disclosed prior to patch. But we’re talking a couple issues patched after disclosure date versus a spate of major Android ones that stay unpatched for literally weeks or months after disclosure. There’s no contest. [return]