What a cute little dog! I mean robot!

There’s a video that made the rounds of the internet recently. You may have seen it, but I’ll link it here anyway.

It shows Boston Dynamics’s SpotMini, one of their robots, opening a door in an uncannily human way so that another robot can go through, then following it out the door.

For anyone who doesn’t know, or doesn’t feel like googling, Boston Dynamics designs and builds robots, most famously for the military with funding from DARPA (the home of some really advanced technology that starts bordering on magic).

SpotMini is small, short enough to crouch under your dining room table. As of now, it only has a 90-minute battery, but that will likely change as the design is refined and made more efficient. Most importantly, Spot is kind of . . . cute. Not fuzzy, but definitely smooth and curved and not immediately terrifying. It’s also currently school-bus yellow.

When the video above hit the internet, most people freaked out, but some took an alternate view. Those who were upset could see the potential for the robot to be used in some incredibly horrific ways by an overreaching government or an ingenious criminal group. In their worst imaginings, Spot merges with some self-aware AI and kills us all.

Those who took the alternate view saw the altruistic potential in Spot. Robots could be sent into dangerous situations and toxic environments humans can’t safely enter. Elderly and disabled people could benefit greatly from a dependable robot with the strength to help them off the floor after a fall and to open items that are difficult to manage with decreased motor skills, giving them greater independence than even a service animal might provide.

To be honest, I can see both sides of this argument, and I think they’re both valid. As a software developer and science fiction fan, though, I have to side with the folks who are scared of what Spot means. I do find it interesting that many of the people who feel a chill run down their backs at the sight of that robot are the same people who have welcomed Siri, Alexa, or Cortana into their homes and allowed them to listen in on the intimate details of their lives.

But where does this all originate? Both the technology itself and the fear of it? I lay much of the praise and blame at the feet of the entire science fiction genre.

Sci-fi has its place in all this as both the driver of inspiration and the cautionary tale of potential catastrophe. So many of our technological advancements are inspired by science fiction, and in turn science fiction writers extrapolate stories out of emerging technologies, and round and round we go.

More often than not, though, science fiction has reflected back to us the possible dangers of technology run amok. Sometimes it’s the humans who put these technologies to dark and dangerous ends, causing harm to others, intentionally or not. Other stories center on sentient technology and conscious AI, sometimes bearing all of humanity’s frailties, other times manifesting a mindset completely alien to our own.

As a software developer, I often think about my own work and that of my fellow programmers. Bugs run rampant through our code, causing errors that range from a simple menu glitch on your website to a misplaced decimal point that costs companies, and their employees, millions of dollars. The idea of entrusting the well-being of someone I care about to human-written code, or even putting my life in the hands of someone’s code (even my own), is a chilling prospect. I’ve often thought up my own versions of the tech-run-amok story, where embedded technology is hacked and suddenly everyone is subject to the whims of vindictive programmers. The Ministry of Silly Walks from Monty Python is a likely contender for a bio-hacking prank, only not so funny when people have no control over where they’re walking.

At the end of the day, no matter how perfect our code seems to be, no matter how well-tested our technology, there is always going to be a risk. Human error is always a specter in the background, for even with computer-generated code, there is a human at the end of the chain, programming the program that writes the program, as it were.

When we talk about robots, independently mobile creatures (for lack of a better word) that are much harder to unplug when things go bad, perhaps it’s time to take a step back. After all, “A.I. is a crapshoot” is a well-known trope, and not something I particularly want to see played out in real life. I suppose the best I can do from my end is not get sucked in by Spot’s deceptive cuteness and do my best to write code that’s as bug-free as possible. Wish me luck.