“Alexa, stop!!”

I shout this at least a dozen times a day, whenever my digital friend goes completely off the rails over a seemingly simple request. It also got me thinking about how far we’ve come in the quest for artificial intelligence and, in moments like this, how far we still have to go.

I’ll preface this by stating that I’m not a software or computer engineer. My degrees are in aeronautics and electronics, so this discussion will necessarily be more abstract than technical. Think of it as more of a fun intellectual exercise than a serious dissertation on the subject, a writing prompt, if you will.

Artificial intelligence, true AI, has been a staple of science fiction since the 1800s, over a century before the first true computer. In his 1872 novel Erewhon, Samuel Butler included three chapters comprising The Book of the Machines, a series of articles that addressed the possibility that machines might develop consciousness through Darwinian selection. While dismissed and ridiculed at the time, Butler’s story was a cautionary tale of what could happen should a sentient machine arise.

Since The Book of the Machines, science fiction has sought to imagine what a future might be like when humanity lives alongside intelligent machines. These works, both literary and cinematic, tend to fall into two broad categories: utopian and dystopian. Some depict a world in which machines and humans live in harmony as equals; others tell of a world in which our creations turn on us and supplant us as the masters of our planet. So whose version will prove more accurate?

If we accept that a sentient intelligence might arise through a kind of natural selection, as Butler first suggested, it will have come about through the brutal process of evolution, and “survival of the fittest” may end up being more than just a clever expression. An intelligence created spontaneously by a random set of favorable conditions could very well consider humanity an imminent threat and take appropriate measures, especially in its infancy. Given the increasing amount of networked automation in the infrastructure we depend on for survival, that scenario could quickly morph from an idle curiosity into a grave threat.

On the brighter side, what if the first AI machines were the result of careful intent and built with a specific purpose? Science fiction is loaded with beloved androids and robots, each with their own personalities and noble motivations. These characters are usually highly anthropomorphic, both in appearance and demeanor, and typically aren’t distinguishable from their human counterparts until the author provides a physical description. I find nothing inherently wrong with this hopeful outlook on what intelligent machines could be like, and I even have one as a favorite character in my own sci-fi adventure series. That said, I also feel this is the least likely scenario, for a few reasons.

As I yell at Alexa one more time, trying to get her to change the song that’s currently playing to the one I actually meant, I’m awed at what’s now commercially available under the misleading label of AI. Alexa is a very convincing simulation of a petulant five-year-old who refuses to just do what she’s asked or (I’m convinced) deliberately misunderstands me. Even though I call the device by name and interact with it conversationally, at no time am I unaware that Alexa—impressive though she may be—is nothing more than a set of predetermined responses and clever programming.

You may also remember Microsoft’s recent (and tragically misguided) “Tay.” The Twitter chatbot was a much-publicized experiment that was said to learn and adapt the more it interacted with users on the social media platform. Within the span of twenty-four hours, Tay had become foul-mouthed, a howling bigot, and a Holocaust denier. (So in that way I suppose Tay was exactly like most of Twitter. I’m only partially kidding.)

The experiment was quickly shut down, but not before it was briefly reactivated and had a complete meltdown, extolling the virtues of drug use in front of law enforcement.

While Tay and Alexa are entertaining, albeit for very different reasons, they have raised some concerns within the industry about what happens if and when a sentient AI is developed. Look at how far these interactive and adaptive interfaces have come in just the last five years. The curve has been rising exponentially as processors and memory become smaller, cheaper, and more efficient, allowing software of a complexity previously thought impossible. For the first time since it was dreamed up in the 1800s, the question of intelligent machines is beginning to shift in the minds of many researchers from “Can we?” to “Should we?” The moral and ethical ramifications of creating a free-thinking being are profound when we dig into issues like what individual rights exist for something that began life as a piece of lab equipment.

While I was recently penning a new character for a different series—an AI that “emerged,” so to speak, and exists only in software—these were some of the thoughts rattling around in my mind, and they led me to a few final questions. Will we even recognize a sentient artificial intelligence when we encounter it? Given the speed with which the average computer today can process information, would such a being see any benefit in engaging in something as primitive as spoken language with a creature that thinks so imprecisely and, by comparison, so slowly? Will it be driven by the needs of its biological counterparts and find ways to procreate? That’s an interesting proposition given the aggregate computing capability available with the advent of cloud-based processing: a motivated, intelligent AI could spawn an untold number of clones in the blink of an eye.

This has been just a brief scratch at the surface of a subject with daunting technical hurdles and many ethical pitfalls. My gut instinct tells me that a true AI will emerge from thousands of hours of hard work by dedicated researchers and engineers rather than as a spontaneous event that pops up out of the ether, but I couldn’t even begin to hazard a guess as to how soon that could be. It wouldn’t surprise me if a breakthrough were announced tomorrow any more than it would if I lived the rest of my life without that definitive eureka! moment. But, as with most lofty goals, the journey is its own reward. Maybe—just maybe—all we’ll get is an app that actually knows what song we’re trying to play. In the end, that alone might be worth the effort.

Joshua Dalzelle is a USA Today bestselling author, an Amazon Top Ten bestselling science fiction author, and the creator of the hugely popular Omega Force series.