How Space and Cyberspace are Merging in the 21st Century Battlefield

Cyberspace and outer space are merging to become the primary battlefield for global power in the 21st century. Both space and cyberspace systems are critical in enabling modern warfare—for strike precision, navigation, communication, information gathering—and it therefore makes sense to speak of a new, combined space-cyberspace military high-ground. From the moment Sputnik was launched in 1957, and everyone's head turned skyward, space has occupied the military high-ground, defining much of the next fifty years of global geopolitics. Space-based systems, for the first time, broke the link between a nation's physical territory and its global ability to gather information, communicate, navigate, and project power.

In the 1980s, the rise of information and communications technology enabled the creation of the internet and what we've come to call cyberspace, a loosely defined term that encompasses the global patchwork of civilian, government and military computer systems and networks. For the same reasons that space came to occupy the military high-ground—information gathering, navigation, communication—cyberspace is now taking center stage.

From a terrestrial point of view, space-based systems operate in a distant realm, but from a cyber point of view, space systems are no different from terrestrial ones. In the last decade, the internet has been seamlessly integrated into space systems, and communications satellites are increasingly internet-based. One can make the case that space systems are now a part of cyberspace, and thus that space doctrine in the future will be heavily dependent upon cyber doctrine. The argument can also be made that cyberspace, in part, exists and rests upon space-based systems. Cyberspace is still based in the physical world, in the data processing and communications systems that make it possible. In the military domain, cyberspace is heavily reliant on the physical infrastructure of space-based systems, and is therefore subject to some of the same threats.

Space and cyberspace have many similarities. Both are entirely technological domains: new realms of human activity created by, and uniquely accessible through, sophisticated technology. Both are vigorous arenas for international competition, the outcomes of which will affect the global distribution of power. It is no coincidence that aspiring powers are building space programs at the same time as they are building advanced cyber programs.

Space and cyberspace are both seen as global commons, domains shared among all nations. For most of human history, the ability of one group of humans to influence another was largely tied to control of physical territory. Space and cyberspace both break this constraint, and while there is a general common interest in working cooperatively in peace, both domains have inevitably become militarized. As with any commons, over time they will become congested, and new rules will have to be implemented to deal with this.

Congestion and disruption are problems in both space and cyberspace. By some estimates, ninety percent of email is spam, and a large proportion of the traffic on any network comes from malware, which clogs and endangers cyberspace. Cyberattacks are also shifting away from email as the primary vector toward compromised web applications, built with tools such as the Blackhole automated attack toolkit. Cyberattack by nation-states is now joining the criminal use of spam, viruses, Trojans and worms as a deliberate attempt to attack and disrupt cyberspace.

The congestion analogy in space is that entire orbital regions can become clogged with debris. Tens of thousands of objects, from satellites and booster rockets to smaller items such as nuts and bolts, now clutter the orbital space around Earth. The danger was dramatically illustrated in February 2009, when the Iridium 33 communications satellite was destroyed in a collision with a defunct Russian satellite, Kosmos-2251. The situation can be made dramatically worse by purposely creating debris fields, as China did in its 2007 kinetic-kill anti-satellite test. Over time, entire orbital regions could become unusable.

Another similarity is that while the traditional air, sea and land domains are covered by UN conventions and agreements—the Law of the Sea, the Arctic, biodiversity—outer space and cyberspace still operate under ad hoc arrangements mostly outside of UN frameworks. Both expand the range of human activity far in advance of the laws and rules needed to govern the new areas being used and explored. Because space can be viewed as a sub-domain of cyberspace, any new rules brought into effect to govern cyberspace will also affect outer space.

If there are many similarities between space and cyberspace, there are also some critical differences, the most important being that space-based systems require massive capital outlays, while cyberspace requires comparatively little. As James Oberg points out in his book Space Power Theory, the most obvious limitation on the exercise of space power is cost, with the astronomical cost of launch first among these. Cyberspace, on the other hand, has a low threshold for entry, which creates a curious reality: governance of an extremely high-cost domain, space systems, will be dictated by rules derived from the comparatively low-cost domain of cyberspace. Space power rests on an assumption of exceptionalism: it is difficult to achieve, giving the nations that possess it a privileged role in determining the balance of global power. In contrast, cyberspace, and the ability to conduct cyberwar, is accessible to any nation, or even to private organizations or individuals, with the intent.

Cyberwar has already started, and it is gaining in frequency and intensity. To most people, the term cyberwar still has a metaphorical quality, like the War on Obesity, probably because there has not yet been a cyberattack that directly resulted in a large loss of life. Another important defining characteristic of cyberwarfare is the difficulty of attribution. Deterrence is only effective as a military strategy if you know, with certainty, who attacked you, but cyberattackers purposely obfuscate their origins, making attribution very difficult.

The first cyberattack can be traced back to the alleged 1982 sabotage of the Soviet Urengoy–Surgut–Chelyabinsk natural gas pipeline by the CIA—as part of a policy to counter Soviet theft of Canadian technology—which reportedly resulted in a three-kiloton explosion, comparable to a small nuclear device. Titan Rain is the name the US government gave to a series of coordinated cyberattacks against it over the three-year period from 2003 to 2006, and in 2007 Estonia was subjected to an intense cyberattack that swamped the information systems of its parliament, banks, ministries, newspapers and broadcasters. In 2011 a series of cyberattacks called Night Dragon was waged against energy companies in America. This is significant because of the Aurora Generator Test conducted by Idaho National Laboratory in 2007, in which a 21-line package of software code, injected remotely, caused a large commercial electrical generator to self-destruct by rapidly opening and closing its circuit breakers out of sync, demonstrating that a cyberattack can destroy electrical infrastructure.

A new breed of sophisticated cyberweapon was revealed when the Stuxnet worm, discovered in June 2010, was found to have attacked Iran's Natanz uranium enrichment facility. It was not the first time that hackers had targeted industrial systems, but it was the first discovered malware that subverted industrial control systems. Another game-changer was the 2012 Shamoon virus, which knocked out 30,000 computers at Saudi Aramco, forcing that company to spend weeks restoring global services. Shamoon was significant because it was specifically designed to inflict damage, and it was one of the first examples of a military cyberweapon being used against a civilian target. The more recent WannaCry attacks in 2017 were reportedly initiated by North Korea and aimed at disrupting Western commercial and logistics networks. It is only a matter of time before a cyberweapon targeting space-based systems is unleashed, if it hasn't happened already.

It is worth backing up to explore the core issues surrounding internet security. The internet was originally designed as a redundant, self-healing network, the sort of thing that is purposely hard to control centrally. In the late 1980s it evolved into an information-sharing tool for universities and researchers, and in the 1990s it morphed into America's shopping mall. Now it has become something that is hard, even impossible, to define—so we just call it cyberspace and leave it at that.

First and foremost, there is the issue that while everyone runs the internet, nobody is really in charge of it. ICANN—the Internet Corporation for Assigned Names and Numbers—exerts some control, but the World Summit on the Information Society (WSIS), endorsed by the UN in 2001, was created because nations around the world had become increasingly uneasy that their critical infrastructures, and their economies, depend on the internet, a medium over which they had little control and no governance oversight. The issue has still not been resolved. To the libertarian-minded creators of the internet, decentralized control is a feature, but to governments trying to secure nuclear power stations and space-based assets, it is a serious flaw. A large part of the problem is that we use the same internet-based technology for social networking and digital scrapbooking as we do to control power stations and satellites. Not that long ago, critical systems—space systems, the power grid, water systems, nuclear power plants, dams—were controlled by their own proprietary technologies, but many of these have been replaced with internet-based technologies as a cost-saving measure. The consequence is that nearly everything can now be attacked via the internet.

When it comes to software producers, while they would like their products to be secure from hackers, they have a competing interest in being able to access their software once it is installed on customers' machines. They want to collect as much information as possible, to sell to third parties or to use in their own marketing, and they want to push new features to their software remotely. Often, that remote access is used to install patches for newly discovered security vulnerabilities, which exist partly because code is poorly written in the first place, on the assumption that it can always be updated later. This backdoor into software is a huge security flaw—one that companies purposely build into their products—and one that has been regularly exploited by hackers.

There are many consequences to all this.

The first is that, because we use the same internet-based technology to support both the private lives of individuals and the operation of critical infrastructure, there will be a perpetual balancing act between these two competing interests when it comes to security. Another is that until the general public really sees cybersecurity as a threat, many of the fixable problems will not be addressed, such as setting international prohibitions on cyberespionage—making it comparable in severity to a physical incursion into the sovereign territory of a nation-state—or forcing software companies to get serious about secure coding practices and about eliminating the backdoors built into their products.

Because of the extremely high value of space-based assets, and because they are already a seamless part of cyberspace, space systems will be primary targets for cyberattack when a major cyber conflict does emerge. Even if space systems are not directly attacked, they may be affected: a cyberweapon has no well-defined blast radius once it is unleashed. Even the Stuxnet worm, which was highly targeted in several ways, still infected other industrial control systems around the world, causing untold collateral damage.

A more difficult threat to consider than simply denying access or service to a space system through cyberattack is the problem of integrity. In the cybersecurity world, the three things to protect are the confidentiality of data (keeping it secret, and being able to verify that it has stayed secret), its availability, and its integrity. Integrity is by far the hardest to protect and ensure. If a cyberattacker, for example, carried out a slow, gradual modification of the data in a critical space-junk database, they could maneuver satellites into harm's way or, worse, drop them from orbit into populated areas.
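To make the integrity problem concrete, here is a minimal sketch in Python (my own illustration, not a description of any real satellite-tracking system) of one standard defence: storing a keyed hash (HMAC) alongside each record so that silent tampering can be detected later. The key, the record fields and the values are all hypothetical.

```python
# Minimal sketch of tamper-evidence for catalogue records, using a keyed hash.
# Everything here (key, fields, values) is hypothetical and for illustration only.
import hashlib
import hmac
import json

SECRET_KEY = b"example-signing-key-kept-offline"   # hypothetical key

def sign_record(record: dict) -> str:
    """Compute an HMAC over a canonical serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, stored_mac: str) -> bool:
    """Return True only if the record is unchanged since it was signed."""
    return hmac.compare_digest(sign_record(record), stored_mac)

# A hypothetical debris-catalogue entry and its stored signature
entry = {"object_id": "1993-036A", "perigee_km": 778.0, "inclination_deg": 86.4}
mac = sign_record(entry)

entry["perigee_km"] = 768.0            # an attacker's slow, subtle modification
print(verify_record(entry, mac))       # False: the tampering is detectable
```

The catch, of course, is that this only relocates the problem: the signing key and the verification process themselves now have to be protected, which is part of why integrity remains the hardest of the three properties to guarantee.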

Over the last fifty years, a comprehensive strategy based on deterrence was developed in conjunction with the idea of space power theory. In the future, a comparable framework and space-cyberspace power theory will need to be developed. Many questions need to be answered, most especially how the international community will establish rules for cyberspace, how rules for cyberwar will be defined, what constitutes a proportional response, and how to deal with the problem of attribution. Exactly how the developing cyberwar doctrine will affect the way outer space is governed remains to be seen.


Matthew Mather is the author of a fictional account of the first major cyberattack, CyberStorm, which has sold close to a million copies, been translated in 23 countries, and is in development for film by 20th Century Fox. You can find CyberStorm on Amazon.

The Brink of a New Age of Discovery

Do you remember those old posters from the 1950s that had people in flying cars and robots doing the dishes? It must have been an exciting time. Test pilots had just broken the sound barrier, followed by a breathless rush into the dawn of the Space Race that led to the moon landings just 66 years after Orville and Wilbur Wright flew the first airplane. At that time, nuclear power seemed ready to offer limitless cheap energy, and the boom of microelectronics was just beginning to dazzle.


 

What happened to my flying car? While it's true that electronics have gotten smaller and faster beyond the wildest imaginings of 40 years ago, it's also true that the 747 airliner first flew in 1969, and that's probably the same plane you'd take to fly today. Same speed, same altitude–the 747 was an amazing feat in the 1960s, but by now we were supposed to be vacationing in the vast donut space stations of Arthur C. Clarke's 2001. And speaking of 1969, that was 47 years ago…if we went from the first airplane to the moon in just sixty years, shouldn't we, fifty years after that, be taking warp-drive spaceships to Betelgeuse? What happened?

For instance, what about dark matter? This is the stuff that makes up roughly 85% of the matter in our universe, but so far physicists have only been able to narrow it down to (a) massive subatomic particles that we're literally swimming in yet have never detected, or (b) primordial black holes that invisibly glue galaxies together. So most of the matter in the universe is either something subatomic or something unimaginably massive. That's a pretty big gap for something rather important.


Or how about eels? If you live in North America or Europe, you've most likely encountered an eel in your local river. Yet all North American and European eels originate from a single spawning ground, somewhere in the mysterious Sargasso Sea in the Atlantic Ocean, to which adult eels migrate each year and from which their young disperse. Despite knowing this spawning ground has to exist, not one person has ever witnessed an eel spawning in the wild or pinpointed the exact spot where it happens.


Two obvious things that have to exist, and yet we've never seen them. I believe this is called faith. So keep the faith, my friends, because our future is fast approaching.

Just a few years ago, I remember feeling depressed when NASA made tired-sounding announcements about sending humans to Mars in thirty or forty years. Ho-hum, ho-hum. And then this week, SpaceX makes a surprise announcement saying they plan to send an unmanned Red Dragon capsule to Mars in 2018 (TWO years from now, not twenty), and that in September of this year they will release serious plans for colonizing Mars. Holy Buck Rogers! And this comes just a few weeks after they butt-landed a rocket on a floating drone ship in the middle of the Atlantic. Does this sound like something from a science fiction book? 'Cause it's not. This is happening, folks, not to mention the slew of other private space enterprises going on.

In other news, big corporations are now creating their own endemic artificial intelligences–witness Siri from Apple, Alexa from Amazon, Cortana from Microsoft, and reports of just about every major hedge fund in Connecticut starting up their own AIs to run their core businesses. It's not quite the android Replicants of Mr. Philip K. Dick, but it's more than halfway to HAL of 2001…and pair this up with the walking robots from Boston Dynamics. And speaking of AI disasters, when Microsoft recently unleashed Tay–Cortana's AI cousin–free and unfettered into the world a few weeks ago, within hours she became a Hitler-loving racist asshole, which I feel perhaps doesn't bode well for humankind over the long term (bear fealty now to our robot-AI overlords before it's too late).

But this isn't the big news. No. The big news, I think, is that we're on the brink of TOE–and by that I mean the Theory Of Everything. Without getting stuck in the details, for the last forty or so years we've been stuck with the Standard Model on one side (the quantum description of electromagnetism and the strong and weak nuclear forces) and gravity (general relativity) on the other, and never the twain shall meet. Nobody has been able to devise one coherent physical model of our universe that includes all four fundamental forces together with quantum theory–but I think scientists are on the brink of a breakthrough (witness the hints of a new, previously unsuspected particle at the LHC) that may create a new fundamental picture of reality.

Esoteric?

Perhaps.

How can this possibly affect us?

But “quantum theory” only really emerged as a discipline unto itself in the mid-1920s with Heisenberg and Schrödinger (it existed as bits and pieces in the decades before, but only as hints of something unconnected), and at the time, sitting on a steamship deck and sipping your coffee, you might have been forgiven for wondering what possible application it could have. Fast-forward sixty years, and it provided the technical underpinning of the electronics boom that has birthed the Internet, AIs, and worldwide instantaneous communication networks.


What could a new theory of the ultimate nature of reality make possible? I have no idea, but I'll bet you that in fifty years it will be something amazing that we can't even imagine now. Tired old NASA is even funding a serious research project into faster-than-light travel–the idea isn't to really travel faster than light, but to bend space (and thus time) to punch holes through it. The physics say it's possible, but the energies required are either vaster than a hundred suns, or not much at all–what's needed is an understanding of the real physics behind the ultimate nature of our reality, and our lab-coated friends may just be on the edge of supplying it. So dust off your Mars suit, boot up your personal AI, and step onto that warp-drive spaceship, because the future is fast approaching.

But I doubt we'll ever find out where eels come from.

Interspecies Communication

I first met Sammy Davis, Junior, when I was nine. At the edge of the kitchen counter, he waited, a gray house lizard—what we in the Philippines called butiki. No bigger than my father’s index finger, half of him was a thin, twitching tail that tapered to a point.

 


 

Sammy Davis was a similar specimen of Hemidactylus frenatus that my mother and father discovered long ago in their first apartment near España Boulevard in Manila. He had kept the moths and mosquitos at bay, and so they’d tolerated, then befriended him.

Now, several years later, my father approached Junior, making a series of clicks with his tongue, his hand outstretched with a pinch of boiled rice. My mother continued nibbling at her steamed chicken while my seven-year-old brother watched with a kind of stunned, frightened look in his eyes.

Still clicking–a quick click-click-click, pause, repeat–my father carefully set down the pinch of rice about two inches away, while the lizard watched with rotating eyes.

It took about half a minute while the lizard twitched his tail, swung his head first this way, then that–before he darted forward and snapped up the rice, swallowed, then darted away down the vertical side of the counter.

Triumphant, my father offered another pinch of rice.

Click-click-click.

Junior poked his head over the edge, scrambled to the rice, and gobbled it up.

Click-click-click.

 


 

Koko, a lowland gorilla trained by Dr. Penny Patterson, is said to comprehend over one thousand signs from American Sign Language and to understand and respond to a spoken vocabulary of over two thousand English words. Beyond that, Koko is reported to have invented her own signs to communicate new thoughts: for example, describing a ring by combining “finger” and “bracelet” into the new word “finger-bracelet.”

 


 

Kanzi, a bonobo, has been using a specialized keyboard with symbols on the keys to communicate with the team of primatologist Sue Savage-Rumbaugh, using a vocabulary of six hundred words.

 


 

Alex, an African Grey parrot, was shown by Dr. Irene Pepperberg to understand over a hundred English words and to identify various colors and shapes.

 


 

A controversial project in the 1970s saw a baby chimpanzee named Neam Chimpsky—“Nim,” for short—taken from his mother just days after birth at a primate research center. Behavioral psychologist Herbert Terrace aimed to raise Nim as a human child, placing him with human families who strove to teach him a form of American Sign Language. Despite a sad end, when researchers unsuccessfully attempted to re-integrate him with other chimpanzees, Nim learned to sign in three- and four-word sentences:

Apple me eat.

Drink me Nim.

Finish hug Nim.

Give me eat.

Hug me Nim.

Tickle me Nim.

Yogurt Nim eat.

Banana eat me Nim.

Me eat drink more.

Tickle me Nim play.

 


 

In a NASA-funded experiment with a bottlenose dolphin named Peter, neuroscientist John C. Lilly tried to prove his theory that dolphins could learn language via constant human contact. Over ten weeks, Margaret Howe, his research assistant, spent day and night with Peter.

Dolphins can make human-sounding noises via their blowholes, and Margaret’s goal was for Peter to mimic sounds that he heard.

Over time, Peter could pronounce a rough version of several words, including “hello,” “we,” “one,” “triangle,” “diamond,” and “ball.” His favorites:

Hello, Margaret

Play, play, play

Disturbingly, Peter became emotionally attached to Margaret and aggressive with her, circling her, nibbling her, and jamming himself against her legs. The behavior escalated, and he was temporarily moved back in with the other dolphins until he had calmed down enough to be re-introduced to Margaret.

Unfortunately, after ten weeks, funding for the project ended, and Peter was shipped to another lab. Without Margaret, he apparently lost the will to live and refused to breathe, sinking to the bottom of his tank in what might be understood as suicide.

 


 

Months later, I’m alone in the kitchen when I hear a clicking beside me.

There is Junior, his eyes two quivering balls of black, his tail flicking, right in the middle of the table.

Click-click-click.

I throw a rice grain at him, and he runs forward, catching it in his mouth and swallowing. I follow with several more.

Click-click.

Two clicks means “I’m done.” He twitches his tail one more time, turns, and is gone.

 

Big Ear, Ohio State University's radio telescope

 

In August 1977, astronomer Jerry Ehman was examining data recorded on the 15th by Ohio State University's radio telescope, part of a Search for Extraterrestrial Intelligence (SETI) project. He saw an anomaly in the data coming from the direction of the constellation Sagittarius, at a frequency of about 1.42 GHz, the emission frequency of neutral hydrogen. Most scientists agree that this would be the most likely frequency an alien civilization would use to broadcast a signal. It was so amazing that Ehman circled it and wrote “Wow!” in the margin of the printout. To this day, the signal has resisted all explanation. Its strength was represented on a scale of thirty-six intensity levels by the numerals 0-9, then the letters A-Z. The 72-second signal formed a perfect bell curve:

6EQUJ5

We are here.

 


 

Out there, beyond the furthest arms of our galaxy, our radio telescopes broadcast our own signals, our hopes and dreams, in a language we hope someone will understand.

Our spacecraft bear plaques engraved with drawings and symbols of ourselves in a form we hope someone will decipher.

And we listen, straining to hear beyond the noise of supernovae and neutron stars, to ascertain if there is indeed somebody out there.

 


 

Click-click-click.

 


SAMUEL PERALTA is a physicist and storyteller. An Amazon bestselling author, he is also the creator and driving force behind the Future Chronicles series of speculative fiction anthologies, with 14 consecutive titles ranking at the top of the Amazon SF Bestseller lists, several hitting the overall Amazon Top 10 Bestsellers list. His own work has been recognized in Best American Science Fiction and included in the author community anthology for the John W. Campbell Award for Best New SF Writer.


This article was first published, in slightly different form, as the Foreword to Interspecies

https://www.amazon.com/Interspecies-Inlari-M-J-Kelley-ebook/dp/B01G7KON9U?tag=disscifi-20

Interspecies, a shared universe anthology

 

The Butterfly Effect

“Can anyone alter fate? All of us combined… or one great figure… or someone strategically placed, who happens to be in the right spot. Chance. Accident. And our lives, our world, hanging on it.”

— Philip K. Dick

 

Ray Bradbury's classic short story A Sound of Thunder is one of the most reprinted science fiction stories of all time. Set in the year 2055, it follows a company that offers time-travelling safaris into the past, to the Cretaceous period, to hunt a Tyrannosaurus rex.

 

The company takes great pains to choose targets that are about to die anyway, since the belief is that changes in the distant past could become an avalanche that changes everything. But despite all precautions, something goes utterly wrong—

 


Ray Bradbury's A Sound of Thunder, illustrated by Richard Corben (Topps Comics) 1993

 

Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?

 

Butterfly Effect

Samuel Peralta

Because your father stopped in Strandja park
to point out that whirligig of wings–blue
argus, he said, Ultraaricia anteros
–you were dazzled forever.

Those wings wafted you here, ten thousand six
hundred kilometres away, to the
University of California,
Davis. Encyclopedia of Insects

in arm, you haul yourself up the stairwell
of Briggs Hall. Your frail sandal spindles on
the threshold–and you trip, a beautiful,
crippled Lycaenidaen specimen,

into the butterfly net of my arms.
Somewhere in Texas, a hurricane stirs.

 


Ultraaricia anteros

 

Besides the chaos theory reference, my free verse sonnet Butterfly Effect arose from many memories. Of my father writing a scientific monograph on moths and butterflies, and handing me a paper pamphlet of it, when I was young.

 

Of my fondness for the blue argus butterfly, from the family Lycaenidae, a species whose European range is largely restricted to the Balkans.

 


Encyclopedia of Insects

 

Of seeing the Encyclopedia of Insects in a library, a bloody huge book.

 

And memories of the three years I lived in Davis, California, where I won my first-ever literary prize, and where I first thought I was in love.

 

MU Entrance, Freeborn Hall

University of California, Davis

 

So here we are. Where we are now, what language we’re speaking, what foods we eat, what we believe in—all of these are based on a myriad of events happening in the past, just so. Accidents. Coincidences. Chance.

 

We don’t live in the world of Philip K. Dick’s The Man in the High Castle because the Allied forces were victorious over the Axis powers in the Second World War.

 


Amazon's production of The Man in the High Castle by Philip K. Dick

 

We don’t live in a world where Franklin D. Roosevelt was defeated in his third run for President of the United States, to be replaced by Charles Lindbergh, as in Philip Roth’s The Plot Against America.

 

But what if? 

 

Speculative fiction itself is based on asking that question.

 

What if Pope John Paul I hadn't died after just a month in office? What if the women's suffrage movement had lost its battle for the right to vote? What if Steve Wozniak's focus had turned to medical technology instead of personal computers? What if Japan and the United States of America had allied to combat an expected Great Depression? What if Edward Jenner had died before developing a vaccine for smallpox?

 


 

The flap of such butterfly wings would surely have changed everything—lives, loves, the world as we know it.

 


SAMUEL PERALTA is a physicist and storyteller. An Amazon bestselling author, he is also the creator and driving force behind the Future Chronicles series of speculative fiction anthologies, with 14 consecutive titles ranking at the top of the Amazon SF Bestseller lists, several hitting the overall Amazon Top 10 Bestsellers list. His own work has been recognized in Best American Science Fiction and included in the author community anthology for the John W. Campbell Award for Best New SF Writer.


This article was first published, in slightly different form, as the Foreword to Alt.History 101

Alt.History 101, part of The Future Chronicles series of speculative fiction anthologies

 

FTL – Science Fiction’s Fudge Factor

Hyperspace, warp drive, folding space…over the years, authors have come up with lots of ways to travel faster than light, a virtual necessity if we are to portray any plausible kind of interstellar civilization.  Yes, you can build a good story even with years of transit time between even close systems.  Generation ships and crews in suspended animation can be interesting, and of course, we can restrict the action to a single solar system.  The Expanse is a great example of this kind of action.  But sooner or later we want to break away from the gentle warmth of Sol and explore the galaxy.  And we need to leave light behind in our dust (cosmic dust) as we do.

 

This is where the fudging begins.  Without turning this into a physics symposium, let's just say that even the most generous reading of our current science tells us it is impossible to do this, especially for something like a spaceship full of human beings (as opposed to a few subatomic particles).  So what do we do?  We make something up, of course.

 

This is where the options branch off.  Some authors make considerable effort to create systems of faster-than-light travel that at least seem plausible (they're not).  Others don't even worry about it.  They may call it a hyper-jump or a Jaworsky Field (after the fictional inventor), but they don't even try to explain how it functions.  It can also be a naturally occurring phenomenon, a warp point, for example, or something built by others (possibly ancient aliens now mysteriously vanished).  But one way or another, we will get the spaceships from system to system.

 

Sometimes, however, there is method to the madness, though it is often driven by plot rather than science.  For example, look at something like Star Trek.  The Enterprise flits all across space, seemingly unconcerned with refueling or even maintenance, at least unless someone sneaks onboard and scrags the dilithium crystals.  This is a great system when you want your ships to be able to get anywhere, to function at maximum efficiency even when they are lost and cut off.  But what if you want the reality of travel to impose greater restrictions on your space fleet?

 

Other approaches are based on a more fixed system of point-to-point travel.  I've used warp gates in my Crimson Worlds series.  These largely unexplained natural phenomena allow travel back and forth between two star systems that are light-years apart.  A system like this offers a number of advantages, especially for the writer of military science fiction.  It takes space, in all its three-dimensional glory, and reduces it to a series of connections.  It rationalizes battle lines, and it creates a value structure for star systems, making those with larger numbers of gates leading to cool places worth fighting over.

 

FTL systems can also be used to regulate the pace of travel and warfare in space.  Perhaps ships can “jump” anywhere, without the need for warp gates or the like.  But they can only go so far, and then they need to stop and refuel…and possibly have repairs done.  This can drive the plot in a powerful way.  Why is this backwater world so important?  Why are there giant battleships in orbit?  Because it is on the invasion route into the heart of a space empire!  This can be used to create something akin to the “island hopping” campaigns of World War II, as fleets maneuver to secure bases along invasion routes.

 

So the next time you pick up a new space opera, stop and think about whether there was more than made-up science in the author's mind.

 


JAY ALLAN currently lives in New York City, and has been reading science fiction and fantasy for just about as long as he's been reading. His tastes are fairly varied and eclectic, but his favorites are military and dystopian science fiction and epic fantasy, usually a little bit gritty.

Jay writes a lot of science fiction with military themes, but also other SF and some fantasy as well, with complex characters and lots of backstory and action. He thinks world-building is the heart of science fiction and fantasy, and since that is what he's always been drawn to as a reader, that is what he writes.

Telepathy – From Science Fiction to Reality

“Any sufficiently advanced technology is indistinguishable from magic.”

— Arthur C. Clarke

 

During the Golden Age of science fiction, John W. Campbell, Jr.'s Astounding Science Fiction was in the vanguard of popularizing stories that centered on humans with enhanced mental abilities, and on how ordinary society might look at people with those abilities, notably with A.E. van Vogt's serialized novel Slan and the similarly themed stories that collectively made up Henry Kuttner's Mutant.

 

Indeed, the first Hugo Award was given in 1953 to a novel that revolved around telepaths. The Demolished Man, by Alfred Bester, is a police procedural science fiction story set in a world where telepathy has become commonplace, although so-called espers have varying degrees of ability.

 


The Demolished Man by Alfred Bester

 

That this work has become a landmark in the genre is evidenced by the nods paid to it, as in the television series Babylon 5, where the author lends his name to one of its recurring characters, Psi Corps officer Alfred Bester, played by the iconic Walter Koenig of Star Trek (whose Vulcans were also able to mind-meld, sharing thoughts, memories, and knowledge with others through physical contact).

 

Today this melding of minds, this staple of science fiction, is coming closer to reality than many of us may realize.

 

In his book Physics of the Impossible, Michio Kaku, noted futurist and Professor of Theoretical Physics at the City College of New York, classifies impossibilities into three types. Class III impossibilities are what we normally think of as truly impossible: things that violate the known laws of physics and cannot become real, at least not according to our current understanding of science; these include perpetual motion machines and precognition. Class II impossibilities are things that may be realizable, but only in the far future, such as faster-than-light travel.

 

According to Professor Kaku, telepathy is a Class I impossibility. These are phenomena that don’t violate the known laws of physics, and indeed may become reality in the next century.

 


A meeting of minds

 

Never mind the next century—some scientists believe the age of telepathy may be upon us.

 

The first clue? That people lacking one or more of the normal five senses can now, in certain situations, be given them.

 

Since the 1960s, around 350,000 people who were profoundly deaf or severely hard of hearing have been fitted with cochlear implants, providing them with a sense of sound where previously there was none. Essentially, a microphone picks up sounds, which are filtered by a speech processor and sent as an electronically coded signal to a transmitter behind the ear. The transmitter relays the signal to an array of up to twenty-two electrodes threaded into the cochlea, which stimulate the auditory nerve and send the impulses on to the brain.

 

Following European approval in 2011, the United States Food and Drug Administration in 2013 approved for use the first retinal implant. The system uses a video processing unit to transform images from a miniature video camera into electronic data, which is then wirelessly transmitted to a sixty-electrode retinal prosthesis implanted in the eye, replacing the function of degenerated cells in the retina. Although vision isn’t fully restored, the system allows those affected with age-related macular degeneration, or with retinitis pigmentosa—a condition which damages the light-sensitive cells lining the retina—to better perceive images and movement.

 


Retinal implant

 

Similar advances are being reported for the other three senses of touch, smell, and taste.

 

But what about the sixth sense?

 

In my own speculative fiction universe, electronically augmented telepaths make use of technologies akin to magnetic resonance imaging (MRI) to associate perceived images of neural activity with a subject’s memory palace in his brain. This is a key point for my conception of the protagonist of my short story Trauma Room, a man who can use augmented telepathy to traverse a subject’s thoughts and memories using the method of loci.

 


Trauma Room by Samuel Peralta

 

Today, functional MRI has actually been used to sense words being thought by a subject, or to discern the images being formed in the brain as a subject watches a movie. It's still very mechanical, matching measured patterns of brain activity against a huge database of responses to benchmark words or images, but it's the same big-numbers principle that enabled the IBM Deep Blue chess computer to win against then-World Champion Garry Kasparov in 1997.
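As a toy illustration of that matching principle (my own sketch, not any lab's actual pipeline), here is a Python example in which a new activity pattern is decoded simply by finding the benchmark pattern it most resembles. The words, the feature count and the data are all random stand-ins.

```python
# Toy "decode by matching" sketch: pick the benchmark word whose stored activity
# pattern best correlates with a new measurement. All data are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 500   # e.g. number of voxels in a region of interest (hypothetical)

# Hypothetical library of average activity patterns, one per benchmark word
templates = {word: rng.normal(size=N_FEATURES)
             for word in ("tree", "house", "dog", "music")}

def decode(pattern: np.ndarray) -> str:
    """Return the benchmark word whose template correlates best with the pattern."""
    scores = {word: np.corrcoef(pattern, tmpl)[0, 1]
              for word, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Simulate a noisy new measurement of someone "thinking about" a dog
measurement = templates["dog"] + rng.normal(scale=0.8, size=N_FEATURES)
print(decode(measurement))   # almost certainly prints "dog"
```

Real decoding systems are enormously more sophisticated, but the brute-force flavour is the same: the more benchmark patterns on file, the finer the thoughts that can be told apart.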

 

In the same year that The Demolished Man was published, Theodore Sturgeon's More Than Human also came out. It's the story of several people with extraordinary abilities who are able to blend those abilities together and achieve human transcendence. The same theme—of humans transcending ordinary humankind—is explored in Time is the Simplest Thing, by Clifford D. Simak. It can be argued that a similar sort of communal experience—if not transcendence—is already part of our experience, with the spread of the Social Web.

 

It’s only a matter of time before all the input and output devices we have—keyboards, flat screens, heads-up displays—become obsolete. Why should you have to type or dictate information into a computer, when you can control it directly by thought? Why project information onto your eyes when you could send information directly into the brain? In time, many of us may be direct input/output nodes into the cloud.

 

Science fiction?

 


Direct brain interfacing

 

We live in a world where cochlear implants are already helping the deaf to hear, and retinal implants are beginning to help the blind to see.

 

We live in a world where smartphones and connected wearable devices—watches, glasses, health and fitness monitors—simultaneously receive and broadcast information to and about us through the cloud of the Internet.

 

We live in a world where deep brain stimulation is routinely used in therapies to address Parkinson’s disease, where implants in the brain allow people to bypass a broken spinal cord and move hands, arms, limbs with the power of thought.

 


Augmented reality heads-up display

 

In fact, we live in a world where real telepathy has already been achieved. A team at Duke University in North Carolina has, for the first time, demonstrated a direct communication interface between two brains. In the Duke experiments, two thirsty rats are placed into separate cages. They cannot see or hear each other, but their brains are wired together via electrode implants in their motor cortices. Each rat will be rewarded with a sip of water if it pushes the correct one of two levers. In the first rat’s cage, a light comes on above the correct lever to let the rat know which lever to push—but there is no such indicator in the second rat’s cage.

 

The experiment, then, measures whether, when the first rat pushes the correct lever, it sends a brain-initiated signal to the second rat, which must then correctly interpret the signal it experiences in its own brain, and push the correct lever.

 

The technology is simple: implanted electrodes capture the signals from the firing of neurons in the motor cortex and translate them into binary code, then send the signal—via a wire, wirelessly, or over the Internet to another location—to the electrodes in the other brain, where it is translated back into neural signals.

 

Sheer chance would have the second rat pushing the correct lever 50% of the time. In fact, the rat chose the correct lever between 60% and 85% of the time. This was true even when one animal was in North Carolina and the other was in Brazil.
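A quick back-of-the-envelope calculation (my own illustration, using a hypothetical trial count rather than the Duke team's actual statistics) shows why hit rates like these matter: under pure chance, a long run of trials almost never lands that far above 50%.

```python
# How likely is a given hit rate under pure 50/50 guessing? Exact binomial tail.
# The number of trials below is a hypothetical stand-in, not the paper's figure.
from math import comb

def p_at_least(successes: int, trials: int, p: float = 0.5) -> float:
    """Probability of getting at least `successes` hits in `trials` by chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

TRIALS = 100   # hypothetical number of lever presses
for rate in (0.60, 0.70, 0.85):
    hits = round(rate * TRIALS)
    print(f"{rate:.0%} correct: chance probability ~ {p_at_least(hits, TRIALS):.1e}")
# Roughly: 60% happens by luck only about 3% of the time, and 85% essentially
# never, which is why such results point to genuine information transfer.
```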

 

How much longer before what you read in the following pages is no longer science fiction?

 


The Future of the Mind by Michio Kaku

 

In The Future of the Mind, Professor Kaku notes, “We have learned more about the brain in the last fifteen years than in all prior human history, and the mind, once considered out of reach, is finally assuming center stage.”

 

Science fiction writers peer into possible futures, using a literary form of precognition, as it were. And so we follow that grand tradition, celebrating this new Silver Age of fiction, an age of online publishing and digital books, an age where we are surrounded by wonderment and wonders, where science, in many ways, has become magical.

 


SAMUEL PERALTA is a physicist and storyteller. An Amazon bestselling author, he is also the creator and driving force behind the Future Chronicles series of speculative fiction anthologies, with 14 consecutive titles ranking at the top of the Amazon SF Bestseller lists, several hitting the overall Amazon Top 10 Bestsellers list. His own work has been recognized in Best American Science Fiction and included in the author community anthology for the John W. Campbell Award for Best New SF Writer.


This article was first published, in slightly different form, as the Foreword to The Telepath Chronicles


The Telepath Chronicles – part of The Future Chronicles anthology series

 

Artificial Intelligence: A Pragmatic and Ethical Dilemma

“Alexa, stop!!”

I shout this at least a dozen times a day, whenever my digital friend goes completely off the rails over a seemingly simple request. It got me thinking about how far we've come in the quest for artificial intelligence and, in moments like this, how far we still have to go.

I’ll preface this by stating that I’m not a software or computer engineer. My degrees are in aeronautics and electronics, so this discussion will necessarily be more abstract than technical. Think of it as more of a fun intellectual exercise than a serious dissertation on the subject, a writing prompt, if you will.

Artificial intelligence, true AI, has been a staple of science fiction since the 1800s, over a century before the first true computer. In his 1872 novel Erewhon, Samuel Butler included three chapters comprising The Book of the Machines, which addressed the possibility that machines might develop consciousness through Darwinian selection. While dismissed and ridiculed at the time, Butler's story was a cautionary tale of what could happen should a sentient machine arise.

Since The Book of the Machines, science fiction has sought to address what a future might look like when humanity lives alongside intelligent machines. These works, both literary and cinematic, tend to fall into two broad categories: utopian and dystopian. Some depict a world in which machines and humans live in harmony and as equals; others tell of a world in which our creations turn on us and supplant us as the masters of our planet. So whose version will prove to be more accurate?

If we accept that a sentient intelligence might arise through a type of natural selection, as Butler first suggested, it will have come about through the brutal process of evolution, and the idiom "survival of the fittest" may end up being more than just a clever expression. An intelligence created spontaneously via a random set of favorable conditions could very well consider humanity an imminent threat and take appropriate measures, especially in its infancy. Given the increasing amount of networked automation in the infrastructure we depend on for survival, that scenario could quickly morph from an idle curiosity into a grave threat.

On the brighter side, what if the first AI machines were the result of careful intent and built with a specific purpose? Science fiction is loaded with beloved androids and robots, each with their own personalities and noble motivations. These characters are usually highly anthropomorphic, both in appearance and demeanor, and typically aren't distinguishable from their human counterparts until the author provides a physical description. I find nothing inherently wrong with this hopeful outlook on what intelligent machines could be like, and I even have one as a favorite character in my own adventure sci-fi series. That being said, I also feel this is the least likely scenario, for a few reasons.

As I yell at Alexa one more time, trying to get her to change the song that's currently playing to the one I actually meant, I'm awed at what's now commercially available under the misleading label of AI. Alexa is a very convincing simulation of a petulant five-year-old who refuses to just do what she's asked or (I'm convinced) deliberately misunderstands me. Despite the fact that I call the device by a name and interact with it conversationally, at no time am I unaware that Alexa—impressive though she may be—is nothing more than a set of predetermined responses and clever programming.

You may also remember Microsoft's recent (and tragically misguided) “Tay.” The Twitter chatbot was a much-publicized experiment that was said to learn and adapt the more it interacted with users on the social media platform. Within the span of twenty-four hours Tay had become foul-mouthed, a howling bigot, and a Holocaust denier. (So in that way I suppose Tay was exactly like most of Twitter. I'm only partially kidding.)

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

The experiment was quickly shut down, although Tay was briefly reactivated later and promptly had a complete meltdown, discussing the pros of drug use while in front of law enforcement.

While Tay and Alexa are entertaining, albeit for very different reasons, they have raised some concerns within the industry as to what happens when, and if, a sentient AI is developed. Look at how far these interactive and adaptive interfaces have come in just the last five years. The curve has been rising exponentially as processing devices and memory become smaller, cheaper, and more efficient, allowing for software of a complexity that was previously thought to be impossible. For the first time since the idea was dreamed up in the 1800s, the question of intelligent machines is beginning to shift in the minds of many researchers from “Can we?” to “Should we?” The moral and ethical ramifications of creating a free-thinking being are profound when we dig into issues like what individual rights exist for something that began life as a piece of lab equipment.

While I was recently penning a new character for a different series—an AI that “emerged,” so to speak, and exists only in software—these were some of the thoughts rattling around in my mind, and they led me to these final questions. Will we even recognize a sentient artificial intelligence when we encounter it? At the speed with which the average computer today can process information, would such a being see any benefit in engaging in something as primitive as spoken language with a creature that thinks so imprecisely and so comparatively slowly? Will it be driven by the needs of its biological counterparts and find ways to procreate? That is an interesting proposition, given the amount of aggregate computing capability available with the advent of cloud-based processing: a motivated, intelligent AI could spawn an untold number of clones in the blink of an eye.

This has been just a brief scratching of the surface of a subject with daunting technical hurdles and many ethical pitfalls. My gut instinct tells me that a true AI will emerge as the result of thousands of hours of hard work by dedicated researchers and engineers, as opposed to a spontaneous event that pops up out of the ether, but I couldn't even begin to hazard a guess as to how soon that could be. It wouldn't surprise me if they announced a breakthrough tomorrow any more than it would if I lived the rest of my life without that definitive eureka! moment. But, as with most lofty goals, the journey is its own reward. Maybe—just maybe—all we'll get is an app that actually knows what song we're trying to play. In the end, that alone might be worth the effort.


Joshua Dalzelle is a USA Today bestselling author, an Amazon Top Ten bestselling science fiction author, and the creator of the hugely popular Omega Force series.


Science, Progress, and Science Fiction

As a novelist I often get categorized as a ‘hard’ science fiction writer, a label I've never been entirely certain fits, because I absolutely make use of the customary handwavium and even the occasional unobtainium. Of course, I do try to at least explain my speculative technologies within the framework of real scientific hypotheses and theories… and that is where things can get sticky at times.

For your average reader in most genres, cutting-edge research isn't on the daily reading list. That's just not the case in science fiction, however. Sci-fi readers tend to enjoy science, or at least knowledge of it, just as much as they enjoy science fiction. And that's awesome, but it does make for interesting times as an author, because science advances, and does so quickly, so sometimes your brilliant (or just acceptably clever) scientific plot point can be turned into fantasy magic overnight by some PhD at CERN or another research facility.

Ok, by this point it sounds like I’m complaining about science ruining my novels.

Not even close.

When you’re writing speculative works, you have to expect that some (most) of your speculations will be wrong. That’s just par for the course unless you happen to be a PhD with access to billions of dollars’ worth of equipment and a series of theories that you’ve, for some reason, not already put out to your peers.  If you’re that guy, I have to question your priorities.

Still, it can be jarring to have something you specifically wrote about be chucked out by scientists, even if you knew it was coming anyway. It’s happened three or four times over my time as a writer, and each time I go through the same stages of response.

First, there is the automatic face palm.

Yeah, that moment where you’re just grateful that you weren’t drinking anything when you found out, otherwise you know you’d have a mess to clean up. Your brain goes immediately to the vilest epithets you can imagine which, for me, is usually something out of Bugs Bunny… (Don’t judge me.)

Thankfully that only lasts for a few seconds because, hey, this is the game we play and we play it because we love it. Any advance in science is good for science fiction. When a door closes, a dozen others unlock, because that’s just how huge the universe is. Maybe someday we’ll know so much about how things work that every theory that’s disproven somehow makes the universe smaller, but that day isn’t going to be today.

So that brings us to the second stage, the question of whether we can adjust the story to work. This is an important question, particularly if the story is currently ongoing. If we're working on a series and we know that there will be another novel coming out, or perhaps more, then we have to decide whether we're going to stay in our now-‘fantasy’ world, or try to wrench the laws of physics back to reality as we'd like to know them.

Sometimes this is easy, especially with cutting-edge theories. Quantum mechanics is such that you can bury a lot of crimes in the uncertainty of string theory or M-theory. Sometimes, though, it can't be done without retroactively messing with novels you've already written and, quite likely, that other people already love.

Don’t DO that.

It’s better to write fantasy than mess with the stuff people already love.

Ok, maybe it's a close call… I mean, it is fantasy and all. (I'm kidding! Relax, I like fantasy, I'm just making a point about so-called hard science fiction here.)

So we finally get to the third stage, Acceptance.

Yeah, we get there faster than through the stages of grief, but we're science fiction types. We're just awesome that way.

Whether you’ve managed to fix the problem, or you’ve decided that it can go play in the Elysian Fields for all you care, it’s time to put it aside and go back to writing.

After several times through this process, I have to admit that I look forward to it now. Being proven wrong, even when it was relatively obvious, is fun. It means that you’re working with real ideas that real people are also tangling with in the real world. Even being wrong is awesome because of that connection to actual research.

We’re science fiction fans, all of us, and that connection with the cutting edge is what drives us just as much as the ancient link to the story construction itself. We care about both the future and the past, so science fiction connects both the cutting edge world we live in and the oldest art we know of…

Storytelling.


 

Evan Currie has been writing both original and fan fiction works for more than a decade, and finally decided to make the jump to self-publishing with his techno-thriller Thermals.

Since then Evan has turned out novels in the Warrior's Wings series, the Odyssey One series, and the first book in an alternate history series set during the height of the Roman era. From ancient Rome to the far-flung future, Evan enjoys exploring the possibilities inherent when you change technology or culture.

In his own words, “There's not much I can imagine better than being a storyteller.”