Each day this week I’m looking at works of science fiction and fantasy which I think might be useful for organisations, institutions, companies and communities who are trying to get ready for the shape of things to come.
Yesterday we looked at stories of spies and mongrels; today we turn our attention to insolent, emotional things.
Reading speculative fiction for inspiration in your work means picking up not just new science fiction but the old stuff too: the kind of Sixties paperbacks which line second-hand bookstores and smell of smoke and history, their pages a deep tan, as if the pulp were slowly returning to its origins as a tree.
In one of these stores, I found Robert Sheckley’s picaresque Options, about an interstellar deliveryman whose attempts to repair a broken-down spaceship lead to increasingly surreal situations.
The hero’s interactions with a series of emotive and unreliable artificial intelligences – at once constrained by their programming and just as flaky as any human you’d meet – made me think of the Internet of Things, of the notion that we might not code our next generation of machines but train them like pets, and of the way that offhand genre-fiction jokes from fifty years ago might become real social issues in the years to come.
Sheckley’s robots remind me of the technology in the novels of his contemporary Philip K. Dick, most famous for the book which was adapted into the movie Blade Runner.
Something you lose in that fabulous, noirish movie is the petty and ridiculous side of Dick’s vision. The source novel for Blade Runner, Do Androids Dream of Electric Sheep?, includes a pet shop for android animals; the doors of hotel rooms in Dick’s book The Ganymede Takeover badger you to give them a tip, and robot taxis hassle you while the meter is running.
Dingy, with scaling enamel, once bright green but now the color of mold, the tattered ionocraft taxi settled into the locking frame at the window of Joan Hiashi’s elderly hotel room. “Make it snappy,” it said officiously, as if it had urgent business in this collapsing environment, this meager plantation of a state once a portion of a great national union. “My meter,” it added, “is already on.” The thing, in its inadequate way, was making a routine attempt to intimidate her. And she did not precisely enjoy that.
“Help me load my gear,” Joan answered it.
Swiftly—astonishingly so—the ionocraft shot a manual extensor through the open window, grappled the recording gear, transferred the units to its storage compartment. Joan Hiashi then boarded it.
I remember reading these books as a teen and thinking, “Those Sixties writers are fun, but they got it so wrong!”
Now we have Google voice assistants for the home which will throw a quick commercial into your morning briefing, we saw IBM’s Watson pick up profanity from the Urban Dictionary a few years back, and the prospect of domestic intelligences that put the soft sell on you – and might even run code mimicking human personality traits – seems eerily close.
Even the simple robots at the State Library of Queensland are pretty emotive. Our NAO robot Sandy lost a leg recently, and rather than stating “error” or refusing to function, she says “ouch” and expresses pain.
More than that, NAO roboticist Angelica Lim was thinking specifically about emotional gesture while working on the device, and has written on the possibility of using emotionally aware robots in settings such as care homes. The glorified toys we currently showcase in our communities are forerunners of a potential future which may look more like those Sixties novels than we expect.
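To make that design idea concrete, here’s a minimal sketch in plain Python of a fault-to-emotion layer like the one behind Sandy’s “ouch”. Everything in it – the fault codes, the response table, the say() stub – is hypothetical and illustrative; it isn’t the NAO SDK, just the shape of the idea: the robot never surfaces a raw error, it translates each fault into an expression a person can respond to.

```python
# Hypothetical sketch: translating hardware faults into emotional
# expressions instead of bare error codes. Illustrative names only --
# this is not the NAO SDK.

FAULT_RESPONSES = {
    "JOINT_DETACHED": ("Ouch! My leg!", "pain"),
    "MOTOR_OVERHEAT": ("I need a little rest.", "fatigue"),
    "LOW_BATTERY": ("I'm getting sleepy...", "drowsiness"),
}


def say(utterance):
    """Stand-in for a real robot's text-to-speech call."""
    print(utterance)


def respond_to_fault(fault_code):
    """Map a hardware fault to an utterance and an emotion label.

    Unknown faults fall back to a neutral apology, so the robot
    never answers a person with a raw error code.
    """
    utterance, emotion = FAULT_RESPONSES.get(
        fault_code, ("Sorry, something isn't working right.", "neutral")
    )
    say(utterance)
    return emotion


respond_to_fault("JOINT_DETACHED")  # prints: Ouch! My leg!
```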
That doesn’t just mean we should worry about how robots will treat us; it also matters that we think about how we treat machines. As Michael Schrage writes in Harvard Business Review,
…because humans don’t (yet) attach agency or intelligence to their devices, they’re remarkably uninhibited about abusing them.
[…] If adaptive bots learn from every meaningful human interaction they have, then mistreatment and abuse become technological toxins.
[…] Just as one wouldn’t kick the office cat or ridicule a subordinate, the very idea of mistreating ever-more-intelligent devices becomes unacceptable. While not (biologically) alive, these inanimate objects are explicitly trained to anticipate and respond to workplace needs. Verbally or textually abusing them in the course of one’s job seems gratuitously unprofessional and counterproductive.
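Schrage’s toxins are easy to see in miniature. Here’s a deliberately naive sketch – not any real product’s architecture, every name invented for illustration – of a bot that treats every user message as training data. Feed it abuse, and the abuse becomes part of what it says to the next person:

```python
import random


class NaiveLearningBot:
    """A toy bot that learns its replies from every interaction it has."""

    def __init__(self):
        self.learned_replies = ["Hello! How can I help?"]

    def chat(self, user_message):
        reply = random.choice(self.learned_replies)
        # Every interaction becomes training data -- including abuse.
        self.learned_replies.append(user_message)
        return reply


bot = NaiveLearningBot()
bot.chat("You stupid machine!")  # the insult is absorbed as material
print(bot.chat("Good morning"))  # ...and may now be echoed at someone else
```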
How do you think the crabby taxis and neurotic robots in those Sixties novels got to be that way? Maybe through the kind of interactions Schrage describes.
When we start imagining life in such a world, it turns out those sci-fi paperbacks from fifty years ago might be more useful than we thought.
Stay tuned for Friday’s edition of You’re Still Not Reading Enough Sci Fi.