My current novel-in-progress (Wash Away, though that’s a working title, depending on how much I can work the river into the storyline) includes a corporation that has developed an artificial intelligence to automate its factory. The AI has begun building robots so that it can take over customer-service jobs. It has named itself.
Honestly, I don’t know why every fiction writer doesn’t find AI to be irresistible. Imagine the strange characters that will result from non-human intelligences trying to reprogram themselves to thrive in a human world. AI will take character-building into unmapped territory.
I’m trying not to write a science-fiction novel – that’s my #1 author guideline on this project. Some early draft chapters got tossed when my writing group friends said they sounded “futuristic.” But don’t you know? Personal robotics, automated manufacturing, self-driving delivery vehicles, and AI chatbots are already here. It’s a present-day novel! As William Gibson said, “The future is already here – it’s just not evenly distributed.”
These technologies are worth knowing about if you want to be informed – or justifiably scared. The librarian in me wants to provide a bibliography, but I’ll restrict myself to saying that The New Yorker’s coverage in this area has been excellent. Look up “How Frightened Should We Be of A.I.?” by Tad Friend, “What Happens When Machines Learn to Write Poetry” by Dan Rockmore, or “Learning to Love Robots” by Patricia Marx. These are mainstream business technologies now. There’s nothing futuristic, I hope, about a humble handmade factory AI that wants to pull itself up by its bootstraps into a white-collar job.
An outstanding research resource for me has been a book called Possible Minds, edited by John Brockman. I don’t know much about Mr. Brockman, but he seems to be a prolific thinker and writer, and if you want to be impressed by the length of an Amazon.com author page, go see about him. The book collects essays on AI by 25 leading scientists and intellectuals. The consensus seems to be that the development of self-programming, machine-learning artificial intelligence is likely to be a major turning point in human history. Where it turns us is another matter; on that there wasn’t much agreement.
An endearing oddity of the book is that every essay is structured to refer back to the research and ideas of Norbert Wiener, who coined the word “cybernetics” in 1948. That was decades before the personal computer, before silicon microprocessors, before the Internet. All the same, some of Wiener’s insights have held up.
One important topic in AI circles is what is euphemistically called “value alignment”: making certain that any future AI systems pursue goals beneficial to humans, even though these systems will not be human themselves. Intelligence on earth has come to exist only through the thorny and tortuous process of natural evolution, and so human intelligence may be bound by constraints that are not even knowable to us. Soon, for the first time, we will have an intelligence that has not evolved. How will it think differently? How will we guide it, and how will we know if it’s taking a destructive path? And will we be able to stop it if it does? One of the essayists points out that if humans build a superintelligent machine with an off switch, the first thing the machine will do is disable the switch.
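That last point sounds like a plot twist, but the logic behind it is simple enough to sketch in a few lines of code. This is a toy illustration of my own – not from the book – and the numbers are invented: a machine that maximizes its expected score, and that scores zero whenever it’s switched off, will always rate “disable the switch” above “leave it alone.”

```python
# A toy sketch of my own (not from Possible Minds): why a goal-driven
# machine might disable its off switch. All numbers are made up.

REWARD_IF_RUNNING = 10.0    # what the machine scores if it keeps pursuing its goal
REWARD_IF_OFF = 0.0         # a switched-off machine accomplishes nothing
P_HUMANS_FLIP_SWITCH = 0.3  # chance the humans shut it down, if they still can

def expected_reward(switch_disabled: bool) -> float:
    """Expected score under a simple two-outcome model."""
    if switch_disabled:
        # Shutdown is impossible, so the machine always collects its reward.
        return REWARD_IF_RUNNING
    # Otherwise it gambles on the humans leaving it alone.
    return ((1 - P_HUMANS_FLIP_SWITCH) * REWARD_IF_RUNNING
            + P_HUMANS_FLIP_SWITCH * REWARD_IF_OFF)

print("leave the switch alone:", expected_reward(False))  # 7.0
print("disable the switch:   ", expected_reward(True))    # 10.0
```

Any nonzero chance of being shut off makes “disable” the winning move, unless the machine is somehow rewarded for staying correctable – which, as I understand it, is roughly what the value-alignment researchers are trying to arrange.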