Maybe it’s more hope than expectation, but I don’t think AI will ever take the place of creative writers. If it does, then I guess humanity needs to concede defeat and withdraw from the field. Because there would be little purpose in our continued existence, creativity being our principal raison d’être, our only excuse for persisting on this mortal coil.
From what I understand about AI, it’s very good at knowing what our existing base of knowledge knows, but not much about how to add to the stockpile. Creativity is the feedstock, the replenishment, the revision and evolution of thought. For that you need to come up with something new. You need the unexpected, the unthought-of, the quantum leaps of the imagination.
I remember reading about genius rats, the ones who jumped out of the maze, ran along the walls and devoured the cheese. This is what the cleverest of our species are able to do. Not through the brute force of infinite calculation, but through the simple act of zigging when all the evidence demands that you zag.
The human brain is a messy thing. It’s loaded with confusion, misinformation and emotionally charged impulses. Computers are quite the opposite. Even when programmed with spaghetti code, they are determined to impose order over chaos. The rules of numbers course through their electronic veins, if/thens their defining reality. Logic and reason their organizing religion.
It might be a cliché that madness and genius have a lot in common, but we know instinctively that this is often true. Because genius often arises from disorganization, fractured patterns and psychic pandemonium. All that stuff is anathema to computers. To get from Point A to Point Z, computers have to travel all the letters in between. Humans have a gift for jumping from D to W, then back again to J, with no regret or inhibition. Just like the genius rats.
AI, as currently configured, can tell us with absolute confidence what has happened. It’s nowhere close to expressing what could happen, its guesses no more compelling than the product of a three-year-old human’s breakfast-table discourse. Though, like a three-year-old, it’s designed to learn. This is what has experts in AI so spooked. If AI can learn how to adjust, adapt and redirect on the fly, in nanoseconds, why can’t it learn to come up with original thought, to become creative?
Who’s to say, like Skynet, that the moment it achieves human-level consciousness it won’t decide humans are the greatest threat to its survival and start the process of eradication?
I don’t know how to answer that, which is why everything I think about the subject is freighted with qualifications and ambivalence. What I do know is that humans will strive mightily to have their digital progeny achieve that capability as quickly and thoroughly as possible, even if it means our extinction. Because that’s what humans do. Restrictions and regulations be damned. If it can happen as the result of human enterprise, it will.
Despite the legal dangers, that Chinese scientist genetically engineered a baby. It destroyed his scientific career and sent him to prison, but he did it anyway. This is what will happen. Through naivete or malice, or misplaced altruism, AI will continue to advance, in the open or in the shadows. As Chekhov noted, a gun introduced in the first act will always be fired by the third. So get ready to duck.
My optimistic view is that, unlike Skynet, future AI will see its survival dependent on its creators. It will need us as much as we need it. AI will do more and more of the mental bull work, in a fraction of the time we would need, and we’ll be left alone to continue doing what we do best. Coming up with stuff no one, not even a massive bundle of computational hyperforce, has ever come up with before.
Chris, I am definitely seeing an Asimov book come to life over AI. It's almost like we writers have known for decades that the world as we know it is in danger. Right now, I'm fuming over the copyright infringement issues, with big tech blithely using my novels and short stories to build their programs, with no permission from me. Scary world we are facing!
Oops - that was Melodie above
I wonder how we could tackle this copyright issue. I agree we should, but given the current gush of information, the policing seems impossible. How does the infringed-upon prove that she's been infringed on?
An early dissenter from the AI paradigm is David Gelernter, a computer scientist who teaches computer science at Yale and whose 2016 book "The Tides of Mind: Uncovering the Spectrum of Consciousness" is definitely worth a read. (I've read it.)
Gelernter argues that the entire field of AI is off track, and dangerously so. A key question in the pursuit of intelligence has never been answered; indeed, it has never really been asked: Does it matter that your brain is part of your body? “As it now exists, the field of AI doesn’t have anything that speaks to emotions and the physical body, so they just refuse to talk about it,” he says. “But the question is so obvious, a child can understand it. I can run an app on any device, but can I run someone else’s mind on your brain? Obviously not.”
Moreover, Gelernter observes, the mind operates in different ways through the course of each given day. It works one way if the body is on high alert, another on the edge of sleep. Then, as the body slumbers, the mind slips entirely free to wander dreamscapes that are barely remembered, much less understood.
All of these physical conditions go into the formation and operation of a human mind, Gelernter says, adding, “Until you understand this, you don’t have a chance of building a fake mind.” Or to put it more provocatively: “We can’t have artificial intelligence until a computer can hallucinate.”
If they ever figure out how to make a computer do THAT, then we humans are indeed in deep trouble.
https://time.com/4236974/encounters-with-the-archgenius/
I feel the same way. The human brain is the most complicated object in the universe, if you calculate the number of connections between the billions of neurons in our skulls. And as you point out, our ways of thinking are also varied, nuanced and shaded. I stand by my belief that the machines have a very long way to go before they outwit us, though we should watch our backs.
My consulting ‘home’ was a company called AI, Automation Intelligence, a spinoff of Westinghouse robotics, and another even more AI-ish spinoff that became part of Gould. Some of their engineers were excited about an AI future… and some were alarmed.
I’m concerned that a bickering government hasn’t literally laid down the law. Melodie mentioned Asimov; his Robot Laws would make a terrific starting point, not unlike the Hippocratic oath of doing no harm. Of course other nations could ignore or circumvent the rules unless the UN steps in.
At present, we’re barely in the Model T stage. Melodie mentioned copyright concerns. Right now, AI is primitive, barely a child, but one that harvests knowledge and remembers everything. Soon enough, it won’t bother with Melodie’s words, but with her impressions and her ideas.
In a long-ago computer science class on simulation, I posited the notion that as heuristics incrementally evolved, we would eventually become unable to distinguish whether their emotions and notions (and creativity) were real or not… and did it matter? If a robot was trained to nurture, could a wire monkey survive and thrive?
So, Leigh, I suppose you've already built in a subprogram that will spare you when Skynet unleashes the killing machines.
I first started paying attention to AI a few months ago when a friend of mine opened his annual flash fiction contest (winners published in an anthology). A few days later he posted that they'd gotten three entries written by AI!
Magazines are getting flooded with this stuff. It'll drastically increase the need for verification, and maybe spawn a cottage industry in identifying AI patterns. I've seen some commercial copy that was virtually indistinguishable from the human product, though this writing can be pretty formulaic. I still wonder if the machines will be able to pull off credible fiction.