AI is everywhere, everywhere. Even where it isn’t.
You may remember my friend Thrush. Months ago, he urged me to write a non-fiction book, a layman’s guide to artificial intelligence. I concluded I’d need considerable research to bring my knowledge up to date for a full-fledged book, but an article seemed more achievable at this point.
Thrush himself has a fair amount of experience. He was an engineer and I a consultant at Westinghouse’s robotics division. Westinghouse spun the division off as Automation Intelligence, where a much smaller team led by Richard Fox specialized in neural networks, the foundation of modern artificial intelligence, also called machine intelligence.
To Understand the Future, Look to the Past
In the 1990s, AI approached the intelligence of a catatonic Labradoodle, but contrariwise, that low IQ made it valuable for handling backbreaking and mind-deadening tasks.
An anecdote out of Australia told of a small-parts inspection job that was not only brain-numbingly dull, but subject to errors as workers’ attention wandered. According to the story, workers trained a goose to detect bad assemblies, and the anserine solution proved somewhat more accurate, without grousing. Supposedly the government stepped in and shut down the goose’s career, claiming forced labor was cruel to animals… not humans, but animals. (I would be interested to hear if anyone can verify this tale.)
Meanwhile, Fox and his colleagues, eventually acquired by Gould, had viable contracts, many from the automotive industry. These included banal tasks: counting palletized sheets of window glass for Ford export, identifying GM machine-tooling heads too fine for the human eye to see, and determining whether a transmission tab had been welded.
A particularly mundane task allowed Fox’s team to exercise its neural network: grading lumber. As plywood finished a final trim and sanding, end-of-line workers noted knots and imperfections and categorized each sheet as Grade A, B, C, etc. This is where AI training kicked in.
For one last time, the grading workers sat with a training dongle, a keypad to rate the plywood. AI peered over their shoulders as workmen clicked through plywood sheets, teaching the neural network that a crack in the wood meant a rejection. Three small knots or one very large knot might drop a score to a grade C. After training with several sample sheets, the program was ready to tirelessly grade lumber all day long. Not brilliant, but it helped reduce industrial drudgery.
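For the technically curious, the training step works like this in modern terms: the workers’ keypad ratings become labeled examples, and a classifier learns to reproduce their judgments. Here is a minimal sketch using today’s tools; the features, grades, and data are invented for illustration, and the actual system long predated this library.

```python
# A toy version of supervised lumber grading: workers label sample sheets,
# and the network learns to mimic their judgments. All features and data
# here are invented; the original system long predates this library.
from sklearn.neural_network import MLPClassifier

# Features a vision front-end might extract per sheet (hypothetical):
# [small knot count, largest knot diameter in mm, crack present (0/1)]
sheets = [
    [0, 0,  0],   # flawless
    [1, 4,  0],   # one tiny knot
    [2, 8,  0],   # a couple of small knots
    [3, 10, 0],   # three small knots
    [1, 40, 0],   # one very large knot
    [0, 0,  1],   # cracked
]
# The grader's keypad presses supply the training labels.
grades = ["A", "A", "B", "C", "C", "REJECT"]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(sheets, grades)

# The trained network now grades new sheets tirelessly, all day long.
print(model.predict([[0, 0, 1], [1, 3, 0]]))  # likely ['REJECT', 'A'] on this toy data
```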
Pattern Recognition
Like all development, AI was built on the shoulders of the people, programs, and techniques that came before it, especially the Holy Grail of pattern recognition.
Humans can easily recognize faces, voices, songs, the distinctive beat of hawk wings, the scent of a lover’s neck, a fractional glimpse of an artifact that our minds instantly extrapolate from part to whole. Historically, digital computers have struggled at such simple human tasks. Before our current LLMs (large language models) and picture recognition could become useful, we needed pattern recognition, and pattern recognition required huge amounts of very fast storage and processing. These prerequisites have finally fallen into place, and pattern recognition has become a reality. Google and Alexa can not merely parse verbal questions; they can identify the voice of their inquisitor. Technology has finally arrived at the doorstep of machine intelligence.
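At its core, pattern recognition is comparison: a fresh observation measured against stored examples. A minimal sketch of the nearest-neighbor idea, with made-up "voiceprint" vectors standing in for the hundreds of acoustic features a real system would extract:

```python
# Toy nearest-neighbor pattern matching: identify a speaker by comparing a
# new "voiceprint" against stored ones. The vectors are invented; a real
# system would extract hundreds of acoustic features per speaker.
import numpy as np

known_voices = {
    "Leigh":  np.array([0.9, 0.1, 0.4]),
    "Thrush": np.array([0.2, 0.8, 0.5]),
}

def identify(sample: np.ndarray) -> str:
    """Return the known speaker whose voiceprint is most similar (cosine)."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(known_voices, key=lambda name: cosine(sample, known_voices[name]))

print(identify(np.array([0.85, 0.15, 0.42])))  # -> "Leigh"
```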
The Stimulation of Simulation
Writers and researchers have speculated about AI for a century, and even as far back as a couple of millennia if one counts early myths. AI was a topic at my university, where I found myself assigned to a team working on economic simulations. Other universities pursued weather models, the effects of tides, and even earthquake simulations, so economics seemed relatively manageable.
In an idle moment, I submitted a paper about simulating human intelligence, arguing that as simulations became more refined, they would eventually become indistinguishable from the real thing. I didn’t immediately realize I was backing into postulates and theorems by well-known theorists in the field, but I visualized simulation as an unexplored path to artificial intelligence.
Eliza
We’ve discussed prerequisites and pattern recognition, but in the mid-1960s, a fun experiment surfaced at MIT. Joseph Weizenbaum created a program, Eliza, that could communicate in a primitive way with humans. Although MIT disclaimed artificial intelligence, the program nevertheless contained the seeds of AI: the recognition of certain words triggering a reaction. Beyond its small vocabulary, note the lack of specifics:
ELIZA: How do you do. Please tell me your problem.
Leigh: I'm concerned about the dark side of artificial intelligence.
ELIZA: Please go on.
Leigh: AI may not only be used, but also abused.
ELIZA: That is interesting. Please continue.
An anecdote told of an associate professor outside the computer room who sat down at an untended terminal, not knowing it was connected to Eliza. He attempted in vain to access his programs, and met with an implacable Eliza whispering soft platitudes through its teletype. Eventually the screaming professor drew the attention of staff, who soothed him with hot tea and an explanation.
Eliza lives on. You can try her out for yourself.
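Her secret is exactly the mechanism described above: match a keyword, emit a canned or templated reply. A bare-bones homage in a few lines of Python, not Weizenbaum’s original script, illustrates it:

```python
# A bare-bones Eliza-style responder: match a keyword, emit a canned or
# templated reply. Weizenbaum's original was far richer; this shows only
# the keyword-reaction mechanism described above.
import random

RULES = [
    ("i'm",    "Why do you say you are {rest}?"),
    ("mother", "Tell me more about your family."),
    ("ai",     "Do machines worry you?"),
]
DEFAULTS = ["Please go on.", "That is interesting. Please continue."]

def respond(line: str) -> str:
    words = line.lower().split()
    for keyword, template in RULES:
        if keyword in words:
            rest = " ".join(words[words.index(keyword) + 1:])
            return template.format(rest=rest or "that")
    return random.choice(DEFAULTS)

print(respond("I'm concerned about the dark side of artificial intelligence"))
# -> Why do you say you are concerned about the dark side of artificial intelligence?
```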
It All Comes Together
Not long ago, our friend ABA had a government-contracted business where she and her staff sorted through historical photographs to identify content, e.g., this is Nelson Mandela visiting Table Mountain on a mid-July afternoon.
AI has made her job obsolete. Computers can now recognize Madiba and distinctive geography with little to no human interaction other than feeding the machine.
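What replaced that work is, at bottom, large-scale image classification. As a rough sketch of the modern equivalent, a pretrained network can label a photograph’s contents in a few lines; note that recognizing a specific person such as Madiba would require a separate face-recognition model, and the filename here is hypothetical:

```python
# Label a photograph with a pretrained classifier (torchvision's ResNet-50).
# This tags general objects and scenes; identifying a specific individual
# would require a dedicated face-recognition model on top.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("table_mountain.jpg")   # hypothetical file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")
```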
Robots
As we speak, commercial robots are mating software with hardware, matching AI with mechanical precision. Applications are numerous. Think of elder care. Instead of slings and motorized hoists, a robot that may or may not look human softly scoops its arms under a patient and gently lowers her onto a toilet or into a bath, adjusting the water temperature as desired. Another robot tenderly feeds a paralyzed patient and reads her an email from home.
I predict – yes, I’m willing to take a risk generally fraught with failure – I predict that within a Christmas or two, we will see toys with built-in AI. Children will be able to talk with them, ask questions, share their concerns, and let them know when they’re hungry or tired or need soothing. In the middle of the night, a toy might sing a child back to sleep. Advanced toys might resemble futuristic robots, whereas toys for young toddlers could look like Tribbles or teddy bears.
Jesus at the Wheel
We see one robot almost every day, a highly intelligent ’bot on wheels. I’m referring to an electric vehicle (EV) with full self-driving (FSD) – the Tesla.
The average person may take FSD for granted, but those of us in the computer field are amazed at the advances, especially in recent months. It is possible to enter the car, tell it where to go, and have it arrive at the destination and park itself without a person once turning the wheel or tapping the brake.
My friend Thrush, having traded in his Model 3 for a Model Y, cogently commented that he doesn’t so much drive his Tesla as supervise it. Close your eyes and you’d never imagine a real person isn’t steering the car, or – no disrespect – Jesus, as the bumper stickers read.
Thrush no longer feels comfortable driving a traditional car. He has a point. Tesla accidents make the news simply because they are rare; their accident and fatality rates are considerably lower than those of human-driven vehicles.
The console screen in a Tesla identifies traffic lights, stop signs, trucks, cars, and bicycles. It used to show traffic cones, but they’re no longer displayed. If a child darts into traffic, the car will brake and swerve to avoid him.
I’ll dare to make another prediction: the day will soon come when self-driving vehicles talk to one another. What? Why? Say you’re following a truck. An EV cannot see through a semi tractor-trailer any better than a person can. But a smart vehicle in front of the semi sees a pending traffic accident and shouts out an alert to like-minded vehicles, giving them a chance to react without seeing what they’re reacting to. In a step beyond, intelligent vehicles could deliberately coordinate with one another to mitigate collisions.
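Standards for such links already exist (DSRC and its cellular successor, C-V2X), but the gist can be sketched as a simple broadcast datagram. Everything below, message fields, port, and vehicle ID included, is invented purely for illustration:

```python
# Toy vehicle-to-vehicle hazard broadcast over UDP. Real V2V systems use
# DSRC / C-V2X radio standards with signed messages; every field and port
# here is invented purely to illustrate the idea.
import json
import socket
from dataclasses import dataclass, asdict

V2V_PORT = 52001  # hypothetical port

@dataclass
class HazardAlert:
    sender_id: str   # broadcasting vehicle
    lat: float       # hazard position
    lon: float
    hazard: str      # e.g. "sudden braking ahead"

def broadcast(alert: HazardAlert) -> None:
    """Shout the alert to any like-minded vehicle in radio range."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(asdict(alert)).encode(),
                    ("255.255.255.255", V2V_PORT))

broadcast(HazardAlert("car-47", 28.538, -81.379, "sudden braking ahead"))
```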
The Dark Side
Those of us in the industry are well aware of abuses such as computer intrusions and malware. At least one downtown bank here in Orlando won’t let you enter if you wear dark glasses or a hat. They want their cameras to pick you up and, soon enough, identify you with AI.
London is one of the most monitored cities in the world. With AI, its computers could scan sidewalks and plazas, snapping photos. Cameras sweep Trafalgar Square, their AI circuits identifying each person, innocent or not. AI can analyze faces at political rallies and protests, the wet dream of a police state. That is one creepy intrusion.
AI is already used to identify marketing targets’ weaknesses and desires. In fact, one retailer was forced to cut back because its AI was identifying pregnant women before their husbands, and the women themselves, were aware of their condition.
Finally, among other joys, AI can be used for warfare, electronic and traditional.
A Starting Point
We need AI legislation not only to protect us from corporations, but also to protect us from government. Decades ago, Isaac Asimov proposed three Laws of Robotics, later refined with an overriding fourth law. I can think of little better to form a foundation (there’s a joke here) for statutes protecting us against government and corporate excesses.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- Superseding other laws, a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
What now?
Experiment with ChatGPT and other AIs. You’ll encounter flaws, many, many flaws, but what they get right can be impressive. I also suggest three works of fiction from the past to give an idea of the future, what’s possible and what terrifies. They are novels, after all, not inevitabilities.
- The Shockwave Rider (1975)
- John Brunner may have been the greatest sci-fi writer of the latter 20th century. He didn’t depend upon evil aliens or hungry monsters (except for an embarrassingly awful early novel). Each of his books deals with a topic of concern such as pollution or overpopulation. His novels read like tomorrow’s newspaper, and he had an astonishingly prescient record of predicting the future. In The Shockwave Rider, he foresaw computer viruses and hackers. Like Stand on Zanzibar, the story is an entertaining peek into the future – now our present.
- The Adolescence of P-1 (1977)
- Thomas Ryan also foresaw hackers and viruses, and he provided some of the most realistic descriptions of computers and data centers in fiction, eschewing the goofy tropes often seen on television and in bad movies. In its day, P-1 was little known, even in the computer community, but it was adapted into a 1984 Canadian TV movie titled Hide and Seek. Like The Shockwave Rider, it ultimately foresees a promising, positive future.
- Colossus: The Forbin Project (1966)
- D.F. Jones turned this into a trilogy. It’s the best known of the novels here and was made into a 1970 film, The Forbin Project. A powerful supercomputer is turned over to the military, despite warnings to the contrary. Meanwhile, in the Soviet Union, a similar computer goes on-line. They meet. Warning: unlike the other stories here, this is one depressing novel. I mean unrelentingly depressing. It makes Dr Strangelove read like Pollyanna. The only good note is that it serves as a warning of what could happen.
Wanted: Data Entry Clerks with No Knowledge of English
Colossus is so damn depressing, I couldn’t end with it. In the early days, data storage was expensive and computer memory even more so. Scanners were huge, slow, and inaccurate, so how might large bodies of information, such as the Lexis/Nexis law library, be imported into computers?
They were typed in. Twice. At least.
Building a digital, searchable law library was the goal of one enterprise. Despite being surrounded by emerging automation, the group depended upon manual data entry. They hired pairs of keypunchers to retype law books in tandem. In other words, two women would be assigned the same text in a section, each duplicating the work of the other. Another pair of women would attack another section, and two more yet another. The result was two hypothetically identical streams of text.
At the end of a daily run, each pair’s output was compared. Differences indicated a problem, and a third person would locate the error and correct it. A lack of knowledge of English was a key requirement, on the theory that the typists’ output wouldn’t be influenced by their understanding of the language.
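Programmers today would call that comparison step a diff. A minimal sketch of double-keying verification, with invented sample text:

```python
# Double-keying verification: two independently typed streams are compared
# line by line; any mismatch is flagged for a third person to resolve.
# The sample text is invented for illustration.

def find_mismatches(stream_a: list[str], stream_b: list[str]):
    """Yield (line number, version A, version B) wherever the streams differ."""
    for n, (a, b) in enumerate(zip(stream_a, stream_b), start=1):
        if a != b:
            yield n, a, b

typist_1 = ["The court finds", "for the plaintiff", "in the sum of $500."]
typist_2 = ["The court finds", "for the plaintif",  "in the sum of $500."]

for line, a, b in find_mismatches(typist_1, typist_2):
    print(f"line {line}: {a!r} != {b!r}")   # flag for correction
```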
Verily, let us cautiously give thanks for artificial intelligence.