Showing posts with label artificial intelligence. Show all posts

22 September 2024

AI on AI


The Impact of Artificial Intelligence in Real Life

AI robots serving in elder care

Artificial Intelligence (AI) has become an integral part of our daily lives, transforming various sectors and enhancing the way we live and work. From healthcare to finance, education to entertainment, AI’s applications are vast and continually expanding. Here are some key areas where AI is making a significant impact:

Healthcare

AI is revolutionizing healthcare by improving diagnostics, personalizing treatment plans, and predicting patient outcomes. AI algorithms can analyze medical images with high accuracy, assisting doctors in detecting diseases like cancer at early stages. Additionally, AI-powered tools can monitor patient vitals and predict potential health issues, enabling timely interventions.[01]

Leigh: I anticipate robotic nursing assistants will rapidly move into disabled and elder care. The initial robots may not look humanoid, but they will have strong and gentle arms capable of lifting patients in and out of baths and toilets. AI, and possibly AI robotic figures, may find use in alleviating patient loneliness and boredom. Chess, anyone?

AI robots serving in the classroom

Finance

In the financial sector, AI is used for fraud detection, risk management, and personalized banking. AI systems can analyze transaction patterns to identify fraudulent activities in real-time. Moreover, AI-driven chatbots provide customers with personalized financial advice and support, enhancing the overall banking experience.[02]

Education

AI is transforming education by offering personalized learning experiences. Adaptive learning platforms use AI to assess students’ strengths and weaknesses, tailoring educational content to meet individual needs. This personalized approach helps students learn more effectively and at their own pace.[03]

AI robots on the road

Transportation

AI is at the forefront of developing autonomous vehicles, which promise to make transportation safer and more efficient. Self-driving cars use AI to navigate roads, avoid obstacles, and make real-time decisions, reducing the risk of accidents caused by human error.[04]

Leigh: Vehicles like the Tesla with FSD (full self-driving) are actually robots on wheels. I suspect one advance will be the ability to communicate with like-minded vehicles. “Car 54, your view of a child running into the street is blocked, vector 13.56.”

Entertainment

In the entertainment industry, AI is used to create personalized content recommendations. Streaming services like Netflix and Spotify use AI algorithms to analyze user preferences and suggest movies, shows, and music that align with their tastes. This enhances user engagement and satisfaction.[05]

Leigh: And with that said, just for fun…


AI robots writing essays like this one

AI’s Role in Mystery Fiction Writing

For mystery fiction writers, AI offers a range of tools and capabilities that can enhance the creative process and streamline writing tasks. Here are some ways AI can be particularly useful for mystery fiction writers:

Plot Development

AI can assist writers in developing intricate and compelling plots. By analyzing existing mystery novels, AI can identify common plot structures and suggest new twists and turns. This helps writers craft engaging and unpredictable stories that keep readers on the edge of their seats.[06]

Character Creation

Creating multi-dimensional characters is crucial in mystery fiction. AI can generate detailed character profiles, including backstories, personality traits, and motivations. For instance, an AI might suggest that a detective character has a mysterious past as a former spy, adding depth and intrigue to the story.[07]

Writing Assistance

AI-powered writing tools can help authors with grammar, style, and coherence. These tools can provide real-time feedback, suggesting improvements and ensuring the narrative flows smoothly. Additionally, AI can help writers maintain consistency in tone and style throughout the manuscript.[08]

Idea Generation

When facing writer’s block, AI can be a valuable brainstorming partner. AI can generate prompts, plot ideas, and even entire scenes based on the writer’s input. This can spark creativity and help writers overcome hurdles in the writing process.[09]

Market Analysis

AI can analyze reader preferences and market trends, providing writers with insights into what themes and genres are currently popular. This information can guide writers in tailoring their stories to meet reader expectations and increase their chances of success.[10]

In conclusion, AI is a powerful tool that can significantly enhance various aspects of our lives, including the creative process for mystery fiction writers. By leveraging AI’s capabilities, writers can develop richer stories, create compelling characters, and engage readers in new and exciting ways.



Human Here

Leigh: I recently wrote about AI and a bit of what we might expect, whether sweet or sour. As I'm sure you surmised, I asked ChatGPT (a large-language model AI) to write an essay on the topic. This is the result.

AI robots competing for creativity

18 August 2024

AI-de-camp


retro-AI toaster
Sale: Latest fashion military-precision toaster equipped with ChatGPT•Alexa and automatic, Google-controlled pop-up features. Includes tactical controls, genuine UL-approved plug, USB, and 451° fuel-injected heating elements. Comes in an array of sixteen AI colors. Only $1299.95. Batteries not included. Free shipping with orders over $3600.

AI is everywhere, everywhere. Even where it isn’t.

You may remember my friend Thrush. Months ago, he urged me to write a non-fiction book, a layman’s guide to AI, artificial intelligence. I concluded I’d need a lot of research to bring my knowledge up to date for a full-fledged book, but an article might be more achievable at this point.

Thrush himself has a fair amount of experience. He was an engineer, and I a consultant, at Westinghouse’s robotics division. Westinghouse spun it off as Automation Intelligence, where a much smaller team led by Richard Fox specialized in neural networks, the foundation for modern artificial intelligence, also called machine intelligence.

To Understand the Future, Look to the Past

In the 1990s, AI approached the intelligence of a catatonic Labradoodle, but contrariwise, low IQ made it valuable for handling backbreaking and mind-deadening tasks.

An anecdote out of Australia told of a small parts inspection that was not only brain-numbingly dull, but subject to errors as workers’ attentions wandered. According to the story, workers trained a goose to detect bad assemblies, and an anserine solution proved somewhat more accurate without grousing. Supposedly the government stepped in and shut down the goose’s career, claiming forced labor was cruel to animals… not humans, but animals. (I would be interested if anyone can verify if this tale is true.)

Meanwhile, Fox and his colleagues, eventually acquired by Gould, had viable contracts, many from the automotive industry. These included banal tasks like Ford counting palletized sheets of window glass for export, GM identifying machine-tooling heads too fine for the human eye to see, and determining whether a transmission tab had been welded or not.

A particularly mundane task allowed Fox’s team to exercise its neural network: grading lumber. As plywood finished its final trim and sanding, end-of-line workers noted knots and imperfections and categorized each sheet as Grade A, B, C, etc. This is where AI training kicked in.

For one last time, the grading workers sat with a training dongle, a keypad to rate the plywood. AI peered over their shoulders as workmen clicked through plywood sheets, teaching the neural network that a crack in the wood meant a rejection. Three small knots or one very large knot might drop a score to a grade C. After training with several sample sheets, the program was ready to tirelessly grade lumber all day long. Not brilliant, but it helped reduce industrial drudgery.
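The training session described above, workers clicking the dongle while the network learns which defects map to which grade, is ordinary supervised learning. A minimal sketch of the idea in Python, using a simple nearest-neighbor vote instead of the actual neural network, and with feature names and sample sheets invented for illustration:

```python
# Supervised-learning sketch of the plywood-grading setup.
# Features and training data are hypothetical, not the Gould system's.
from collections import Counter

# Each example: (crack_present, small_knots, large_knots) -> grade,
# as if recorded from a worker clicking the training dongle.
training = [
    ((0, 0, 0), "A"),
    ((0, 1, 0), "A"),
    ((0, 3, 0), "C"),       # three small knots drop the score
    ((0, 0, 1), "C"),       # one very large knot, likewise
    ((1, 0, 0), "reject"),  # any crack means rejection
    ((1, 2, 0), "reject"),
]

def grade(sheet, k=3):
    """Classify a sheet by majority vote of its k nearest training examples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training, key=lambda ex: dist(ex[0], sheet))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(grade((1, 1, 0)))  # a cracked sheet -> "reject"
print(grade((0, 0, 0)))  # a clean sheet  -> "A"
```

The point is the workflow, not the algorithm: humans label a batch of samples once, and the classifier then applies those labels tirelessly all day long.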

Pattern Recognition

Like all development, AI built on the shoulders of people and programs and techniques that came before, especially the Holy Grail of pattern recognition.

Humans can easily recognize faces, voices, songs, the distinctive beat of hawk wings, the scent of a lover’s neck, a fractional glimpse of an artifact that our minds instantly extrapolate from part to whole. Historically, digital computers have struggled at such simple human tasks. Before our current LLMs (large language models) and picture recognition could become useful, we needed pattern recognition, and pattern recognition required huge amounts of very fast storage and processing. These prerequisites have finally fallen into place, and pattern solving has become a reality. Google and Alexa can not merely parse verbal questions; they can identify the voice of their inquisitor. Technology has finally arrived at the doorstep of machine intelligence.

The Stimulation of Simulation

Writers and researchers have speculated for a century about AI, and even as far back as a couple of millennia if one extrapolates early myths. AI was a topic at my university. I found myself assigned to a team working on economic simulations. Other universities pursued weather models, the effects of tides, and even earthquake simulations, so economics seemed relatively manageable. 

In an idle moment, I submitted a paper about simulating human intelligence, arguing that as simulations became more refined, they would eventually become indistinguishable from the real thing. I didn’t immediately realize I was backing into postulates and theorems by well-known theorists in the field, but I visualized simulation as an unexplored path to artificial intelligence.

Eliza

We’ve discussed prerequisites and pattern recognition, but in the mid-1960s, a fun experiment surfaced at MIT. Joseph Weizenbaum created a program, Eliza, that could communicate in a primitive way with humans. Although MIT disclaimed it as artificial intelligence, Eliza nevertheless contained seeds of AI: the recognition of certain words triggering a reaction. Beyond its small vocabulary, note the lack of specifics:

ELIZA: How do you do. Please tell me your problem.
Leigh: I'm concerned about the dark side of artificial intelligence.
ELIZA: Please go on.
Leigh: AI may not only be used, but also abused.
ELIZA: That is interesting. Please continue.

An anecdote told of an associate professor outside the computer room who sat down at an untended terminal, not knowing it was connected to Eliza. He attempted in vain to access his programs and met with an implacable Eliza whispering soft platitudes through his teletype. Eventually the exasperated professor drew the attention of staff, who soothed him with hot tea and an explanation.

Eliza lives on. You can try her out for yourself.
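Weizenbaum’s trick, match a keyword, emit a canned and deliberately non-specific reflection, takes only a few lines to sketch. The patterns below are illustrative inventions, not Weizenbaum’s actual script:

```python
# Toy Eliza: each rule maps a keyword pattern to a canned, vague reply.
# Rules here are invented for illustration.
import re

RULES = [
    (re.compile(r"\bI'?m (worried|concerned) about (.+?)[.?!]*$", re.I),
     "Why are you concerned about {1}?"),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     "Tell me more about your family."),
    (re.compile(r"\byes\b", re.I), "You seem quite positive."),
]
DEFAULT = "Please go on."  # the catch-all that keeps the patient talking

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(eliza("I'm concerned about the dark side of AI."))
# -> Why are you concerned about the dark side of AI?
print(eliza("AI may not only be used, but also abused."))
# -> Please go on.
```

Note how the second reply matches the transcript above: with no keyword recognized, the program falls back to an all-purpose platitude, which is exactly why Eliza could seem attentive without understanding anything.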

It all comes together.

Not long ago, our friend ABA had a government-contracted business where she and her staff sorted through historical photographs to identify content, e.g., this is Nelson Mandela visiting Table Mountain on a mid-July afternoon.

AI has made her job obsolete. Computers can now recognize Madiba and distinctive geography with little to no human interaction other than feeding the machine.

Robots

As we speak, commercial robots are mating software with hardware, matching AI with mechanical precision. Applications are numerous. Think of elder care. Instead of slings and motorized hoists, a robot that may or may not look human softly scoops arms under a patient and gently lowers her on a toilet or into a bath, adjusting water temperature as desired. Another robot tenderly feeds a paralyzed patient and reads her an email from home.

I predict– yes, I’m willing to take such a risk, generally fraught with failure– I predict within a Christmas or two, we will see toys with built-in AI. Children can talk with them, ask questions, tell them their concerns, and let them know when they’re hungry or tired or need soothing. In the middle of the night, a toy might sing a child back to sleep. Advanced toys might resemble futuristic robots, whereas toys for young toddlers could look like Tribbles or teddy bears.

Jesus at the Wheel

We see one robot almost every day, a highly-intelligent ’bot on wheels. I’m referring to an electric vehicle (EV) with full self-driving (FSD)– the Tesla.

The average person may take FSD for granted, but those of us in the computer field are amazed at the advances, especially in recent months. It is possible to enter the car, tell it where to go, and have it arrive at its destination and park itself without a person once turning the wheel or tapping the brake.

My friend Thrush, having traded in his Model 3 for a Model Y, cogently commented he doesn’t so much drive his Tesla as supervise it. Close your eyes and you’d swear a real person is steering the car or– no disrespect– Jesus, as the bumper stickers read.

Thrush no longer feels comfortable driving a traditional car. He has a point. Tesla accidents make the news simply because they are rare. Their accident and fatality rate is considerably lower than that of human-driven vehicles.

The console screen in a Tesla identifies traffic lights, stop signs, trucks, cars, and bicycles. It used to show traffic cones, but they’re no longer displayed. If a child darts into traffic, the car will brake and swerve to avoid him.

I’ll dare make another prediction, that the day will soon come when self-driving vehicles talk to one another. What? Why? Say you’re following a truck. An EV cannot see through a semi tractor-trailer any better than a person. But a smart vehicle in front of the semi sees a pending traffic accident and shouts out an alert to like-minded vehicles, giving them a chance to react without seeing what they’re reacting to. In a step beyond, intelligent vehicles could deliberately coordinate with one another to mitigate collisions.
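The hand-off described above amounts to a broadcast protocol: a car that perceives a hazard publishes a structured alert, and trailing cars that cannot see the hazard react anyway. A hypothetical message format, with every field name invented for illustration:

```python
# Hypothetical V2V alert message. Field names and thresholds are
# invented for illustration; no real vehicle protocol is implied.
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardAlert:
    sender_id: str      # broadcasting vehicle
    hazard: str         # e.g. "pedestrian", "stopped_traffic"
    bearing_deg: float  # direction of hazard from sender
    range_m: float      # distance from sender
    urgency: int        # 1 (advisory) .. 3 (brake now)

def encode(alert: HazardAlert) -> str:
    """Serialize for broadcast over whatever radio link vehicles share."""
    return json.dumps(asdict(alert))

def should_brake(alert: HazardAlert, following_same_lane: bool) -> bool:
    """A trailing car reacts to a hazard it cannot see itself."""
    return alert.urgency >= 3 and following_same_lane

alert = HazardAlert("car-54", "pedestrian", 13.56, 40.0, 3)
msg = encode(alert)                        # goes out over the air
received = HazardAlert(**json.loads(msg))  # decoded by a trailing vehicle
print(should_brake(received, following_same_lane=True))
```

The interesting design question is the last function: the receiving car must decide whether a second-hand alert warrants action, because it is reacting to something it has never directly sensed.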

The Dark Side

Those of us in the industry are well aware of the abuses of computer intrusion and malware. At least one downtown bank here in Orlando won’t let you enter if you wear dark glasses or a hat. They want their cameras to pick you up and, soon enough, identify you with AI.

London is one of the most monitored cities in the world. With AI, its computers can scan sidewalks and plazas, snapping photos. Cameras sweep Trafalgar Square, AI circuits identifying each person, innocent or not. AI can analyze faces at political rallies and protests, the wet dream of a police state. That is one creepy intrusion.

AI is already used to identify marketing targets’ weaknesses and desires. In fact, one retailer’s AI was forced to cut back because it was identifying pregnant women before their husbands, and even the women themselves, were aware of the condition.

Finally, among other joys, AI can be used for warfare, electronic and traditional.

A Starting Point

We need AI legislation not only to protect us from corporations, but protect us from government. Decades ago, Isaac Asimov proposed three Laws of Robotics, later refined with an overriding fourth law. I can think of little better to form a foundation (there’s a joke here) for statutes to protect us against government and corporate excesses.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  4. Superseding other laws, a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

What now?

Experiment with ChatGPT and other AIs. You’ll encounter flaws, many, many flaws, but what they get right can be impressive. But I suggest three works of fiction from the past to give an idea of the future, what’s possible and what terrifies. They are novels after all, not inevitable.

book cover: Shockwave Rider
The Shockwave Rider (1975)
John Brunner may have been the greatest Sci-Fi writer of the latter 20th century. He didn’t depend upon evil aliens or hungry monsters (except for an embarrassingly awful early novel). Each of his books deals with a topic of concern such as pollution and over-population. His novels read like tomorrow’s newspaper, and he had an astonishingly prescient record of predicting the future. In Shockwave Rider, he foresaw computer viruses and hackers. Like Stand on Zanzibar, the story is an entertaining peek into the future– now our present.
book cover: Adolescence of P-1
The Adolescence of P-1 (1977)
Thomas Ryan also foresaw hackers and viruses and he provided the most realistic descriptions of computers and data centers in fiction, eschewing goofy tropes often seen on television and in bad movies. In its day, P-1 was little known, even in the computer community, but was adapted into a 1984 Canadian TV movie titled Hide and Seek. Like The Shockwave Rider, it ultimately foresees a promising, positive future.
book cover: Colossus
Colossus: The Forbin Project (1966)
D.F. Jones turned this into a trilogy. It’s the best known of the novels here and was made into a 1970 film, The Forbin Project. A powerful supercomputer is turned over to the military, despite warnings to the contrary. Meanwhile in the Soviet Union, a similar computer goes on-line. They meet. Warning: Unlike other stories here, this is one depressing novel. I mean unrelentingly depressing. It makes Dr Strangelove read like Pollyanna. The only good note is it serves as a warning of what could happen.

Wanted: Data Entry Clerks with No Knowledge of English

Colossus is so damn depressing, I couldn’t end with it. In the early days, data storage was expensive and computer memory even more so. Scanners were huge, slow, and inaccurate, so how might large bodies of information, such as the Lexis/Nexis law library, be imported into computers?

They were typed in. Twice. At least.

Building a digital, searchable law library was the goal of one enterprise. Despite being surrounded by emerging automation, the group depended upon manual data entry. They hired pairs of keypunchers to retype law books in tandem. In other words, two women would be assigned the same section of text, each duplicating the work of the other. Another pair would attack a second section, two more yet another, and so on. The result was two streams of hypothetically identical text.

At the end of a daily run, each pair’s output was compared. Differences indicated a problem, and a third person would locate the error and correct it. A lack of English was a key hiring requirement, on the theory that typists who couldn’t read the text wouldn’t be influenced by its language.
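That double-keying scheme is easy to sketch: two independent transcriptions of the same section are compared line by line, and any mismatch is flagged for the corrector. A minimal version in Python, with the sample text invented for illustration:

```python
# Double-keying verification: two independent transcriptions of the
# same text are compared, and mismatches go to a human corrector.
def flag_mismatches(stream_a, stream_b):
    """Return (line number, version A, version B) for every disagreement."""
    problems = []
    for lineno, (a, b) in enumerate(zip(stream_a, stream_b), start=1):
        if a != b:
            problems.append((lineno, a, b))
    return problems

# Hypothetical output from two keypunchers assigned the same section.
typist_1 = ["The court finds for the plaintiff.", "Costs are awarded."]
typist_2 = ["The court finds for the plaintiff.", "Costs are awarder."]

for lineno, a, b in flag_mismatches(typist_1, typist_2):
    print(f"line {lineno}: {a!r} vs {b!r}")
```

The underlying assumption is that two typists rarely make the same mistake in the same place, so agreement between the streams is strong evidence of correctness.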

Verily, let us cautiously give thanks for artificial intelligence.

13 January 2024

A Near-Luddite Tries Bing AI


My Windows 365 updated a few weeks ago, and there on reboot was Microsoft selling hard to check out Edge’s new AI tool. Come on, Edge said. Try making a fun holiday image.

Was I tempted? A little. Mostly, I regard the rise of Big AI such as large language models with a combination of dread (look out for the bad Terminator!) and intellectual curiosity at what advances it unlocks (hooray, it's the good Terminator!). The risks and rewards of AI’s future applications are for expert thinkers. AI’s impact on writing is more in my wheelhouse, and there was my writing laptop wanting me to check out Bing's Copilot.

I’ve never Venmo-ed money. Don't know how. Don’t even have the app. I don’t know how to deposit checks by smartphone scan. I don’t use Alexa or Siri, and our newer smart appliances aren’t set up on the home Wi-Fi. I’m doing swell without all that. I’m not a technophobe, though. I use voice remote for TV and my smartphone for the usual stuff: music, news, texts, video calls, pet photos, health monitoring, and so forth. These help me stay connected and get where I want to go faster.

PART 1: I BANG ON ABOUT AI AND CREATIVE ETHICS

As with all discussions, let's start from intellectual honesty. The modern writer has long been using AI. Internet search algorithms, word predictors, spell check, Grammarly plug-ins. It's all narrow AI. What’s new is AI’s computing power and availability to the masses. AutoCrit’s AI critiques your story and gives style comparables (full disclosure: I've started using AutoCrit’s free version to spot repeated words and phrases). Sudowrite’s Story Engine handles the writing for you, including that dreaded synopsis. Other tools abound, and that number will mushroom.

I write when I can carve out time. In a productive year, I’ll write a handful of stories (I wrote three in 2023). About half of those will be publishable with effort. AI can crank out stories 24/7. They're junk. Fine, AI has almost won literary prizes. That's one in a billion, from what I've seen. People submit this anyway. As their own work. I don’t understand that justification. Someone presses a button. A prompted algorithm spitting out words is no more authorship than is copying pages straight from a Dickens novel. Hell, the algorithm may have copied Dickens.

I despair of rot. AI will evolve to produce more natural prose. Even so, AI will never be imaginative. Insightful, sure, but AI has no mind with which to imagine. And it’s a copycat, even of itself. The more AI builds its dreck into its model, the more AI will reproduce dreckier dreck. Markets will struggle for quality and even survival. There's no human effort barrier to slow the AI onslaught. Let’s stay honest, though. Many readers won’t mind so long as AI entertains.

PART 2: I ACTUALLY TRY THE DAMN THING

So in that mindset, I clicked the link to generate a holiday image. My command: a festive boiled custard drinking contest in old-time illustration style. Here’s its swing at my curveball:

generated by AI

generated by AI

generated by AI

Weird. But you know what? AI tried its robot best. And it brought fun.

Game on. I asked Edge to make a theoretical book cover of “Lord, Spare the Bottom Feeders,” a story of mine from AHMM a few years back. My prompt described the story as crime fiction with a few plot particulars about the lawyer character and let the tool rip. 

The tool returned an over-the-top cartoon cover with a generalissimo-type crime lord surrounded by central casting mafia goons, everyone armed as if up against the Untouchables and the Five Families and even the bad Terminator. Vivid, but it highlights important AI concerns. My prompt didn’t say what kind of crime ring was involved. I didn't mention guns. Fine. The tool made these assumptions. AI is teaching itself from an internet archive that, as we all know, includes significant biases and worse. Absent way more controls than we have today, AI is going to spread these biases rapid-fire.

AI is a parrot, so I re-prompted with a load more story specifics. The tool returned:

Okay, better-ish. It's still cartoonish tropes, but this one at least addresses the prompt about mussel smuggling. No lawyer, but props on the broken scales of justice. And it does grab attention, as a cover should do. 

I don’t worry for folks who design bad book covers. They need to find other gigs. I do worry for the many great cover designers on some level. Hopefully, those folks can lean into AI as a way to craft even better covers in less time.

For my next test, I took a story idea that has refused to gel in my head for months now. It’s about a guy getting initiated into the burly beard grooming life. As usual, the character came right to life. A workable plot hasn’t. So I prompted with:

Write an outline for a 3,000-word crime short story with three twists about a criminal who can't grow a beard despite trying.

AI returned (again, this is NOT my work):

"That sounds like an interesting premise for a crime story. Here is a possible outline for your short story:

  • Title: The Beardless Bandit
  • Genre: Crime/Comedy
  • Summary: A young man, Eddie, dreams of becoming a notorious criminal, but he has a problem: he can’t grow a beard. He believes that having a beard is essential for being a successful and respected criminal. He tries various methods to grow a beard, but nothing works. He decides to resort to drastic measures: he robs a bank wearing a fake beard. However, his plan goes awry when he encounters three unexpected twists.
  • Outline:
    • The story begins with Eddie preparing for his bank robbery. He has a fake beard that he bought online, a gun that he borrowed from his cousin, and a getaway car that he rented. He is nervous, but also excited. He thinks that this will be his big break into the criminal world.
    • He enters the bank, points his gun at the teller, and demands the money. He tries to act tough and menacing, but his voice cracks and his fake beard slips. The teller recognizes him as a regular customer and calls him by his name. Eddie panics and runs out of the bank, leaving behind his gun and his fake beard.
    • The first twist is that the bank was actually a front for a money laundering operation run by a powerful crime boss, Mr. Big. Mr. Big is furious that someone tried to rob his bank and orders his men to find and kill Eddie. He also wants his gun and his fake beard back, as they are valuable evidence.
    • The second twist is that the gun that Eddie borrowed from his cousin was actually a rare and antique weapon that belonged to a famous gangster, Historic Boss. The gun is worth millions of dollars and is sought after by collectors and criminals alike. Eddie’s cousin had stolen it from a museum and was planning to sell it to Mr. Big, but he lied to Eddie and told him it was a cheap replica.
    • The third twist is that the fake beard that Eddie bought online was actually a real beard that belonged to a famous actor, Major Celebrity. The beard was cut off by a crazed fan and sold on the internet. The beard is also worth millions of dollars and is sought after by fans and celebrities alike. Major Celebrity himself is looking for his beard and is willing to pay a huge reward for its return.
    • The story ends with Eddie being chased by Mr. Big’s men, the police, the museum security, and Major Celebrity’s agents. He manages to escape with the help of a friendly barber, who gives him a shave and a haircut. He realizes that he doesn’t need a beard to be a criminal, he just needs to be smart and brave. He decides to give up his criminal dreams and start a new life. He also returns the gun and the beard to their rightful owners and collects the rewards. He uses the money to buy a house, a car, and a dog. He names the dog Beardy."
    • — generated by AI

I changed one thing in AI’s summary for here. AI's fixation on famous people went as far as to use actual famous people’s names. Let’s leave them and their attorneys out of this.

AI's outline hangs together, mostly. Parts of this are even good. Resorting to a fake beard was something I hadn’t thought of, and that fake beard becoming a McGuffin is an actual twist. The stakes ramp up as stakes should, and the happy ending with Beardy made me grin.

There are problems.

AI crams in way too much plot for 3,000 words. The gun twist pulls the thing thematically out of whack. The friendly barber is funny, but fake beards don’t need to be shaved off. And you cannot run up to someone and cut an entire, reusable beard off their face. That’s either magic realism or horror.  

I didn’t ask AI to write the story. I won’t, and I won’t write “The Beardless Bandit” myself. This isn’t my idea, and I won't pretend otherwise. I do reserve the right to take inspiration from this.

I tried other story summary prompts and got repeated interesting nuggets and major plot holes. I was having fun. Did it feel like I’d created anything? No. It was like playing with a toddler while they explained their toys. I did feel creative-adjacent in a way. Using the tool forced me to consider prompt sharpness and to read the generated content critically.

In the debate about whether or not AI undermines and supplants fiction writers, I’m still in some despair. AI's expansion is a cycle that threatens to drag us downward. Downward, but not out. Fatalism is a human quality and usually a mistake. Enhanced AI tools can help us carbon-based writers. We’ll be better researchers, better self-editors, better brainstormers. These same AI tools are great at spotting missed cancers and asymptomatic Alzheimer’s. If AI can do that, there is a place for it to boost our craft.

We’ll need to find that place. Soon, because the AI debate is pointless. The technology is here. What we humans do with it– and about it– will determine whether we get good Terminators or bad ones. Until then, this near-Luddite will get back to my Venmo-less life.

22 September 2022

The Novel-Writing Machine


Many years ago, the Industrial Revolution marked the first great change from natural time to clock time. Before that, if there was a clock in the village or town, it was on the church tower, and marked only the hours, because there was no need to mark the minutes. Nobody paid attention to that. In a 90% agricultural world, the rhythms of life were based on the land, which meant constant work in the spring and fall, lighter work (except for the haying) in summer, and a winter spent mostly just trying to stay warm. People got up when it was light, went to bed when it was dark, and that was that.  And, besides Sundays off, there were more holidays throughout the year than today, thanks to the rhythms (and rule) of the Church.

Then came the Industrial Revolution, which began in Europe. The usual factory hours were 12 hours a day, 6 days a week, 52 weeks a year. (It was considered a luxury when factory workers were given half-days on Saturdays, and the 40-hour workweek was only put into law in 1940.) Humans became servants of the clock and of the machines. And now it's considered normal to get up, eat, work, and sleep by the clock, by the job, by the cell phone, not by nature.

And now by AI and algorithms?

I read an adaptation of The Great Fiction of AI by Josh Dzieza in last week's "The Week" (The Last Word: The novel-writing machine) and found it pretty damned disturbing from start to finish. Read the entire article here: (LINK) To begin with:

"On a Tuesday in mid-March, Jennifer Lepp was precisely 80.41 percent finished writing Bring Your Beach Owl, the latest installment in her series about a detective witch in central Florida, and she was behind schedule. The color-coded, 11-column spreadsheet she keeps open on a second monitor as she writes told her just how far behind: she had three days to write 9,278 words if she was to get the book edited, formatted, promoted, uploaded to Amazon’s Kindle platform, and in the hands of eager readers who expected a new novel every nine weeks."

Every nine weeks?  What world are people living in?  

"However, being an Amazon-based author is stressful in ways that will look familiar to anyone who makes a living on a digital platform. In order to survive in a marketplace where infinite other options are a click away, authors need to find their fans and keep them loyal. So they follow readers to the microgenres into which Amazon’s algorithms classify their tastes, niches like “mermaid young adult fantasy” or “time-travel romance,” and keep them engaged by writing in series, each installment teasing the next, which already has a title and set release date, all while producing a steady stream of newsletters, tweets, and videos. As Mark McGurl writes in Everything and Less, his recent book on how Amazon is shaping fiction, the Kindle platform transformed the author-reader relationship into one of service provider and customer, and the customer is always right. Above all else, authors must write fast."

Sudowrite is an artificial intelligence tool designed to break through writer's block - for fiction writers.

"Designed by developers turned sci-fi authors Amit Gupta and James Yu, it’s one of many AI writing programs built on OpenAI’s language model GPT-3 that have launched since it was opened to developers last year. But where most of these tools are meant to write company emails and marketing copy, Sudowrite is designed for fiction writers. Authors paste what they’ve written into a soothing sunset-colored interface, select some words, and have the AI rewrite them in an ominous tone, or with more inner conflict, or propose a plot twist, or generate descriptions in every sense plus metaphor."

"AI may just be another tool, but authors haven’t previously felt the need to remind themselves that they — and not their thesaurus — are responsible for their writing or have fraught debates over whether to disclose their use of spellcheck. Something about the experience of using AI feels different. It’s apparent in the way writers talk about it, which is often in the language of collaboration and partnership. Maybe it’s the fact that GPT-3 takes instruction and responds in language that makes it hard not to imagine it as an entity communicating thoughts. Or maybe it’s because, unlike a dictionary, its responses are unpredictable. Whatever the reason, AI writing has entered an uncanny valley between ordinary tool and autonomous storytelling machine. This ambiguity is part of what makes the current moment both exciting and unsettling."

Jasper.ai is another tool, more for non-fiction, but we'll see:

"They’re using it to generate Google-optimized blog posts about products they’re selling or books that will serve as billboards on Amazon or Twitter threads and LinkedIn posts to establish themselves as authorities in their field. That is, they’re using it not because they have something to say but because they need to say something in order to “maintain relevance” — a phrase that I heard from AI-using novelists as well — on platforms already so flooded with writing that algorithms are required to sort it. It raises the prospect of a dizzying spiral of content generated by AI to win the favor of AI, all of it derived from existing content rather than rooted in fact or experience, which wouldn’t be so different from the internet we have now."  (my emphasis)

"[Meanwhile, Lepp says] she’s a little embarrassed to say she’s become reliant on it [Sudowrite]. Not that she couldn’t write without it, but she thinks her writing wouldn’t be as rich, and she would certainly be more burnt out. 'There’s something different about working with the AI and editing those words, and then coming up with my own and then editing it, that’s much easier. It’s less emotionally taxing. It’s less tiresome; it’s less fatiguing. I need to pay attention much less closely. I don’t get as deeply into the writing as I did before, and yet, I found a balance where I still feel very connected to the story, and I still feel it’s wholly mine.'"

"With the help of the program, she recently ramped up production yet again. She is now writing two series simultaneously, toggling between the witch detective and a new mystery-solving heroine, a 50-year-old divorced owner of an animal rescue who comes into possession of a magical platter that allows her to communicate with cats. It was an expansion she felt she had to make just to stay in place. With an increasing share of her profits going back to Amazon in the form of advertising, she needed to stand out amid increasing competition. Instead of six books a year, her revised spreadsheet forecasts 10."  (my emphasis)

"She thinks more fully automating fiction right now would produce novels that are too generic, channeled into the grooves of the most popular plots. But, based on the improvement she’s seen over the year she’s been using Sudowrite, she doesn’t doubt that it will get there eventually. It wouldn’t even have to go far. Readers, especially readers of genre fiction, like the familiar, she said, the same basic form with a slightly different twist or setting. It’s precisely the sort of thing AI should be able to handle. “I think that’s the real danger, that you can do that and then nothing’s original anymore. Everything’s just a copy of something else,” she said. “The problem is, that’s what readers like.”"

photo © Wikipedia

Is this really our future? And what if I don't want it?

10 August 2019

Technology Creeps


A decade ago, I wrote a short story about an author who upgraded his computer for a model that could talk and think. QWERTY, the new computer, had a mind of its own. It changed its name to Oscar, offered unwanted advice on split infinitives, and began to write a screenplay (after networking with Peter Jackson's computer). And then it tried to steal the author's girlfriend.
The Trouble with QWERTY
"I arrived home. I parked the car. I went inside and made my way to my office. As I climbed the stairs, I could hear voices. I could hear Oscar, and I could hear Ruby. Oscar was telling Ruby his Ernest Rutherford joke. When he got to the punch line, she cackled with laughter." The Trouble with QWERTY (COSMOS Magazine, Aug/Sep 2010)
The story was science fiction. It was a flight of (humorous) fantasy.

Consider this: The technology we have today wasn't around yesterday. My cheap cell phone (bought in January) has significantly more processing power and memory than my first Windows computer (bought in 1995); and that computer, a quarter of a century ago, had more computing power and memory on board than the spacecraft Neil Armstrong landed on the moon (a quarter of a century before that).

Technology creeps. Yeah, like rust, it doesn't sleep.

My computer, today, doesn't talk to me. But I can talk to it. Using voice recognition (Nuance's Dragon software, if you're curious), everything I say can be transcribed (almost perfectly) into text, and directly into an MS Word document. It's so easy, and workable, that I sometimes "write" first drafts of my stories this way.

Amazingly, scientists (in a study funded by Facebook) have already started taking steps to remove the "voice" part of speech recognition, transcribing directly from your brain waves.


They're working on this for the benefit of people who are paralyzed; remember the elaborate process by which Prof. Stephen Hawking communicated. It's an excellent field of study and development... and you just know (and this is not a negative) that once the software/equipment is up and running, market forces will see to it that it's available for everyone. One day, probably sooner than we'd imagine, we'll be able to "think" our stories into our computers.

But what really worries me is this: will there come a day when my computer doesn't need me to think for it, and it can write a story all by itself?

Technology creeps.

Yes, Virginia, there will come a day.

What do we writers do? We make stuff up. Well, there's an app for that. Actually, it's some pretty hardcore AI programming, but it can... actually... "make stuff up." I believe you can even download the code and try it out for yourself.

OpenAI.com: Better Language Models and Their Implications

(Also) Article at The Guardian: New AI fake text generator may be too dangerous to release, say creators

OpenAI is a nonprofit research organization (funded by Elon Musk and others), and it has an AI text generator called GPT-2. When fed text, anything from a few words to a whole page, it can write the next few sentences based on its predictions of what should follow. And this output text is coherent, fluid, and natural. To most readers, it could plausibly have been written by a fellow human.

OpenAI's software is trained on a large sample of content, in this case 40 GB of internet text, and uses that dataset as its model for generating output. To quote The Guardian article:
"Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: “I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”"
The internet is an awfully large, easily accessible (by man or machine) dataset. So imagine what might be possible if the program had more than 40 GB to play with. And frankly, 40 GB is nothing. My car radio has more memory. And it won't take long before the "opening sample" dataset isn't needed. I'm a Technical Writer by profession, but even I can write code to randomly generate data, be it numbers or words. And how do we humans start a brand new story? With random ideas.
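The basic mechanism is easy to illustrate at toy scale. Here's a sketch of my own (nothing like GPT-2's actual neural architecture, and not OpenAI's code): it builds a word-level lookup table from a sample text, then continues from a seed word by randomly sampling what tends to follow. Feed it more text and the continuations get less silly.

```python
import random

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, seed, length=12):
    """Continue from a seed word by repeatedly sampling a plausible next word."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word was never seen mid-text
        out.append(random.choice(followers))
    return " ".join(out)

sample = ("it was a bright cold day in april and the clocks were striking "
          "thirteen it was a dark and stormy night and the rain fell in torrents")
model = build_model(sample)
print(generate(model, "it"))
```

That's the whole idea in miniature: no understanding, just "given this word, what usually comes next?" GPT-2 does something vastly more sophisticated over 40 GB of text, but the flavor is the same.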


One of the key tasks the writer has when making up a story is bringing order to randomness. Writing fiction is a long, long series of decisions, mostly YES/NO decisions. They might start out a little more complex, but they will always eventually come down to the binary: Do I end the chapter here? Does she have dark hair? Does he know she cheated on him? Does she drink red wine? Does he drink white wine? It's 1s and 0s.

Could a machine/computer/robot/software/app one day write a short story, or a book, that passes muster with an editor (i.e., it's good) and gets published? Yeah, I know, there's a wide gulf between spitting out a paragraph or two of passable content and undertaking the complexity of a 6,000-word short story or an 80,000-word novel. But then, consider how much crap out there actually does get accepted and published.


Remember, it was a breathing, walking, talking human being who wrote "It was a dark and stormy night..."

There is, right now, an argument taking place about whether AI can, or should be recognized as the "creator" of something. From a BBC article:
"Unlike some machine-learning systems, Dabus has not been trained to solve particular problems. Instead, it seeks to devise and develop new ideas - "what is traditionally considered the mental part of the inventive act", according to creator Stephen Thaler."
Article at the BBC: AI system 'should be recognised as inventor'

So, why not? AI is simply a bunch of programming. In many respects, so are we. We write what we know. We write from our 'personal' datasets.

Welcome to our brave new world. And for the record, I really did write this article, I didn't outsource it to my toaster.


Bonus Trivia Item for Mystery Writers: "It was a dark and stormy night" is the start of the opening sentence of Lord Lytton's book Paul Clifford (published 1830). This is the only book that Raymond Chandler is known to have checked out of the library at Dulwich College, when he was a student there.


Stephen Ross (in a Cafe)(Waiting for coffee)

www.StephenRoss.net

12 March 2015

Riders of the Purple Wage


Lately, a number of very famous people have been getting their knickers in a twist over Artificial Intelligence, or AI:
"The development of full artificial intelligence could spell the end of the human race."  Stephen Hawking
“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out."  Elon Musk
Now I can sort of understand why.  The general premise for decades has been that some day the computers/robots will take over, and run us, with only two possible scenarios:
  • Great - Robots and computers will do everything for us, and we will live a life of luxury (according to the late great Frederik Pohl, too much so), comfort and security thanks to Isaac Asimov's Three Laws of Robotics that protect mankind from the revolt of the machines.  
  • Bad - Everything by Philip K. Dick, and, of course, "The Matrix".  
Which it will be depends upon the mood of the times.  Currently, we're not a particularly optimistic species, so the common response is, "We're doomed! We're doomed!"  (Unless you're Sheldon Cooper, and then it cannot happen soon enough.)
Maybe.  Maybe not.  But what concerns me about the takeover of the machines isn't that they use my stasis body as a heat source while providing my mind innumerable alternative reality jaunts to keep me a content and unquestioning host organism.  Or even AIs killing us all (for one thing, logically, they'd do it quickly - only humans are sadists.  And cats.).  What concerns me is the simple matter of a paycheck.  Eating.  Rent.  Utilities.

Look, the main reason we have computers and robots is to do our work for us.  Anything boring, repetitive, heavy, dangerous, etc. - eventually, we'll make a machine to do it.  Calculators mean I don't have to add up the columns of figures for which they used to hire Nicholas Nickleby.  Payloaders mean we don't need an army of physical laborers hoisting earth. Tractors, etc., mean that today's Pa Ingalls doesn't need to muscle his way through the sod with horse and plow.  Computers mean I don't have to write everything out long-hand, or type it over and over again until it's perfect.  It's great.

On the other hand, modern technology has eliminated and is eliminating a whole ton of jobs. Typesetters; typists; clerks; gas station attendants; innumerable factory workers; graphic designers; paralegals; low-level tax preparers; most farm hands; most farmers; bank tellers; airline check-in agents; retail clerks; accountants; actuaries; travel agents; most reporters, etc.  Soon there will be far fewer surgeons, teachers, and other high-level jobs as robots take over.  And in the fast food industry, the robots are coming to flip those burgers and make those fries.

The point is that, as we use technology to do 40, 50, 60, 70, 80, 90% of the work, we will also unemploy a significant number of people.  There will still be jobs, at all levels - just far fewer of them.  Perhaps only a handful, here and there.  Which leaves the elephant in the room:  what do you do about the people?

Yes, everyone talks about retraining.  See a typical chirpy article on "The Future of Work" .  BUT, I've always had two basic questions:

(1) There is a significant number of people who can't be retrained.  Some will be too old, some will be too set in their ways, and some, frankly, have extremely limited mental ability to learn complex problem-solving skills.  I run into some of them at the pen.  (In case you don't know it, prisons are the modern housing facility for many of the mentally disabled, as well as the mentally ill.)  These are the people who are never considered in future planning talks, the ones ignored by all economists and pundits, but they shouldn't be.  As I once said about a former student who was caught stealing, "Well, how else is he going to make a living?"

(2) If you have 250 people in a town, and there are only 100 actual jobs, it doesn't matter how much retraining you do.  There are still 150 people without work because there are no jobs.  Urbanize that.  Nationalize that.  Globalize that.

In Philip Jose Farmer's "Riders of the Purple Wage", he posited a society in which they coped with the problem of almost complete unemployment by giving everyone a salary just for being born.  It's enough to keep them housed and fed and hooked up to the Fido, a combination cable TV/videophone, along with a little wet-ware called a fornixator (you translate it).  To get anything else, you have to prove your exceptionality, but most people are happily occupied without it.  For those who aren't, well, there are wildlife reserves where they can go off and be weird - but they have to give up the purple wage.

It's a successful society, in its own way - and perhaps the only logical one. Because the truth is, sooner or later, in a society where technology is doing 90% of the work, there will have to be a "purple wage".
That, or
(1) society comes up with innumerable "make work" jobs, like picking oakum in the workhouse.  (Personally, I foresee a lot of crime.)
      That, or
(2) the unemployed masses (a la "Soylent Green" or "Zardoz", etc.) will be pounding at the armored enclaves of the fabulously wealthy.  (As I said, I foresee a lot of crime.)
      That, or
(3) a whole lot of people are going to have to die, leaving just enough to run the machines, and do the few jobs that still cannot be done by machines, and the fabulously wealthy (there is always a group of fabulously wealthy) to enjoy unending leisure.  Wall-E, call home!
      That, or
(4) The Matrix.

Anyway, here's the question:  As we pursue technological advancements, can we let go of the Protestant Work Ethic?  Let go of the idea that we are what we do?  Must people work or starve, even if there's plenty of everything except jobs?  Can we tolerate, support, even design a society where the norm for everyone (instead of just the wealthy) is "the leisured class"?

Now, you may think the last question is nonsense.  For one thing, we've been promised endless leisure for a century now, and most people are still working their butts off.  On the other hand, we do have more leisure than almost any other society in history.  This began with the industrial revolution, and one of the most interesting things about reading "Consuming Passions" by Judith Flanders is watching the development of ways for the working classes to spend their new-found leisure.  (Hey - they had all of Saturday afternoon and Sundays off!)  Thanks to advertising, sports, vacations, theater, and literature were turned into major industries.  (Drinking had always been a favorite activity.)  And, instantly, the pundits, poets, philosophers, and religious thinkers started decrying the horrible waste of human time and energy on trivia.  And talking about the nobility of hard work, piety, thrift, self-denial and sobriety:  for the lower classes only, of course.

Victorian cricket team, 1859

We have pretty much the same discussion going on today:  in certain circles, if you don't have a paying job, you're worthless.  (Unless you're wealthy enough not to.)  And the idea that someone who's unemployed has a television, a cell phone, and computer games for the kids - well, they're obviously spending too much money on all the wrong stuff.  Not to mention, if they have such things, they can't be "really" poor.

NOTE 1:  In Florida they give cell phones to the homeless, for a variety of reasons.  (Contact from parole officers, call-backs on jobs, etc.)
NOTE 2:  I'm always amazed at the people who check out other people's grocery carts and then post, outraged, if someone who's on food stamps buys candy or other luxury items.  (See this article for the alternative view:  People on Food Stamps Make Better Grocery Choices.)  God forbid the poor eat something other than gruel...

Basically, I'm leisured, you're lazy, and they're useless.

Anyway, today we've got smart phones, social media, computer games, Netflix, and innumerable other ways to waste what time we have (on the job or off) in the modern equivalent of Fidos and fornixators.  And it seems like the list is going to expand at an exponential rate. Meanwhile, the list of available jobs is decreasing, at least geometrically, every time we turn around.  IF we get to where technology performs most of the work, and IF we get to where we have regular unemployment of 30, 40, 50, 60, 70 percent, can we change our thinking from "unemployed" to "leisured"?  Can we develop a new idea of what people "should" do?  Of what people are "supposed" to do?

Without work, what are people for?
"Tompkins Square Park Central Knoll" by David Shankbone -
David Shankbone, licensed under CC BY-SA 3.0 via Wikimedia Commons