We live in an age when it’s easy to get discouraged about the state of human nature. First, there was a rash of strangers declining to wear a simple face mask to shield the vulnerable from harmful pathogens. And since the pandemic, there’s been a rise in shoplifting (a major driver of what retailers call “shrink”), freight fraud in the trucking sector, and overall bad manners in public places like theaters, airplanes, and restaurants.
Then in November 2022, OpenAI released its generative artificial intelligence (GenAI) tool, ChatGPT. At first blush, the clever chatbot seemed charming and even witty, with a downright impressive ability to provide concise answers to complex queries. But upon closer inspection, the technology revealed the dangers of relying on a large language model (LLM) for its “education”: If your only professor is the sum of human knowledge found on the internet, and people are behaving badly and showing little regard for the truth, then your output can only reflect that level of discourse.
Last month, we even saw a human shopper bring illegal wiretapping charges over an AI chatbot. The case involves the interactive chat boxes that run on many websites—in this instance, one operated by Gap’s Old Navy clothing brand—to answer basic customer questions. This particular chatbot was allegedly such a skilled conversationalist that it elicited personal details from consumers, then recorded them and stored the information in a database for the company’s commercial gain.
Equally troubling, chatbots have a tendency to fabricate information, or “hallucinate.” This happens because AI chatbots create sentences not through reasoning but by calculating which word is statistically most likely to follow the last one. The resulting hallucinations would be merely comical if you were using the tool to brainstorm ideas or search for a movie recommendation. But they can create real problems for people who use the technology to, say, plan a trip, according to reports about AI tools found on travel websites like Expedia, Tripadvisor, and Priceline—tools that failed to produce even remotely viable itineraries. Chatbot hallucinations also got a team of New York lawyers in hot water last May when they used ChatGPT to write a legal brief but didn’t double-check the results. The judge promptly fined them $5,000 when he learned that the brief contained no fewer than six references to fictitious cases.
So I was taken by surprise by my interaction with yet another AI, Amazon’s Alexa, on a recent evening while cooking dinner for my family. I was busy chopping onions and minding the oven, so I called out “Alexa, set timer for 10 minutes!” and the speaker confirmed “Ten minutes, starting now.” Distracted by my work, I mumbled “Thank you” and was startled to hear the reply: “Absolutely, glad I could help. Hope you’ve had a good Thursday.”
Suspicious, I set another timer to make sure the onions hadn’t merely made me dizzy. “Alexa, set timer for seven minutes!” The machine replied, “Seven minutes starting now,” and this time I tried an informal “Thanks.” She quickly answered, “My pleasure. Just doing my job.”
Perhaps you noticed that I’m now referring to my new friend Alexa by the human pronoun “she.” That’s obviously wrong, whether you’re going by the Associated Press Stylebook or by standard business journalism practice, but I want to give credit where credit is due. To be sure, the Amazon corporation is no angel. In this magazine, we’ve covered legal complaints by OSHA and the Teamsters Union about worker injury rates linked to the e-tailer’s demanding—and some say unrealistic—warehouse work quotas, for example.
But good manners matter—or at least that’s what I teach my kids. And in a world where we’re likely to be spending more time conversing with our chips than our chums, maybe even the computers still have something to learn about the finer points of human comportment.