AI EDUCATION: What Is a Chatbot (and Could It Kill Me)?

Each week we find a new topic for our readers to learn about in our AI Education column. 

This week in AI Education we’re going to do something a little bit different. Generally, we strive to define or clarify commonly occurring but rarely explained AI terms and topics, in part to help our readers gain fluency in artificial intelligence. The hope is that, after a while, we all become familiar enough with the jargon around AI that we can approach some of the more technical work in the field without being intimidated. 

This time, however, we’re going to talk about chatbots, a topic that virtually everyone already knows at least something about because they’re literally everywhere you go: on the internet, in stores, at ATMs, even at fast food restaurants. Yet few people fully understand how chatbots work, the vast range of applications for the technology, or its potential to improve our world, let alone its dangers and drawbacks. 

Let’s start with a simple definition: a chatbot is software, a computer program that you can have a conversation with. Artificial intelligence chatbots use machine learning and natural language processing to understand prompts from users and to respond conversationally. 
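
To make that definition concrete, here is a minimal sketch, in Python, of the loop at the heart of any chatbot: read a message, produce a reply, repeat. The respond() function is a placeholder of our own invention; in a real AI chatbot, a trained language model would fill that slot.

```python
# A minimal sketch of the conversational loop at the heart of a chatbot.
# respond() is a placeholder of our own invention: in a real AI chatbot,
# this is where a trained language model would generate the reply.

def respond(message: str) -> str:
    """Toy stand-in for the part of the program that 'understands' and answers."""
    return "A real chatbot would generate an answer to: " + message

def chat() -> None:
    """Read messages and print replies until the user types 'quit'."""
    while True:
        message = input("You: ").strip()
        if message.lower() == "quit":
            break
        print("Bot:", respond(message))

if __name__ == "__main__":
    chat()
```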

How We Got Here 

Well, I’m a recovering news guy, and like many of us who cut our teeth in the newspaper business, I got here by following the negativity bias: the human tendency to pay more attention to information that seems bad. This time, it’s a tragic story from October that has stuck with me, even through a particularly noisy election season.  

The family of Sewell Setzer III, a 14-year-old Florida boy who died by suicide in February, is suing Character.AI, an app that allows users to create and interact with custom-built chatbots, alleging that the company’s technology was used to create a chatbot that encouraged Setzer to take his own life. Setzer’s family claims that he felt he had fallen in love with the program. 

Clearly, this is an extreme case, but it’s not altogether unique. Last year, it was reported that a Belgian man had taken his own life after having discussions with an AI chatbot about the ecological future of the planet; in those conversations, too, the AI is purported to have eventually encouraged the user’s suicide. Both cases have touched off a discussion about who should be using chatbots and how they should be used. 

How Do Chatbots Work? 

A chatbot, conceptually, is a simple program: a user asks for data or for a task to be completed, and the chatbot retrieves that data or performs that task. Many of today’s chatbots are powered by generative AI, which enables them to understand everyday language and answer complex questions in specific formats. Others are still simple, task-oriented programs that serve a single purpose, like providing technical support. While these simpler programs can run on legacy technology, they are increasingly incorporating artificial intelligence as consumer and user expectations evolve. 
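
As a rough illustration of that ask-retrieve-perform pattern, here is a toy task-oriented bot in Python. The command vocabulary and the order “database” are invented for this example; a real support chatbot would route such requests to actual back-end systems.

```python
# A toy task-oriented chatbot: figure out which task the user is asking
# for, perform it, and report the result. The order 'database' and the
# command vocabulary are made up for this example.

ORDERS = {"1001": "shipped", "1002": "processing"}  # hypothetical data

def order_status(order_id: str) -> str:
    """Look up a made-up order and report its status."""
    status = ORDERS.get(order_id)
    return f"Order {order_id} is {status}." if status else f"No order {order_id} found."

def handle(request: str) -> str:
    """Route a request to the single task this bot knows how to do."""
    words = [w.strip("?.,!") for w in request.lower().split()]
    if "status" in words or "order" in words:
        digits = [w for w in words if w.isdigit()]
        return order_status(digits[0]) if digits else "Which order number?"
    return "I can check order status. Try: 'status of order 1001'."

print(handle("What's the status of order 1001?"))  # -> Order 1001 is shipped.
```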

Machine learning allows chatbots to get better at predicting what users will ask and how to answer them. Natural language processing lets chatbots find meaning in open-ended input, rather than requiring exact keywords. The most sophisticated chatbots layer deep learning on top of other machine learning techniques, enabling them, over time, to provide nuanced answers to very specific questions. 
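
Production natural language processing relies on trained statistical models, but you can get a rough feel for “finding meaning in open-ended input” from simple fuzzy matching. This sketch uses Python’s standard difflib module to map a free-form question onto the closest known question; the question-and-answer pairs are invented, and the technique is an analogy, not how modern chatbots actually work.

```python
# A rough analogy for "finding meaning in open-ended input": fuzzy-match
# the user's wording against known questions. Python's standard difflib
# stands in here for real NLP, which uses trained statistical models.
from difflib import get_close_matches

KNOWN_QUESTIONS = {  # invented question/answer pairs
    "what are your hours": "We're open 9 to 5, Monday through Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

def answer(question: str) -> str:
    """Return the answer for the known question closest to the user's wording."""
    cleaned = question.lower().strip("?!. ")
    matches = get_close_matches(cleaned, KNOWN_QUESTIONS, n=1, cutoff=0.5)
    return (KNOWN_QUESTIONS[matches[0]] if matches
            else "I'm not sure; let me connect you with a person.")

print(answer("How can I reset my password?"))
```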

Modern, conversational chatbots are interactive and personalized, and can be used to deliver general knowledge, entertainment or companionship, obviously with some risks. These programs are capable of anticipating the needs of their users and simulating a modicum of empathy. 

The Evolution Of Chatbots 

Chatbots weren’t always so complex, according to IBM. They have their roots in telephony: in the 1950s, when automated enterprise call systems were first developed, the earliest chatbot-like technology was the phone tree, in which customers dialed in and were led through a series of telephone menus to reach the customer service they needed. The first computer-based chatbot-like applications were interactive FAQ (frequently asked questions) programs, in which a set of common questions was matched with pre-written answers. When users entered a question, they would usually be asked to select from a short menu of keywords or phrases, directing the program to bring up the pre-written entry most similar to their request. 
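
To make that early design concrete, here is a short sketch of the menu-driven FAQ pattern just described: the user doesn’t type freely, but picks a keyword, and the program returns the pre-written entry. The topics and answers are made up for illustration.

```python
# A sketch of the early menu-driven FAQ pattern: the user picks a keyword
# from a short menu, and the program prints the matching pre-written entry.
# The topics and answers are made up for illustration.

FAQ = {
    "billing": "Invoices are emailed on the 1st of each month.",
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def run_faq() -> None:
    """Show the keyword menu, then print the pre-written entry the user picks."""
    print("Pick a topic:", ", ".join(FAQ))
    choice = input("> ").strip().lower()
    print(FAQ.get(choice, "Please choose one of the listed keywords."))

if __name__ == "__main__":
    run_faq()
```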

Gradually, these answer-seeking programs grew more sophisticated, incorporating rules-based programming that enabled them to handle more complex questions. Then natural language processing emerged, giving chatbots the ability to understand queries posed in a conversational manner; users no longer had to manually identify keywords and phrases. 

Eventually, the most sophisticated of these programs evolved far beyond FAQs or phone menus directing you to the right line. For many businesses, a chatbot has become the first point of contact with consumers, prospects, customers, users and subscribers, capable of fulfilling almost all of a customer’s needs automatically, with no human intervention on the business’s side. 

But Are They Bad? 

Look, we’ve listed two specific examples this week of people who may have died due in part to their interactions with a chatbot. There are likely more cases of chatbot-related deaths, and, given the way people have interacted with new technologies and ideas throughout history, there will certainly be more chatbot- and AI-related deaths and accidents to come. 

In the end, however, a chatbot is just a piece of software. While there may be some nefarious individuals making addictive and otherwise harmful chatbot personas on platforms like Character.AI, a chatbot itself is an aid to the human user intended to improve their experience with technology. 

To editorialize for a moment, I think where we’re getting into trouble with chatbots is in the application. We know they’re effective from a customer service and technical support standpoint and, while some people may not love them, they’re a vast improvement over traditional FAQs and telephone menus. They’ve also shown promise in retail applications as shopping assistants, travel agents, even bank tellers. I don’t think we’re ready for chatbots as companions. 

I propose that our experience with software, even artificial intelligence, is not just in the lines of code written to serve us; it’s also in the feelings and knowledge we bring to the command prompt. Garbage in, garbage out, right? Negative input creates negative output. If we look to AI as a stopgap solution for what are essentially human problems, like mental health issues, or to fulfill human needs, like companionship to stave off loneliness, we may find not only that it is not up to the task, but that it exacerbates our maladies.