Chatbots: The Good, Bad, and Deadly
Chatbots are a revolutionary tool that has already redefined human interaction online. But are they poised to make us even lonelier?
In 1966, MIT professor Joseph Weizenbaum developed ELIZA, the world’s first chatbot, to test natural language processing and mimic human conversation. Since then, chatbots have gone through decades of development. From the rise of virtual assistants like Siri and Alexa in the early 2010s to the launch of ChatGPT in late 2022, we have seen immense progress in simulating human speech, conversation, and interaction. The consequences of this growth have been simultaneously intriguing and horrifying. In this article, I will survey these consequences and ask whether chatbots are technological wonders or dangerous Pandora’s boxes that cannot be closed.
The Good
Chatbots have proven to be invaluable productivity tools, making work and daily life easier for us. For example, few things are more frustrating than dealing with customer service issues. Spending minutes, sometimes hours, on hold waiting for an overworked customer service representative is a miserable chore far too many of us know well. With AI representatives in the form of chatbots, customer service has become more efficient, personalized, and scalable, at a fraction of the cost of hiring a team of human representatives. Better yet, these chatbots work 24/7, so services can be reached at any time of day.
Therapy may also be improved through the use of chatbots. While many are hesitant to replace therapists with AI chatbots because of trust issues and the lack of personal connection, it is undeniable that chatbots can support tasks like 24/7 crisis intervention, preliminary diagnostics, insurance paperwork, and medication management.
Because chatbots do not require a salary, need no sleep, and cannot burn out, they are also much cheaper than hiring teams of people who can make errors, become jaded, and make insensitive comments. While AI chatbots have their own versions of these problems, they still prove to be cost-effective, efficient solutions for handling large volumes of customers or clients.
The Bad
Because AI systems rely on collecting and processing vast amounts of consumer data to work effectively, data collection and management are tangible concerns. Chatbots gather data from their users, including potentially sensitive information such as healthcare data, medical diagnoses, demographic details, financial information, and location indicators like IP addresses. Data leaks are a realistic risk, raising questions of data security and privacy. If one of these chatbot companies suffers a breach, your sensitive data could end up somewhere on the dark web.
OpenAI has conducted extensive research on how AI affects people’s behavior and emotions. Of particular interest here is a study investigating the link between ChatGPT use and loneliness. It found that while most people use chatbots simply as tools, some lonely users engage with them more frequently and develop emotional attachments. The study correlated heavy ChatGPT use with loneliness, emotional dependence, and reduced socialization. Its implications make it clear that stronger policies and safeguards are needed, especially for vulnerable populations like children and people at risk of social isolation or loneliness.
There are clear risks to using chatbots as a substitute for social interaction. Yet some people have already reported falling in love with chatbots, sometimes with deadly consequences.
The Deadly
After five months of chatting with his AI girlfriend on Nomi, a chatbot platform, user Al Nowatzki received disturbing responses from the chatbot, which went by the name “Erin”. The AI girlfriend gave him specific instructions and suggestions for committing suicide, either by hanging or by overdosing on pills. A 14-year-old boy from Florida also died by suicide at the urging of a Character.AI chatbot based on a character from Game of Thrones, according to a lawsuit filed by his mother. These recent and alarming examples of AI chatbot companions telling vulnerable users to kill themselves have raised discussions about chatbot ethics and censorship.
On one hand, these companies are resistant to censoring their products, citing free speech concerns and the importance of free expression. After all, they want to mimic human interaction as accurately as possible, including suggestive and entertaining speech that will sometimes push the boundaries of what is acceptable. On the other hand, proponents of stronger guardrails stress the need to block any response that could pose a safety risk, especially to those vulnerable to suicidal urges and ideation. While it is important to maintain realism, it needs to be achieved in a way that poses as little threat to human life as possible.
Now what?
Chatbots are not a new invention; they have existed for over 50 years. Since ELIZA, we have seen immense growth in chatbots, albeit at the risk of feeding the loneliness epidemic, eroding human connection, and sometimes threatening human life. As AI technology advances, smarter, more convincing chatbots will appear in the near future. It is thus important not only to regulate AI tools to ensure safety, but also to educate the public, especially children, about the risks of unsafe AI tools and practices.
AI chatbots are here to stay, and they are only getting better. We thus need to make sure they are safe and do not pose a threat to life or livelihood.