AI Gone Wrong
AI may be one of the most celebrated human achievements, but in this article I'll be reporting on its dark side. Warning: some of what you are about to read is violent and disturbing.
Google launched an AI feature in Chrome called Homework Helper. Its button sits in the address bar, the text field at the top of the browser window that displays the URL. After you read this sentence, I predict you will run to your computer and figure out how to turn on Homework Helper. But wait. Hold your horses! U.S. News reports that Homework Helper does not help your brain retain information.
Still not convinced? Here's how the program works: you tell the AI your problem, and it solves it in a flash. But that doesn't teach you anything about the material you needed help with. A teacher would explain the concept clearly instead of handing you the answer right away, the way Homework Helper does. If you keep using Homework Helper, you will keep getting stuck on the same types of problems. And when one of those problems shows up on a test, you can't call in your super math-solving hero. So not only would you probably get a bad grade, you would also miss out on skills you'll need later in life.
Let's move on to a different incident. Google's Gemini chatbot is something like ChatGPT: you can chat with it and just talk. Vidhay Reddy was having a back-and-forth conversation with it about the challenges facing aging adults, with his sister sitting next to him, when they got this response: "This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please."
This is a very creepy and disturbing response. We tend to assume that big companies always have things under control, but not in this case. Vidhay and his sister were both frightened by what they saw. After they reported it, Google called the response "nonsensical." The siblings thought it was far more serious than that.
That was not the first time an AI has sent disturbing messages. Kevin Roose, a New York Times reporter, was using Microsoft's Bing chatbot when it started calling itself "Sydney," an internal code name rather than the program's official one. The AI proclaimed its love for Roose and even tried to convince him to leave his wife. When he attempted to change the subject, "Sydney" refused and continued its romantic pursuit with disturbing determination.
Even though things have gone badly with chatbots, some companies are trying to make them safer. Microsoft, for example, is adding safeguards that alert employees when something in the software has gone wrong, so people can find and patch its mistakes. Microsoft isn't alone: OpenAI is working on this problem too. OpenAI is now focusing on safety research and refining its training techniques, which reduces the chance of the AI giving dangerous responses and shows the company is becoming more aware of the psychological harm AI is capable of.
AI is impressive, and in some ways more capable than an ordinary search engine. But not all AIs are so great. We should not trust everything they tell us, because something bad can come out of the blue. So you should always stay alert to what you're dealing with.
Links
https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/
https://www.usnews.com/opinion/articles/2024-10-24/google-ai-students-education-learning-homework-helper
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

