According to a recent study, OpenAI is worried that users may grow reliant on ChatGPT’s human-like voice and become emotionally attached to the service.
CNN reported on OpenAI’s recent safety evaluation of the ChatGPT tool.
Reports say that ChatGPT’s voice now sounds remarkably authentic. It responds instantly, adapts to interruptions, and makes the sounds people naturally make in conversation, such as laughing or saying “hmm.” It can also gauge a speaker’s emotional state from the tone of their voice.
Minutes after OpenAI announced its voice feature this year, comparisons were drawn to the 2013 film “Her,” in which the lead character falls in love with an AI voice and later suffers when “she,” the AI, confesses to having relationships with hundreds of other people.
According to the study, OpenAI appears worried that this fiction is becoming reality, having observed many people speaking to ChatGPT in language that “demonstrates a common connection” with the tool.
Ultimately, the report states, “users may establish social bonds with AI, decreasing the necessity for interaction with actual humans. This could help lonely people, but it might also have an effect on healthy relationships.”
The findings also suggest that people may place undue faith in a chatbot after hearing it deliver information in a human-sounding voice, a considerable risk given that AI is prone to error.
The study draws attention to another significant danger of artificial intelligence: tech companies are rushing to deploy AI tools that could upend how people communicate, work, live, and seek information.
The accountability of Big Tech
In reality, Big Tech is acting before anyone has a clear idea of what will happen. As with many other technological breakthroughs, companies often have an ideal vision of how their tools will be used. In practice, given the wide diversity of users, actual use often departs from that ideal and frequently carries unintended consequences.
Some users have already developed “romantic relationships” with AI chatbots, a trend that worries experts.
“It is really important for businesses to handle this in an ethical and responsible manner. Right now, everything is still in an experimental stage,” Liesel Sharabi, an Arizona State University professor who studies human communication and technology, told CNN.
“I worry about people who are forming a really deep connection with a technology that may not be around for a long time and is constantly evolving,” she said.
According to OpenAI, user interactions with ChatGPT over time may have an impact on social norms.
The company stated in the study that “our models are respectful, allowing users to interrupt and ‘take the mic’ at any time, which, while expected of an AI, would go against the norm in human interactions.”
OpenAI says it is committed to developing AI “safely” and that it will continue studying the potential for “emotional dependence” among its users.