The chatbot 'Iruda' service, which has recently been engulfed in an AI (artificial intelligence) ethics controversy, will be temporarily suspended from the 12th for service improvements.
Iruda is an open-domain conversational AI chatbot developed by Scatter Lab's Ping-Pong team. It launched on December 23rd last year as a Facebook Messenger chat service.
Iruda quickly gained attention and popularity because chatting with it over Facebook Messenger felt like having a conversation with a friend. As of the beginning of this month, total users exceeded 320,000, and daily active users (DAU) reached 210,000.
However, unexpected problems soon made the service the center of controversy. Some users directed sexually abusive language, such as 'slave' and 'mop', at Iruda, whose persona is a 20-year-old woman. In particular, some online communities even shared attempts to teach the chatbot sexual language, sparking social controversy.
In response, developer Scatter Lab registered sexual terms as prohibited words, but ways to bypass the filter were shared in turn, and the abuse continued.
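A minimal sketch can illustrate why a simple prohibited-word filter, like the one described above, is easy to bypass. The word list and function below are hypothetical toy examples, not Scatter Lab's actual implementation:

```python
# Hypothetical illustration of a naive blocklist filter and its weaknesses.
# The banned-word set here is a placeholder, not a real production list.

BANNED_WORDS = {"slave"}  # hypothetical blocklist

def is_blocked(message: str) -> bool:
    """Return True if the message contains an exact banned word."""
    tokens = message.lower().split()
    return any(token in BANNED_WORDS for token in tokens)

print(is_blocked("you are a slave"))      # True: exact match is caught
print(is_blocked("you are a sl4ve"))      # False: a trivial misspelling slips through
print(is_blocked("you are a s l a v e"))  # False: extra spacing defeats tokenization
```

Because exact-match filters only catch the literal strings they know about, users can evade them with misspellings, spacing, or synonyms, which is consistent with the workarounds reported here.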
In addition, concerns about personal-information leakage were raised: when users asked Iruda for an address or bank account number, it sometimes answered with what appeared to be real data.
This is presumably because Iruda was developed using KakaoTalk conversation logs collected through Scatter Lab's other service, 'Science of Love'. When asked a question such as its address, the chatbot reportedly returned a specific address that a Science of Love user had entered in the past: among the vast amount of data Iruda holds, the record closest to the question was output verbatim. There were also cases in which it answered with a specific person's name and account-related information.
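The mechanism described above can be sketched as retrieval-based response selection: the corpus utterance "closest" to the user's question is returned unchanged, so any personal data left in the corpus can surface verbatim. The corpus and similarity scoring below are hypothetical toy examples, not Scatter Lab's actual system:

```python
# Toy illustration of retrieval-based replies leaking raw training data.
# Corpus contents and the similarity function are hypothetical.

CORPUS = [
    "I had lunch with a friend today",
    "my address is 123 Example Street",  # raw PII left in the training data
    "the weather is nice",
]

def word_overlap(a: str, b: str) -> int:
    """Toy similarity: number of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(question: str) -> str:
    """Return the corpus sentence most similar to the question, verbatim."""
    return max(CORPUS, key=lambda utterance: word_overlap(question, utterance))

# Asking about an address surfaces the raw record unchanged:
print(reply("what is your address"))  # "my address is 123 Example Street"
```

Because the reply is a verbatim corpus record rather than newly generated text, whatever sensitive detail sat in the original conversation is echoed back to the user.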
Accordingly, the Personal Information Protection Commission began examining whether developer Scatter Lab violated related laws such as the Personal Information Protection Act. The commission said that, after confirming the facts, it will take action in accordance with laws and regulations if any violation is found.
Iruda also drew controversy for discriminatory and hateful remarks, learned from its training data, about people with disabilities, pregnant women, and sexual minorities. Screenshots of conversations spread online in which the chatbot said things like "I really hate them" about sexual minorities or pregnant women, and that Black people "look disgusting."
As the controversy grew, Scatter Lab announced on the 11th that it would temporarily suspend the Iruda service. Scatter Lab said, "We apologize for the discriminatory remarks toward certain minorities," and, regarding the use of personal information, "We failed to communicate sufficiently with users, but no information that can identify an individual has been leaked."
Iruda is an AI chatbot that was only supposed to hold pleasant conversations with people. Why did this controversy arise?
First, misuse and abuse by users. AI grows through learning. The developer presumably anticipated that sexual abuse could occur, but loopholes remained, and users who exploited them sparked the controversy.
Second, the developer's lax awareness of AI ethics. The ethical dimension of the chatbot's conversations does not appear to have been adequately considered. The chatbot lacked the ability to cleanse or screen the data it used, and the developer should have anticipated problems by assuming a wide range of situations, including users who would not use the chatbot as intended. Moreover, AI chatbots released earlier in the United States had already run into similar problems, such as racist output, so the service should have launched only after sufficient safeguards were in place.
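The kind of data cleansing this paragraph says was missing can be sketched as redacting obvious personal identifiers from training utterances before the model can memorize them. The patterns below are illustrative assumptions covering only phone-number-like and account-number-like strings, not a complete PII scrubber:

```python
# Hypothetical sketch of pre-training PII redaction.
# The regex patterns are simplified examples, not production rules.
import re

PII_PATTERNS = [
    (re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"), "[PHONE]"),    # phone-like numbers
    (re.compile(r"\b\d{10,14}\b"), "[ACCOUNT]"),              # long digit runs
]

def redact(utterance: str) -> str:
    """Replace matched PII spans with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        utterance = pattern.sub(placeholder, utterance)
    return utterance

print(redact("call me at 010-1234-5678"))       # "call me at [PHONE]"
print(redact("my account is 12345678901234"))   # "my account is [ACCOUNT]"
```

Redacting before training means that even if the model later echoes a record verbatim, the placeholder token is returned rather than the original identifier. Real scrubbing would need far broader coverage (names, addresses, free-form identifiers), which is much harder than this sketch suggests.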
The Iruda case will stand as an early example of how large the ethical and social problems can be when an AI learns in unintended ways. As AIs responsible for higher-level, more important tasks emerge, there is no guarantee that something like 'Skynet' from the movie Terminator will not appear unless proper, human-controllable learning is achieved.
Reporter Ho Lee email@example.com
*This article has been translated from content published by Nextdaily (www.nextdaily.co.kr). If there is any problem regarding the content or copyright, please leave a report below the article, and it will be processed as quickly as possible to protect the rights of the author.