Earlier this week, a man discovered conversations of unknown origin in his ChatGPT history, exposing a large amount of private data. The chatbot's lack of security is being called into question.
A new controversy has just erupted around the security of data handled by ChatGPT, and by chatbots in general. The site Ars Technica has revealed the story of one of its readers, who had a disturbing experience using OpenAI's artificial intelligence.
This is not the first time that ChatGPT has revealed worrying flaws. Its propensity for angry outbursts, even blackmail, combined with its handling of personal data, makes for a dangerous cocktail. © Futura
The man explained that he discovered conversations in his ChatGPT history that were not his. Each conversation appears to have been conducted by a different person, and many contained personal information: one person was working on a presentation, another shared details of an unpublished research article, another a PHP script in progress. The most serious case, however, involved a pharmacy employee who was trying to resolve issues with the pharmacy's website using the chatbot. The chat contained several usernames and passwords, the name of the application, and even the pharmacy's phone number.
Be careful what you share with an AI
OpenAI responded that his account had allegedly been compromised and used to provide free access to ChatGPT 4 to a group of users in Sri Lanka. The man is not convinced, however, pointing out that he used a strong password not associated with any other account, apart from a Microsoft account. Unfortunately, OpenAI offers no way to secure an account with two-factor authentication, nor any way to consult a login history.
Gary Marcus, a researcher in AI and cognitive psychology, immediately reacted on Twitter, stating that ChatGPT is absolutely not secure. He reminds us that everything shared with a chatbot can be used to train it, shared with other users, exploited for targeted advertising, or even resold.