In a context where artificial intelligence tools like ChatGPT are gaining popularity, Europe is considering strengthening the GDPR to better regulate how they use personal data. These AIs, embedded in connected devices like smartwatches and in online platforms, can access sensitive information, often without users being fully aware of it. Faced with these risks, the European Union is proposing regulatory measures aimed at protecting personal data and informing users about how AIs use their information.
While the European Day dedicated to the protection of personal data seeks to raise public awareness so that people can take control of their private data on the Web, the future of that data could well be revolutionized by the thunderous arrival of powerful artificial intelligences (AI) such as ChatGPT. While these AIs can convince with their rhetorical talents, they will also increasingly be used to process the vast amounts of personal data we provide to platforms, often without realizing it. From this point of view, AI is not without risks.
For this reason, Europe wishes to complement its General Data Protection Regulation (GDPR) with a set of harmonized rules on the use of AI. It must be said that AI is now everywhere. We wear it on our wrists day and night with connected watches and bracelets that can collect health data and even detect certain pathologies. Yet consumers are not always aware that asking a conversational tool personal questions, of a medical nature for example, means providing the companies behind that artificial intelligence with sensitive information that could be exploited for commercial purposes. And this is not the only concern: artificial intelligence involves many players (the developer, the supplier, the importer, the distributor and the user), and this chain remains rather opaque to the consumer. It is therefore difficult to know who actually has access to personal data and who would be liable if problems arise.
Better information on AI algorithms
With the growing use of these AIs, the risk of leaks or of losing control over personal data is also significant. To protect themselves, consumers should therefore find out about the company collecting their data and its policy for processing that personal information. This is not always easy, even if some players in the sector are more virtuous than others. This is particularly the case of Apple, which wants to champion data privacy by, for example, forcing application developers to automatically request consent for data collection.
To better protect users, the European Union has therefore proposed three texts: a regulatory framework on artificial intelligence, a directive on AI liability, and a directive on product liability. Among its additional rules, the EU wants, for example, to oblige digital giants and other platforms and social networks to better inform users about their algorithms. To enforce this, the texts provide for significant sanctions: fines could range from 10 to 30 million euros, or 2 to 4% of turnover, for failing to comply with these new obligations. It now remains for the institution to adopt these texts quickly enough, before AIs grant themselves even more freedom.