Automating the Evaluation of GDPR using Artificial Intelligence

The European Consumer Organisation (BEUC) and researchers from the European University Institute in Florence released a study about how Artificial Intelligence can help scan and analyse privacy policies.


The research team analysed the privacy policies of 14 popular online companies, taking as a basis the requirements of the General Data Protection Regulation (GDPR) and the relevant guidelines of European data protection authorities.

Based on this analysis, the university researchers are training an automated evaluator of privacy policies, called “Claudette”. The goal is that this Artificial Intelligence tool will be able to automatically scan companies’ privacy policies and detect clauses that potentially fail to meet GDPR requirements.
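The press release does not describe Claudette's internals, so the following is a hypothetical sketch only: Claudette is a trained machine-learning classifier, whereas this illustration uses simple keyword rules to show the kind of sentence-level flagging such a tool performs. All term lists and function names are invented for the example.

```python
import re

# Illustrative keyword lists (assumptions, not Claudette's actual features).
VAGUE_TERMS = ["may", "might", "as appropriate", "from time to time"]
IMPLIED_CONSENT = ["by using", "by accessing", "by visiting"]


def flag_clause(sentence: str) -> list[str]:
    """Return rough labels for one privacy-policy sentence."""
    text = sentence.lower()
    labels = []
    # Word-boundary matching avoids false hits inside longer words.
    if any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in VAGUE_TERMS):
        labels.append("unclear language")
    if any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in IMPLIED_CONSENT):
        labels.append("potentially problematic")
    return labels


def scan_policy(policy: str) -> dict[str, list[str]]:
    """Split a policy into sentences and keep only the flagged ones."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", policy) if s.strip()]
    return {s: flag_clause(s) for s in sentences if flag_clause(s)}
```

A clause such as "By using this website you agree to our policy." would be flagged as potentially problematic, while a precise statement like "We delete your data within 30 days." passes unflagged.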

The research suggests that, one month after the GDPR became applicable, the privacy policies of some of the biggest online services – including Facebook, Google and Amazon – leave much room for improvement: none of the analysed policies fully met the requirements of the regulation.

In total, all the policies amounted to 3,659 sentences (80,398 words). Of these, 401 sentences (11.0%) were marked as containing unclear language, and 1,240 (33.9%) contained “potentially problematic” clauses or clauses providing “insufficient” information.
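The percentages above follow directly from the reported corpus counts; a minimal check, using only the figures given in the study:

```python
# Figures reported in the study (3,659 sentences in total).
total_sentences = 3659
unclear = 401          # sentences with unclear language
problematic = 1240     # "potentially problematic" or "insufficient" clauses

# Shares of the full corpus, rounded to one decimal place.
unclear_pct = round(100 * unclear / total_sentences, 1)          # 11.0
problematic_pct = round(100 * problematic / total_sentences, 1)  # 33.9
```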

The identified problems include:

  • Not providing all the information required under the GDPR’s transparency obligations. For example, companies do not always properly inform users about the third parties with whom they share data or from whom they receive it.
  • Processing personal data in ways that do not comply with GDPR requirements. For instance, a clause stating that the user agrees to the company’s privacy policy simply by using its website.
  • Formulating policies in vague and unclear language, which makes it very hard for consumers to understand the actual content of the policy and how their data is used in practice.

Monique Goyens, Director General of The European Consumer Organisation, said:

“A little over a month after the GDPR became applicable, many privacy policies may not meet the standard of the law. This is very concerning. It is key that enforcement authorities take a close look at this.

“This innovative research demonstrates that just as Artificial Intelligence and automated decision-making will be the future for companies from all kinds of sectors, AI can also be used to keep companies in check and ensure people’s rights are respected. We are confident AI will be an asset for consumer groups to monitor the market and ensure infringements do not go unnoticed.

“We expect companies to respect consumers’ privacy and the new data protection rights. In the future, Artificial Intelligence will help identify infringements quickly and on a massive scale, making it easier to start legal actions as a result.”

The researchers have developed this innovative technology to support consumer groups and public authorities in ensuring better enforcement of, and compliance with, important consumer rights. It can also be very helpful for consumers themselves: privacy policies are typically voluminous and complex. In a world where consumers are increasingly surrounded by connected products and use digital services for everything they do, assessing such policies is essential to protect people’s privacy and autonomy.

BEUC will inform the European Data Protection Board about these findings. We will continue monitoring market practices closely and will consider taking further legal actions as appropriate.


Read the full report here