
Simple steps marketers can take to stay compliant with the EU AI Act

Acxiom


May 21, 2024 | 6 min read

The EU AI Act shares many similarities with GDPR, including the fact that marketers shouldn't be so worried about it, says Dr. Sachiko Scheuing, European privacy officer at Acxiom and co-chairwoman of the Federation of European Data and Marketing Association (FEDMA).

I get a lot of questions from marketers about how the EU AI Act, which was adopted by the European Parliament in March, will affect them. Many are confused – and more than a little concerned – about how they’ll stay compliant with laws regulating data use, today and in the future. It reminds me a lot of 2018, when the General Data Protection Regulation (GDPR) came into effect and brands feared the end of data-driven marketing was nigh.

Another reason the EU AI Act gives me flashbacks to GDPR is that my advice to marketers is broadly the same today as it was then. You’ve probably seen plenty of headlines focused on the compliance burden placed on brands, and the potential penalties for breaches. But sensationalist accounts of a restrictive law don’t tell the real story for the majority of brands, and they miss the whole spirit of regulations like these.

Regulating for growth

The GDPR was actually born out of an optimistic vision for growth through the safe use of personal data. It served to standardize approaches across nations, level the playing field in Europe, and make it easier for everyone to work together while protecting individuals and their data. The same is true of the EU AI Act.

A risk-based approach to tech-led growth

Both the GDPR and the EU AI Act take what's known as a risk-based approach to safeguarding individuals and their data. In the case of the EU AI Act, this breaks down into four levels of risk and corresponding obligations for businesses, summarized in the code sketch after the four categories below:

Unacceptable risk 

These are uses of artificial intelligence (AI) that are incompatible with the fundamental rights of individuals in the EU and are therefore prohibited. This includes AI systems that manipulate human behavior or exploit vulnerabilities, such as the use of emotion recognition AI in the workplace.

High risk

This is the riskiest category that's permitted in the EU and therefore faces the highest level of regulation. It includes uses of AI to handle emergency calls made to police, ambulance, or fire services.

Limited risk

Regulated to a lesser degree, the use of limited-risk AI systems comes with obligations of transparency because they could lead to manipulation or deceit. For example, you have to tell people when they’re interacting with a chatbot and not a person.

Minimal risk

Representing the majority of AI applications out there today for marketers, the minimal-risk category faces no mandatory regulation, but some best practices are advised. Everyday examples include the use of spam filters and AI-enabled video games.
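To make that triage concrete, here's a minimal sketch in Python. The four tier names come from the Act itself, but the example use cases, the mapping, and the function are my own illustrative assumptions; a real classification must follow the Act's annexes and legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, but heavily regulated"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no mandatory obligations; best practices advised"

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "emergency call handling": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "ad audience segmentation": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up a use case's tier; unknown cases need assessing, not assuming."""
    try:
        return EXAMPLE_USE_CASES[use_case]
    except KeyError:
        raise ValueError(f"'{use_case}' not classified: run an assessment first")

if __name__ == "__main__":
    tier = tier_for("ad audience segmentation")
    print(f"ad audience segmentation -> {tier.name}: {tier.value}")
```

Notice where typical marketing use cases sit in this mapping: at the bottom, which is the point of the next section.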

Marketers have it easy

Now the good news for marketers. And I think it’s best delivered as a reminder that, at the end of the day, marketing is usually pretty low-risk stuff. So, the application of AI in marketing use cases – helping to shape the way brands reach and engage consumers, or deciding what ad to serve an audience segment – tends to fall into the lowest, minimal-risk category.

This cannot be said for other fields. For example, in a healthcare context, if you don't maintain good data hygiene, a simple error like matching the wrong fields in a dataset could result in an individual being given the wrong prescription or instructions. This could cause real harm to the individual.

For most marketers in most industries, though, it’s likely that you won’t be restricted in your run-of-the-mill AI applications. Nor will you face mandatory obligations.

However, just because you don’t have to take action, that doesn’t mean you shouldn’t take proactive steps.

My advice to marketers

Once again, I’m getting flashbacks to the early days of the GDPR, when I would advise brands on how to establish corporate governance structures for data privacy. A cornerstone of that work was – and remains – data protection impact assessments (DPIAs).

Today, in the AI context, it’s very similar. In fact, the EU AI Act essentially repeats the DPIA; for AI, it’s called a fundamental rights impact assessment (FRIA).

What does this mean? It means creating a document that records what you’re doing with AI, the assessments you’ve made of the possible impacts, and the actions you’re taking as a result; something very similar to the Register of Processing Activities (RoPA) under GDPR. So, if you decide to use a churn prediction tool or a campaign optimization tool, you might check it out and establish there’s very little risk involved; documenting that evaluation gives your organization proof that the tool was assessed.
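As a rough illustration, such a record could be kept as simply as one structured entry per AI tool. This is a minimal sketch; the field names and the example entry are my own assumptions, not terms prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolAssessment:
    """One record per AI tool: what it does, risks assessed, actions taken.

    Field names are illustrative, not prescribed by the EU AI Act.
    """
    tool_name: str
    purpose: str                    # what you're doing with AI
    risk_tier: str                  # e.g. "minimal"
    impacts_assessed: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

# Hypothetical entry for the churn prediction tool mentioned above.
churn_tool = AIToolAssessment(
    tool_name="churn-predictor",
    purpose="Score existing customers for likelihood of churn",
    risk_tier="minimal",
    impacts_assessed=["no automated decisions with legal effect",
                      "no special-category data used"],
    mitigations=["annual review", "human sign-off on campaign use"],
)
print(churn_tool)
```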

Now you’ve got your document. It’s kept up to date. It’s stored somewhere safe. If you ever need to produce it for an audit or a check, you’re prepared. And it sets you on a path of good practice and governance for the future, when regulations will undoubtedly evolve and possibly heighten.

If that’s as onerous as it gets in terms of due diligence, it’s a small price to pay to unlock some of the amazing potential of AI for your brand.

Learn more about Acxiom’s data privacy, data governance, and data ethics practice.
