
Global Organizations Team Up to Discuss and Support AI Regulations

Over the last several years, consumer data privacy breaches have brought regulation to the forefront of the collective global business consciousness. Data privacy legislation varies widely across national, regional, and local governments, making the landscape hard for organizations to understand and cumbersome to navigate, since both the laws themselves and the way they are enforced differ from one jurisdiction to the next.

In 2018, the European Union was the first to codify such restrictions into the toughest privacy and security law in the world: the General Data Protection Regulation (GDPR). GDPR struck fear into the hearts of legal departments everywhere by imposing obligations on any organization that targets or collects data related to consumers in the EU. It does not matter where an organization is set up to do business: GDPR penalties apply to any company that violates the law while handling EU consumer data. The penalties for violating the law’s privacy and security standards are steep – €20 million or 4% of global annual revenue, whichever is higher. These costs would be significant for any organization regardless of size, which is why businesses have been forced to make real changes. Ignoring the law is not an option.
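To make the scale of that penalty concrete, here is a minimal Python sketch, illustrative only and based on just the figures quoted above; the function name and the sample revenues are our own assumptions, not part of the regulation.

```python
# A minimal sketch, assuming only the figures quoted above: the fine
# ceiling is whichever is higher, a flat EUR 20 million or 4% of global
# annual revenue, so the cap scales with the size of the business.
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: EUR 20M or 4% of revenue, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# Hypothetical revenues of EUR 50M, 500M, and 5B show how the cap grows:
for revenue in (50e6, 500e6, 5e9):
    print(f"revenue EUR {revenue:>13,.0f} -> max fine EUR {gdpr_max_fine(revenue):,.0f}")
```

As the examples show, a small firm faces the flat €20 million floor while a large one faces the 4% share of revenue, so no organization can treat the fine as a rounding error.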

A domino effect has followed, and similar laws have arrived in several individual US states. Now that artificial intelligence technology is being leveraged across all kinds of business functions, from marketing and sales to risk and supply chain management, a movement to regulate it has emerged as well. True to form, the EU is the first to create a regulatory framework that seeks to mitigate the risks artificial intelligence poses to users “without constraining technological development”.

Introducing the Artificial Intelligence Act (AIA)

In April 2021, the EU Commission released a proposed draft of the Artificial Intelligence Act (AIA). The AIA seeks to regulate “high-risk AI”, with outright prohibitions aimed primarily at facial recognition software. The ban covers any technology that:

  • Enables real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.
  • Exploits vulnerabilities of any group of people due to their age or physical or mental disability.
  • Enables governments to use general-purpose social credit scoring.
  • Uses subliminal techniques to manipulate a person’s behavior in a manner that may cause psychological or physical harm.

This commonsense approach categorizes artificial intelligence capabilities into risk levels and aims to create a level playing field for businesses through a set of shared guidelines, so that AI can be applied responsibly and continue to have a positive impact on economies and society as a whole. The type of AI being regulated is broadly defined, which Peter van der Putten (Director of Decision Sciences at Pega and a professor of Artificial Intelligence and Creative Research at the Leiden Institute of Advanced Computer Science) explains is intentional.

“It’s a broad definition of which AI technologies could be impacted and it’s more of an outcome and risk-based approach. This means we need to take a step back from what kind of techniques we’re using because AI algorithms are fairly generic. The same algorithms that are used to cure cancer, for example, can also be used for something like mass surveillance of citizens.”

For example, multiple tech companies are partnering with restaurants to create small autonomous food delivery vehicles. These robots can navigate pavements and pedestrian areas and reach places that a car cannot, solving efficiency and scale issues for food delivery brands. However, the same general-purpose AI techniques can, in theory, also be used to create autonomous weapons, which is why it is not the technology itself that needs to be scrutinized but the way we apply it in the real world. As Peter puts it:

“It’s most important to look at the outcomes and the purpose of what we are using AI for — whether the outcomes are desired or not — and make a classification between prohibited or high-risk AI systems and less risky, more common AI systems and whether there’s any risk of causing harm.”

Compare the potential outcomes of the two and they are diametrically opposed: using AI to automate and improve a consumer service versus exploiting the same technology to cause harm. Similar technology, significantly different result.
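As a concrete illustration of this outcome-based framing, here is a hypothetical Python sketch that classifies AI systems by their intended use rather than by the underlying algorithm. The tier names mirror the draft AIA’s risk levels (unacceptable, high, limited, minimal); the example use cases and the classify_use_case helper are illustrative assumptions, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk levels broadly following the draft AIA's outcome-based approach."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: strict obligations before deployment"
    LIMITED = "limited risk: transparency duties"
    MINIMAL = "minimal risk: no extra obligations"

# Illustrative mapping only: the same underlying algorithms (e.g., computer
# vision plus path planning) land in different tiers depending on purpose.
USE_CASE_TIERS = {
    "real-time remote biometric ID in public by police": RiskTier.PROHIBITED,
    "government social credit scoring": RiskTier.PROHIBITED,
    "CV screening for hiring decisions": RiskTier.HIGH,
    "chatbot answering customer questions": RiskTier.LIMITED,
    "sidewalk robot delivering takeout": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Hypothetical lookup: regulate the application, not the algorithm."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

for case, tier in USE_CASE_TIERS.items():
    print(f"{case}: {tier.value}")
```

The design choice worth noting is that the algorithm never appears in the lookup: only the purpose does, which is exactly the step back from techniques that van der Putten describes.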

Proactively protecting the consumer

Consumers usually welcome regulation, especially if they can understand the need for it. If we’ve learned anything from the struggle over consumer data and privacy, it’s that we shouldn’t wait until consumers reach their boiling point to change the way we do business. Privacy breaches and scandals erupted before GDPR was enacted, and by that point consumers had begun to conflate innocuous activities, such as using anonymous browsing data to serve display ads online, with real privacy breaches like exposing consumers’ credit card information.

And consumers are already very skeptical of artificial intelligence as a concept. For example, Pega surveyed more than 5,000 consumers across North America, the UK, Australia, Japan, Germany, and France about their views on AI and empathy; only around 30% of respondents said they are comfortable with businesses using AI to interact with them. And that distrust varies by region.

Overall, consumers in Europe are more skeptical of a brand’s intentions. In the same study, when asked whether the businesses they interact with truly have the customer’s best interests at heart, an average of 34% said yes across the UK, Germany, and France, while in the US 41% said yes. This suggests that the EU is a warmer environment for regulatory consumer protections, and it sheds light on why the EU was primed to be a test case for data privacy legislation. We should expect a similar trajectory with the AIA.

Also worth noting: the AIA is still in draft legislative form, but all eyes should turn to the EU regardless. The region will be a bellwether for what happens in other countries moving forward.

Global collaboration for AI regulation

Artificial intelligence regulation was addressed at this year’s G7 summit, with EU Commission President Ursula von der Leyen promising to further global alliances on setting standards, promoting human-centric AI, and establishing an EU-US Trade and Technology Council. Additionally, the Global Partnership on Artificial Intelligence (GPAI) summit took place in Paris this November. Members of the GPAI are Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, the Netherlands, New Zealand, Poland, the Republic of Korea, Singapore, Slovenia, Spain, the UK, the US, and the European Union.

It’s understandable that the uncertainty around the future of AI regulation, and its timing, causes anxiety for businesses. Preparing for compliance with legislation is both costly and arduous. The GPAI released a report which cited that, “on average, firms are forecasting spending in excess of €1.3m ($1.4m) on GDPR readiness initiatives” prior to that law going into effect in the UK. Ultimately, however, compliance with AI regulation isn’t just good for the consumer; it’s good for organizations as well. Provided, as stated above, that legislation is designed to guide technological development rather than constrain it, it presents an opportunity to regain consumer trust in a landscape where it has been eroded over time. It’s a chance to build a sustainable, future-facing business by adapting ahead of the legislative curve, rather than waiting until regulation is on our doorstep to commit to creating responsible and trustworthy AI systems.

___

This post was originally published on the Pega blog.
