
The EU AI Act: Running an Experiment at Scale

Feb 27, 2024 // Dr. Jörg Herbers

On February 2nd, 2024, the ambassadors of the 27 EU member states unanimously approved the final draft text of the Artificial Intelligence Act (AIA), paving the way for a law regulating artificial intelligence (AI) as part of the EU's digital strategy. The final text contains provisions intended to regulate the use of AI across Europe. Approval by the EU Council and the EU Parliament is considered a formality. The EU AI Act is the world's first comprehensive regulatory framework of its kind. In fact, it is so comprehensive that even the shortest known version of the text runs to more than 250 pages.

Dr. Jörg Herbers, CEO of INFORM, an internationally active pioneer in AI-powered software for improving business processes and intelligent decision-making, assesses the current legal text and its significance for society and the economy:

The EU AI Act: A major economic and legal experiment

Here it is (almost): the EU AI Act, despite all the birthing pains, which make it look like a hard-won victory. After tough wrangling, the main reason for the agreement was probably the realization that an even longer delay would have led to greater legal uncertainty and planning difficulties for companies, and would certainly have been embarrassing in the eyes of the world. After all, Europe had made a bold claim to be the pioneer of AI regulation. The aim of protecting humanity from the negative side effects of the misuse of AI is clearly visible in many parts of the legal text. For example, there is a ban on social scoring and other dubious AI practices. However, a complete ban on biometric real-time video surveillance did not make it into the final version. Many unproblematic AI applications remain unregulated; only those considered "high-risk" face strong regulation of their development and dissemination. At its core, this is all very understandable.

What is now creating new challenges are the details, because the EU AI Act comes with a large amount of text that needs to be interpreted. Given the complexity of the regulations, specialized law firms are gearing up to provide interpretation, and the legal departments of larger software companies are familiarizing themselves with the topic. Start-ups and smaller companies, however, will have to find the resources to even begin to understand the regulations and look for ways to implement them. Quite a few lawyers have criticized the technical quality of the wording in passages that were evidently written under considerable time pressure.

This starts with the definition of the core term "artificial intelligence." In the interim versions of the text, it was so general that quite a few experts noted that the regulation would cover virtually any software product, with or without AI. The current final version reads more like a table of contents from relevant textbooks on AI technologies. This is certainly a better approach; however, we do not yet know whether it effectively captures the core idea of regulating "technology with new risks." High-risk AI, in turn, is mainly defined by a long list of risky use cases, which leaves a similar impression: it is not always clear how well the current wording distinguishes "risky" from "actually unproblematic" use cases.

The attempt to outline and regulate risky AI is appropriate and honorable. However, we do not yet know whether this approach will really succeed, whether the regulations are practical, or whether they will achieve the intended effect. Nor is it clear where the current draft over-regulates and where it under-regulates. This is hardly surprising when you consider that the risks of AI as such are not yet well understood.

Balancing act between innovation and regulation

In 2017, Max Tegmark described the "mega risks" of AI in his book Life 3.0: at some point, AI itself could produce more advanced AI, and a kind of "intelligence explosion" ("singularity") could occur that we can no longer control. When GPT-4 was released in March 2023, there were warnings that we were already making the first mistakes in this direction. One argument was that because GPT-4 was trained on human-generated data and therefore "understands humans," it should have been placed in a "golden cage" denying it access to other systems (e.g., search engines), but it was not. These were big guns in the discussion, but the reasoning behind them was understandable.

So should we have proactively locked GPT-4 in a (regulatory) cage? In retrospect, no. A year later, we know more. AI researchers can state much more precisely what we have achieved with GPT-4 and what we have not. Despite its impressive capabilities, it is now clear that we are still a long way from superintelligence. We have released it into the world, we understand it better and better, and we are also using it to advance ourselves. This effect would not have existed with a regulatory cage in place. Still, this is not a general objection to regulation, because things could well have turned out differently.

In fact, we are conducting a very large-scale experiment in technology and regulation. The technology is too valuable, and sometimes too seductive, not to be further researched and disseminated. Regulation is trying to keep pace, but the co-evolution of technology and regulation is difficult given the rapid momentum. The situation is reminiscent of the General Data Protection Regulation (GDPR) introduced by the EU in 2018: very plausible intentions, very far-reaching regulation, some initial and still ongoing uncertainty, and some aspects that have not proved practical. The EU AI Act follows in these footsteps, but for one of the most important technologies of our time, without practical data on the impact of the various regulatory approaches, and on a much larger scale. A very big experiment indeed.

As a society, we are all now being challenged by such a disruptive technology. This is a very complex situation. Software engineers know how to deal with such complexity: they approach it iteratively, gradually gaining more and more understanding. Will we get an iterative approach in European policy? Will we see regulation created, tested and, in some cases, generously discarded? That sounds more like utopia, especially as the GDPR and comparable legislation have not been amended since coming into force in 2018. However, such an approach to regulation would be sensible and honest if we are seriously interested in further developing the technology while ensuring that it is used responsibly and in a way that benefits humanity.

About our Expert

Dr. Jörg Herbers

CEO