INFORM Blog

The EU AI Act: Why a Political Trilogue Will Not Advance Humanity

Dec 12, 2023 // Dr. Jörg Herbers

A Comment

The EU has finally reached agreement on the EU AI Act. In several rounds of negotiations, the so-called “trilogue” of the European Commission, the European Parliament and the Council of the European Union has negotiated the terms of a regulation of AI in the EU. EU politicians are eager to stress that theirs is the first comprehensive regulation of artificial intelligence worldwide. At the same time, they claim that the EU AI Act will foster a kind of responsible AI “made in the EU”, allowing Europe to compete with the US and China in the race to reap the benefits of AI.

So is everything settled now? Well, in the end, it was a tough fight to reach an agreement at all. Drafts of the EU AI Act had been circulating for more than two years, and many aspects of the regulation had been debated for quite a while. Some important adjustments were made by the European Parliament in the summer of 2023. In the home stretch, it was mainly the so-called “foundation models” that became the focus of intense debate.

Foundation models include Large Language Models such as OpenAI’s GPT-4. The European Parliament had proposed obligations for foundation models. In the recent debates, it appeared that Germany and France wanted to protect their flagship startups Aleph Alpha and Mistral, which offer such language models, from stricter regulation. As part of a “tiered approach”, EU policymakers decided to measure the power and the potential risks of foundation models by the amount of computation, counted in floating-point operations (“FLOPs”), used to train them. The only foundation model currently in the highest-risk category is GPT-4.
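To make the tiered approach concrete, here is a minimal sketch of such a compute-based classification. The 10^25 FLOP threshold reflects the figure reported from the December 2023 political agreement, and the per-model compute numbers are rough public estimates; both should be read as illustrative assumptions, not official values.

```python
# A minimal sketch of the AI Act's compute-based "tiered approach".
# Assumption: the 1e25 FLOP threshold is the figure reported from the
# December 2023 political agreement; the per-model compute numbers are
# rough public estimates, used purely for illustration.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # assumed training-compute cutoff

def classify_foundation_model(name: str, training_flops: float) -> str:
    """Return a simplified AI Act tier for a foundation model."""
    if training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return f"{name}: systemic-risk tier (additional obligations)"
    return f"{name}: baseline transparency obligations"

# Illustrative, estimated training-compute figures (not official numbers):
for model, flops in [("GPT-3", 3.1e23), ("GPT-4", 2.0e25)]:
    print(classify_foundation_model(model, flops))
```

The point of the sketch is how blunt the instrument is: a single scalar decides the tier, regardless of what the model can actually do.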

A “Lex GPT-4” to advance Europe’s AI position?

At least for now, European lawmakers are therefore imposing additional obligations on OpenAI (and potentially Google with its new Gemini models) when it comes to marketing language models in Europe. Officially, there are good reasons for this, but behind the scenes there is a hint of protectionism. From a technology and strategy perspective, the approach is irritating.

American companies are being discouraged from entering European markets. We already saw Google delay the release of its Bard chatbot in the EU in the spring. Most recently, it has been delaying the release of its new Gemini model, each time citing regulatory requirements. This puts European companies that are keen on adopting language models as part of their offering at a disadvantage compared to their American counterparts.

The truth is: European companies currently have no foundation models as powerful as their American counterparts, and it is very unlikely they will reach that level soon. Even flagship models such as those provided by Mistral or Aleph Alpha, or the open-source European model OpenGPT-X, lag well behind the capabilities of their US competitors, at least as general language models. At the same time, there seems to be an increasing tendency in the EU to use taxpayers’ money to support the training of competitive European models.

Is this where we want to go? Even Aleph Alpha does not seem to believe that it should compete with OpenAI and Google on general-purpose models. Even with lots of venture capital or taxpayer money, European startups are unlikely to beat the marketing and sales power of the Microsofts and Googles of this world, and they seem to know it. Instead, they are heading somewhere else: they are positioning themselves in specialized, albeit smaller, segments of the expected future market, e.g. with models that explain their reasoning and that comply with the strict requirements of the public sector.

This is one of the evasion strategies of second-tier LLM providers. Another is building LLMs for specific domains. However, at least for the larger applications and markets, the “big tech” companies have already made their mark: Google with Med-PaLM for medicine, Meta with Galactica for research, and OpenAI with Codex for code generation.

Representing languages and their cultural underpinning

Still other companies and projects are looking for their piece of the pie in less occupied market segments. Only since GPT-3 has OpenAI included larger non-English text corpora in its training material. Other world regions have responded, in some cases very actively. China, notably, has developed advanced language models such as Baidu’s Ernie 4.0. In the Arab world, Jais and other language models have been developed. Europe has started OpenGPT-X, which specifically addresses support for European languages other than English.

These models not only represent the structures of the respective languages better, but also the imprints and beliefs of the cultures behind them. Linguists have long known that languages also codify culture. GPT-4 has sometimes been criticized for having a Western, if not US-American, bias, which is not surprising given the material it has been trained on. Models like Ernie or Jais will likewise reflect the imprints and beliefs of the cultures behind their languages. The landscape of Large Language Models therefore mirrors the cultural world order.

In this light, what exactly is the European Union trying to achieve? Isn’t it a common belief that both the US and Europe are part of the “Western world”, sharing cultural values and beliefs? And aren’t Western languages comparatively close to each other in how they reflect these cultural imprints, at least when compared to other languages and cultures?

When training general-purpose LLMs, we should therefore expect that what is useful in the US is also comparatively useful in most European countries. And even where China’s Ernie or the Arab world’s Jais differ, we might be able to learn a lot from these models. We could, for example, try to find out how each model represents different cultural norms and beliefs, and begin to understand these differences better. We may even be able to build “bridges” between the LLMs used in different cultures, closing gaps and improving intercultural communication. Every European and non-European politician should have a vital interest in such research.

Large AI models for domains other than language

Let us look at yet another aspect. Large Language Models are known not to perform well in domains such as mathematics; even on relatively simple mathematical tasks, ChatGPT regularly fails. This is why hybrid approaches combining LLMs with other models have been devised. ChatGPT has plugins to solve certain mathematical problems, and Google seems to have taken special measures to enable Gemini to perform better in mathematics.
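The underlying pattern is easy to sketch: the language model handles conversation, while recognizably mathematical input is handed to a dedicated solver. Below is a minimal sketch in Python using sympy; the crude routing heuristic and the call_llm stub are illustrative assumptions, not any vendor’s actual plugin mechanism.

```python
# A minimal sketch of the hybrid pattern: language stays with the LLM,
# calculation goes to a symbolic solver. The routing heuristic and the
# call_llm stub are illustrative assumptions only.
import re
from sympy import sympify

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return f"(LLM answer to: {prompt!r})"

def answer(query: str) -> str:
    # Heuristic: treat pure arithmetic expressions as math, not language.
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)\^]+", query):
        return str(sympify(query.replace("^", "**")))  # exact evaluation
    return call_llm(query)

print(answer("12^3 + 7*81"))           # solved symbolically: 2295
print(answer("Why is the sky blue?"))  # handed to the LLM
```

Real plugin systems are of course more elaborate, but the division of labor is the same: the LLM provides the language interface, and the calculation is done by a tool that is actually good at it.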

But the story does not end with mathematics. With LLMs providing the natural-language bridge to users, researchers have started to build hybrids with other AI models. As an example, GPT-4 has been used to complement AI models used in robotics. It is likely that we are going to see more of these specialized “Large AI Models” in many fields of interest – with or without LLM bridges. Examples are DeepMind’s AlphaFold, which predicts protein folding, and DeepMind’s GNoME, which enables the discovery of new crystals and materials.

Each economy should be interested in using its specific strengths to devise new kinds of “Large AI Models” – AI counterparts of its economic and skill structure. Germany and Italy, for example, should be interested in Large AI Models for mechanical engineering, France in large models for aerospace, and many European countries in AI models for manufacturing and logistics. With each economy building on its respective strengths, the industrial structure could be considerably reinforced by such “AI twins”.

Mindful and strategic economic development

Imagine this world of AI model development: LLMs reflecting the cultural world order and fostering mutual understanding, complemented by “Large AI Models” rooted in local skill sets and economic structures. I believe that channeling efforts in such directions would advance AI worldwide. I am not talking about “AI for AI’s sake”, but about AI applications that benefit humanity. With this in mind, the European Union’s regulatory approach to foundation models seems backward-looking. Given the cultural proximity, the European Union would do better to join forces with the US. Europe should embrace these models and use them as starting points for its own.

European policymakers would be better off creating forward-looking AI visions and strategies. The authors of the EU AI Act can be proud of the results of their work, but rather than preaching how to do AI properly, they should bring their ideas into a joint discussion with other cultures. Contrary to European politicians’ promotional pitches, the EU AI Act will not be an enabler of more “made in Europe” AI. With a different mindset, politicians in Europe and elsewhere could be drivers of mutual understanding and economic development. Let’s hope that the current version of the AI Act will not be the final word on AI in Europe. There are better ways to spend time and money on a bright future for AI, including a substantial role for Europe.

About our Expert

Dr. Jörg Herbers

CEO