Hungarian Researchers Critique EU’s AI Legislation and Warn About Societal Impacts

The EU’s AI Act fails to protect human rights effectively, Hungarian experts argue in a new multidisciplinary book. Highlighting legal, media, and societal perspectives, the authors critique current regulations and explore AI's growing impact on society and governance.

The product conformity rules of the European Union’s (EU) Artificial Intelligence (AI) Act are inadequate for the effective protection of human rights, according to criticisms outlined in the first volume of a new AI-focused scholarly series. Published through a collaboration between Hungarian publisher Gondolat Kiadó and the Media Council of the National Media and Infocommunications Authority (NMHH), the volume brings together insights from professionals across several fields to examine the features, behaviours, and operations of AI, the NMHH announced on Saturday.

The new book, Responsible AI – A Practical Approach to Governing AI, authored by Árpád Rab, Zoltán Majó-Petri, and András Koltay, adopts a multidisciplinary approach, combining legal, media, and social sciences perspectives. It analyses the impact of AI-based systems and services on society, the economy, and the future while addressing questions about the appropriate and responsible use of emerging technologies, according to the NMHH’s announcement.

The first section, focusing on legal perspectives, includes a contribution by Zsolt Ződi, a senior researcher at the National University of Public Service. Ződi examines the contrast between engineering and legal approaches, particularly in the EU’s AI Act. He concludes that the product conformity rules of the Act are unsuitable for preventing violations of human rights. Ződi explains that the concept of product conformity was originally designed for tangible, physical goods and relies on measurable, quantifiable, ‘engineering-like’ parameters. This approach, however, cannot be applied to human rights issues such as the prohibition of discrimination or freedom of expression, nor to inherently unquantifiable concepts like the rule of law and democracy, the statement notes.

‘34 per cent of Hungarians have used some form of generative AI-based service, and 8 per cent have paid for such services’

The second section, from a media studies viewpoint, opens with an essay by Petra Aczél, head of research at the NMHH’s Media Science Institute. She highlights that as AI becomes increasingly global and integrated into everyday life, its interpretation must also evolve, incorporating new perspectives continuously. Aczél characterizes AI as a system defined by the three A’s: algorithmic, autonomous, and automated. She even describes AI as the digital world’s first ‘living entity’, asserting that it is more independent and advanced than its smart predecessors, more mysterious than algorithms, and freer than traditional programmes. According to Aczél, AI shares more traits with living beings than any previous digital format.

To illustrate the distinction between human and artificial intelligence, as well as how AI’s knowledge could potentially surpass humanity’s, Aczél offers an example: like humans, AI does not know where the grave of Attila the Hun lies, as it can only access the digitized knowledge available in humanity’s collective data. However, she speculates that if asked repeatedly, AI might eventually deduce its location by reconfiguring the available data in novel ways.

The third section, rooted in social sciences, begins with an essay by László Z Karvalics, an information scientist. He emphasizes the importance of contextualizing AI within broader societal, cultural, and historical frameworks, referred to as ‘master contexts’. Árpád Rab also contributes to this section, presenting findings from a December 2023 representative study on perceptions of AI and the everyday use of AI-based services in Hungary.

The study reveals that 34 per cent of Hungarians have used some form of generative AI-based service, and 8 per cent have paid for such services. Notable regional disparities emerge from the data: Budapest residents use generative AI services at a significantly higher rate, and the 18–35 age group is far more receptive to the technology in the capital than in other areas.

Another key finding is that individuals with higher levels of education are more likely to use AI-based services, including paid ones.

In his essay, Rab argues against categorizing Hungarians into simple groups of ‘adopters’ and ‘rejecters’ of new technology. Instead, he advocates for a more nuanced examination of the diverse opinions, expectations, and fears surrounding AI in society.

