Who gets to define what an “unacceptable-risk application” is?
How do they decide?
The European Union’s AI Act, the world’s first major artificial intelligence legislation, officially comes into force today.
This landmark regulation seeks to address the negative impacts of AI by setting a comprehensive regulatory framework across the EU.
Primarily targeting large U.S. tech firms such as Microsoft, Google, Amazon, Apple, and Meta, the Act imposes stringent requirements on the development and deployment of AI systems, especially those deemed high-risk.
These include autonomous vehicles, medical devices, and loan decisioning systems.
The Act also bans unacceptable-risk applications such as social scoring and predictive policing.
Companies breaching the rules face hefty fines of up to 7% of global annual revenue.
While the law’s main provisions will not be enforced until at least 2026, this move sets a precedent for global AI governance, encouraging other countries to adopt similar risk-based frameworks.
Meanwhile, Zuck, per my recent post, has released Llama 3.1 to run like a feral cat down the alleys of Europe.
Who gets to define what an “unacceptable-risk application” is?
The AI Act defines what an unacceptable-risk application is in Article 5.
How do they decide?
The European Parliament decided through the consultation and negotiation processes that took place while the Act was being drafted.
The following types of AI system are prohibited:
deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
'real-time' remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;
preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime, etc.).
Even with the level of detail outlined above, the legislation does not answer who gets to define what an unacceptable application, or an unacceptable risk, actually is, or how they decide.
More framework around how the questions I posed might be answered is provided, but it seems to me this simply opens more cans of worms.
It’s a bit like trying to get broad agreement on a Bill of Rights in a democratic society such as our own, which does not have one.
The complexity of implementing the AI Act requires us to consider the corner cases and to develop guidelines by consensus, with thorough oversight and continuous dialogue across society.
The European Union’s tendency to try to crystallise this into a rule book is grist for the mill for bureaucrats, but it does not necessarily produce good outcomes for society at large.
For example, with respect to all of the criteria by which those decisions are supposedly made, I raise the following specific problems:
Determining “significant harm” from subliminal techniques.
Identifying socio-economic vulnerabilities without bias.
Differentiating lawful biometric categorisation from prohibited uses.
Defining “detrimental treatment” in social scoring systems.
Balancing objective profiling with human judgment in crime risk assessment.
Distinguishing lawful scraping of facial images from privacy breaches.
Distinguishing medical from non-medical uses of emotion recognition.
Ensuring appropriate use of RBI technology in public spaces under exceptional circumstances.
I think this discussion goes to the heart of the challenges faced by all AI alignment/safety work, made more difficult by the world’s general status quo of misalignment on everything.
This is why I am leaning toward open-source AI development (though only slightly, 51/49). I just don’t think we should be trusting anyone to make these decisions, so although open-source development makes the chances of chaos go way up, the alternative isn’t looking appealing to me.
Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond, by Prof. S. Wachter
In this Essay, [the author] show[s] how the strong lobbying efforts of big tech companies and member states were unfortunately able to water down much of the AIA. An overreliance on self-regulation, self-certification, weak oversight and investigatory mechanisms, and far-reaching exceptions for both the public and private sectors are the product of this lobbying. Next, [the author] reveal[s] the similar enforcement limitations of the liability frameworks, which focus on material harm while ignoring harm that is immaterial, monetary, and societal, such as bias, hallucinations, and financial losses due to faulty AI products. Lastly, [the author] explore[s] how these loopholes can be closed to create a framework that effectively guards against novel risks caused by AI in the European Union, the United States, and beyond.
It is a good system that has served humanity well so far. The problem is that each law needs to be tested, and therefore broken, in order to really see what it is made of; the concern there is that the one time the law is broken could be the paperclip machine…