On 14 June the European Parliament adopted its final version of the AI Act. Among other things, it expands the list of prohibited AI practices, mandates labelling of AI-generated content, and distinguishes AI categories by risk. The proposal will now go to the Council for approval, after which an amended version will return to the Parliament for final approval.
The AI Act, proposed by the Commission on 21 April 2021, is the EU’s attempt at a harmonised, risk-based legal framework for AI applications. The EU currently has no AI-specific legislation; the Commission aims to finalise the Act by autumn 2023, with adoption expected later that year.
Categories and consequences
The Act divides AI applications into four risk categories: Unacceptable risk (illegal); High risk (strict obligations); Limited risk (transparency obligations); and Minimal or No risk (no measures). Fines for non-compliance range from €10 million or 2% of annual turnover to €30 million or 6% of annual turnover. The legislation’s primary goals include setting out distinct AI categories, refining the definition of AI, and distributing responsibility equally across the AI value chain.
Significant proposals include stricter obligations for foundation models, such as ChatGPT, which must undergo independent expert assessment of potential risks to health, safety, the environment, democracy, and the rule of law. All residual, unmitigable risks must be documented, and data governance measures must be implemented to monitor biases and the sustainability of data sources. The establishment of an AI database will also facilitate further checks on AI performance, safety, and cybersecurity. The Act does not ban AI tools that monitor interpersonal communications, but it states that the AI recommendation systems of large online platforms will be classed as High-risk. AI solutions listed in Annex III (page 5) will be deemed High-risk only if they pose a significant risk to health, safety, or fundamental rights, and extra safeguards against negative biases become mandatory. The Act also introduces several bans, including on biometric identification software (with ex-post use permitted only for serious crimes), purposeful manipulation, and predictive policing.
Implications for audit
Due diligence and reporting on AI usage will increase, especially for companies providing ‘High-risk’ AI products and services. There is concern that these new burdens will stifle growth and innovation.
To learn more about the impact of AI, read the ECIIA paper, Auditing a Digital Insurance World.