In recent years, much of the news in the technology sector has centred on the development of AI systems. While the pace of technological progress has been impressive, a legislative response has long been anticipated in light of the risks that AI poses to the rights and freedoms of individuals.
On 13 March 2024, the AI Act completed the legislative procedure and was adopted by the European Parliament.
The new act is the first comprehensive legal framework regulating AI systems as a whole rather than addressing specific issues associated with their use. It aims to promote the uptake of human-centric and trustworthy artificial intelligence, harmonise transparency rules for specific AI systems, and establish precise requirements and obligations for AI developers and deployers regarding specific AI uses.
As is customary in the digital realm, the AI Act is not limited to EU countries and may apply on an extra-territorial basis.
We will analyse the main provisions of the AI Act below.
Whom does it apply to?
Although the AI Act has been adopted in the EU, it may still apply to foreign companies that meet certain criteria.
In particular, the AI Act may apply extra-territorially to:
- providers placing AI systems on the EU market or putting them into service in the EU, irrespective of where those providers are established; and
- providers and deployers established outside the EU, where the output produced by the AI system is used in the EU.
Thus, not only should European companies bear the new rules in mind, but any company targeting the EU market with its AI products may fall under the scope of the AI Act.
Classification of AI systems
Following the GDPR approach, legislators have applied a risk-based approach to regulating AI. The idea behind this is to set rules restricting inappropriate and dangerous AI use cases while still providing freedom to AI developers in spheres where the risks to rights and freedoms are not that drastic.
The AI Act proposes a division of AI systems depending on the risks that may result from their use.
Prohibited AI Practices
The new act completely prohibits certain AI use cases. These include subliminal techniques beyond a person's consciousness and purposefully manipulative or deceptive techniques that may distort individuals' decision-making. Another example is AI systems that infer the emotions of a natural person in the workplace or in educational institutions. Social scoring techniques are also banned.
Thus, there are several cases where the use of AI is prohibited.
High-risk AI systems
AI systems that negatively affect safety or fundamental rights will be considered high risk. For instance, this includes AI systems in critical digital infrastructure and AI systems in recruitment and educational processes.
The list is quite extensive, so companies should check whether their AI systems can pose high risks.
The AI Act provides for certain requirements with respect to high-risk AI systems, including compliance and risk management systems, criteria for datasets used for AI learning, maintaining technical documentation and logs, etc. All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle.
Thus, all AI systems that pose a high risk will need to carefully go through the compliance procedure in line with the AI Act.
General-purpose AI
In view of the current pace of GenAI development, general-purpose AI models are subject to their own specific regulation.
Providers of general-purpose AI models must prepare the relevant technical documentation, ensure compliance with EU copyright law, publish a summary of the content used for training, assess the model and ensure adequate security.
Limited risk
Certain AI systems pose only a limited risk but still trigger certain obligations for the companies providing or deploying them.
Examples include systems that interact with individuals, e.g. chatbots, where transparency obligations come into play. Another case is the obligation to inform the audience about AI-generated content and the use of deepfakes.
Minimal risk
Reportedly, most current AI systems fall into the minimal-risk category, which leaves companies with considerable freedom for further development.
Sanctions
Similar to the latest EU acts in the digital sphere, the AI Act is backed by substantial fines. Non-compliance with the new rules may result in fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
Recommendations
At this stage, companies, including foreign non-European ones, are advised to assess whether they fall within the scope of the AI Act, classify the AI systems they develop or deploy under the risk categories described above, and begin preparing the relevant compliance documentation.
More AI laws will undoubtedly follow in other jurisdictions. AI compliance procedures will thus become a common practice in the coming years to ensure observance of the applicable laws, as well as to show adherence to ethical principles when developing and using AI.
Co-authored by Ekaterina Kharina, Paralegal in the Intellectual Property & Digital Law practice.