In response to the increasingly widespread deployment of artificial intelligence systems, the European Parliament has recently approved the AI Act, completing the legislative process for the first set of laws aimed at regulating their use and application, mitigating risks, and creating a single market in which international players can operate in accordance with EU rules. It is a set of rules intended to guarantee security and respect for fundamental rights while promoting innovation, creating “a tool with the ultimate goal of improving the well-being of human beings”.
The international organizations involved in the work define AI as “an automated (machine) system designed to operate with varying levels of autonomy, which may exhibit adaptive capabilities once deployed and which, for explicit or implicit goals, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations or decisions, which may affect the physical or virtual environment” (Article 3 of the AI Act).
Based on the principle that artificial intelligence must be developed and used in a way that is safe, ethical, and respectful of fundamental rights and European values, the AI Act specifically aims to protect democracy, the rule of law, and environmental sustainability from AI systems, starting with an assessment of their possible risks and level of impact. In short, the higher the potential risk, the stricter the regulation. But what will change?
Artificial intelligence systems with unacceptable risk
Applications posing unacceptable risks that conflict with fundamental EU values and principles (including respect for human dignity, democracy, and the rule of law) will be prohibited or subject to strict restrictions. These include systems that manipulate human behavior to circumvent free will, “social scoring” by public authorities, and predictive policing that uses sensitive data to calculate, for example, the likelihood that a person will commit a crime.
Even the police will not be able to use AI-based biometric identification systems, with the exception of specific situations expressly provided for by law. “Real-time” identification may only be used if strict security safeguards are followed and the case falls within those permitted, which include, for example, the search for a missing person or the prevention of a terrorist attack.
High-risk artificial intelligence systems
High-risk systems that could cause significant harm at a “systemic” level (impacting people’s fundamental rights and security), such as those used in personnel selection, admissions to education, or the provision of essential services including health care, will be subject to strict obligations and requirements before marketing and use, in order to increase safety, ensure human oversight, and guarantee the traceability of results.
Developers and users of high-risk AI systems will also have to meet various recording, monitoring, and reporting obligations, conducting risk assessments of their models and systems and reporting any incidents or data breaches.
Limited-risk artificial intelligence systems
Limited-risk AI systems, considered capable of affecting the rights or will of users, will be subject to transparency requirements, so that people are aware they are interacting with an AI system and can understand its characteristics and limitations. This category includes systems for generating or manipulating videos and photos (e.g. deepfakes) and systems that hold personalized conversations (such as chatbots). It thus establishes the right to know whether you are talking to a machine or whether an image was artificially created by AI. Developers and users of these systems will also need to ensure that the information provided is clear, understandable, and accessible.
Applications with little or no risk
General-purpose artificial intelligence systems that have no impact on human safety and offer broad possibilities of choice and control, such as recreational applications (video games) or aesthetic ones (photo filters), will only have to comply with transparency obligations and with EU copyright rules during the training of their models.
The AI Act also aims to support innovation and research in artificial intelligence by funding projects and initiatives focused on quality, social impact, interdisciplinarity, and transnational collaboration. To this end, EU countries will have to establish and make available at the national level regulatory testing spaces and real-world testing mechanisms (so-called sandboxes) to enable SMEs and start-ups to develop and train innovative AI systems before bringing them to market.
These are the first steps towards a smarter and safer use of artificial intelligence: the law has yet to be formally approved by the Council and will enter into force twenty days after publication in the Official Journal of the EU, becoming fully applicable 24 months after entry into force. In the event of violations of its provisions, warnings and sanctions will be imposed depending on the severity of the violation.