In December, the European institutions (Commission, Council and Parliament) finally reached a historic political agreement on the Regulation on Artificial Intelligence (AI), a pioneering regulation worldwide that aspires to become a global standard for AI regulation in other jurisdictions. The final text is expected to be approved in early 2024 and will be fully applicable within two years, with some exceptions for specific provisions, such as the prohibitions on certain uses of AI, which will apply six months after entry into force. According to the details of the provisional agreement that have been disclosed, the definition of an AI system appears to be the one already introduced by the European Parliament's amendments to the Commission's text, which is aligned with the approach proposed by the OECD. The Regulation will not apply to systems used exclusively for military or defence purposes, to research and innovation, or to the use of AI systems by individuals for non-professional purposes. Nor will it affect the powers of the Member States in matters of national security.
The Regulation establishes a risk-based approach to the use of AI systems, distinguishing between four categories. Unacceptable-risk systems are prohibited, as they are considered a clear threat to the fundamental rights and values of the European Union. This category includes, among others, social classification or social scoring systems that rate citizens based on their behaviour and reputation, and AI systems for emotion recognition in workplaces and educational institutions. High-risk AI systems will be subject to obligations that must be complied with both before and after they are placed on the market, for example the preparation of detailed documentation, traceability, human oversight and cybersecurity, as well as the implementation of quality and risk management systems.
High-risk systems are those that can negatively affect fundamental rights or the safety of people if they are not used properly; they include, among others, AI systems related to the provision of critical infrastructure. The vast majority of AI systems in use today fall into the category of minimal-risk systems, for which the Regulation does not establish additional obligations, although it encourages providers to adopt voluntary codes of conduct to promote citizen trust. Finally, systems that present a limited risk will mainly be subject to transparency obligations: fundamentally, they must disclose that content was generated by AI so that users can make informed decisions about its further use. Obligations and responsibilities are imposed on the different actors in the AI value chain, mainly providers and users (the so-called deployers). The approach, in short, seeks to prevent responsibility from being diluted along the supply chain of these services. One of the most controversial points in the negotiations has been the treatment of general purpose AI (GPAI) systems.