EU Takes Minimal Steps To Regulate Harmful AI Systems, Must Go Further To Protect Rights

Today, April 21, the European Commission launched its Proposal for a Regulation on a European approach for Artificial Intelligence.

Access Now has been advocating for a human rights-based regulation to ensure that both the private and public sectors respect and promote human rights in the context of artificial intelligence, including a call for red lines for applications of AI that are incompatible with fundamental rights. Unfortunately, the new proposal falls short of meeting the minimum requirements needed to safeguard human rights in the EU.

“In her presentation of the AI Regulation, Executive Vice President Vestager said that ‘we want AI to be a force for progress’ in the EU. Introducing a provision for prohibitions on certain uses is a first step towards that. Unfortunately, those prohibitions are too limited, and this legal framework does nothing to stop the development or deployment of a host of applications of AI that drastically undermine social progress and fundamental rights,” said Daniel Leufer, Europe Policy Analyst at Access Now.

“Five years ago, the world was watching the EU as it spearheaded the General Data Protection Regulation, and created the world standard in data protection. With this new AI legislation, we are again at a cornerstone moment, where the EU can lead the way — if it puts people’s rights at the center,” said Estelle Massé, Global Data Protection Lead at Access Now. “If we have learnt one thing from the GDPR, it is that the enforcement chapter of this future regulation will matter a lot to make this legislation a success.”

Positives

Access Now welcomes several of the measures included in this proposal:

  • In line with calls for red lines, Article 5 of the Regulation outlines a list of applications of AI that are to be prohibited because their “use is considered unacceptable as contravening Union values, for instance by violating fundamental rights.”
  • Article 60 introduces important transparency measures by establishing a publicly accessible EU database on stand-alone high-risk AI systems.

Needs improvement

A number of the measures introduced raise serious questions and will require strengthening to ensure the protection of fundamental rights:

  • While it’s an important step for Article 5 to acknowledge the need for prohibitions on certain applications, the current language is too vague, contains too many loopholes, and omits several important red lines outlined by civil society. Many of civil society’s red lines have only been classified as high risk, and the current obligations on high-risk systems are insufficient to protect fundamental rights.
  • The current proposal does not foresee a mechanism for adding further use cases to the list of prohibitions in Article 5. The provisions in Article 7, which allow new applications to be added to the list of high-risk uses, should be expanded to also allow the addition of new prohibitions.
  • The treatment of ‘biometric categorization systems’ is deeply concerning. The current definition applies equal treatment to banal applications of AI that group people according to hair colour based on biometric data, and to dangerous, pseudoscientific AI systems that determine our “ethnic origin or sexual or political orientation” from biometric data. There is no option but to ban this latter group of systems.
  • In line with Access Now & AlgorithmWatch’s joint recommendation on public registers, the EU database should be expanded to include all AI systems used in the public sector, regardless of risk level.
  • The proposed enforcement mechanism, which includes the EU Artificial Intelligence Board and the appointment of national supervisory authorities, lacks clarity. The creation of a new board and the appointment of supervisory authorities with responsibilities and competencies that may overlap with those of the European Data Protection Board and existing Data Protection Authorities could cause confusion, and in the worst case undermine the authority of the EDPB and the DPAs on matters central to their competencies. The role of the DPAs and the EDPB should be clarified.

The current proposal is a crucial first step in the legislative process. Access Now is confident that the Parliament and Council can work constructively to ensure that fundamental rights are given sufficient protection under this legislation.

Access Now will continue its efforts to protect people’s rights in the context of AI systems, and looks forward to future cooperation to ensure that AI developed and deployed in the EU respects human rights and sets a global standard.

Read via the Access Now website.

© Scoop Media
