EU mulls ban on AI-assisted mass surveillance

Artificial intelligence, along with its currently more commonplace subset, machine learning, has not been without its fair share of dissenting voices, with concerns raised around three key ethical issues: the implications for bias and discrimination, mass surveillance and the privacy of citizens, and the somewhat more metaphysical question of simulating the human-like ability to judge a complex situation and act on it.

Earlier this week, political journalism site Politico reported on a leaked proposal suggesting that the European Union is considering a wide-ranging piece of legislation that would effectively ban the use of artificial intelligence in certain applications and types of usage.

The main areas the legislation would cover include the controversial topic of credit scores, as well as mass surveillance. This would create a substantial differentiation between the European Union on the one hand and the United States and China on the other, where such applications of artificial intelligence are either in development or already in use.

The mass surveillance system in China recently drew the attention of activists and political analysts because of its use in Xinjiang to monitor the Uyghur population. “The Uyghurs have long been under constant high-tech surveillance that tracks, analyses and records their every move and scours their personal communications for evidence of dissent,” Michael Chertoff and N. MacDonnell Ulsch wrote in a Washington Post article earlier this month. “Compounding this culture of surveillance is the evolution of artificial intelligence from a novelty designed to win games of chess against humans into a science now capable of facial recognition and individual profiling,” the article added.

Within the EU, the leaked proposal would mandate that all member states set up specialised committees to assess and evaluate artificial intelligence systems with a high risk factor (i.e. those applied in sensitive business or social sectors).

A draft copy of the proposal, which has been reported on by numerous media publications, says the legislation would explicitly prevent the use of artificial intelligence technology for “indiscriminate surveillance”, which would include the automated monitoring and tracking of people; prohibit the use of artificial intelligence applications whose aim is to create social credit scores using a number of factors to determine a perceived level of trust and financial means; and require authorisation by a committee or other dedicated body before remote biometric identification systems, such as facial recognition, can be used in public locations.

The proposal would also entail the creation of a special agency to oversee these matters. Tentatively called the European Artificial Intelligence Board, it would be composed of representatives from each member state. These representatives would help the European Commission decide which artificial intelligence systems should be designated as ‘high-risk’ and could also facilitate changes to the proposed bans and limitations on AI usage.

Early reactions from analysts suggest that, although this is a step in the right direction for the protection of privacy and other human rights in the EU, the language used in the leaked version of the draft is vague and indeterminate enough to allow companies and organisations to circumvent it entirely or work around it.

“In my opinion, it represents the typical Brussels approach to new technology and innovation. When in doubt, regulate. Replete with a new database for registration of high risk AI systems (Title VIII). Quite a throwback to the days of the 95/46 DPD. And a very 1970s approach to tech regulation,” said Omer Tene, vice president of the International Association of Privacy Professionals (IAPP), a nonprofit.

“Annex II [of the leaked document] defines ‘high risk AI systems’. These systems are subject to the full thrust of the regulation. It’s broad – and includes AI systems used for acceptance to educational institutions and educational testing, recruitment to work, credit scoring, the criminal justice system and more. The key provision of the regulation is Article 4, which defines ‘prohibited AI practices’. It will cause great consternation because it’s vague and potentially all encompassing,” added Tene.
