Brussels
Sunday, April 28, 2024
The EC asked to label texts and images when using artificial intelligence

DISCLAIMER: Information and opinions reproduced in the articles are those of the people stating them and it is their own responsibility. Publication in The European Times does not automatically mean endorsement of the view, but the right to express it.

DISCLAIMER TRANSLATIONS: All articles on this site are published in English. The translated versions are produced through an automated process known as neural machine translation. If in doubt, always refer to the original article. Thank you for understanding.

Gaston de Persigny - Reporter at The European Times News

The European Commission has this month, for the first time, asked companies to label texts and images generated by artificial intelligence in order to fight disinformation.

The Vice-President of the European Commission, Vera Jourova, proposed today that companies voluntarily adopt in their code of ethics a rule to warn users when they use artificial intelligence to generate texts, photos or video. According to her, social networks must immediately start labeling information created by artificial intelligence. Such technology can expose societies to new threats, especially through the creation and spread of disinformation, Jourova explained. Machines have no freedom of speech, she added.

Vera Jourova, who is responsible for values and transparency at the EC, and Thierry Breton, Commissioner for the Internal Market, met with representatives of around 40 organizations that have signed up to the EU Code of Practice against disinformation. They include Microsoft, Google, Meta, TikTok, Twitch and smaller companies — but not Twitter, which has withdrawn from the code.

“I will ask the signatories to create a special and separate topic within the code” to deal with disinformation generated by artificial intelligence, Jourova said. “They should identify the specific risks of disinformation posed by content-generating artificial intelligence and take appropriate measures to address them.”

Signatories that integrate generative AI into their services, such as Bing Chat for Microsoft and Bard for Google, should build in the necessary safeguards so that these services cannot be used by malicious actors to generate disinformation, Jourova explained. “Signatories that have services with the potential to spread AI-generated disinformation should in turn introduce technology to recognize such content and put up clear labels to warn users.”

Labels should be applied to all AI-generated material that can be used to create disinformation, including text, images, audio and video.

For now, the labels will not be mandatory, as they will be part of the voluntary code of practice. However, the Commission is considering including such labeling in the Digital Services Act (DSA). Obligations to label AI content could also be included in the AI Act during negotiations between EU countries, the European Parliament and the European Commission.

Illustrative Photo by cottonbro studio: https://www.pexels.com/photo/a-woman-looking-afar-5473955/
