An artificial intelligence was trained to recognize irony and sarcasm

Researchers at New York University have trained an artificial intelligence built on large language models to recognize irony and sarcasm, the journal “Computer Science” reports.

Artificial intelligence today includes a number of language models that can analyze texts and estimate their emotional tone, that is, whether a text expresses positive, negative, or neutral emotions. Until now, these models usually misclassified sarcasm and irony as “positive” emotions.
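For readers who want to see this failure mode in code, here is a minimal sketch of off-the-shelf sentiment classification using the Hugging Face transformers library. The checkpoint is the library's default, not one of the models from the study, and the example sentences are invented.

```python
# A minimal sketch of off-the-shelf sentiment classification with the
# Hugging Face `transformers` pipeline; the default checkpoint is used
# here, not one of the models from the NYU study.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

texts = [
    "What a wonderful day!",                                     # genuinely positive
    "Oh great, another two-hour meeting. Just what I needed.",   # sarcastic
]

for text in texts:
    result = classifier(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")

# A plain sentiment model will often label the sarcastic line POSITIVE,
# which is exactly the misclassification the article describes.
```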

The scientists identified features and algorithmic components that help artificial intelligence better grasp the true meaning of what is being said. They then tested their approach on the RoBERTa and CASCADE language models, using comments from the Reddit forum as test data. The resulting networks recognized sarcasm almost as well as the average person.
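As an illustration of what such a classifier looks like in use, the sketch below scores a single Reddit-style comment with a RoBERTa model fine-tuned for irony detection. The checkpoint named here is a publicly available example, not the team's own model, and the comment is invented.

```python
# A hedged sketch of irony/sarcasm scoring with a fine-tuned RoBERTa model.
# "cardiffnlp/twitter-roberta-base-irony" is a publicly available checkpoint
# used here for illustration; it is not the NYU team's model or pipeline.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cardiffnlp/twitter-roberta-base-irony"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

comment = "Sure, waiting on hold for two hours was the highlight of my day."
inputs = tokenizer(comment, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# For this checkpoint, label index 1 corresponds to "irony".
print(f"P(ironic) = {probs[1].item():.2f}")
```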

Meanwhile, Le Figaro reported that artists are deliberately “infecting” their own works in order to fool artificial intelligence (AI). The Glaze program, developed at the University of Chicago, adds a marking to artworks that confuses the AI. Faced with the exploitation of their data by AI, artists are setting a “trap” inside their creations, rendering them unusable for training.

Paloma McClain is an American illustrator. AI can now create images in her style, even though McClain never gave her consent and will receive no payment. “It bothers me,” says the artist, who lives in Houston, Texas. “I’m not famous, but I feel bad about it.”

To prevent her works from being used, she turned to the Glaze software. Glaze adds pixel-level changes that are invisible to the human eye but confuse the AI, which effectively perceives the images as blurred.
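To give a rough sense of the mechanism, here is a toy sketch of imperceptible pixel-level perturbation. It is loosely inspired by the idea behind Glaze, not its actual algorithm: the real tool computes carefully optimized adversarial perturbations, while the random noise below merely illustrates the concept and offers no real protection. The file names are hypothetical.

```python
# A toy illustration of imperceptible pixel-level changes, loosely inspired
# by the idea behind Glaze. The real tool computes carefully optimized
# adversarial perturbations; the random noise below only demonstrates the
# concept and offers no actual protection.
import numpy as np
from PIL import Image

def perturb(path_in: str, path_out: str, epsilon: int = 2) -> None:
    """Add tiny random offsets to every pixel: invisible to the human eye,
    but enough to change the raw values a model ingests."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    out = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(path_out)

perturb("illustration.png", "illustration_protected.png")  # hypothetical file names
```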

“We’re trying to use technological capabilities to protect human creations from AI,” explained Ben Zhao of the University of Chicago, whose team developed the Glaze software in just four months.

Much of the data (images, text, and sound) used to develop AI models is collected without the express consent of its creators.

Another initiative comes from the startup Spawning, which has developed software that detects scraping on image platforms and lets an artist block access to their works or serve a different image in place of the one requested. This “poisons” the AI’s training, explains Spawning co-founder Jordan Meyer. More than a thousand websites are already integrated into the startup’s network, called Kudurru.
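The block-or-decoy idea can be sketched in a few lines of server code. The snippet below, written with Flask, is purely hypothetical: nothing in it reflects Spawning’s actual implementation, and the scraper list and file names are illustrative assumptions.

```python
# A hypothetical sketch of the block-or-decoy idea behind a network like
# Kudurru, written with Flask; nothing here reflects Spawning's actual code,
# and the scraper list and file names are illustrative assumptions.
from flask import Flask, request, send_file
from werkzeug.utils import secure_filename

app = Flask(__name__)

# Illustrative user-agent substrings associated with AI data scrapers.
SCRAPER_AGENTS = ("img2dataset", "CCBot", "GPTBot")

@app.route("/images/<name>")
def serve_image(name: str):
    agent = request.headers.get("User-Agent", "")
    if any(bot in agent for bot in SCRAPER_AGENTS):
        # Either refuse outright (e.g. flask.abort(403)), or serve a decoy
        # image that "poisons" the scraped dataset.
        return send_file("decoy.png")
    return send_file(f"originals/{secure_filename(name)}")

if __name__ == "__main__":
    app.run()
```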

The goal is for people to be able to protect the content they create, Ben Zhao said. In the case of Spawning, the idea is not only to prohibit the use of the works, but also to make it possible to sell them, explained Jordan Meyer. In his view, the best solution would be for all data used by AI to be provided with consent and for a fee.
