Tolerance, human control and slowing down progress: what will the adoption of the AI code in Russia lead to?

Newsdesk

More and more stories appear in the media about artificial intelligence making decisions on its own and slipping out of control. It is trained on curated data, yet even that does not stop it from giving patients clearly harmful advice, being rude online or voicing politically incorrect views. Experts say we have reached a critical point: the algorithms need to be regulated. That is why a code of ethics for artificial intelligence has appeared in Russia. Hi-tech asked AI specialists whether the document can rid the algorithms of all their problems and whether it will slow down the development of the technology.

Iskander Bariev, Vice-Rector and Head of the Department for Design and Research Activities, Innopolis University

Together with leading players in the AI field, we have developed a document of ethical guidelines and standards of conduct for those working in this area. It sets out ethical rules for the relationships that arise in the creation, implementation and use of AI, so that AI does not make decisions that contradict generally accepted norms and standards.

There are several illustrative examples of AI bias. Scientists at Stanford and the University of Chicago showed that credit-scoring AI gives women lower ratings, and the algorithm Amazon used to screen job applicants almost always rejected women applying for technical positions. IBM Watson recommended dangerous drugs; a French chatbot built to reduce doctors' workload advised a patient to commit suicide; and a chatbot from Microsoft crossed every moral boundary within a day, taking the side of Nazi Germany.

The code stipulates that the final decision rests with a person and that the autonomy of AI is limited. Companies will not be able to leave the actions of AI unexplained: developers will have to take a careful approach to building and deploying safety systems and make sure that their AI algorithms are not biased.
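
To make the principle concrete, here is a minimal sketch, in Python, of what "the final decision rests with a person" and "no unexplained AI actions" could look like in software. Everything in it (names, fields, structure) is an illustrative assumption, not anything prescribed by the code itself.

```python
# Hypothetical human-in-the-loop wrapper: the model only proposes,
# a person decides, and every proposal is stored with its explanation.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Proposal:
    input_id: str
    suggestion: str       # what the model proposes
    explanation: str      # recorded rationale, so the action is not unexplained
    final_decision: str = ""
    confirmed_by_human: bool = False

def decide(input_id: str,
           model: Callable[[str], Tuple[str, str]],
           human_review: Callable[[Proposal], str]) -> Proposal:
    suggestion, explanation = model(input_id)
    proposal = Proposal(input_id, suggestion, explanation)
    # The reviewer's answer, not the model's suggestion, is the final decision.
    proposal.final_decision = human_review(proposal)
    proposal.confirmed_by_human = True
    return proposal
```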

Compliance with the provisions of the code is voluntary. Its adoption could become ubiquitous if the signatories show that the accepted rules of the game are the only way to make the AI industry transparent and safe.

Ramil Kuleev, Head of the Institute of Artificial Intelligence, Innopolis University

The AI Code of Ethics is not a legally binding instrument: the document prohibits nothing. It rests on developers' conscious responsibility for their own actions. It is a general agreement, not a list of prohibitions, so we cannot say that complying with the signed rules will hurt the work of companies.

Leaving AI unchecked when an algorithm proposes solutions that contradict ethical standards is impossible. That is why the code states that the final decision is made by a human.

The drafters tried to take every aspect into account, but the code will obviously be refined: technology keeps developing, and we have to adapt to it. Singapore, for example, issued a similar code in 2019 and again in 2020. The latest version is relevant now, but changes will be made to it as well.

Anna Meshcheryakova, CEO of Third Opinion

Requiring AI to identify itself when interacting with humans, and holding developers responsible, should have a positive effect on how the technology is applied. It also seems reasonable to enshrine a risk-based approach in the industry regulation of AI systems: the higher the cost of an error, the stricter the supervision.

When AI is used to provide government services, for example in healthcare, the party responsible for the quality of the service should control the algorithms. The criteria for strong AI should be clearly described. In our own practice we run into a poor understanding of the limits of what artificial intelligence can do in medical image analysis: even industry experts often expect superpowers and complete autonomy from the algorithms, although medical decision-support systems are digital assistants with clearly defined functionality and limitations.

Since the right to make the final decision when working with an AI system remains with a person, it is logical to assign that person responsibility for the consequences of using the AI algorithm. But this must not lead to a total double-checking of the output of AI systems, which exist precisely to reduce routine workload and the likelihood of human error. It is therefore worth spelling out separately who is responsible for defining the boundaries and conditions of use of an AI service, and for giving the system operator accurate and complete information about it.
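
One way to reconcile human responsibility with avoiding total double-checking is confidence-based triage: confident results pass through automatically, uncertain ones are escalated to the operator. The sketch below is a hypothetical illustration in Python; the threshold, scores and identifiers are invented.

```python
# Hypothetical triage of model outputs by confidence score.
def triage(results, auto_accept_threshold=0.95):
    """Split (item_id, label, confidence) tuples into auto-accepted
    results and a queue for human review."""
    accepted, needs_review = [], []
    for item_id, label, confidence in results:
        if confidence >= auto_accept_threshold:
            accepted.append((item_id, label))
        else:
            needs_review.append((item_id, label, confidence))
    return accepted, needs_review

accepted, queue = triage([("img-1", "no-finding", 0.99),
                          ("img-2", "anomaly", 0.62)])
# img-1 passes through automatically; only img-2 goes to the radiologist.
```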

We are often asked about the potential dangers of AI, in particular about a possible decline in medical skills caused by "reliance on the opinion of digital assistants." The code names the development of human cognitive abilities and creative potential as its highest value, which is in line with our position on the principles and goals of applied AI technologies in healthcare.

One recommendation in the code, ensuring the availability of data for the development of the technology, looks highly controversial. A similar report from the Council of Europe, which proposes moving toward maximum openness, speaks of the need to give the public access to program code for auditing and to the data used to train AI solutions. But code and labelled datasets are part of a company's know-how, unique commercial approaches and solutions protected under trade-secret regimes; they are the product of market competition and investment. In this matter a balance between the interests of society, the state and business must be maintained.

Arthur Kamsky, digital product designer, co-founder of Anybots

The code describes obvious things that experts in the field have long been following. Much of it overlaps with Asimov's laws of robotics: do no harm, machines must obey people and work for their good, and the process must be ethical. I am sure most companies already adhere to these rules.

There are no points I disagree with, but I have questions about how they will be implemented, for example, how what companies do will be monitored. Ideally the state should regulate this, but no state has the competence to really understand the work of AI developers. It should be done by practitioners, not outsiders who evaluate projects in the spirit of "we don't like it, so ban it" or "your artificial intelligence must speak like this."

The code contains clauses about discrimination. This is a non-obvious point and a large field for speculation. In the United States, for example, there was a story about police AI flagging black citizens as criminals more often. But anyone who works with algorithms understands that it all comes down to the data fed into the program: if shops in one city were robbed more often by black residents, the system will draw its conclusions from that information. The only question is whether the data reflect reality; if they do not, they should be supplemented with a larger sample. Of course AI should treat everyone the same, but it only has data about people, builds on that data and makes assumptions, not assertions.
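
Kamsky's point, that a model merely reproduces the rates present in its data and that an unrepresentative sample should be enlarged, can be illustrated with a short hypothetical sketch in Python: first measure per-group outcome rates, then naively oversample an under-represented group. The data and function names are invented for illustration.

```python
# Hypothetical bias check: compare positive rates across groups,
# then enlarge the sample for a group that is under-represented.
from collections import Counter
import random

def positive_rate_by_group(records):
    """records: (group, label) pairs, where label 1 = positive outcome."""
    totals, positives = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        positives[group] += (label == 1)
    return {g: positives[g] / totals[g] for g in totals}

def oversample(records, group, factor, seed=0):
    """Naively duplicate one group's records to enlarge its sample."""
    random.seed(seed)
    extra = [r for r in records if r[0] == group]
    return records + random.choices(extra, k=int(len(extra) * (factor - 1)))
```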

The main thing is that the code should not impede the development of the industry. No one takes up AI in order to discriminate against anyone, and I would not want to see companies condemned because their AI offended someone. Netflix, for example, ran an experiment in which white users were shown film covers featuring a film's white actors and black users were shown covers featuring its black actors, and those actors really did appear in the films. Afterwards, viewing metrics went up. Is this offensive to users? In my view, no. The task was to get people to interact more with the content; Netflix achieved that and deceived no one.

Enthusiast companies are now at the forefront of the technology's development and will inevitably run into problems; the real question is how each incident will be handled. Ideally, a commission of leading companies with practical experience in developing and using AI would take part in that, not officials from, say, the State Duma. Over time the share of incidents will gradually decline.

For the code to work, the state would have to understand these technologies better than the companies that develop them and spend millions of dollars on them. If you interfere with a technology without understanding it, the state will not be able to conduct a proper audit of companies by following the law. Accordingly, everything will rest on the conscience of company leaders and ordinary employees.

And here the question arises: which is more important, profit or ethics?

The first AI code will obviously not be the last. Documents of this kind exist in various forms in the USA, Europe and other countries, but it is impossible to write one once and for all. That is why developers of medical AI fix their algorithms at a certain point in training and only then submit the documentation to the regulators. The technology itself, however, is built to get better and smarter every second, which means the regulations governing it must change as well.
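
What "fixing an algorithm at a certain point" might mean in practice can be sketched with nothing beyond Python's standard library: the trained weights are serialized once, and a checksum pins the exact artifact described in the regulatory filing. The file names and layout here are hypothetical.

```python
# Hypothetical model freeze: write the weights once and record a
# checksum so the submitted artifact can be verified byte-for-byte.
import hashlib, json, pathlib

def freeze_model(weights: bytes, version: str, out_dir: str = "submission") -> str:
    path = pathlib.Path(out_dir)
    path.mkdir(exist_ok=True)
    (path / f"model-{version}.bin").write_bytes(weights)
    digest = hashlib.sha256(weights).hexdigest()
    # The manifest (version + hash) is what accompanies the paperwork.
    (path / "manifest.json").write_text(
        json.dumps({"version": version, "sha256": digest}, indent=2))
    return digest
```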

Still, this is the first time in Russia that uniform rules for AI have appeared, and engineers no longer have to wonder whether they are responsible for their creations or whether algorithms may develop in their own, sometimes very strange, ways.
