
Artificial intelligence in court: how algorithms decide the fate of people and why it is dangerous


Gaston de Persigny - Reporter at The European Times News

Since 2014, forensic algorithms have been applied to an ever-wider range of tasks in the US justice system. The goal is to simplify the work of the courts and, most importantly, to make it more impartial. Here we examine whether, at the current stage of the technology's development, it is reasonable to let AI decide people's fates.

Some experts are confident that using AI in weapons systems will reduce human error in life-or-death situations. Supporters of forensic algorithms, in turn, argue that they will lead to a more objective assessment of crime-scene data, reduce the number of prisoners and eliminate unjust sentences.

And while AI is often referred to as a technology that can solve many of the world’s problems and lead humanity to a better future, it is not without its drawbacks.

Why does justice need AI?

A year ago, the United States resumed a project on the use of artificial intelligence in court that had earlier been suspended because of the pandemic. A special risk-assessment algorithm gives judges recommendations on whether a person under investigation should be released on bail or taken into custody, The Wall Street Journal wrote.

Of course, the AI first has to be trained. To do this, in 2017 the investigative department provided two research companies with a database of more than 1.6 million criminal cases from 2009 to 2015. It contained information about the people under investigation: their age, gender, place of residence, race and ethnicity, as well as the crimes they were suspected of. It also contained details from which one could infer whether a person would appear in court or try to hide. For example, whether the person voluntarily gave the police a phone number and address at the time of arrest: those who did were more likely to show up in court on their own.
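
To make the idea concrete, here is a minimal, purely hypothetical sketch of how such a risk-assessment model could be trained: a classifier fitted to historical case features (age, prior record, whether the person volunteered contact details) to estimate the chance that a defendant fails to appear. The feature names, the toy data and the choice of library (Python with scikit-learn) are assumptions for illustration only and do not describe the actual system.

```python
# Illustrative sketch only: NOT the real system or its data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy stand-in for historical case records (the real dataset held ~1.6M cases).
cases = pd.DataFrame({
    "age":               [19, 34, 27, 45, 23, 52, 31, 40],
    "prior_convictions": [0,  3,  1,  0,  2,  5,  0,  4],
    "gave_contact_info": [1,  0,  1,  1,  0,  0,  1,  0],  # voluntarily gave phone/address
    "appeared_in_court": [1,  0,  1,  1,  0,  0,  1,  0],  # outcome the model predicts
})

X = cases[["age", "prior_convictions", "gave_contact_info"]]
y = cases["appeared_in_court"]

model = LogisticRegression().fit(X, y)

# "Risk score" for a new defendant: estimated probability of failing to appear.
new_defendant = pd.DataFrame([{"age": 30, "prior_convictions": 1, "gave_contact_info": 0}])
risk = 1 - model.predict_proba(new_defendant)[0, 1]
print(f"estimated failure-to-appear risk: {risk:.2f}")
```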

The result was a system that, according to the developers' initial estimates, was supposed to significantly reduce judges' bias with respect to the race and ethnicity of the people under investigation. It was launched in November 2019.

How the AI got it wrong

Similar systems have been tried in the United States several times before, with mixed results. According to Wired, a pilot project launched in New Jersey in 2014 did cut the load on jails by almost half. But it could not solve the problem of racial inequality.

For example, in 2014 a Black teenager was arrested over what looked more like a misunderstanding than a crime. Brisha Borden took an unlocked blue Huffy child's bicycle and tried to ride it down a street in a Fort Lauderdale suburb before giving up. But it was too late: the neighbors had already called the police.

Some time later, a 41-year-old white man, Vernon Prater, was arrested in the same county for stealing $86.35 worth of tools. He was a far more seasoned offender, convicted more than once of armed robbery, for which he had served five years in prison.

And yet something strange happened when Borden and Prater were taken into custody. The computer program that predicted the likelihood of each of them committing a future crime reached a surprising conclusion: Borden, the Black teenager, was classified as high risk, while Prater, the white man with prior convictions, was rated low risk.

Last July, 27 scientists and researchers from leading American universities, including Harvard, Princeton, Columbia and MIT, published an open letter expressing “serious concerns” about the inaccuracies that may underlie the algorithms used to assess the risk posed by a particular person under investigation.

In addition, AI sometimes influences forensic findings themselves, and this is where the problem becomes even more acute.

There is a human behind every AI

Any technology is vulnerable: it carries the same flaws as the people who develop it. A 2017 court case in the District of Columbia illustrates the point. The anonymous defendant was represented by Rachel Chicurel, an attorney with the Public Defender Service. The prosecutor initially agreed that a suspended sentence would be a fair outcome for her client. But then a forensic report arrived in which a special algorithm “accused” the defendant, and the prosecutor changed his mind and asked the judge to place the accused in juvenile detention.

The defendant's lawyer demanded that her team be shown the mechanisms underlying the report. She found that the technology had never been reviewed by any independent judicial or scientific body, and that its results rested, in part, on racially biased data.

What lies behind forensic algorithms

Many people are worried, and outraged, that the forensic algorithms which help decide whether a person goes free are completely opaque. Why is this happening? The companies that develop the algorithms often insist that their methodologies, source code and testing processes must remain closed and protected by intellectual property law. They do not explain how their AI works so that no one can steal the technology: the developers are protecting trade secrets.

But when these algorithms are shielded from cross-examination in the courtroom, defendants are forced to accept the validity and reliability of the programs used to produce evidence against them, to trust an AI that can also be wrong. After all, it is built by people.

To address the issue, in 2019 US Rep. Mark Takano of California introduced the Justice in Forensic Algorithms Act of 2019. Its purpose is to protect the civil rights of defendants in criminal cases and to regulate the use of AI in forensic science. Takano reintroduced the bill earlier this year together with Dwight Evans, a member of the US House of Representatives from Pennsylvania. The lawmakers are confident that greater transparency around the technology will ensure that civil rights are respected in litigation. Time will tell whether the bill becomes law.

A rare exception, in which a company was forced to disclose its forensic algorithm, came by order of a New Jersey appellate court. It required Cybergenetics, a forensic software company, to give the defendant's legal team access to the source code of its DNA analysis program.

But such cases are the exception rather than the rule.

So what’s the problem?

Machine learning algorithms are great for finding patterns in data. Give them enough crime statistics and they’ll find interesting patterns in the dataset. But, as the MIT Technology Review rightly notes, human interpretation of this information can often “turn correlative insights into causal mechanisms of judgment,” potentially distorting reality. This is a dangerous trap.
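
As a toy illustration of that trap (hypothetical data and code, not taken from any real system): a model can attach a large weight to a feature that merely correlates with recorded arrests, such as living in a heavily policed neighborhood, and a reader can easily mistake that weight for a causal statement about the people who live there.

```python
# Hypothetical illustration of "correlation read as causation".
# Suppose arrests are recorded more often in zip code A simply because it is
# policed more heavily, not because its residents offend more often.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

offended = rng.random(n) < 0.10          # true offence rate: identical everywhere
zip_a = rng.random(n) < 0.5              # half the population lives in zip code A
detection = np.where(zip_a, 0.9, 0.3)    # but offences in A are detected far more often
arrested = offended & (rng.random(n) < detection)

X = zip_a.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)

# The model assigns a large positive weight to "lives in zip code A",
# even though living there does not cause anyone to offend.
print("coefficient for zip code A:", model.coef_[0, 0])
```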

Forensic algorithms used to help determine a person's guilt or innocence in court must be scrutinized from every angle. In the end, a person's fate depends on it.
