As presently worded, the rules would ban AI designed to manipulate people “to their detriment”, carry out indiscriminate surveillance or calculate “social scores”. Much of the language is vague enough that the regulations could cover the entire advertising industry or nothing at all. In any case, the military and any agency ensuring public security are exempt.
Some “high risk” activities would be allowed, subject to strict controls, including measures to prevent racial, gender or age bias from being built into AI systems. As possible targets, the legislation mentions systems that automate job recruitment, assign places at schools, colleges or universities, calculate credit scores or decide the outcome of visa applications. Companies in breach could be fined up to €20 million, or 4 per cent of global turnover.
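The draft does not prescribe any particular test for bias, but to make the idea concrete, here is a minimal sketch (the groups, data and numbers are invented for illustration, not drawn from the legislation) of one audit a recruitment system might face: checking whether hire rates differ between demographic groups.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Hire rate per demographic group from (group, hired) pairs."""
    totals = defaultdict(int)
    hires = defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in hire rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented audit data for illustration: (group, hired)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))         # {'A': 0.67, 'B': 0.33} (approx.)
print(demographic_parity_gap(sample))  # ~0.33
```

A large gap does not by itself prove discrimination, but it is the kind of quantitative signal regulators could ask providers of high-risk systems to monitor and explain.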
In a way, the news is no surprise, as the president of the European Commission, Ursula von der Leyen, promised to urgently bring in AI legislation when she was elected in 2019. But Lilian Edwards at Newcastle University, UK, says the draft laws will concern the tech industry. “I applaud the ambition, but you can’t imagine it getting through in this state,” she says.
Edwards compares the approach to the way the EU regulates consumer products, which must meet certain requirements to be imported. “That’s much harder to do with AI as it’s not always a simple product,” she says. “You’re heading inexorably towards a trade war with Silicon Valley or weak enforcement.”
China and the US have already made huge strides in implementing AI in a range of industries, including national security and law enforcement. In China, the everyday movement of citizens in many cities is monitored by facial recognition, and there are many public and private trials of a “social credit score” that will ultimately be rolled out nationwide. These scores can be lowered by infractions such as playing computer games for too long or crossing the street against a red pedestrian light, and raised by actions such as donating to charity. If your score drops too low, you may be denied rail travel or shamed in online lists.
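To make those mechanics concrete, a rule-based score of this kind could be adjusted and checked as in the following sketch. The point values and threshold here are invented; the real trials vary and their scoring rules are not public.

```python
# Hypothetical point values and threshold; real trials vary and their
# scoring rules are not public.
ADJUSTMENTS = {
    "excessive_gaming": -10,
    "jaywalking": -5,
    "charity_donation": 8,
}
RAIL_TRAVEL_THRESHOLD = 50

def apply_events(score, events):
    """Apply a sequence of recorded behaviours to a starting score."""
    for event in events:
        score += ADJUSTMENTS.get(event, 0)
    return score

score = apply_events(60, ["jaywalking", "excessive_gaming"])
print(score)                           # 45
print(score >= RAIL_TRAVEL_THRESHOLD)  # False: rail travel denied
```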
Meanwhile, in the US, where many tech giants are based, Donald Trump’s administration encouraged a light-touch, free-market approach to regulation, while current president Joe Biden has yet to take a firm public stance.
Daniel Leufer at Access Now, one of the groups that has previously advised the EU on AI, says Europe has long pursued a third way between the US and China on tech regulation, and that the draft legislation has promise.
But he warns that there are “big red flags” around some elements of the draft legislation, such as the creation of a European Artificial Intelligence Board. “They will have a huge amount of influence over what gets added to or taken out of the high-risk list and the prohibitions list,” he says, meaning exactly who sits on the board will be key.
The EU has had previous success in influencing global tech policy. Its General Data Protection Regulation, introduced in 2018, inspired similar laws in non-EU countries and in California, the home of Silicon Valley. In response, however, some US firms have simply blocked EU customers from accessing their services.
It remains to be seen whether the UK will follow the EU in regulating AI now that it has left the bloc. The UK Department for Business, Energy & Industrial Strategy told New Scientist that the government has formed an independent panel, the Regulatory Horizons Council, to advise on what regulation is needed in response to new technologies such as AI.