News | 15 Feb 2020 | European R&I Update

EC to set out AI rules: white paper to consider mandatory testing for ‘high-risk’ AI and updated safety and liability laws


The European Commission will next week release its long-awaited plan on how to proceed toward laws for artificial intelligence (AI) that ensure the technology is developed and used in an ethical way.

The latest leaked proposal suggests a few options that the Commission is still considering for regulating the use of AI, including a voluntary labelling framework for developers and mandatory risk-based requirements for "high-risk" applications in sectors such as health care, policing, or transport.

However, an earlier proposal to introduce a three-to-five-year moratorium on the use of facial recognition technologies has been dropped, suggesting the Commission will not proceed with the idea.

The bloc’s executive arm is expected to propose updating existing EU safety and liability rules to address new AI risks.

“Given how fast AI evolves, the regulatory framework must leave room for further developments,” the draft says.

AI requires a “European governance structure”, the paper says, potentially replicating the model of the EU’s network of national data protection authorities.

EU governments are beginning to move forward on AI, “risking a patchwork of rules” throughout the continent, the draft says. Denmark, for example, has launched a data ethics seal. Malta has introduced a certification system for AI.

Following the release of the Commission white paper next week, the EU will spend months collecting feedback from industry, researchers, civil society and governments. Formal legislation is expected to be drafted in the autumn.

High-risk AI

The Commission’s thinking on AI – ordered by new President Ursula von der Leyen as one of the initiatives she wants to launch in her first 100 days in office – is part of a global debate about these new technologies. Several researchers have been sounding the alarm that unregulated AI could undermine data privacy, enable rampant online hacking and financial fraud, lead to wrong medical diagnoses, or produce biased decisions on lending and insurance. Last year, leaders of the world’s 20 largest economies agreed to a set of broad ethical principles for AI, but they have not yet addressed the kind of specific measures now being discussed by the Commission.

According to the Commission’s draft paper, the challenge in pinning rules onto AI is that many of the decisions made by algorithms will in the future be unintelligible to humans – “even the developers may not know why a certain decision is reached”. This has become known as AI’s “black box” decision making.

In any case, EU laws should differentiate between “high-risk” and “low-risk” AI, with high-risk applications tested before they come into everyday use.

It will be necessary to set “appropriate requirements” on any data fed to AI algorithms, in order to ensure “traceability and compliance”, the paper says.

AI algorithms should be trained on data in Europe “if there is no way to determine the way data has been gathered.”

Responsibility for AI applications should be shared between “developer and deployer”. Accurate records on data collection will need to be maintained.

Trustworthy AI

The EU should aim to attract €20 billion a year in AI investment over the next decade, the paper says. The continent lags its global rivals, investing €3.2 billion in AI in 2016, compared with €12.1 billion in North America and €6.5 billion in Asia.

According to the AI draft, “In addition to a lack of investment, the other main thing holding back the uptake of AI is lack of trust”.

The Commission wants to find ways to “incentivise trustworthy AI” without smothering companies in red tape that would limit their potential to innovate.

One idea discussed in the draft is to introduce a voluntary certification scheme for AI companies. If they meet all the requirements, they would receive a quality label signalling that they offer “trustworthy AI”.

Europe also needs “a lighthouse centre of research and innovation” that will be a world reference for AI. Member states should each aim to establish at least one digital innovation hub with “a high degree of specialisation on AI”. The Commission will chip in €900 million for these hubs, money drawn from its Digital Europe programme.

The European Investment Fund and the Commission will together run a pilot scheme worth €100 million to boost the AI market. The Commission will also establish a new public-private partnership on AI and robotics.

Source: Science Business
