AI + EU: regulating the future?

How is the European Commission proposing to make the most of opportunities offered by artificial intelligence?

Artificial intelligence has become an area of strategic importance and a key driver of economic development.

(Image: A scene from a conference on the future of artificial intelligence in the European Parliament held on 7 March 2018 / Photo: juliareda.eu/events/future-of-ai)

It can bring solutions to many societal challenges, from treating diseases to minimising the environmental impact of farming. However, the socio-economic, legal and ethical impact of AI must be carefully addressed, and several noteworthy experts have not been slow to do so.

For example, the late Stephen Hawking famously expressed his fear that AI might one day take over, saying that thinking machines “could spell the end of the human race.” There is irony in his concern, considering that Hawking had to rely on AI to give him the voice that allowed him to interact with the world.

Anja Kaspersen, former head of geopolitics and international security at the World Economic Forum, has spoken about the risk of AI being weaponised. Russia has already unveiled its "Iron Man" military robot, which aims to minimise the risk to soldiers.

However, Kaspersen also balances the negative with recognition of the hugely positive potential of AI. In her article on the artificial intelligence arms race, she says, “Many AI applications have life-enhancing potential, so holding back its development is undesirable and possibly unworkable. This speaks to the need for a more connected and coordinated multi-stakeholder effort to create norms, protocols, and mechanisms for the oversight and governance of AI.”

When it comes to AI, the main fears seem rooted in uncertainty about how others will leverage the technology. With AI intended to automate work and collect data on a massive scale, citizens are concerned about how their data will be managed and who will have access to it.

"Citizens are concerned about how their data will be managed and who will have access to it."

Things have reached the point where governments have decided to issue directives intended to regulate AI.

In March 2018, the European Commission opened applications to join an expert group in artificial intelligence tasked with:

  • advising the Commission on how to build a broad and diverse community of stakeholders in a "European AI Alliance"
  • supporting the implementation of the European initiative on artificial intelligence 
  • preparing draft guidelines for the ethical development and use of artificial intelligence based on the EU's fundamental rights.

In addition, the EU has increased its annual investment in AI by 70% under the research and innovation programme Horizon 2020, reaching EUR 1.5 billion for the period 2018-2020.

Furthermore, on 7 December 2018 the European Commission and the Member States published a coordinated plan to promote the development and use of AI in Europe.

The EU Plan is necessary because:

  • Only when all European countries work together can they make the most of the opportunities offered by AI and become a world leader in this crucial technology for the future of our societies.
  • Europe wants to lead the way in AI based on ethics and shared European values so that citizens and businesses can fully trust the technologies they are using.
  • Cooperation between Member States and the Commission is essential in order to address new challenges brought by AI.

AI applications raise questions about liability and the fairness of decision-making. The General Data Protection Regulation (GDPR) is a significant step towards building trust here, and the Commission wants to move forward on ensuring legal clarity in AI-based applications. Moreover, in 2019 the Commission will develop and make available AI ethics guidelines.

These EU initiatives are a good development, but they don't go far enough. AI has made great strides in the past few years, and those rapid advances are now raising major ethical dilemmas.

"These EU initiatives are a good development, but don't go far enough."

For example, a new report from the AI Now Institute, an influential research institute based in New York, has identified facial recognition as a critical challenge for society and policymakers.

The speed at which facial recognition has grown is due to the rapid development of a type of machine learning known as deep learning. Deep learning uses massive tangles of computations - roughly analogous to the biological wiring in a brain - to recognise patterns in data. It is now able to carry out pattern recognition with high accuracy. 
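To make that idea concrete, the sketch below is a minimal, purely illustrative example (not tied to any EU system or to the models behind facial recognition): a tiny neural network built with NumPy learns a toy pattern (XOR) by repeatedly adjusting its weights. Real facial-recognition systems are vastly larger, but they rest on the same principle of layered weighted sums and non-linearities tuned by gradient descent.

```python
# Minimal sketch of the idea behind deep learning: layers of weighted sums
# and non-linearities whose weights are nudged by gradient descent until the
# network recognises a pattern. Here a tiny 2 -> 4 -> 1 network learns the
# XOR pattern from four examples. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)

# Toy "pattern": XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for the two layers.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: two layers of weighted sums followed by non-linearities.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the prediction error back to every weight.
    error = output - y                                # derivative of squared error
    grad_out = error * output * (1 - output)          # through the output sigmoid
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0)

# After training, the outputs should approximate the XOR pattern [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The point of the toy example is simply that the network is never given a rule for the pattern; it infers one from examples, which is exactly why questions about the data such systems are trained on, and who controls it, matter so much for regulators.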

How is the EU going to cope with the rapid deployment of AI systems when it has no dedicated regulation, only AI guidelines that will not be available until later in 2019? How will facial recognition fit into the remit of the GDPR?

The EU has launched many interesting initiatives, which could have a positive impact in the long term. However, it seems to be taking a slow and soft approach. In order to develop, AI needs the trust of citizens now. To earn this trust, AI will have to respect ethical standards that reflect EU values. Decision-making should be understandable, human-centric and delivered on time. Technology does not wait.

Stavros Papagianneas

With a background including positions such as communication officer at the European Commission and press officer and spokesperson to various diplomatic missions in Brussels, Stavros Papagianneas is currently managing director of public relations consultancy StP Communications. He is a senior communications leader with more than 20 years’ experience in strategic communications, public affairs, public relations, media relations and event management. He has also been a member of the Working Party on Information of the Council of the European Union and is the author of the book Rebranding Europe. He has been listed in the Top 40 EU Influencers in 2017 and 2018. Follow him on Twitter at @StPapagianneas and @stpcomms.