What comes to mind when you hear Artificial Intelligence (AI)? Personally, my first thought always goes to the film ‘Her’ and to human-like robots or machines that interact with us. In reality, AI has been among us for decades. It is used in medical diagnosis, search engines, online assistants, spam filtering, energy storage, playing games like chess, and much more.
Its growing potential and diverse applications – from health to transport, energy, agriculture, tourism and cybersecurity – have pushed the European Commission to develop the first legal framework on AI, which, after three years in the making, was officially presented yesterday. It was the 2018 Coordinated Plan on AI that first laid the ground for cooperation, defined areas for investment and encouraged Member States to develop national strategic visions on AI.
Aiming for an ideal third way
“Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said European Commission Executive Vice-President Margrethe Vestager. The regulation tries to walk the tightrope between fostering trust and boosting investment and innovation. It also aims to chart an alternative path between the liberal approach of the United States and the state-driven tactics of China.
Cities applaud the Commission’s initiative to propose the world’s first regulation on Artificial Intelligence, and support the EU’s efforts to continue positioning Europe as a global technological rule-maker grounded in our common democratic values and human rights.
“Artificial Intelligence is indeed an opportunity, but it also represents a risk for social inclusion and fundamental rights and freedoms,” says Laia Bonet, Deputy Mayor of Barcelona and Chair of the Eurocities Knowledge Society Forum. Studies have shown problems with AI such as poor data sets and real-world biases creeping into algorithms. “We believe AI can and should serve the interests of society,” adds Touria Meliani, Deputy Mayor of Amsterdam. “We should approach AI in the first place as a way of improving the lives of our citizens.”
Regulation based on potential risk
The new EU regulation is built on a risk pyramid that splits AI systems into four categories according to their potential risk: minimal, limited, high and unacceptable. Put simply, the higher the risk, the more heavily the system will be regulated. The controversial aspects emerge in the specifics of the two highest categories, as they involve sensitive systems such as facial recognition.
“There is no room for mass surveillance in our society,” said Executive Vice-President Vestager in her speech. European cities agree, welcoming the ban on practices such as social credit – the use of AI, facial recognition systems and big data analysis to track and evaluate the trustworthiness of individuals and businesses, resulting in the compilation of blacklists and whitelists – and on any other forms of government-conducted social scoring.
“We are, however, concerned that the proposed regulation keeps an open door to large-scale surveillance in its provisions regarding real-time biometric recognition systems,” says Deputy Mayor Bonet. There is sufficient evidence of the threats that facial recognition and other forms of biometric recognition pose to social and human rights to support an indefinite ban on their application. “We call for an outright, European-wide ban on biometric recognition systems,” insists Deputy Mayor Bonet.
The new regulation requires high-risk systems to undergo a conformity assessment, be registered in an EU database and carry a declaration of conformity before they can enter the market. They will also have to comply with European health, safety and environmental protection standards. But the devil is in the details, as they say, and European cities are concerned that some AI providers will be allowed to self-assess their compliance with these standards.
“Self-assessment is not up to the trustworthy model the Commission wants to promote,” says Deputy Mayor Bonet. “In our view, conformity with AI standards cannot be left in the hands of private companies providing AI technology.” Cities therefore call on the EU to invest adequately in independent mechanisms to guarantee that all AI providers meet EU standards.
To help national authorities supervise and implement the new rules and to keep working on critical issues such as the concrete categorisation of ‘high-risk’ AI applications, the European Commission has proposed the creation of a European Artificial Intelligence Board. Cities welcome this decision and call for local governments to be included as key stakeholders in the AI Board, given their track record as reliable partners to the EU across many fields of digital policy and their capacity to provide first-hand input and recommendations.
“Cities can do a lot in putting the idea of human-centred AI into practice – ranging from applications in social inclusion to smart health,” says Deputy Mayor Meliani. “In Amsterdam we have launched the Civic AI lab to do exactly that. We can innovate in society through inclusive AI technology, by designing AI to fight discrimination and to protect our digital lives.”
The regulation now begins its journey through the European legislative process, which will hopefully deliver an even better result than this promising start.