Cities discuss regulation of AI and its many challenges

3 January 2022

The Artificial Intelligence Act (AI Act) has the potential to shape policy beyond Europe’s borders, and cities are looking for a seat at the table so that their voices are heard and the rights of citizens are fully respected. Privacy, transparency, accountability and explainability are major issues. Citizens’ security depends on users and developers of AI abiding by existing legislation, and on lawmakers updating that legislation in the face of new risks. Users and developers must also guarantee public participation and respect citizens’ privacy. Citizens must know when and how their data is collected, and how it is subsequently used.

The AI Act, proposed in April 2021 and under consultation until August 2021, will create a common regulatory and legal framework for the 27 countries of the European Union. The framework assigns different risk levels to AI applications; high-risk systems will be more heavily regulated and listed in a searchable public database maintained by the European Commission. A European AI Board will be created to define what falls under the high-risk category.

Even if it’s not a question of AI-powered killer robots (yet?), the AI Act will be essential to guarantee citizens’ rights to privacy and data transparency, and to define the role of cities, governments and businesses in implementing and managing AI systems.

AI and privacy concerns

In February 2020, the Commission published a white paper on artificial intelligence titled ‘A European approach to excellence and trust’. It signals the importance and potential of artificial intelligence, but also the European Union’s concern to deploy AI effectively and safely by fostering excellence and trust among users and providers.

Lodewijk Noordzij, policy officer for digital transformation at Eurocities, explained that “in April 2021, the Commission published the Artificial Intelligence Act proposal, on the ethical and legal requirements for the use of AI. Artificial intelligence is an opportunity, but it also poses risks to the application of various other EU rules and to citizens’ rights. This proposal aims to ensure the safe use of artificial intelligence within the European Union.

On 29 November, the Slovenian presidency produced a compromise text with some changes to social scoring, biometric recognition and high-risk applications,” he added.

Yordanka Ivanova, legal and policy officer at the European Commission’s DG CNECT (unit A2), noted that “we want to continue this dialogue and cooperation, because the use of AI by public authorities is quite important for us. We have the ambition to encourage governments to use AI to the benefit of public authorities and citizens alike, with all the benefits we can take from it.

We also want to create an environment of trust for users and the protection of our existing rights, as well as the right conditions for Europe to become a leader in trustworthy AI. We want to harmonise legislation and create a single market for trustworthy AI.”

There are different interests at the table. On one side, big businesses say that the draft legislation goes too far and would stifle innovation; on the other, human rights groups say it doesn’t go far enough, leaving citizens vulnerable. The bill also does not cover military uses of AI, which raises yet more questions and concerns.

High-risk AI

Ivanova explains that the EU Commission follows a “risk-based approach”. This means, for example, that AI considered to pose minimal or no risk won’t need specific rules, while limited-risk applications will face specific transparency obligations, leaving the legislation to deal with “mainly problems with applications that could have very serious consequences for people’s rights and for safety.” Social scoring by public authorities is considered an unacceptable risk, and is therefore a prohibited practice.

“The main obligations that come for public authorities are related to the high-risk use cases that are explicitly in our regulation,” says Ivanova. One example is care systems. In Helsinki, for instance, the city is exploring the use of predictive healthcare, analysing patients’ data in risk groups. Such data must not be leaked and deserves special treatment from authorities.

She further explains that “not all public services that use AI are regulated. But some, like those assessing people’s eligibility for social assistance and benefits, are. There are also some use cases in the area of law enforcement, linked to the detection and investigation of crime, as well as in the area of migration, asylum and border control management.

In general, if you fall within one of these use cases, the requirements are based on the High-Level Expert Group guidelines for ethical and trustworthy AI and on all the existing principles, to make sure that the system is subject to appropriate safeguards from its design and development.”

The Commission is proposing several additional requirements:

  • Appropriate data governance procedures, to make sure the system is built with good-quality data, because an algorithm performs only as well as the data on which it has been trained.
  • Documentation of the system itself, with logs that ensure its traceability over time.
  • An appropriate degree of transparency about how the system operates and how its output should be interpreted by the user, as well as information for the human operators who actually deal with the system, so they can use it appropriately and know its capabilities and limitations.
  • Human oversight, which public authorities will also have to exercise in their daily activities.
  • Robustness, accuracy and security of the systems.

Public authorities and AI providers will have to maintain a quality management system and demonstrate compliance with the new requirements. A new EU board, composed of member state representatives and an expert group “where all perspectives will be represented, including the city and the public authorities,” will oversee the implementation of AI systems among member states, together with national competent authorities.

Cities’ dos and don’ts

Federica Bordelot, policy advisor on digital and innovation policies at Eurocities, notes that there is still a lack of clarity about the processing of personal data by AI systems and its real impact on local governments.

“We are in favour of the broad definition of AI systems, which really allows for effective regulations to be put in place. We have a few concerns specific to the legal requirements. For local government, there is a need to specify the different categories concerned a bit more. We support the risk-based approach, and also the policy on unacceptable uses. Local governments, for example, ask that biometric identification be banned until such systems fully respect fundamental rights and are compatible with the GDPR.”

Bordelot asks for public authorities to be “effectively involved in the definition of AI systems’ initial requirements. We think that these are still very much driven by the private sector,” as the proposed approach allows for self-regulation by industry. The responsibilities and role of the AI Board must also be made clear, “to avoid the risk of overlap with boards that already exist, like the European Data Protection Board.”

There is also the question of who is to define whether certain uses or systems fall into the high-risk category, as there is a clear divide between the interests of businesses and civil society, with governments stuck in the middle.

“The AI Act will not be a standalone act; it is a piece of a puzzle, and you have to link it to other legislative frameworks, other regulations, ethical norms and frameworks,” says Luca Bolognini, president of the Italian Institute for Privacy and Data Valorisation. For him, “there will be a strong need for multidisciplinary skills and knowledge. This could be a problem for a single city, for a single player. It will be key to have people competent and very expert in many, many different fields. There will be a need for the precise identification of roles according to the different disciplines.”

Many actors are still struggling to comply with the rules of the General Data Protection Regulation (GDPR), and the new demands of AI regulation are even greater, so a shared, community-wide approach is necessary.

Michael Mulquin, from Open and Agile Smart Cities (OASC), agrees, adding that “cities lack resources or expertise, but still need to comply with the regulations that have been developed, and need to be able to provide better services to their citizens.

Cities recognise the value that AI and algorithmic decision-making systems can bring in helping them do a better job and make better decisions more quickly. But obviously, cities are justifiably concerned that, when they employ these algorithms, they should be fair and transparent.”

By 2025, AI is expected to enable over 30% of smart city applications, including urban mobility solutions, so a broad discussion of legislation and cities’ needs is urgent. Self-learning algorithms, smart data, sensitive data, privacy, transparency and high-risk uses of AI are major concerns for institutions, citizens and businesses alike. The biggest challenge is to speak the same language and find common ground.

“This is a complicated picture,” says Mulquin, “but clearly cities need to have clear guidance about how they need to manage AI and what they need to do to make sure that as it continues to learn, it will continue to be trustworthy.”

Contacts

Wilma Dragonetti, Eurocities Writer
Raphael Garcia, Eurocities Writer