The outbreak of Covid-19 a year ago intensified the debates about the regulation of Artificial Intelligence. Given the role of AI technologies in fighting the pandemic, the attention of all stakeholders to their impact, both positive and negative, was all too obvious. At the same time, the extensive use of these technologies by some governments to control the spread of the coronavirus brought to light controversial and disturbing aspects of AI. Human rights advocates and experts sounded the alarm about the large-scale use of facial recognition technologies for surveillance purposes. Beyond the pandemic, concerns were also raised about the military use of Artificial Intelligence and other AI applications that could endanger privacy, amplify the polarization of societies and empower autocrats. Those concerns are
raised periodically by members of the expert community and leaders of the
hi-tech industry. Take, for example, Elon Musk, who told MIT students as early as 2014: "I
think we should be very careful about artificial intelligence... I’m
increasingly inclined to think that there should be some regulatory oversight,
maybe at the national and international level, just to make sure that we don’t
do something very foolish." However, the conference organized last week by the Council of Europe showed that senior government representatives across the continent are now adding their voices to those concerns, sending a clear message: Europe will not sit idle in the face of this challenge.
The conference's title, "Human rights in the era of AI: Europe as an international standard setter for Artificial Intelligence", embodies the essence of Europe's ambitious goal. The conference was organized by Germany, which took over the presidency of the Council of Europe last November and announced that the question of human rights and technology would be one of its key priorities in this role. The German Minister of Foreign Affairs, Heiko Maas, who opened the conference, set the tone for the discussions that followed when he declared, loud and clear, that it is for democratic parliaments and governments to decide on AI regulation.
Minister Maas emphasized two major elements of the European AI strategy. First, he stressed the importance of developing Europe's own capabilities in AI technologies and announced that, to achieve this goal, 200 billion euros of the EU budget have been earmarked for digitalization. The second element of the plan is to establish European standards for Artificial Intelligence that "can create benchmarks worldwide". According to him, the Council of Europe, working with the European Union, possesses a significant legal framework and the instruments to address the problems of AI regulation.
Beyond the operational items of its AI strategy, the European vision of the "geopolitics of bits and bytes" and of "digital bipolarity" merits attention. The German Foreign Minister described the emergence of two poles of power around Artificial Intelligence: the Chinese digital model, which prioritizes surveillance, and the heavily market-oriented Silicon Valley model. Instead of choosing a side in this rivalry, Europe, he said, will be open to all partners who share the conviction that AI technologies should reinforce democracy and human rights, not erode them.
Mr. Maas also had a message for the new US administration: while expressing hope that President Biden will be a partner in this vision, he asserted that
“... we shouldn’t wait
for Washington. Our ambition
must be to continue building our own European digital model that puts humans at
the center, remains open to the world and protects our values and democracy.”
(Speech by Federal Foreign Minister Heiko Maas at the virtual conference "Human Rights in the Era of AI: Europe as an international standard setter for Artificial Intelligence", https://www.auswaertiges-amt.de/en/newsroom/news/maas-human-rights-artificial-intelligence/2435928)
The contours of the European digital model for AI regulation were first formulated in the European Union's Strategy for AI in 2018. In November 2019 the Council of Europe, another European multilateral organization, headquartered in Strasbourg, whose raison d'être is to safeguard democracy, the rule of law and human rights in Europe, entered the fray and established a special ad hoc Committee on Artificial Intelligence (CAHAI). Aside from its expertise in human rights protection, the Council of Europe's additional advantage in forging an all-European consensus around AI regulation comes from its significantly larger membership. All European states, with the exception of Belarus, are members of the Council, including Russia and Turkey; countries outside Europe have also joined CAHAI as observers, among them the USA, Canada, Japan and Israel. In December 2020 CAHAI published a feasibility study, which provides concrete recommendations to the Council of Europe on the regulation of AI technologies.
The major conclusion of this feasibility study, which was also presented at the conference, is that "an
appropriate legal framework will likely consist of a combination of binding and
non-binding legal instruments that complement each other". The report further elaborated on the role of binding regulations:
“Any binding document,
whatever its shape, should not be overly prescriptive so as to secure its
future-proof nature. Moreover, it should ensure that socially beneficial AI
innovation can flourish, all the while adequately tackling the specific risks
posed by the design, development and application of AI systems.”
(Ad-hoc Committee on Artificial Intelligence (CAHAI)
Feasibility Study, https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da)
In my conversations with representatives of national delegations to the Council, it was quite clear that most of them agree with the emerging approach to AI regulation, even if some had mild reservations. This impression was also reinforced at the conference, where an absolute majority of the panelists, including CAHAI experts, agreed on the need to establish binding regulation. Interestingly, the only voice of dissent came from a representative of Japan, who expressed concern about the negative impact of regulation on innovation. And yet the prevailing opinion among the experts rejected any contradiction between ethics and innovation and favored binding regulations, while acknowledging that socially beneficial innovation should enjoy more flexible regulation.
The work of the Committee will continue throughout 2021, with the goal of completing its mission by the end of the year. The Council's goal of setting European standards for AI is ambitious, but its determination to establish a European digital model looks stronger than ever, and its sense of direction and destination is admirable. Considering this, it would be sensible for like-minded countries outside of Europe to join the discussion and address legitimate concerns and differences of opinion together. Time is of the essence.