Part I
I planned to write this article long before the global outbreak of
COVID-19. When, six months ago, I started my new diplomatic position representing
Israel at international organizations based in France, I was surprised to
discover how much importance all of them - the OECD, the Council of Europe, UNESCO -
attach to Artificial Intelligence, and especially to its regulation and ethics.
As I embarked on more extensive research, I realized that for multilateral
diplomacy this issue has become both an unavoidable item on the agenda and a
reputational gold mine.
As I contemplated this, I suddenly found myself in the new reality
of “confinement” decreed in France and almost everywhere else in Europe. A reality of
social distancing, of closed restaurants, cafes and theaters (in Paris - can
you imagine that?), of frightening reports of the numbers of infected and dead,
of attempts, sometimes contradictory, sometimes desperate, by governments to
fight the pandemic. A reality where your only hope is for it to be over. What
sci-fi movies and dystopian literature showed us with such precision is now on
full display as our everyday life. And the questions about AI and its ethics
are part of this reality, not just a product of the sci-fi imagination.
So, I decided to go on with my article. After all, the best way to
deal with the quasi-dystopian present we live in is to focus on things that
will last beyond it. William Shakespeare wrote his immortal King Lear, Macbeth
and Antony and Cleopatra during the 1606 plague in London, and Titian painted his
masterpieces in 16th-century Venice stricken by plague. Let’s be inspired by
them. And of course, let us not forget that when humanity turns the
coronavirus page in its chronicles, most of the challenges we dealt with before
the crisis will stay with us. Some of them will become even more pressing.
Artificial intelligence is probably one of them.
[Image: The "Three Laws of Robotics", devised by the American science-fiction author Isaac Asimov in his 1942 story "Runaround"]
Just look at the role of AI technologies during the ongoing
crisis. Using AI, along with other mobile-surveillance technologies, the Chinese
authorities were able to enforce their confinement efforts with spectacular effect. AI
was used by the Chinese, Taiwanese and South Korean governments to support
medical research and testing, and the US, UK and other countries followed suit.[1] In my home country, Israel,
the technology was used, among other things, by public health funds to predict
the spread of COVID-19.[2] Recruiting AI technologies to
fight the virus was welcomed, but at the same time met with growing concerns.
What implications will it have for our privacy, for our freedoms and rights as
individuals, and for our society in general?
In this context it is worth paying attention to an observation
made by Yuval Noah Harari, the best-selling Israeli historian: many decisions
and social experiments that in normal times would have entailed long deliberations
and debate were imposed on us during the COVID-19 crisis in the blink of an
eye. His assessment is that at least some of those decisions and experiments
will stay with us long afterwards, owing to their contribution to fighting the
virus.[3] There is no doubt that AI technologies have the potential to become
indispensable tools for dealing with future pandemics and other crises of global
proportions. Their success and efficiency during the cataclysmic spring of 2020 (and
beyond?) will inexorably strengthen the appetite of governments, companies,
researchers and ordinary people for promising technology-based panaceas. This new
push for AI, together with ever-growing recognition of its advantages,
will predictably re-energize efforts to regulate it.
The need to regulate AI technologies came to the
attention of various players some four years ago, resulting in multiple
documents of principles and guidelines. We can expect the upcoming debate
on AI to be informed by the work done so far, so it is worth
looking at it. Civil society and non-profit organizations, universities and
private companies were the first to identify the importance of AI regulation.
Among the pioneers were the Partnership on AI, founded by Amazon, Facebook and IBM; the Future of Life Institute; Union Network International; and the Tencent Institute
(China). These initiatives were followed by national governments’ reports,
such as “Preparing for the Future of Artificial Intelligence” by the US National Science and Technology Council
(2016) and the “White Paper on AI Standardization” by the Standards Administration of
China (2017). By 2018 it was finally the turn of international
organizations to take up the issue, and since then various multilateral
bodies have added AI regulation to their agendas.
The Organisation for Economic Co-operation and Development (OECD)
moved faster than most, not only adopting its AI Principles (May 2019)[4] but also moving towards the
implementation of some of them. Thus, in February 2020 the OECD’s AI Policy
Observatory was established, with the purpose of providing “evidence and guidance on AI metrics, policies and practices to help
implement the Principles, and constitute a hub to facilitate dialogue and share
best practices on AI policies.”[5]
The OECD was not the first multilateral heavyweight to rise to the
challenge of AI regulation, however. In 2018 the European Union presented its European
Strategy for AI, followed, last February, by the adoption of the White Paper on
Artificial Intelligence. UNESCO, the UN-affiliated organization based in Paris
and mandated to promote cultural and scientific cooperation, signaled its
interest in the subject by publishing a “Preliminary Study on the Ethics
of Artificial Intelligence” in 2019 and launching, in March, an ad-hoc experts group to
draft “global recommendations on the ethics of AI”. The Council of Europe (CoE), an
organization based in Strasbourg that brings together 47 countries of the European
continent (and is sometimes confused with the 27-member European Union), opted
to establish its own group of experts in December 2019; simultaneously, the
Parliamentary Assembly of the CoE commissioned several reports on the ethics and
impact of AI technology in the domains of the labor force and the health system.
Quite predictably, each of these organizations aims
to highlight its own vantage point, or the unique contribution it could make to
regulating AI technologies. The OECD’s focus on an “implementable and sufficiently
flexible” set of standards was devised to allow the governments of its
member states, representing the most advanced economies, to formulate better
policies on AI.[6]
The European Union's White Paper emphasized the need to define its own way,
“based on European values, to promote the development and deployment of AI”.[7] UNESCO, which joined the
discussion only recently and is aware of the work done by other bodies, argues
in its “Preliminary Study” that “there was a great heteronomy in the
principles and in the implementation of the values promoted” in documents
prepared by other organizations. According to UNESCO’s document, “AI has
implications for the central domains of UNESCO’s work” and therefore its
“approach could be complementary at the international level to the OECD’s”. But
the goal set by UNESCO is far more ambitious: “by initiating a Recommendation, UNESCO
would be able to distinguish itself not only in terms of ethical content but
also through specific proposals to Member States”.
However, is the argument about the “heteronomy” of guidelines and principles correct? How significant are the differences between the
values formulated in them? And do they indeed justify the
growing number of principles and guidelines documents seeking to regulate the use of AI
technologies?
Part II
Let’s have a closer look at these documents and examine their recommendations. The OECD recommendations highlight five basic principles for AI policies: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability.[8] The European Commission’s seven principles, which laid the ethical basis of the EU’s White Paper, are: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.[9] For fear of overwhelming the reader with repetition, I will add only a third and last example, the set of AI principles from UNESCO’s Preliminary Study: human rights; inclusiveness; flourishing; autonomy; explainability; transparency; awareness and literacy; responsibility; accountability; democracy; good governance; and sustainability.[10] The three organizations’ AI principles could hardly look more alike; in places they are identical.
In fact, the consensus looks even more significant when we compare all the known documents produced by other organizations and countries. A study published in Nature Machine Intelligence last September made an inventory of 84 documents on AI principles and ethics (incidentally or not, the absolute majority of those documents originated in the USA and the UK). Its central finding was that in most of the documents the recommendations focused on “11 overarching ethical values and principles”.[11] The list of these principles is almost identical to those of the OECD, the EU and UNESCO. Another, more recent study, published by researchers from Harvard University, identified only 8 such principles, concluding that “the conversation around principled AI is beginning to converge, at least for communities responsible for development of these documents”.[12] In a world torn by geopolitical rivalries, could we expect that, at least on AI ethics, global agreement is just around the corner?
As a matter of fact, both studies raised the concern that it is divergence, not convergence, that is underway. In one of them the researchers wondered, somewhat subtly, why “despite the increasing visibility of AI in human rights” the data they had gathered did not reveal a trend toward an increasing emphasis on human rights. In the second, the researchers made an even stronger assertion: even though the numerical data indicates convergence between the values, there are in fact “substantive divergences among all 11 ethical principles in relation to four major factors: (1) how ethical principles are interpreted; (2) why they are deemed important; (3) what issue, domain or actors they pertain to; and (4) how they should be implemented”.[13]
The discussions on AI ethics and principles taking place at international organizations reveal the same differences that exist in any discussion between representatives coming from different countries, cultures, faiths, practices, traditions and languages. The interpretation of values and their importance, as well as their context and ways of implementation, always depend on these factors. When it comes to relations between states, all the above-mentioned differences are complicated further by competing national interests and geopolitical considerations. States disagree on so many issues - on climate change, on immigration, on human rights, on democracy. How and why would they all agree on principles for the use and development of Artificial Intelligence, one of the most promising technologies, with the potential to change almost every aspect of our lives?
UNESCO’s argument about the “heteronomy” of principles is therefore essentially correct, even though its true sources were misidentified in the organization’s report on AI. The source of the divergence is not “the consequence of the definition chosen for AI or the objectives being sought”, as UNESCO contends, but rather the competing national interests, ideologies and geopolitics that prevent such a consensus from being reached. It must be said, to UNESCO’s credit, that its report correctly recognizes those hindrances: it refers, for instance, to a political climate created by “non-transparency, biases or ways of acting by big companies, or the rise of popular mistrust in the face of cyber-attacks”[14]; furthermore, it alludes to tensions between the US and China, the two major digital powers, that should be taken into account. These revealing remarks are valuable (especially given that all the other documents discussed above ignored such concerns) for understanding how difficult - maybe impossible - it will be to reach consensus on AI principles. Consequently, the fundamental question is whether UNESCO, or any other international organization, can forge a global consensus on AI.
To answer this question, we should take a closer look at how international organizations function today. They serve as platforms where states promote their interests by creating coalitions and obtaining legitimacy for their acts and policies. Sometimes states cannot achieve their goals in existing organizations, so they create new ones. Let's recall that in some periods of history we have had fiercely competing, even hostile, organizations - consider the Cold War confrontation between NATO and the Warsaw Pact. The United Nations itself, and its Security Council, was an arena of diplomatic warfare between the capitalist, democratic West and the socialist, authoritarian East. But even when the level of hostility in the world is relatively low, competition between states persists, and international organizations are just one of its battlegrounds. In fact, recent years have seen ever-growing tensions, as the US-centered system is challenged by other players, turning international organizations, once again, into arenas of political confrontation.
From this perspective, the chances of international organizations finding common ground on AI look dubious. UNESCO itself, which aptly emphasized the gravity of geopolitical concerns in this ethics-driven endeavor, lacks the international legitimacy to overcome the inevitable obstacles, since the US is absent from its own ranks: two years ago the US and Israel left UNESCO precisely because of its... excessive politicization. To soften this hurdle, the UNESCO expert group launched in March includes an American scientist, but consensus on AI there remains a very distant prospect. By contrast, the OECD’s recommendations stand a better chance of serving as a policy basis for its member states and the other countries that have joined the initiative. However, the absence of China and Russia from the OECD limits this effort to Western countries. The EU’s principles share the same constraint: their emphasis on European values will probably not make them an easy sell in other parts of the globe. The fact that the Council of Europe launched its own expert group on AI, separate from the EU’s effort, could also end up highlighting the differences between the western and eastern parts of the European continent.
It may well be that the efforts to reach a global agreement on AI will fail, as happened with cyberspace regulation a few years ago. If so, the most effective way to proceed will be to consolidate the principles among like-minded countries and the organizations associated with them. The example of the OECD and the EU shows that it is possible to reach consensus on recommendations and to start acting upon them. These organizations should engage other countries willing to endorse the AI principles, as is already happening with the OECD’s recommendations. The urgent need for AI regulation cannot depend on reaching a global consensus - a noble, but thus far elusive, goal.
© 2020 Yaron Gamburg. All rights reserved
against coronavirus”, 27.03.2020
[3] Yuval Noah Harari, “The pandemic compels unprecedented social experiments that will change the world” (Hebrew), Haaretz, 26.3.2020, https://www.haaretz.co.il/magazine/.premium.highlight-MAGAZINE-1.8710074
[4] “Recommendation of the Council on Artificial Intelligence”, OECD website: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
[6] “Recommendation of the Council on Artificial Intelligence”, OECD website: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
[7] “White Paper on Artificial Intelligence - A European approach to excellence and trust”, 19.2.2020, https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
[8] “Recommendation of the Council on Artificial Intelligence”, OECD website: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
[10] “Preliminary Study on the Ethics of Artificial Intelligence”, 26.2.2019, UNESCO website: https://unesdoc.unesco.org/ark:/48223/pf0000367823
[11] Anna Jobin, Marcello Ienca and Effy Vayena, “The global landscape of AI ethics guidelines”, Nature Machine Intelligence, Vol. 1, September 2019. https://www.nature.com/articles/s42256-019-0088-2
[12] Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy and Madhulika Srikumar, “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI”, Berkman Klein Center for Internet & Society at Harvard University. https://dash.harvard.edu/handle/1/42160420
[14] “Preliminary Study on the Ethics of Artificial Intelligence”