Speaking at Tel Aviv University last June, Sam Altman, CEO of OpenAI, made two headlines that caught the attention of the Israeli media. Addressing the risks of AI development, he stressed the need to take the existential threats of AI seriously by creating an international regulatory body, similar to the organizations that control nuclear power, to ensure its responsible use by all countries. Complimenting the audience, Altman expressed his confidence that Israel's tech ecosystem will play a "huge role" in the artificial intelligence revolution that is transforming the world. "There are two things I have observed that are particular about Israel: the first is talent density, and the second is the relentlessness, drive, and ambition of Israeli entrepreneurs," he said at the event.
There are good reasons for Altman's words of appreciation. Israel's Artificial Intelligence sector is growing fast, placing the country among the leaders in the field. According to Stanford University's AI Index Report 2022, Israel ranked fifth in relative AI skill penetration for 2015-2021 and fourth in private investment in AI in 2021, with $2.4 billion. In absolute terms, 2,200 companies in the country use Artificial Intelligence, as the Israel Innovation Authority reported in May. In recent months, there has been a sharp increase in Israeli start-ups entering the field of generative AI. Israeli companies apply AI in cyber, fintech, agrotech, and organizational software. One of the critical goals of the Israel Innovation Authority and AI experts is the development of a "National LLM," a language model that will function in Hebrew and Arabic. The significant presence of big technology companies and highly ranked academic institutions provides a solid platform for international discussions on the future of AI, such as the recent "Data Sciences" conference, which attracted professionals from all over the globe.
However, the central message of OpenAI's leaders to the Tel Aviv audience and to global public opinion is the need to handle the future development of Artificial Intelligence with due caution. Sam Altman and OpenAI chief scientist Ilya Sutskever compare AI's negative potential to that of nuclear energy, a comparison that sounds like a stern warning. In his congressional testimony in May, Altman suggested a regulatory body to oversee the licensing and use of AI "above a certain threshold." On the international level, he believes a new international organization should regulate the development and use of Artificial Intelligence in the way the International Atomic Energy Agency (IAEA) controls nuclear power.
As someone who was involved in discussions on AI regulation a few years ago, I found it encouraging to see the CEO of OpenAI urging such regulation. Sam Altman is not the first technology leader to express concern about the risks of AI - Elon Musk did so as early as 2014, publicly contemplating the need for national and international regulation. Sam Altman takes this regulatory cause one step further: his recent road shows to Europe, Asia, and the Middle East constitute an effort to generate a more informed discussion of AI risks among decision-makers. It is a welcome initiative.
However, we should remember that AI regulation is not a new topic for international organizations and national governments. Multiple international bodies, including UNESCO, the Council of Europe, and the OECD, have delved into the issue, hoping to forge a broad consensus among countries. But were they successful? The debates on AI regulation started three years before ChatGPT's launch, yet those respected bodies are still trying to find common ground or propose a regulatory mechanism. The differences between approaches to AI regulation precluded any reasonable agreement among all member states. Concerns for human rights and privacy were at the center of the European approach; the United States advocated a market-driven approach and feared that over-regulation would stifle innovation, while China stood for complete government control of Artificial Intelligence. Judging by the current state of affairs in the UN system, it is unrealistic to expect a global consensus on AI. The only international organization able to reach an agreement among its members is the European Union, which plans to finalize the AI Act by the beginning of 2024. This example provides an important lesson on AI regulation - reaching a consensus among like-minded countries is the right way to proceed.
Another vital lesson is the direct responsibility of governments to provide the regulatory framework, along with the equally unquestionable need for an open dialogue with industry and civil society. Israel's government, for example, consulted with leading entities in the hi-tech sector and with technology experts, and then published a draft policy on AI regulation in November 2022 for public comment.
One last lesson comes from the field of diplomacy: to reach a consensus, we must find a compromise. Like others before him, Sam Altman discovered on his European tour that the European approach is more preventative, and potentially more harmful to innovation, than the American position. However, once we recognize, as Sam Altman did, that regulation is crucial, we can and should find a middle ground among like-minded countries. The OECD could be the best platform, as it brings under the same roof countries from Europe, Asia, and the Americas that share common values but foster different cultural perspectives and traditions. Once achieved, this consensus would become the basis for an agreement on AI regulation that other countries could join.
What could this consensus look like? In Israel, the emerging approach is one of "soft" regulation, as explained in the draft policy papers prepared by the Israel Innovation Authority. Instead of a comprehensive legislative framework, regulators in different sectors would examine the need for concrete regulation in their fields while maintaining a uniform government policy. In addition, regulation would be carried out, in appropriate cases, using advanced regulatory tools such as voluntary standardization and self-regulation. Thus, Israel's approach combines elements of the European position (a uniform government policy) and the American one (voluntary steps and self-regulation). The draft policy paper also suggests using a modular format, regulatory experimentation tools (such as "sandboxes"), and public participation in the deliberation process.
A doomsday scenario for AI technologies is probably exaggerated, but the need for regulation is not. Given the current pace of AI development, procrastination could prove too dangerous.