Speaking at MIT Technology Review’s EmTech Digital conference in May 2023, Geoffrey Hinton, who is often described as the ‘Godfather of AI’, warned that Artificial Intelligence (AI) poses a threat to humanity’s survival: ‘smart things can outsmart us!’ Over the years, the astonishing progress of AI has impressed users with its diverse abilities, while those same abilities have provoked concerns about their implications for society.
The AI Act
On Wednesday, 14 June 2023, lawmakers in Europe signed off on the world’s first comprehensive set of rules for AI. The landmark legislation, first proposed in 2021, will regulate any AI-based product or service. The Act categorizes AI systems into four risk levels based on a risk pyramid model, aiming to protect consumers from the nefarious use of AI. Enforcement of these rules will be the responsibility of the European Union’s 27 member states.
Regulators will even have the power to force companies to remove their applications from the market. In severe instances, breaches may result in penalties of up to 30 million euros ($33 million) or 6% of a company’s worldwide annual revenue, whichever is higher. For tech giants such as Google and Microsoft, these fines could run into billions of euros.
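To make the penalty arithmetic concrete, here is a minimal sketch of how the cap might be computed, assuming, as in the draft Act, that the higher of the two ceilings applies; the function name and the revenue figure are hypothetical:

```python
def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Illustrative penalty cap under the draft AI Act: 30 million EUR
    or 6% of worldwide annual revenue, whichever is higher (an
    assumption based on the draft text; not legal advice)."""
    flat_cap = 30_000_000                               # 30 million EUR
    revenue_cap = 0.06 * worldwide_annual_revenue_eur   # 6% of revenue
    return max(flat_cap, revenue_cap)

# Hypothetical tech giant with 250 billion EUR in annual revenue:
print(f"{max_fine_eur(250e9):,.0f} EUR")  # -> 15,000,000,000 EUR
```

At that scale, the 6% ceiling dominates, which is why fines for the largest companies could reach billions of euros.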
While China and the USA dominate pioneering AI technology, European countries have lagged behind. Yet the EU has emerged as a trailblazer, establishing regulations that are implicitly recognized as global norms and leading the way in addressing the influence of major tech corporations.
The risk pyramid
This is the regulatory framework the Act uses to define four levels of risk in AI; a brief illustrative sketch of the tiers follows the list below.
- Unacceptable risk – AI systems that pose a clear threat to the safety, livelihoods, and rights of people will be banned. For example, toys using voice assistance that encourage dangerous behavior.
- High risk – AI technology used in critical infrastructure (e.g., transport); educational or vocational training (e.g., scoring of exams); product safety (e.g., AI applications in robot-assisted surgery); employment (e.g., CV-sorting software for recruitment procedures); law enforcement (e.g., evaluation of the reliability of evidence); migration, asylum, and border control management (e.g., verification of the authenticity of travel documents); and the administration of justice and democratic processes (e.g., applying the law to a concrete set of facts). High-risk AI systems will be subject to strict obligations before they can be put on the market.
- Limited risk – AI systems subject to specific transparency obligations. For example, chatbots must be transparent so consumers are aware that they are interacting with a machine.
- Minimal or no risk – The proposal allows the free use of minimal-risk AI. For example, AI-enabled video games or spam filters.
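For readers who think in code, the sketch below summarizes the four tiers as a simple mapping; the labels and one-line treatments are paraphrases of the Act’s categories, written for illustration rather than as legal text:

```python
# Paraphrased summary of the Act's four risk tiers (illustrative, not legal text).
RISK_PYRAMID = {
    "unacceptable": "banned outright (e.g., toys whose voice assistants encourage dangerous behavior)",
    "high":         "strict obligations before market entry (e.g., exam scoring, CV-sorting software)",
    "limited":      "transparency obligations (e.g., chatbots must disclose they are machines)",
    "minimal":      "free use (e.g., AI-enabled video games, spam filters)",
}

for tier, treatment in RISK_PYRAMID.items():
    print(f"{tier}: {treatment}")
```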
Two key challenges
1. The rapid pace of AI: In February 2023, Reuters reported that ChatGPT had reached 100 million monthly active users in January 2023, a mere two months after its launch, making it the fastest-growing consumer application in history. Although AI had been quietly making remarkable progress for a considerable period, it is the ferocity of ChatGPT’s rise that catapulted AI into the global spotlight. From Microsoft’s $13 billion investment in OpenAI to Meta CEO Mark Zuckerberg stating, ‘Our single largest investment is in advancing AI and building it into every one of our products’, the race for AI is accelerating. However, regulatory frameworks are lagging behind in the digital world: regulations and legal safeguards lack the agility required to address the swift development of AI effectively.
2. Regulatory challenges: Because AI capabilities are so diverse, a universal regulatory approach would result in excessive regulation in some cases and insufficient regulation in others. Therefore, as Tom Wheeler, former Chairman of the US Federal Communications Commission, stresses, AI regulation must be risk-based and targeted. He recommends addressing old-fashioned abuses (e.g., scams amplified by AI), ongoing digital abuses (e.g., misinformation, disinformation, and malinformation), and the dystopian effects of AI separately.
Around the world
- USA: In October 2022, the Biden administration released its “Blueprint for an AI Bill of Rights,” which focuses on privacy standards and testing before AI systems become publicly available. The White House Office of Science and Technology Policy identified five principles to guide the design, use, and deployment of automated systems: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.
- CHINA: In April 2023, the Cyberspace Administration of China (CAC) revealed draft rules for generative AI, mandating adherence to the Communist Party’s strict censorship rules. The content produced by AI systems must reflect ‘socialist core values’ and avoid information that undermines ‘state power’ or national unity. Companies will also have to ensure their chatbots generate text and images that are truthful and respect intellectual property, register their algorithms, and submit their chatbots to the CAC for security reviews before public release.
- UK: In June 2023, the UK government announced that London would host a global summit on AI safety later in the year. UK Prime Minister Rishi Sunak and US President Joe Biden are reportedly set to discuss the risks of AI, including strategies for risk mitigation through internationally coordinated action.
What’s next?
It may take several years for the regulations to be fully implemented. The next stage involves negotiations among member countries, the European Parliament, and the European Commission, which could result in further amendments to the Act. According to reports from the EU, final approval is anticipated by the end of 2023, followed by a grace period of approximately two years for companies and organizations to adapt.
Brando Benifei, an Italian member of the European Parliament involved in shaping the AI Act, expressed an intention to expedite the adoption of rules to accommodate rapidly evolving technologies such as generative AI. In the meantime, to bridge the legislative gap, Europe and the USA are developing a voluntary code of conduct, with the possibility of expanding its scope to include other like-minded countries.