Regulating AI - what's next?
Speaking at the MIT Technology Review’s EmTech Digital conference in May 2023, Geoffrey Hinton, often described as the Godfather of AI, warned of the threat artificial intelligence (AI) poses to humanity’s survival: ‘smart things can outsmart us!’ Over the years, AI’s astonishing progress has impressed users with its diverse abilities and smartness, while those same abilities have provoked concerns about their implications for society.
The AI Act
On Wednesday, 14th June 2023, lawmakers in Europe signed off on the world’s first comprehensive set of rules for AI. The landmark legislation, first proposed in 2021, will regulate any AI-based product or service. The Act categorizes AI systems into four risk levels based on the Risk Pyramid model, aiming to protect consumers from the nefarious use of AI. Enforcement of these rules will be the responsibility of the European Union’s 27 member states.
Regulators enforcing the Act will have the power even to force companies to remove their applications from the market. In severe instances, breaches may result in penalties of up to 30 million euros ($33 million) or 6% of a company’s worldwide annual revenue. For tech giants such as Google and Microsoft, these fines could potentially reach billions of euros.
While China and the USA dominate in pioneering AI technology, European countries have lagged behind. Yet the EU has emerged as a trailblazer by establishing regulations that are implicitly recognized as global norms, leading the way in addressing the influence of major tech corporations.
The risk pyramid
This is the regulatory framework used by the Act to define the four levels of risk in AI.
Two key challenges
1. Expeditious AI Era: in February 2023, Reuters reported that in January 2023 ChatGPT had reached 100 million monthly active users, a mere two months after its launch, making it the fastest-growing consumer application in history. Although AI had been discreetly making remarkable progress for a considerable period, it is the meteoric rise of ChatGPT that catapulted AI into the global spotlight. From Microsoft’s $13 billion investment in OpenAI to Meta CEO Mark Zuckerberg stating, ‘Our single largest investment is in advancing AI and building it into every one of our products’, the race for AI is accelerating. However, regulatory frameworks are lagging behind: in the digital world, regulations and legal safeguards lack the agility required to effectively address the swift development of AI.
2. Regulation Challenges: due to the diverse nature of AI capabilities, a universal regulatory approach would result in excessive regulation in some cases and insufficient regulation in others. Therefore, as Tom Wheeler, the former Chairman of the Federal Communications Commission, USA, stresses, AI regulation must be risk-based and targeted. He recommends separately targeting old-fashioned abuses (e.g., scams amplified by AI), ongoing digital abuses (e.g., misinformation, disinformation, and malinformation), and the dystopian effects of AI.
Around the world
What’s next?
It may take several years for the regulations to be fully implemented. The next stage involves negotiations among member countries, the European Parliament, and the European Commission, which could result in further amendments to the Act. According to reports from the EU, final approval is anticipated by the end of 2023, followed by a grace period of approximately two years for companies and organizations to adapt.
Brando Benifei, an Italian member of the European Parliament involved in shaping the AI Act, expressed the intention to expedite the adoption of rules to accommodate rapidly evolving technologies like generative AI. In the meantime, to bridge the gap in the legislation, Europe and the USA are developing a voluntary code of conduct with the possibility of expanding its scope to include other like-minded countries.