Well that was fast. The UK's competition watchdog has announced an initial review of "AI foundation models," such as the large language models (LLMs) that underpin OpenAI's ChatGPT and Microsoft's New Bing. Generative AI models that power AI art platforms, such as OpenAI's DALL-E or Midjourney, are also likely to be in scope.
The Competition and Markets Authority (CMA) said its review will look at competition and consumer protection considerations around the development and use of AI foundation models, with the aim of understanding "how foundation models are developing" and producing "an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future".
It proposes to publish the review in "early September," with a June 2 deadline for interested stakeholders to submit views to inform its work.
"Foundation models, including large language models and generative artificial intelligence (AI), which have emerged over the past five years, have the potential to transform much of what people and businesses do. To ensure that AI innovation continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, including the [CMA], to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress," the CMA wrote in a press release.
The Center for Research on Foundation Models at Stanford University's Institute for Human-Centered Artificial Intelligence is credited with coining the term "foundation models," back in 2021, to refer to AI systems that are trained on a huge amount of data and can be adapted to a wide range of applications.
"The development of AI touches on a number of important issues, including safety, security, copyright, privacy and human rights, as well as the way markets work. Many of these issues are being considered by the government or other regulators, so this initial review will focus on the questions the CMA is best placed to address: What are the likely implications of the development of AI foundation models for competition and consumer protection?" the CMA added.
In a statement, CMA CEO Sarah Cardell also said:
AI has entered the public consciousness in recent months, but has been on our radar for some time. It is a rapidly evolving technology with the potential to transform the way businesses compete and drive substantial economic growth.
It is critical that the potential benefits of this transformative technology are easily accessible to UK businesses and consumers, while keeping people safe from things like false or misleading information. Our goal is to help develop this new, rapidly scaling technology in a way that ensures open, competitive markets and effective consumer protection.
Specifically, the UK competition regulator said its initial review of AI foundation models will:
- explore how the competitive markets for foundation models and their use might evolve
- examine the opportunities and risks these scenarios may present for competition and consumer protection
- establish guiding principles to support competition and protect consumers as AI foundation models develop
While it may seem early for the antitrust regulator to be reviewing such a fast-moving emerging technology, the CMA is acting on government instructions.
An AI white paper published in March indicated that ministers prefer not to set bespoke rules (or oversight bodies) for the use of artificial intelligence at this stage. However, ministers said existing UK regulators, including the CMA (which was name-checked directly), would be expected to issue guidance to encourage the safe, fair and responsible use of AI.
The CMA says its initial review of AI foundation models is in line with instructions in the white paper, in which the government talked about existing regulators carrying out "detailed risk analysis" in order to take possible enforcement action, i.e. against dangerous, unfair and unaccountable applications of AI, using their existing powers.
The regulator also points to its core mission — supporting open, competitive markets — as another reason to look at generative AI now.
Notably, the competition watchdog is set to gain additional powers to regulate Big Tech in the coming years, under plans taken off the back burner by Prime Minister Rishi Sunak's government last month, when ministers said it would move forward with a long-trailed (but much delayed) ex ante reform targeting the market power of digital giants.
The CMA's Digital Markets Unit, which has been operating in shadow form since 2021, is expected to (finally) gain legislative powers in the coming years to apply proactive "pro-competition" rules tailored to platforms deemed to have "strategic market status" (SMS). So we can speculate that, over time, providers of powerful foundation AI models could be judged to have SMS, meaning they could face bespoke rules on how they are allowed to operate vis-à-vis rivals and consumers in the UK market.
Britain's data protection watchdog, the ICO, also has its eye on generative AI. It is another existing regulator the government has tasked with paying special attention to AI, under its plan for context-specific guidance to steer development of the technology through the application of existing laws.
In a blog post last month, Stephen Almond, the ICO's executive director for regulatory risk, offered some tips, along with a little warning, for developers of generative AI when it comes to complying with UK data protection rules. "Organizations developing or using generative AI should be considering their data protection obligations from the outset and taking a data protection by design and by default approach," he suggested. "This isn't optional – if you're processing personal data, it's the law."
Meanwhile, lawmakers across the English Channel in the European Union are establishing a firm set of rules that are likely to apply to generative AI.
Negotiations toward a final text for the EU's incoming AI rulebook are ongoing, but the current focus is on how to regulate foundation models via amendments to the risk-based framework for regulating uses of AI that the bloc published in draft form more than two years ago.
It remains to be seen where the EU's co-legislators will land on what is sometimes referred to as general-purpose AI. But, as we recently reported, parliamentarians are pushing for a layered approach to tackle safety issues with foundation models, the complexity of responsibilities across AI supply chains, and specific content concerns (such as copyright) associated with generative AI.
Moreover, EU data protection law already applies to AI, of course. And privacy-focused investigations into models like ChatGPT are underway in the bloc, including in Italy, where an intervention by the local watchdog last month led OpenAI to rush out a series of privacy disclosures and controls.
The European Data Protection Board also recently set up a task force to support coordination between different data protection authorities in investigations into the AI chatbot. Others investigating ChatGPT include the Spanish privacy watchdog.