Chambers Student Interview – Paul Schwartfeger

“AI regulation is as much about political priorities and economic strategies as it is about legal doctrine.” We sit down with barrister Paul Schwartfeger to find out more about some of the burning questions surrounding AI and its regulation.

Chambers Student: Over the last few years, AI has dramatically changed, and its capabilities seem almost endless. What’s it like working with a technology that’s seemingly ever changing?

Paul Schwartfeger: It’s a really dynamic space, though I perhaps see it in a more measured way than some. Before I came to the bar, I was a technologist, so there’s a certain familiarity with the current wave of hype—how AI is perceived and the excitement and enthusiasm that surround it at present. We’re currently seeing a rapid rise in attention, innovation and expectations, but eventually the technology will stabilise into something more mature and integrated.

When I think back to other technology waves, such as the various web eras or crypto, enormous excitement surrounded them at first, particularly around how they would reshape the ways people and industries operate. Now, a number of those innovations simply feel like part of our daily lives.

One thing that feels quite different about the AI wave is that it’s not just about new entrants. In the early dot-com days, the focus was often on ‘who’ was coming online next, while in the NFT space we eagerly awaited news of the next token to drop or the next product to be tokenised. This wave is not only about new entrants and new products, but also about fundamental changes to the products and services that many of us may already be very familiar with—sometimes radical changes that appear overnight. You might have one experience with a given tool one day and a completely different one the next. As systems become increasingly adaptive, we will likely even see changes to the products and services we’re using within a single session, as they learn from our behaviour and adapt accordingly.

This rate of change can feel exciting, but I’m already seeing how it can also bring frustration for clients. When you’re looking at things like training programmes for staff, for example, the idea that a tool or its outputs are familiar one day but not the next can be really difficult to manage. Mitigating those risks and remaining compliant in that type of environment can be exciting for a lawyer, but challenging for a client.


CS: Have there been any recent developments in the world of copyright law impacting AI’s regulation that you would highlight to law students?

PS: Getty Images v Stability AI has been a headline case for AI and copyright issues in the UK, although its procedural twists and eventual outcome mean it has not resolved every question many expected it to. Getty claimed that Stability AI had trained its generative AI model, Stable Diffusion, on Getty’s copyrighted images without permission. Getty issued proceedings in 2023, alleging that Stability AI had infringed its IP rights by scraping Getty’s watermarked images from the web.

For me, the case is particularly useful because its procedural history illuminates the kinds of claims AI models can give rise to. How AI models consume and transform content, for example, can be quite a technical argument, and as a former technologist, I find such discussions particularly engaging. The case brought to life the risks of generative AI models ingesting copyrighted content without permission, and how that might lead to system developers or even end users being liable for copyright infringement in the outputs. This demonstrates why regulators and policymakers are grappling with transparency, disclosure and accountability obligations for AI developers.

At the same time, the case is somewhat incomplete when it comes to clarifying the legal position of rights holders. A number of the copyright risks that were identified at the outset of proceedings were not fully tested by the court in the end. There were issues with evidence and certain claims were narrowed or abandoned before trial. Some important questions about training and output liability were therefore not finally resolved in the way many had anticipated. That leaves a degree of uncertainty as to how those copyright questions are ultimately to be answered by the courts.

That uncertainty matters from a regulatory perspective. Where litigation does not provide clear answers quickly enough, governments often come under pressure to intervene. The UK government is obviously looking at AI and copyright issues too, potentially setting up a tension between any future proposals for legislative reform and the position that is taken by the courts.

The Getty case shows how these claims are likely to be very fact-specific and just how responsive any regulation will need to be. If a future case comes before the courts with stronger evidence of real-world copying, we could see an entirely different outcome. So this remains an area of significant legal development for those of us interested in AI and intellectual property law.

For law students, Getty is a fascinating case for explaining how the intellectual property risks associated with certain AI use arise, what the court was able to deal with on the evidence and what pressures any future regulation needs to respond to. It doesn’t just illuminate the underlying legal principles, but also shines a light on how important it is to understand the technical architecture of these systems. Technical design and implementation questions may ultimately affect the practical limits of regulation, as well as the conclusions the courts reach.


CS: A recent ICO report highlighted that ‘developers are designing modern AI agents that can create and execute context-specific plans in more variable environments, with less human direction’. How do you think AI systems functioning with ‘less human direction’ might complicate existing regulation?

PS: The key shift here is that AI is moving from giving advice and acting in an informative capacity to actually taking action. Regulation has historically focused on what systems output. Once an AI agent can plan and execute tasks in the real world, the question stops being about what an AI system outputs and becomes about what it does and on whose behalf.

“Once an agent can plan and execute tasks in the real world, the question stops being about what an AI system outputs and becomes about what it does and on whose behalf.”

That creates immediate legal complications. If we put the regulatory aspect aside for a moment, I’ve written a number of articles exploring contractual actions enabled through smart contracts or AI systems, considering questions such as whether or not a party can properly understand the contract terms, how they can express an intention to be bound by those terms and how liabilities might be apportioned if things go wrong. These are not peripheral issues—they go directly to the foundations of contract and agency—and those issues, and wider regulatory concerns, are compounded as systems increasingly operate with less human direction.

In an agentic age, cyber-security risks become more acute, because agents require broader privileges to perform certain tasks. When it comes to the use of stored data, an agent could disclose confidential information or personal data, giving rise to data issues, including under the UK GDPR and the wider data protection regime. Fraud and money laundering risks also arise where agents are involved in making or taking payments. In each of these cases, a central issue is the attribution of acts and responsibility. As systems operate with less human direction, it becomes increasingly difficult to identify whose acts those are in law. Without further authority or regulation in these areas, I expect it is going to become harder to allocate responsibility for problems stemming from such autonomous system behaviour. This is where the complexity of ‘less human direction’ that your question raises really bites.

In the short term, my view is that these particular complexities are likely to be addressed less through regulation and more through contractual risk allocation, and ultimately through the courts. This is a trend that I’m seeing already, with participants in AI markets seeking to redistribute liability through warranties, indemnities and carve-outs. Different actors will seek to shift responsibility and compliance obligations—for example, upstream from deployers to product or system designers, holding them accountable where problems arise. That may prompt further regulatory intervention, for example to ensure that consumers are adequately protected against companies relying on excessive contractual exclusions to avoid liability in the context of systemic failures.

“I see a real risk when regulations attempt to regulate specific technologies by name (…) That runs the risk of certain harms or issues falling through the gaps where the technology used doesn’t quite match the label…”

When it comes to how these systems should be regulated, a further concern I have relates to labelling. When we talk about AI—using terms such as “agentic AI”, “probabilistic AI”, “determinative AI”—these labels may give the impression of a shared understanding of the technology. However, I see a real risk when regulations attempt to regulate specific technologies by name. Technology evolves rapidly, and definitional boundaries are fluid and quickly blur. If the government were to try to regulate ‘AI’ or ‘agentic AI’, there is a risk that certain systems or uses would fall through the gaps because the technology doesn’t quite match the label given in the regulations, even where it gives rise to the same underlying harms.

In my view, regulation is more effective when it is technology-agnostic and directed at the nature of the harm, rather than the medium through which it arises.


CS: Are there any lesser-known cases that you foresee having an interesting impact on how AI is regulated?

PS: Earlier software and IT liability cases will continue to have an important influence on regulation and dispute resolution. We are not starting from a blank slate. Historic disputes covering areas such as contract formation and breach, data protection and product liability are going to be the logical starting point for many AI matters when addressed by the courts or by government. However, there will be questions about how that body of law needs adapting or evolving to fit an AI setting.

St Albans City and District Council v International Computers Ltd is a good example of this point. The case is from 1996, which may seem old in modern technological terms, but it raises issues that are directly relevant to AI systems. In the case, ICL implemented a software system for the council to administer the poll tax. The system ran as intended in a technical sense. It didn’t fall over and it didn’t crash, but it was nonetheless found by the court to be defective. The software contained an error that caused the poll tax charge to be set too low, which eventually led to significant losses for the council. It’s an important case for recognising that a computer system can “work” and yet still be legally or commercially unacceptable.

That principle potentially becomes more significant in an AI context, but also more difficult to apply. In the St Albans case, the loss arose from consistent cumulative errors. By contrast, AI systems may produce results that are neither consistent nor fully reproducible. A result may be right 99 out of 100 times, but on the 100th occasion it could be catastrophically wrong. How defects are defined, evidenced and measured may therefore need thinking about differently in an AI context, even though cases such as St Albans may still provide the conceptual starting point for questions of system defect, reliability and reasonable performance.

Like the courts, regulators will need to consider how to address systems that may generally be reliable but capable of significant error in individual instances, and who should bear responsibility in those circumstances.


CS: We saw that recently Ofcom launched an investigation into X’s Grok AI over sexual deepfakes. Over the next few years, do you think we’ll see a big difference in how UK, EU and US lawmakers move to regulate AI? If so, how do you think these regions will differ in their approaches?

PS: I think we’re already seeing that AI regulation is as much about political priorities and economic strategies as it is about legal doctrine. We can therefore see divergence globally when it comes to AI regulation, though I don’t think we’re going to see complete fragmentation. My view is that all three of the regions you mentioned are trying to balance the same issues of innovation, safety and competitiveness. What differs is how each of those factors is weighted.

“…all three regions are trying to balance the same issues of innovation, safety and competitiveness. What differs is how each of those factors is weighted.”

The EU has adopted a comprehensive framework through the EU AI Act. The Act has a strong emphasis on risk classification, conformity assessments and documentation, but it also focuses considerably on fundamental rights and transparency, as well as issues of systemic risk. In the EU, the focus seems to be on regulating upstream where possible. We discussed earlier how risk can be apportioned in different ways and the EU is looking at things like foundation models in particular, rather than just downstream use cases.

Here in the UK, we seem to favour a sector-specific, principles-based approach rather than a single AI-focused statute. Regulators such as Ofcom, the ICO and the FCA are empowered to apply their existing powers to AI-related harms. This approach preserves regulatory agility, but it can create short-term uncertainty, because obligations are dispersed rather than codified in a single statute. As a potential implementer or developer of an AI solution, you therefore have to consider what the risks are across all those different regulatory bodies and regimes. The approach is potentially attractive because it positions us as innovation-friendly, with intervention principally where harm materialises, although it does introduce an element of unpredictability.

The US, like the UK, has no single federal AI framework equivalent to the EU AI Act, though there have been some attempts to regulate AI at a state level. A little like the UK, the US also places greater reliance on existing consumer protection, securities, competition and civil rights law, rather than centralising its regulation. One big difference I think we can draw out when looking at the US though is that, due to increased political sensitivity around free speech, the US doesn’t seem to have the same focus on content regulation that we’re seeing play out here in the UK, such as through Ofcom’s investigation into X.

Even if governments in these markets and jurisdictions continue to calibrate their regulations differently, we will ultimately see the market synthesising the approaches in some way. Many AI providers will be companies with global offerings, and it can be cost-prohibitive for such companies to localise their products to each country’s specific regulatory approach.

What I tend to see happen, and what I often look at with clients, is how a solution can be designed so that it fits the highest applicable regulatory standard across all the markets the company is considering launching in. This can involve asking difficult questions, such as what markets a company might need to withdraw from because compliance just isn’t commercially viable at that time or what features in a given product might need to change across all markets.

I think this is where things can get really exciting for lawyers because it’s not just about knowing the law in your country and where you practice, but also understanding how your domestic laws fit within frameworks internationally, and therefore how a client can best manage any risks. It also requires that you understand what’s possible or realistic “at the coalface”, so to speak—getting to grips with how the technology works and what changes might be made to address any legal concerns.


CS: Is there anything else that you would like to highlight to students about AI and its regulation?

PS: I’ll just reiterate—it’s a really interesting space. Working with technology is fast-paced and the rate of change seems to accelerate every time we move into a new phase of development or a new technological idea takes hold. That creates real opportunities for those coming into the profession.

“There’s a lot on the horizon for tech lawyers. We’ve got quantum computing nipping at our heels, for example, which will bring with it a whole new set of legal issues.”

There’s a lot on the horizon for tech lawyers. We’ve got quantum computing nipping at our heels, for example, which will bring with it a whole new set of legal issues. Data centres are high up the agenda at the moment too.

In terms of AI specifically, I’m already advising clients on how to navigate uncertainty before regulation or litigation catches up. This includes questions around how risk is allocated between those developing and those using AI systems, how far AI outputs can be relied on and what safeguards need to be built around their use.

Disputes in the AI space are, for now, taking on a fairly conventional flavour, often framed in terms of misrepresentation or failures in delivery, but it is unlikely to stay that way. This is an area that’s evolving quickly and there’s still a great deal to come.


Paul Schwartfeger is a strategic legal advisor, barrister, and commercial and technology law specialist at 36 Stone.