Apple v. Microsoft. A&M Records v. Napster. United States v. Google.

These are landmark legal battles that have come to define the relationship between technology and law. 

Recently, though, some new players have approached the bench. 

AI chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude, along with text-to-image generators like Midjourney and Stable Diffusion, made explosive debuts last year under the umbrella term of Generative AI (GenAI).

Of the many discussion points around this technology, one of the most prominent threads has been the legal questions these tools bring to the surface, especially around copyright, data privacy, and bias and discrimination.

To address this emerging paradigm, Columbia Law School’s Science and Technology Law Review journal, in partnership with the newly established Program on Science, Technology & Intellectual Property Law, hosted an inaugural symposium, “Accountability and Liability in Generative AI: Challenges and Perspectives.”

The in-person event, which took place on November 17, 2023, featured a panel of nationally recognized experts with deep knowledge of law and AI technologies, representing prominent academic institutions and global companies. Speakers presented their perspectives on emerging legal frameworks, technological best practices, and regulatory considerations for GenAI tools.

“One of the biggest challenges right now in data science and AI is the lack of domain specialists, whether it’s in finance, art, engineering, or law,” said Eric Talley, a DSI Affiliate and Isidor and Seville Sulzbacher Professor of Law. “We have convenings like this event to bring in folks who have those specialties, so that the tools we have out there are better tuned to do the job,” explained Talley, whose legal expertise spans corporate governance, cybersecurity, and tech regulation.

Talia Gillis, also a DSI Affiliate and Associate Professor of Law, has focused her work mainly on fairness and discrimination issues in algorithm-based consumer credit assessments. “In my work, the problems I deal with are those of prediction and classification,” said Gillis. “With its built-in randomness and opacity, Generative AI does something quite different.”

Both Talley and Gillis served as commentators during the event, sharing their views on the current state of AI and law. Together, their insights shine a light on the “black box” problem of GenAI: users can control what they put into an application and see clearly what comes out, but the process by which the former produces the latter remains hidden. This lack of insight into the system’s inner workings and decision-making mechanisms not only creates ambiguities for users but can also leave lawmakers without the information they need to develop legal procedures.

With this issue in mind, DSI rounded up some key takeaways from the event: 

Tons of ideas, but no silver bullet.

For legal deliberation around GenAI, there is no one-size-fits-all approach. In fact, the ideas presented by the symposium’s participants were varied, and sometimes even at odds with one another.

For example, some speakers suggested putting as much effort as possible up front, using regulatory processes and development standards to mitigate harms that may crop up in the deployment of AI tools. Others favored establishing liability frameworks and litigation strategies to hold tech companies accountable only if and when such harms arise.

One thing many of the participants agreed on was the importance of ethics in the development of AI tools. However, the lack of concrete incentives, as well as the complexity of quantifying ethical considerations, were both highlighted as challenges. 

More collaboration is needed.

The fact that law is still playing catch-up with the tech industry was echoed throughout the talks. Many speakers stressed the need for more collaboration, acknowledging the longstanding gap between legal principles and technical implementation.

Different forms of collaboration were highlighted throughout the symposium. University of Pennsylvania Professor Christopher Yoo advocated for cross-disciplinary frameworks and metrics during testing and scaling phases. In his talk “Focusing on Fine-Tuning: New Pathways for Fixing What is Wrong with Generative AI,” Georgetown Law Professor Paul Ohm proposed that judges be directly involved in the reinforcement learning process of AI models.

Without direct collaboration between legal and business/technology stakeholders, ideas from the legal community may be confined to the walled garden of academia. 

A storm is coming. 

Over the last year, GenAI has gone through many cycles of hype and disillusionment, from doomsday scenarios to sci-fi fantasies. Media commentary can sometimes get more attention than the technology itself. Despite these many distractions, though, panelists agreed that a storm is coming, at least where the law is concerned.

During the presentation of her paper “A Products Liability Framework for AI,” NYU School of Law Professor Catherine Sharkey questioned the relevance of existing common law, specifically the body of historical landmark decisions that have shaped the US legal system as a whole.

Historical cases such as Brown v. Board of Education (racial segregation), Roe v. Wade (reproductive freedom), and Obergefell v. Hodges (marriage equality) were all decided by judges rather than legislators, yet the precedents they established stand as safeguards (or, in some cases, battlegrounds) for the rights and freedoms we have today.

Many of the speakers maintained that we should expect a similar watershed moment when it comes to AI, one that will change how we think about the application of law at a fundamental level. 

We just don’t yet know how, and we don’t know when.