The Law Isn’t Ready for Open and Generative AI

Image of a lawyer, a computer and a wooden legal gavel. Source: Tech.Co

On April 16, UNC hosted the annual Wade H. Hargrove Colloquium. This year’s seminar focused primarily on media law in the age of artificial intelligence, including generative AI’s effect on copyright law, tort law, defamation law, political communication and journalism.

While the event brought some of these ideas to light, it also raised many questions about how AI could affect the future of media law — and, more specifically, about what role the law should play in regulating generative artificial intelligence.

AI technologies are advancing at a rapid rate, often in ways that put the interests of corporations above those of individual consumers. For example, individual authors and news organizations have sued AI corporations for producing defamatory or copyright-infringing works, but these lawsuits have not yet produced clear answers about where the law stands when balancing individual freedoms against corporate rights.

Kenton Spencer, a professor at the UNC Hussman School of Journalism, says “it’s the balance of protecting individual rights and ensuring that… those that are harmed have proper news of redress but at the same time, it doesn’t stifle creativity.” The goal, Spencer adds, is to “develop a flexible framework that allows for innovation but at the same time, holds these companies accountable for harm.”

When similar discussions arose during the rise of the digital age in the 21st century, the law developed balancing tests that could also apply to current AI-related issues. Yet as the law confronts this fast-moving technology, few regulations have attempted to slow AI’s spread — an important step toward protecting individuals’ privacy rights and addressing their copyright concerns.

AI has already become part of legal and political discussions around the world. In August 2023, the EU passed its first regulation on AI, and in October 2023, President Joe Biden signed Executive Order 14110, which addressed both the promise and the risks of this new technology.

Biden’s executive order included a broad set of criteria around First Amendment rights for companies, Fourth Amendment privacy rights for individuals, and the importance of maintaining economic growth. However, the criteria are vague and do not actually hold companies accountable for AI’s impact, which could make it difficult to protect individuals over corporations.

The law, as it stands, has not done enough to keep up with today’s use of AI and to protect individuals. The question of whether individual or corporate rights matter more is still being decided, and the law has the power to determine which direction it takes.

An additional problem arises when training data comes from uncertain sources. The massive amount of input required to train AI already calls its legality into question, especially where models were partially trained on pirated or otherwise illegally acquired data.

Ruth Okediji, a law professor at Harvard, said: “There is a huge transparency problem that has never been an issue with copyright law, but is certainly an issue today with misinformation, disinformation, and the capacity to generate licensing fees because your work was part of the data that was used.”

Which datasets are used, and where liability falls when a copyright violation occurs, are pressing ethical questions for AI and media law. The law can’t sue a machine, but it can assign blame to other involved parties. Only once such liability is built into the United States legal system can individual rights move to the forefront of efforts to address AI technologies.

Others believe that the responsibility for restricting AI falls to the federal government. Professor Stephen J. McConnell said, “So, we are going to need some guidance from Congress, lawmakers, courts, to kind of tell us who should be responsible for that. And that's where I think we do need some guardrails with these technologies and right now, we are in a place of lack of definition. And that is a problem.”

Until the law catches up with the pace at which AI has spread, individuals’ rights won’t be protected and personal liberties could be sacrificed. AI also has the potential to replace human jobs, which may make the economy more efficient but could lead to job insecurity for professions across the world.

As AI becomes more prominent in our own communities, it becomes increasingly important to understand how individual communities and corporations choose to handle the AI boom — and what role the law will play in that response.

Amanda Reid, a professor at the UNC Hussman School of Journalism and faculty co-director of the UNC Center for Media Law and Policy, says, “We can't put toothpaste back in the tube, I don't think we can ignore this technology… And I think there's a learning curve, and leaning in and trying to embrace that, it's probably not a bad idea.”

The rapid development and popularization of AI has brought it to the forefront of many people’s minds, but no one has all the answers yet. We have tools that are smarter and faster than we are; we just need to learn how to use them — and how to regulate them — before implementing AI on a large scale.

If the law remains secondary to large AI corporations and no regulation is put in place, AI will continue to operate as it does now, spreading misinformation and potentially damaging individuals’ reputations. Without protections for local news organizations, large news corporations, individual journalists, content creators and writers, their works risk continuing to be taken without compensation or credit.

The future of AI and media law is uncertain, with techno-pessimists arguing for more restrictions and techno-optimists for more leniency around artificial intelligence and the law. For now, AI has the potential to impact everyone — but the law just might not be ready to handle it quite yet.