Who Is Responsible When AI Breaks the Law?

Originally published in Yale Insights

If an AI is a black box, who is liable for its actions? The owner of the platform? The end user? Its original creator? Former Secretary of Homeland Security Michael Chertoff and Miriam Vogel, president and CEO of EqualAI, survey how AI both fits into and breaks existing legal frameworks. They argue that leaders need to be ready both for the opportunities created by the novel technology and for its potential legal pitfalls.

Q: You are co-authors of “Is Your Use of AI Violating the Law?” published in the Journal of Legislation and Public Policy. What is the aim of the paper?

Chertoff: The idea was to survey various ways in which artificial intelligence is impacting the legal landscape. What are the responsibilities of those developing AI technologies? What are the rights of those at the receiving end of decisions made or assisted by AI? What liabilities do users of AI tools have? What are the intellectual property issues raised by AI? What are the security issues?

Vogel: A big part of our intent was to establish what the risks are so that we can all proceed thoughtfully. Secretary Chertoff and I both want AI to realize its potential. We both happen to be very excited about the way AI can create more opportunity. We want AI to benefit more women, more people of color, more of society, and more of our economy, but we see that people are fearful of AI, intimidated by it, and not feeling like it’s a language or a medium in which they can operate.

Q: Where is the fear coming from?

Chertoff: Technological change is always disruptive and always unnerves people. But beyond that, it’s easy to understand an algorithm that is explicitly programmed; a machine that can learn and develop rules for itself, though, is scary to some people.

Vogel: Add questions like “Will I have to learn new skills to keep up with AI? Will AI take my job? What kind of jobs will be available to my kids?” and people are understandably worried about their own fate and their children’s livelihoods.

At the same time, TV and movies show us almost exclusively worst-case scenarios. That may be what succeeds at the box office, but for many people it is the primary or only way they have of visualizing these technological systems operating in society.

Taken together, the real concerns and the worst-case imaginings are, in some ways, creating a harmful self-fulfilling prophecy. If too few people feel comfortable engaging with AI, the AI that’s developed will not be as good, it will benefit fewer people, and our economy will not benefit as broadly. To get the best outcomes from AI, we need broad and deep engagement with AI.

Q: Why is that?

Vogel: AI is constantly learning and iterating. It gets better through exposure to people wanting different things, seeking different experiences, having different reactions. If it’s trained on a population of users that includes a full range of perspectives, backgrounds, and demographic groups, we get a more capable AI. Whereas if an AI system interacts with only a small slice of the population, it might meet the needs of that group perfectly while being problematic for everyone else.

Q: What are the uses of AI that are most promising to each of you?

Chertoff: To me, what artificial intelligence allows you to do is screen a huge volume of data and pick out what is significant. For example, in my old department, Homeland Security, you’re concerned about what people might be bringing across the border by air or by sea. Artificial intelligence can help by flagging patterns of behavior that are anomalous. Or if you’re worried about insider cybersecurity threats, AI can recognize data moving in an unusual way.

AI can analyze data in a way humans cannot, but humans need to be making decisions about what the AI finds because AIs still make fundamental errors and end up down a rabbit hole. The capacities human beings bring to analysis and decision making—skepticism, experience, emotional understanding—are not part of AI at this point.
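
To make Chertoff’s example a bit more concrete, here is a minimal, hypothetical sketch of the kind of anomaly flagging he describes, using scikit-learn’s IsolationForest to surface unusual data-transfer activity. The feature names and numbers are invented for illustration, not a description of any system DHS actually uses, and in keeping with his point, anything the model flags would go to a human analyst rather than trigger an automatic decision.

```python
# Hypothetical sketch: flag unusual data-transfer activity for human review.
# All feature names and numbers are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated transfer logs: [megabytes moved, hour of day, distinct destinations]
normal_activity = np.column_stack([
    rng.normal(50, 10, 1000),   # typical transfer sizes
    rng.normal(14, 3, 1000),    # mostly business hours
    rng.poisson(3, 1000),       # a handful of destinations
])
new_activity = np.array([[900.0, 3.0, 40.0]])  # huge 3 a.m. transfer to many hosts

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# predict() returns -1 for records the model considers anomalous
for record, label in zip(new_activity, model.predict(new_activity)):
    if label == -1:
        print(f"Flag for analyst review: {record}")
```

The model only surfaces candidates; deciding what a flagged pattern actually means stays with a person, which is the division of labor Chertoff describes.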

Vogel: Two areas where I see promise but also concerns are healthcare and education. In healthcare, there’s an AI tool called Sybil, developed by scientists at Mass General Cancer Center and MIT, which according to one study predicts with 86% to 94% accuracy whether lung cancer will develop within a year. As somebody who had two aunts die way too early from lung cancer, that takes my breath away. A year’s advance notice is just mind-blowing.

But any healthcare applications of AI need standardized metrics, documentation, and evaluation criteria. For whom is this AI tool successful? Did the tool give false negatives? Was the data the AI trained on over- or under-indexed for a given population? Do the developers know the answers to those questions? Will healthcare providers using the tool know? Will patients know? That transparency is crucial.
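
As one illustration of the standardized evaluation Vogel is calling for, here is a small, hypothetical sketch that computes a false-negative rate per patient subgroup from a model’s predictions. The group labels and numbers are invented, and a real clinical evaluation would involve far more than this, but it shows how questions like “for whom does this tool fail?” can be asked concretely.

```python
# Hypothetical sketch: how often does a screening model miss true cases
# (false negatives), broken out by patient subgroup? All data is invented.
from collections import defaultdict

# (subgroup, true_label, predicted_label); 1 = cancer develops within a year
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

true_cases = defaultdict(int)    # actual positives per subgroup
missed_cases = defaultdict(int)  # positives the model predicted as negative

for group, truth, prediction in records:
    if truth == 1:
        true_cases[group] += 1
        if prediction == 0:
            missed_cases[group] += 1

for group in sorted(true_cases):
    rate = missed_cases[group] / true_cases[group]
    print(f"{group}: false-negative rate = {rate:.0%}")
```

A fuller evaluation would also look at false positives, calibration, and whether each subgroup was adequately represented in the training data, which is the over- or under-indexing Vogel mentions.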

Likewise, in education, there are some really interesting use cases. For example, the Taiwanese government launched a generative AI chatbot to help students practice English. It’s a great way to get students comfortable with the language in their own homes. However, we need to be mindful and test for potential risks. An AI system can reflect the implicit biases in the data it’s designed or trained on; it could unexpectedly teach racist or sexist language or concepts. It can also hallucinate. It would be problematic for a student to be exposed to a biased or hallucinating AI, so we would want to ensure sufficient and thoughtful testing as well as adult supervision.

Q: Both of you offered examples of exciting, valuable uses of AI—then added significant caveats. 

Vogel: If we approach AI with our eyes open and feel empowered to ask questions, that’s how we’re all going to thrive.

Often, when I’m talking to an audience I’ll ask, “How many of you have used AI today?” Usually, half of the audience raises their hand, yet I’m quite sure almost all of them have used AI in some way by the time they’ve gotten to that auditorium. Their GPS suggested an efficient route to their destination or rerouted them to avoid traffic. That’s AI. They checked the newsfeed on their phone. That’s powered by AI. Spotify recommended music they’d enjoy based on prior selections. That’s AI.

AI abstinence is not an option. It’s our world now. It’s not our future; it’s our present. The more we understand that it’s not foreign, that it’s something we’re using and benefiting from every day, the more agency and enthusiasm we’ll feel for AI.

I hope people will try generative AI models and look at some deepfake videos. The more we engage, the more we’ll be able to think critically about what we want from AI. Where is it valuable and where does it fall short? For whom was this designed? For what use case was this designed? And so, for whom could this fail? Where are there potential liabilities and landmines?

That doesn’t mean we all need to be computer scientists. We don’t need to be mechanics to drive a car. We don’t need to be pilots to book a ticket and fly across the country safely. We do have a vested interest in making sure that more people are able to engage with, shape, and benefit from AI.

Q: How should organizations think about deploying AI?

Chertoff: I developed a simple framework—the three Ds—which can guide how we think about deploying AI. The three Ds are data, disclosure, and decision-making.

Data is the raw material that AI feeds on. You have to be careful about the quality and the ownership of that data. People developing AI must comply with rules around the use of data, particularly around privacy and permissions. That’s just as true for people deploying AI within a company. Will the AI access company data? What are the safeguards?

Another data example: there’s an ongoing debate about whether training AI on published works infringes copyright or qualifies as fair use. Human beings can read a published work and learn from it. Can AI do the same, or is it in some way a misuse of intellectual property? That’s still being argued. Developers and organizations deploying AI will want to stay aware.

With disclosure, it’s important to disclose when AI is performing some function. I wouldn’t go so far as to say you can’t create deepfakes, but I would say the fact that it’s not genuine needs to be disclosed.

And finally, decision-making. I’m of the view that when it comes to making a decision that affects the life of a human being—hiring, firing, release from prison, launching a drone strike—whatever AI may tell you, a human being has to be the final decision-maker. We don’t entrust matters of life, death, or great human consequence to a machine…

Read the full article: https://insights.som.yale.edu/insights/who-is-responsible-when-ai-breaks-the-law