The Ethical Use of Artificial Intelligence

Michael Haley, President of ARMA International; Michael Quartararo, President of the Association of Certified E-Discovery Specialists (ACEDS); and George Socha, Senior Vice President of Brand Awareness at Reveal participated in a webinar earlier this year on the ethical use of artificial intelligence.

In the webinar, we started with definitions—what do we mean by ethics, AI, machine learning? From there we turned to the impact of AI on today’s world, negative as well as positive. For the bulk of the session, we focused on what could be done to better ensure the ethical use of AI. The following are excerpts from the discussion, edited for content and brevity.

QUESTION: What do we mean by “ethics” here?

George Socha: Aldo Leopold, who helped develop modern environmental ethics, observed, “Ethical behavior is doing the right thing when no one else is watching—even when doing the wrong thing is legal.” As we work with data, we should constantly ask ourselves how well we live up to that standard.

QUESTION: What is artificial intelligence and how do ethics relate to it?

Michael Haley: We are talking about that aspect of AI that focuses on decision-making, using artificial intelligence to perform analytics based on inputs and deliver decisions. We know we should do the right thing. If we turn to computers for help, are we doing that right thing?

QUESTION: How does AI impact our lives today?

Michael Quartararo: AI is one of the most underappreciated and unrecognized things in our lives. We don’t realize that it’s being used. Common ways AI impacts our lives include marketing and sales offerings, loan and credit processing, customer service programs, and stock trading. Then there’s cybersecurity, fraud detection, robotics, and automation. AI is literally driving the autonomous vehicle market. In my area of specialty, e-Discovery, we use AI to sift through massive amounts of documents, finding relevant or important information.

QUESTION: What is machine learning?

George Socha: Machine learning is a form of artificial intelligence in which data is presented to a computer, which learns from that data. One approach is supervised machine learning: I give the system a bunch of documents and tell it, here’s a document I like, here’s one I don’t like. The AI looks for commonalities and tries to figure out what other documents I might like.
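To make that concrete, here is a minimal sketch of the supervised approach in Python using the open-source scikit-learn library. The documents and the like/dislike labels are hypothetical, and a real e-discovery system would be far more sophisticated.

```python
# Minimal supervised learning sketch (hypothetical documents and labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Training examples: documents I have already judged.
documents = [
    "quarterly sales report with revenue figures",   # like
    "minutes from the budget planning meeting",      # like
    "office holiday party invitation",               # don't like
    "cafeteria lunch menu for next week",            # don't like
]
labels = [1, 1, 0, 0]  # 1 = a document I like, 0 = one I don't

# Convert text to numeric features, then fit a simple classifier.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)
model = LogisticRegression().fit(X, labels)

# The model predicts whether I am likely to like an unseen document.
new_doc = vectorizer.transform(["draft revenue forecast for next quarter"])
print(model.predict(new_doc))
```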

With unsupervised machine learning, I dump all the data on the system and ask it to find patterns. I examine the patterns, looking for something useful. I might give the system 10,000 pictures and ask it to catalog the content. It gives me labels back. I search through the labels for the content of interest.
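By contrast, a minimal sketch of the unsupervised approach, again with scikit-learn, might look like the following. No labels are supplied; the system is simply asked to group the data, and a person then inspects the groups. The documents and the choice of two clusters are illustrative assumptions.

```python
# Minimal unsupervised learning sketch: cluster unlabeled documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "invoice for software licenses",
    "invoice for consulting services",
    "memo about the parking policy",
    "memo about the remote work policy",
]

# No labels: just vectorize the text and ask for two clusters.
X = TfidfVectorizer().fit_transform(documents)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The cluster assignments are the "patterns" a person then examines.
print(kmeans.labels_)  # e.g., invoices in one cluster, memos in the other
```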

QUESTION: Where can things go wrong?

Michael Quartararo: We should start with the foundational understanding that AI is not made from thin air. Humans create the algorithms. Things can go wrong if AI systems are poorly designed, improperly tested, built on incomplete data, or constructed with built-in cultural, ethical, or business-related biases.

We need to foster better adoption, use, and, frankly, design. We need increased awareness, ease of use, and the ability to explain issues of transparency, translucency, and opacity.

QUESTION: What are your thoughts on the notion of transparency, translucency, and opacity?

George Socha: Transparency, translucency, and opacity have special importance when we are talking about the use of AI in dispute resolution, especially through legal systems.

Some want AI to be completely transparent—let the light shine all the way through. That desire for full disclosure conflicts with other principles such as confidentiality and clients’ ability to be candid with their attorneys. Sometimes we need translucency—let some of the light through, but not all of it. Sometimes opacity is needed—lower the light-proof shades.

We have similar challenges with machine learning. To create an AI model, you give the system information and your thoughts about that information. Included might be client confidential information needed to train the model properly. In that situation, you don’t want transparency so much as moderate to deep translucency.

Michael Haley: I agree. We want to be transparent, but we also must be aware of the potential for unintended consequences of that transparency. Transparency sounds wonderful, but if it exposes deleted information, spreads misguided information, or breaches confidentiality, we have to be aware of that.

QUESTION: What does the American Bar Association have to say about the use of AI in the practice of law?

Michael Quartararo: The American Bar Association has planted a flag in the ground, saying that in the practice of law we need to understand things like bias, explainability, and transparency and be aware of the ethical and beneficial use of AI. The legal profession is watching to make sure the use of these tools doesn’t go off the rails.

QUESTION: Should we be concerned about bias in AI and machine learning?

George Socha: As people, we operate subject to all sorts of biases. The AI systems we build are no different, with us sometimes building our own biases right into the systems. Historically a narrow group of people has built and trained AI systems. The bigger question is not whether there is bias, but rather what we can do to address that bias moving forward.

QUESTION: What about racial bias?

Michael Haley: Racial bias can get baked into a system, something that happened with Microsoft’s Tay chatbot. Set up to learn from itself and from its users, Tay was gamed by a group of users who fed it racist, misogynistic content. It quickly began to respond to messages with offensive remarks and had to be promptly shut down.

QUESTION: How about social bias?

Michael Haley: The State of Wisconsin built a system to reduce judicial sentencing subjectivity, creating what they thought would be a fairer, more consistent system. They built it by loading the offenses and sentences from past cases. With that data, they tried to train the system to predict recidivism, rehabilitation, and the like. The data they used was based on years of judges who had been more lenient on white defendants and harsher on African American ones. The application, of course, continued to reinforce and exacerbate those same biases.
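The mechanism is easy to demonstrate. Below is an illustrative simulation, not the Wisconsin system itself, of how a model trained on historically biased decisions reproduces that bias: two defendants with identical offense severity receive different risk scores because the training labels depended on group membership. The data and coefficients are entirely synthetic.

```python
# Illustrative simulation of bias baked into training data
# (synthetic data; not the actual Wisconsin system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)       # demographic group, 0 or 1
severity = rng.normal(0, 1, n)      # offense severity, same distribution

# Historical "harsh sentence" labels: driven by severity, but
# systematically harsher on group 1 -- the bias in the data.
harsh = (severity + 1.0 * group + rng.normal(0, 1, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([group, severity]), harsh)

# Identical severity, different group: the model scores them differently.
same_offense = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_offense)[:, 1])  # group 1 scores higher
```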

QUESTION: How do you overcome bias in AI?

Michael Haley: The goals of AI are admirable. It holds out the promise of more consistency and predictability. At the same time, it’s too simplistic to say, just don’t build the bias in. Rather, a combination of more and better supervised learning and after-the-fact analytics can help.

George Socha: If the same people continue to do the same things, we are likely to get the same results. To address the challenges Michael has been discussing, we are going to need to (1) pull a more diverse range of perspectives into the process of creating and using AI and (2) evaluate results with much greater skepticism and nuance. In the right circumstances, you can even deploy AI to search for bias.
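As one sketch of that after-the-fact evaluation, a simple audit can compare a model’s rate of positive decisions across groups. The data and the single metric here are assumptions for illustration; real audits would use multiple fairness metrics (for example, equalized odds) alongside human review.

```python
# Minimal after-the-fact bias audit: compare positive-decision rates
# across groups. Data and names are hypothetical.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive predictions within each group."""
    return {int(g): float(predictions[groups == g].mean())
            for g in np.unique(groups)}

predictions = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 0])  # model decisions
groups      = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # group membership

print(selection_rates(predictions, groups))
# {0: 0.4, 1: 0.8} -- a gap this large warrants closer scrutiny
```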

QUESTION: What about ethical dilemmas in eDiscovery in particular?

Michael Quartararo: Lawyers are bound by rules around competence, candor, confidentiality, conflicts, and supervision that form the ethical requirements they must abide by. Relying blindly on technology without even lifting the hood a little flies in the face of those requirements. You must look under the hood, including with AI, even in the face of huge time and budget pressures.

QUESTION: What ethical dilemmas might be created by using client data to, for example, build AI models?

George Socha: AI models are built on data, which sometimes can include client data. Using client data might be desirable in terms of achieving a better outcome, but there are potential dangers. First, client data might be used without the client’s permission. Second, client data used to build a model might get into the wild, perhaps because of poor design, perhaps due to something more nefarious.

At Reveal, we have published our AI Pledge which we encourage others to subscribe to as well: “Our organization pledges to promote the responsible use of data when developing AI models—employing trustworthy practices, knowledgeable practitioners, and secure methodologies.”

QUESTION: Could it be that bias is greater with people than with machines?

George Socha: In a provocative law review article, Andrew Keane Woods, a professor at the University of Arizona law school, posed the question of which are more biased, AI systems or humans. The article is provocative because he sides with the AI systems.

ARMA members and non-members can view the Ethics of Artificial Intelligence webinar (and earn 1.0 IGP Ethics credit) for free.
