Teach the machines well: Ethics in AI


“At Hyland, we believe that racism has no place in this world,” Bill Priemer, CEO and president at Hyland, recently wrote.

That applies whether it’s people or machines.

Sound weird? Let me explain.

Artificial intelligence is no longer the stuff of sci-fi movies. We’re already using it. From reading images faster in healthcare to predicting consumer behavior in financial services, AI is here.

But we have to remember the reason we call it artificial intelligence is because we create that intelligence in machines. They aren’t “born” with it. That’s why we have to be careful with what we teach these supercomputers.

And when I say teach, that’s precisely what I mean. Everything artificial intelligence “learns” is from the training data sets we expose it to. That’s why it’s so important to understand the data and content we leverage to ensure we are not perpetuating human-created historical bias.

“Public datasets are a great repository to mine information,” said Cal Al-Dhubaib, a data scientist and entrepreneur, “but they can also perpetuate historical social norms.”
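To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical data and column names) of one way a team might inspect a training set before teaching a machine from it: compare outcome rates across a demographic attribute to see whether the historical labels themselves encode bias.

```python
import pandas as pd

# Hypothetical training data: a historical decision ("approved") and a
# demographic attribute. A real audit would pull these from the actual dataset.
df = pd.DataFrame({
    "approved":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "age_group": ["18-30", "18-30", "31-50", "31-50", "51+",
                  "51+", "31-50", "18-30", "31-50", "51+"],
})

# Positive-label rate per group. Large gaps don't prove bias on their own,
# but they are a prompt to ask how the historical decisions were made.
print(df.groupby("age_group")["approved"].mean())
```

Nothing about a check like this is specific to one tool or vendor; the point is simply to look at the data before you let a system learn from it.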

Detecting bias in AI

Even though they’ve been accidental, incidents of bias in AI have happened before. From inadvertently gender-biased autocomplete solutions to racially biased healthcare algorithms, time and again, we’ve seen new AI systems learning the mistakes of our past.

“When he was U.S. Attorney General, Eric Holder asked the U.S. Sentencing Commission to study potential bias in the tests used at sentencing,” writes ProPublica in an article about risk-assessment algorithms.

“Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” said Holder in the ProPublica article. “They may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”

We’ve also seen how machines can perpetuate unfairness when leveraging online prospect behavior and engagement to produce lead scores.

“Purchase history, online viewing behavior, and geographic location can all be proxies to things like gender, race, age, and other protected classes,” said Al-Dhubaib. “If your AI system starts leveraging these factors to score leads, you may find yourself in tricky territory, especially if you operate in regulated areas like finance, healthcare, or insurance.”
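One way to spot that tricky territory early is to measure how much each candidate feature reveals about a protected attribute before a scoring model ever sees it. The sketch below is illustrative only: the column names and data are hypothetical, and normalized mutual information is just one of several measures a team might use for this kind of proxy check.

```python
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical lead data: two candidate scoring features and a protected
# attribute that the model itself would never be given directly.
leads = pd.DataFrame({
    "zip_code":        ["44113", "44120", "44113", "44122", "44120", "44113"],
    "viewed_luxury":   [0, 1, 0, 1, 0, 0],
    "protected_class": ["A", "B", "A", "B", "B", "A"],
})

# A score near 1 means the feature carries nearly all the information in the
# protected attribute -- it can act as a proxy even if the attribute itself
# is kept out of the model.
for feature in ["zip_code", "viewed_luxury"]:
    score = normalized_mutual_info_score(leads[feature], leads["protected_class"])
    print(f"{feature}: {score:.2f}")
```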

Unfortunately, we’re finding that AI can learn bias not just from humans, but also from the long-lasting effects of past technological limitations. One example is the difficulty facial recognition algorithms have with people of color. It goes back to the chemical-based photography of the past and its trouble representing darker color gradients. Now, image-based AI can inadvertently learn and replicate those issues.

We have to be careful, as law enforcement departments could adopt facial recognition systems that can create false positives.

Thinking the “right” way

That’s why we need to start with awareness. We need to get the word out to everyone designing these incredible systems that they need to be thinking about how they want them to … think.

We also need to talk about how these systems act, because algorithms aren’t always right. We need to make sure society understands the impacts of a world where algorithms influence social interaction and can create new norms that vastly increase the wealth of a few people.

“The design of different systems, whether we’re talking about legal systems or computer systems, can create and reinforce hierarchies precisely because the people who create them are not thinking about how social norms and structures shape their work,” said Ruha Benjamin, associate professor at Princeton University and author of Race After Technology. “Indifference to social reality is, perhaps, more dangerous than outright bigotry.”

I’m currently taking an online course through Coursera.org entitled “AI for Everyone,” taught by Andrew Ng, a leading researcher in the field. It’s really a great opportunity for anyone to get a better understanding of artificial intelligence and break through the Terminator-like hype some people embrace regarding the technology.

The final week of Ng’s course is all about AI and society.

“It’s important that we have a realistic view of AI and be neither too optimistic nor too pessimistic,” says Ng. “AI can’t do everything, but will transform industries.”

Ng also admits that AI does have some serious limitations.

“As a society, we do not want to discriminate against individuals, and we want people to be treated fairly,” he said. “But when AI systems are fed data that doesn’t reflect these values, then an AI can become biased or can learn to discriminate against certain people.”

Giving AI a “code to live by”


Artificial intelligence isn’t going away – it has proven to be a great way to get repetitive tasks out of the way, so people can focus on higher-value tasks that need the human touch.

So, what do we do to make sure there isn’t any bias?

“The next time you find yourself having a conversation about AI,” said Al-Dhubaib, “don’t just ask about what the model can do, ask about how and why the model was trained.”

That’s an excellent approach. Here at Hyland, we’re embracing it, because we care deeply about this issue. We’re going to continue to innovate using AI, so we’re focusing on leveraging this technology in ethical ways to fight systemic racial bias – whether that’s in people or machines.

Opening the door as wide as possible

Our R&D department is making sure we’re teaching the machines well, as we continue to develop intelligent solutions like Brainware. Fortunately, this system works in a realm with no measurable direct impact on society, which makes solving the ethical issue – the gathering of unbiased data and fair use of technology – a lot easier.

After all, in this instance, we’re talking about speeding organizational invoice processing through AP automation, not using AI to spot criminal activities.

Also, we’re drawing on decades of experience applying and developing machine-learning technology, which has produced a data-review culture that aims to spot and remove bias in data. This helps us design solutions that are robust against potentially misleading information.

Part of making sure we’re keeping bias out of artificial intelligence systems means we’re also making sure the data sets are as diverse as possible. But how do you make sure you’re doing so?

Actually, we’ve found that it doesn’t matter whether you’re talking about developing an artificial intelligence system or developing a new product.

“You need a diverse set of stakeholders who can vet your products and services, and speak to their cultural implications,” said Joe Shearer, a research analyst at Hyland. “Because, if you overlook a population that uses your products or solutions, you’re essentially denying them access. If it’s a key piece of functionality, you need to make sure you open the door as wide as possible.”

The same goes for the machines we’re teaching: They need access to the widest range of information possible. Otherwise, it’s GIGO: garbage in, garbage out.
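As a rough illustration of what “opening the door as wide as possible” can look like in practice, here is a minimal sketch (hypothetical data, and an arbitrary 10 percent threshold chosen only for the example) that checks how well each group is represented before a model is trained on the data.

```python
import pandas as pd

# Hypothetical corpus metadata: the language of each training document.
records = pd.DataFrame({
    "document_language": ["en"] * 10 + ["es", "de"],
})

shares = records["document_language"].value_counts(normalize=True)
print(shares)

# Flag any group below an illustrative 10% threshold; these are the
# populations the trained system is most likely to handle poorly.
print("Underrepresented:", list(shares[shares < 0.10].index))
```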

Broad representation is a good start, but we also need to ensure that the processes used to gather the data are not biased themselves. It’s a vicious circle: if those processes aren’t perfect, the data will contain inappropriate correlations. Both AI and the human mind are built to pick up complex, even hidden, correlations, and if those correlations appear to serve the desired goals, both will act on them.
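One common check for those hidden correlations, once a model exists, is to compare its positive-prediction rate across groups. The sketch below uses hypothetical predictions to compute the so-called disparate impact ratio; values well below the commonly cited 0.8 rule of thumb are a signal to dig into what the model has actually picked up.

```python
import pandas as pd

# Hypothetical model output: a binary prediction and the group each
# scored individual belongs to.
results = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],
    "group":      ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

rates = results.groupby("group")["prediction"].mean()
ratio = rates.min() / rates.max()  # disparate impact ratio
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
```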

And we all know someone who wears something like a lucky scarf that will help his or her team win the big game, right? People are at least as irrational as AI.

Maybe that’s why every AI expert eventually says something to the effect of, “We don’t know why it does that.”

Fixing the bias in AI

Artificial intelligence is here. And that’s a good thing. At least as far as technology-based tasks like process optimization are concerned.

We shouldn’t be worried about these systems taking our jobs. It’s more likely they’ll free people from mundane, repetitive tasks so they can focus on activities that require the human touch. Because no matter how much we teach them, machines can’t feel emotions. They don’t have empathy.

We do. And as we face the challenges 2020 has ushered in, focusing on empathy is not only the right thing to do for society, it’s also a great way to set your business up for success.

So if you’re out there designing intelligent systems, remember that you’re also a teacher. As any good parent knows, we must put as much time and consideration into teaching our machines as we do into teaching our children.

And as the song goes, we need to teach the children well.

Scott Caesar


Scott Caesar is an experienced director of Research and Development, having served Hyland for the past 12 years. His passion for innovation and his talent for enterprise software, solution architecture,...
