How much can we trust AI? How to build confidence before deployment

Companies should build trust in AI before deploying it across the organization. Here are some simple steps to make AI more reliable and ethical.

In 2019, Amazon’s facial recognition technology falsely identified Duron Harmon of the New England Patriots, Brad Marchand of the Boston Bruins, and 25 other New England professional athletes as criminals when it mistakenly matched their photographs against a database of mugshots.

See: Artificial Intelligence Ethics Policy (TechRepublic Premium).

How can artificial intelligence be improved, and when can businesses and their customers rely on it?

“The issue of distrust in AI systems was a major topic at IBM’s annual customer and developer conference this year,” said Ron Poznansky, who works in design at IBM. “Honestly, most people don’t trust AI, at least not enough to put it into production.

“A 2018 survey by The Economist found that 94% of business executives believe that AI is important to solving their organizations’ strategic challenges; however, the 2018 MIT Sloan Management Review found that only 18% of companies are true AI ‘leaders’ that have extensively embraced AI in their offerings and processes.

“That gap reflects a very real usability problem in the AI community: people want our technology, but it isn’t working for them in its current state.”

“There are good reasons why people don’t yet trust AI tools,” he said. “For starters, there is the hot-button issue of bias: racist, sexist, and otherwise biased outputs across the board.”

See also: Metaverse Cheat Sheet: Everything You Need to Know (Free PDF) (TechRepublic)

Understanding AI biases
At the same time, Poznansky and others remind businesses that AI is biased by design, and unless they understand the nature of that bias, they can easily misuse it.

For example, when a large molecular AI experiment was run in Europe to identify candidate compounds, studies that did not discuss the molecules in question were deliberately excluded in order to speed up the results, a bias built into the system by design.

That said, analytical drift can occur when your AI strays from the original business problem it was intended to solve, or when underlying technologies such as machine learning “learn” from new data patterns and begin making wrong decisions.
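
As a rough illustration of how teams catch that kind of drift, here is a minimal sketch that compares a feature’s training-time distribution against recent production values with a two-sample Kolmogorov-Smirnov test. The data, the `ALPHA` threshold, and the variable names are all illustrative assumptions, not anything the article prescribes.

```python
# Minimal drift-check sketch (illustrative data and thresholds).
# Compares the distribution a model saw at training time against what it
# sees in production; a significant shift suggests the model has strayed
# from the conditions it was built for.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical stand-ins for a real feature's values.
train_scores = rng.normal(loc=0.0, scale=1.0, size=5000)  # training data
prod_scores = rng.normal(loc=0.4, scale=1.2, size=5000)   # shifted production data

result = ks_2samp(train_scores, prod_scores)

ALPHA = 0.01  # significance level; tune to your tolerance for false alarms
if result.pvalue < ALPHA:
    print(f"Drift detected (KS={result.statistic:.3f}, p={result.pvalue:.2e}); review or retrain.")
else:
    print(f"No significant drift (KS={result.statistic:.3f}, p={result.pvalue:.2e}).")
```

In practice, a check like this would run on a schedule against real feature logs, and a drift alarm would trigger human review rather than automatic retraining.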

Finding a middle ground
To catch erroneous AI results, today’s gold-standard method is to check and recheck AI outputs to ensure they come within 95% of the accuracy of decisions made by a team of human experts. In other cases, companies may decide that 70% accuracy is an acceptable minimum for an AI model to begin making recommendations that people will consider.
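
To make the arithmetic behind those thresholds concrete, here is a small sketch that measures how often a model agrees with a panel of human experts and gates it against the 95% and 70% bars described above. The case labels and decisions are hypothetical.

```python
# Illustrative gate: compare model decisions against a human expert panel
# and apply the 95% (gold standard) and 70% (minimum) agreement bars.

def agreement_rate(model_decisions, expert_decisions):
    """Fraction of cases where the model matches the expert consensus."""
    matches = sum(m == e for m, e in zip(model_decisions, expert_decisions))
    return matches / len(expert_decisions)

# Hypothetical consensus labels from the expert team and the model's calls.
experts = ["approve", "deny", "approve", "approve", "deny",
           "approve", "deny", "deny", "approve", "approve"]
model = ["approve", "deny", "approve", "deny", "deny",
         "approve", "deny", "approve", "approve", "approve"]

rate = agreement_rate(model, experts)
if rate >= 0.95:
    print(f"{rate:.0%} agreement: meets the gold-standard bar.")
elif rate >= 0.70:
    print(f"{rate:.0%} agreement: usable for recommendations, with human review.")
else:
    print(f"{rate:.0%} agreement: below the minimum; keep it out of production.")
```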

See: We need to address AI biases before it’s too late

A reasonable compromise on the level of accuracy AI can deliver, given that both intentional bias and blind spots can occur, is the middle-ground solution organizations can apply when working with AI.

“Solving this pressing problem of mistrust in AI starts with tackling those sources of distrust,” Poznansky said. “To address the issue of bias, datasets should [be] designed to augment training data and eliminate blind spots.”
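
One common way to act on that advice, sketched below purely as an assumption since Poznansky does not prescribe a specific technique, is to oversample under-represented slices of the training data so the model sees every group at comparable rates. The `group` attribute and record layout are hypothetical.

```python
# Illustrative blind-spot fix: oversample an under-represented group so the
# training set covers every group at a comparable rate.
import random

random.seed(0)

# Hypothetical training records tagged with a demographic group attribute.
data = ([{"group": "A", "value": i} for i in range(900)]
        + [{"group": "B", "value": i} for i in range(100)])

by_group = {}
for row in data:
    by_group.setdefault(row["group"], []).append(row)

target = max(len(rows) for rows in by_group.values())

balanced = []
for group, rows in by_group.items():
    balanced.extend(rows)
    # Sample with replacement to top up smaller groups to the target size.
    balanced.extend(random.choices(rows, k=target - len(rows)))

for group, rows in by_group.items():
    count = sum(r["group"] == group for r in balanced)
    print(f"group {group}: {len(rows)} rows before, {count} after augmentation")
```

Oversampling is only one option; collecting genuinely new data for the under-represented group is usually preferable when it is feasible.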
