How executives can prioritize ethical innovation and data dignity in AI

More and more, companies are relying on artificial intelligence to carry out various functions of their business: some that only computers can do and others that are still best handled by humans. And while it might seem that a computer can carry out these jobs without any sort of bias or agenda, leaders in the AI space are increasingly warning that the opposite is true.

The concern is so prevalent that new responsible AI measures have been floated by the federal government that would require companies to vet for these biases and to run systems past humans to avoid them.

The four pillars of responsible AI

Ray Eitel-Porter, managing director and global lead for responsible AI at Accenture, outlined during a virtual event hosted by Fortune on Thursday that the tech consulting firm operates around four “pillars” for implementing AI: principles and governance, policies and controls, technology and platforms, and culture and training.

“The four pillars basically came from our engagement with a number of clients in this area and really recognizing where people are in their journey,” he said. “Most of the time now, that’s really about how you take your principles and put them into practice.”

Many companies these days have an AI framework of principles. Policies and controls are the next layer, which is about how you put those principles in place. Technology and platforms are the tools with which you implement them, and the culture and training pillar ensures that everyone at every level of the company understands their role, can execute it, and buys in.

“It’s definitely not just something for a data science team or a technology team,” said Eitel-Porter. “It’s very much something that is relevant to everybody across the business, so the culture and training piece is really important.”

Naba Banerjee, head of product at Airbnb, suggested that a fifth pillar be included: the financial investment required to make these things happen.

Interestingly, Eitel-Porter said the interest and intent are there, citing a recent Accenture survey of 850 senior executives globally, which found that just 6% had managed to incorporate responsible AI operationally, while 77% said doing so was a top priority going forward.

And to Banerjee’s point about investment, the same survey showed 80% of respondents said they were allocating 10% of their AI and analytics budgets to responsible AI over the next few years, while 45% said they were going to allocate 20% of their budget to the endeavor.

“That’s really encouraging because, frankly, without the money, it’s very hard to do these things, and it shows there’s a very strong commitment on the part of organizations to move to that next step…to operationalize the principles through the governance mechanism,” he said.

How companies are trying to be responsible

Airbnb is using AI to prevent house parties at hosts’ homes, a problem that grew during the pandemic. One of the ways the company tries to detect this risk is by flagging renters under 25 booking mansions for one, on the assumption that some of these customers are scouting locations for parties.

“That seems pretty common sense, so why use AI?” Banerjee asked. “But when you have a platform with more than 100 million guests, more than 4 million hosts, and more than 6 million listings, and the scale continues to grow, you cannot do this with a set of rules. And as soon as you build a set of rules, someone finds a way to bypass the rules.”
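
Airbnb hasn’t described the internals of this system, but the shift Banerjee describes, from hand-written rules to a learned risk score, can be sketched in a few lines. The feature names, synthetic data, and model choice below are illustrative assumptions only, not Airbnb’s actual system.

```python
# Minimal, hypothetical sketch: a learned "party risk" score instead of hand-written rules.
# None of these feature names or thresholds come from Airbnb; they are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic booking features: guest age, number of guests, listing capacity,
# trip length in nights, and whether the booking is local and last-minute.
n = 5_000
X = np.column_stack([
    rng.integers(18, 70, n),   # guest_age
    rng.integers(1, 10, n),    # num_guests
    rng.integers(1, 16, n),    # listing_capacity
    rng.integers(1, 14, n),    # trip_nights
    rng.integers(0, 2, n),     # is_local_last_minute (0/1)
])

# Synthetic labels standing in for historical "confirmed party" outcomes.
risk = (X[:, 0] < 25) & (X[:, 2] >= 8) & (X[:, 1] <= 2) & (X[:, 4] == 1)
y = (risk | (rng.random(n) < 0.02)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Score a new booking: a 22-year-old booking a 12-person home alone, locally, last minute.
booking = np.array([[22, 1, 12, 1, 1]])
print(f"party risk: {model.predict_proba(booking)[0, 1]:.2f}")
```

The point of a model like this, as opposed to a fixed rule, is that it can weigh many weak signals together and be retrained as bad actors change their behavior.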

Banerjee said employees were constantly working on training the model to enforce these rules, but it wasn’t perfect.

“When you try to stop bad actors, you unfortunately catch some dolphins in the net, too,” she said.

That’s when humans in customer service step in to troubleshoot with individual users who had no intention of throwing a rager but were prevented from booking anyway. Those instances are then used to improve the models as well.

But robots can’t do everything. One way the online homestay marketplace is keeping humans in the loop is with Airbnb’s Project Lighthouse, which focuses on preventing discrimination by partnering with civil rights organizations. Banerjee said the company’s mission is to create a world where anyone can belong anywhere, and to that end, the platform has removed 2.5 million users since 2016 who did not follow community standards.

“Unless you can measure and understand the impact of any kind of system you’re building to keep the community safe…you can’t really do anything about it,” she said.

Project Lighthouse aims to measure and root out that discrimination, but it does so without facial recognition or algorithms. Instead, it uses humans to help determine a person’s perceived race while keeping that person’s identity anonymous.

“Where we see a gap between white guests, Black guests, white hosts, Black hosts, we take action,” she said.

At Mastercard, artificial intelligence has long been used to prevent fraud across the millions of daily transactions occurring all over the country.

“It’s interesting because at Mastercard, we’re in the data and technology business. This is the space we’ve been in for many, many years,” said Raj Seshadri, president of data and services at Mastercard.

And inherent in this work is the concept of trust, she added: “What is the intent of what you’re doing? What did you hope to achieve and what are the unintended consequences?”

But having more data can help avoid discrimination when using AI, Seshadri said. As an example, small businesses run by women don’t usually get approved for as much credit, but with more data points, it could be possible to reduce that gender-based discrimination.

“It levels the playing field,” Seshadri said.

Biased robots are human creations

Krishna Gade, founder and CEO of Fiddler AI, said that biased robots are not sentient creatures with an agenda, but rather the result of flawed human data informing what we hope can be an improved version of the process.

One difficulty is that machine-learning-based software evolves inside a sort of black box, Gade said. It doesn’t work like traditional software, where you can view the code line by line and make fixes, which makes it difficult to explain how the AI is working.

“They’re essentially trying to infer what is going on in the model,” Gade said. Data that AI uses to calculate a Mastercard customer’s loan approval, for example, might look causal to the model but not be causal in the real world. “There are so many other factors that might be driving the current rate.”

At Fiddler AI, users can “fiddle with” the inputs to a model to figure out why it’s behaving the way it is. You could adjust someone’s previous debts, for example, to see how their credit score would change.
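
The code below does not use Fiddler’s actual API; it is a minimal sketch of that “what-if” idea under assumed, hypothetical features, where one input is varied on a generic model while the others are held fixed.

```python
# Illustrative "what-if" probe (not Fiddler's API): train a simple credit model on
# synthetic data, then vary one input (outstanding debt) and watch the score move.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical applicant features: income (k$), outstanding debt (k$), years of history.
n = 2_000
X = np.column_stack([
    rng.uniform(20, 200, n),   # income
    rng.uniform(0, 80, n),     # debt
    rng.uniform(0, 30, n),     # credit_history_years
])
# Synthetic approval labels: higher income and history, lower debt -> more approvals.
logits = 0.03 * X[:, 0] - 0.06 * X[:, 1] + 0.08 * X[:, 2] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Counterfactual probe: hold income and history fixed, sweep debt, and record
# how the approval probability responds for one applicant.
applicant = np.array([80.0, 40.0, 5.0])
for debt in (40.0, 20.0, 0.0):
    probe = applicant.copy()
    probe[1] = debt
    p = model.predict_proba(probe.reshape(1, -1))[0, 1]
    print(f"debt={debt:>4.0f}k -> approval probability {p:.2f}")
```

Probes like this are what Gade means by inferring what is going on in the model: the sensitivity of the output to each input is observed from the outside rather than read off the code.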

“Those types of interactions can build trust with a model,” he said, noting that many industries, such as banking, are having risk management teams review their AI processes, but not all industries are implementing these checks.

New government regulations will likely change this, as many in the industry have called for an AI Bill of Rights.

“Many of these conversations are going on, and I think it’s a good thing,” Seshadri said.
