Navigating a surprising pandemic side effect: AI whiplash


Among the many business disruptions caused by Covid-19, here is one that has been largely overlooked: the jolt it delivered to artificial intelligence (AI).

When the pandemic upended the world last year, businesses reached for every tool available, including AI, to solve challenges and serve customers safely and efficiently. In a 2021 KPMG survey of U.S. business executives, conducted January 3 to 16, half of those surveyed said their organization had accelerated its use of AI in response to Covid-19, including 72% of industrial manufacturers, 57% of technology companies, and 53% of retailers.

Most are pleased with the results: 82% of respondents agreed that AI was helpful to their organization during the pandemic, and a majority said it delivered more value than expected. Nearly all said that broader use of AI would make their organization run more efficiently, and 85% want their organization to accelerate its adoption of AI.

The sentiment is not entirely positive, however. Even as they step on the gas, 44% of executives believe their industry is moving faster on AI than it should. More strikingly, 74% say AI is still more hype than business-supporting reality, a significant increase in key industries from a September 2019 AI survey. In financial services and retail, for example, 75% of executives now feel AI is overhyped, up from 42% and 64%, respectively.

How do these seemingly contradictory views square? Call it what KPMG terms AI whiplash. Based on our work helping organizations apply AI, we see several explanations for the hype fatigue. One is the sheer novelty of the technology, which has bred misperceptions about what AI can and cannot do, how long it takes to achieve business-scale results, and what missteps are possible when organizations experiment with AI without the proper foundations.

Although 79% of respondents said AI is at least somewhat functional in their organization, only 43% say it is fully functional at scale. It is still common for people to think of AI as something to buy, like a new machine, that delivers instant results. And while organizations can score early wins with AI (often with a small proof of concept), many have learned that scaling to the enterprise level is a far bigger challenge. It requires clean, well-organized data; robust data-storage infrastructure; subject-matter experts to help generate labeled training data; sophisticated computing capability; and business buy-in.

It is also fair to acknowledge that AI advocates have occasionally overstated its potential or understated the effort required to realize its full value.

As for executives' conflicted feelings about the speed of AI adoption, we see basic human nature at work. For starters, it is always easy to believe the grass is greener on the other side. We also suspect that many worry their industry is moving too fast mainly because their own organization is not keeping pace. If they had early-stage anxieties about AI, those fears could easily have been amplified last year, when the world saw AI-enabled achievements such as the rapid development of Covid-19 vaccines.

We see another factor feeding mixed feelings about AI's potential: the lack of an established legal and regulatory framework governing its use. Many business leaders lack a clear view of how their organization governs AI or of what new government regulations may emerge. Understandably, they are concerned about the associated risks, and about developing use cases today that regulators may later frustrate.

This uncertainty helps explain another seemingly contradictory finding in our survey: while business executives typically take a skeptical view of government regulation, 87% say the government should play a role in regulating AI technology.

Recovering from AI whiplash

Each organization will need its own playbook to recover from AI whiplash and optimize its technology investments, but a comprehensive plan should have five components:

  • Strategic investment in data. Data is the raw material of AI and the connective tissue of a digital organization. Organizations need clean, machine-digestible data, labeled with the help of subject-matter experts, to train AI models. They need a data-storage infrastructure that transcends functional silos within the company and can deliver data quickly and reliably. And once models are deployed, they need a strategy and approach for harvesting data for continuous tuning and training.
  • Adequate talent. Data scientists with AI expertise are in high demand and hard to find, but they are essential for understanding the AI landscape and guiding strategy. Organizations that cannot build a full internal team will need external partners to fill the gaps and help them sort through an ever-expanding set of AI vendors and offerings.
  • A long-term, business-driven AI strategy. Organizations get the most from AI by starting with problems that need solving, not by buying technology and then hunting for ways to use it. The business, not the IT department, should set the agenda. When AI investments tied to a business-driven strategy go wrong, they quickly become opportunities to fail fast and learn rather than cash burns. But even as companies iterate quickly, they need to stick with the long-term AI strategy, because the biggest benefits accrue over the long run.
  • Culture and staff capacity. No AI agenda will gain traction without buy-in from the workforce and a culture invested in AI's success. Earning employee engagement requires, at minimum, a meaningful understanding of the technology and the data, and a deeper appreciation of the benefits AI will bring to employees and the company. It is also important to retrain staff, especially where AI will take over or complement existing responsibilities. Adopting a data-driven mindset and building deeper AI literacy into the organization's DNA will help it scale and succeed.
  • A commitment to the ethical and unbiased use of AI. AI holds great promise, but also the potential for harm if organizations use it in ways customers dislike or that discriminate against certain segments of the population. Every organization should develop an ethical AI policy with clear guidelines on how the technology will be deployed. The policy should prescribe measures, built into the DevOps process, to check for data problems and imbalances, measure and quantify unwanted bias in machine-learning algorithms, track data provenance, and identify who trains the algorithms. Organizations should continuously monitor models for bias and drift, and ensure that model decisions can be explained.
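To make the bias-measurement idea in the last point concrete, here is a minimal sketch, with hypothetical group labels and model decisions (not KPMG methodology), of one simple fairness check: the demographic-parity gap, the largest difference in approval rates across groups.

```python
# Minimal illustration: quantify one simple fairness metric
# (demographic parity) for a binary classifier's decisions.
# The group names and decision data below are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (approved) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rates across groups.
    A gap near 0 suggests similar treatment; larger gaps merit review."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical decisions (1 = approved, 0 = denied) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved -> 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved -> 0.375
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

In practice, a check like this would run inside the DevOps pipeline on every retrained model, alongside drift monitoring, rather than as a one-off script.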

What’s next

Over the next two years, AI investment priorities will vary by industry. Healthcare executives point to telemedicine, robotic tasks, and patient care. Life-sciences executives say they want to expand AI to identify new revenue opportunities, reduce administrative costs, and analyze patient data. And government executives say the focus will be on automating and analyzing work processes and on managing procurement and other duties.

Expected results also vary by industry. Retail executives anticipate the greatest impact in customer intelligence, inventory management, and customer-service chatbots. Industrial manufacturers see it in product design, development, and engineering; maintenance operations; and production activities. Financial-services companies expect improvements in fraud detection and prevention, risk management, and process automation.

In the long run, KPMG sees a key role for AI in reducing fraud, waste, and abuse, and in helping companies improve sales, marketing, and customer service. Ultimately, we believe AI will help solve fundamental human challenges in areas such as disease identification and treatment, agriculture and global hunger, and climate change.

That is a future worth working toward. We believe government and industry both have roles to play in getting there, by collaborating on rules that guide the ethical evolution of AI without stifling the innovation and momentum already underway.

Read more in KPMG's "Thriving in an AI World" report.

This content was produced by KPMG. It was not written by MIT Technology Review's editorial staff.
