AI becomes fairer

The global pandemic of the past year has shed a cold, bright light on many things: varying levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social disparities. As the world continues to navigate the covid-19 health crisis, and as some places begin a gradual return to work, school, travel, and recreation, it is crucial to balance the competing priorities of protecting public health equitably while ensuring privacy.
The protracted crisis has driven rapid change in work and social behavior, as well as an increased reliance on technology. It is now more critical than ever for companies, governments, and society to be careful in how they apply technology and handle personal information. The expanding and rapid adoption of artificial intelligence (AI) shows how adaptive technologies are prone to intersecting with humans and social institutions in potentially risky or inequitable ways.
“Our relationship with technology has changed dramatically since the pandemic,” says Yoav Schlesinger, director of the ethical AI practice at Salesforce. “There will be a process of negotiation among people, businesses, government, and technology; how data flows among all of these parties will be renegotiated in a new social data contract.”
AI in action
When the covid-19 crisis began in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential vaccine candidates or drug treatments, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care units and ventilators. Specifically, they relied on the analytical capacity of AI-augmented systems to help develop cutting-edge vaccines and treatments.
Although advanced data analytics tools can help extract insights from massive amounts of data, the results have not always been equitable. In fact, AI-driven tools, and the data sets they work with, can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered huge amounts of data, but the data doesn't necessarily accurately represent populations that have been disproportionately and negatively affected, including Black, brown, and indigenous people, nor do some of the diagnostic advances built on it, says Schlesinger.
For example, biometric wearables like Fitbit or Apple Watch have demonstrated the ability to detect potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses often rely on flawed or limited data sets and can introduce bias or unfairness that harms vulnerable individuals and communities.
“Research shows that green LED light has more difficulty reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the semiconductor light source. “So it may not do an equally good job of catching covid symptoms for those with Black and brown skin.”
AI has shown greater effectiveness in helping analyze enormous data sets. A team at the University of Southern California's Viterbi School of Engineering developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to the 11 most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants arising from more than 3,600 species.
Other Viterbi researchers are applying AI to more accurately decipher cultural codes and better understand the social norms that guide the behavior of ethnic and racial groups. That can have a significant impact on how a given population fares during a crisis like the pandemic, owing to religious ceremonies, traditions, and other social customs that can facilitate viral spread.
Lead scientists Kristina Lerman and Fred Morstatter have based their research on moral foundations theory, which describes the “intuitive ethics” that form a culture's moral constructs, such as caring, fairness, loyalty, and authority, and that help inform individual and group behavior.
“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report released by USC. “And by doing that, we anticipate gaining deeper, more culturally informed insight.”
The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level: what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”
Addressing ethical concerns
It is important to question the data that is collected and the assumptions built into AI processes, Schlesinger says. “We talk about achieving fairness through awareness. At every step of the process, you're making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of building ethical AI: looking at all the places where humans can be marginalized.”
Part of that challenge is performing a critical examination of the data sets that inform AI systems. It is essential to understand the data sources and the composition of the data, and to answer questions such as: How is the data set made up? Does it draw on a diverse array of stakeholders? What is the best way to deploy that data in a model to minimize bias and maximize accuracy?
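As one concrete illustration of what such an examination might look like, the sketch below, a minimal hypothetical example rather than anything described by the researchers above, compares each demographic subgroup's share of a data set against a reference population share and flags under-represented groups. The `group` field, reference shares, and tolerance threshold are all illustrative assumptions.

```python
# Minimal sketch of a data-set representation audit (illustrative only).
# Assumes a tabular data set where each record has a "group" field naming
# a demographic subgroup; all field names and reference shares here are
# hypothetical, not taken from the research described in the article.
from collections import Counter

def audit_representation(records, reference_shares, tolerance=0.05):
    """Compare each subgroup's share of the data against a reference
    population share, flagging groups that are under-represented."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < expected - tolerance:
            flagged[group] = {"observed": round(observed, 3),
                              "expected": expected}
    return flagged

# Hypothetical usage: census-style reference shares vs. a sample.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 15 + [{"group": "C"}] * 5
reference = {"A": 0.60, "B": 0.25, "C": 0.15}
print(audit_representation(records, reference))
# -> {'B': {'observed': 0.15, 'expected': 0.25},
#     'C': {'observed': 0.05, 'expected': 0.15}}
```

In practice, a flagged group would prompt further data collection or reweighting before a model is trained on the data.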
As people return to work, employers may now be using AI-integrated sensing technologies, including thermal cameras to detect elevated temperatures; audio sensors to detect coughs or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor hand-washing procedures, physical-distancing regulations, and mask requirements.
Such monitoring and analysis systems not only pose challenges of technical accuracy but also fundamental risks to human rights, privacy, security, and trust. The push for increased surveillance has been a worrying side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas like airports to help trace the movements of people who may be infected with or exposed to covid-19, and to establish chains of virus transmission.
“The first question that needs to be answered is not just can we do this, but should we?” says Schlesinger. “Scanning people for their biometric data without their consent raises ethical concerns, even if it's positioned as being for the greater good. We should have a robust conversation as a society about whether there is good reason to implement these technologies in the first place.”
What the future holds
As society returns to something approaching normal, it is essential to re-evaluate our relationship with data and to establish new norms for collecting data, along with its appropriate use, and its potential misuse. When building and deploying AI, technologists will continue to make necessary assumptions about data and the processes that use it, but the underpinnings of that data should be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately represented? How can the privacy of citizens and consumers be preserved?
As AI is more widely deployed, it is also essential to consider how to build trust. One approach is to use AI to augment human decision-making rather than to replace human input.
“There will be more questions about the role AI should play in society, its relationship with human beings, and what tasks are appropriate for humans and what tasks are appropriate for an AI,” says Schlesinger. “There are certain areas where AI's capabilities and humans' capabilities, augmenting each other, will accelerate our trust and confidence. Where AI doesn't replace humans but augments their efforts, that is the next horizon.”
There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, such as health care, banking, and finance, there needs to be a human in the loop,” Schlesinger says. “You can't just deploy AI to make care decisions without a clinician's involvement. As much as we'd like to believe AI is capable of doing that, AI doesn't have empathy yet, and probably never will.”
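To make the human-in-the-loop pattern concrete, here is a minimal sketch, a hypothetical illustration rather than anything Schlesinger describes: the model's output is acted on automatically only when its confidence is high, and everything else is routed to a human reviewer for the final call. The names and the confidence threshold are assumptions for illustration.

```python
# Minimal human-in-the-loop sketch (illustrative; names and the
# confidence threshold are hypothetical, not from the article).
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # the model's suggested decision
    confidence: float  # model confidence in [0, 1]

def decide(prediction: Prediction, human_review, threshold: float = 0.95):
    """Act on the model's output only when confidence is high;
    otherwise defer the final decision to a human reviewer."""
    if prediction.confidence >= threshold:
        return prediction.label, "auto"
    # Low confidence: a person stays in the loop for the final call.
    return human_review(prediction), "human"

# Hypothetical usage: a reviewer callback that overrides the model.
reviewer = lambda p: "needs clinician review"
print(decide(Prediction("approve", 0.72), reviewer))
# -> ('needs clinician review', 'human')
print(decide(Prediction("approve", 0.99), reviewer))
# -> ('approve', 'auto')
```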
It is critical that the data collected, and the AI built from it, not exacerbate disparities but reduce them. There must be a balance between looking to AI for ways to accelerate human and societal progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.