
Will a robot get my boss?


A request for help:

I am concerned that law enforcement agencies are increasingly using robots to neutralize threats, conduct surveillance, and handle hostage situations. Maybe I have just watched RoboCop too many times, but I am wary of handing decision-making power to machines, especially given how often human officers abuse their own authority. Do I have a moral obligation to submit to a police robot?

“SUSPICIOUS”

Dear Suspicious,

Hollywood has not been especially optimistic about robots in positions of authority. RoboCop is just one example of a broader science-fiction canon about the dire consequences of entrusting critical tasks to inflexible machines: literal-minded automatons whose prime directives can turn deadly, contraptions capable of lethal force yet confounded by a flight of stairs. The message of these films is clear: rigid automatons are incapable of the improvised solutions and moral nuance that moments of crisis so often demand.

That stereotype may help explain why Boston Dynamics, which has supplied some of its robots to police departments, released a video last December of its models dancing to “Do You Love Me.” Maybe you saw it? Among the robots were Atlas, a humanoid resembling a stormtrooper, and Spot, the machine that inspired the killer robot dogs in the “Metalhead” episode of Black Mirror. Neither machine, it seems, was designed to dispel fears about robots, so what better way to make people fall in love with them than to show off their agility? And what better way to show off that agility than dancing, a skill considered so essentially human that we invented a move to mock an automaton’s inability to pull it off (the Robot)? Watching the machines shuffle, shimmy, and spin, it is hard not to see them as vital, embodied creatures with something like our own flexibility and sensitivity.

Never mind that Spot’s joints can crush a finger, or that police robots have already been used to deliver lethal force. One way to answer your question, Suspicious, without any recourse to moral philosophy, is on purely pragmatic grounds: like most of us, if you have plans to go on living, then yes, you should absolutely obey a police robot.

But I take it that yours is not merely a practical question, and I agree that it is important to weigh the trade-offs that come with handing police duties over to machines. The Boston Dynamics video, incidentally, was released in late 2020 to “celebrate the beginning of what we hope will be a happier year.” Within a week, insurrectionists had stormed the Capitol, and images proliferated of police officers putting up little resistance to the mob, photos that circulated on social media in stark contrast to the far harsher responses to the Black Lives Matter protests of the previous summer.

At a time when many police departments are facing a crisis of legitimacy over racial violence, the most compelling argument for robotic policing is that machines have no inherent capacity for prejudice. To a robot, a person is a person, regardless of skin color or gender. As the White House noted in a 2016 report on algorithms and civil rights, new technologies can help law enforcement “make decisions based on factors and variables that empirically correlate with risk, rather than on flawed human instincts and prejudices.”

Of course, if existing police technology is any indication, things are not so simple. The predictive policing algorithms used to identify high-risk people and neighborhoods are notoriously biased, and the roboticist Ayanna Howard has called them “the original sin of AI.” Because these systems are trained on historical data (past court cases, previous arrests), they end up singling out the same communities that have been unfairly targeted all along, reinforcing structural racism. The automated predictions become self-fulfilling, locking certain neighborhoods into a pattern of over-policing. (Officers who arrive at a scene expecting to find crime are primed to discover it.) These tools, in other words, do not so much neutralize prejudice as formalize it, converting existing social inequities into systems that perpetuate them mechanically and unreflectively. As the digital-ethics scholar Kevin Macnish has warned, the values of an algorithm’s authors are “frozen in code, effectively institutionalizing those values.”
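To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is built on invented numbers and is not a model of any real predictive-policing product: two neighborhoods, “A” and “B,” have the same underlying crime rate, but “A” starts with more recorded arrests because it was patrolled more heavily in the past. An allocation rule that assigns patrols in proportion to past arrests keeps generating more arrest records in “A,” which then appear to justify still more patrols there.

```python
import random

random.seed(0)

# Illustrative assumption: both neighborhoods have the same true crime rate.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}

# Historical bias: neighborhood A starts with more recorded arrests
# only because it was patrolled more heavily in the past.
arrests = {"A": 60, "B": 20}

PATROLS_PER_DAY = 10

for day in range(365):
    total = sum(arrests.values())
    for hood in arrests:
        # "Predictive" allocation: patrols proportional to past arrests.
        patrols = round(PATROLS_PER_DAY * arrests[hood] / total)
        # Crime is only recorded where an officer is present, so more
        # patrols mean more recorded arrests at the same true rate.
        for _ in range(patrols):
            if random.random() < TRUE_CRIME_RATE[hood]:
                arrests[hood] += 1

print(arrests)  # Arrest counts diverge even though the true rates are identical.
```

Even in this toy setup the recorded arrest counts drift further apart over time, so the system’s own output looks like evidence for its initial assumption, which is exactly the self-fulfilling pattern described above.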
