We need to design distrust into AI systems to make them safer
It’s interesting that in these kinds of scenarios you have to actively design distrust into the system in order to make it safer.
Yes, that’s exactly what you have to do. We are actually running an experiment right now around the idea of denial of service. We have no results yet, and we are wrestling with some ethical concerns. Because once we talk about it and publish the results, we’ll have to explain why you might not want to give AI the ability to deny you a service. How do you take a service away from someone who really needs it?
But here’s an example with the Tesla distrust scenario. Denial of service would mean that I create a profile of your trust based on how many times you have deactivated or disengaged from holding the steering wheel. Given those disengagement profiles, I can then model the point at which you are fully in that trusting state. We have done this, not with Tesla data, but with our own data. And at a certain point, the next time you get in the car, you would get a denial of service: you would not have access to the system for time period X.
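A minimal sketch of how such a disengagement-based trust profile and denial-of-service rule might work, assuming a simple per-drive hands-off count. The TrustProfile class, thresholds, and scoring heuristic below are illustrative assumptions, not the researchers’ actual model:

```python
from dataclasses import dataclass

# Hypothetical sketch: track how often a driver goes hands-off while an
# autonomy feature is active, and deny access once the pattern suggests
# overtrust. All names and thresholds are illustrative assumptions.

@dataclass
class TrustProfile:
    sessions: int = 0
    hands_off_events: int = 0

    def record_session(self, hands_off: int) -> None:
        """Log one drive: how many times the driver let go of the wheel."""
        self.sessions += 1
        self.hands_off_events += hands_off

    @property
    def overtrust_score(self) -> float:
        """Average hands-off events per drive; higher suggests overtrust."""
        return self.hands_off_events / self.sessions if self.sessions else 0.0


def check_access(profile: TrustProfile,
                 overtrust_threshold: float = 3.0,
                 lockout_hours: int = 24) -> str:
    """Deny the autonomy feature for a period if overtrust is detected."""
    if profile.overtrust_score >= overtrust_threshold:
        return f"Denied: no autonomy access for the next {lockout_hours} hours."
    return "Granted: autonomy feature available."


profile = TrustProfile()
for hands_off in (1, 4, 5):   # three drives with rising hands-off counts
    profile.record_session(hands_off)
print(check_access(profile))   # -> Denied: no autonomy access for the next 24 hours.
```

In practice the lockout period would presumably have to be weighed against the ethical concern raised above: a hard denial could strand someone who genuinely needs the feature.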
It’s almost like punishing a teenager by taking away their phone. You know teenagers will avoid doing whatever it is you don’t want them to do if you tie it to their means of communication.
What other mechanisms for encouraging distrust in these systems have you explored?
The other methodology we have explored is roughly called explainable AI, where the system provides an explanation of some of its risks or uncertainties. Because all of these systems have uncertainty; none of them is 100% accurate. And a system knows when it is uncertain. So it can provide that information in a way a human can understand, and people will then change their behavior.
As an example, say I’m driving a car, the system has all the map information, and it knows that certain intersections have more accidents than others. As we approach one of them, it would say, “We’re approaching an intersection where 10 people were killed last year.” If you explain it in that way, someone is more likely to say, “Oh, wait, maybe I should be more aware.”
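A small sketch of what that kind of uncertainty-surfacing explanation might look like in code, assuming the vehicle can look up per-intersection accident statistics. The data, threshold, and message wording are invented for illustration:

```python
# Hypothetical sketch: turn intersection accident statistics into a
# plain-language risk explanation. Data, threshold, and wording are
# made up for illustration.

ACCIDENT_HISTORY = {          # intersection id -> fatalities last year
    "5th_and_main": 10,
    "oak_and_pine": 0,
}

def risk_alert(intersection_id: str, threshold: int = 5):
    """Return a human-readable warning for high-risk intersections, else None."""
    fatalities = ACCIDENT_HISTORY.get(intersection_id, 0)
    if fatalities >= threshold:
        return (f"We're approaching an intersection where {fatalities} "
                f"people were killed last year.")
    return None  # stay quiet at low-risk spots so warnings keep their weight

alert = risk_alert("5th_and_main")
if alert:
    print(alert)
```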
We’ve already talked about some of your concerns about our tendency to overtrust these systems. What are the others? And on the flip side, are there benefits too?
The negatives are really tied to bias. That’s why I always talk about bias and trust together. Because if I am overtrusting these systems, and these systems are making decisions that have different outcomes for different groups of individuals (say, a medical diagnosis that differs between women and men), then we are creating systems that exacerbate the inequities we have today. That is a problem. And when you link this to things tied to health or transportation, both of which can put you in life-or-death situations, a bad decision can lead to something you can’t really recover from. So we really do have to fix it.
The positive side is that automated systems are generally better than people. I think they can be even better, but I personally would rather interact with an AI system in some situations than with certain humans in others. I know it has some issues, but give me the AI. Give me the robots. They have more data; they are more accurate. Especially if you have a novice person doing the task, it’s a better outcome. The outcome just may not be equal.
In addition to your research on robotics and AI, you have been committed to increasing diversity in the field throughout your career. You started a program to mentor at-risk junior high school girls 20 years ago, long before many people were thinking about this issue. Why is it important to you, and why is it important to the field?
It’s important to me because I can identify times in my life when someone basically gave me access to engineering and computer science. I didn’t even know it was a thing. And that’s really why, later on, I never had a problem knowing that I could do it. So I always felt it was my responsibility to do the same for those who had done it for me. As I got older, I noticed that there weren’t a lot of people in the room who looked like me. So I realized: wait, there’s definitely a problem here, because people don’t have role models, they don’t have access, they don’t even know this is a thing.
Why it’s important to the field is that everyone brings a different experience. Just like I had been thinking about human-robot interaction before it was a thing. It wasn’t because I was brilliant; it was because I looked at the problem in a different way. And when I’m talking to someone with a different perspective, it’s like, “Let’s try to combine these and figure out the best of both worlds.”
Airbags kill more women and children. Why is that? Well, I would say it’s because someone wasn’t in the room to ask, “Hey, why don’t we test this on women in the front seat?” There are a bunch of products that have killed certain groups of people or put them in danger. And I would claim that if you go back, it’s because there weren’t enough people in the room who could say, “Hey, have you thought about this?” because they would have been speaking from their own experience and from their environment and community.
How do you expect AI and robotics to evolve over time? What is your vision for the field?
If you think about coding and programming, pretty much anyone can do it now. There are so many organizations like Code.org; the resources and tools are out there. One day I would love to have a conversation with a student where I ask, “Do you know about AI and machine learning?” and they say, “Dr. H, I’ve been doing that since the third grade!” I want to be shocked like that, because that would be wonderful. Of course, then I’d have to think about what my next job would be, but that’s another story.
But I think that when you have the tools of coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solutions. That would be my dream.