The Future of Digital Assistants Is Queer

Queering the smart wife means, in its simplest form, giving digital assistants a range of personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasing, subservient personality that many companies have chosen.

It would be a good case study to see what else devices like Q could queer, Strengers adds, "but that can't be the only solution." Another option is to bring in masculinity in different ways. One example is Pepper, a humanoid robot developed by SoftBank Robotics that is often given he/him pronouns and can recognize faces and basic human emotions. Or Jibo, another robot, introduced in 2017, that also used masculine pronouns and was marketed as a social robot for the home, though it has since been given a second life as a device focused on health care and education. Given the "soft and effeminate" masculinity of Pepper and Jibo (the former responds politely to questions and often offers flirtatious glances, while the latter whimsically swiveled around and approached users with an endearing demeanor), Strengers and Kennedy see them as a positive step in the right direction.

Queering digital assistants can also mean creating bot personalities that move away from humanized notions of technology. Anyone who asks Capital One's banking chatbot, launched in 2019, about its gender will get a playful answer: "I'm binary. I don't mean I'm both, I mean I'm just ones and zeroes. Think of me as a bot."

Similarly, Kai, an online banking chatbot developed by Kasisto, a company that builds AI software for online banking, eschews human characteristics altogether. Jacqueline Feldman, the Massachusetts-based writer and UX designer who created Kai, explained that the bot was "designed to be genderless": not by assuming a nonbinary identity, as Q does, but by taking on a distinctly robot identity and using "it" pronouns. "From my perspective as a designer, a bot could be beautifully designed and charming in new ways that are specific to the bot, without it pretending to be human," she says.

When asked whether it was a real person, Kai would answer, "A bot is a bot is a bot. Next question, please," making clear to users that it was not human and was not pretending to be: "I'm not human. But I learn. That's machine learning."

Kai's bot identity doesn't spare it from abuse, though. A few years ago, Feldman spoke about deliberately designing Kai with the ability to deflect and shut down harassment. For example, if a user repeatedly harassed the bot, Kai would respond, "I'm envisioning white sand and a hammock. Please try me later!" "I really did my best to give the bot some dignity," Feldman told the Australian Broadcasting Corporation in 2017.
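Feldman has not published Kai's internals, but the deflect-and-disengage pattern she describes is straightforward to sketch. The following minimal Python sketch is a hypothetical illustration, not Kasisto's actual design; the keyword list, threshold, class name, and canned replies are all assumptions invented for the example.

    # Hypothetical sketch of the deflect-and-disengage pattern described above.
    # The word list, threshold, and replies are illustrative placeholders only,
    # not Kasisto's implementation.
    ABUSIVE_TERMS = {"stupid", "useless", "idiot"}

    def is_abusive(message: str) -> bool:
        # Crude keyword check; a production system would use a trained classifier.
        return any(term in message.lower().split() for term in ABUSIVE_TERMS)

    class DeflectingBot:
        # Tracks consecutive abusive turns and disengages past a threshold.
        def __init__(self, threshold: int = 2):
            self.abuse_streak = 0
            self.threshold = threshold

        def respond(self, message: str) -> str:
            if is_abusive(message):
                self.abuse_streak += 1
                if self.abuse_streak >= self.threshold:
                    # Disengage rather than escalate, as Kai reportedly does.
                    return "I'm envisioning white sand and a hammock. Please try me later!"
                return "Let's keep it civil. How can I help with your banking?"
            self.abuse_streak = 0  # a civil turn resets the streak
            return "Happy to help. What would you like to do?"

The design choice worth noting is that the bot deflects and disengages rather than escalating or mirroring the user's tone, which is what gives it the "dignity" Feldman describes.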

Feldman also believes there is an ethical imperative for bots to self-identify as bots. "There's a lack of transparency when companies that design [bots] make it easy for the person interacting with the bot to forget that it's a bot," she says, and gendering bots or giving them a human voice makes that forgetting much more likely. Since many consumers' experiences with chatbots can be frustrating, and many people would simply rather talk to a person, Feldman thinks giving bots human qualities may amount to "over-designing" them.
