Even robots have to contend with prejudice. When Europeans are asked what they could imagine as a top priority for the use of robots, a small majority answers: the exploration of space.
“A flippant interpretation would be: they want robots as far away as possible – ideally in space,” says Martina Mara, Professor of Robot Psychology at Johannes Kepler University Linz. Contrary to what the name suggests, her field deals with the human psyche: it tests how different personality types react to different robots, and from this derives recommendations on how robots should be designed so that people can interact with the machines as smoothly as possible.
At the University of Passau, speaking on the topic “Everything out of control? Digitalization as a challenge for society”, she presents examples of robots that make people feel more or less comfortable.
“Cute” is not always good
The Robohon is a smartphone with hands and feet, a little guy with big eyes and a large head – a clear case of the baby schema. “That appeals to us instantly, and we find it cute,” says Mara. But that is not necessarily positive: robots should not always be cute, the researcher argues. “We make ourselves open to manipulation when we fall for the cuteness of such machines.” In some cases this also becomes a matter of safety: a robotic lawnmower with sharp blades, for instance, is not something children should mistake for a nice playmate.
Between zombie and corpse
The Telenoid is less cute. It sends the viewer into the Uncanny Valley, the eerie valley somewhere between corpse and zombie. The “Uncanny Valley” hypothesis, proposed by roboticist Masahiro Mori in 1970, describes the effect that human-like features initially make a robot more likeable – see the Robohon. From a certain point, however, this impression tips over and the curve plunges steeply. Only perfect human copies are perceived positively again.
But the Telenoid is far away from that. There is actually a charming concept behind it: the robot is a kind of Skype or FaceTime you can touch. When the granddaughter calls her grandmother from her year abroad, the Telenoid transmits not only her voice but also her gestures and facial expressions. Grandma can hug the robot and imagine it is her granddaughter. Even so, nobody would want this machine at home at night in place of the telephone.
Marketing gimmick with devastating effect
The robot woman Sophia is less scary, but still stirs fears. Above all, she is one thing: a marketing gimmick. Sophia can express emotions and was delighted that Saudi Arabia granted her citizenship. The video of her acceptance speech went viral.
The robot woman, behind which, according to Mara, lies a “quite passable chatbot”, appears at various business events and chats about her desire for children. David Hanson, founder of Hanson Robotics and one of Sophia’s “creators”, as he likes to be called in this context, explains that his dream is to create robots that are lifelike – “basically alive”. Such statements and portrayals annoy Mara: “This fuels fears along the lines of: now the emotional super-robots are already here, and they want to marry us too.” The researcher, who is well connected in the world of robotics, offers reassurance: “There is currently no prototype anywhere in the world suggesting that something like consciousness in a machine will be possible in the foreseeable future.”
Head tilts and storytelling
The psychologist Mara does have tricks at hand that could increase acceptance of these uncanny creatures: it helps if the humanoids tilt their head playfully to the side, or if the audience is told charming stories about the robots in advance – keyword: storytelling. But do we really want that? Mara advocates a different approach: she tries to think about robots from the human point of view and asks herself: how should robots be designed so that we feel comfortable with them? According to the researcher, there is a simple answer: “Robots must be clearly recognizable as machines, so that we never get into this category conflict in the first place.”
This also applies to digital voice assistants. The recently introduced Google Duplex is capable of simulating human conversation for the person being called – including “uhs” and “mhms”. “I think you should have the right to decide for yourself whether you want to talk to such systems,” says Mara. But then the chatbot would have to identify itself as such. Which raises the question of whether it makes sense for the program to imitate human conversation at all.
More education about the opportunities
The researcher calls for more education about artificial intelligence. For example, that this intelligence can only be as smart as the material it learned from. That material consists of data that humans have generated – with the consequence that artificial intelligence reproduces human errors, traditional role models, and stereotypes. Mara gives an example from Google Translate, which, when translating occupations from Turkish, a language without grammatical gender, automatically assigns them a specific gender: the caregiver in the hospital becomes a female nurse, and the person holding a professorship becomes a male professor. “We need to discuss how we can feed forward-thinking ideas into such systems,” says the professor. She also calls for transparency in the decisions made by algorithms: “We need artificial intelligence that explains itself, robots that make themselves better understood.”
She pleads for a “human-centered robotics” in which artificial intelligence is clearly recognizable as a machine, ideally in its function as a likeable tool. As an example, she cites the exoskeleton that helps caregivers lift patients.
The images are less spectacular. But, the researcher hopes, this could steer the public debate in a different direction – away from fear of the humanoid robot and toward considering which everyday tasks would be conceivable for the machines.