Even highly realistic androids can unsettle observers when their facial expressions lack emotional coherence. Until now, android facial movements have typically been produced with a "patchwork method," an approach with practical limitations. A research team has now developed a technology that instead uses "waveform movements" to generate complex expressions in real time, without awkward transitions. Because the system mirrors the android's internal state, it could improve emotional communication between robots and humans and make androids seem more humanlike.
A research team led by Osaka University has developed a technology that enables androids to express mood states, such as "excited" or "sleepy," by synthesizing facial movements as layered, decaying waves.
Even when an android's appearance is so realistic that it could be mistaken for a human in a photograph, watching it move in person can still feel unsettling. It can produce familiar expressions such as smiles or frowns, but it is hard to identify a consistent emotional state behind them, which creates a sense of unease as observers try to gauge its true feelings.
Until now, a "patchwork method" has typically been used to let robots with movable facial parts, particularly androids, display expressions over long periods. This method involves preparing several pre-arranged action scenarios and switching between them as needed to avoid unnatural facial movements.
However, this method presents practical challenges, including the need to prepare complex action scenarios in advance, minimize unnatural movements during transitions, and fine-tune facial movements to subtly convey expressions.
In this study, lead author Hisashi Ishihara and his team developed a dynamic facial expression synthesis technology using "waveform movements." These represent various gestures, such as "breathing," "blinking," and "yawning," as individual waves. These waves are applied to relevant facial areas and layered to create complex facial movements in real time, eliminating the need for pre-prepared action data and avoiding noticeable transitions.
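The layering idea can be sketched as a superposition of waves, one per gesture, routed to the facial actuators each gesture drives. The sketch below is a minimal illustration only: the parameter values, actuator names, and gesture tables are assumptions, and the published system's actual waveforms (which include decaying components) are not reproduced here.

```python
import math

# Illustrative wave parameters (amplitude, frequency in Hz, phase) per gesture.
# These numbers are assumptions, not the system's published values.
GESTURES = {
    "breathing": {"amplitude": 0.3, "frequency": 0.25, "phase": 0.0},
    "blinking":  {"amplitude": 1.0, "frequency": 0.4,  "phase": 1.2},
}

# Hypothetical mapping of each gesture to the actuators it drives, with gains.
GESTURE_TARGETS = {
    "breathing": {"nostril": 1.0, "jaw": 0.4},
    "blinking":  {"eyelid": 1.0},
}

def wave_value(params, t):
    """Evaluate one gesture's wave at time t (a plain sinusoid in this sketch)."""
    return params["amplitude"] * math.sin(
        2 * math.pi * params["frequency"] * t + params["phase"]
    )

def synthesize(t):
    """Layer all gesture waves into per-actuator commands at time t."""
    commands = {}
    for name, params in GESTURES.items():
        value = wave_value(params, t)
        for actuator, gain in GESTURE_TARGETS[name].items():
            commands[actuator] = commands.get(actuator, 0.0) + gain * value
    return commands

# Sampling synthesize(t) at the control rate yields continuous motion with no
# pre-scripted scenarios and no transition points to smooth over.
print(synthesize(0.0))
```

Because every gesture is always "on" as a wave, new gestures blend in additively rather than being switched in, which is what removes the noticeable transitions of the patchwork approach.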
Moreover, by incorporating "waveform modulation," which adjusts the waves based on the robot's internal state, changes in mood or internal conditions can be instantly reflected in facial movements.
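Waveform modulation can likewise be sketched as mood-dependent multipliers applied to each gesture's wave parameters. The mood table and its values below are assumptions for illustration, not the published system's mappings.

```python
# Illustrative mood modulation: multiplicative adjustments to a gesture's wave
# parameters. The moods, gestures, and factors here are assumed, not published.
MOOD_MODULATION = {
    "sleepy":  {"breathing": {"frequency": 0.6, "amplitude": 1.3},
                "blinking":  {"frequency": 0.5, "amplitude": 0.8}},
    "excited": {"breathing": {"frequency": 1.8, "amplitude": 1.1},
                "blinking":  {"frequency": 1.5, "amplitude": 1.0}},
}

def modulate(base_params, mood, gesture):
    """Return a copy of base_params with the mood's multipliers applied."""
    scaled = dict(base_params)
    for key, factor in MOOD_MODULATION.get(mood, {}).get(gesture, {}).items():
        scaled[key] = scaled[key] * factor
    return scaled

base = {"amplitude": 0.3, "frequency": 0.25, "phase": 0.0}
# A "sleepy" mood slows and deepens the breathing wave in this sketch.
print(modulate(base, "sleepy", "breathing"))
```

Since the multipliers can be changed at any control step, a shift in the robot's internal state is reflected in the very next samples of the waves, with no scenario switching.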
"Advancing research in dynamic facial expression synthesis will allow robots with complex facial movements to display more expressive faces and convey mood changes in response to their surroundings, including human interactions," says senior author Koichi Osuka. "This could greatly enhance emotional communication between humans and robots."
Ishihara adds, "Instead of creating surface-level movements, further development of a system in which internal emotions are reflected in every detail of an android's behavior could lead to androids perceived as having a heart."
By enabling robots to adjust and express emotions adaptively, this technology is expected to significantly enhance the value of communication robots, allowing them to interact with humans in a more natural, humanlike way.
Source: sciencedaily