An avatar system for real-time embodied communication between ML devices. A core Lumin OS service to integrate with other platform social features.
Product Design Lead
HCI Research
Creative Direction
Concept Design
Coordination with ML Studios, SW, Brand
Presentation to Executive Stakeholders
User Testing
Creative Early Adopters
Lorena Pazmino, Principal Visual Designer
Ian Mankowski, 3D Artist
Christina Lee, 3D Artist
Magic Leap Studios, Avatar Art Direction
Savannah Niles, Product Manager
Cole Heiner, Interaction Designer
Frank Hamilton, UX Prototyper
Achieve real social presence.
Support dynamic use cases beyond talking.
Create ethical, respectful avatars.
But Magic Leap’s HW sensing can only partially track user behavior in real time. As a platform feature, ML avatars needed to create rich presence without external HW peripherals or heavy SW computation, so that any user could run the feature alongside other Lumin apps.
Avatars sit at a visceral intersection of human-computer interaction. Networked embodied communication raises the stakes by introducing a third actor: other users. Design choices here shape the social outcomes of the whole system and demand careful consideration.
What should ML avatars look like?
What behaviors should they represent?
How do we achieve a feeling of virtual presence?
I led an academic literature review to create the ML1 Avatar Behavior Brief.
I synthesized over 15 papers to arrive at interaction design principles for our avatar system.
A guiding principle emerged from the research, the fidelity accord: matching behavioral fidelity to visual fidelity can create a sense of real presence, even in abstract art styles, while misaligned behavior and visuals produce uncanniness and discomfort.
Real-time behavior is only as expressive as its input sources. Inside-out sensing limits what's possible, and what is believable for other users.
Avatars should represent only what we can sense in real time. Minimal interference with directly-sensed signals.
Don’t dampen real behavior by layering generic animations elsewhere. Inferred behavior can’t yet predict idiosyncratic user expression in real time. Instead of guessing wrong, the brief recommended gestalt abstraction.
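The principle above can be sketched in a few lines of pseudocode-style Python. This is purely illustrative, not the Lumin SDK: the channel names and the `build_avatar_frame` function are hypothetical, chosen to show the idea of forwarding directly-sensed signals untouched while collapsing unsensed body regions into an abstract form instead of driving them with guessed animation.

```python
# Illustrative sketch only; these names are not real Lumin SDK APIs.
# Channels the headset can actually sense (inside-out, no peripherals).
SENSED_CHANNELS = {"head_pose", "hand_pose", "voice_amplitude"}

def build_avatar_frame(sensor_frame: dict) -> dict:
    """Map one frame of raw sensor data to avatar behavior.

    Directly-sensed channels pass through with minimal interference;
    unsensed regions (torso, legs, facial micro-expression) become a
    gestalt abstraction rather than generic inferred animation.
    """
    avatar = {ch: sensor_frame[ch] for ch in SENSED_CHANNELS if ch in sensor_frame}
    avatar["unsensed_regions"] = "gestalt_abstraction"
    return avatar

frame = {"head_pose": (0.0, 1.6, 0.0), "hand_pose": "pinch", "voice_amplitude": 0.4}
print(build_avatar_frame(frame))
```

The key choice is what the function refuses to do: nothing synthesizes behavior for channels the hardware never observed.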
To validate the design, the team built prototypes and compared the brief’s direct approach against an alternative, inferred strategy.
Directly-sensed felt better. This design recommendation was controversial. Stakeholders expected full characters, but the prototypes, user feedback, and actual felt experience convinced the team to pursue a direct approach. Inferred avatars with full characters created uncomfortable social dynamics and felt less lively. Through this direct approach, ML Avatars achieve rich nonverbal body language and a real sense of shared social presence.
From this behavior brief, I collaborated with UX Visual Design, Product, and ML Studios to creatively direct the avatar visual style and asset scope. To achieve presence, visual details had to match real-time avatar behavior.
We landed on an elevated yet warm and inclusive style to appeal to ML1’s creative consumers. Collaborating with other UX researchers, I led two rounds of internal user studies to refine the approach, assess user comfort, and identify improvements.
ML Avatars celebrate users’ self-expression and self-determination. They are full-scale, move freely through shared spaces, and strive for inclusive representation. Released in fall 2018, ML Avatars enable all-new communication possibilities and continue to receive positive feedback from a wide variety of public users.