Research assistant Autumn Trimble sits inside “Mugsy,” one of the capture facilities Facebook Reality Labs’ Pittsburgh location uses to create “codec avatars.”
“There’s this big, ugly sucker at the door,” the young woman says, her eyes twinkling, “and he said, ‘Who do you think you are, Lena Horne?’ I said no but that I knew Miss Horne like a sister.”
It’s the beginning of a short soliloquy from Walton Jones’ play The 1940’s Radio Hour, and as she continues with the monologue it’s easy to see that the young woman knows what she’s doing. Her smile grows as she recounts the doorman’s change of tune—like she’s letting you in on the joke. Her lips curl as she seizes on just the right words, playing with their cadence. Her expressions are so finely calibrated, her reading so assured, that against the dark background you’d think you were watching a black-box revival of the late-’70s Broadway play.
There’s only one problem: Her body disappears below the neck.
Yaser Sheikh reaches out and stops the video. The woman is a stunningly lifelike virtual-reality avatar, her performance generated from data gathered beforehand. But Sheikh, who heads up Facebook Reality Labs’ Pittsburgh location, has another video he considers more impressive. In it, the same woman appears wearing a VR headset, as does a young man. Their headsetted real-life selves chat on the left side of the screen; on the right side, simultaneously, their avatars carry on in perfect concert. As mundane as the conversation is—they talk about hot yoga—it’s also an unprecedented glimpse of the future.