MIT researchers transform dancer Roberto Bolle into a cloud of pixels


The MIT SENSEable City Lab unveiled a new project today called Dancing Atoms, which uses 3-D body scans, motion capture and digital conversions to create a “pixel map” of star ballet dancer Roberto Bolle and his every movement. The goal of the project — developed in collaboration with Queen Mary University of London, Rapido3d and CMYK+WHITE — is to examine and replicate the human body and its range of motion, finding new ways to study the nature of beauty.

“Motion-capture technologies make it possible, for the first time, to analyze human movements in full 3-D at very high resolutions,” says Pat Healey, professor of human interaction at Queen Mary. “This unprecedented level of detail can help us understand what affects people’s perceptions of grace and beauty.”

“What better way to study the body than through the spatial mapping of a ballet dancer?” asks Adam Pruden, project leader and research fellow at SENSEable. “Greater analysis and understanding of our bodies in space is necessary as technology becomes more integrated into the building infrastructure, and as we increasingly use our bodies as an input to control objects around us.”

Dancing Atoms is essentially a digital copy of Bolle and his movements. Bolle’s body was scanned using a custom 360-degree scanner by Rapido3d that recorded x, y and z coordinates for shape and RGB values for color. This data was digitized into a “pixel cloud” made up of over a million polygons, or pieces, to create a complete model of Bolle’s body.
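The scan data described above — a position plus a color for every point — can be pictured as two parallel arrays. A minimal sketch, with invented sample data (the lab’s actual file formats and pipeline are not described in this article):

```python
import numpy as np

# Illustrative only: a "pixel cloud" as described in the article, where each
# of the ~1 million pieces carries x, y, z coordinates for shape and an
# R, G, B triple for color. The data here is random stand-in values.
rng = np.random.default_rng(0)
n_points = 1_000_000  # "over a million" pieces

xyz = rng.uniform(-1.0, 1.0, size=(n_points, 3))                # spatial coordinates
rgb = rng.integers(0, 256, size=(n_points, 3), dtype=np.uint8)  # per-point color

# A complete body model is then just the pair (xyz, rgb):
# shape and color recorded for every point of the scan.
print(xyz.shape, rgb.shape)  # (1000000, 3) (1000000, 3)
```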

“Think about the ‘WonkaVision’ scene from ‘Willy Wonka and the Chocolate Factory’ where the boy is photographed by a giant camera and split up into millions of tiny pieces, before whizzing through the air,” says MIT associate professor Carlo Ratti, director of SENSEable. “With Dancing Atoms, we can start to manipulate these millions of pieces by controlling each bit as if it were a digital atom.”

As Bolle dances, he wears a black suit with 42 reflective markers placed at key locations – mostly joints – on his body, while motion-capture cameras record the markers moving through space at a rate of 120 recordings per second.

“When we motion-captured Roberto, our system consolidated all the 2-D images from each infrared camera in our 12-camera array, allowing us to know the precise 3-D coordinates of each reflective marker on Roberto’s body,” says Stuart Battersby, PhD student in human interaction at Queen Mary University of London.
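Recovering a 3-D marker position from several 2-D camera views is classically done by linear (DLT) triangulation. The sketch below is not the lab’s software — it is a generic two-camera illustration with toy projection matrices; a real array like the 12-camera rig described would stack two rows per camera into the same least-squares system:

```python
import numpy as np

# Generic linear (DLT) triangulation: given each camera's 3x4 projection
# matrix and the marker's pixel coordinates in that view, solve for the
# 3-D point that best reprojects into all views.
def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3-D point from two calibrated views."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]             # dehomogenise

def project(P, X):
    """Project a 3-D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])                      # a "marker" position
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_rec, X_true))  # True
```

At 120 recordings per second, solving this small system for each of the 42 markers per frame is computationally trivial; the engineering effort in such systems lies in camera calibration and marker identification, not the triangulation itself.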

Bolle’s 3-D scan and dance footage are used by designers and researchers to piece together his avatar. With software to import and combine the data, Roberto’s form and motion are converted to a fluid map of responsive and adjustable pixel spheres, which brings his digital avatar to life.

“His avatar is displayed and controlled at different resolutions, ranging from a human constellation of 20 dots to a full-resolution body to portray the range of visualizing the human form,” Pruden says. “Roberto’s dancing pixel-avatar expands through space, shifts forms and responds to environmental forces supplied by his physical and digital inputs.”
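The idea of showing the same body at anything from a 20-dot constellation to full resolution can be illustrated by simple subsampling of the point cloud. This is only a conceptual sketch (the project’s actual renderer is not described here), with a random stand-in cloud:

```python
import numpy as np

# Hypothetical illustration of the "range of resolutions" idea: draw the
# same scan at any point count by picking indices evenly across the cloud.
def at_resolution(points, n):
    """Return n points chosen evenly from the full cloud."""
    idx = np.linspace(0, len(points) - 1, num=n, dtype=int)
    return points[idx]

cloud = np.random.default_rng(1).uniform(size=(1_000_000, 3))  # stand-in full scan
constellation = at_resolution(cloud, 20)    # minimal 20-dot "human constellation"
full_body = at_resolution(cloud, len(cloud))  # every scanned point
print(constellation.shape, full_body.shape)  # (20, 3) (1000000, 3)
```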

Today’s 3-D scanning tools generally involve large, expensive equipment and a significant amount of time to set up, capture and process data. However, advances in scanning technology, such as the Microsoft Kinect, point to a future where 3-D scanning and motion capture will be unobtrusively embedded in the environment at low cost. Future smartphones equipped with micro-projectors could likewise act as pocket-sized 3-D scanners, allowing us to digitally duplicate and analyze any object.

The Dancing Atoms project was developed by Adam Pruden and Carlo Ratti of Senseable City Lab in collaboration with Roberto Bolle, Italy’s étoile ballet dancer; Pat Healey, Stuart Battersby, Arash Eshghi and Nicola Plant of the Interaction, Media and Communication Group, Queen Mary University of London; Kev Stenning of Rapido3D; and Sanders Hernandez and EunSun Lee of CMYK+WHITE.