For more information please contact firstname.lastname@example.org.
T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data. The display acts as a window, affording users a perspective view of three-dimensional data by tracking the position and orientation of their heads. T(ether) creates a 1:1 mapping between real and virtual coordinate space, allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system accepts input through capacitive touch on the display and through a motion-tracked glove. When placed behind the display, the user's hand extends into the virtual world, enabling the user to interact with objects directly.
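Head-coupled "window" displays like the one described above are typically rendered with an off-axis (asymmetric) viewing frustum computed from the tracked head position relative to the display plane. The sketch below is a minimal illustration of that idea, not T(ether)'s actual rendering code; the coordinate conventions (display centered at the origin in its own plane, eye at positive distance `d` from it) and the function name are assumptions.

```javascript
// Hypothetical sketch: compute an off-axis frustum for a head-coupled
// display. The display is a rectangle of size screenW x screenH centered
// at the origin of its own plane; `eye` is the tracked head position
// [x, y, d] in display coordinates, with d the distance to the plane.
function offAxisFrustum(eye, screenW, screenH, near) {
  const d = eye[2]; // eye distance from the display plane
  const scale = near / d; // project screen edges onto the near plane
  return {
    left: (-screenW / 2 - eye[0]) * scale,
    right: (screenW / 2 - eye[0]) * scale,
    bottom: (-screenH / 2 - eye[1]) * scale,
    top: (screenH / 2 - eye[1]) * scale,
  };
}

// A head centered in front of the display yields a symmetric frustum:
const centered = offAxisFrustum([0, 0, 1], 2, 2, 0.5);
// Moving the head to the right skews the frustum, so the virtual scene
// stays registered with the real world behind the display:
const offset = offAxisFrustum([0.5, 0, 1], 2, 2, 0.5);
```

As the tracked head moves, the frustum is recomputed every frame, which is what keeps the 1:1 registration between real and virtual space.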
Above: Virtual objects can be created and manipulated through hand gestures.
Above: The environment can be spatially annotated using a tablet's touch screen.
Above: T(ether) is collaborative. Multiple people can edit the same virtual environment.
T(ether) uses Vicon motion capture cameras and the g-speak vision pipeline to track the position and orientation of Oblong tags affixed to the tablets and to users' heads and hands. Server-side synchronization is written in NodeJS, and the tablet-side code uses Cinder. The synchronization server forwards tag locations to each tablet over Wi-Fi, and each tablet then renders the scene from its own viewpoint. Touch events on each tablet are broadcast to all other tablets through the synchronization server.
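The relay pattern described above, where the server forwards tag poses and touch events to every other client, can be sketched in a few lines of JavaScript. This is an illustrative sketch, not the actual T(ether) server: the message format, function names, and client objects are assumptions standing in for the real NodeJS networking code.

```javascript
// Hypothetical sketch of the synchronization relay: tag poses from the
// motion-capture pipeline and touch events from tablets are re-broadcast
// to every connected client except the sender.

// Serialize a tracked tag's pose (position + orientation quaternion).
function encodePose(tagId, position, orientation) {
  return JSON.stringify({ type: "pose", tagId, position, orientation });
}

// Forward a message to every connected client except its sender.
function broadcast(clients, senderId, message) {
  for (const client of clients) {
    if (client.id !== senderId) {
      client.send(message);
    }
  }
}

// Mock clients standing in for tablet connections over Wi-Fi.
const received = [];
const clients = [
  { id: "tablet-1", send: (m) => received.push(["tablet-1", m]) },
  { id: "tablet-2", send: (m) => received.push(["tablet-2", m]) },
  { id: "tablet-3", send: (m) => received.push(["tablet-3", m]) },
];

// A head tag's pose arrives and is relayed to the other two tablets.
const msg = encodePose("head-42", [0.1, 1.6, 0.3], [0, 0, 0, 1]);
broadcast(clients, "tablet-1", msg);
```

Because every client receives every update, each tablet can render the shared scene locally while staying consistent with its collaborators.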
Copyright © 2012 MIT Media Lab