Another glasses-free technology coming from MIT
As striking as it is, the illusion of depth now routinely offered by stereoscopic 3D movies is a paltry facsimile of a true three-dimensional visual experience. In the real world, as you move around an object, your perspective on it changes. But in a movie theater showing a stereoscopic 3D movie, everyone in the audience has the same, fixed perspective — and has to wear cumbersome glasses, to boot.
Despite impressive recent advances, holographic television, which would present images that vary with the viewer's perspective, probably remains some distance in the future. But in a new paper featured as a research highlight at this summer's SIGGRAPH 2012 computer-graphics conference, the MIT Media Lab's Camera Culture group offers a new approach to multiple-perspective, glasses-free autostereoscopic 3D that could prove much more practical in the short term: the triple-LCD Tensor Display.
The three individual images and the result seen on the Tensor Display
Professor Ramesh Raskar leads the MIT Camera Culture group and supervises senior researcher Douglas Lanman, graduate student Matthew Hirsch and newly hired postdoc Gordon Wetzstein. More about the team here.
An evolution of the previous HR3D project
Building on their earlier HR3D project, and instead of the complex hardware required to produce holograms, the Media Lab system uses three LCD panels stacked one atop another. Each panel refreshes at 360 frames per second, for a combined rate of more than 1,000 images per second. With clever optics sending different images in different directions, the resulting system can deliver some 20 progressively different images across 20 angular sectors. Viewing comfort is thus far greater than on the usual five-to-seven-sector autostereoscopic displays.
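The geometric idea behind the stack can be sketched in a few lines: a ray from an oblique viewing angle crosses each LCD layer at a laterally shifted pixel, and the light reaching the eye is the product of the transmittances along that ray, so different angles naturally see different images. The layer contents, pixel counts and per-layer shift below are made-up illustration values, not the paper's actual geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
layers = rng.random((3, 100))  # transmittance of 3 stacked LCD layers, 100 pixels wide (illustrative)

SHIFT_PER_LAYER = 2  # pixel offset per layer per unit of viewing angle (assumed geometry)

def view(angle):
    """Image seen from a given integer viewing angle.

    An oblique ray crosses each layer at a laterally shifted pixel;
    the intensity reaching the eye is the product of the three
    transmittances it passes through (a simplified 1-D sketch).
    """
    idx = np.arange(80)
    image = np.ones(80)
    for k in range(3):
        image *= layers[k, idx + k * SHIFT_PER_LAYER * angle]
    return image

straight_on = view(0)
oblique = view(3)  # a different angular sector sees a genuinely different image
```

Because the three layers multiply rather than add, the stack can steer distinct content into each angular sector without any moving parts.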
A math problem
The challenge is to generate the correct pixel pattern for each of the three panels, a computation that becomes exponentially more complex as the number of screens grows from two to three. In solving that problem, Raskar, Lanman and Hirsch were joined by Gordon Wetzstein, a new postdoc in the Camera Culture group. The researchers' key insight was that, while some aspects of a scene change with the viewing angle, others do not. The pattern-calculating algorithms exploit this natural redundancy, reducing the amount of information that needs to be sent to the LCD screens and thus improving the resolution of the final image.
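The redundancy argument can be illustrated with a low-rank nonnegative factorization: if the 20 views share most of their content, the whole set can be reproduced from a handful of shared patterns rather than 20 independent images. The published Tensor Display work formulates layer optimization as a nonnegative tensor factorization; the 2-D matrix version below, with classic Lee-Seung multiplicative updates and a synthetic "light field", is only a minimal sketch of that idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light field: 20 angular views of a 64-pixel scene, built from
# 3 shared components plus a little noise, so it is genuinely low rank.
basis = rng.random((3, 64))
weights = rng.random((20, 3))
light_field = weights @ basis + 0.01 * rng.random((20, 64))

rank = 3                            # number of shared patterns (assumed)
A = rng.random((20, rank)) + 0.1    # per-view mixing weights
B = rng.random((rank, 64)) + 0.1    # shared patterns, one row per component

def err(A, B):
    return np.linalg.norm(light_field - A @ B)

initial = err(A, B)
for _ in range(300):
    # Multiplicative updates keep both factors nonnegative, matching the
    # physical constraint that LCD transmittances cannot be negative.
    A *= (light_field @ B.T) / (A @ B @ B.T + 1e-9)
    B *= (A.T @ light_field) / (A.T @ A @ B + 1e-9)
final = err(A, B)
```

Here 20 views are described by 20×3 + 3×64 numbers instead of 20×64, which is the sense in which exploiting redundancy reduces the data sent to the panels.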