Here at SIGGRAPH 2012, we got our first look at the Edge3 Technologies 3D tracking and gesture recognition system. It's sort of like Kinect or Leap, except definitely better than the former and arguably better than the latter. We were impressed, and we'll tell you why it could enable new interfaces for the next generation of devices.
Edge3 operates on one of the simplest 3D sensing principles there is: stereo. There are no lasers, no infrared fields, just two CMOS cameras identical to what you already have in your laptop or cellphone. The cameras are placed next to each other, and each one gets a slightly different view.
By comparing the views with some fancy software, you can compute how far each pixel is from the cameras. It's simple, and people have been using stereo cameras for depth perception for a long time, but it's often tricky to make the technique robust and reliable, especially in weird light or against blank backgrounds.
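The core of the stereo principle above boils down to one similar-triangles relationship: the farther away a point is, the smaller the horizontal offset (the "disparity") between where it appears in the left and right images. Here's a minimal sketch of that math; the focal length and camera spacing below are illustrative numbers, not Edge3's actual specs, and the hard part in practice (which Edge3's software handles) is reliably finding which pixel in one image matches which pixel in the other.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo depth: depth (m) = focal length (px) * baseline (m) / disparity (px).

    - disparity_px: horizontal pixel shift of a point between the two views
    - focal_length_px: camera focal length, expressed in pixels
    - baseline_m: distance between the two cameras, in meters
    """
    if disparity_px <= 0:
        # Zero disparity means the point is effectively at infinity.
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Two cameras 6 cm apart with an 800-pixel focal length: a feature that
# shifts 48 pixels between the views works out to 1 meter away, while a
# 96-pixel shift puts it at half a meter.
print(depth_from_disparity(48, 800, 0.06))  # → 1.0
print(depth_from_disparity(96, 800, 0.06))  # → 0.5
```

Note how depth resolution falls off with distance: a one-pixel disparity error matters far more for distant points than near ones, which is one reason lens choice and camera spacing let a system like this be tuned for a particular working range.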
The team behind Edge3 has put a lot of hard work into designing vision software that is robust and reliable, and the reason that we're getting our first look at these prototypes at SIGGRAPH is that they've been successful. They've developed algorithms that can handle just about any conditions, and that provide very impressive performance even on limited hardware platforms like cellphones. The upshot here is that Edge3 is a solid 3D vision system that can create depth maps of your hands, your body, and the hands and bodies of your friends, and do accurate and detailed gesture detection and recognition under a variety of lighting conditions. Since it's all about the software — not hardware — it's both versatile and affordable.
It's hard not to compare this thing to the Microsoft Kinect, and when you do, the Kinect comes off rather poorly. Edge3 has about 16 times the resolution of Kinect. It works in all kinds of light, even outdoors and in direct sunlight, while Kinect struggles with that. Edge3 has millimeter-level accuracy, and can be tuned to work at whatever distance you want by refocusing the cameras with different lenses. You can also use it at whatever angle you prefer, whether it's looking straight up or looking at you. It runs at 100+ frames per second, while Kinect tops out at just 30. And Kinect is more expensive. Significantly more expensive.
Now, I know what you're wondering. You're wondering, "How is this better than that Leap thing that looks like it's so good at gestures?" Well, feel free to disagree with this, but the reason that I like Edge3 more (so far, at least) is that it has the potential to be used for all kinds of different things, not just gesture control. Like, the great thing about Kinect is that it got hacked up and stuck onto all kinds of things: robots, for example, made huge advances after Kinect was cracked open.
Edge3 promises to offer a similar solution (depth mapping correlated with high-resolution color imagery) that's better, cheaper, and much more versatile, while Leap seems like more of a unitasker. Leap is great at what it does, but arguably, it doesn't have the same kind of potential to enable a wide variety of next-gen devices.
We don't have a firm price or release date on Edge3 yet, beyond "substantially less than $100" and "soon." Part of the problem (not that it's really a problem) is that because the hardware requirements are so simple, Edge3 is hoping to license its technology to other companies for use in all kinds of platforms, as opposed to just creating a single peripheral-type device.
All photos by Evan Ackerman for DVICE. Posted on location at SIGGRAPH 2012 in Los Angeles, California.