Below is a video I picked up from engadget.com, in a post titled "Canesta gesture controlled TV frees us from the tyranny of the remote." I've talked in the past about gesture computing, seen to an extent with the iPhone and in large part with Surface Computing. Now we see it used to control a television through gestures. It looks very easy and solves the problem of the lost remote. I wonder what would happen if multiple people gestured at the same time?
Several blog posts and articles came out today announcing that AT&T is putting Surface Computing tables in its stores. This follows an announcement AT&T made at the CTIA conference. There are some pretty nice videos of what this technology can do: Good video on Surface Computing
This is something of a gimmick by AT&T, but it could drive interest and draw people into their stores. More importantly, it may be the leading edge of getting these multi-touch, surface computing platforms out where people can actually use them.
So, what's next? I can envision surface computing used by architects, engineers, and city planners, in restaurants and bars, and in any environment where interactivity is key.
One of the blogs I routinely read is Futurist.com. In their latest post, they talk about the digital interaction seen in Surface Computing and the iPhone; see their post, iPhones, Surface Computing – A New Way. Essentially, they are talking about the ability to interact with a computer or phone screen using your hands and gestures, and the ability of a Surface Computing table to recognize other devices and objects placed on it. I've seen a few online demos of this technology over the last year or so, and it is really quite stunning.
I'm wondering how quickly this will catch on as the norm in computing. Since the iPhone offers some of the same interactive features one can experience with Surface Computing, demand for this type of interface can only increase.