Nice blog post with some thinking on what comes next for interfaces - www.buzzfeed.com/tommywilhelm/what-comes-after-the-touchscreen
Gesture tracking, 3D mapping of gestures and movement, and so on - the view seems to be that this is where we are heading.
But what are we trying to do? What tasks are we trying to accomplish with these interfaces? What are the applications doing? Presumably we are not writing documents or performing similar day-to-day tasks.
Perhaps it is an interesting route for situations where you don't want to, or can't, actually touch a screen - like the example of surgeons given in the blog.
People will figure out uses for these technologies. Touch spent years doing little more than spinning and zooming photographs, but with the hardware of smartphones and tablets we have found real value in it. Gesture hasn't yet had much of an outing beyond gameplay, which is a short, occasional activity - not a day-long job.
Labels: HCI, HMI, interaction, interface, multitouch