This article was originally published on Brad’s blog, Feld Thoughts.
A few weeks ago, Fred Wilson dictated a post on his blog, A VC, using his Nexus One phone. He also discovered Swype, an alternative text input system, which now has an unofficial Android app. As usual, the comment threads on A VC were very active, full of thoughts about the future (and past) of voice and keyboard input.
When I talk about human-computer interaction, I regularly say that in 20 years we will look back on the mouse and keyboard as input devices the same way we currently look back on punch cards.
While I don’t have a problem with mice and keyboards, I think we are locked into a totally sucky paradigm. The whole idea of having a software QWERTY keyboard on an iPhone amuses me to no end. Yeah, I’ve taught myself to type pretty quickly on it, but when I think of the information I’m trying to get into the phone, typing seems so totally outmoded.
Last year at CES, "gestural input" was all the rage in the major CE booths (Sony, Samsung, LG, Panasonic, etc.). Translating from CES-speak, this mostly meant things like changing the channel on a TV with a gesture. This year the silly basic gesture crap was gone, replaced with IP everywhere (very important in my mind) and 3D (very cute, but not important). Elsewhere there was plenty of 2D multitouch, most notably front and center in the Microsoft and Intel booths. I didn't see much speech, and very little 3D UI. One exception was the Sony booth, where Organic Motion (a portfolio company of my VC firm, Foundry Group) put up a last-minute installation for Sony showing off markerless 3D motion capture.
So while speech and 2D multitouch are going to be part of all of this, they're only a small part. If you want to envision what things could be like a decade from now, read Daniel Suarez's incredible books, Daemon and Freedom™. Or watch the following video, which I just recorded from my glasses and uploaded to my computer (warning: cute dog alert).