A few years back (eek!) I was involved in a project funded by the Technology Strategy Board that covered gesture-driven interfaces and the like. Off the back of it I started playing about with some ideas for different types of input method. This was all before Kinect appeared on the scene in any serious way, and visual detection of gestures was rather crude at the time (it still is, to a large extent) – so I went down the physical route.
The glove basically controlled several inputs via a USB proto board, and I wrote some software that latched onto the Windows mouse events so I could send right/left clicks and so on. The nifty bit was how you turned the inputs on and off:
The darker areas on the fingertips are a type of conductive fabric; handily it had an adhesive backing, so I didn’t need to stitch anything (a “proper” version of this should really use conductive thread rather than the narrow-gauge wires I used). The thumb acts as a ground and each of the fingertips controls an input to the USB proto board, so touching a finger against your thumb turns that input on. Combinations are supported too – just hold more fingers against your thumb.
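I no longer have the original source, but the software side of this was simple: watch the proto board’s input lines and inject a mouse event when one goes high. A rough sketch of that dispatch logic (the names, the line-to-button assignments, and the recorded-clicks list standing in for the real Windows API call are all illustrative, not the actual code):

```python
# Sketch of the input-to-mouse-event dispatch (illustrative reconstruction).
# On Windows the real version would synthesise clicks via the user32 API
# (e.g. SendInput); here the "click" is just recorded so the mapping logic
# can be shown on its own.

clicks = []  # stands in for actual injected mouse events

def send_click(button):
    """Pretend to inject a mouse click; the glove software called the Windows API here."""
    clicks.append(button)

# Each proto-board input line maps to a mouse action (assignments are hypothetical).
INPUT_TO_ACTION = {
    0: "left",   # index finger touching thumb
    1: "right",  # middle finger touching thumb
}

def on_input_high(line):
    """Called when an input line on the USB proto board goes high."""
    action = INPUT_TO_ACTION.get(line)
    if action is not None:
        send_click(action)

on_input_high(0)
on_input_high(1)
print(clicks)  # ['left', 'right']
```

A real version would also want some debouncing, since fabric contacts don’t make or break cleanly.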
The real aim of this wasn’t to provide simple left/right mouse clicks; rather, different fingers, and combinations thereof, would relate to “modes”. Think of the index finger being move and the middle finger being scale – that sort of thing.
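Since each finger is one input, a natural encoding is one bit per finger, with combinations selecting modes. A minimal sketch of that lookup (index = move and middle = scale are from the text above; the combination entry is a hypothetical example):

```python
# Each finger touching the thumb sets one bit of a 4-bit state; combinations
# of set bits select a mode. Assignments beyond index/middle are hypothetical.

INDEX, MIDDLE, RING, PINKY = 1, 2, 4, 8

MODES = {
    INDEX: "move",
    MIDDLE: "scale",
    INDEX | MIDDLE: "rotate",  # example combination, not from the original
}

def fingers_to_mask(fingers):
    """Combine the fingers currently touching the thumb into one bitmask."""
    mask = 0
    for f in fingers:
        mask |= f
    return mask

def current_mode(fingers):
    """Return the active mode for this finger combination, or 'idle'."""
    return MODES.get(fingers_to_mask(fingers), "idle")

print(current_mode([INDEX]))          # move
print(current_mode([INDEX, MIDDLE]))  # rotate
```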
Another neat bit of this was the other glove (not pictured – I need to find the damn thing!), which provided the ability to control the movement of objects and suchlike. Using a Wiimote on top of my monitor, pointing toward me, I tracked IR LEDs on the fingertips of the glove and fed that information into GlovePIE to control the cursor movement. A wave of my hand and a tap of my fingers and I could control my desktop without touching anything 🙂
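GlovePIE handled the actual cursor mapping, but the coordinate transform involved is worth sketching (my reconstruction, not the original script). The Wiimote’s IR camera reports dot positions in a roughly 1024×768 space, and because the camera points back at you, the horizontal axis has to be mirrored so the cursor follows your hand:

```python
# Sketch of the IR-dot-to-cursor mapping (reconstruction, not the GlovePIE
# script). The Wiimote's IR camera reports dots in ~1024x768 units; mirroring
# X makes the cursor track the hand when the camera faces the user.

CAM_W, CAM_H = 1024, 768

def ir_to_cursor(dot_x, dot_y, screen_w, screen_h):
    """Map an IR dot position to screen coordinates, mirroring X."""
    x = (1 - dot_x / CAM_W) * screen_w  # mirrored: camera looks back at you
    y = (dot_y / CAM_H) * screen_h
    return round(x), round(y)

print(ir_to_cursor(512, 384, 1920, 1080))  # (960, 540): centre of view -> centre of screen
```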
Of course, in the last few years this has all exploded into the commercial space (to mixed response) with more sophisticated, visual methods, but this has a nice cyberpunky feel to it 😀