In the realm of the senses

Rather than humans adapting to virtual reality, computer interfaces are adapting to ours.

THIS BEING THE YEAR 2000, home of The Future!, we were all supposed to be computing in virtual cyberspace reality by now—zooming through virtual landscapes in digitized bodies, wearing our heads-up Borg-style headsets as we interfaced data structures with big digital gloves, or at the very least sporting some serious electronic implants.

So far, though, the average computer on the average desk hasn't evolved much past the interaction trinity of mouse, monitor, keyboard. (And never mind the fact that it's still pretty much lodged on your desk.) Virtual reality interfaces are, for the most part, making themselves useful in niches such as distance medicine and manufacturing (ask the engineers at Boeing about their cool Joint Strike Fighter VR Lab—not a game but a virtualized 3D space that lets engineers evaluate CAD designs for JSF supportability without building an actual model and making actual people suffer through it).

That's not most of us. Instead, technology still tends to adapt to humans rather than the other way around. While we wait for that irresistible alternative to the Holy Hardware Trinity, software and hardware developers are trying more humble approaches to aligning the computer experience more closely with human biology.

SIGHT. Improvements in computing visuals sprawl in all directions. That's partly because the eye is a wonderfully sensitive instrument, designed to process the real world in real time and contemptuous, if an organ can be contemptuous, of the flat-and-jaggy world of the modern monitor.

It's also partly because we ask such a lot of our eyes. For instance, despite the past's best prediction of the future, most of us still do a tremendous amount of reading—on monitors and on all sorts of smaller devices, such as the long-predicted e-books slated to overrun the paper variety Real Soon Now. Microsoft's been doing its part to speed the plow, making a big push behind its ClearType technology last week. Such technologies work by adapting type display to the way our eye interprets computer screens and their tiny red-green-blue pixel configurations. It really does look sharper and read more smoothly, which makes one suspect that reading isn't a dead art even in The Future! The nice thing with ClearType is there's no hardware required; all it does is make smarter use of the hardware you've got—both the silicon sort and the two sensors in the front of your head.
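For the curious, the subpixel idea reduces to a few lines of code. The sketch below is a toy in Python, not Microsoft's actual algorithm; the function name and the black-text-on-white assumption are mine. It treats each pixel's red, green, and blue stripes as three independently addressable slivers of a glyph:

```python
def subpixel_pack(coverage):
    """Pack a row of glyph ink coverage (0.0-1.0), sampled at triple
    horizontal resolution, into (r, g, b) pixel values for black text
    on a white background: one color channel per subpixel stripe."""
    assert len(coverage) % 3 == 0
    pixels = []
    for i in range(0, len(coverage), 3):
        triple = coverage[i:i + 3]
        # More ink on a stripe means a darker channel. A production
        # renderer also filters neighboring subpixels to hide fringing.
        pixels.append(tuple(round(255 * (1 - c)) for c in triple))
    return pixels

# The edge of a vertical stem, positioned to a third of a pixel:
row = subpixel_pack([0.0, 0.5, 1.0, 1.0, 0.5, 0.0])
```

The payoff is triple the effective horizontal resolution for text, squeezed out of pixels the monitor already has.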

Image quality isn't everything, though; images are still bandwidth-intensive and hard to move around the Net efficiently. The MPAA won't like this thought, but just over the horizon is MP4, which promises to do for video what MP3 did for audio. By compressing video through cutting out the bits our eyes won't miss (as MP3 cuts out the stuff our ears can't process), MP4 should make streaming video feasible, if not perfect, for your average Jane with a DSL or cable modem. The movie industry will be thrilled, I assure you.
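The principle, stripped to a cartoon: transform the picture, then discard the components too faint for the eye to register. A hedged sketch follows; the threshold and function name are arbitrary inventions for illustration, not anything from an MPEG spec:

```python
def lossy_compress(coeffs, threshold=4):
    """Zero out transform coefficients too small to be perceived.
    Long runs of zeros are what make the result compress so well."""
    return [c if abs(c) >= threshold else 0 for c in coeffs]

# Imagine these are frequency coefficients for one block of an image:
# a little energy in the broad strokes, a whisper in the fine detail.
coeffs = [120, -38, 9, 3, -2, 1, 0, -1]
kept = lossy_compress(coeffs)
```

The decoded picture isn't identical to the original, merely indistinguishable to the viewer, which is the whole perceptual-coding bet.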

SOUND. After years of pundit predictions that the Internet was going to replace our televisions, the truth comes out: We'd prefer it replaced our radios. The aforementioned MP3 has made the kind of impression that online video only hoped to; we've come a long way from giant WAV files and the lumpy, hard-to-configure MBONE hookups of much less than a decade past. Real's streaming audio opened the door; MP3 broke it down.

There's interesting stuff in store in MP4 for sound—not the whole spec, but the SA portion thereof. SA specifies audio standards that turn sound into a computer program rather than a data file—no big deal to you, but a whole lot of lawyers are going to flip when it comes time to regulate this stuff. You heard it here first, folks.
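To see why "sound as a program" matters, consider how little data actually changes hands. The sketch below illustrates the idea only; it is not the actual MPEG-4 SA language, and the bare-sine-wave instrument and score format are invented for the example:

```python
import math

def render(score, rate=8000):
    """Expand a tiny 'score' of (frequency_hz, duration_s) events into
    raw audio samples on the receiving end, instead of shipping the
    samples themselves over the wire."""
    samples = []
    for freq, dur in score:
        for n in range(int(rate * dur)):
            samples.append(math.sin(2 * math.pi * freq * n / rate))
    return samples

# A few dozen bytes of "program" become thousands of samples:
audio = render([(440, 0.5), (660, 0.5)])
```

The file you transmit isn't a recording; it's instructions for making one, which is exactly the distinction the lawyers will have to untangle.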

TOUCH. For years it has been threatened that computers would eventually replicate tactile sensations in the "real world"—the texture of cloth, the vibration of an accelerating engine, or the heat from a nearby flame. (And don't even ask about the bright future predicted for teledildonics—yes, it's what you think it is.)

At the moment, touch-sensitive interfaces tend toward the very abstruse and the very trivial, with little play in between. On the low end, a number of gamer-oriented peripherals from companies such as Logitech provide force feedback (a.k.a. haptics—you tug at the joystick, it resists) or vibration feedback (you accelerate, you feel a revving) for games thus designed—spiffy if you're a Flight Simulator junkie, not so much if you're a devotee of The Sims.
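The mechanics are simpler than the marketing: the stick reports its position, the software computes a force, the motors apply it. Here is a minimal sketch, assuming a spring-to-center model plus engine-speed vibration; the constants and the function itself are illustrative, not any vendor's API:

```python
def feedback_force(displacement, rpm, k=0.8, vib_gain=0.001):
    """displacement: how far the stick is from center (-1.0 to 1.0).
    Returns (spring_force, vibration_amplitude) for the motors."""
    spring = -k * displacement        # pull the stick back toward center
    vibration = vib_gain * rpm        # shake harder as the engine revs
    return spring, vibration

# Stick pushed halfway forward, engine at 3000 rpm:
force, shake = feedback_force(displacement=0.5, rpm=3000)
```

Run that loop a few hundred times a second and the plastic in your hand starts feeling like an airplane.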

On the high end, companies such as SensAble are demonstrating hardware-software combinations that let sculptors use their hands and a penlike stylus to work in a virtualized sort of clay (feels like clay, acts like clay with an Undo function). A recent exhibition in Boston displayed work by artists who used the SensAble wares to create art that was then sent to a three-dimensional "printer" that produces models in substances such as wax; one artist instead chose to send his work over the Net to a mill to be cut from a block of aluminum.

A system like SensAble's would set you back around $10,000 plus 3D printer plus PC, though the company estimates that prices will come down to average-user levels in five years or so. For now, most potential customers are commercial artists from toy companies and the like. Touch—our only fully duplex sense (that is, able to send and receive information at the same time) and the most diffusely experienced—also plays a role in very specific 3D applications such as the Boeing simulator mentioned above.

SMELL AND TASTE. Themselves the least understood of the senses, smell and taste lag behind the other three in the realm of computer-human interface. This is not necessarily a bad thing, as no one wants to see their officemates licking the computers.

If anyone's got plans otherwise, they're keeping a low profile. Taste makes no showing at all online. (Feel free to add your own pithy observation here.) As for scent, your only hope is DigiScents, creators (according to their Web site) of—well, we'll see, so to speak.

Improbable? Scent is a tough prospect for digitization; humans and their 5 million olfactory receptor cells can distinguish among as many as 10,000 odors, and the biological mechanism for smelling involves molecules fitting precisely into one of 500-1,000 different kinds of receptors. (Your eye, by contrast, makes do with just three kinds of receptors.) And two people won't necessarily perceive a scent the same way, even though scents themselves can be as unique as fingerprints.
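A back-of-envelope calculation shows the scale of the problem. Even under the crude assumption that each receptor type is a simple on/off switch (real receptors respond in degrees, so this undercounts), the space of possible activation patterns explodes with the number of receptor types:

```python
def patterns(receptor_types):
    """Count the distinct on/off activation patterns across receptor types."""
    return 2 ** receptor_types

eye = patterns(3)      # 3 cone types: 8 crude combinations
nose = patterns(500)   # ~500 receptor types: an astronomical number
```

Which is one reason a monitor can fake nearly any color with three phosphors, while no three-cartridge gadget is going to fake any smell.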

Needless to say, anything that has to cook up molecules isn't strictly software based; DigiScents' ScentStream software would combine with the iSmell Personal Scent Synthesizer. DigiScents claims 600-plus games developers alone for that dynamic duo, but this reporter will believe it when she smells it, bringing a fresh dimension to the phrase "a nose for news." In the future, we may actually know what news smells like. Is it too late to stick with the 20th century?
