User Interfaces
Cheaper and more powerful personal computers are making it possible to perform
processor-intensive tasks on the desktop. Breakthroughs in technology, such as speech
recognition, are enabling new ways of interacting with computers. And the convergence of
personal computers and consumer electronics devices is broadening the base of computer users
and placing a new emphasis on ease of use. Together, these developments will drive the industry
in the next few years to build the first completely new interfaces since SRI International and
Xerox's Palo Alto Research Center did their pioneering research into graphical user interfaces
(GUIs) in the 1970s.
True, it's unlikely that you'll be ready to toss out the keyboard and mouse any time soon.
Indeed, a whole cottage industry, inspired by the hyperlinked design of the World Wide Web,
has sprung up to improve today's graphical user interface. Companies are developing products
that organize information graphically in more intuitive ways. XML-based formats enable users to
view content, including local and network files, within a single browser interface. But it is the
more dramatic innovations such as speech recognition that are poised to shake up interface
design.
Speech will become a major component of user interfaces, and applications will be
completely redesigned to incorporate speech input. Palm-size and handheld PCs, with their
cramped keyboards and basic handwriting recognition, will benefit from speech technology.
Though speech recognition may never be a complete replacement for other input devices,
future interfaces will offer a combination of input types, a concept known as multimodal input. A
mouse is a very efficient device for desktop navigation, for example, but not for changing the
style of a paragraph. By using both a mouse and speech input, a user can first point to the
appropriate paragraph and then say to the computer, 'Make that bold.' Of course, multimodal
interfaces will involve more than just traditional input devices and speech recognition.
Eventually, most PCs will also have handwriting recognition, text-to-speech (TTS), the ability to
recognize faces or gestures, and even the ability to observe their surroundings.
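To make the multimodal idea concrete, here is a minimal Python sketch of how the spoken word 'that' might be resolved against the most recent mouse selection. The Paragraph and Document classes and their click and speak methods are invented for illustration, not any particular toolkit's API.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Paragraph:
        text: str
        bold: bool = False

    @dataclass
    class Document:
        paragraphs: List[Paragraph]
        selected: Optional[int] = None  # index set by the most recent mouse click

        def click(self, index: int) -> None:
            # Pointer input: efficient for picking *which* paragraph.
            self.selected = index

        def speak(self, command: str) -> None:
            # Speech input: efficient for saying *what to do* with it.
            # "that" is resolved against the current pointer selection.
            if "bold" in command.lower() and self.selected is not None:
                self.paragraphs[self.selected].bold = True

    doc = Document([Paragraph("First paragraph."), Paragraph("Second paragraph.")])
    doc.click(1)                 # the user points at the second paragraph
    doc.speak("Make that bold")  # then issues the command by voice
    print(doc.paragraphs[1])     # Paragraph(text='Second paragraph.', bold=True)

The point of the sketch is the division of labour: the pointer supplies the target, and speech supplies the action.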
At The Intelligent Room, a project of Massachusetts Institute of Technology's Artificial
Intelligence Lab, researchers have given sight to PCs running Microsoft Windows through the
use of video cameras. 'Up to now, the PC hasn't cared about the world around it,' said Rodney A.
Brooks, the Director of MIT's Artificial Intelligence Lab. 'When you combine computer vision
with speech understanding, it liberates the user from having to sit in front of a keyboard and
screen.'
It's no secret that the amount of information at the fingertips of computer users, both on the
Internet and within intranets, has been expanding rapidly. This information onslaught has led
to an interest in intelligent agents, software assistants that perform tasks such as retrieving and
delivering information and automating repetitive tasks. Agents will make computing
significantly easier. They can be used as Web browsers, help desks, and shopping assistants.
Combined with the ability to look and listen, intelligent agents will bring personal computers one
step closer to behaving more like humans. This is not an accident. Researchers have long noted
that users have a tendency to treat their personal computers as though they were human. By
making computers more 'social,' they hope to also make them easier to use.
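As a rough illustration only, the short Python sketch below shows the retrieve-and-filter behaviour such an agent might perform on a stream of headlines; the function names and the simple keyword-matching rule are invented for the example and stand in for whatever relevance model a real agent would use.

    from typing import List

    def matches_interests(item: str, interests: List[str]) -> bool:
        # Trivial keyword matching stands in for a real relevance model.
        return any(keyword.lower() in item.lower() for keyword in interests)

    def deliver(incoming: List[str], interests: List[str]) -> List[str]:
        # The retrieve-and-deliver step: keep only the items the user is
        # likely to want to see.
        return [item for item in incoming if matches_interests(item, interests)]

    headlines = [
        "Speech recognition reaches the desktop",
        "Quarterly earnings roundup for retailers",
        "New handheld PCs add handwriting recognition",
    ]
    print(deliver(headlines, ["speech", "handwriting"]))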
As these technologies enter mainstream applications, they will have a marked impact on
the way we work with personal computers. Soon, the question will not be 'What does software
look like?' but 'How does it behave?'