Purely active input devices can have both discrete components (e.g. buttons) and manually driven
continuous components, which means that the user must manipulate the component in order to generate
the device’s continuous behavior. Trackballs and sliders are examples of manually driven continuous
components and they allow the user to generate sequences of values from a given range.
Data reports are composed of either discrete components, continuous components, or a
combination of the two.
Discrete input device components typically generate a single data value (i.e., a Boolean value or
an element from a set) based on the user’s action. They are often used to change modes in an
application, such as changing the drawing mode in a desktop 3D modeling program, or to indicate
that the user wants to start performing an action, such as instantiating a navigation technique.
Continuous input device components generate multiple data values (i.e., real-valued numbers,
pixel coordinates, etc.) in response to a user’s action and, in many cases, regardless of what the
user is doing (tracking systems and bend-sensing gloves are examples). In many cases, input
devices combine discrete and continuous components, providing a larger range of device-interaction technique mappings.
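As a concrete illustration, a single data report from a hypothetical tracked wand with two buttons might combine both kinds of components. This is a minimal sketch; the class and field names are invented for this example and do not come from any particular device API:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DataReport:
    """One sample from a hypothetical tracked wand with two buttons."""
    button_a: bool                                    # discrete component
    button_b: bool                                    # discrete component
    position: Tuple[float, float, float]              # continuous: x, y, z
    orientation: Tuple[float, float, float, float]    # continuous: quaternion (w, x, y, z)

# A single report sampled from the device:
report = DataReport(button_a=True, button_b=False,
                    position=(0.12, 1.45, -0.30),
                    orientation=(1.0, 0.0, 0.0, 0.0))
```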
- Motion Trackers
- Eye Trackers
- Data Gloves
MOTION TRACKING
Motion tracking is the process of recording the movement of objects or people. It is used
in military, entertainment, sports, medical applications, and for validation of computer vision and
robotics. In filmmaking and video game development, it refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When
it includes face and fingers or captures subtle expressions, it is often referred to as performance
capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and
games, motion tracking usually refers more to match moving.
One of the most important aspects of 3D interaction in virtual worlds is providing a correspondence
between the physical and virtual environments. As a result, having accurate tracking is a crucial part
of making interaction techniques usable within VE applications. The critical characteristics of motion trackers include their range, latency (delay between the time a motion occurs and when it is reported),
jitter (noise or instability), and accuracy. Currently, there are a number of different motion-tracking
technologies in use, which include:
- magnetic tracking
- mechanical tracking
- acoustic tracking
- inertial tracking
- optical tracking
- hybrid tracking
Magnetic Tracking- Magnetic trackers use a transmitting device that emits a low-frequency magnetic
field. A small sensor, the receiver, determines its position and orientation relative to this magnetic
source. The range of such trackers varies, but they typically work within a radius of 4 to 30 feet. The figure shows an example of a magnetic tracking system. It uses a small emitter and receivers and has
better accuracy than larger range systems. However, its range is limited to a 4-foot radius, which means
the device is not appropriate for large display environments such as surround-screen visual displays or
even HMDs where the user needs a lot of space to roam.
In general, magnetic tracking systems are accurate to within 0.1 inches in position and 0.1 degrees in
orientation. Their main disadvantage is that any ferromagnetic or conductive (metal) objects present in
the room with the transmitter will distort the magnetic field, reducing the accuracy. These accuracy
reductions can sometimes be quite severe, making many interaction techniques, especially gesture-based techniques, difficult to use.
Mechanical Tracking- Mechanical trackers have a rigid structure with a number of interconnected
mechanical linkages combined with electromechanical transducers such as potentiometers or shaft
encoders. One end is fixed in place, while the other is attached to the object to be tracked (usually the
user’s head or hand). As the tracked object moves, the linkages move as well, and measurements are
taken from the transducers to obtain position and orientation information. Arm-mounted visual displays
use this type of tracking technology. Mechanical trackers are very accurate and transmit information
with very low latencies. However, they are often bulky, limiting the user’s mobility and making it difficult
to use physically based navigation techniques.
Acoustic Tracking- Acoustic tracking devices use high-frequency sound emitted from source
components and received by microphones. The source may be on the tracked object, with the
microphones placed in the environment (an outside-in approach), or the source may be in the
environment, with the microphones on the tracked object (an inside-out approach). The dominant
approach to determining position and orientation information with acoustic tracking is to use the time-of-flight duration of ultrasonic pulses.
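To make the time-of-flight idea concrete, here is a minimal sketch (the three-microphone layout, function names, and the least-squares solve are illustrative assumptions, not a description of any particular commercial tracker). The distance to each microphone is the pulse's travel time multiplied by the speed of sound, and several such distances let the system trilaterate the emitter's position:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def trilaterate(mics, times):
    """Estimate emitter position from time-of-flight to three known microphones.

    mics  -- 3x3 array of microphone positions (one per row)
    times -- travel time of the ultrasonic pulse to each microphone (seconds)
    """
    d = SPEED_OF_SOUND * np.asarray(times)      # distances to each microphone
    p1, p2, p3 = mics
    # Subtracting the sphere equations pairwise yields a linear system A x = b.
    A = 2 * np.array([p2 - p1, p3 - p1])
    b = np.array([
        d[0]**2 - d[1]**2 + np.dot(p2, p2) - np.dot(p1, p1),
        d[0]**2 - d[2]**2 + np.dot(p3, p3) - np.dot(p1, p1),
    ])
    # Two equations, three unknowns: least-squares gives the minimum-norm fit
    # (a real system would use a fourth microphone for a unique 3D solution).
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```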
Inertial Tracking- Inertial tracking systems use a variety of inertial measurement devices such as
angular-rate gyroscopes and linear accelerometers. These devices provide derivative measurements (i.e.,
gyroscopes provide angular velocity, and linear accelerometers provide linear acceleration), so they must
be integrated to obtain position and orientation information. Since the tracking system is in the sensor, the range is limited to the length of the cord that attaches the sensor to the electronics unit (wireless tracking
is also possible with these systems). In addition, these devices can produce measurements at high
sampling rates. Inertial tracking systems were originally used in ships, submarines, and airplanes in
the 1950s. However, the weight of these devices prohibited their use in motion tracking until they
became small enough to be built as microelectromechanical systems (MEMS).
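Because these sensors report derivatives, the tracking software must numerically integrate the samples. The sketch below is a deliberately simplified one-axis illustration with invented names; a real system integrates in 3D and must also subtract gravity and correct for drift:

```python
def integrate_inertial(samples, dt):
    """Dead-reckon 1D orientation and position from raw inertial samples.

    samples -- list of (angular_velocity, linear_acceleration) tuples
    dt      -- sampling interval in seconds
    """
    angle, velocity, position = 0.0, 0.0, 0.0
    for omega, accel in samples:
        angle += omega * dt        # integrate angular rate once -> angle
        velocity += accel * dt     # integrate acceleration once -> velocity
        position += velocity * dt  # integrate velocity once more -> position
    return angle, position

# Note: each integration step accumulates sensor noise, which is why pure
# inertial tracking drifts and is usually combined with another technology.
```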
Optical Tracking- Another approach to position and orientation tracking of users and physical objects is
from measurements of reflected or emitted light. These types of trackers use computer vision techniques
and optical sensors such as cameras, infrared emitters, or lateral effect diodes, which generate signals
proportional to the position of incoming light along one axis (i.e., 2D displacement measurement). A
variety of different cameras can be used, from simple desktop webcams to sophisticated high-resolution
cameras with high sampling rates and pixel densities.
Like acoustic trackers, optical tracking systems use either outside-in or inside-out configurations.
Outside-in systems have their sensors mounted at fixed locations in the environment, and tracked
objects are marked with active or passive landmarks such as retro-reflective markers or colored gloves.
The number and size of these landmarks vary depending on the type of optical tracking system and how
many DOF are required. In some cases, no landmarks are used at all. Inside-out systems place optical
sensors on the user or tracked object while the landmarks are placed in the environment. This
configuration can deliver accurate position and orientation tracking without environmental interference or distortion.
Hybrid Tracking- Hybrid trackers put more than one tracking technology together to help increase
accuracy, reduce latency, and provide a better overall 3D interaction experience. In general, individual tracking technologies are used to compensate for each other’s weaknesses. An example of
such a device is shown in Figure 4.10. This example combines inertial and ultrasonic tracking
technologies. The inertial component measures orientation and the ultrasonic component measures
position, enabling the device to attain 6 DOF. Moreover, information from each component is used to
improve the accuracy of the other. As a side note, this tracking system has the added advantage of being
wireless, with the user wearing a small battery-powered electronics box on her belt. The major difficulty
with hybrid trackers is that more components produce more complexity. The extra complexity is
warranted, however, if tracking accuracy is significantly improved.
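One simple way to fuse two such technologies is a complementary filter: trust the fast but drifting inertial estimate at short time scales and the slower but absolute ultrasonic estimate at long time scales. The blending weight and names below are illustrative assumptions, not the algorithm of any specific commercial hybrid tracker:

```python
def complementary_filter(inertial_pos, ultrasonic_pos, alpha=0.98):
    """Blend a drifting high-rate estimate with an absolute low-rate one.

    inertial_pos   -- position predicted by integrating inertial data
    ultrasonic_pos -- absolute position from the ultrasonic component
    alpha          -- weight on the inertial estimate (close to 1.0)
    """
    return [alpha * i + (1.0 - alpha) * u
            for i, u in zip(inertial_pos, ultrasonic_pos)]

# Each frame: predict with inertial integration, then pull the estimate
# toward the ultrasonic fix so drift never accumulates unbounded.
fused = complementary_filter([1.02, 0.48, 1.99], [1.00, 0.50, 2.00])
```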
EYE TRACKING
Eye trackers are purely passive input devices used to determine where the user is
looking. Eye-tracking technology is primarily based on computer vision techniques: the device tracks
the user’s pupils using corneal reflections detected by a camera. Devices can be worn or embedded into
a computer screen, making for a much less obtrusive interface. Other eye-tracking techniques include
electrooculography, which measures the skin’s electric potential differences using electrodes placed around
the eye, and embedding mechanical or optical reference objects in contact lenses that are worn directly on
the eye.
From a generic interaction perspective, eye-tracking systems have been used both as an evaluation tool
and to interact with an application. For example, these devices are used to collect information about a
user’s eye movements in the context of psychophysical experiments, to get application usage
patterns to help improve the interface, or for training in visual inspection tasks. Eye-tracking systems are
also used as input devices. An example would be a user controlling a mouse pointer strictly with his
eyes. In the context of 3D interface design, active eye-tracking systems have the potential to improve upon
many existing 3D interaction techniques. For example, there are numerous techniques that are based on
gaze direction (e.g., gaze-directed steering, gaze-directed manipulation), which use the user’s head tracker as an approximation to where she is looking. Since the gaze vector is only accurate if the user is
looking straight ahead, usability problems can occur if the user looks in other directions while keeping
the head stationary. Eye-tracking devices might help improve these gaze-directed techniques since the
actual gaze from the user can be obtained.
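As a minimal sketch of the idea (names invented for illustration), gaze-directed steering simply moves the viewpoint along whatever gaze vector is available, so substituting an eye tracker's measured gaze for the head tracker's forward vector is a one-argument change:

```python
import numpy as np

def gaze_directed_steering(position, gaze_vector, speed, dt):
    """Move the viewpoint along the current gaze direction."""
    direction = np.asarray(gaze_vector, dtype=float)
    direction /= np.linalg.norm(direction)      # normalize the gaze vector
    return np.asarray(position, dtype=float) + speed * dt * direction

# With an eye tracker, gaze_vector comes from the measured gaze rather than
# the head orientation, so looking sideways steers correctly.
```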
DATA GLOVES
In some cases, it is useful to have detailed tracking information about the user’s hands,
such as how the fingers are bending or if two fingers have made contact with each other. Data gloves are
input devices that provide this information. Data gloves come in two basic varieties: bend-sensing gloves and pinch gloves.
- Bend-Sensing Gloves: Bend-sensing data gloves are purely passive input devices used to
detect postures of the hand. For example, the device can distinguish between a fist, a pointing
posture, and an open hand. The raw data from the gloves is usually given in the form of joint
angle measurements, and software is used to detect postures based on these measurements.
- Pinch Gloves: The Pinch Glove (see Figure) system is an input device that determines if a user is
touching two or more fingertips together. These gloves have a conductive material at each of the fingertips
so that when the user pinches two fingers together, electrical contact is made. These devices are often
used for performing grabbing and pinching gestures in the context of object selection, mode switching, and
other techniques.
- Combining Bend-Sensing Data and Pinch Input: Both the Pinch Gloves and bend-sensing
gloves have limitations. Although it is possible to determine if there is finger contact (e.g., index
finger to thumb) with a bend-sensing glove, some form of hand gesture recognition is required,
which will not be as accurate as the Pinch Glove (which has essentially 100% accuracy,
assuming the device is functioning properly). Conversely, one can get an idea of how the
fingers are bent when using Pinch Gloves, but they provide only very rough estimates. Ideally,
a data glove should have the functionality of both bend-sensing gloves and Pinch Gloves; a
sketch of interpreting both kinds of data appears after this list.
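Here is the sketch promised above (threshold values, posture names, and the data layout are all invented for illustration): joint angles from a bend-sensing glove drive posture classification, while fingertip contacts from Pinch Gloves give reliable discrete events.

```python
def classify_posture(joint_angles, curl_threshold=60.0):
    """Classify a hand posture from per-finger flex angles (degrees)."""
    curled = {f for f, a in joint_angles.items() if a > curl_threshold}
    fingers = set(joint_angles)
    if curled == fingers:
        return "fist"
    if curled == fingers - {"index"}:
        return "pointing"
    if not curled:
        return "open hand"
    return "unknown"

def pinch_events(contacts):
    """Map reliable fingertip contacts (from Pinch Gloves) to discrete events."""
    events = []
    if frozenset({"thumb", "index"}) in contacts:
        events.append("select")
    if frozenset({"thumb", "middle"}) in contacts:
        events.append("menu")
    return events

print(classify_posture({"thumb": 75, "index": 10, "middle": 80,
                        "ring": 82, "pinky": 78}))          # -> "pointing"
print(pinch_events({frozenset({"thumb", "index"})}))        # -> ["select"]
```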
Q3. What are 3D mice and how do they work? Also describe their types.
Ans: 3D Mice- In many cases, specifically with motion trackers, these tracking devices are combined
with other physical device components such as buttons, sliders, knobs, and dials to create more functionally
powerful input devices. We call these devices 3D mice and define them broadly as handheld or worn input
devices that combine motion tracking with a set of physical device components.
The distinguishing characteristic of 3D mice, as opposed to regular 2D mice, is that the user physically
moves them in 3D space to obtain position and/or orientation information instead of just moving the
device along a flat surface. Therefore, users can hold the device or, in some cases, wear it. Additionally,
with orientation information present, it is trivial to determine where the device is pointing (the device’s
direction vector), a function used in many fundamental 3D interaction techniques. Because of their
generality, they can be mapped to many different interaction techniques, and in one form or another, they
are often the primary means of communicating user intention in 3D UIs for VE applications.
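Computing the device's direction vector from a tracked orientation really is trivial: rotate a canonical forward vector by the reported orientation. A minimal sketch using quaternions (the convention that the device points down -Z is an assumption for illustration):

```python
import numpy as np

def direction_vector(q, forward=(0.0, 0.0, -1.0)):
    """Rotate the canonical forward vector by orientation quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    v = np.asarray(forward)
    u = np.array([x, y, z])
    # Quaternion rotation of a vector: v' = v + 2w(u x v) + 2(u x (u x v))
    return v + 2.0 * w * np.cross(u, v) + 2.0 * np.cross(u, np.cross(u, v))

# Identity orientation leaves the device pointing down -Z:
print(direction_vector((1.0, 0.0, 0.0, 0.0)))   # -> [ 0.  0. -1.]
```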
There are two types of 3D mice, as follows:
- Handheld 3D Mice- A common design approach for 3D mice is to place a motion tracker
inside a structure that is fitted with different physical interface widgets. Actually, one of the first
3D mice to be developed used no housing at all. The “bat”, so named because it is a mouse that
flies, was developed by Colin Ware in the late 1980s. It was simply a 6-DOF tracking device with
three buttons attached to it. Such a device is rather easy to build with a few electrical components
(provided you have the tracking device). A more sophisticated and elegant version of the bat is
shown in Figure below (a). This device houses a motion tracker in a structure that looks like a
simple remote control. It is commonly used in conjunction with surround-screen displays for
both navigation and the selection of 3D objects. The physical structure that houses the motion tracker is often a replication of an input device used
in the real world. For example, the 3D mouse (as shown in (b) figure) is modeled after an Air
Force pilot’s flight stick. Some 3D mice have also been developed to look like their 2D
counterparts. For example, the Fly Mouse looks similar to a conventional 2D mouse, but it uses
acoustic tracking, has five buttons instead of two, and can also be used as a microphone for speech
input.
- User-Worn 3D Mice- Another approach to the design of 3D mice is to have the user wear them
instead of holding them. Assuming the device is light enough, wearing it on the user's finger, for
example, makes the device an extension of the hand. Figure (a) shows the Ring Mouse, an example of
such a device. It is a small, two-button, ring-like device that uses ultrasonic tracking, which generates
only position information. One of the issues with this device is that it has a limited
number of buttons because of its small form factor. The Finger-Sleeve, shown in figure (b), is a finger-worn 3D mouse that is similar to the Ring
Mouse in that it is small and lightweight, but it adds more button functionality in the same
physical space by using pop-through buttons. Pop-through buttons have two clearly
distinguished activation states corresponding to light and firm finger pressure.
Q4. Explain different Tracking devices.
Ans: Various tracking devices are described as follows:
- Nintendo Wii Remote ("Wiimote")- The Wii Remote does not offer true 6-DOF tracking, since it
cannot provide absolute position on its own; instead, it is equipped with a multitude of sensors
that turn an essentially 2D device into a great tool for interaction in 3D environments. The device
has gyroscopes to detect the user's rotation, ADXL330 accelerometers for obtaining the speed and
movement of the hands, optical sensors and electronic compasses for determining orientation, and
infrared sensing to capture position. This type of device can be affected by external infrared
references such as light bulbs or candles, causing errors in positional accuracy. An essential
capability of the Wii Remote is its motion sensing, which allows the user to interact with and
manipulate items on screen via gesture recognition and pointing, using accelerometer and optical
sensor technology (a simple gesture-sensing sketch appears after this list).
- Google Tango Devices- The Tango Platform is an augmented reality computing platform
developed by the Advanced Technology and Projects (ATAP) group, a skunkworks division of
Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices,
such as smartphones and tablets, to detect their position relative to the world around them
without using GPS or other external signals. It can therefore be used to provide 6-DOF input,
which can also be combined with the device's multi-touch screen. Google Tango devices can be
seen as more integrated solutions than the early prototypes that combined spatially tracked
devices with touch-enabled screens for 3D environments.
- Microsoft Kinect- The Microsoft Kinect offers a different motion capture technology for
tracking. Instead of basing its operation on worn sensors, it is based on a structured-light
scanner, located in a bar, which tracks the entire body by detecting about 20 spatial points and
measuring each point in three degrees of freedom to obtain its position, velocity, and rotation.
Its main advantages are its ease of use and the fact that the user need not wear or hold any
external device; its main disadvantage lies in its inability to detect the orientation of the user,
thus limiting certain spatial and guidance functions.
- Leap Motion- The Leap Motion is a hand-tracking system designed for small spaces, enabling
new kinds of 3D interaction in desktop applications and offering great fluidity when browsing
through three-dimensional environments in a realistic way. It is a small device that connects to a
computer via USB and uses two cameras with infrared LEDs, allowing it to analyze a
hemispherical area extending about 1 meter above its surface. It records at up to 300 frames per
second, and the information is sent to the computer to be processed by the company's software.
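As referenced in the Wii Remote item above, here is a minimal sketch of accelerometer-based gesture sensing (the threshold and names are invented for illustration; this is not Nintendo's actual algorithm). A shake can be flagged when the acceleration magnitude departs strongly from gravity:

```python
import math

GRAVITY = 9.81  # m/s^2

def detect_shake(accel_samples, threshold=15.0):
    """Flag a shake gesture when acceleration magnitude departs far from gravity.

    accel_samples -- iterable of (ax, ay, az) readings in m/s^2
    """
    for ax, ay, az in accel_samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - GRAVITY) > threshold:
            return True
    return False
```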
Q5. Explain all special-purpose input devices.
Ans: Many other types of devices are used in 3D interfaces. These devices are often designed for
specific applications or used in specific interfaces. These devices include:
ShapeTape: ShapeTape (shown in Figure) is a flexible, ribbon-like tape of fiber-optic curvature
sensors that comes in various lengths and sensor spacing. Because the sensors provide bend and twist
information along the tape’s length, it can be easily flexed and twisted in the hand, making it an ideal
input device for creating, editing, and manipulating 3D curves.
Interaction Slippers: In many surround-screen display configurations where the floor is actually a
display surface, users must wear slippers when they enter the device to avoid making scuff marks and
tracking in dirt. An interesting input device takes advantage of the need for slippers in these
environments: the Interaction Slippers (see Figure). The Interaction Slippers embed a wireless trackball
device (the Trackman) into a pair of common house slippers. The slippers use wireless radio technology
to communicate to the host computer. The Trackman is inserted into a hand-made pouch on the right
slipper and rewired. Two of the Trackman’s three buttons are connected to a pair of conductive cloth
patches on the instep of the right slipper. On the instep of the left slipper, two more conductive cloth
patches are attached. Touching a cloth patch on the left slipper to a cloth patch on the right slipper
completes the button press circuit. This design enables two gestures corresponding to heel and toe
contacts respectively. The slippers were designed for interacting with the Step WIM navigation
technique, in which a miniature version of the world is placed on the ground under the user’s feet,
allowing him to quickly travel to any place in the VE.
CavePainting Table: An example of an input device that was specifically developed for a particular
3D application is the CavePainting Table (see Figure ) used in CavePainting, a system for painting 3D
scenes in a VE. The CavePainting Table uses a prop-based design that relies upon multiple cups of
paint and a single tracked paintbrush. These paint cup props stay on a physical table that slides into the
surround-screen device and also houses knobs and buttons used for various interaction tasks. In
conjunction with the table, a real paintbrush is augmented with a single button that turns the “paint” on
and off. The bristles of the brush are covered with conductive cloth, and users can dip the brush into the
paint cups (which are linked with the conductive cloth as well) to change brush strokes. A tracked bucket is
used to throw paint around the virtual canvas.
Transparent Palettes: In some cases, making a simple addition to an existing input device can create a
powerful tool for interacting in 3D applications. For example, when interacting with 3D applications that
utilize workbench-style displays, attaching a motion tracker to a piece of Plexiglas can create a useful tool
for interacting in 2D and 3D. In addition, these devices can also have touch-sensitive screens (see Figure).
Such a device allows the user to perform 2D interaction techniques, such as writing and selection of
objects and commands from 2D palettes, as well as 3D interaction techniques, such as volumetric
selection by sweeping the device through the virtual world. This “pen-and-tablet” metaphor has been
used extensively in 3D UIs.
Control Action Table: The last input device is the Control Action Table (CAT), which was designed
for use in surround-screen display environments. This freestanding device (shown in Figure) looks like a
circular tabletop. The CAT uses angular sensors to detect orientation information using three nested
orientation axes. The device also has an isometric component; the tabletop is equipped with a
potentiometer that detects forces in any 3D direction. Thus, the user can push or pull on the device for
translational movement. Additionally, the CAT has a tablet for 2D interaction mounted on the
tabletop, which makes it unique because it supports both 6-DOF and 2D input in the same device.
Other advantages of the CAT include the ability to control each DOF individually and its location
persistence (meaning that its physical state does not change when released). The CAT does have some
inherent limitations because the nature of the nested orientation axes can make some orientations hard to
specify, and in certain configurations (e.g., when the tabletop is vertical), translational movement can be
difficult to perform as well.
Q6. What is Direct Human Input? Explain in detail.
Ans: Direct Human Input- A powerful approach to interacting with 3D applications is to obtain
data directly from signals generated by the human body. With this approach, the user actually becomes
the input device. For example, a user could stand in front of a camera and perform different
movements, which the computer would interpret as commands. Here, we specifically discuss speech,
bioelectric, and brain-computer input and how they can be used in 3D UIs.
Speech Input: Speech input provides a nice complement to other input devices. It is a natural way to
combine different modes of input to form a more cohesive interface. In general, when
functioning properly, speech input can be a valuable tool in 3D UIs, especially when both of the user’s
hands are occupied. Beyond choosing a good speech recognition engine, there are many other important issues to consider when using speech for a 3D interface.
There are tradeoffs that must be made when dealing with speech input. One important issue is where
the microphone is to be placed. Ideally, a wide-area microphone is used so that the user need not wear a
headset. Placing such a microphone in the physical environment could be problematic since it might pick
up noise from other people or machines in the room. One of the big problems with using speech input is
having the computer know when to and when not to listen to the user’s voice. Often, a user is
conversing with a collaborator with no intention of issuing voice commands, but the application
“thinks” the user is speaking to it. This misinterpretation can be very troublesome.
One of the best ways to avoid this problem is to use an implicit or invisible push-to-talk scheme. A
traditional push-to-talk scheme lets the user tell the application when he or she is speaking to it, usually
by pushing a button. In order to maintain the naturalness of the speech interface, we do not want to add to
the user’s cognitive load. The goal of implicit push-to-talk is to embed the “push” into existing
interaction techniques so the user does not have the burden of remembering to signal the application that
a voice command is about to be issued. As an example, consider a furniture layout application in which a
user wants to place different pieces of furniture into a room or other architectural structure. The user wishes to put a table into a kitchen. To accomplish this task, the user must create the object and then place it in the room. The user shows where the table should be placed using a laser pointer and then says,
“Give me a table, please.” The act of picking up the laser pointer signals the application that the user is
about to ask for an object. This action “piggybacks” the voice command onto the placement task,
making the push-to-talk part of the technique implicit.
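The furniture example can be sketched as a tiny state machine in which the recognizer listens only while the implicating prop is in the user's hand. The event and class names below are invented for illustration:

```python
class ImplicitPushToTalk:
    """Enable speech recognition only while an implicating prop is held."""

    def __init__(self):
        self.listening = False

    def on_prop_picked_up(self):
        self.listening = True   # the "push" is implicit in grabbing the pointer

    def on_prop_put_down(self):
        self.listening = False

    def on_speech(self, utterance):
        if self.listening:
            return f"command: {utterance}"
        return None             # conversation with a collaborator is ignored

ptt = ImplicitPushToTalk()
ptt.on_prop_picked_up()
print(ptt.on_speech("give me a table, please"))
```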
BioElectric Input: NASA Ames Research Center has developed a bioelectric input device that reads
muscle nerve signals emanating from the forearm. These nerve signals are captured by a dry electrode
array on the arm. The nerve signals are analyzed using pattern recognition software and then routed
through a computer to issue relevant interface commands.
Brain Input: The goal of brain-computer interfaces is to have a user directly input commands to the
computer using signals generated by the brain. A brain-computer interface can use a simple, non-invasive approach by monitoring brainwave activity through electroencephalogram (EEG) signals. The user simply wears
a headband or a cap with integrated electrodes. A future, more invasive approach would be to surgically implant
microelectrodes in the motor cortex. Of course, this approach is still not practical for common use but might be
appropriate for severely disabled people who cannot interact with a computer in any other way. Research has
shown that a monkey with microelectrodes implanted in its motor cortex can move a mouse cursor to desired
targets.
Q7. What are the strategies for building input devices?
Ans: There are a variety of strategies for constructing home-brewed input devices. One of the first
things to consider is the device’s intended functionality because doing so helps to determine what types
of physical device components will be required. For example, the device might need to sense forces,
motion, or simply button presses. Based on the intended device functionality, the device developer can
choose appropriate sensors, whether they be digital (output of 0 or 1) or analog (output of a range of
values). These sensors can easily be found in electronics stores and over the Internet. Examples include
pressure sensors, bend sensors, potentiometers, thermostats (for sensing temperature), photocells (for
sensing light), simple switches, and many others. These sensors come in a variety of styles and
configurations, and the appropriate choice is often based on trial and error. This trial-and-error approach is
especially important with buttons, since buttons and switches come in many different shapes, sizes, and
force thresholds—the amount of force the user needs to activate the button or switch.
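Physical switches also bounce electrically when pressed, so home-brewed designs usually debounce them in software. A minimal sketch (the read_pin callable stands in for whatever hypothetical hardware access the device provides):

```python
import time

def debounced_read(read_pin, stable_ms=20):
    """Return the switch state once it has been stable for stable_ms milliseconds.

    read_pin -- callable returning the raw 0/1 state of the switch
    """
    state = read_pin()
    stable_since = time.monotonic()
    while (time.monotonic() - stable_since) * 1000 < stable_ms:
        current = read_pin()
        if current != state:      # contact bounce: restart the timer
            state = current
            stable_since = time.monotonic()
    return state
```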
In many cases, building the sensors is a feasible option, especially if they are switches or buttons. One
powerful approach to building simple switches is to use conductive cloth. Conductive cloth is simply
fabric with conductive material sewn into it, and it has many advantages for building custom input
devices. In fact, the input devices shown above all use conductive cloth as part of their designs.
Conductive cloth is an inexpensive and fairly robust material. Because it is cloth, it is flexible, so it can
be used just about anywhere, in many different sizes and geometric configurations. Additionally,
conductive cloth is easily sewn onto other fabrics, so devices can be constructed on clothing.
Another important consideration when making home-brewed input devices is how the sensors are housed
in the physical device. Positioning sensors in the device housing is especially important when they are
active components such as buttons, dials, and sliders because the user must be able to interact with
them comfortably to manipulate the device. For example, if a homemade 3D mouse is being constructed
with several buttons, these buttons should be placed so that the user does not have to endure any undue
strain in order to press any of the buttons at any given time. Sensor placement in homemade input devices
is also affected by the geometry of the device itself.
One of the reasons many 3D UI designers do not build homemade devices is that they do not have the
ability or equipment to construct the physical housing the sensors are placed in and on. Ideally, a milling
machine, vacuum-form device (a device that heats plastic and stretches it over a mold), or 3D printer would
be used to construct the device housing based on a model developed in 3D modeling software. However,
these tools are not necessarily household items. One novel approach for constructing device housings is
to use Lego bricks. Another approach is to use modeling clay to create input device housings. The
advantage of using modeling clay is that it can be molded into any shape the designer wants and can be quickly changed to try out different geometries. Once an appropriate design or designs are found, the
clay can be oven-fired and used as the device housing.
Q8. How do you connect home-brewed input devices to the computer?
Ans: Connecting Home-Brewed Input Devices to the Computer- The other important part of
constructing home-brewed input devices is choosing how to connect them to the computer. In the
majority of cases, homemade input devices require some type of logic that the user needs to specify in
order for the computer to understand the data the input device produces. The one exception is when
existing devices are taken apart so that the sensors in them can be used in different physical
configurations. An example of this is the Interaction Slippers shown in Figure above. Because the
device uses the rewired components of a wireless mouse, it can use the standard mouse port to transmit
information to the computer, thus requiring no additional electronics.
There are two primary approaches for connecting a homemade input device to the computer so it can be
used in 3D interfaces. The first approach is to use a microcontroller. A microcontroller is just a small
computer that can interface with other electronic components through its pins. There are many
different varieties to choose from depending on price, power, ease of programming, and so on. The
designer can connect an input device to a microcontroller on a circuit board, which in turn
communicates to the computer through a serial or USB port. Typically, the designer first builds the
electronics on a prototyping board (breadboard), which is an easy way to establish electrical
connections between the device and the microcontroller without the need to solder. Using any of the
many software packages for writing microcontroller code (many of them are free and use Basic), the
developer can write a program for controlling the input device and download it to the microcontroller.
After the prototyping and testing stage, the microcontroller and any associated electronics can be
attached to an appropriate circuit board. The homemade input device then has its own electronics unit
for sending information from the device to the computer, and with appropriate software such as device
drivers, to the 3D UI. Using microcontrollers does require some effort and has a slight learning curve,
but the approach gives the input device developer a lot of freedom in choosing how the input
device/computer interface is made.
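On the host side, reading such a microcontroller over its serial port is straightforward. Below is a minimal sketch using the pyserial library; the port name, baud rate, and the comma-separated line format the firmware emits are all assumptions about a hypothetical device:

```python
import serial  # pyserial: pip install pyserial

# Open the port the microcontroller enumerates as (name varies by OS).
port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # Assume the firmware prints "button,bend" as comma-separated values.
    button_str, bend_str = line.split(",")
    button, bend = int(button_str), float(bend_str)
    print(f"button={'down' if button else 'up'} bend={bend:.1f} degrees")
```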
A second approach to connecting homemade input devices to a computer is to use the Musical
Instrument Digital Interface (MIDI). MIDI is a protocol that was developed to allow electronic
musical instruments to communicate with computers. The important characteristic of MIDI is that it is a
protocol for communicating control information, such as if a button was pressed, how hard it was
pressed, or how long it was pressed, which means it can be used for connecting input devices to
computers. Figure shows an example of a MIDI controller and some of the sensors used in developing
input devices with it. Similar to prototyping toolkits such as Phidgets, using MIDI gives the input device developer the advantage of
not having to deal with microcontroller programming and circuit design. However, in most cases, the
developer still needs to write the device drivers to use the custom-built input devices in 3D applications.
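On the software side, a MIDI controller's sensor events arrive as standard control-change messages that a device driver can translate for the 3D application. A minimal sketch using the mido Python library (the port name is an assumption about a hypothetical controller):

```python
import mido  # pip install mido (plus a backend such as python-rtmidi)

# Open the MIDI input port the controller box appears as.
with mido.open_input("HomeBrew Controller") as inport:
    for msg in inport:
        if msg.type == "control_change":
            # msg.control identifies the sensor; msg.value is 0-127.
            print(f"sensor {msg.control} -> {msg.value}")
        elif msg.type == "note_on":
            print(f"button {msg.note} pressed, velocity {msg.velocity}")
```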
Q9. How do you choose input devices for a 3D user interface, and what factors are involved?
Ans: A key issue in 3D interface design is to choose the appropriate input devices that are best suited to
the needs of a particular application. The designer needs to examine the various tasks the 3D UI needs
to support, find or develop the appropriate interaction techniques, and ensure that the chosen input
devices are mapped to these techniques appropriately. In this section, we first examine some important factors to consider when choosing input devices.
Many factors must be considered when choosing an appropriate input device for a particular 3D UI:
- Device ergonomics
- The number and type of input modes
- The available technique-to-device mapping strategies
- The types of tasks the user will be performing
All of these factors play a role in choosing suitable input devices, making these choices a challenging task.
The problem is amplified due to the variety of possible operations the user might perform within the
context of a given 3D application. A particular device might be perfect for one task in an application but
completely inappropriate for another.
Device ergonomics is clearly an important consideration when choosing an appropriate input device for a
3D application. In general, we do not want to put undue strain on the user’s body. Such strain can lead
to repetitive stress injuries and make it difficult for the user to perform common tasks. Devices should be
lightweight, require little training, and provide a significant transfer of information to the computer with
minimal effort.
A particular device’s input modes must be considered when choosing an input device for a 3D
application. The types of input required for a given application help to reduce the possible device
choices. For example, desktop devices such as the keyboard and mouse are not appropriate in an
immersive 3D modeler, since they are difficult to use while standing and do not provide the
appropriate DOF and continuous events needed to track
the user’s head and hands. In contrast, a desktop 3D computer game does not necessarily require a
complicated 6-DOF tracking device, since, in most cases, the keyboard and a mouse or a joystick will
suffice. In such an application, although a bend-sensing glove could be used to navigate (using some
collection of gestures), it would probably not be appropriate given the complexity of the device. A
simpler device such as a Wanda, shown above, is much easier to use since the application does not need
all of the extra DOF that a bend-sensing glove gives the user.
An input device can handle a variety of interaction techniques depending on the logical mapping of
the technique to the device. The major issue is whether that mapping makes the device and the
subsequent interaction techniques usable. Therefore, an important consideration when choosing an input
device in a 3D application is how the given device will map to the variety of interaction techniques
required to perform application tasks. It is in these mappings that tradeoffs are usually made, since
very often a device will have a natural mapping to one or two of the interaction techniques in the
application but relatively poor mapping to the others.
This discussion makes the point that there is often a tradeoff when choosing an input device for a 3D
application. In many cases, input devices have been designed for general use, which means that
although they can be used for a variety of interaction techniques, they may not provide the best mapping
for any one of them. Thus, several specialized devices may provide better usability than a single
general-purpose device.
Q10. Name the tools that help in choosing input devices.
Ans: There are basically two tools that help in choosing input devices: input device taxonomies and
empirical evaluations.
Input Device Taxonomies- Input device taxonomies can be a useful tool for determining which input
devices can be substituted for each other, and they can also help in making decisions about what devices
to use for particular tasks. In addition, they are an important part of 3D UI design because they provide a
mechanism for understanding and discussing the similarities and differences among input devices. Here,
we briefly review some of these input device taxonomies from a historical perspective to show the
evolution of these tools. Additionally, we discuss how they can be used to help make decisions about
choosing appropriate devices in 3D UIs.
One of the first input device taxonomies was developed by Foley and Wallace (1974). Their approach
was to separate the input device from the interaction technique. They created a set of four virtual
devices, which at the time covered most input devices. These virtual devices are the pick, locator,
button, and valuator. A pick device is used to designate a user-defined object. A locator is used to
determine position and/or orientation. A button is used to designate a system-defined object. Finally, a
valuator is used to input a single value within a range of numbers. Two additional virtual devices, stroke
and string, were added to this set by Enderle, Kansy, and Pfaff (1984). A stroke is a sequence of
points, and a string is a sequence of characters.
This virtual device taxonomy proves useful in many different situations. For example, the 3D UI
developer can use this taxonomy as a tool for quickly reducing the number of possible input devices to
choose from by simply examining which virtual devices fit the application best and selecting the
devices that fit in those categories. If a 3D locator is required in an application, then we can
automatically eliminate all of the physical devices that do not map to this virtual device. However, this
taxonomy does have a fundamental flaw, because devices that appear to be equivalent in the taxonomy
can be dramatically different both physically and practically. For example, a mouse and a trackball are
very different devices, yet are both considered to be 2D locators and stroke devices.
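Used as a filter, the virtual-device taxonomy amounts to a simple lookup. The small device table below is an invented sample for illustration, not an exhaustive classification; note how the mouse and trackball land in exactly the same continuous categories, which is the flaw just described:

```python
# Illustrative mapping from physical devices to the virtual devices they implement.
DEVICE_TABLE = {
    "mouse":         {"2D locator", "stroke", "pick"},
    "trackball":     {"2D locator", "stroke"},
    "6-DOF tracker": {"3D locator"},
    "slider":        {"valuator"},
    "keyboard":      {"string", "button"},
}

def candidates(required_virtual_device):
    """Return physical devices that can serve as the given virtual device."""
    return [dev for dev, roles in DEVICE_TABLE.items()
            if required_virtual_device in roles]

print(candidates("3D locator"))   # -> ['6-DOF tracker']
```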
As a result of these limitations, new taxonomies were developed to take other characteristics of the
input devices into account. For example, Foley, Wallace, and Chan (1984) improved upon the virtual
device taxonomy by mapping elementary interaction tasks (e.g., select, position, orient) to the devices
that perform those tasks. Based on task requirements, only a limited set of devices could be used for
any particular task. However, because the taxonomy is task-based, an input device can appear for
more than one task.
Buxton (1983) developed a taxonomy that organizes continuous input devices into a 2D space, the
dimensions of which are DOF and properties sensed (i.e., motion, position, pressure). Additionally, a
sub-classification is used for devices that have a mechanical intermediary between the hand and the
sensing mechanism and those that are touch-sensitive. The figure shows an example of how this
taxonomy classifies input devices.
Empirical Evaluations- In general, the taxonomies are useful for narrowing down the choice of input
device for a particular task or 3D UI. However, in order to get concrete information about which
devices are appropriate for given tasks, empirical studies are often required. In contrast to the lack of
empirical work done on choosing appropriate output devices for 3D applications, there has been a good
amount of research evaluating input devices for interacting in 3D. Performing empirical analyses of
input devices is somewhat easier than performing comparisons of output devices because it is easier to
obtain quantitative measurements about device performance. Characteristics such as speed, accuracy,
and ease of learning are often used to measure how a device will perform a certain task.
Studies have been conducted to determine the effectiveness of 3D input devices compared to traditional
desktop devices such as the mouse. For example, Hinckley, Tullio, and colleagues (1997) compared the
mouse with a 6-DOF tracking device for performing 3D object rotation tasks. Their results showed that
the tracking device performed 36% faster than the 2D mouse without any loss of accuracy. In another
study, Ware and Jessome (1988) compared the mouse and the bat for manipulating 3D objects. These
results indicated that 3D object manipulation was easier to perform with the bat than with the mouse. Although these are only two studies, they do suggest that 3D input devices with 3 DOF or more are better
than a mouse for handling freeform 3D object manipulation.