3D USER INTERFACE INPUT HARDWARE || SHORT & LONG QUESTION ANSWERS || AR & VR AKTU NOTES

SHORT QUESTION AND ANSWER 

Q1. What is 3D User Input hardware?

Ans: 3D user input hardware refers to the input devices whose aim is to capture and interpret the actions performed by the user. The number of degrees of freedom (DOF) a device provides is one of its main characteristics. Classical interface components (such as the mouse, the keyboard, and arguably the touchscreen) are often inappropriate for non-2D interaction needs. Input devices are also differentiated by how much physical interaction is needed to use them: purely active devices must be manipulated to produce information, while purely passive devices do not.

Q2. What are Purely active input devices?

Ans: Purely active input devices are devices that require the user to actually perform some physical action before data are generated. In other words, the input device will not provide any information to the computer unless it is manipulated in some way.

Purely active input devices can have both discrete components (e.g. buttons) and manually driven continuous components, which means that the user must manipulate the component in order to generate the device’s continuous behavior. Trackballs and sliders are examples of manually driven continuous components and they allow the user to generate sequences of values from a given range.

Q3. What are Purely passive input devices?

Ans: Purely passive input devices do not require any physical activity for the device to function. In other words, these devices continue to generate data even if they are untouched. Of course, users can manipulate these devices like active input devices, but this is not a necessity. They are sometimes called monitoring input devices, and they are very important in many 3D UIs. For example, a tracker is a device that will continually output position and/or orientation records even if it is not moving. Such devices are important when we want a continuous stream of information without having to keep asking for it. A perfect example of this is head tracking, which is a requirement for 3D audio displays and for active viewer motion parallax in visual displays.

Q4. What are Input characteristics?

Ans: Many different characteristics can be used to describe input devices. One of the most important is the number of degrees of freedom (DOF) that an input device affords. A degree of freedom is simply a particular, independent way in which a body moves in space. A device such as a tracker generally captures three position values and three orientation values for a total of six DOF. For the most part, a device's DOF gives an indication of how complex the device is and the power it has in accommodating various interaction techniques. Another way to characterize input devices is by the type and frequency of the data (i.e., reports) they generate:

  • Data reports are composed of either discrete components, continuous components, or a combination of the two.
  • Discrete input device components typically generate a single data value (i.e., a Boolean value or an element from a set) based on the user's action. They are often used to change modes in an application, such as changing the drawing mode in a desktop 3D modeling program, or to indicate that the user wants to start performing an action, such as instantiating a navigation technique.
  • Continuous input device components generate multiple data values (i.e., real-valued numbers, pixel coordinates, etc.) in response to a user's action and, in many cases, regardless of what the user is doing (tracking systems and bend-sensing gloves are examples). In many cases, input devices combine discrete and continuous components, providing a larger range of device-interaction technique mappings.
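As a sketch of this distinction, a six-DOF tracker report versus a discrete button event might look like the following (a minimal illustration in Python; the field names and sample values are invented for the example and do not belong to any real device API):

```python
from dataclasses import dataclass

@dataclass
class TrackerReport:
    """One continuous data report from a hypothetical 6-DOF tracker."""
    x: float      # position (3 DOF)
    y: float
    z: float
    yaw: float    # orientation in degrees (3 DOF)
    pitch: float
    roll: float

@dataclass
class ButtonEvent:
    """One discrete report: a single Boolean value per user action."""
    button_id: int
    pressed: bool

# A tracker streams reports continuously; a button reports only on action.
sample = TrackerReport(x=0.12, y=1.50, z=-0.30, yaw=90.0, pitch=0.0, roll=0.0)
click = ButtonEvent(button_id=1, pressed=True)
print(sample.yaw)     # 90.0
print(click.pressed)  # True
```

A device combining both kinds of components would simply emit both kinds of records.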

Q5. Name the variety of different input devices used in 3D interfaces.

Ans: These devices are broken up into the following categories:

  • Desktop input devices- Devices traditionally used in 2D desktop applications that, with appropriate mappings, also work well in 3D UIs, as well as 6-DOF devices designed specifically for 3D desktop interaction.
  • Tracking devices- User and object tracking devices, which are very important when we want to know a user's or a physical object's location in 3D space.
  • 3D mice- A variety of 3D mice, which are defined to be devices that combine trackers with a series of buttons and other discrete components in a given configuration.
  • Special-purpose input devices- A collection of specialized input devices that do not fit well into the other categories.
  • Direct human input- Input taken directly from the human body, which includes speech, bioelectric, and brain input.

Q6. What are Tracking Devices?

Ans: 3D user interaction systems are based primarily on motion-tracking technologies, which obtain the necessary information from the user through the analysis of their movements or gestures. Trackers detect or monitor head, hand, or body movements and send that information to the computer. The computer then translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important for presenting the correct viewpoint and for coordinating the spatial and sound information presented to users, as well as the tasks or functions that they can perform. 3D trackers come in mechanical, magnetic, ultrasonic (acoustic), optical, inertial, and hybrid varieties.
Examples of trackers include motion trackers, eye trackers, and data gloves.

Q7. Name some special-purpose input devices.

Ans: Five special-purpose input devices are as follows:
  • ShapeTape used in manipulating 3D curves. 
  • A user wearing Interaction Slippers. 
  • The CavePainting Table used in the CavePainting application. 
  • Transparent palettes used for both 2D and 3D interaction.
  • The CAT (Control Action Table) is designed for surround-screen display environments.        

Q8. What are Home-Brewed Input devices?

Ans: Home-brewed input devices often assist the 3D UI designer in developing new interaction techniques and improving upon existing ones, providing the user with more expressive power in specific 3D applications and with new methods of expression that existing input devices do not afford. The 3D UI researcher and practitioner can go a long way toward developing these augmented and novel devices using simple electronic components and household items. In fact, this home-brewed style of input device design and prototyping has produced many of the commercial input devices sold today. We have already seen devices such as the bat, the Finger-Sleeve, and the CAT, all of which were designed and built in small academic research labs.

LONG QUESTION & ANSWER


Q1. Explain Desktop Input Devices.

Ans: Desktop Input Devices: There are many input devices that are used in desktop 3D UIs. Many of these devices were designed for traditional 2D desktop applications such as word processing, spreadsheets, and drawing. However, with appropriate mappings, these devices also work well in 3D UIs and in 3D applications such as modeling and computer games. Some desktop input devices have also been developed with 3D interaction in mind. These devices can provide up to 6 DOF, allowing users to manipulate objects in 3D space, and are specifically designed for interaction on the desktop. Of course, most of these devices could also be used in more immersive 3D UIs that use surround-screen displays or HMDs, although some would be more appropriate than others. Here, we discuss some of these input devices; all are purely active, because the user must physically manipulate them to provide information to the 3D application:
  • Keyboards- The keyboard is a classic example of a traditional desktop input device that contains a set of discrete components (a set of buttons). They are commonly used in many desktop 3D applications from modeling to computer games. For example, the arrow keys are often used as input for simple travel techniques in first-person shooter computer games. Unfortunately, bringing the standard keyboard into more immersive 3D environments is not practical when users are wearing HMDs (head-mounted displays) or in surround-screen environments, since users are typically standing.
  • 2D mice and trackballs- Two-dimensional mice and trackballs are other classic examples of desktop input devices, made popular by the WIMP (windows, icons, menus, pointers) interface style. The mouse is one of the most widely used devices for traditional 2D input tasks and comes in many different varieties. The trackball is basically an upside-down mouse: instead of moving the whole device to move the pointer, the user manipulates a rotatable ball embedded in the device. One advantage of the trackball is that it does not need a flat 2D surface to operate, which means it can be held in the user's hand and will still operate correctly. Regardless of the physical design of the mouse or trackball, these devices have two essential components. The first is a manually driven continuous 2D locator for positioning a cursor and generating 2D pixel coordinate values. The second is a set of discrete components (usually one to three buttons). Mice and trackballs are relative devices that report how far they move rather than where they are. As with keyboards, they are commonly used in many different 3D applications and provide many different choices for mapping interaction techniques to tasks. For example, they are often combined with keyboards in computer games to enable more complex travel techniques: the keyboard may be used for translation while the mouse or trackball rotates the camera so the user can see the 3D environment (e.g., look up, look down, turn around). Mice and trackballs share the keyboard's problem of not being designed for more immersive 3D environments. Because a mouse needs to be placed on a 2D surface in order for the locator to function properly, it is difficult to use with these displays. Since the trackball can be held in one hand and manipulated with the other, it can be used in immersive 3D environments, and it has also been successfully incorporated into a 3D interface using a workbench display. However, in most cases, 3D mice are used in immersive 3D interfaces because of their additional DOF.
  • Joysticks- Joysticks are another example of input devices traditionally used on the desktop and with a long history as a computer input peripheral. These devices are similar to mice in that they have a combination of a manually continuous 2D locator and a set of discrete components such as buttons and other switches. However, there is an important distinction between the mouse and joystick. With a mouse, the cursor stops moving as soon as the mouse stops moving. With a joystick, the cursor typically continues moving in the direction the joystick is pointing. To stop the cursor, the joystick’s handle must be returned to the neutral position. This type of joystick is commonly called an isotonic joystick, and the technique is called rate control (as opposed to position control). Many console video game systems make use of different joystick designs in their game controllers. Joysticks can also be augmented with haptic actuators, making them haptic displays as well. Isometric joysticks have also been designed. Isometric devices have a large spring constant so they cannot be perceptibly moved. Their output varies with the force the user applies to the device. A translation isometric device is pushed, while a rotation isometric device is twisted. A problem with these devices is that users may tire quickly from the pressure they must apply in order to use them. Joysticks have been used as input devices in computer games for many years. They are frequently used in driving and flight simulation games, and when integrated into game controllers, they are the input device of choice with console video game systems. Additionally, they are sometimes used in CAD/CAM applications. Since joysticks are designed primarily for desktop applications and console video game systems, they are rarely used in 3D UIs that employ HMDs or surround-screen visual displays. However, since many joysticks are handheld, they could easily be brought into these types of environments.
  • Six-DOF input devices for the desktop- The devices we have discussed so far can all be used in 3D interfaces, and they can allow the user to interact with 3D objects, but they were not specifically designed for this purpose. There are also 6-DOF input devices developed specifically for 3D interaction on the desktop. A slight push-and-pull pressure of the fingers on the cap of such a device generates small deflections in x, y, and z, which move objects dynamically along the corresponding three axes. With slight twisting and tilting of the cap, rotational motions are generated about the three axes. These devices also have a series of buttons that can be programmed with any frequently used function or user-defined keyboard macro.
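The distinction drawn above between position control (mouse-style) and rate control (joystick-style) can be sketched in a few lines (illustrative Python; the gain and timestep values are arbitrary choices for the example):

```python
def position_control(cursor, delta):
    """Mouse-style: the cursor moves only while the device moves
    (displacement maps to displacement)."""
    x, y = cursor
    dx, dy = delta
    return (x + dx, y + dy)

def rate_control(cursor, deflection, gain=5.0, dt=0.1):
    """Joystick-style: the deflection sets a velocity, so the cursor keeps
    moving until the stick returns to neutral (displacement maps to velocity)."""
    x, y = cursor
    jx, jy = deflection
    return (x + gain * jx * dt, y + gain * jy * dt)

cursor = (100.0, 100.0)
# Holding the stick at a constant deflection keeps moving the cursor:
for _ in range(10):
    cursor = rate_control(cursor, (1.0, 0.0))
print(cursor)  # (105.0, 100.0): 10 steps of 5.0 * 1.0 * 0.1
```

With position control, the same ten identical deltas would instead each require a fresh physical movement of the device.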

Q2. What are Tracking Devices? Explain their different types.

Ans: Tracking devices: Already discussed above. 
We examine three of the most common tracking devices:
  1. Motion Trackers
  2. Eye trackers
  3. Data Gloves
MOTION TRACKING
Motion tracking is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for the validation of computer vision and robotics. In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes the face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

One of the most important aspects of 3D interaction in virtual worlds is providing a correspondence between the physical and virtual environments. As a result, having accurate tracking is a crucial part of making interaction techniques usable within VE applications. The critical characteristics of motion trackers include their range, latency (delay between the time a motion occurs and when it is reported), jitter (noise or instability), and accuracy. Currently, there are a number of different motion-tracking technologies in use, which include: 
  • magnetic tracking
  • mechanical tracking
  • acoustic tracking
  • inertial tracking
  • optical tracking
  • hybrid tracking    
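Two of the tracker characteristics listed above, jitter and accuracy, can be quantified directly from position samples. A minimal sketch (illustrative Python with made-up readings, not data from any real tracker):

```python
import statistics

def jitter(samples):
    """Jitter: spread of reported positions while the tracked object is
    perfectly still (here, the standard deviation per axis)."""
    return tuple(statistics.pstdev(axis) for axis in zip(*samples))

def accuracy_error(reported, true):
    """Accuracy: per-axis offset between the reported and true position."""
    return tuple(abs(r - t) for r, t in zip(reported, true))

# Readings (in inches) from a stationary sensor -- invented numbers:
still = [(1.00, 2.00, 0.50), (1.02, 1.99, 0.51), (0.98, 2.01, 0.49)]
print(jitter(still))
print(accuracy_error((1.00, 2.00, 0.50), (1.10, 2.00, 0.50)))
```

Latency would be measured differently, by timestamping a physical motion and the corresponding report.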
Magnetic Tracking- Magnetic trackers use a transmitting device that emits a low-frequency magnetic field. A small sensor, the receiver, determines its position and orientation relative to this magnetic source. The range of such trackers varies, but they typically work within a radius of 4 to 30 feet. The figure shows an example of a magnetic tracking system; it uses a small emitter and receivers and has better accuracy than longer-range systems. However, its range is limited to a 4-foot radius, which means the device is not appropriate for large display environments such as surround-screen visual displays, or even HMDs where the user needs a lot of space to roam.
In general, magnetic tracking systems are accurate to within 0.1 inches in position and 0.1 degrees in orientation. Their main disadvantage is that any ferromagnetic or conductive (metal) objects present in the room with the transmitter will distort the magnetic field, reducing the accuracy. These accuracy reductions can sometimes be quite severe, making many interaction techniques, especially gesture-based techniques, difficult to use. 

Mechanical Tracking- Mechanical trackers have a rigid structure with a number of interconnected mechanical linkages combined with electromechanical transducers such as potentiometers or shaft encoders. One end is fixed in place, while the other is attached to the object to be tracked (usually the user’s head or hand). As the tracked object moves, the linkages move as well, and measurements are taken from the transducers to obtain position and orientation information. Arm-mounted visual displays use this type of tracking technology. Mechanical trackers are very accurate and transmit information with very low latencies. However, they are often bulky, limiting the user’s mobility and making it difficult to use physically based navigation techniques.

Acoustic Tracking- Acoustic tracking devices use high-frequency sound emitted from source components and received by microphones. The source may be on the tracked object, with the microphones placed in the environment (an outside-in approach), or the source may be in the environment, with the microphones on the tracked object (an inside-out approach). The dominant approach to determining position and orientation information with acoustic tracking is to use the time-of-flight duration of ultrasonic pulses. 
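The time-of-flight principle is simple arithmetic: distance = speed of sound × pulse travel time, and three such distances pin down a position. A sketch of the 2D case (illustrative Python; the microphone layout and units are assumptions for the example):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def tof_distance(pulse_time_s):
    """Distance from emitter to microphone for one time-of-flight reading."""
    return SPEED_OF_SOUND * pulse_time_s

def trilaterate_2d(mics, distances):
    """Position of a source in a plane from three microphone distances.
    Subtracting the circle equations pairwise gives two linear equations."""
    (x1, y1), (x2, y2), (x3, y3) = mics
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # nonzero when mics are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

mics = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
# Synthesize distances from a known source position, then recover it:
dists = [((mx - 0.5)**2 + (my - 1.0)**2) ** 0.5 for mx, my in mics]
print(trilaterate_2d(mics, dists))  # recovers approximately (0.5, 1.0)
```

The full 3D case works the same way with a fourth measurement, and orientation requires multiple emitters or microphones on the tracked object.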

Inertial Tracking- Inertial tracking systems use a variety of inertial measurement devices such as angular-rate gyroscopes and linear accelerometers. These devices provide derivative measurements (i.e., gyroscopes provide angular velocity, and linear accelerometers provide linear acceleration), so they must be integrated to obtain position and orientation information. Since the tracking system is entirely contained in the sensor, the range is limited only by the length of the cord that attaches the sensor to the electronics unit (wireless tracking is also possible with these systems). In addition, these devices can produce measurements at high sampling rates. Inertial tracking systems were originally used in ships, submarines, and airplanes in the 1950s; however, the weight of these devices prohibited their use in motion tracking until microelectromechanical systems (MEMS) made them small enough.
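The need to integrate derivative measurements is what makes inertial trackers drift: any small sensor bias is integrated twice and grows over time. A one-axis sketch (illustrative Python; the bias value and sampling rate are invented for the example):

```python
def integrate_inertial(samples, dt, v0=0.0, p0=0.0, bias=0.0):
    """Inertial tracking along one axis: accelerometer readings are
    integrated once for velocity and again for position. A constant
    sensor bias therefore accumulates quadratically -- inertial drift."""
    v, p = v0, p0
    for a in samples:
        v += (a + bias) * dt  # first integration: acceleration -> velocity
        p += v * dt           # second integration: velocity -> position
    return p

# A stationary sensor (true acceleration 0) sampled at 100 Hz for 1 s:
zeros = [0.0] * 100
print(integrate_inertial(zeros, dt=0.01, bias=0.0))   # 0.0 -- no drift
print(integrate_inertial(zeros, dt=0.01, bias=0.05))  # position error from a tiny bias
```

This drift is the main reason pure inertial systems are usually paired with an absolute reference, as in the hybrid trackers below.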

Optical Tracking- Another approach to position and orientation tracking of users and physical objects is from measurements of reflected or emitted light. These types of trackers use computer vision techniques and optical sensors such as cameras, infrared emitters, or lateral effect diodes, which generate signals proportional to the position of incoming light along one axis (i.e., 2D displacement measurement). A variety of different cameras can be used, from simple desktop webcams to sophisticated high-resolution cameras with high sampling rates and pixel densities.
Like acoustic trackers, optical tracking systems use either outside-in or inside-out configurations. Outside-in systems have their sensors mounted at fixed locations in the environment, and tracked objects are marked with active or passive landmarks such as retro-reflective markers or colored gloves. The number and size of these landmarks vary depending on the type of optical tracking system and how many DOF are required; in some cases, no landmarks are used at all. Inside-out systems place the optical sensors on the user or tracked object while the landmarks are placed in the environment. Inside-out tracking can deliver accurate position and orientation tracking without environmental interference or distortion.

Hybrid Tracking- Hybrid trackers put more than one tracking technology together to help increase accuracy, reduce latency, and provide a better overall 3D interaction experience. In general, individual tracking technologies are used to compensate for each other's weaknesses. An example of such a device is shown in Figure 4.10. This example combines inertial and ultrasonic tracking technologies. The inertial component measures orientation and the ultrasonic component measures position, enabling the device to attain 6 DOF. Moreover, information from each component is used to improve the accuracy of the other. As a side note, this tracking system has the added advantage of being wireless, with the user wearing a small battery-powered electronics box on her belt. The major difficulty with hybrid trackers is that more components produce more complexity. The extra complexity is warranted, however, if tracking accuracy is significantly improved.
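A classic minimal example of such sensor fusion is the complementary filter, which blends a drifting-but-smooth integrated signal with a noisy-but-absolute one. This is a sketch of the general idea in Python, not the algorithm of any particular hybrid tracker:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98, angle=0.0):
    """Hybrid orientation estimate for a single axis: the integrated
    gyroscope signal is smooth but drifts, while the accelerometer-derived
    angle is absolute but noisy. Blending them lets each technology
    compensate for the other's weakness."""
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro short-term, the absolute sensor long-term.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# True angle is a constant 10 degrees; the gyro reports zero rotation and
# the initial estimate is wrong (0 degrees) -- the filter converges toward 10.
est = complementary_filter([0.0] * 200, [10.0] * 200, dt=0.01)
print(round(est[-1], 2))  # close to 10.0
```

Production systems typically use a Kalman filter for the same job, but the compensate-each-other's-weakness structure is identical.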

EYE TRACKING
                   Eye trackers are purely passive input devices used to determine where the user is looking. Eye-tracking technology is primarily based on computer vision techniques: the device tracks the user’s pupils using corneal reflections detected by a camera. Devices can be worn or embedded into a computer screen, making for a much less obtrusive interface. Other eye-tracking techniques include electrooculography, which measures the skin’s electric potential differences using electrodes placed around the eye, and embedding mechanical or optical reference objects in contact lenses that are worn directly on the eye. 
From a generic interaction perspective, eye-tracking systems have been used both as an evaluation tool and to interact with an application. For example, these devices are used to collect information about a user’s eye movements in the context of psychophysical experiments, to get application usage patterns to help improve the interface, or for training in visual inspection tasks. Eye-tracking systems are also used as input devices. An example would be a user controlling a mouse pointer strictly with his eyes. In the context of 3D interface design, active eye-tracking systems have the potential to improve upon many existing 3D interaction techniques. For example, there are numerous techniques that are based on gaze direction (e.g., gaze-directed steering, gaze-directed manipulation), which use the user’s head tracker as an approximation to where she is looking. Since the gaze vector is only accurate if the user is looking straight ahead, usability problems can occur if the user looks in other directions while keeping the head stationary. Eye-tracking devices might help improve these gaze-directed techniques since the actual gaze from the user can be obtained.

DATA GLOVES
                In some cases, it is useful to have detailed tracking information about the user’s hands, such as how the fingers are bending or if two fingers have made contact with each other. Data gloves are input devices that provide this information. Data gloves come in two basic varieties: bend-sensing gloves and pinch gloves. 
  • Bend-Sensing Gloves: Bend-sensing data gloves are purely passive input devices used to detect postures of the hand. For example, the device can distinguish between a fist, a pointing posture, and an open hand. The raw data from the gloves is usually given in the form of joint angle measurements, and software is used to detect postures based on these measurements.
  • Pinch Gloves: The Pinch Glove (see Figure) system is an input device that determines if a user is touching two or more fingertips together. These gloves have a conductive material at each of the fingertips so that when the user pinches two fingers together, electrical contact is made. These devices are often used for performing grabbing and pinching gestures in the context of object selection, mode switching, and other techniques.
  • Combining Bend-Sensing Data and Pinch Input: Both the Pinch Gloves and bend-sensing gloves have limitations. Although it is possible to determine if there is finger contact (e.g., index finger to thumb) with a bend-sensing glove, some form of hand gesture recognition is required, which will not be as accurate as the Pinch Glove (which has essentially 100% accuracy, assuming the device is functioning properly). Conversely, one can get only a very rough estimate of how the fingers are bent when using Pinch Gloves. Ideally, a data glove would have the functionality of both bend-sensing gloves and Pinch Gloves.
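The software side of posture detection from joint-angle measurements can be sketched with simple thresholds (illustrative Python; the angle ranges, finger order, and posture names are assumptions, and real systems use proper gesture recognition rather than fixed thresholds):

```python
def detect_posture(joint_angles, threshold=60.0):
    """Classify a hand posture from bend-sensing joint angles, one angle
    per finger in degrees (0 = fully extended, ~90 = fully bent).
    Finger order assumed: thumb, index, middle, ring, pinky."""
    bent = [a > threshold for a in joint_angles]
    if all(bent):
        return "fist"
    if not any(bent):
        return "open hand"
    if not bent[1] and all(bent[2:]):
        return "pointing"  # index extended, remaining fingers curled
    return "unknown"

def pinch_contacts(touching_pairs):
    """Pinch Glove-style discrete input: report which fingertip pairs make
    electrical contact (reliable, but carries no bend information)."""
    return {frozenset(pair) for pair in touching_pairs}

fist = [80.0, 75.0, 85.0, 82.0, 78.0]
point = [70.0, 10.0, 85.0, 82.0, 78.0]
print(detect_posture(fist))       # fist
print(detect_posture([5.0] * 5))  # open hand
print(detect_posture(point))      # pointing
print(pinch_contacts([("thumb", "index")]))
```

A combined glove would feed both functions at once: continuous angles for posture and discrete contacts for selection events.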

Q3. What are 3D Mice, how do they work, and what are their types?
Ans: 3D Mice- In many cases, tracking devices (specifically motion trackers) are combined with other physical device components such as buttons, sliders, knobs, and dials to create more functionally powerful input devices. We call these devices 3D mice and define them broadly as handheld or worn input devices that combine motion tracking with a set of physical device components.
The distinguishing characteristic of 3D mice, as opposed to regular 2D mice, is that the user physically moves them in 3D space to obtain position and/or orientation information instead of just moving the device along a flat surface. Therefore, users can hold the device or, in some cases, wear it. Additionally, with orientation information present, it is trivial to determine where the device is pointing (the device’s direction vector), a function used in many fundamental 3D interaction techniques. Because of their generality, they can be mapped to many different interaction techniques, and in one form or another, they are often the primary means of communicating user intention in 3D UIs for VE applications.
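Computing the device's direction vector from its reported orientation is indeed straightforward, as this sketch shows (illustrative Python; the yaw/pitch convention and axis layout are assumptions for the example):

```python
import math

def direction_vector(yaw_deg, pitch_deg):
    """Pointing direction of a tracked 3D mouse from its reported
    orientation (yaw and pitch in degrees; roll does not affect pointing).
    Convention assumed here: +z forward, yaw about y, pitch about x."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

# Device held level and facing forward:
print([round(c, 3) for c in direction_vector(0.0, 0.0)])   # [0.0, 0.0, 1.0]
# Device rotated 90 degrees to the right:
print([round(c, 3) for c in direction_vector(90.0, 0.0)])  # [1.0, 0.0, 0.0]
```

Ray-casting selection techniques then intersect this vector, anchored at the device's tracked position, with the objects in the scene.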
There are two types of 3D mice:
  1. Handheld 3D Mice- A common design approach for 3D mice is to place a motion tracker inside a structure that is fitted with different physical interface widgets. Actually, one of the first 3D mice to be developed used no housing at all. The “bat”, so named because it is a mouse that flies, was developed by Colin Ware in the late 1980s. It was simply a 6-DOF tracking device with three buttons attached to it. Such a device is rather easy to build with a few electrical components (provided you have the tracking device). A more sophisticated and elegant version of the bat is shown in Figure below (a). This device houses a motion tracker in a structure that looks like a simple remote control. It is commonly used in conjunction with surround-screen displays for both navigation and the selection of 3D objects. The physical structure that houses the motion tracker is often a replication of an input device used in the real world. For example, the 3D mouse (as shown in (b) figure) is modeled after an Air Force pilot’s flight stick. Some 3D mice have also been developed to look like their 2D counterparts. For example, the Fly Mouse looks similar to a conventional 2D mouse, but it uses acoustic tracking, has five buttons instead of two, and can also be used as a microphone for speech input. 
  2. User-Worn 3D Mice- Another approach to the design of 3D mice is to have the user wear them instead of holding them. Assuming the device is light enough, having it worn on the user's finger, for example, makes the device an extension of the hand. Figure (a) shows the Ring Mouse, an example of such a device. It is a small, two-button, ring-like device that uses ultrasonic tracking and generates only position information. One issue with this device is that its small form factor limits the number of buttons it can hold. The Finger-Sleeve, shown in figure (b), is a finger-worn 3D mouse that is similar to the Ring Mouse in that it is small and lightweight, but it adds more button functionality in the same physical space by using pop-through buttons. Pop-through buttons have two clearly distinguished activation states corresponding to light and firm finger pressure.

Q4. Explain different Tracking devices. 

Ans: Various tracking devices are as follows:
  1. Nintendo Wii Remote ("Wiimote")- The Wii Remote does not offer true 6-DOF tracking, since it cannot provide absolute position; instead, it is equipped with a multitude of sensors that turn this 2D device into a great tool for interaction in 3D environments. It has gyroscopes to detect the rotation of the user, ADXL330 accelerometers for obtaining the speed and movement of the hands, optical sensors for determining orientation, electronic compasses, and infra-red devices to capture position. This type of device can be affected by external infra-red sources such as light bulbs or candles, causing errors in position accuracy. An essential capability of the Wii Remote is its motion sensing, which allows the user to interact with and manipulate items on screen via gesture recognition and pointing, using accelerometer and optical sensor technology.
  2. Google Tango Devices- The Tango Platform is an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP) group, a skunkworks division of Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore provide 6-DOF input, which can also be combined with its multi-touch screen. Google Tango devices can be seen as more integrated solutions than the early prototypes that combined spatially tracked devices with touch-enabled screens for 3D environments.
  3. Microsoft KINECT- The Microsoft Kinect offers a different motion-capture technology for tracking. Instead of basing its operation on sensors worn by the user, it is based on a structured-light scanner, located in a bar, which allows tracking of the entire body through the detection of about 20 spatial points, each measured in three degrees of freedom to obtain its position, velocity, and rotation. Its main advantages are ease of use and no requirement for the user to hold or wear a device; its main disadvantage lies in the inability to detect the orientation of the user, thus limiting certain spatial and guidance functions.
  4. Leap Motion- The Leap Motion is a hand-tracking system designed for small spaces, allowing new kinds of interaction with 3D environments in desktop applications; it offers great fluidity when browsing through three-dimensional environments in a realistic way. It is a small device that connects to a computer via USB and uses two cameras with infra-red LEDs, allowing the analysis of a hemispherical area of about 1 meter above its surface while recording up to 300 frames per second. This information is sent to the computer to be processed by the company's own software.

Q5. Explain all Special purpose input devices. 

Ans: Many other types of devices are used in 3D interfaces. These devices are often designed for specific applications or used in specific interfaces. These devices are:

ShapeTape: ShapeTape (shown in Figure) is a flexible, ribbon-like tape of fiber-optic curvature sensors that comes in various lengths and sensor spacing. Because the sensors provide bend and twist information along the tape’s length, it can be easily flexed and twisted in the hand, making it an ideal input device for creating, editing, and manipulating 3D curves.

Interaction Slippers: In many surround-screen display configurations where the floor is actually a display surface, users must wear slippers when they enter the device to avoid making scuff marks and tracking in dirt. An interesting input device takes advantage of the need for slippers in these environments: the Interaction Slippers (see Figure). The Interaction Slippers embed a wireless track-ball device (the Trackman) into a pair of common house slippers. The slippers use wireless radio technology to communicate to the host computer. The Trackman is inserted into a hand-made pouch on the right slipper and rewired. Two of the Trackman’s three buttons are connected to a pair of conductive cloth patches on the instep of the right slipper. On the instep of the left slipper, two more conductive cloth patches are attached. Touching a cloth patch on the left slipper to a cloth patch on the right slipper completes the button press circuit. This design enables two gestures corresponding to heel and toe contacts respectively. The slippers were designed for interacting with the Step WIM navigation technique, in which a miniature version of the world is placed on the ground under the user’s feet, allowing him to quickly travel to any place in the VE. 

CavePainting Table: An example of an input device that was specifically developed for a particular 3D application is the CavePainting Table (see Figure) used in CavePainting, a system for painting 3D scenes in a VE. The CavePainting Table uses a prop-based design that relies upon multiple cups of paint and a single tracked paintbrush. These paint cup props stay on a physical table that slides into the surround-screen device and also houses knobs and buttons used for various interaction tasks. In conjunction with the table, a real paintbrush is augmented with a single button that turns the "paint" on and off. The bristles of the brush are covered with conductive cloth, and users can dip the brush into the paint cups (which are lined with conductive cloth as well) to change brush strokes. A tracked bucket is used to throw paint around the virtual canvas.

Transparent Palettes: In some cases, making a simple addition to an existing input device can create a powerful tool for interacting in 3D applications. For example, when interacting with 3D applications that utilize workbench-style displays, attaching a motion tracker to a piece of Plexiglas can create a useful tool for interacting in 2D and 3D. In addition, these devices can also have touch-sensitive screens (see Figure). Such a device allows the user to perform 2D interaction techniques, such as writing and selecting objects and commands from 2D palettes, as well as 3D interaction techniques, such as volumetric selection by sweeping the device through the virtual world. This "pen-and-tablet" metaphor has been used extensively in 3D UIs.

Control Action Table: The last input device is the Control Action Table (CAT), which was designed for use in surround-screen display environments. This freestanding device (shown in Figure) looks like a circular tabletop. The CAT uses angular sensors to detect orientation information using three nested orientation axes. The device also has an isometric component; the tabletop is equipped with a potentiometer that detects forces in any 3D direction. Thus, the user can push or pull on the device for translational movement. Additionally, the CAT has a tablet for 2D interaction mounted on the tabletop, which makes it unique because it supports both 6-DOF and 2D input in the same device. Other advantages of the CAT include the ability to control each DOF individually and its location persistence (meaning that its physical state does not change when released). The CAT does have some inherent limitations because the nature of the nested orientation axes can make some orientations hard to specify, and in certain configurations (e.g., when the tabletop is vertical), translational movement can be difficult to perform as well.

Q6. What is Direct Human Input? Explain in detail. 

Ans: Direct Human Input- A powerful approach to interacting with 3D applications is to obtain data directly from signals generated by the human body. With this approach, the user actually becomes the input device. For example, a user could stand in front of a camera and perform different movements, which the computer would interpret as commands. In this answer, we specifically discuss speech, bioelectric, and brain-computer input and how they can be used in 3D UIs.
Speech Input: Speech input provides a nice complement to other input devices. It is a natural way to combine different modes of input to form a more cohesive and natural interface. In general, when functioning properly, speech input can be a valuable tool in 3D UIs, especially when both of the user's hands are occupied. Beyond choosing a good speech recognition engine, there are many other important issues to consider when using speech for a 3D interface, and tradeoffs must be made when dealing with speech input. One important issue is where the microphone is to be placed. Ideally, a wide-area microphone is used so that the user need not wear a headset. Placing such a microphone in the physical environment can be problematic, however, since it might pick up noise from other people or machines in the room. One of the big problems with using speech input is having the computer know when to and when not to listen to the user's voice. Often, a user is conversing with a collaborator with no intention of issuing voice commands, but the application "thinks" the user is speaking to it. This misinterpretation can be very troublesome.
One of the best ways to avoid this problem is to use an implicit or invisible push-to-talk scheme. A traditional push-to-talk scheme lets the user tell the application when he or she is speaking to it, usually by pushing a button. In order to maintain the naturalness of the speech interface, we do not want to add to the user’s cognitive load. The goal of implicit push-to-talk is to embed the “push” into existing interaction techniques so the user does not have the burden of remembering to signal the application that a voice command is about to be issued. As an example, consider a furniture layout application in which a user wants to place different pieces of furniture into a room or other architectural structure. The user wishes to put a table into a kitchen. To accomplish this task, the user must create the object and then place it in the room. The user shows where the table should be placed using a laser pointer and then says, “Give me a table, please.” The act of picking up the laser pointer signals the application that the user is about to ask for an object. This action “piggybacks” the voice command onto the placement task, making the push-to-talk part of the technique implicit. 
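The furniture-layout example above can be reduced to a small state machine. The sketch below is a minimal, hypothetical illustration of implicit push-to-talk, assuming that picking up a tracked prop (the laser pointer) is the implicit "push": utterances are routed to the recognizer only while the prop is held.

```python
# Minimal sketch of implicit push-to-talk, assuming a hypothetical tracked
# prop (the laser pointer from the furniture-layout example) whose pickup
# acts as the implicit "push". Speech heard while the prop is down is
# treated as conversation with other people, not as a command.

class ImplicitPushToTalk:
    def __init__(self):
        self.prop_held = False
        self.commands = []

    def on_prop_pickup(self):
        self.prop_held = True      # implicit "push": start listening

    def on_prop_release(self):
        self.prop_held = False     # implicit release: stop listening

    def on_speech(self, utterance):
        # Only utterances heard while the prop is held become commands.
        if self.prop_held:
            self.commands.append(utterance)
```

The user never presses a dedicated talk button; the act of grabbing the pointer carries the signal, so no extra cognitive load is added.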
BioElectric Input: NASA Ames Research Center has developed a bioelectric input device that reads muscle nerve signals emanating from the forearm. These nerve signals are captured by a dry electrode array on the arm. The nerve signals are analyzed using pattern recognition software and then routed through a computer to issue relevant interface commands.
Brain Input: The goal of brain-computer interfaces is to have a user directly input commands to the computer using signals generated by the brain. A brain-computer interface can use a simple, non-invasive approach by monitoring brainwave activity through electroencephalogram (EEG) signals. The user simply wears a headband or a cap with integrated electrodes. A future, more invasive approach would be to surgically implant microelectrodes in the motor cortex. Of course, this approach is still not practical for common use but might be appropriate for severely disabled people who cannot interact with a computer in any other way. Research has shown that a monkey with microelectrodes implanted in its motor cortex can move a mouse cursor to desired targets.

Q7. What are the strategies for building input devices?
 
Ans: There are a variety of strategies for constructing home-brewed input devices. One of the first things to consider is the device's intended functionality, because doing so helps to determine what types of physical device components will be required. For example, the device might need to sense forces, motion, or simply button presses. Based on the intended device functionality, the device developer can choose appropriate sensors, whether they be digital (output of 0 or 1) or analog (output of a range of values). These sensors can easily be found in electronics stores and over the Internet. Examples include pressure sensors, bend sensors, potentiometers, thermistors (for sensing temperature), photocells (for sensing light), simple switches, and many others. These sensors come in a variety of styles and configurations, and the appropriate choice is often based on trial and error. This trial-and-error approach is especially important with buttons, since buttons and switches come in many different shapes, sizes, and force thresholds—the amount of force the user needs to apply to activate the button or switch.
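The digital/analog distinction above can be made concrete with a short sketch. This is an illustration under stated assumptions, not real firmware: it assumes a 10-bit analog-to-digital converter (raw counts 0–1023) and an invented force threshold for treating an analog level as a switch.

```python
# Hedged sketch of the digital/analog sensor distinction: a digital sensor
# yields 0 or 1 directly, while an analog sensor (bend sensor,
# potentiometer, photocell) yields a raw count that must be scaled.
# Assumption: a 10-bit analog-to-digital converter (counts 0..1023).

ADC_MAX = 1023  # assumed 10-bit ADC

def read_analog(raw):
    """Scale a raw ADC count to the range [0.0, 1.0], clamping out-of-range values."""
    return max(0, min(raw, ADC_MAX)) / ADC_MAX

def read_digital(raw, threshold=0.5):
    """Treat an analog level as a switch by applying a force threshold."""
    return 1 if read_analog(raw) >= threshold else 0
```

Adjusting `threshold` mirrors the physical trial-and-error of picking a button with the right activation force.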
In many cases, building the sensors is a feasible option, especially if they are switches or buttons. One powerful approach to building simple switches is to use conductive cloth. Conductive cloth is just fabric with conductive material sewn into it, and it has many advantages for building custom input devices; in fact, the input devices shown above all use conductive cloth as part of their designs. Conductive cloth is an inexpensive and fairly robust material. Because it is cloth, it is flexible, so it can be used just about anywhere, in many different sizes and geometric configurations. Additionally, conductive cloth is easily sewn onto other fabrics so that devices can be constructed on clothing.
Another important consideration when making home-brewed input devices is how the sensors are housed in the physical device. Positioning sensors in the device housing is especially important when they are active components such as buttons, dials, and sliders, because the user must be able to interact with them comfortably to manipulate the device. For example, if a homemade 3D mouse is being constructed with several buttons, these buttons should be placed so that the user does not have to endure any undue strain to press any of them at any given time. Sensor placement in homemade input devices is also affected by the geometry of the device itself. One of the reasons many 3D UI designers do not build homemade devices is that they lack the skills or equipment to construct the physical housing the sensors are placed in and on. Ideally, a milling machine, a vacuum-form device (a device that heats plastic and stretches it over a mold), or a 3D printer would be used to construct the device housing based on a model developed in 3D modeling software. However, these tools are not necessarily household items. One novel approach to constructing device housings is to use Lego bricks.
Another approach is to use modeling clay to create input device housings. The advantage of using modeling clay is that it can be molded into any shape the designer wants and can be quickly changed to try out different geometries. Once an appropriate design or designs are found, the clay can be oven-fired and used as the device housing.

Q8. How to connect Home-Brewed Input devices to the computer? 

Ans: Connecting Home-Brewed Input devices to the Computer- The other important part of constructing home-brewed input devices is choosing how to connect them to the computer. In the majority of cases, homemade input devices require some type of logic that the developer needs to specify in order for the computer to understand the data the input device produces. The one exception is when existing devices are taken apart so that the sensors in them can be used in different physical configurations. An example of this is the Interaction Slippers shown in the Figure above. Because the device uses the rewired components of a wireless mouse, it can use the standard mouse port to transmit information to the computer, thus requiring no additional electronics. There are two primary approaches for connecting a homemade input device to the computer so it can be used in 3D interfaces. The first approach is to use a microcontroller. A microcontroller is just a small computer that can interface with other electronic components through its pins. There are many different varieties to choose from depending on price, power, ease of programming, and so on. The designer can connect an input device to a microcontroller on a circuit board, which in turn communicates with the computer through a serial or USB port. Typically, the designer first builds the electronics on a prototyping board (breadboard), which is an easy way to establish electrical connections between the device and the microcontroller without the need to solder. Using any of the many software packages for writing microcontroller code (many of them are free and use BASIC), the developer can write a program for controlling the input device and download it to the microcontroller. After the prototyping and testing stage, the microcontroller and any associated electronics can be attached to an appropriate circuit board.
The homemade input device then has its own electronics unit for sending information from the device to the computer, and with appropriate software such as device drivers, to the 3D UI. Using microcontrollers does require some effort and has a slight learning curve, but the approach gives the input device developer a lot of freedom in choosing how the input device/computer interface is made.
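On the host side, the device driver's job is largely to parse whatever the microcontroller sends over the serial or USB link. The sketch below is illustrative: the text line format `"BTN:1,POT:512"` is invented for this example, since a real driver would parse whatever protocol the firmware actually emits.

```python
# Illustrative sketch of the host side of a microcontroller link. The line
# format "BTN:1,POT:512" is invented for this example; a real device
# driver would parse whatever protocol the firmware emits over the
# serial or USB port.

def parse_sensor_line(line):
    """Parse one text line from the microcontroller into {name: value}."""
    readings = {}
    for field in line.strip().split(","):
        name, _, value = field.partition(":")
        readings[name] = int(value)   # each field is "NAME:integer"
    return readings
```

With a serial library such as pySerial, lines like this could be read in a loop (e.g., `serial.Serial("/dev/ttyUSB0", 9600).readline()`) and the resulting dictionary handed to the 3D UI's event system.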
A second approach to connecting homemade input devices to a computer is to use the Musical Instrument Digital Interface (MIDI). MIDI is a protocol that was developed to allow electronic musical instruments to communicate with computers. The important characteristic of MIDI is that it is a protocol for communicating control information, such as whether a button was pressed, how hard it was pressed, or how long it was pressed, which means it can be used for connecting input devices to computers. The Figure shows an example of a MIDI controller and some of the sensors used in developing input devices with it. As with commercial prototyping kits such as Phidgets, using MIDI gives the input device developer the advantage of not having to deal with microcontroller programming and circuit design. However, in most cases, the developer still needs to write the device drivers to use the custom-built input devices in 3D applications.
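The control information mentioned above travels in standard MIDI messages. A sensor wired to a MIDI controller typically shows up as a Control Change message: the status byte 0xB0–0xBF identifies a Control Change on channels 0–15, and the two data bytes carry the controller number and its 7-bit value. A minimal decoder sketch:

```python
# Sketch of decoding a MIDI Control Change message -- the message type a
# MIDI controller typically uses to report which sensor moved and by how
# much. Status bytes 0xB0-0xBF mean Control Change on channels 0-15; the
# two data bytes are the controller number and its 7-bit value (0-127).

def decode_control_change(msg):
    """Return (channel, controller, value) for a 3-byte Control Change."""
    status, controller, value = msg
    if status & 0xF0 != 0xB0:
        raise ValueError("not a Control Change message")
    return (status & 0x0F, controller, value)
```

A device driver would map the `(channel, controller)` pair to a particular sensor and forward `value` to the 3D application.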

Q9. How to choose input devices for 3D User Interface and what are the factors involved in it?
Ans: A key issue in 3D interface design is choosing the input devices that best suit the needs of a particular application. The designer needs to examine the various tasks the 3D UI must support, find or develop the appropriate interaction techniques, and ensure that the chosen input devices map to these techniques appropriately. In this section, we first examine some important factors to consider when choosing input devices.
Many factors must be considered when choosing an appropriate input device for a particular 3D UI:
  • Device ergonomics
  • The number and type of input modes
  • The available technique-to-device mapping strategies
  • The types of tasks the user will be performing
All of these play a role in choosing suitable input devices, making these choices a challenging task.
The problem is amplified due to the variety of possible operations the user might perform within the context of a given 3D application. A particular device might be perfect for one task in an application but completely inappropriate for another. 
Device ergonomics is clearly an important consideration when choosing an appropriate input device for a 3D application. In general, we do not want to put undue strain on the user’s body. Such strain can lead to repetitive stress injuries and make it difficult for the user to perform common tasks. Devices should be lightweight, require little training, and provide a significant transfer of information to the computer with minimal effort. 
A particular device’s input modes must also be considered when choosing an input device for a 3D application. The types of input required for a given application help to reduce the possible device choices. For example, a keyboard and mouse are not appropriate in an immersive 3D modeler, since they are difficult to use while standing and do not provide the appropriate DOF and continuous events needed to track the user’s head and hands. In contrast, a desktop 3D computer game does not necessarily require a complicated 6-DOF tracking device, since in most cases the keyboard and a mouse or a joystick will suffice. In such an application, although a bend-sensing glove could be used to navigate (using some collection of gestures), it would probably not be appropriate given the complexity of the device. A simpler device such as the Wanda, shown above, is much easier to use, since the application does not need all of the extra DOF that a bend-sensing glove gives the user.
An input device can handle a variety of interaction techniques depending on the logical mapping of the technique to the device. The major issue is whether that mapping makes the device and the subsequent interaction techniques usable. Therefore, an important consideration when choosing an input device in a 3D application is how the given device will map to the variety of interaction techniques required to perform application tasks. It is in these mappings where tradeoffs are usually made since very often a device will have a natural mapping to one or two of the interaction techniques in the application but relatively poor mapping to the others. 
This example makes the point that there is often a tradeoff when choosing an input device for a 3D application. In many cases, input devices have been designed for general use, which means that although they can be used for a variety of interaction techniques, they may not provide the best mapping for any one of them. Thus, several specialized devices may provide better usability than a single general-purpose device.

Q10. Write the name of the tools which help to choose input devices.

Ans: There are basically two tools that help in choosing input devices: input device taxonomies and empirical evaluations.
Input Device Taxonomies- Input device taxonomies can be a useful tool for determining which input devices can be substituted for each other, and they can also help in making decisions about which devices to use for particular tasks. In addition, they are an important part of 3D UI design because they provide a mechanism for understanding and discussing the similarities and differences among input devices. Here, we briefly review some of these input device taxonomies from a historical perspective to show the evolution of these tools. Additionally, we discuss how they can be used to help make decisions about choosing appropriate devices in 3D UIs.
One of the first input device taxonomies was developed by Foley and Wallace (1974). Their approach was to separate the input device from the interaction technique. They created a set of four virtual devices, which at the time covered most input devices. These virtual devices are the pick, locator, button, and valuator. A pick device is used to designate a user-defined object. A locator is used to determine position and/or orientation. A button is used to designate a system-defined object. Finally, a valuator is used to input a single value within a set of numbers. Two additional virtual devices, stroke and string, were added to this set by Enderle, Kansy, and Pfaff (1984). A stroke is a sequence of points, and a string is a sequence of characters.
This virtual device taxonomy proves useful in many different situations. For example, the 3D UI developer can use this taxonomy as a tool for quickly reducing the number of possible input devices to choose from by simply examining which virtual devices fit the application best and selecting the devices that fit in those categories. If a 3D locator is required in an application, then we can automatically eliminate all of the physical devices that do not map to this virtual device. However, this taxonomy does have a fundamental flaw, because devices that appear to be equivalent in the taxonomy can be dramatically different both physically and practically. For example, a mouse and a trackball are very different devices, yet are both considered to be 2D locators and stroke devices. 
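The filtering process described above is straightforward to express in code. The sketch below is an illustration, with an invented (and deliberately incomplete) device list: each physical device is tagged with the virtual devices it can act as, and the designer queries for devices that cover the application's requirements.

```python
# Sketch of using the Foley/Wallace virtual-device taxonomy as a filter.
# The device-to-capability table is illustrative and far from exhaustive;
# it also shows the taxonomy's flaw noted in the text: a mouse and a
# trackball are tagged identically despite being very different devices.

DEVICES = {
    "mouse":        {"locator-2d", "stroke", "button"},
    "trackball":    {"locator-2d", "stroke", "button"},
    "6dof-tracker": {"locator-3d"},
    "keyboard":     {"string", "button"},
}

def candidates(required):
    """Return device names whose capabilities cover all required virtual devices."""
    return sorted(name for name, caps in DEVICES.items()
                  if required <= caps)   # set containment: caps ⊇ required
```

Requiring a 3D locator immediately eliminates every device that cannot act as one, exactly as the text describes.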
As a result of these limitations, new taxonomies were developed to take other characteristics of the input devices into account. For example, Foley, Wallace, and Chan (1984) improved upon the virtual device taxonomy by mapping elementary interaction tasks (e.g., select, position, orient) to the devices that perform those tasks. Based on task requirements, only a limited set of devices could be used for any particular task. However, because the taxonomy is task-based, an input device can appear for more than one task.
Buxton (1983) developed a taxonomy that organizes continuous input devices into a 2D space, the dimensions of which are DOF and properties sensed (i.e., motion, position, pressure). Additionally, a sub-classification is used to distinguish devices that have a mechanical intermediary between the hand and the sensing mechanism from those that are touch-sensitive. An example of how this taxonomy classifies input devices is shown in the Figure.
Empirical Evaluations- In general, the taxonomies are useful for narrowing down the choice of input device for a particular task or 3D UI. However, in order to get concrete information about which devices are appropriate for given tasks, empirical studies are often required. In contrast to the lack of empirical work done on choosing appropriate output devices for 3D applications, there has been a good amount of research evaluating input devices for interacting in 3D. Performing empirical analyses of input devices is somewhat easier than performing comparisons of output devices, because it is easier to obtain quantitative measurements of device performance. Characteristics such as speed, accuracy, and ease of learning are often used to measure how a device will perform on a certain task. Studies have been conducted to determine the effectiveness of 3D input devices compared to traditional desktop devices such as the mouse. For example, Hinckley, Tullio, and colleagues (1997) compared the mouse with a 6-DOF tracking device for performing 3D object rotation tasks. Their results showed that the tracking device performed 36% faster than the 2D mouse without any loss of accuracy. In another study, Ware and Jessome (1988) compared the mouse and the bat for manipulating 3D objects. These results indicated that 3D object manipulation was easier to perform with the bat than with the mouse. Although these are only two studies, they do suggest that 3D input devices with 3 DOF or more are better than a mouse for handling freeform 3D object manipulation.
 
