UNIT-4 3D INTERACTION TECHNIQUES ||LONG QUESTION ANSWER || AR & VR AKTU NOTES


Q 1. What are 3D manipulation tasks?

Ans: The effectiveness of 3D manipulation techniques greatly depends on the manipulation tasks to which they are applied. The same technique could be intuitive and easy to use in some task conditions and utterly inadequate in others. For example, the techniques needed for the rapid arrangement of virtual objects in immersive modeling applications could be very different from the manipulation techniques used to handle surgical instruments in a medical simulator. Therefore, before discussing interaction techniques, it is important to define what we actually mean by manipulation.
Manipulation usually refers to any act of handling physical objects with one or two hands. For the practical purpose of designing and evaluating 3D manipulation techniques, we narrow the definition of the manipulation task to spatial rigid object manipulation—that is, manipulations that preserve the shape of objects. This definition is consistent with an earlier definition of the manipulation task in 2D UIs.

Q 2. What are the basic 3D manipulation tasks?

Ans: We designate the following tasks as basic manipulation tasks:

  • Selection

    Selection is the task of acquiring or identifying a particular object or subset of objects from the entire set of objects available. Sometimes it is also called a target acquisition task. The real-world counterpart of the selection task is picking up one or more objects with a hand, pointing to one or more objects, or indicating one or more objects by speech. Depending on the number of targets, we can distinguish between single-object selection and multiple-object selection.

  • Positioning

    Positioning is the task of changing the 3D position of an object. The real-world counterpart of positioning is moving an object from a starting location to a target location.

  • Rotation

    Rotation is the task of changing the orientation of an object. The real-world counterpart of rotation is rotating an object from a starting orientation to a target orientation.

  • Scaling

    Scaling is the task of changing the size of an object. While this task lacks a direct real-world counterpart, scaling is a common virtual manipulation for both 2D and 3D UIs. Hence, we include it as a basic manipulation task.

Each basic task is characterized by a set of task parameters:

  • Selection: distance and direction to target, target size, density of objects around the target, number of targets to be selected, target occlusion.
  • Positioning: distance and direction to initial position, distance and direction to target position, translation distance, required precision of positioning.
  • Rotation: distance to target, initial orientation, final orientation, amount of rotation, required precision of rotation.
  • Scaling: distance to target, initial scale, final scale, amount of scale, required precision of scale.
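The four basic tasks can be sketched as operations on a minimal rigid-object record. All class, field, and function names below are illustrative assumptions for this sketch, not part of any standard API:

```python
import math

class RigidObject:
    """Minimal rigid object: 3D position, orientation (yaw about one axis,
    in radians), and uniform scale. Shape is preserved by all operations."""
    def __init__(self, position=(0.0, 0.0, 0.0), yaw=0.0, scale=1.0):
        self.position = list(position)
        self.yaw = yaw
        self.scale = scale

    def translate(self, dx, dy, dz):
        """Positioning task: change the 3D position of the object."""
        self.position[0] += dx
        self.position[1] += dy
        self.position[2] += dz

    def rotate(self, dyaw):
        """Rotation task: change the orientation of the object."""
        self.yaw = (self.yaw + dyaw) % (2 * math.pi)

    def rescale(self, factor):
        """Scaling task: change the size of the object."""
        self.scale *= factor

def select(objects, target_id):
    """Selection task: acquire one particular object from the available set."""
    return next(obj for oid, obj in objects if oid == target_id)
```

Multiple-object selection would return a subset rather than a single object; the single-target form above is the simplest case.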

Q 3. What are the classifications for 3D manipulation ?

Ans: Many 3D manipulation techniques relate to one another, and many share common properties. Classifying them according to common features is useful in understanding the relations between different groups of techniques and can help us grasp a larger picture of the technique design space. Some classifications are:


  1. Pointing Techniques -

    The motivation behind the pointing technique is to allow the user to easily select and manipulate objects located beyond the area of reach by simply pointing at them. When the vector defined by the direction of pointing intersects a virtual object, the user can select it by issuing a trigger event that confirms the selection. Examples of triggers are buttons and voice commands. After the object is selected, it can be attached to the end of a pointing vector for manipulation. Pointing is a powerful selection technique. A number of experimental evaluations have demonstrated that it results in better selection performance than grasping-based techniques because pointing requires significantly less physical hand movement from the user.
    Pointing techniques differ from one another mostly along two design variables: first, how the pointing direction is defined (i.e., how the input device position and orientation are mapped onto the direction of the ray), and second, the type of selection calculation, which defines the visual feedback provided and how many objects are selected when users point at them. Based on this second variable, we organize pointing techniques in this section into two categories: vector-based and volume-based techniques.

    • Vector-based pointing techniques:

      These techniques require only a vector in order to calculate which object the user intends to select and manipulate. This makes vector-based pointing rather easy to implement, and as a result these techniques are commonly used for pointing in 3D UIs. In this section, we discuss the following vector-based pointing techniques:

      1. Ray casting-

        With ray-casting, the user points at objects with a virtual ray that defines the direction of pointing, and a virtual line segment attached to the hand visualizes the pointing direction. The pointing vector in the case of the simple ray-casting technique is estimated from the direction of the virtual ray that is attached to the user’s virtual hand and the 3D position of the virtual hand. In cases where the hand’s position and orientation cannot be tracked (or cannot be tracked accurately), the ray may emanate from the tracked head position and extend in the direction the head is pointing; this is termed “gaze-based ray-casting.” More than one object can be intersected by the ray, but only the one closest to the user should be selected; thus the interaction technique must consider all possible candidates for selection.
        In the simplest case of the ray-casting technique, the shape of the ray can be a short line segment attached to the user’s hand. This, however, could be difficult to use when selecting small objects located far away, as it does not provide the user with sufficient visual feedback on whether the ray is actually intersecting the virtual object. An infinitely long virtual ray provides the user with better visual feedback, as it allows the user to select objects simply by touching them with the ray.

        [Figure: Shooting with ray-casting]
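Ray-casting selection can be sketched as an intersection test against each object followed by a choice of the nearest hit. Representing objects as spherical proxies, and all names below, are simplifying assumptions for this sketch:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance along the (normalized) ray to the sphere's surface,
    or None if the ray misses. Spheres behind the origin are ignored."""
    oc = [c - o for o, c in zip(origin, center)]
    proj = sum(d * v for d, v in zip(direction, oc))   # center projected onto ray
    if proj < 0:
        return None                                    # sphere is behind the user
    closest_sq = sum(v * v for v in oc) - proj * proj  # squared ray-to-center distance
    if closest_sq > radius * radius:
        return None
    return proj - math.sqrt(radius * radius - closest_sq)

def ray_cast_select(origin, direction, objects):
    """Among all objects intersected by the ray, select the one closest
    to the user, as the technique requires."""
    best, best_t = None, float("inf")
    for name, center, radius in objects:
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and t < best_t:
            best, best_t = name, t
    return best
```

A real implementation would test against actual object geometry (meshes or bounding volumes) rather than sphere proxies, but the candidate-gathering and nearest-hit logic stays the same.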

      2. Fishing Reel-

        The difficulty of controlling the distance to the virtual objects being manipulated is a problem for all pointing techniques. One possible solution is to supply the user with an additional input mechanism dedicated to controlling the length of the virtual ray. Similar to the way a fishing reel works, this technique allows the user to select an object with the simple ray-casting technique, then reel it back and forth using the dedicated input mechanism, which could be, for example, a simple mechanical slider, a joystick, or a pair of buttons added to the tracking device. Although the fishing reel lets the user control the distance to the object, it separates the manipulation’s degrees of freedom—the ray direction is controlled by the spatial movements of the user’s hand, while distance is controlled by other means.
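The separation of degrees of freedom described above can be sketched as follows: hand movement controls the ray direction, while a dedicated input changes only the distance. The class and method names are illustrative assumptions:

```python
class FishingReel:
    """Object attached at a controllable distance along the pointing ray."""
    def __init__(self, distance, min_dist=0.1):
        self.distance = distance
        self.min_dist = min_dist   # keep the object from being reeled behind the hand

    def reel(self, delta):
        """Dedicated input (slider, joystick, or button pair) changes only
        the length of the virtual ray."""
        self.distance = max(self.min_dist, self.distance + delta)

    def attached_position(self, hand_pos, ray_dir):
        """Hand position and ray direction (normalized) control the rest:
        the object sits at hand + distance * direction."""
        return tuple(p + self.distance * d for p, d in zip(hand_pos, ray_dir))
```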

    • Volume-based pointing techniques:

      These techniques require the definition of a vector and a volume in order to determine what the user intends to select and manipulate. Usually, the volume is defined in some relation to the vector, such as using a vector to define the axis of a cone. However, in some cases, the vector is used to intersect an object, which then defines the position of a volume given that intersection point. In this section, we discuss the following volume-based pointing examples:

      1. Flash-light-

        The flashlight technique was developed to provide a “soft” selection technique that does not require precision and accuracy of pointing to virtual objects with the ray. The technique imitates pointing at objects with a flashlight, which can illuminate an object even when the user does not point at it precisely. In the flashlight technique, the pointing direction is defined in the same way as in the simple ray-casting technique, but the virtual ray is replaced with a conic selection volume, with the apex of the cone at the input device. Objects that fall within this selection cone can be selected, so the technique allows easy selection of small objects even when they are located far from the user.

        The obvious problem with the flashlight technique is disambiguation of the desired object when more than one object falls into the spotlight. Two rules are used. First, if two objects fall into the selection volume, the object closer to the centerline of the selection cone is selected. Second, if the angle between each object and the centerline of the selection cone is the same, the object closer to the device is selected.

        The flashlight technique does not require that an entire object fall into the spotlight: even if an object is only touched by the side of the selection volume (“illuminated” by the flashlight), it can be considered a selection candidate. Although this makes it very easy to select virtual objects, this ease of selection becomes a disadvantage when selection of small objects or tightly grouped objects is required. In these situations (as well as some others), it is desirable to directly specify the spread angle of the selection cone.
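The cone test and the two disambiguation rules can be sketched as follows, treating each object as a point at its center (a simplifying assumption, since the text notes that partially "illuminated" objects also count); all names are illustrative:

```python
import math

def flashlight_select(apex, direction, spread_angle, objects):
    """Select among object centers inside the selection cone.
    Rule 1: the object with the smaller angle to the centerline wins.
    Rule 2: on an angle tie, the object closer to the device wins."""
    candidates = []
    for name, center in objects:
        v = [c - a for a, c in zip(apex, center)]
        dist = math.sqrt(sum(x * x for x in v))
        if dist == 0:
            continue
        cos_angle = sum(d * x for d, x in zip(direction, v)) / dist
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        if angle <= spread_angle:
            candidates.append((angle, dist, name))
    if not candidates:
        return None
    # Tuple ordering applies rule 1 (angle) first, then rule 2 (distance).
    return min(candidates)[2]
```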

      2. Aperture Selection-

        The aperture technique makes this possible. It is a modification of the flashlight technique that allows the user to interactively control the spread of the selection volume. The pointing direction is defined by the vector from the user’s viewpoint in virtual space (estimated from the tracked head location) through the position of a hand sensor, which is represented as an aperture cursor within the 3D UI. By moving the aperture cursor closer to or farther from the eye, the user controls the spread angle of the selection cone.
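The interactive spread control can be sketched geometrically: if the cone's apex is at the eye and its sides pass through the rim of the aperture cursor, the half-angle follows from the cursor's radius and its distance from the eye. This derivation and the function name are assumptions for illustration:

```python
import math

def aperture_spread_angle(eye_pos, cursor_pos, aperture_radius):
    """Half-angle of the selection cone whose apex is at the eye and whose
    sides pass through the rim of the aperture cursor. Bringing the cursor
    closer widens the cone; moving it away narrows the cone."""
    dist = math.sqrt(sum((c - e) ** 2 for e, c in zip(eye_pos, cursor_pos)))
    return math.atan2(aperture_radius, dist)
```

The returned angle could feed directly into a cone test such as the flashlight selection above.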

  2. Surface Techniques

    Numerous interaction techniques have been investigated and developed for interacting with 2D contexts on multi-touch displays and surfaces. We discuss the following 2D surface-based techniques:

    1. Dragging :

      Dragging involves directly selecting and translating an object by touching it with one or more fingers and then sliding them across the surface. The most common approach is to use a single finger for this interaction. Dragging results in the virtual object being translated within a 2D plane coinciding with or parallel to the surface. The distance and direction of the translation are given by the 2D vector from the initial contact point to the final contact point, at which the user lifts his or her fingers from the surface.
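The drag translation reduces to adding the touch-down-to-lift-off vector to the object's 2D position in the surface plane; the function name here is illustrative:

```python
def drag_translate(object_pos, touch_down, touch_up):
    """Translate an object within the surface plane by the 2D vector from
    the initial contact point to the point where the finger lifts off."""
    dx = touch_up[0] - touch_down[0]
    dy = touch_up[1] - touch_down[1]
    return (object_pos[0] + dx, object_pos[1] + dy)
```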

    2. Rotating:

      A number of surface-based 2D interaction techniques have been investigated for rotating virtual objects. The most commonly used approach is an independent rotation that occurs about the center of the object.
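One common implementation (an assumption here, not stated in the text) derives the rotation about the object's center from the change in orientation of the segment joining two contact points:

```python
import math

def two_finger_rotation(p1_start, p2_start, p1_end, p2_end):
    """Rotation angle (radians, counterclockwise) to apply about the
    object's center: the change in orientation of the line segment
    joining the two finger contact points."""
    a0 = math.atan2(p2_start[1] - p1_start[1], p2_start[0] - p1_start[0])
    a1 = math.atan2(p2_end[1] - p1_end[1], p2_end[0] - p1_end[0])
    return a1 - a0
```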

    3. Pinching :

      Another common surface-based interaction technique is pinching (or splaying) the fingers to visually shrink (or enlarge, respectively) a virtual object. This technique visually scales the virtual object based on the distance between two contact points. If the two contact points are pinched, or dragged toward one another, the distance between them decreases and the object shrinks; if they are splayed apart, the distance increases and the object grows.
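The distance-based scaling reduces to the ratio of the current to the initial distance between the two contact points; the function name is illustrative:

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Scale factor for the object: ratio of the current to the initial
    distance between the two contact points (< 1 when the fingers are
    pinched together, > 1 when they are splayed apart)."""
    d0 = math.dist(p1_start, p2_start)
    d1 = math.dist(p1_end, p2_end)
    return d1 / d0
```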
  3. Indirect Techniques -

    These techniques allow the user to manipulate virtual objects without directly interacting with them; hence they are referred to as indirect interaction techniques.
