
Body Warping



Introduction

Azmandian et al. created a very simple and illustrative example for understanding the concept of Body Warping.

A real cube A is positioned on a desk in front of the user, and its virtual counterpart A' is shifted slightly to the right. As the user reaches for the cube, the real hand must be redirected to the left so that the real hand meets the real cube while the virtual hand meets the virtual cube. A straightforward way of achieving this is to shift the rendering of the virtual hand to the right, which leads the user to compensate for the shift by moving the real hand in the opposite direction, i.e. to the left.

Because an instantaneous shift would be directly noticeable and could disturb the user, the offset (also called warp) is usually applied incrementally, depending on the hand's progression towards the real object.

[Figure: bodyWarp — incremental warping of the virtual hand during a reach]
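
The core idea can be sketched in a few lines of Unity C#. The snippet below is only a minimal illustration of this incremental warping, not the toolkit's implementation, and all names in it are hypothetical:

using UnityEngine;

// Minimal, hypothetical sketch of incremental body warping (not the toolkit's code):
// the virtual hand receives a fraction of the target offset that grows with reach progress.
public static class BodyWarpSketch
{
    public static Vector3 WarpHand(Vector3 realHand, Vector3 warpOrigin,
                                   Vector3 realTarget, Vector3 virtualTarget)
    {
        // full offset between the virtual and the real target
        Vector3 offset = virtualTarget - realTarget;

        // reach progress: 0 at the warp origin, 1 when the real hand reaches the real target
        float total = Vector3.Distance(warpOrigin, realTarget);
        float progress = total > 0f
            ? Mathf.Clamp01(1f - Vector3.Distance(realHand, realTarget) / total)
            : 1f;

        // the virtual hand is rendered at the real hand position plus the partial offset
        return realHand + progress * offset;
    }
}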

Implementations

In our toolkit, we implemented various approaches:

Azmandian et al.


To define an incremental warp that will be applied to the virtual hand, Azmandian et al. first measured the distance vector T between the virtual target W_V and the real target W_T:

T = W_V - W_T

The position of the user's physical hand at the moment body warping is activated defines the warping origin W_O. With the current position P_H of the physical hand, Azmandian et al. computed the warping ratio as follows:

\alpha = \max\left(0,\ \min\left(1,\ \frac{(W_T - W_O) \cdot (P_H - W_O)}{(W_T - W_O) \cdot (W_T - W_O)}\right)\right)

The final warp is w = \alpha \cdot T. To compute the final virtual hand position, this warp is added to the position of the physical hand: P_H + w.
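
For illustration, a quick numerical check with hypothetical positions (in metres), where the hand is halfway between the warping origin and the real target:

W_O = (0, 0, 0),\quad W_T = (0, 0, 0.4),\quad P_H = (0, 0, 0.2) \;\Rightarrow\; \alpha = \frac{0.4 \cdot 0.2}{0.4 \cdot 0.4} = 0.5,\quad w = 0.5\,T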

Note: We fixed a typo in the formula depicted in the original paper by switching min and max.

Implementation

The implementation can be found in _scripts/Redirection/RedirectionTechniques/BodyWarping/Azmandian_BodyWarping.cs

public override void Init(RedirectedPrefab redirectedPrefab, Transform head)
        {
            _t = redirectedPrefab.GetVirtualTargetPos() - redirectedPrefab.GetRealTargetPos();
        }


public override void ApplyRedirection(Transform realHandPos, Transform virtualHandPos, Transform warpOrigin, RedirectedPrefab target,
            Transform bodyTransform)
        {
            // set pH to the user's hand position
            pH = realHandPos.position;
            // define warping origin
            w0 = warpOrigin.position;
            // define the warping end
            wT = target.GetRealTargetPos();
            // compute the warping ratio
            a = Mathf.Max(0, Mathf.Min(1, (Vector3.Dot((wT - w0), (pH - w0))) / Vector3.Dot(wT - w0, wT - w0)));
            // compute the new position
            var w = a * _t;
            // apply the warp to the virtual hand
            virtualHandPos.position = realHandPos.position + w;
        }
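
In the toolkit, these methods are typically driven by the framework (e.g. the RedirectionManager) rather than called manually. The following sketch only illustrates the expected call order; the driver class and its fields are hypothetical, while Init and ApplyRedirection use the signatures shown above:

using UnityEngine;

// Hypothetical driver illustrating the call order of the technique
// (in the toolkit, the framework issues these calls).
public class AzmandianDriverSketch : MonoBehaviour
{
    public Azmandian_BodyWarping technique;     // the technique component described above
    public RedirectedPrefab redirectedPrefab;   // provides the real and virtual target positions
    public Transform realHand, virtualHand, warpOrigin, head, body;

    void Start()
    {
        // Init is called once when body warping is activated for a target
        technique.Init(redirectedPrefab, head);
    }

    void Update()
    {
        // ApplyRedirection is called every frame while the user reaches for the target
        technique.ApplyRedirection(realHand, virtualHand, warpOrigin, redirectedPrefab, body);
    }
}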

More information:
Mahdi Azmandian et al., Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences.
In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 1968–1979.
DOI: 10.1145/2858036.2858226

Back to top

Cheng et al.


Cheng et al. extended Azmandian et al.'s work. They first compute the distance vector T between the real target P_p and the virtual target P_v:

T = P_v - P_p

To compute the virtual hand position, the algorithm by Cheng et al. then gradually adds this offset to the real hand position, depending on the distance between the real hand and the real target. The amount of offset applied at any given moment is represented by the shift ratio \alpha, which ranges from 0 (at the beginning of a redirection) to 1 (the full offset T, applied when the virtual hand reaches the virtual target and the real hand reaches the real target). H_0 denotes the warping origin, i.e. the location of the real hand in the frame in which the redirection is started.

\alpha = \frac{d_s}{d_s + d_p},\qquad d_s = \lVert H_p - H_0 \rVert,\qquad d_p = \lVert H_p - P_p \rVert

The final warp is W = \alpha \cdot T and is added to the real hand position H_p to determine the virtual hand position: H_v = H_p + W.
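
For illustration, with hypothetical distances:

d_s = 0.3\,\text{m},\quad d_p = 0.1\,\text{m} \;\Rightarrow\; \alpha = \frac{0.3}{0.3 + 0.1} = 0.75,\quad H_v = H_p + 0.75\,T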

Cheng et al. also introduced a zero warp distance: retargeting is reduced to zero whenever the user retracts the hand close to the body, so hand retargeting only starts once the user's hand exceeds a given distance D from the body. Since this feature can be combined with every body warping technique, the zero warp distance option can be activated for all body warping techniques in our toolkit.

Implementation

public override void Init(RedirectedPrefab redirectedPrefab, Transform head)
        {
            // compute the distance vector between the virtual and physical target
            _t = redirectedPrefab.GetVirtualTargetPos() - redirectedPrefab.GetRealTargetPos();
            _t0 = Vector3.zero;
        }


public override void ApplyRedirection(Transform realHandPos, Transform virtualHandPos, 
            Transform warpOrigin, RedirectedPrefab target, Transform playerTransform)
        {
            var ds = 0f;
            // compute the distance between the real hand position and the user's body position
            var dist = Vector3.Distance(realHandPos.position, playerTransform.position);
            // check whether the hand is below the zero warp distance or not. If so, set ds to the distance which leads to no redirection,
            // else set ds to the length between the physical hand position Hp and the warping origin H0
            ds = dist < zeroWarpDistance ? dist : (realHandPos.position - warpOrigin.position).magnitude;
            
            // compute the length between the physical hand and the physical target
            var dp = (realHandPos.position - target.GetRealTargetPos()).magnitude;
            // compute the shift ratio, it ranges between 0 and 1
            var a = ds / (ds + dp);
            // compute the warp 
            var w = a * _t + (1 - a) * _t0;
            // apply the warp to the virtual hand
            virtualHandPos.position = realHandPos.position + w;
        }

More information:
Lung-Pan Cheng et al.
Sparse Haptic Proxy: Touch Feedback in Virtual Environments Using a General Passive Prop.
In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3718–3728.
DOI: 10.1145/3025453.3025753

Back to top

Han et al.


Han et al. created two remapping techniques for reaching in VR:

  1. Translational Shift, which introduces a static offset between the virtual and physical hand.
  2. Interpolated Reach, which dynamically interpolates the position of the virtual hand during a reaching motion.

Translational Shift applies a positional offset to the virtual hand, given by the vector from the real target to the virtual target. This offset is applied instantly and remains constant. The virtual hand position P_{vh} under translational shift is given by:

P_{vh} = P_{ph} + (P_{vo} - P_{po})

Here, P_{ph} is the physical hand’s actual world position and P_{po} and P_{vo} are the world positions of the physical object and virtual object, respectively.

Interpolated Reach: Unlike Translational Shift, which applies the offset instantly, Interpolated Reach gradually applies the offset to the virtual hand while the real hand moves towards the real object. The maximum offset is reached when the real hand meets the real object, so that the virtual hand meets the virtual object. To dynamically map the physical hand position in the real world to the offset position, the technique uses the calculated offset between the virtual object and the physical object. For interpolation, the virtual hand position P_{vh} is given by the following equation:

P_{vh} =
\begin{cases}
P_{ph}, & \text{if}\ D \geq B \\
P_{ph} + (P_{vo} - P_{po}) * (1 - \frac{D}{B}), & \text{otherwise}
\end{cases}

Here, 1 - \frac{D}{B} is the offset control value, which determines how much of the offset is applied to the virtual hand, and D is the distance between the physical object and the physical hand. As the hand moves closer to the real object, D approaches 0. The range in which the interpolation is applied is determined by the constant interpolation boundary B:

B=|P_{po}-P_{ph}|+C

The interpolation boundary results from the initial distance between the physical hand and the physical object, plus a small buffer value C. This additional buffer ensures that the hand remains within the interpolation range regardless of the initial hand movement. Their implementation uses C = 0.1 meters. Without the buffer, the hand would start at the furthest edge of the boundary region, and the user might reach in a direction away from the effective range, missing the object completely because no offset is applied. If D \geq B, the hand is outside the area of influence, so the virtual hand returns to a one-to-one mapping with the physical hand.
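
For illustration, with hypothetical values — a reach that starts 0.5 m away from the physical object, evaluated when the hand is 0.3 m from it:

B = 0.5\,\text{m} + 0.1\,\text{m} = 0.6\,\text{m},\qquad D = 0.3\,\text{m} \;\Rightarrow\; 1 - \frac{D}{B} = 0.5,\qquad P_{vh} = P_{ph} + 0.5\,(P_{vo} - P_{po})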

Note: In both approaches we had to reverse the vector \vec{P_{vo}P_{po}}, so we swapped (P_{po} - P_{vo}) to (P_{vo} - P_{po}).

Implementation

public override void Init(RedirectedPrefab redirectedPrefab, Transform head)
        {
            var pPH = RedirectionManager.instance.realHand.transform.position;
            var pPO = redirectedPrefab.GetRealTargetPos();
            
            // interpolation boundary
            b = Vector3.Distance(pPO, pPH) + c;
        }

public override void ApplyRedirection(Transform realHandPos, Transform virtualHandPos, Transform warpOrigin,
            RedirectedPrefab target,
            Transform bodyTransform)
        {
            // set pPH to the user's physical hand position
            var pPH = realHandPos.position;
            // set pPO to physical (object) target position
            var pPO = target.GetRealTargetPos();
            // set pVO to virtual (object) target position
            var pVO = target.GetVirtualTargetPos();
            
            var warp = Vector3.zero;
            // apply the warp depending on the chosen technique
            switch (han_RedirectionTechnique)
            {
                case Han_Technique.TranslationalShift:
                    warp = (pVO - pPO); 
                    // apply warp to virtual hand
                    virtualHandPos.position = pPH + warp;
                    return;
                
                case Han_Technique.InterpolatedReach:
                    // distance between the physical obj and physical hand
                    var d = Vector3.Distance(pPO, pPH);
                    if (d >= b)
                    {
                        warp = Vector3.zero;
                    }
                    else
                    {
                        warp = (pVO - pPO) * (1 - (d / b));
                    }
                    // apply warp to virtual hand
                    virtualHandPos.position = pPH + warp;
                    break;
                
                default:
                    throw new Exception("No Han_Redirection Technique Set");
            }
            
        }

More information:
D. T. Han, M. Suhail, and E. D. Ragan. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 4, pp. 1467-1476, April 2018. DOI: 10.1109/TVCG.2018.2794659

Back to top

Zenner and Krüger (Original Approach)


The previous hand redirection methods aim to warp the virtual hand such that it arrives at the virtual target at the same time as the real hand arrives at the real target. Zenner and Krüger's approach, however, was developed with the goal of finding detection thresholds for hand redirection. This approach therefore does not retarget the hand towards a selected target; instead, the redirection angle can be freely chosen. In particular, the redirection angle is not calculated but given as a fixed value that is applied to the virtual hand. From the movement of the physical hand and the redirection angle, a warped position for the virtual hand is computed: Zenner and Krüger project the hand position onto a 2D plane as it moves away from the warp origin, apply the redirection angle, and project the result back into 3D space. They implemented two general methods, the rotational warp and the gain warp.

The RotationalWarp computes a warped position for the virtual hand \vec{p_v} from a given real hand position \vec{p_r}, the warp origin \vec{o}, a redirection angle \alpha and a 2D plane. This plane is defined by a unit forward vector \vec{f} and an orthogonal unit redirection vector \vec{r}.

The GainWarp computes the warped position for the virtual hand by scaling the offset vector \vec{d_r} of the real hand from the warp origin. This effectively decreases or increases the movement speed of the virtual hand.

Zenner and Krüger investigated thresholds for three different redirection dimensions: Horizontal, Vertical, and Gain-Based. For the first two, they used the Rotational Warp algorithm; the horizontal and vertical hand redirections are special cases of the general Rotational Warp, using the world axes to span the 2D planes. For the gain-based redirection, they used the Gain Warp algorithm. The figure below shows these redirections in more detail.

[Figure: ZennerDimensions — horizontal, vertical, and gain-based hand redirection]

In our toolkit, both methods are implemented, allowing users to directly choose between horizontal, vertical, gain-based, or custom hand redirection. For custom hand redirection, the two vectors that span the plane can be set manually. To do so, we use a public Transform that can be set in the Unity Editor and use its forward and right vectors as the forward and redirection vectors of the rotational warp algorithm, respectively.

Implementation

public override void ApplyRedirection(Transform realHandPos, Transform virtualHandPos, Transform warpOrigin, RedirectedPrefab target,
            Transform bodyTransform)
        {
            switch (selectedWarpingMode)
            {
                case WarpingMode.Horizontal:
                    forward = Vector3.forward.normalized;
                    redirection = Vector3.right.normalized;
                    var horizontalWarp = RotationalWarp(realHandPos.position, warpOrigin.position, forward, redirection, redirectionAngleAlpha);
                    virtualHandPos.position = horizontalWarp;
                    break;
                
                case WarpingMode.Vertical:
                    forward = Vector3.forward.normalized;
                    redirection = Vector3.up.normalized;
                    var verticalWarp = RotationalWarp(realHandPos.position, warpOrigin.position,forward, redirection, redirectionAngleAlpha);
                    virtualHandPos.position = verticalWarp;
                    break;
                
                case WarpingMode.GainBased:
                    var gainBasedWarp = GainWarp(realHandPos.position, warpOrigin.position, redirectionAngleAlpha);
                    virtualHandPos.position = gainBasedWarp;
                    break;
                
                case WarpingMode.Custom:
                    forward = customTransformCoordinates.forward.normalized;
                    redirection = customTransformCoordinates.right.normalized;
                    var customWarp = RotationalWarp(realHandPos.position, warpOrigin.position,forward, redirection, redirectionAngleAlpha);
                    virtualHandPos.position = customWarp;
                    break;
               
                default:
                    throw new ArgumentOutOfRangeException();
            }

        }

public static Vector3 RotationalWarp(Vector3 handPosReal, Vector3 o, Vector3 f, Vector3 r, float redirectionAngle) 
        {
            // compute unit height vector
            var h = Vector3.Cross(f, r).normalized;
            // signed height of the real hand above the redirection plane
            var height = Vector3.Dot(handPosReal - o, h);
            // project on redirection plane
            var pProj = handPosReal - height * h;
            // unwarped offset in plane
            var dProjR = pProj - o;
            // angle rel. to f & o
            var aR = Mathf.Atan2(Vector3.Dot(dProjR, r), Vector3.Dot(dProjR, f)) ;

            // adding angular offset
            var aV = aR + redirectionAngle * Mathf.Deg2Rad; 

            // warped offset in plane
            var dProjV = Mathf.Sin(aV) * Vector3.Magnitude(dProjR) * r +
                     Mathf.Cos(aV) * Vector3.Magnitude(dProjR) * f;
            
            // final warped position
            var pos = o + dProjV + height * h;
            return pos;
        }

        public static Vector3 GainWarp(Vector3 handPosReal, Vector3 o, float gainFactor)
        {
            // unwarped offset from origin
            var dR = handPosReal - o;
            // warped offset from origin
            var dV = gainFactor * dR;
            // final warped position
            var pos = o + dV;
            return pos;

        }
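
Since RotationalWarp and GainWarp are public and static, they can also be called directly. The example below uses hypothetical values; only the class and method names are taken from the toolkit:

using UnityEngine;

// Hypothetical example: calling the two static warp functions directly with made-up values.
public class ZennerWarpExample : MonoBehaviour
{
    void Start()
    {
        Vector3 origin   = Vector3.zero;
        Vector3 realHand = new Vector3(0f, 1f, 0.4f);   // 40 cm in front of the origin, 1 m up

        // horizontal redirection by 10 degrees: forward = world z-axis, redirection = world x-axis
        Vector3 rotated = Zenner_BodyWarping.RotationalWarp(realHand, origin, Vector3.forward, Vector3.right, 10f);

        // gain-based redirection: a factor of 1.1 makes the virtual hand travel 10% farther than the real hand
        Vector3 gained = Zenner_BodyWarping.GainWarp(realHand, origin, 1.1f);

        Debug.Log($"rotational warp: {rotated}, gain warp: {gained}");
    }
}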

More information:
A. Zenner and A. Krüger. Estimating Detection Thresholds for Desktop-Scale Hand Redirection in Virtual Reality. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 47-55. DOI: 10.1109/VR.2019.8798143

Back to top

Zenner and Krüger (Target-Based Approach)

Based on Zenner and Krüger's work, we created a new redirection technique that takes a virtual and a real target location as an input (like the other body warping techniques). Instead of manually setting the redirection angle, we compute it based on the warp origin and the targets. For this we:

  1. Create a 2D plane from the warping origin, the real target, and the virtual target.
  2. Compute the redirection angle from this plane: the angle between the vector from the origin to the physical target and the vector from the origin to the virtual target.
  3. Compute the gain factor: the ratio of the distance from the origin to the virtual target to the distance from the origin to the real target.
  4. Apply the redirection angle and the gain factor simultaneously to the position of the real hand to compute the warped position of the virtual hand.
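
For illustration, with a hypothetical warping origin O, real target P, and virtual target V (in metres; \alpha corresponds to redirectionAngleAlpha and g to gainFactor in the code below):

O = (0, 0, 0),\quad P = (0, 0, 0.5),\quad V = (0.1, 0, 0.5) \;\Rightarrow\; \alpha = \angle(P - O,\ V - O) \approx 11.3^{\circ},\quad g = \frac{\lVert V - O \rVert}{\lVert P - O \rVert} \approx 1.02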

Implementation

public override void Init(RedirectedPrefab redirectedPrefab, Transform head)
    {
        var targetRealPos = redirectedPrefab.GetRealTargetPos();
        var targetVirtualPos = redirectedPrefab.GetVirtualTargetPos();
        var warpingOrigin = RedirectionManager.instance.warpOrigin.transform.position;

        var originToRealTarget = targetRealPos - warpingOrigin;
        var originToVirtualTarget = targetVirtualPos - warpingOrigin;
        
        // create custom plane
        _plane.Set3Points(warpingOrigin, targetRealPos, targetVirtualPos);
        // set forward vector 
        forward = (originToVirtualTarget).normalized;
        redirection = Vector3.Cross(forward, -_plane.normal).normalized;
        // compute redirection angle alpha
        redirectionAngleAlpha = Vector3.Angle(originToRealTarget, originToVirtualTarget); 
        // compute gain factor
        gainFactor = (originToVirtualTarget.magnitude / originToRealTarget.magnitude);
    }
    
    public override void ApplyRedirection(Transform realHandPos, Transform virtualHandPos, Transform warpOrigin, RedirectionObject target,
        Transform bodyTransform)
    {
        // compute rotational warp
        var warp = Zenner_BodyWarping.RotationalWarp(realHandPos.position, warpOrigin.position,forward, redirection, redirectionAngleAlpha);
        // apply the gain warp on top of the rotational warp
        var gain = Zenner_BodyWarping.GainWarp(warp, warpOrigin.position, this.gainFactor);
        // apply it to virtual hand
        virtualHandPos.position = gain;
    }

Blink-Suppressed Hand Redirection Algorithm by Zenner, Regitz, Krüger

Blink-Suppressed Hand Redirection (BSHR) is based on the body-warping algorithm by Cheng et al. Similar to the algorithm by Cheng et al., BSHR continuously increases the hand offset when eyes are open. However, the continuous shift of the virtual hand is only applied below detection thresholds. When the user closes their eyes, an additional, instantaneous shift is added to the virtual hand position.

Click here to see a video about the technique

How the algorithm works

For all the details about the technique, please see the description in the paper.

In summary, the algorithm works as follows:

  • BSHR redirects the user starting at an origin location O.
  • Initially, a dummy location P' is computed:
    • P' is defined as the closest point to the physical target P that lies on the direct connection of the virtual target V and P but still is inside the unnoticeability area.
    • The unnoticeability range encompasses all real positions around V reachable with the real hand without exceeding any of the detection thresholds for (a) the redirection angle (βmax), (b) the minimum gain (gmin), or (c) the maximum gain (gmax).
    • Based on the findings by Zenner and Krüger [2], the detection thresholds are: βmax = 4.5°; gmin = real / virtual = 0.94; gmax = 1.14.
  • The main loop function re-computes the virtual hand position Hv (see formula (1)):
    • Similarly to Cheng et al., a warp vector W is calculated, which represents the offset of the virtual hand from the real hand at position Hp (2)
    • When the user hasn't blinked yet, the real hand is continuously redirected towards the dummy location P'
    • Once a valid eye blink is detected, the remaining distance from P to P' is instantaneously added to the hand offset (4)
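
The warp computed in the main loop can be reconstructed from the implementation below; the notation follows the bullets above and may differ slightly from the paper's numbered equations:

d_s = \lVert (H_p + b) - O \rVert,\qquad d_p = \lVert (H_p + b) - P' \rVert,\qquad \alpha = \frac{d_s}{d_s + d_p},\qquad W = \alpha\,(V - P') + b,\qquad H_v = H_p + W

where the blink offset b is zero before a valid blink and b = P' - P once a valid blink has been detected.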

Implementation

public override void Init(RedirectionObject redirectionObject, Transform head, Vector3 warpOrigin)
    {

        v = redirectionObject.GetVirtualTargetPos();         // Virtual Target V
        p = redirectionObject.GetRealTargetPos();            // Real Target P
        o = warpOrigin;                                      // Origin O

        ov = v - o;                     // Vector origin O -> virtual target V
        op = p - o;                     // Vector origin O -> real target P
        vp = p - v;                     // Vector virtual target V -> real target P
        b = Vector3.zero;               // offset vector is set to 0 at the start
 
        ComputeDummyTarget();
        vp_ = p_ - v;                   // Vector virtual target V -> dummy target P'
         
        GetComponent<BlinkDetector>().running = true;
        GetComponent<BlinkDetector>().validBlinkDetected = false;

    }

    public override void ApplyRedirection(Transform realHandPos, Transform virtualHandPos, Transform warpOrigin, RedirectionObject target,
        Transform bodyTransform)
    {
        if (GetComponent<BlinkDetector>().validBlinkDetected) b = p_ - p;
  
        float ds = ((realHandPos.position + b) - warpOrigin.position).magnitude;
        float dp = ((realHandPos.position + b) - p_).magnitude;

        float alpha = ds / (ds + dp);
        Vector3 w = alpha * (v - p_) + b;

        virtualHandPos.position = realHandPos.position + w; 
    }

    ...
}

How to integrate the technique into the Unity Project:

Please see instructions on the Get Started Page: Blink-Suppressed Hand Redirection

More information:

[1] A. Zenner, K. P. Regitz and A. Krüger, "Blink-Suppressed Hand Redirection," 2021 IEEE Virtual Reality and 3D User Interfaces (VR), 2021, pp. 75-84, doi: https://doi.org/10.1109/VR50410.2021.00028

[2] A. Zenner and A. Krüger, "Estimating Detection Thresholds for Desktop-Scale Hand Redirection in Virtual Reality," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019, pp. 47-55, doi: 10.1109/VR.2019.8798143.

Saccadic and Blink-Suppressed Hand Redirection Algorithm by Zenner, Karr, Feick, Ariza, Krüger

Saccadic & Blink-Suppressed Hand Redirection (SBHR) is the most recent algorithm added to the HaRT. The algorithm is based on the body-warping algorithm by Cheng et al., but combines gradual warping with both blink- and saccade-induced hand offsets.

Click here to see a video about the technique
Click here for a talk about the technique

How the algorithm works

For all the details about the technique, please see the description in our paper.

How to integrate the technique into the Unity Project:

Please see instructions on the Get Started Page: Saccadic & Blink-Suppressed Hand Redirection

More information:

[1] André Zenner, Chiara Karr, Martin Feick, Oscar Ariza, and Antonio Krüger. 2024. Beyond the Blink: Investigating Combined Saccadic & Blink-Suppressed Hand Redirection in Virtual Reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 750, 1–14. https://doi.org/10.1145/3613904.3642073

Back to top
