SteamVR Knuckles Driver

Pick up, drop, and throw with Knuckles EV2 - sensor fusion!
By Keith
Knuckles EV2 allows for much more natural interactions in VR, like picking up, dropping, and throwing objects. Rather than make everyone create their own solution, we've distilled most of this into a new Grab Mode in the input system. Let's talk about how we got here and what value this provides.
 
Press 'x' to pick up
One of the main things we wanted to tackle with Knuckles was interacting with virtual objects in a more natural way. With the first generation of VR controllers, picking up objects was a button press or a trigger pull. The nice thing about that is it's a known concept to gamers; it's pretty similar to how we've been interacting with objects in games for decades now. But we're not playing games anymore, we're experiencing realities, often with gameplay! And that brings with it significantly more complexity.

If I pick something up by pulling a trigger, then either I can no longer use that trigger to release that object, or I can't interact with that object using the trigger. Think about a fire extinguisher: if I pick it up with the trigger, how do I spray it? While it's possible to come up with a solution for existing controllers, all of these solutions require explanation to experienced users. With Knuckles EV2 you just tell the user to grab it, and through a combination of capacitive and force sensing we recognize that grab at the location where it happened on the controller. Then they can naturally pinch the extinguisher to spray it.

Let's take another example: a ping pong ball. If grip buttons were your main pickup method before, that certainly doesn't make sense here; that's too blunt of a release. If the user then tries to throw the ball, they have to think about releasing the controller button, not the ping pong ball. If you're instead using the trigger as your main pickup method, are you using the trigger to pick up everything else in your game? How do you explain that to users? How long does that take? How many failures does it take before the average user can reliably pick things up in your reality?

The last major obstacle is fatigue. Holding an object via a trigger or stiff button for an extended time can be exhausting for our hands. Switching to a toggle for pickup makes for more complicated and less natural interactions. Knuckles solves long-term holds by simply letting you rest your fingers on the controller, and drop by letting go.
Force sensors, capacitive sensing
Two types of sensors are used in Knuckles EV2 to recognize intent to grab: the force sensor and groups of capacitive sensors. We use the force sensor to recognize a strong hold, which we can then use to better determine the release point for a throw. We use the capacitive sensors to recognize a light hold; generally this is just the controller resting in your hands. By combining the two we can recognize the full range of object interaction, from pickup and hold to drop or throw. And with the right algorithm, the transitions between these states can be seamless to the user.

There are two sets of force and capacitive sensors: one set in the body of the controller to recognize grips with the ring, middle, and pinky fingers, and one set in the head of the controller for grabs with the index finger and thumb. This allows for natural grabs at the location of the user's choosing.
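As a minimal sketch of that fusion, each sensor group (body or head) could be reduced to a single grab state, with force indicating a strong hold and capacitance alone indicating a light hold. The names, normalized 0..1 values, and thresholds below are illustrative, not the driver's actual API:

```cpp
// Illustrative grab states for one sensor group (body or head).
enum class GrabState { None, LightHold, StrongHold };

// Hypothetical normalized readings for one sensor group, 0..1.
struct SensorGroup {
    float capSum = 0.0f;  // combined capacitive value
    float force  = 0.0f;  // force sensor value
};

// Fuse the two readings into a single grab state: force implies a strong
// hold, capacitance alone implies a light hold (thresholds are made up).
GrabState FuseGrab(const SensorGroup& s,
                   float capThreshold = 0.6f,
                   float forceThreshold = 0.4f) {
    if (s.force >= forceThreshold) return GrabState::StrongHold;
    if (s.capSum >= capThreshold)  return GrabState::LightHold;
    return GrabState::None;
}
```

Running one of these per sensor group is what lets a grab be attributed to the place on the controller where it actually happened.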

Pickup and Hold
Pickup:
We've found four reliable and comfortable types of grabs per hand: there are capacitive and force types on both the body of the controller and the head.

If your application is heavy on object interaction, you might consider allowing both types of pickup; this can maximize pickup success rates. But if you're doing a lot with the head of the controller, just one can be effective as well.

Pickup is only initiated when the user has actively changed their gesture. So if I'm squeezing the controller while far from an object and then move closer, a pickup shouldn't be initiated. Otherwise you get far too many false positives, especially in situations where there are a lot of objects you can interact with.
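One way to express that rule is to trigger pickup only on a rising edge of the grab gesture while a grabbable object is in range, rather than on the gesture's current level. A hypothetical sketch, where `nearObject` stands in for whatever proximity test your application uses:

```cpp
// Illustrative: only start a pickup on the frame the gesture changes from
// "not grabbing" to "grabbing" while a grabbable object is in range, so a
// hand that is already squeezing the controller doesn't grab everything it
// moves close to.
struct PickupDetector {
    bool wasGrabbing = false;

    // Returns true only on a rising edge of the grab gesture near an object.
    bool ShouldPickup(bool isGrabbing, bool nearObject) {
        bool risingEdge = isGrabbing && !wasGrabbing;
        wasGrabbing = isGrabbing;
        return risingEdge && nearObject;
    }
};
```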

Hold:
Once an object is in your hands, maintaining that grasp while allowing the user to exert a natural amount of pressure can be tricky. Often the initial pickup gesture triggers both the cap and force actions, but then the force is released almost immediately afterward. So if we based hold purely on force, we'd almost always drop things accidentally.

In addition to the difference between force and cap grabs, the position of the controller in the hand can sometimes change without the user noticing. When picking up a virtual controller, the user may initially grab it around the body of the controller but then release there and hold at the head if there's a lot of joystick or trackpad manipulation. So in some circumstances it makes sense to maintain a grab initiated with the body of the controller even when the body has been fully released.
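A hedged sketch of that hold logic, assuming per-group grab states like the ones in the earlier sketch: once a pickup has been initiated by an active gesture, the hold persists as long as either sensor group still reports any grab, even if the group that started it has been fully released. Names are illustrative:

```cpp
// Illustrative hold tracker: a hold starts when a pickup is initiated by an
// active gesture and persists while *either* sensor group still reports a
// grab, so shifting from a body grip to a head grip doesn't drop the object.
enum class GrabState { None, LightHold, StrongHold };

struct HoldTracker {
    bool holding = false;

    void Update(GrabState body, GrabState head, bool pickupInitiated) {
        bool anyGrab = body != GrabState::None || head != GrabState::None;
        if (!holding && pickupInitiated && anyGrab) {
            holding = true;   // grab established at either location
        } else if (holding && !anyGrab) {
            holding = false;  // only drop once both groups have let go
        }
    }
};
```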
Dropping and throwing
Concepts
Drops and throws are physically very similar actions, with the only major difference being speed. But when you look at intent, they're very different. When dropping an object you rarely care about precision; it's a blunt action: you want something out of your hands and falling downward. Throwing, on the other hand, is all about precision; you're often trying to get an object to a certain location.

Until we have precision haptic feedback for virtual objects and can model the roll off your fingers and the pressure you're exerting on different parts of the object, throwing is always going to be an estimation. So it's important to recognize that it's an estimation of intent, not an estimation of physics. If you focus on the physics part, it's easy to just average velocities around a button release, but different people release virtual objects at different times for the same intended target.

Implementation
The sensor fusion bits are now built into the input system at the binding layer. You can add a Grab Mode to the grip of the controller and bind it to a boolean action to take advantage of all the cap sense + force sense niceness. In that action you can configure a few parameters:


  • Value Hold Threshold: The value the combined capacitive sensors must reach to initiate a grab
  • Value Release Threshold: The value the combined capacitive sensors must fall below to disengage a grab
  • Force Hold Threshold: The amount of pressure required to initiate a grab with the force sensor
  • Force Release Threshold: The amount of pressure required to release an object (or downgrade to cap sense)
  • Downgrade speed: The maximum speed the controller can be moving to allow a downgrade to a capacitive sense grab

Cap sense grabs aren't as good at determining release point as force grabs, so we only downgrade if we think you're not intending to throw an object.
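Putting those parameters together, a grab binding could behave roughly like the hysteresis below. This is an illustrative sketch using the parameter names from the list above with placeholder values, not the driver's actual implementation:

```cpp
// Illustrative hysteresis over the Grab Mode parameters: force engages a
// strong grab, cap sense engages a light grab, and a force grab only
// downgrades to a cap sense grab if the hand is moving slowly enough that a
// throw is unlikely. Parameter values are placeholders.
struct GrabModeParams {
    float valueHoldThreshold    = 0.7f;  // cap value that initiates a grab
    float valueReleaseThreshold = 0.4f;  // cap value that disengages a grab
    float forceHoldThreshold    = 0.5f;  // force that initiates a grab
    float forceReleaseThreshold = 0.2f;  // force that releases or downgrades
    float downgradeSpeed        = 1.0f;  // max hand speed (m/s) for downgrade
};

enum class GrabOutput { None, CapGrab, ForceGrab };

struct GrabMode {
    GrabModeParams params;
    GrabOutput state = GrabOutput::None;

    GrabOutput Update(float capValue, float force, float handSpeed) {
        switch (state) {
        case GrabOutput::None:
            if (force >= params.forceHoldThreshold)
                state = GrabOutput::ForceGrab;
            else if (capValue >= params.valueHoldThreshold)
                state = GrabOutput::CapGrab;
            break;
        case GrabOutput::ForceGrab:
            if (force < params.forceReleaseThreshold) {
                // A slow hand with fingers still resting on the controller
                // downgrades to a cap grab; a fast hand likely means a
                // throw, so release outright.
                if (capValue >= params.valueReleaseThreshold &&
                    handSpeed <= params.downgradeSpeed)
                    state = GrabOutput::CapGrab;
                else
                    state = GrabOutput::None;
            }
            break;
        case GrabOutput::CapGrab:
            if (force >= params.forceHoldThreshold)
                state = GrabOutput::ForceGrab;
            else if (capValue < params.valueReleaseThreshold)
                state = GrabOutput::None;
            break;
        }
        return state;
    }
};
```

In practice the bound boolean action only reports grabbed or not; the sketch keeps the cap and force grabs distinct just to show where the downgrade decision happens.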
Throwing intent
Determining a release point is half the equation; the other half is velocities. For this you can either use the SteamVR Interaction System in Unity or just refer to it. We record position and rotation each frame for around 30 frames in a looping buffer. This gives you a lot of room to determine intent.

Once we've established a release point, we check whether the speed of your hand (indicated by velocity.magnitude) has increased or decreased since the last frame. If it's decreasing, we go back to the point where you were moving your hand fastest. From there we average the three frames around it to get your intended speed and direction.

If your velocity is increasing, we wait a few frames to see if it's going to continue increasing; if it does, we delay the release slightly to wait for peak hand speed. Then we again take the average of the three frames around it to get your intended velocity and angular velocity.
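Here's a rough sketch of the buffering and peak-picking described above, using a simple ring buffer of per-frame hand velocities. The buffer size, types, and peak search are illustrative rather than the Interaction System's exact code:

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Minimal 3D vector for the sketch.
struct Vec3 {
    float x, y, z;
    Vec3(float x_ = 0, float y_ = 0, float z_ = 0) : x(x_), y(y_), z(z_) {}
    Vec3 operator+(const Vec3& o) const { return Vec3(x + o.x, y + o.y, z + o.z); }
    Vec3 operator*(float s) const { return Vec3(x * s, y * s, z * s); }
    float Magnitude() const { return std::sqrt(x * x + y * y + z * z); }
};

// Keeps roughly the last 30 frames of hand velocity in a ring buffer and,
// at release, picks the fastest recent frame and averages the three frames
// around it to estimate the intended throw velocity.
class ThrowEstimator {
public:
    void RecordFrame(const Vec3& velocity) {
        frames_[head_] = velocity;
        head_ = (head_ + 1) % frames_.size();
        if (count_ < frames_.size()) ++count_;
    }

    Vec3 EstimateReleaseVelocity() const {
        if (count_ == 0) return Vec3();
        // Find the frame with the highest hand speed in the buffer.
        std::size_t best = 0;
        float bestSpeed = -1.0f;
        for (std::size_t i = 0; i < count_; ++i) {
            float speed = At(i).Magnitude();
            if (speed > bestSpeed) { bestSpeed = speed; best = i; }
        }
        // Average the three frames around the peak (clamped to the buffer).
        std::size_t lo = best > 0 ? best - 1 : best;
        std::size_t hi = best + 1 < count_ ? best + 1 : best;
        Vec3 sum;
        for (std::size_t i = lo; i <= hi; ++i) sum = sum + At(i);
        return sum * (1.0f / static_cast<float>(hi - lo + 1));
    }

private:
    // i = 0 is the oldest recorded frame still in the buffer.
    Vec3 At(std::size_t i) const {
        std::size_t start = (head_ + frames_.size() - count_) % frames_.size();
        return frames_[(start + i) % frames_.size()];
    }

    std::array<Vec3, 30> frames_;
    std::size_t head_ = 0;
    std::size_t count_ = 0;
};
```

The same peak-and-average approach can be applied to the recorded rotations to estimate the intended angular velocity.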

To see a visualization of this data you can check out the Throwing+ scene in the Knuckles Tech Demo. Launch the demo, click the settings icon in the upper right corner of the window, then click Throwing+. This scene has an in-game graph of each finger's cap sense value, all force values, velocity, and our determined release point.

4 Comments
ESC_KDAWG Jun 26, 2018 @ 1:19am 
Awesome! Thanks so much for sharing :)
Keith  [author] Jun 25, 2018 @ 11:40am 
Pinch Grip: Yeah, that's the idea. The middle video is a little tricky to understand without some Mixed Reality but if you watch the bars in the lower right hand corner you can see that for some of the grenades I'm grabbing with my thumb and index finger (then the grenade turns green). Then with some grabs I'm using my middle / ring / index finger (grenade turns blue). Bright colors indicate force based grabs.
ESC_KDAWG Jun 24, 2018 @ 7:44pm 
Regarding "pinch grip" / "force pinch" would this allow someone to hold the controller like a bike handle where they can pinch the index finger and thumb together? After doing some research it appears that this was possible on the first gen knuckles controllers and have seen mention of pinch interaction it in the documentation for EV2.
charles_a Jun 21, 2018 @ 1:10pm 
Rather than saving the queries per frame, why not keep the history from an input thread? Vive controllers, for example, update at 250 Hz with sensor fusion, whereas game frames only happen at 90 Hz. Does Knuckles have the same 250 Hz update rate?