Quest SDK v74 Brings Thumb Microgestures & Improved Audio To Expression


The latest version of Meta’s SDK for Quest adds thumb microgestures, letting you tap and swipe your thumb against your index finger, and improves the quality of Audio To Expression.

The update also includes helper utilities for passthrough camera access, the feature Meta just released to developers. If the user grants permission, it gives apps access to the headset’s forward-facing color cameras, along with metadata such as the lens intrinsics and headset pose, which apps can leverage to run custom computer vision models.

Quest Passthrough Camera API Out Now For Developers To Play With
Quest’s highly anticipated Passthrough Camera API is now available for all developers to experiment with, though they can’t yet include it in store app builds.

Thumb Microgestures

Microgestures leverage the controller-free hand tracking of Quest headsets to detect your thumb tapping and swiping on your index finger, as if it were a Steam Controller D-pad, while your hand is sideways and your fingers are curled.



For Unity, microgestures require the Meta XR Core SDK. For other engines, the feature is available via the new OpenXR extension XR_META_hand_tracking_microgestures.
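
For native OpenXR apps, using the feature starts with requesting the extension when creating the instance. Here's a minimal sketch in C: only the extension string comes from the article, the rest is standard OpenXR boilerplate, and the actual microgesture event structures are defined in the extension's spec rather than shown here.

```c
// Minimal sketch: requesting the microgestures extension at instance creation.
// A real app would first confirm the runtime lists the extension via
// xrEnumerateInstanceExtensionProperties before enabling it.
#include <string.h>
#include <openxr/openxr.h>

XrInstance create_instance_with_microgestures(void)
{
    const char *extensions[] = {
        XR_EXT_HAND_TRACKING_EXTENSION_NAME,     // base hand tracking, which microgestures build on
        "XR_META_hand_tracking_microgestures",   // thumb tap/swipe gestures (named in the article)
    };

    XrInstanceCreateInfo create_info = { .type = XR_TYPE_INSTANCE_CREATE_INFO };
    strncpy(create_info.applicationInfo.applicationName, "MicrogestureDemo",
            XR_MAX_APPLICATION_NAME_SIZE - 1);
    create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
    create_info.enabledExtensionCount = 2;
    create_info.enabledExtensionNames = extensions;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
        return XR_NULL_HANDLE;  // runtime too old or extension unsupported
    }
    return instance;
}
```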

Meta says these microgestures could be used to implement teleportation with snap turning without the need for controllers, though how they're actually used is up to developers.

Developers could, for example, use them as a less straining way to navigate an interface, without the need to reach out or point at elements.

Interestingly, Meta also uses this thumb microgesture approach for its in-development neural wristband, the input device for its eventual AR glasses, and reportedly may also use it for the HUD glasses it plans to ship later this year.

This suggests Quest headsets could serve as a development platform for these future glasses by way of thumb microgestures, though eye tracking would also be needed to match the full input capabilities of the Orion prototype.

Meta Plans To Launch “Half A Dozen More” Wearable Devices
In Meta’s CTO’s leaked memo, he referenced the company’s plan to launch “half a dozen more AI powered wearables”.

Improved Audio To Expression

Audio To Expression is an on-device AI model, introduced in v71 of the SDK, that generates plausible facial muscle movements from only microphone audio input, providing estimated facial expressions without any face tracking hardware.

Audio To Expression replaced the ten-year-old Oculus Lipsync SDK, which only animated the lips, not other facial muscles. Meta also claims Audio To Expression uses less CPU than Oculus Lipsync did.
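
For context, here's a rough sketch of how a native app might consume these expression weights. The assumption (not stated in the article) is that audio-driven expressions are surfaced through the XR_FB_face_tracking2 OpenXR extension by requesting the audio data source; the names below are from my reading of that spec, so verify them against the official headers.

```c
#include <stddef.h>
#include <openxr/openxr.h>

// Assumes XR_FB_face_tracking2 was enabled at instance creation.
void sample_audio_driven_expressions(XrInstance instance, XrSession session, XrTime now)
{
    // Extension functions are loaded at runtime via xrGetInstanceProcAddr.
    PFN_xrCreateFaceTracker2FB xrCreateFaceTracker2FB = NULL;
    PFN_xrGetFaceExpressionWeights2FB xrGetFaceExpressionWeights2FB = NULL;
    xrGetInstanceProcAddr(instance, "xrCreateFaceTracker2FB",
                          (PFN_xrVoidFunction *)&xrCreateFaceTracker2FB);
    xrGetInstanceProcAddr(instance, "xrGetFaceExpressionWeights2FB",
                          (PFN_xrVoidFunction *)&xrGetFaceExpressionWeights2FB);

    // Request the audio-only data source: no face tracking hardware involved.
    XrFaceTrackingDataSource2FB source = XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB;
    XrFaceTrackerCreateInfo2FB create_info = {
        .type = XR_TYPE_FACE_TRACKER_CREATE_INFO2_FB,
        .faceExpressionSet = XR_FACE_EXPRESSION_SET2_DEFAULT_FB,
        .requestedDataSourceCount = 1,
        .requestedDataSources = &source,
    };
    XrFaceTracker2FB tracker = XR_NULL_HANDLE;
    xrCreateFaceTracker2FB(session, &create_info, &tracker);

    // Each frame, pull the current blendshape weights and feed them into the
    // avatar's facial rig.
    float weights[XR_FACE_EXPRESSION2_COUNT_FB];
    float confidences[XR_FACE_CONFIDENCE2_COUNT_FB];
    XrFaceExpressionInfo2FB info = {
        .type = XR_TYPE_FACE_EXPRESSION_INFO2_FB,
        .time = now,
    };
    XrFaceExpressionWeights2FB out = {
        .type = XR_TYPE_FACE_EXPRESSION_WEIGHTS2_FB,
        .weightCount = XR_FACE_EXPRESSION2_COUNT_FB,
        .weights = weights,
        .confidenceCount = XR_FACE_CONFIDENCE2_COUNT_FB,
        .confidences = confidences,
    };
    if (XR_SUCCEEDED(xrGetFaceExpressionWeights2FB(tracker, &info, &out)) && out.isValid) {
        // e.g. weights[XR_FACE_EXPRESSION2_JAW_DROP_FB] would drive a jaw blendshape.
    }
}
```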

Watch Meta’s Audio To Expression Quest Feature In Action
Here’s Meta’s new ‘Audio To Expression’ Quest SDK feature in action, compared to the old Oculus Lipsync.

Now, the company says v74 of the Meta XR Core SDK brings an upgraded model that “improves all aspects including emotional expressivity, mouth movement, and the accuracy of Non-Speech Vocalizations compared to earlier models”.

Meta’s own Avatars SDK still doesn’t use Audio To Expression, though, nor does it yet use inside-out body tracking.
