The latest version of Meta’s SDK for Quest adds thumb microgestures, which let you tap and swipe your thumb on your index finger, and improves the quality of Audio To Expression.
The update also includes helper utilities for passthrough camera access, the feature Meta just released to developers. With the user's permission, it gives apps access to the headset’s forward-facing color cameras, along with metadata such as the lens intrinsics and headset pose, which apps can leverage to run custom computer vision models.
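
To make that concrete, here is a small C sketch of the standard pinhole projection a custom vision pipeline might perform with those lens intrinsics, mapping a 3D point in the camera's frame to a pixel. The Intrinsics struct and its field names are illustrative rather than the SDK's actual types, and the step of using the headset pose to bring world-space points into the camera frame is left out.

```c
#include <stdio.h>

/* Illustrative container for pinhole intrinsics: focal lengths (fx, fy) and
 * principal point (cx, cy), in pixels. Field names are made up for this
 * sketch; the SDK exposes its own metadata structures. */
typedef struct { float fx, fy, cx, cy; } Intrinsics;

/* Project a 3D point (x, y, z), expressed in the camera's coordinate frame
 * with +z forward, onto the image. Returns 0 if the point is behind the lens. */
int project_point(Intrinsics k, float x, float y, float z, float *u, float *v) {
    if (z <= 0.0f) return 0;
    *u = k.fx * (x / z) + k.cx;
    *v = k.fy * (y / z) + k.cy;
    return 1;
}

int main(void) {
    Intrinsics k = { 430.0f, 430.0f, 320.0f, 240.0f };  /* made-up example values */
    float u, v;
    if (project_point(k, 0.1f, -0.05f, 1.0f, &u, &v))
        printf("pixel: (%.1f, %.1f)\n", u, v);
    return 0;
}
```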

Thumb Microgestures
Microgestures leverage the controller-free hand tracking of Quest headsets: with your hand held sideways and your fingers curled, they detect your thumb tapping and swiping on your index finger, as if it were a Steam Controller D-pad.
For Unity, microgestures require the Meta XR Core SDK. For other engines, they're available via the new OpenXR extension XR_META_hand_tracking_microgestures.
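
For engines that talk to OpenXR directly, the usual pattern applies: check that the runtime advertises the extension, then request it when creating the instance. Below is a minimal C sketch using only core OpenXR calls; the gesture input bindings themselves aren't shown, and pairing the extension with XR_EXT_hand_tracking is an assumption on our part, since microgestures build on hand tracking.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <openxr/openxr.h>

/* Check whether the runtime advertises the microgestures extension and, if so,
 * request it at instance creation. Platform-specific setup (Android loader
 * init, graphics bindings) is omitted. */
static const char kMicrogesturesExt[] = "XR_META_hand_tracking_microgestures";

XrInstance create_instance_with_microgestures(void) {
    uint32_t count = 0;
    xrEnumerateInstanceExtensionProperties(NULL, 0, &count, NULL);
    XrExtensionProperties *props = calloc(count, sizeof(XrExtensionProperties));
    for (uint32_t i = 0; i < count; i++) props[i].type = XR_TYPE_EXTENSION_PROPERTIES;
    xrEnumerateInstanceExtensionProperties(NULL, count, &count, props);

    int supported = 0;
    for (uint32_t i = 0; i < count; i++)
        if (strcmp(props[i].extensionName, kMicrogesturesExt) == 0) supported = 1;
    free(props);

    /* Assumed pairing: hand tracking plus microgestures. */
    const char *const enabled[] = { "XR_EXT_hand_tracking", kMicrogesturesExt };
    XrInstanceCreateInfo info = { .type = XR_TYPE_INSTANCE_CREATE_INFO };
    strncpy(info.applicationInfo.applicationName, "MicrogestureDemo",
            XR_MAX_APPLICATION_NAME_SIZE - 1);
    info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
    info.enabledExtensionCount = supported ? 2 : 1;  /* fall back to hand tracking only */
    info.enabledExtensionNames = enabled;

    XrInstance instance = XR_NULL_HANDLE;
    xrCreateInstance(&info, &instance);
    return instance;
}
```
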
Meta says these microgestures could be used to implement teleportation with snap turning without the need for controllers, though how they're actually used is up to developers.
Developers could, for example, use them as a less fatiguing way to navigate an interface, without having to reach out or point at elements.
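
As an illustration of that teleport-plus-snap-turn idea, here is a C sketch of the locomotion logic a developer might hang off these gestures. Every name in it (MicrogestureEvent, Locomotion, handle_microgesture) is hypothetical and stands in for whatever the Meta XR Core SDK or OpenXR bindings actually deliver.

```c
#include <stdio.h>

/* Hypothetical gesture type: a stand-in for whatever event or action state
 * the SDK actually surfaces; the real names will differ. */
typedef enum {
    MG_THUMB_TAP,
    MG_SWIPE_LEFT,
    MG_SWIPE_RIGHT,
    MG_SWIPE_FORWARD,
    MG_SWIPE_BACKWARD
} MicrogestureEvent;

typedef struct {
    float yaw_degrees;    /* player rig yaw */
    int   teleport_armed; /* whether the teleport aiming arc is showing */
} Locomotion;

/* One possible controller-free scheme: left/right swipes snap-turn 45 degrees,
 * a forward swipe arms the teleport arc, a backward swipe cancels it, and a
 * thumb tap confirms the jump. */
void handle_microgesture(Locomotion *loco, MicrogestureEvent ev) {
    switch (ev) {
    case MG_SWIPE_LEFT:     loco->yaw_degrees -= 45.0f; break;
    case MG_SWIPE_RIGHT:    loco->yaw_degrees += 45.0f; break;
    case MG_SWIPE_FORWARD:  loco->teleport_armed = 1;   break;
    case MG_SWIPE_BACKWARD: loco->teleport_armed = 0;   break;
    case MG_THUMB_TAP:
        if (loco->teleport_armed) {
            /* move_rig_to_aimed_destination(loco);  hypothetical helper */
            loco->teleport_armed = 0;
        }
        break;
    }
}

int main(void) {
    Locomotion loco = { 0.0f, 0 };
    handle_microgesture(&loco, MG_SWIPE_RIGHT);
    handle_microgesture(&loco, MG_SWIPE_FORWARD);
    handle_microgesture(&loco, MG_THUMB_TAP);
    printf("yaw = %.0f, armed = %d\n", loco.yaw_degrees, loco.teleport_armed);
    return 0;
}
```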

Interestingly, Meta also uses this thumb microgesture approach for its in-development neural wristband, the input device for its eventual AR glasses, and reportedly it may also be used for the HUD glasses the company plans to ship later this year.
This suggests that Quest headsets, with thumb microgestures, could serve as a development platform for these future glasses, though eye tracking would also be needed to match the full input capabilities of the Orion prototype.

Improved Audio To Expression
Audio To Expression is an on-device AI model, introduced in v71 of the SDK, that generates plausible facial muscle movements from only microphone audio input, providing estimated facial expressions without any face tracking hardware.
Audio To Expression replaced the ten-year-old Oculus Lipsync SDK. That SDK only supported the lips, not other facial muscles, and Meta claims Audio To Expression actually uses less CPU than Oculus Lipsync did.
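
For context on what an app does with that output: models like this produce per-frame expression weights that get applied to a face mesh's blendshapes. The C sketch below shows that generic last step with a made-up weight and blendshape layout, not the SDK's actual structures.

```c
#include <stddef.h>

/* A face rig with several blendshapes: each blendshape stores per-vertex
 * offsets from the neutral mesh. The weights array is the kind of per-frame
 * output an audio-driven expression model would estimate; the layout here is
 * invented for the sketch. */
typedef struct { float x, y, z; } Vec3;

void apply_blendshapes(const Vec3 *neutral,   /* neutral-pose vertices */
                       const Vec3 *deltas,    /* [shape][vertex] offsets, flattened */
                       const float *weights,  /* one weight per blendshape, 0..1 */
                       size_t vertex_count,
                       size_t shape_count,
                       Vec3 *out)             /* deformed vertices */
{
    for (size_t v = 0; v < vertex_count; v++) {
        Vec3 p = neutral[v];
        for (size_t s = 0; s < shape_count; s++) {
            const Vec3 d = deltas[s * vertex_count + v];
            p.x += weights[s] * d.x;
            p.y += weights[s] * d.y;
            p.z += weights[s] * d.z;
        }
        out[v] = p;
    }
}
```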

Now, the company says v74 of the Meta XR Core SDK brings an upgraded model that “improves all aspects including emotional expressivity, mouth movement, and the accuracy of Non-Speech Vocalizations compared to earlier models”.
Meta’s own Avatars SDK still doesn’t use Audio To Expression, though, nor does it yet use inside-out body tracking.