Apple’s widely rumored mixed reality headset will use 3D sensors for advanced hand tracking, according to analyst Ming-Chi Kuo, whose latest research note was reported on by MacRumors and 9to5Mac. The headset is said to have four sets of 3D sensors, compared to the iPhone’s single unit, which should give it greater accuracy than the TrueDepth camera array currently used for Face ID.
According to Kuo, the structured light sensors can detect objects as well as “dynamic detail change” in the hands, comparable to how Face ID detects facial expressions to generate Animoji. “Capturing the details of hand movement can provide a more intuitive and vivid human-machine UI,” he writes, giving the example of a virtual balloon in your hand flying away once the sensors detect that your fist is no longer clenched. Kuo believes the sensors will be able to detect objects up to 200 percent farther away than the iPhone’s Face ID.
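To make the balloon example concrete, here is a minimal, purely hypothetical sketch of how an app might map a detected hand-state change onto a UI event. Apple has not published any hand tracking API for the headset, so every name below (HandState, BalloonEntity, handleHandUpdate) is invented for illustration:

```swift
// Hypothetical sketch only: HandState, BalloonEntity, and handleHandUpdate
// are invented names, not any real Apple API.

enum HandState {
    case clenched
    case open
}

struct BalloonEntity {
    var isHeld = true

    // Release the balloon so the scene can animate it floating away.
    mutating func release() {
        isHeld = false
        print("Balloon released; it floats away")
    }
}

var balloon = BalloonEntity()

// Imagine the 3D sensors delivering a stream of hand-state updates;
// the app reacts when a clenched fist opens.
func handleHandUpdate(_ state: HandState) {
    if state == .open && balloon.isHeld {
        balloon.release()
    }
}

// Simulated sensor updates: fist clenched, then opened.
handleHandUpdate(.clenched) // nothing happens; balloon is still held
handleHandUpdate(.open)     // prints "Balloon released; it floats away"
```

Whatever framework Apple actually ships would supply the real event stream and scene objects, but the pattern of translating a detected transition (clenched to open) into an interface response is the core of what Kuo describes.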
Meta’s Quest headsets are capable of hand tracking, but it isn’t a core feature of the platform, and it relies on conventional monochrome cameras. Kuo’s note doesn’t mention whether Apple’s headset will use physical controllers in addition to hand tracking. Bloomberg reported in January that Apple was testing hand tracking for the device.
Kuo also provided some details this week on what could come after Apple’s first headset. While he expects the first model to weigh around 300 to 400 grams (~0.66 to 0.88 pounds), a “significantly lighter” second-generation model with an updated battery system and a faster processor is said to be planned for 2024. The first model will arrive sometime next year, according to Kuo, and Apple reportedly expects it to sell about three million units in 2023. That suggests the initial product may well be expensive and aimed at early adopters.