Augmented Reality

ARKit 3 introduces Motion Capture, Face Tracking, and More for Immersive AR Experiences

At WWDC (Apple Worldwide Developers Conference) 2019, Apple officially announced the new ARKit 3 and RealityKit frameworks for developers. ARKit 3 introduces features such as improved object/image detection, motion capture, and People Occlusion. These frameworks are designed to help developers build great AR experiences. The company also unveiled a new app called Reality Composer for iOS, iPadOS, and Mac.

This opens up new possibilities, allowing developers to produce and prototype AR experiences right on their devices. Apple is also bringing HomeKit to security cameras and routers. The main highlight of the ARKit 3 session was the People Occlusion feature, which allows virtual objects to be placed in front of and behind people in real time.

Apple introduced several new features to transform both the developer and the user experience:

•    Motion capture

•    Face tracking (including multiple faces)

•    Collaborative session

•    People occlusion

Throughout the ARKit 3 session, the major highlights were automatic real-time occlusion and real-time motion capture with the camera. The face tracking feature supports up to three faces at a time using the TrueDepth front-facing camera. Moreover, developers can simultaneously access face tracking and world tracking on the front and back cameras.

Here are the new features for Unity developers, offering a deeper look at the latest ARKit 3 functionality and how to access it using AR Foundation 2.2 and Unity 2019.1.

Motion capture

With ARKit 3, Apple focuses on enhancing AR experiences by recognizing people in the real world. To that end, ARKit 3 introduces an exciting new feature: motion capture. It provides AR Foundation apps with a 3D (world-space) or 2D (screen-space) representation of humans recognized in the camera frame. In AR Foundation, this functionality is exposed through the new Human Body Subsystem.

This feature is only available on newer iOS devices with the A12 Bionic chip and the Apple Neural Engine (ANE). 2D and 3D detection differ in the data they provide: in 2D detection, humans are represented by a hierarchy of seventeen joints with screen-space coordinates, while in 3D detection, humans are represented by a hierarchy of ninety-three joints with world-space transforms.

Helpfully, AR Foundation apps can query the Human Body Subsystem descriptor at runtime to determine whether the iOS device supports human pose estimation.
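As a rough sketch, such a runtime check could look like the following. It assumes an ARHumanBodyManager component from the AR Foundation 2.2 preview is present in the scene; the descriptor property names (supportsHumanBody2D, supportsHumanBody3D) follow that preview API and may differ slightly between package versions.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class BodyTrackingCheck : MonoBehaviour
{
    // Reference to the ARHumanBodyManager in the scene (assigned in the Inspector).
    [SerializeField] ARHumanBodyManager humanBodyManager;

    void Start()
    {
        // The subsystem descriptor reports what the current device supports.
        var descriptor = humanBodyManager.descriptor;
        if (descriptor == null)
        {
            Debug.Log("Human body tracking is not available on this device.");
            return;
        }

        Debug.Log($"2D body pose supported: {descriptor.supportsHumanBody2D}");
        Debug.Log($"3D body pose supported: {descriptor.supportsHumanBody3D}");
    }

    void OnEnable()
    {
        humanBodyManager.humanBodiesChanged += OnHumanBodiesChanged;
    }

    void OnDisable()
    {
        humanBodyManager.humanBodiesChanged -= OnHumanBodiesChanged;
    }

    // Called whenever ARKit adds, updates, or removes a tracked body.
    void OnHumanBodiesChanged(ARHumanBodiesChangedEventArgs args)
    {
        foreach (var body in args.updated)
        {
            // Each ARHumanBody is a trackable whose transform follows the person in world space.
            Debug.Log($"Tracked body {body.trackableId} at {body.transform.position}");
        }
    }
}
```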

People Occlusion


ARKit 3 offers People Occlusion, which is available only on iOS devices with the A12 Bionic chip and ANE. Through the new Human Body Subsystem, AR Foundation apps receive human stencil and depth segmentation images. The stencil segmentation image indicates, for each pixel of the camera image, whether that pixel contains a person.

With the help of the stencil image, developers can create visual effects such as outlines or tinting of people in the camera frame. The depth segmentation image contains, for each pixel belonging to a recognized human, an estimated distance from the device.

Used together, these segmentation images make it possible to render 3D content that is realistically occluded by people.
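A minimal sketch of reading those textures is shown below, assuming the AR Foundation 2.2 preview where the segmentation images are exposed on ARHumanBodyManager (later package versions moved them to a separate occlusion manager). The two preview materials are placeholder assets for visualizing the textures, for example on quads in the scene.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class SegmentationTexturePreview : MonoBehaviour
{
    [SerializeField] ARHumanBodyManager humanBodyManager;

    // Placeholder materials used only to visualize the segmentation images.
    [SerializeField] Material stencilPreviewMaterial;
    [SerializeField] Material depthPreviewMaterial;

    void Update()
    {
        // Per-pixel mask: non-zero where a person was detected in the camera frame.
        Texture2D stencil = humanBodyManager.humanStencilTexture;

        // Per-pixel estimated distance from the device for pixels covered by people.
        Texture2D depth = humanBodyManager.humanDepthTexture;

        if (stencil != null)
            stencilPreviewMaterial.mainTexture = stencil;

        if (depth != null)
            depthPreviewMaterial.mainTexture = depth;
    }
}
```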

Face tracking enhancements

ARKit 3 extends face tracking support to the iPhone XS, iPhone XR, iPhone XS Max, and the latest iPad Pros. Apple made a few significant changes to face tracking in ARKit 3, including the ability to use the TrueDepth camera for face tracking while world tracking is active.

This makes it possible to capture the user's face pose from the front-facing camera during a world tracking session, so a character rendered in the environment seen through the rear-facing camera can mirror the user's facial expressions. In addition, during a face tracking session, the front-facing TrueDepth camera now recognizes up to three distinct faces at a time.

Through the AR Foundation Face Subsystem, you can specify the maximum number of faces to track simultaneously.

Note: This new face tracking mode is available only on iOS devices with the A12 Bionic chip and ANE.
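As an illustration, a minimal sketch using AR Foundation's ARFaceManager is shown below. The maximumFaceCount property name follows the 2.x API and may be named differently in later package versions; the provider may also clamp the value on devices that support fewer faces.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class MultiFaceSetup : MonoBehaviour
{
    [SerializeField] ARFaceManager faceManager;

    void Start()
    {
        // Ask ARKit to track up to three faces at once (the ARKit 3 maximum).
        faceManager.maximumFaceCount = 3;
    }

    void OnEnable()
    {
        faceManager.facesChanged += OnFacesChanged;
    }

    void OnDisable()
    {
        faceManager.facesChanged -= OnFacesChanged;
    }

    void OnFacesChanged(ARFacesChangedEventArgs args)
    {
        foreach (var face in args.added)
            Debug.Log($"Started tracking face {face.trackableId}");
    }
}
```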

Collaborative session


In AR Foundation, devices can already share AR reference points in real time, and the ARKit 3 implementation of the Session Subsystem exposes the APIs to produce and consume these updates. With collaborative sessions, ARKit 3 takes this a step further: multiple connected ARKit apps can continuously exchange their understanding of the environment.
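The sketch below shows roughly how an app might enable collaboration and relay the resulting updates, using the ARKit-specific session subsystem from the ARKit XR Plugin. SendToPeers is a hypothetical placeholder; the networking transport (for example, MultipeerConnectivity) is entirely up to the app, and the exact subsystem API may vary between plugin versions.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit;

public class CollaborationRelay : MonoBehaviour
{
    [SerializeField] ARSession session;

    void Start()
    {
        // The collaborative session APIs live on the ARKit-specific session subsystem.
        if (session.subsystem is ARKitSessionSubsystem arKitSession)
        {
            arKitSession.collaborationEnabled = true;
        }
    }

    void Update()
    {
        if (!(session.subsystem is ARKitSessionSubsystem arKitSession) ||
            !arKitSession.collaborationEnabled)
            return;

        // Drain any pending collaboration updates and hand them to the networking layer.
        while (arKitSession.collaborationDataCount > 0)
        {
            using (var data = arKitSession.DequeueCollaborationData())
            {
                SendToPeers(data);
            }
        }
    }

    // Hypothetical hook for whatever networking stack the app uses.
    void SendToPeers(ARCollaborationData data)
    {
        // On the receiving device, the mirror call would be
        // arKitSession.UpdateWithCollaborationData(receivedData);
    }
}
```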

Final thought

ARKit 3 also improves existing systems. Devices can now detect up to 100 images at a time, and the AR Foundation framework enables these improvements automatically. Object detection is considerably more robust, identifying objects more reliably in complex environments. In addition, HDR environment textures can now be disabled on the AR Foundation Environment Probe Subsystem.
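For reference, a minimal sketch of image detection with AR Foundation's ARTrackedImageManager is shown below. The reference image library is assumed to be an asset authored in the Unity editor, and property names may vary slightly between package versions.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ImageDetectionSetup : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager trackedImageManager;

    // Reference image library authored in the Unity editor (assumed asset).
    [SerializeField] XRReferenceImageLibrary imageLibrary;

    void Start()
    {
        trackedImageManager.referenceLibrary = imageLibrary;
    }

    void OnEnable()
    {
        trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged;
    }

    void OnDisable()
    {
        trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged;
    }

    void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var image in args.added)
            Debug.Log($"Detected image: {image.referenceImage.name}");
    }
}
```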