WWDC20: What’s new in ARKit and RealityKit?

Augmented Reality highlights from WWDC 2020

Max Cobb
5 min readJun 23, 2020

There’s a lot that has been added to all of Apple’s platforms this year, and at first glance it may seem that Augmented Reality was left out. However, a few things went unmentioned, and others announced at the WWDC20 Main Keynote hint at what’s to come in Augmented Reality.

All the topics mentioned in this article will be added to my ongoing RealityKit series over the coming months:

ARKit

Spatial Audio 👂 (with AirPods Pro)

AirPods Pro have an amazing new feature: spatial audio, giving a fully immersive 3D listening experience. While Augmented Reality wasn’t mentioned during this announcement, the implications are definitely there, and I’m expecting to see this working with ARKit experiences.

Hand Pose 🙌

Hand tracking has been implemented in many AR mockups, and has also been demoed for real within ARKit before; but those demos relied on custom models and solutions that companies and individuals built themselves.

Now, just like last year’s body motion capture, Apple looks to have added its own model for hand tracking inside the Vision framework, highly optimised for the operating system and hardware found across its devices. See the WWDC session on this API later today.

Opinion: Waving your hand around in front of your phone’s camera seems like a strange and uncomfortable user experience; however, having both hands free in front of your eyes to manipulate something you can see feels completely natural. It looks to me like Apple wants AR developers to get applications ready for this shift.
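Based on the documentation, hand pose detection lives in the Vision framework as VNDetectHumanHandPoseRequest. Here’s a rough sketch of how a request might look, assuming you already have a CVPixelBuffer from the camera (for example an ARFrame’s capturedImage); the 0.5 confidence threshold is just an illustrative value:

```swift
import Vision

// Detect up to two hands in a camera frame and read the thumb tip position.
let handPoseRequest = VNDetectHumanHandPoseRequest()
handPoseRequest.maximumHandCount = 2

func detectHands(in pixelBuffer: CVPixelBuffer) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    do {
        try handler.perform([handPoseRequest])
        guard let observations = handPoseRequest.results as? [VNHumanHandPoseObservation] else { return }
        for hand in observations {
            // Recognized points come back in Vision's normalised coordinate space.
            let thumbPoints = try hand.recognizedPoints(.thumb)
            if let tip = thumbPoints[.thumbTip], tip.confidence > 0.5 {
                print("Thumb tip at \(tip.location)")
            }
        }
    } catch {
        print("Hand pose request failed: \(error)")
    }
}
```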

Depth API

Developers will now get access to the per-pixel depth information output by the LiDAR scanner found on the latest iPad Pro models. Ryan Engle once compared being denied this data to not being able to see individual pixels, and I completely agree. This will open up many more opportunities for developers; I’m excited to see what’s said about it during the ARKit 4 session.
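In practice, you opt into depth through a new frame semantic on the world tracking configuration, and each ARFrame then carries the depth data. A minimal sketch of how that might look:

```swift
import ARKit

// Opt into ARKit 4's scene depth on LiDAR devices and read it per frame.
final class DepthSessionHandler: NSObject, ARSessionDelegate {
    func run(_ session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        // Only LiDAR-equipped devices support the sceneDepth frame semantic.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Each frame now carries a per-pixel depth map (32-bit floats, in metres)
        // along with a confidence map for every depth value.
        guard let sceneDepth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = sceneDepth.depthMap
        let confidenceMap: CVPixelBuffer? = sceneDepth.confidenceMap
        _ = (depthMap, confidenceMap)
    }
}
```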

Location Anchors 📍

Another huge advancement in ARKit that was not mentioned in the main event is Location Anchors. Not only do these anchors let you pin objects to specific real-world coordinates, they come with a whole new geotracking configuration, which offers a complete Visual Positioning System. This will allow people to make Augmented Reality apps at city scale using Apple’s system.

Why would you want this? Most of us have been in cities where GPS is nowhere near accurate enough to get us where we need to go, but with a Visual Positioning System your camera can do the rest of the work by searching for feature points such as building windows, streets or other landmarks. I worked on a project doing something very similar in 2017, called ARCity, and this was the ambition back then; I’m excited to see that this tech can now reach millions of developers!
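The new configuration and anchor types are ARGeoTrackingConfiguration and ARGeoAnchor. Here’s a minimal sketch of placing a geo anchor; the coordinate and altitude below are purely illustrative:

```swift
import ARKit
import CoreLocation

// Start geotracking (where supported) and drop an anchor at a real-world coordinate.
func placeGeoAnchor(in session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else { return }
    session.run(ARGeoTrackingConfiguration())

    // Example coordinate only; in a real app this would come from your content.
    let coordinate = CLLocationCoordinate2D(latitude: 37.7950, longitude: -122.3940)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate, altitude: 11.0)
    session.add(anchor: geoAnchor)
}
```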

Face Tracking 😲

Face tracking is expanding to devices without Face ID (A12 Bionic chip minimum). This includes the newly released iPhone SE, so users who prefer a fingerprint unlock can now use Memoji and apps built on Apple’s native face tracking features.
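Nothing changes in code here; the same runtime check simply passes on more devices now. A minimal sketch:

```swift
import ARKit

// The usual availability check; it now succeeds on A12-or-newer devices
// even without a TrueDepth camera.
func startFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else { return }
    session.run(ARFaceTrackingConfiguration())
}
```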

RealityKit

This year looks very light on the RealityKit front, although all of the aforementioned topics can of course be used within RealityKit apps.

Video Textures 📺

One key feature that was added this year is VideoMaterial. This lets you apply a video to a mesh in exactly the same way you would any other Material. The aim is to allow developers to animate TV screens, or perhaps add 2D animated eyes or a mouth to a 3D character.
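Here’s a minimal sketch of putting a video on a plane, assuming a local file named “clip.mp4” is bundled with the app (the file name and plane size are just examples):

```swift
import RealityKit
import AVFoundation

// Build a 16:9 "TV screen" plane whose surface plays a bundled video.
func makeVideoScreen() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "clip", withExtension: "mp4") else { return nil }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9), materials: [material])
    player.play()
    return screen
}
```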

A similar feature was available if you chose to build your ARKit app with SceneKit, but the setup was a little more complex: it involved wrapping an AVPlayer inside a SpriteKit video node (SKVideoNode), then applying that to an SCNMaterial’s diffuse contents. And of course this was available in other engines such as Unity.
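For comparison, a rough sketch of that older SceneKit route, with the SpriteKit scene size chosen arbitrarily:

```swift
import SceneKit
import SpriteKit
import AVFoundation

// Wrap the video in an SKVideoNode inside an SKScene, then use that scene
// as the diffuse contents of a SceneKit material.
func makeSceneKitVideoMaterial(player: AVPlayer) -> SCNMaterial {
    let videoNode = SKVideoNode(avPlayer: player)
    videoNode.size = CGSize(width: 1024, height: 576)
    videoNode.position = CGPoint(x: 512, y: 288)

    let skScene = SKScene(size: CGSize(width: 1024, height: 576))
    skScene.addChild(videoNode)

    let material = SCNMaterial()
    material.diffuse.contents = skScene
    return material
}
```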

Debug Model Component 🐛

A new built-in component has been added to RealityKit this year too: ModelDebugOptionsComponent. It can be attached to any entity that conforms to HasModel, such as ModelEntity.

This new component lets you visualise different parts of your mesh and materials, including the emissive layer, alpha, roughness, normals, and many others. A full list can be found in Apple’s documentation.
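As a rough sketch (assuming the component’s visualizationMode initialiser, and using a plain box as a stand-in model), switching a model over to one debug view might look like this:

```swift
import RealityKit

// Attach a debug options component so the model renders one material channel.
let box = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
let debugComponent = ModelDebugOptionsComponent(visualizationMode: .normal)
box.components.set(debugComponent)
```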

To see more, check out what was covered during Apple’s “What’s new in RealityKit” session.

That’s a wrap! I’ll be adding more to this article as I dig into the documentation.

Be sure to give me a follow; I’ll be posting more RealityKit articles on how to use the latest advancements in the framework. Send me a tweet if you have any questions, and I also post quite a few open source Augmented Reality iOS projects on my GitHub.
