Getting Started with RealityKit: Procedural Geometries

Max Cobb
5 min read · Jun 8, 2021


Procedural Geometries in RealityKit banner image

RealityKit was first released at WWDC 2019. It brought some amazing features that make Augmented Reality apps simple for all developers to build, including an Entity Component System (ECS). It was a big refresh from SceneKit, the native iOS framework many developers had been using to create their AR apps.

However, a lot was missing from this brand new rendering system, and two years later many of those gaps have been filled at #WWDC21.

This guide will walk you through how to use one of these new features: procedural geometries.

What Are Procedural Geometries?

Most people understand procedural geometries as a way of writing an algorithm to generate a mesh, built up from components such as vertices, normals and texture maps.

Let’s say you want someone to create their own world inside your AR app, or otherwise make the experience very customisable. Until now, the only customising you could do was shaping a sphere, cuboid, plane or text, or adding a USDZ file to your scene. While there can be unlimited variations of a USDZ geometry, you can’t host all of them on a remote server, let alone bundle them all inside your app!

Now, with procedural geometries, you can technically make any three-dimensional shape (or a one- or two-dimensional shape, for that matter), and change any part of it you like based on any input parameters.

For example, if you’re generating something complex like a 3D character in a game, the player could change the height of the character’s nose, the size of their eyes, the shape of their ears or even the length of their fingers, making the model completely customisable. Previously in RealityKit you could only change the scale along the x, y or z axis, or swap in a different USDZ file.

Creating a Geometry With RealityKit

To see a good overview, you can refer to this document:

The above link is a very detailed document that talks in depth about how to create geometries in OpenGL. Whilst I’m aware this is not RealityKit and there are some differences, most of the core principles apply in all scene graphs.

The main things you can define are:

  • Vertices — the points that make up the shape
  • Indices — the order in which vertices are connected to form faces (triangles or quads)
  • Normals — the direction each part of the surface faces, which determines how light reflects off it
  • Texture coordinates — how a texture is mapped across the mesh
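
In RealityKit 2, each of these maps onto a buffer of a MeshDescriptor. The walkthrough below builds up the positions, indices and texture coordinates step by step; normals can be supplied in the same way. Here is a rough sketch using the same triangle we are about to build (the values are placeholders):

import RealityKit

var sketch = MeshDescriptor(name: "sketch")
// Three vertices of a triangle facing the camera.
sketch.positions = MeshBuffers.Positions([[-1, -1, 0], [1, -1, 0], [0, 1, 0]])
// One normal per vertex; +Z points out of the screen, towards the viewer.
sketch.normals = MeshBuffers.Normals([[0, 0, 1], [0, 0, 1], [0, 0, 1]])
// Join the three vertices into a single triangle.
sketch.primitives = .triangles([0, 1, 2])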

Let’s create a simple triangle in RealityKit. We need to define three points in 3D, centred around the origin. For the sake of keeping the numbers clear, we will choose to round the vertices to whole numbers.

Base Geometry

The coordinates are going to be:

let positions: [SIMD3<Float>] = [[-1, -1, 0], [1, -1, 0], [0, 1, 0]]

Those points, coloured [.red, .white, .blue] in that order, can be seen here around the green dot representing the origin:

As you can see, the above image shows the points of an isosceles triangle, with the green dot in the centre marking the origin.

Now, to build a simple mesh from those points, we need to create a MeshDescriptor object and assign the positions:

var descr = MeshDescriptor(name: "tritri")
descr.positions = MeshBuffers.Positions(
    [[-1, -1, 0], [1, -1, 0], [0, 1, 0]]
)

The next step is adding the aforementioned indices that tell RealityKit how to connect the dots. So let’s connect red (0) to white (1) and then blue (2).

descr.primitives = .triangles([0, 1, 2])

One important thing to remember here is that the winding order matters: the vertices of each triangle must be listed anticlockwise as seen from the front. This makes sure that the face of the mesh points towards us in this instance.
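
If you want the triangle to be visible from behind as well, one simple option is to add a second triangle with the winding reversed, so there is a face pointing in each direction (just a sketch; you could also rotate the entity instead):

descr.primitives = .triangles([
    0, 1, 2,  // front face: anticlockwise when viewed from the front
    0, 2, 1   // back face: the same vertices in reverse order
])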

Now all that’s left is to generate our mesh and put it inside a ModelEntity! Let’s give it a simple orange material while we’re at it.

let generatedModel = ModelEntity(
    mesh: try! .generate(from: [descr]),
    materials: [SimpleMaterial(color: .orange, isMetallic: false)]
)

As you can see in the above picture, our mesh’s vertices are right in the centre of those red, white and blue spheres as planned.
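
If you’re following along in your own project, you’ll also need to add the entity to the scene to see it. Here’s a minimal sketch, assuming you already have an arView property available (as in the standard RealityKit app template):

// Fix an anchor two metres in front of the world origin,
// which is roughly where the device starts the AR session.
let anchor = AnchorEntity(world: [0, 0, -2])
anchor.addChild(generatedModel)
arView.scene.addAnchor(anchor)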

Texture Mapping

If we apply a texture to this mesh, the result can be unpredictable, as we haven’t defined exactly where each vertex should sit within the texture image.

Let’s apply a square (512x512) image showing a RealityKit logo to the mesh:

Probably not exactly what we want, but it’s nice that it doesn’t completely fail!
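
For reference, swapping the plain orange material for a textured one could look something like this. It’s only a sketch using SimpleMaterial’s texture support in RealityKit 2, and the asset name "RealityKitLogo" is a placeholder for whatever image you have bundled in your app:

// Load the image from the app bundle and wrap it in a material.
let texture = try! TextureResource.load(named: "RealityKitLogo")
var texturedMaterial = SimpleMaterial()
texturedMaterial.color = .init(tint: .white, texture: .init(texture))
generatedModel.model?.materials = [texturedMaterial]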

Mapping textures in RealityKit is consistent with most other rendering engines, but different from most iOS 2D coordinate systems. To map a texture you define where each vertex sits on the image, starting from the bottom left, with values running from 0 to 1 along both the x and y axes.

For example, if we want the RealityKit logo image to be centred in our mesh, then our bottom two coordinates would be [[0, 0], [1, 0]]; the first value in each pair is the x coordinate, and as we can see we are assigning the red point to the bottom left of the image and the white point to the bottom right.

As for the blue dot, we should assign it to a point half way along the x axis (0.5) and all the way to the top of the y axis (1). Here’s a drawing, where the part highlighted in red should be our rendered output in RealityKit:

In code, assigning these coordinates would look like this:

descr.textureCoordinates = MeshBuffer([[0, 0], [1, 0], [0.5, 1]])

And for the output:

Exactly as expected!

See the full ViewController on GitHub in this Gist:

Conclusion

This has been a very brief overview of what’s possible when generating meshes with RealityKit 2. If you want to dive deeper before the Lab coming at 2pm on June 8th 2021, I would suggest taking a look at my post from a few years ago on geometries within SceneKit. As mentioned, many of the same principles apply in RealityKit too:

For more information, follow me here on Medium, Twitter or GitHub, as I’m frequently posting new content on those platforms, including open source Swift packages specifically for RealityKit!

Also leave some claps if you’re feeling excited by WWDC’s new features this year!

👏👏👏
