PlanetKit is a project aimed at creating a toolkit for making interactive virtual worlds. I’m writing it in the Rust programming language, which I’m finding delightful to work with, and a great fit for the task.
Over the past few weeks I’ve made the changes needed to represent large worlds. This broke down into two main threads of work:
First, we need to dynamically load parts of the world as they are needed, and unload them when they are not. This lets us have worlds as detailed as we’d like, because we don’t need the whole thing loaded at once. This is fairly uninteresting (until I have multiple player characters and other transient “volumes of interest”), so I’m not going to write about this further.
Second, and much more interesting, we need to represent the transform of each physical thing relative to some parent object, rather than a global origin. This lets us have arbitrarily large spaces without losing precision where it matters. More on this below.
Before we dive in, here’s what it looks like after those changes:
The planet shown in the video above has the same radius as Earth. If we could render unlimited chunks at a time, you’d be able to see the slight curvature of the horizon. For now, though (and for as long as I don’t have any clever level-of-detail system), the view distance of mere meters makes it appear flat.
Hierarchical coordinate system
Physical objects in PlanetKit have a position and orientation in space. Prior to my most recent changes, all objects had their position and orientation expressed relative to a global origin: the center of the universe. That’s fine as long as your planet is tiny, but let’s imagine what that implies when your planet is very large.
Floating point numbers are great for expressing small numbers with high precision, or large numbers with lower precision: the absolute gap between adjacent representable values grows with the magnitude of the number. On a very large planet, your player character’s position all the way out at the surface has so little precision left that you will begin to notice visual jitter in the relative positions of the character and the ground beneath it.
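To make that concrete, here’s a quick check of the gap between adjacent 32-bit floats at Earth-radius magnitudes (f32 is used for illustration; rendering pipelines typically work at this precision even when the simulation uses f64):

```rust
fn main() {
    // Earth's radius in meters, as a 32-bit float.
    let r = 6_371_000.0_f32;
    // The next representable f32 above r.
    let next = f32::from_bits(r.to_bits() + 1);
    // Relative to a planet-centered origin, positions near the
    // surface can only move in steps of this size.
    println!("step = {} m", next - r); // prints: step = 0.5 m
}
```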
This problem is even easier to notice if you have a system of globes, e.g., the Earth orbiting its sun. Now with the sun as the center of the universe, the precision available all the way out on Earth is so low that everything around you will appear as a giant boiling, jittery mess. However, while everything else jitters around, the sun will appear just fine: we don’t need much precision at that distance to make it appear to stay in the same spot relative to Earth. And this difference between how much precision we care about in practice at small distances and at large ones is where the solution lies.
Drawing the sun relative to the player’s perspective doesn’t require centimeter precision. But drawing something else on the ground near the player does. So what if we organise our universe into a hierarchy, such that objects near each other on the ground can have their positions expressed relative to some nearby “local” origin that is common to them? That hierarchy might look something like this:
- Sun
  - Earth
    - Moon
    - Earth chunk 1
    - Earth chunk 2
      - Hostile NPC
      - Player character
        - Third-person “chase” camera
    - Earth chunk 3
    - …
  - Mars
  - Venus
To model that, here’s what I now store for each object that has some position and orientation in space:
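Roughly, it’s a component like this (a sketch: the field names are illustrative, and Entity is assumed to be a specs-style entity handle, not necessarily PlanetKit’s exact type):

```rust
use nalgebra as na;
use specs::Entity;

pub type Iso3 = na::Isometry3<f64>;

/// Where an entity is, expressed relative to its parent in the
/// spatial hierarchy rather than relative to a global origin.
pub struct Spatial {
    /// Translation and rotation relative to the parent entity.
    pub local_transform: Iso3,
    /// The entity this transform is relative to; `None` for the
    /// root of the hierarchy (the "center of the universe").
    pub parent_entity: Option<Entity>,
}
```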
(Iso3 above is an alias for na::Isometry3<f64>, which neatly wraps up translation and orientation.)
Each object now holds a reference to its parent entity in this hierarchy (unless it happens to be the absolute center of the universe) and its transformation is specified relative to that parent. From there, if you want to, say, draw the moon from the perspective of a camera attached to the player’s character, you need to:
- Find the lowest common ancestor between the moon and the camera. That will be Earth. The moon has Earth as its direct parent, but on the other path we have to visit the player character and its closest chunk before we get to the Earth.
- Re-express the positions of both the camera and the moon relative to Earth. This involves multiplying together the local transformations along each of the two paths up to the common ancestor (Earth).
- Invert the camera’s transformation relative to Earth, and multiply it by the moon’s transformation relative to Earth. The result is the moon’s transformation relative to the camera.
And just like that, you have the position and orientation of any object in the universe relative to any other object in the universe. If they’re far apart, then you’ll lose a lot of precision and not care. If they’re close together and also close to their common parent, then we’ll be dealing with small numbers the whole time, and we’ll end up with a nice precise relative transformation.
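In nalgebra terms, those steps boil down to isometry composition and inversion. Here’s a minimal sketch (not PlanetKit’s actual code), assuming you’ve already walked the hierarchy and collected each path of local transforms from the common ancestor down:

```rust
use nalgebra as na;

type Iso3 = na::Isometry3<f64>;

/// Compose local transforms along a path from an ancestor down to a
/// descendant, yielding the descendant's transform relative to that
/// ancestor. `path` is ordered from the ancestor's immediate child
/// down to the descendant itself.
fn relative_to_ancestor(path: &[Iso3]) -> Iso3 {
    path.iter().fold(Iso3::identity(), |acc, local| acc * local)
}

/// Transform of `target` as seen from `observer`, where both are
/// expressed relative to their lowest common ancestor.
fn target_relative_to_observer(observer: &Iso3, target: &Iso3) -> Iso3 {
    observer.inverse() * target
}
```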
I’ve added a bunch of helpers under the SpatialStorage trait to make all this easy.
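For example, the core helper might look roughly like this (the method name and signature here are illustrative, not confirmed against the actual trait):

```rust
use nalgebra as na;
use specs::Entity;

type Iso3 = na::Isometry3<f64>;

pub trait SpatialStorage {
    /// The transform of entity `a` expressed in the coordinate space
    /// of entity `b`, computed via their lowest common ancestor.
    /// (Illustrative name and signature.)
    fn a_relative_transform_to_b(&self, a: Entity, b: Entity) -> Iso3;
}
```

With a helper like that, getting the moon’s pose from the chase camera’s point of view is a single call, regardless of where the two sit in the hierarchy.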
Note that this lets us have an in-game universe as large as we want, even much bigger than our own real universe, as long as we’re able to arrange the objects in it in a spatial hierarchy, and maintain that hierarchy by re-parenting objects as necessary. For example, if you have a spaceship in Earth orbit, and then you fly it off to Mars, at some point you’ll want to re-parent it onto Mars so that its transform can be expressed precisely relative to Mars, which now matters, rather than Earth, which doesn’t.
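Re-parenting itself is cheap: it just means re-expressing the same pose relative to the new parent. A minimal sketch, assuming both transforms are already expressed relative to a common ancestor such as the Sun:

```rust
use nalgebra as na;

type Iso3 = na::Isometry3<f64>;

/// New local transform for an object being re-parented, chosen so
/// that its overall pose is unchanged:
/// new_parent * new_local == old pose.
fn reparented_local(child: &Iso3, new_parent: &Iso3) -> Iso3 {
    // Both `child` and `new_parent` are expressed relative to the
    // same ancestor (e.g. the Sun, for an Earth-to-Mars transfer).
    new_parent.inverse() * child
}
```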
What’s next?
I’m going travelling, so I won’t be working on PlanetKit for the next several weeks.
When I get back, my plan is to…
- Make the camera’s movement smoothly follow the player character’s position and orientation, rather than snapping as it does today.
- Add a “goal” object or location to the world. Walking onto the object will win the game.
- Load and save worlds, so that we can build a level, define a starting point, etc., then load it and play it.
As always, the source for everything I’m talking about here is up in the planetkit repository on GitHub.