4D video game OT: tesseracts, hyperspheres, and an entire 4D civilization

Started by Legend, Jan 26, 2019, 08:07 PM


This game is not releasing soon. Apart from the prototype I made last year, the game is still in pre-production. Cube Royale and VizionEck Adventure are both scheduled to release first. However, since this game is so unique and isn't story focused, I'm opening up development and sharing every step of the journey from beginning to end.

"Imagine a beautiful and surreal 4D island that you are free to explore at your own pace. Things aren't always peaceful though, as the island is inhabited by ~12 legendary beasts. Embrace your destiny and use swords, shields, and arrows to defeat them."

The fundamental goal is to make a game that is NOT focused on the fourth dimension. The game's story does not involve dimensions and the gameplay does not revolve around dimensions. Let me explain this a little bit.

We live in 3 spatial dimensions. It is considered impossible for us to see 4 spatial dimensions, so it's common to think of the fourth dimension as something extra and unique that's off to the "side" of our 3D world. There are lots of cool movies, games, and books with this approach. Fundamentally however they are not full 4D. They are 3.5D in the same sense that LittleBigPlanet and other similar games are 2.5D. The fundamental goal of this game is for it to be truly full 4D and let players see directly in 4D. If 4D humans existed and they had 4D video game consoles, this game without modification would be just a normal game for them.

It's delightfully counterintuitive but full 4D makes the game more approachable too. Our brains are really good at interpreting 2D images as 3D scenes and this carries over to 4D scenes. A player can pick up a controller and understand motion instantly. Ana/kata movements don't map to 3D motion but it's still easy enough to understand them as a game mechanic. Basic rotations are also trivial to understand. Complex rotations are the outlier and take a long time to get used to.

Frequently Asked Questions
Q: How can a true 4D game be seen?
A: A 4D camera renders 3D blocks of pixels. There is no perfect way to present these 3D pixels, so the game has multiple methods that the player can choose from. This includes intuitive methods like VR and non-intuitive methods like space filling curves.

Q: Is this game only for people that like math?
A: This is not a math game. The target audience is people that just want a cool indie game with swords, monsters, and puzzles.

Q: What if I only like math?
A: The game is open in nature. You can avoid combat and explore the 4D environments.

Q: Is the game called "4D video game?"
A: No. I still need to figure out an actual name for it.

Q: What does it play like?
A: Shadow of the Colossus in 4D is a pretty apt comparison. The environments will have a bigger focus however since 4D trees, mountains, rivers, shores, etc. are so incredibly interesting.

This is just the first version of this OP. I'll probably overhaul it in the coming months once I have screenshots to share.


Since I'm venturing into uncharted territory, I've decided to go ahead and support rasterization AND ray marching for rendering. In the previous thread I jumped from ray marching to rasterization since I determined it'd run way faster (the prototype currently does), but that conclusion is dependent on just too many unknown factors. If next gen systems have 16GB of RAM but super beefy GPUs, then ray marching could work better. If the game needs reflections, then ray marching could work better. If the game needs complex geometry, then ray marching could work better.

I'll build the engine with both methods so that I can make the decision as late as possible and base it off real world performance. Using ray marching during the early parts of development will also be nice because it is very easy to get running with lighting, shadows, and reflections (the difficult part is making it run fast).
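For anyone curious why ray marching is so easy to get running: sphere tracing only needs a distance estimate, so the core loop generalizes to 4D almost unchanged. Here's a minimal sketch in Python; this is my own illustration, not engine code, and the hypersphere SDF and function names are assumptions for the example.

```python
import math

def hypersphere_sdf(p, center, radius):
    """Signed distance from a 4D point to a hypersphere's surface."""
    return math.dist(p, center) - radius

def march(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t          # hit: distance travelled along the ray
        t += dist
        if t > max_dist:
            break
    return None               # miss

# A unit hypersphere at the origin, viewed by a ray travelling along +w.
scene = lambda p: hypersphere_sdf(p, (0.0, 0.0, 0.0, 0.0), 1.0)
hit = march((0.0, 0.0, 0.0, -3.0), (0.0, 0.0, 0.0, 1.0), scene)  # ~2.0
```

The only 4D-specific part is that points have four components; shadows and reflections fall out of re-running the same loop from the hit point, which is why the hard part really is performance, not features.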


Filters, filters, and more filters.

The renderer outputs a 3D block of pixels. An advanced filter is then applied to convert the 3D block into a 2D image that can be displayed on a screen. This is very similar to CT scans so I'll use pictures from them as examples.

1. Transparent Voxels

This filter is pretty straightforward. Every voxel is made mostly transparent and the volume is rendered with a perspective camera. Here is an example of this filter in action on the previous prototype.

Pros:
- Straightforward and easy to grasp as a viewer.
- All voxels contribute to the final image. The "insides" of objects are visible as they should be.
- Works amazingly well when rendered in VR or stereoscopic 3D.

Cons:
- Hard to distinguish objects under some circumstances.
- Hard to determine distance from the 3D camera when rendered in 2D.
- Colors change depending on viewing angle.
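The compositing behind this filter can be sketched in a few lines. This is a deliberately simplified orthographic version (the actual filter uses a perspective camera) with a uniform alpha; the function name and array shapes are my own for illustration.

```python
import numpy as np

def composite_transparent(volume, alpha=0.1):
    """Front-to-back alpha compositing of a 3D RGB voxel block along its depth axis.

    volume: float array of shape (depth, height, width, 3), values in [0, 1].
    Every voxel gets the same low alpha, so the "insides" of objects still
    contribute to the final 2D image instead of being fully occluded.
    """
    h, w = volume.shape[1], volume.shape[2]
    image = np.zeros((h, w, 3))
    transmittance = np.ones((h, w, 1))    # how much light still gets through
    for slice_rgb in volume:              # nearest slice first
        image += transmittance * alpha * slice_rgb
        transmittance *= 1.0 - alpha
    return image

block = np.ones((4, 2, 2, 3))             # a tiny all-white voxel block
img = composite_transparent(block, alpha=0.1)
```

Giving each voxel its own alpha instead of a uniform one is essentially what filter #2 below does, so the same sketch covers both with one extra array.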

2. Dynamic Voxels

In a similar fashion to the previous filter, the voxels are made partially transparent and rendered with a perspective camera. This time, however, the voxels' alpha values are used to make important objects appear mostly solid. Here is a simple example from the initial prototype.

Pros:
- Looks visually pleasing and is easy to understand.
- Depth is more pronounced than other methods, especially in 2D.
- Allows for high contrast between different parts of the image.

Cons:
- Hides voxels from view when they are behind solid objects.
- "Insides" of important objects are not visible.
- Less important objects are hard to see and easy to miss.

3. Reconstruction

Voxels from the initial render are mapped directly to pixels on the screen. The above picture shows many side-by-side slices. Another way to map 3D to 2D is with space filling curves. My curve of choice is the Hilbert curve. A 3D curve wraps around the voxel volume and maps every voxel to a point along a line. Then this line is stretched and curved to cover the entire 2D screen. This mapping method breaks the scene apart and makes no sense when first viewed, but it can become second nature with enough experience. Here is a screenshot of this filter in action from the initial prototype.

Pros:
- Every single voxel is fully visible and fully understood.
- Makes cool patterns on the screen.

Cons:
- Countless hours of gameplay are needed before it makes sense.
- Cannot be viewed in VR or 3D.
- Obfuscates the game's 4D nature.

Every filter can have lots of options and settings. I think it'd also be good to let the screen be divided into multiple windows so multiple filters could be viewed side by side. Are there any additional filters that you can think of?


In regards to method #3, reconstruction, I'm looking for more methods.

In general my primary approach is converting the 3D space to a 1D space using a 3D space filling curve, then converting the 1D space to a 2D space using a 2D space filling curve. Right now my library of curves includes the 2D Hilbert curve and the 3D Hilbert curve. That's it! Together they produce a very interesting result that might be optimal, but I don't want to blindly assume it is.
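As a cheap baseline for the same 3D → 1D → 2D pipeline, Morton (Z-order) curves just interleave coordinate bits and are trivial to implement. Morton curves have weaker locality than Hilbert curves, which is exactly what makes them a useful comparison point. This sketch is under my own assumptions: power-of-two sizes, a 16³ volume mapping onto a 64×64 image.

```python
def morton3d_encode(x, y, z, bits=4):
    """Interleave coordinate bits into a single index along the 3D Z-order curve."""
    d = 0
    for i in range(bits):
        d |= ((x >> i) & 1) << (3 * i)
        d |= ((y >> i) & 1) << (3 * i + 1)
        d |= ((z >> i) & 1) << (3 * i + 2)
    return d

def morton2d_decode(d, bits=6):
    """De-interleave a 1D index back into (x, y) along the 2D Z-order curve."""
    x = y = 0
    for i in range(bits):
        x |= ((d >> (2 * i)) & 1) << i
        y |= ((d >> (2 * i + 1)) & 1) << i
    return x, y

def voxel_to_pixel(x, y, z, bits=4):
    """3D voxel -> 1D index -> 2D pixel; 'bits' must be even so the image is square."""
    d = morton3d_encode(x, y, z, bits)
    return morton2d_decode(d, bits=3 * bits // 2)
```

Because both steps are bijective bit shuffles, every voxel in the 16³ block lands on exactly one pixel of the 64×64 image, same as with the Hilbert pair.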

Here is how these two curves together map 3D space to 2D space

(rgb values are xyz offsets)

There are a lot of good things about this output. Similar colors (neighboring spots in 3D) mostly end up close to each other. Here is a screenshot of this curve combo in action:

Alternate view using method #1

The blue tesseract is mostly kept together in the final image, and the red hypersphere is mostly kept together in the final image. I believe, though, that this is a mostly universal feature of the method in general: different space filling curves should have similar "locality" but might be better overall.

So which other curves should I try?
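One way to answer that empirically rather than by eye: score each candidate mapping by how far apart 3D-adjacent voxels land on the 2D screen. The metric and the brute-force loop below are my own suggestion, not anything from the prototype.

```python
import math

def mean_neighbor_distance(mapping, size):
    """Average 2D Euclidean distance between voxels that are adjacent in 3D.

    mapping: function (x, y, z) -> (px, py).  A lower score means the curve
    combo preserves 3D locality better on the 2D screen.
    """
    total, count = 0.0, 0
    for x in range(size):
        for y in range(size):
            for z in range(size):
                p = mapping(x, y, z)
                # check the three positive-direction neighbors once each
                for nx, ny, nz in ((x + 1, y, z), (x, y + 1, z), (x, y, z + 1)):
                    if nx < size and ny < size and nz < size:
                        total += math.dist(p, mapping(nx, ny, nz))
                        count += 1
    return total / count
```

Running this over the same volume for each curve combo (Hilbert–Hilbert, Morton–Morton, or any mix) gives a single number to compare, instead of judging the colored locality pictures by eye.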

Since the beginning, I've also been on the lookout for a good 3D space filling surface. The current method converts 3D to 1D and then 1D to 2D, which "shreds" the image more than necessary. A better approach might be to go directly from 3D to 2D. The simplest method would be a 2D Hilbert curve extruded along the third dimension, but I dislike how it makes one dimension behave completely differently from the other two.
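That extruded variant is easy to sketch with the classic iterative 2D Hilbert mapping: linearize each (x, y) slice along the curve and keep z as the second screen axis. The helper names and sizes here are mine; n must be a power of two.

```python
def xy2d(n, x, y):
    """Map (x, y) in an n x n grid to its index along the 2D Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the sub-curve connects up correctly
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

def extruded_hilbert(x, y, z, n):
    """Direct 3D -> 2D: Hilbert-linearize the (x, y) slice, keep z as a screen axis.

    This is the drawback noted above in code form: px walks a Hilbert curve
    while py is just raw z, so one dimension behaves nothing like the other two.
    """
    return xy2d(n, x, y), z
```

It does avoid the double "shredding" of the 3D → 1D → 2D route, though, since z-neighbors always stay exactly one pixel apart.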