4D video game OT: tesseracts, hyperspheres, and an entire 4D civilization

Started by Legend, Jan 26, 2019, 08:07 PM


Legend

This game is not releasing soon. Apart from the prototype I made last year, the game is still in pre-production. Cube Royale and VizionEck Adventure are both scheduled to release first. However, since this game is so unique and isn't story-focused, I'm opening up development and sharing every step of the journey from beginning to end.



"Imagine a beautiful and surreal 4D island that you are free to explore at your own pace. Things aren't always peaceful though, as the island is inhabited by ~12 legendary beasts. Embrace your destiny and use swords, shields, and arrows to defeat them."

The fundamental goal is to make a game that is NOT focused on the fourth dimension. The game's story does not involve dimensions and the gameplay does not revolve around dimensions. Let me explain this a little bit.

We live in 3 spatial dimensions. It is considered impossible for us to see 4 spatial dimensions, so it's common to think of the fourth dimension as something extra and unique that's off to the "side" of our 3D world. There are lots of cool movies, games, and books with this approach. Fundamentally, however, they are not full 4D. They are 3.5D in the same sense that LittleBigPlanet and other similar games are 2.5D. The goal of this game, instead, is to be truly, fully 4D and let players see directly in 4D. If 4D humans existed and they had 4D video game consoles, this game would be a perfectly normal game for them without any modification.

It's delightfully counterintuitive, but full 4D makes the game more approachable too. Our brains are really good at interpreting 2D images as 3D scenes, and that skill carries over to interpreting 3D images as 4D scenes. A player can pick up a controller and understand motion instantly. Ana/kata movements don't map to 3D motion, but they're still easy enough to understand as a game mechanic. Basic rotations are also trivial to understand. Complex rotations are the outlier and take a long time to get used to.



Frequently Asked Questions
Q: How can a true 4D game be seen?
A: A 4D camera renders a 3D block of pixels. There is no perfect way to present these 3D pixels, so the game has multiple methods that the player can choose from. These include intuitive methods like VR and non-intuitive methods like space-filling curves.

Q: Is this game only for people that like math?
A: This is not a math game. The target audience is people who just want a cool indie game with swords, monsters, and puzzles.

Q: What if I only like math?
A: The game is open in nature. You can avoid combat and explore the 4D environments.

Q: Is the game called "4D video game?"
A: No. I still need to figure out an actual name for it.

Q: What does it play like?
A: Shadow of the Colossus in 4D is a pretty apt comparison. The environments will have a bigger focus, however, since 4D trees, mountains, rivers, shores, etc. are so incredibly interesting.



This is just the first version of this OP. I'll probably overhaul it in the coming months once I have screenshots to share.

Legend

Since I'm venturing into uncharted territory, I've decided to go ahead and support rasterization AND ray marching for rendering. In the previous thread I jumped from ray marching to rasterization since I determined it'd run way faster (the prototype currently does), but that call depends on too many unknown factors. If next-gen systems have 16GB of RAM but super beefy GPUs, then ray marching could work better. If the game needs reflections, then ray marching could work better. If the game needs complex geometry, then ray marching could work better.

I'll build the engine with both methods so that I can make the decision as late as possible and base it on real-world performance. Using ray marching during the early parts of development will also be nice because it is very easy to get running with lighting, shadows, and reflections (the difficult part is making it run fast).
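
For a taste of what that looks like, here is a minimal 4D sphere-tracing sketch in Python. The shape, step counts, and names are illustrative placeholders rather than actual engine code.

[code]
# Minimal sketch of 4D ray marching (sphere tracing) against a single
# hypersphere SDF. One ray like this would be fired per voxel of the 3D
# image block that the 4D camera produces.
import numpy as np

def sdf_hypersphere(p, radius=1.0):
    # signed distance from 4D point p to a hypersphere centered at the origin
    return np.linalg.norm(p) - radius

def march(origin, direction, max_steps=64, eps=1e-3, max_dist=20.0):
    # walk along the ray; the SDF guarantees nothing is closer than d,
    # so stepping forward by d is always safe
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        d = sdf_hypersphere(origin + t * direction)
        if d < eps:
            return t        # hit
        t += d
        if t > max_dist:
            break
    return None             # miss

print(march(np.array([0.0, 0.0, 0.0, -3.0]), np.array([0.0, 0.0, 0.0, 1.0])))
[/code]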

Legend

Filters, filters, and more filters.

The renderer outputs a 3D block of pixels. An advanced filter is then applied to convert the 3D block into a 2D image that can be displayed on a screen. This is very similar to how CT scans are displayed, so I'll use pictures from them as examples.

1. Transparent Voxels

This filter is pretty straightforward. Every voxel is made mostly transparent and the volume is rendered with a perspective camera. Here is an example of this filter in action on the previous prototype.


Pros:
Straightforward and easy to grasp as a viewer.
All voxels contribute to the final image. The "insides" of objects are visible as they should be.
Works amazingly well when rendered in VR or stereoscopic 3D.

Cons:
Hard to distinguish objects under some circumstances.
Hard to determine distance from the 3D camera when rendered in 2D.
Colors change depending on viewing angle.
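
For the curious, filter #1 above boils down to something like the sketch below: a uniform alpha and back-to-front compositing along the view axis. The constant alpha, block size, and axis choice are illustrative assumptions, not the real implementation.

[code]
# Sketch of filter #1: every voxel gets the same small alpha and the block
# is composited back to front along one axis (standing in for the
# perspective camera; the depth ordering is the part that matters here).
import numpy as np

def transparent_voxels(block_rgb, alpha=0.05):
    # block_rgb: (X, Y, Z, 3) voxel colors -> (X, Y, 3) image, composited along Z
    x, y, z, _ = block_rgb.shape
    image = np.zeros((x, y, 3))
    for k in reversed(range(z)):                      # farthest slice first
        image = (1 - alpha) * image + alpha * block_rgb[:, :, k, :]
    return image

image = transparent_voxels(np.random.rand(32, 32, 32, 3))
[/code]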

2. Dynamic Voxels

In a similar fashion to the previous filter, the voxels are made partially transparent and rendered with a perspective camera. This time, however, the voxels' alpha values are used to make important objects appear mostly solid. Here is a simple example from the initial prototype.


Pros:
Looks visually pleasing and is easy to understand.
Depth is more pronounced than other methods, especially in 2D.
Allows for high contrast between different parts of the image.

Cons:
Hides voxels from view when they are behind solid objects.
"Insides" of important objects are not visible.
Less important objects are hard to see and easy to miss.
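
Filter #2 is essentially the same compositing loop, just with a per-voxel alpha driven by some notion of importance. How importance is actually computed in the game isn't something this post specifies, so the importance channel below is a placeholder.

[code]
# Sketch of filter #2: per-voxel alpha, high for "important" voxels so they
# read as nearly solid, low for everything else.
import numpy as np

def dynamic_voxels(block_rgb, importance, lo=0.03, hi=0.9):
    # block_rgb: (X, Y, Z, 3); importance: (X, Y, Z) with values in [0, 1]
    x, y, z, _ = block_rgb.shape
    image = np.zeros((x, y, 3))
    for k in reversed(range(z)):                      # back to front
        a = (lo + (hi - lo) * importance[:, :, k])[..., None]
        image = (1 - a) * image + a * block_rgb[:, :, k, :]
    return image

image = dynamic_voxels(np.random.rand(32, 32, 32, 3), np.random.rand(32, 32, 32))
[/code]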

3. Reconstruction

Voxels from the initial render are mapped directly to pixels on the screen. The above picture shows many side-by-side slices. Another way to map 3D to 2D is with space-filling curves. My curve of choice is the Hilbert curve. A 3D curve wraps around the voxel volume and maps every voxel to a point along a line. Then this line is stretched and curved to cover the entire 2D screen. This mapping method breaks the scene apart and makes no sense when first viewed, but it can become second nature with enough experience. Here is a screenshot of this filter in action from the initial prototype.


Pros:
Every single voxel is fully visible and fully understood.
Makes cool patterns on the screen.

Cons:
Countless hours of gameplay are needed before it makes sense.
Cannot be viewed in VR or 3D.
Obfuscates the game's 4D nature.


Every filter can have lots of options and settings. I think it'd also be good to let the screen be divided into multiple windows so multiple filters could be viewed side by side. Are there any additional filters that you can think of?

Legend

Regarding method #3, reconstruction, I'm looking for more mapping methods.


In general, my primary approach is converting the 3D space to a 1D space using a 3D space-filling curve, then converting the 1D space to a 2D space using a 2D space-filling curve. Right now my library of curves includes the 2D Hilbert curve and the 3D Hilbert curve. That's it! They produce a very interesting result that might be optimal, but I don't want to blindly assume it is.
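
For anyone who wants to play along at home, here is roughly what that 3D -> 1D -> 2D mapping looks like using the third-party hilbertcurve Python package (pip install hilbertcurve). The volume and screen sizes are illustrative, and the exact method names depend on the package version.

[code]
# Sketch of the reconstruction mapping: a 3D Hilbert curve flattens the voxel
# volume to a 1D index, then a 2D Hilbert curve spreads that index over the screen.
import numpy as np
from hilbertcurve.hilbertcurve import HilbertCurve

N = 16                               # voxel volume is N x N x N (power of 2)
curve_3d = HilbertCurve(p=4, n=3)    # 2^4 = 16 cells per axis, 3 dimensions
curve_2d = HilbertCurve(p=6, n=2)    # 2^6 = 64 pixels per axis; 64*64 = 16^3

def voxel_to_pixel(x, y, z):
    d = curve_3d.distance_from_point([x, y, z])   # 3D -> 1D
    px, py = curve_2d.point_from_distance(d)      # 1D -> 2D
    return px, py

# Flatten a whole RGB voxel block into a 64 x 64 image.
block = np.random.rand(N, N, N, 3)   # placeholder render output
image = np.zeros((64, 64, 3))
for x in range(N):
    for y in range(N):
        for z in range(N):
            px, py = voxel_to_pixel(x, y, z)
            image[py, px] = block[x, y, z]
[/code]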

Here is how these two curves together map 3D space to 2D space:

(rgb values are xyz offsets)

There are a lot of good things about this output. Similar colors (neighboring spots in 3D) mostly end up close to each other. Here is a screenshot of this curve combo in action:



Alternate view using method #1


The blue tesseract is mostly kept together in the final image, and so is the red hypersphere. I believe, though, that this is a fairly universal feature of the method in general: different space-filling curves should have similar "locality" but might be better overall.

So which other curves should I try?



Since the beginning, I've also been on the lookout for a good 3D space-filling surface. The current method converts 3D to 1D and then 1D to 2D, which "shreds" the image more than necessary. A better approach might be to go directly from 3D to 2D. The simplest method would be a 2D Hilbert curve extruded along the third dimension, but I dislike how it makes one dimension act completely differently from the other two.
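
To make that asymmetry concrete, the extruded version would map voxels to pixels something like this (same hilbertcurve package as above, illustrative sizes): two of the axes get flattened by the 2D curve while the third passes straight through to the screen.

[code]
# Sketch of the "extruded 2D Hilbert curve" idea: x and y are flattened onto one
# screen axis by the 2D curve, while z maps directly to the other screen axis,
# so z behaves nothing like x and y. The screen ends up N*N wide but only N tall.
from hilbertcurve.hilbertcurve import HilbertCurve

N = 16
cross_section = HilbertCurve(p=4, n=2)    # fills the 16 x 16 xy cross-section

def voxel_to_pixel_extruded(x, y, z):
    return cross_section.distance_from_point([x, y]), z   # (screen x, screen y)

print(voxel_to_pixel_extruded(3, 5, 9))
[/code]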

Legend



4D Golf released recently, which made me think of my old game concept.

I haven't played it myself, but I think it really suffers from using the slice method that almost every 4D game uses. It looks so complicated and confusing, yet none of that is intrinsic to 4D geometry. I posted about it in the old thread, but just imagine how needlessly complex a 3D world would feel if you could only see it like this:

(a sphere on a brown table)


I'm focused on Hapax, and the game after Hapax, and I don't even know if there is a market for this type of thing, but it would be so cool to see a 4D game that attempted to actually show a 4D image like I was doing. The fourth dimension is intuitive when you can actually see it.


I struggled with representing the 3D volume on a 2D screen, as you can see in previous posts, so I still fiddle with it from time to time. Today I think the best method is a combination of a "hologram" and reconstruction. Slice the 3D volume into a whole bunch of horizontal squares, then position the 3D camera with slightly fake parallax so that, when viewed from the side at a small angle, the squares fill up the view without overlapping. It's odd and artificial but very approachable with the right context.

Sliced Sculpture Artworks | Saatchi Art

Like this but no gaps or overlaps.


Give it environmental context, like you're playing as a person who is controlling a 4D avatar and viewing it on their slice-o-matic hologram display, and I think most players' brains would embrace it in seconds. All their brainpower can be focused on the 4D content that is being displayed. Stuff to the left/right on the display is to the left/right of your avatar. Stuff that is toward/away on the display is to the ana/kata of your avatar. Stuff that is up/down on the display is above/below your avatar. Smaller-appearing objects are farther away and bigger-appearing objects are closer. Moving sideways would shift closer objects faster than it would shift farther-away objects. Rotations shift everything regardless of distance.
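
Here is a toy sketch of that layout as I read it, with the parallax left out entirely: each horizontal slice of the 3D image block becomes one thin band of the screen, with no gaps or overlaps. The axis names, sizes, and band ordering are my own assumptions.

[code]
# Toy sketch of the "sliced hologram" layout. Assumed axes of the 3D image
# block: x = left/right, y = up/down, w = ana/kata. Each slice at height y
# becomes a thin band of the screen; within a band, w runs top to bottom.
import numpy as np

X, Y, W = 64, 16, 8                    # illustrative block dimensions
block = np.random.rand(X, Y, W, 3)     # placeholder 3D image block (RGB)

image = np.zeros((Y * W, X, 3))        # Y bands stacked vertically, each W pixels tall
for y in range(Y):
    for w in range(W):
        image[y * W + w, :, :] = block[:, y, w, :]
# The real version would view these bands with a 3D camera and slight fake
# parallax instead of tiling them flat like this.
[/code]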


Left stick to move left/right/ana/kata. A shoulder button moves you forward. Right stick to rotate the camera left/right/ana/kata. Maybe another shoulder button to look up/down. 4D objects can rotate in three additional ways, but two of those don't make sense in a game on the ground, and the third could be optional.
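
For anyone counting along: a 4D rotation happens within a plane spanned by two axes, so there are six independent rotations in total, i.e. the three mapped to the camera controls above plus the three extra ones mentioned at the end. A minimal sketch (the axis labels are just for illustration):

[code]
# Sketch of 4D rotations as plane rotations: each one mixes exactly two of the
# four axes and leaves the other two fixed, giving six independent planes.
import numpy as np

AXES = {"x": 0, "y": 1, "z": 2, "w": 3}

def rotation_4d(plane, angle):
    # e.g. rotation_4d("xw", 0.3) rotates the x axis toward the w axis
    a, b = AXES[plane[0]], AXES[plane[1]]
    m = np.eye(4)
    c, s = np.cos(angle), np.sin(angle)
    m[a, a], m[a, b] = c, -s
    m[b, a], m[b, b] = s, c
    return m

for plane in ["xy", "xz", "xw", "yz", "yw", "zw"]:
    print(plane)
    print(np.round(rotation_4d(plane, np.pi / 2), 2))
[/code]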