Deep Learning Super Sampling is possibly the biggest game changer in decades

Started by Legend, Feb 06, 2020, 05:12 AM



Legend

Ray casting can be pretty cool. Variable rate shading has some nice applications. Deep neural networks inside games can be powerful.

Deep learning super sampling, however, is the holy grail. Or at least a version of it will be, so from here on I'll just call it deep learning max settings emulation (DLMSE).


At this point we've all seen examples of neural networks increasing the resolution of pictures or interpolating frames, but the secret sauce of DLMSE is that the neural network is trained per game.

The setup is simple. Develop a variant of the game that renders to two separate screens/files. One screen has the game running at max settings beyond what the best gaming computers can do, while the other has the game rendering at average or even lower settings: worse resolution, worse anti-aliasing, worse textures, worse geometry, worse shadows, etc. They could even lower the framerate if it's not too much work for the devs.

Thousands of hours of gameplay are then recorded and shipped off to the cloud for neural net training. The net has the clearly defined goal of making the lower quality video look like the max quality video. It may take a lot of time/processing power but this is something that neural nets are fairly good at.
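In ML terms that's just supervised image-to-image regression. Here's a toy sketch of the idea in numpy; the random "frames", the box-filter "low-settings renderer", and the single linear layer are all stand-ins for illustration, not how a real DLSS-style net works (those are deep convolutional networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def render_high(n=8):
    # stand-in for the max-settings render: a random 8x8 "frame"
    return rng.random((n, n))

def render_low(high):
    # stand-in for the low-settings render: 2x2 box downsample to 4x4
    return high.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# collect paired (low quality, max quality) training frames
pairs = [(render_low(h), h) for h in (render_high() for _ in range(100))]

# "network": one linear map from the 16 low-res pixels to the 64 high-res
# ones, fit by least squares -- the same objective (minimize reconstruction
# error against the max-settings target) a real net optimizes, just with a
# trivially simple model
X = np.stack([lo.ravel() for lo, hi in pairs])   # (100, 16)
Y = np.stack([hi.ravel() for lo, hi in pairs])   # (100, 64)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# at "runtime" the cheap render is upscaled by the learned map
lo, hi = pairs[0]
reconstructed = (lo.ravel() @ W).reshape(8, 8)
err = np.abs(reconstructed - hi).mean()
```

The per-game training the post describes just means `pairs` comes from that one game, so the model can learn its particular textures and artifacts instead of staying generic.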

Once trained, the neural network can run on player computers and consoles to bump up the game graphics with only a minor performance hit. A PC with a 2 teraflop GPU could make the game look better than a 10 teraflop GPU running the game natively.

It even has applications for game streaming. The cloud can stream players a low-bandwidth, lower-resolution video, and local hardware using DLMSE could bump it back up to full-quality 4K.


For this reason, I predict next-next-generation consoles will include dedicated hardware for deep neural networks, just like new Teslas.

darkknightkryta

This doesn't sound too applicable considering how many hours it needs to sample.

the-pi-guy

Quote from: darkknightkryta
This doesn't sound too applicable considering how many hours it needs to sample.
Why not?
You could just have a few people playing the game like the Quality Assurance people, and simply feed that into the network.

Legend

Quote from: darkknightkryta
This doesn't sound too applicable considering how many hours it needs to sample.
It's already a thing.



Even in its current form, where only resolution and aliasing are altered, it's extremely effective. Plus, like Pi says, it would be possible to integrate it with parts of QA and/or automate it.

BananaKing

Yeah I hope games go with 1440p DLSS next gen rather than native 4K. Native 4K is just not worth it. The performance gains are just insane and the picture quality is extremely impressive.

darkknightkryta

I don't think people know how much QA time is necessary for a project as large as a game...

Plus, do you want the glitches that pop up in early development getting fed into the A.I.?

They'd have to do it once QA is finished. 

Legend

Quote from: darkknightkryta
I don't think people know how much QA time is necessary for a project as large as a game...

Plus, do you want the glitches that pop up in early development getting fed into the A.I.?

They'd have to do it once QA is finished.

Yeah, it would not be used during all of QA. That wouldn't even make sense considering a lot of QA isn't playtesting and happens during development.

We're only talking about the very last QA moments.

darkknightkryta

Quote from: Legend
Yeah, it would not be used during all of QA. That wouldn't even make sense considering a lot of QA isn't playtesting and happens during development.

We're only talking about the very last QA moments.
Yeah, that's normally the last month or so.  This adds on quite a bit more than a month from the sounds of it.

the-pi-guy

Quote from: darkknightkryta
Yeah, that's normally the last month or so. This adds on quite a bit more than a month from the sounds of it.
Why would it?

It'd be done in parallel.  

darkknightkryta

Quote from: the-pi-guy
Why would it?

It'd be done in parallel.
Do what in parallel? You'd still need people playing through the game for an immense number of hours. It's gonna take longer than the final QA itself.

the-pi-guy

Quote from: darkknightkryta
Do what in parallel? You'd still need people playing through the game for an immense number of hours. It's gonna take longer than the final QA itself.
How much time does QA generally take for a big game?

the-pi-guy

With the Xbox Series X, it's already been mentioned that there's hardware to support machine learning for improving visuals.

Wonder if PS5 will support something similar.

darkknightkryta

Quote from: the-pi-guy
How much time does QA generally take for a big game?
Ideally QA is time and a half. For IT, anyway; for games I can't say for sure. Just going by hearsay, I would imagine a year in a 3-year dev cycle. Final polish I would imagine is 6 months. I think Killzone Shadowfall was something like that. They had finished the game 6 months before release, if I recall, and spent the rest of the time polishing.

kitler53

Quote from: the-pi-guy
How much time does QA generally take for a big game?
I don't know games, but for my software QA is around 40 percent of the R&D budget.

Quote from: darkknightkryta
Ideally QA is time and a half. For IT, anyway; for games I can't say for sure. Just going by hearsay, I would imagine a year in a 3-year dev cycle. Final polish I would imagine is 6 months. I think Killzone Shadowfall was something like that. They had finished the game 6 months before release, if I recall, and spent the rest of the time polishing.
QA starts on day 1, not just the final months of hardening.


