Programming Thread

Started by the-pi-guy, Mar 13, 2016, 10:39 PM


Legend

It's almost like you know what you are doing!




I'm still away but I think I've figured out my map.

Every pixel on the 512x512 grid will be its own renderer in charge of just that tile. All ~260,000 will update individually and only care about their own needs. A tile far from the player will mostly not change and just have a 2x2 or 4x4 texture. As the camera gets closer, the tile re-renders to a higher resolution. This isn't super fast but can look like texture streaming. At max the tile could be 1024x1024 or higher and cover the full screen. As the camera moves away, the texture is scaled down to decrease memory.

This has all the pros of a sparse quadtree while also rendering faster. Rendering speed depends on tile count and resolution, so this method scales well on both counts. There's also no need for a multi-level hierarchy like a quadtree has. In addition, every tile can bake some info instead of recalculating it every frame.

Labels can be decals rendered on top. The grid of textures could be packed into a single texture to make 3d rendering really fast.
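
A minimal sketch of how a tile could pick its texture resolution from camera distance (the halving rule, the thresholds, and the max resolution here are just illustrative numbers, not the actual renderer):

# Hypothetical sketch: pick a tile's texture resolution from camera distance.
# The thresholds and resolutions are illustrative, not the real values.

def tile_resolution(distance_to_camera: float, max_res: int = 1024) -> int:
    """Return a power-of-two texture size for one map tile.

    Far tiles get a tiny 2x2 or 4x4 texture; as the camera approaches,
    the tile is re-rendered at progressively higher resolutions up to max_res.
    """
    res = max_res
    d = distance_to_camera
    while res > 2 and d > 1.0:   # halve the resolution each time the distance doubles
        res //= 2
        d /= 2.0
    return max(res, 2)

# Example: a tile right next to the camera vs. one far away.
print(tile_resolution(0.5))    # 1024
print(tile_resolution(100.0))  # 8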

the-pi-guy

It's almost like you know what you are doing!
I have a pretty good handle on this stuff.  :D

I think I forgot to mention it when I was writing about compilers, but there are programs that generate the parser from a grammar.  There is still a lot of work that has to be done outside of the parser though.

Grammars have a few nuances that sometimes lead to bizarre problems if the grammar isn't set up correctly. I still have to implement a symbol table and implement what everything in the grammar actually does.  This project will definitely be more work than it's worth, but it'll still be kind of fun, I hope...

The end result will basically be a command terminal that uses an interpreter to act on the inputs.

the-pi-guy

In progress:

set_func -> set id( args ) = statements

any_exp -> bool_exp | set_exp | mod_exp
bool_exp -> (bool_exp bool_op bool_exp) | exp op exp | true | false | null
set_exp -> SET id = any_exp
mod_exp -> exp (MOD exp)

exp -> exp + factor | exp - factor
factor -> factor * fact | factor / fact | factor % fact
fact -> fac ^ fact
fac -> term !
term -> (exp) | real | id | id( in_args )

op -> = | != | >= | <= |  < | >
bool_op -> || | &&

statements -> while bool_exp { statements }
statements -> if bool_exp { statements } (else{ statements })?
statements -> set_exp;
statements -> return exp;
statements -> print exp;


----------------------

PRINT
RETURN
SET
IF
WHILE
LPAREN
RPAREN
LBRACK
RBRACK
LSQ
RSQ
EXP
ADD
SUB
DIV
MUL
FACTORIAL
OR
AND
EQUAL
NEQUAL
GT
LT
GEQ
LEQ
MOD
REMAIN
REAL = (0-9)*.?(0-9)*
ID = (A-Z | a-z) (A-Z | a-z | 0-9 | _)*
Found some errors that I am fixing in my program.
And I still have a lot of parsing errors I have to figure out.  
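
A rough sketch of how a couple of those productions could look as a recursive-descent parser (left recursion rewritten as loops; the tiny tokenizer and the "term" naming are simplifications, not the actual program):

# Rough sketch of a recursive-descent parser for two of the productions above:
#   exp    -> exp + factor | exp - factor | factor
#   factor -> factor * term | factor / term | term
# (Left recursion is rewritten as loops; the tokenizer is an assumption.)
import re

TOKEN_RE = re.compile(r"\s*(?:(\d+\.?\d*)|(.))")

def tokenize(src):
    tokens = []
    for number, op in TOKEN_RE.findall(src):
        tokens.append(("REAL", float(number)) if number else ("OP", op))
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else (None, None)

    def exp(self):
        value = self.factor()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            _, op = self.tokens[self.pos]; self.pos += 1
            rhs = self.factor()
            value = value + rhs if op == "+" else value - rhs
        return value

    def factor(self):
        value = self.term()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            _, op = self.tokens[self.pos]; self.pos += 1
            rhs = self.term()
            value = value * rhs if op == "*" else value / rhs
        return value

    def term(self):
        kind, val = self.tokens[self.pos]; self.pos += 1
        if kind == "REAL":
            return val
        if val == "(":                      # term -> ( exp )
            inner = self.exp()
            self.pos += 1                   # consume ')'
            return inner
        raise SyntaxError(f"unexpected token {val!r}")

print(Parser(tokenize("2 + 3 * (4 - 1)")).exp())  # 11.0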

Legend

The Markov-chain-like system I set up for words starts producing really, really good results once there are hundreds and hundreds of source words.

It made me interested in doing the same thing with pictures. Take a database of a few thousand pictures of the same thing, faces for example, and see how good the system is at generating new ones.


Pixel by pixel a new image could form. Pixels would be added in a random order so that there isn't any bias towards a direction, and later pixels could be less random and focus more on only the best output.

The hardest part would be figuring out a "scoring" system. My word system is fairly basic and just counts matching letters. This system would need to go pixel by pixel and search in a circle for similar colors, find the best match based on similarity and distance, and then add up the scores for all already-defined pixels.

Do it in black and white first, and then deal with color later. It wouldn't work to do the RGB channels separately.
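
A hedged sketch of what that scoring could look like in grayscale, just to make the idea concrete (the radius, the weighting, and all the names are assumptions, not the described system):

# For a candidate value at (x, y), look at already-placed pixels within a
# radius and score by similarity weighted by distance.
import math

def score_candidate(canvas, placed, x, y, candidate, radius=4):
    """canvas: 2D list of grayscale values (0-255); placed: set of (x, y) filled so far."""
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if (nx, ny) not in placed:
                continue
            dist = math.hypot(dx, dy)
            if dist > radius:
                continue
            similarity = 1.0 - abs(canvas[ny][nx] - candidate) / 255.0
            total += similarity / dist          # closer matches count for more
    return total

# Pick the best of a few random candidate values for one pixel:
# best = max(random.sample(range(256), 16),
#            key=lambda v: score_candidate(canvas, placed, x, y, v))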

the-pi-guy

I've been studying some of the basics.  
Object Oriented was 5 years ago now.  I just looked over most of what I was a little unsure of.  I was familiar with the words, but I've forgotten some of the distinctions.

Polymorphism:
There are a couple types of polymorphism.  Generically, polymorphism is where one thing can be used in multiple ways.

One type: function overloading.  
You can give multiple functions the same name, but with different parameters.
So like:
add(int, int)
add(double, double)
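
A rough Python analogue: Python doesn't have compile-time overloading like the add(int, int) / add(double, double) example, but functools.singledispatch, which picks an implementation based on the first argument's type, is a loose approximation of the concept:

from functools import singledispatch

@singledispatch
def describe(value):
    return f"something else: {value!r}"

@describe.register
def _(value: int):
    return f"an integer: {value}"

@describe.register
def _(value: float):
    return f"a float: {value}"

print(describe(3))      # an integer: 3
print(describe(3.5))    # a float: 3.5
print(describe("hi"))   # something else: 'hi'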

Another is subtyping.  

------
Another concept is abstract classes and interfaces.  

An interface is more useful when you want to share behavior (a contract); an abstract class is more useful when you want to share code.
In an abstract class you can basically mix methods that need to be implemented and methods that are already programmed.  An interface is something that needs to be entirely implemented by whatever uses it.
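
A small sketch of that mix in an abstract class (Python's abc module is used here just for illustration; the class names are made up):

from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        """Must be implemented by every concrete subclass."""

    def describe(self) -> str:
        # Already-programmed behavior shared by all subclasses.
        return f"{type(self).__name__} with area {self.area():.2f}"

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.14159 * self.radius ** 2

print(Circle(2.0).describe())   # Circle with area 12.57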

-----------

Might do a post for data structures that I work with less often.  

the-pi-guy

Starting data structure notes.  

Tree traversal, I usually get these ones mixed up.  

Postorder:
callChildren
visit

calls the postorder function on each of the children first, then visits the current node.  

Basically causes the deepest nodes to get visited first.

Preorder:
visit
callChildren

Simply the opposite of the above.  

InOrder:
callLeft
visit
callRight

Only works on binary trees.  
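
A small reference sketch of all three traversals (the Node class and the visit callback are just placeholders):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def postorder(node, visit):
    if node is None:
        return
    postorder(node.left, visit)    # callChildren first...
    postorder(node.right, visit)
    visit(node.value)              # ...then visit

def preorder(node, visit):
    if node is None:
        return
    visit(node.value)              # visit first...
    preorder(node.left, visit)     # ...then callChildren
    preorder(node.right, visit)

def inorder(node, visit):          # binary trees only
    if node is None:
        return
    inorder(node.left, visit)      # callLeft
    visit(node.value)              # visit
    inorder(node.right, visit)     # callRight

tree = Node(2, Node(1), Node(3))
for name, fn in [("pre", preorder), ("in", inorder), ("post", postorder)]:
    out = []
    fn(tree, out.append)
    print(name, out)   # pre [2, 1, 3] / in [1, 2, 3] / post [1, 3, 2]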


-----
This probably won't be useful for anyone else.  I'm going to write these up for myself.  
I'm basically making a set of notes for myself to review.

Object Oriented

Next ones will be much more detailed.  

Up next for data structures:
-Heaps
-Hash tables
-search trees
-sets
-possibly strings (pattern matching and text compression)
-graphs


After data structures I am not sure what I'll do. A few things on my mind.  

-databases
-artificial intelligence
-algorithms
-graphics
-security

A few things also on my mind, but further down:
-operating systems
-compilers
-computer organization


Legend

I'm thinking about combining a neural net with cellular automata. Regular machine learning is great for instant solutions but poor when trying to give it some sort of memory. Cellular automata aren't used for problem solving, but they're great at flowing from state to state.

So what about something that has a little bit of both? A function that has to think before producing an output.

the-pi-guy

I'm thinking about combining a neural net with cellular automata. Regular machine learning is great for instant solutions but poor when trying to give it some sort of memory.
By memory, do you mean that the solution doesn't quickly change?  

That is done in a neural network by decreasing the learning rate, or making it small in the first place.  

Legend

By memory, do you mean that the solution doesn't quickly change?  

That is done in a neural network by decreasing the learning rate, or making it small in the first place.  
I mean an equivalent to RAM. Once a specific neural net is fully trained and is just being used, it is equivalent to a black box function. Input goes in one side and output comes out the other. There is no internal state for that specific neural net.



Information only flows left to right, from one layer to the next. If the input nodes don't change, then the state of the network is static.

Cellular automata however needs to be calculated over and over again with "information" freely flowing.



My idea is to basically merge both of them. Essentially cellular automata with unique weights per pixel. The whole grid is white, then input pixels are made black, and then output pixels are monitored.
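
A very rough sketch of what one update step of that could look like: a grid where every cell has its own weight kernel over its neighbourhood, forced input pixels, and monitored output pixels. The grid size, activation function, and neighbourhood here are all assumptions:

import numpy as np

SIZE = 64
rng = np.random.default_rng(0)

state = np.zeros((SIZE, SIZE))                    # the "white" grid
weights = rng.normal(0, 0.5, (SIZE, SIZE, 3, 3))  # unique 3x3 weights per cell

def step(state, weights, inputs):
    """One tick: every cell takes a weighted sum of its 3x3 neighbourhood."""
    padded = np.pad(state, 1)
    new_state = np.zeros_like(state)
    for y in range(SIZE):
        for x in range(SIZE):
            patch = padded[y:y + 3, x:x + 3]
            new_state[y, x] = np.tanh(np.sum(patch * weights[y, x]))
    for (ix, iy), value in inputs.items():        # force the input pixels
        new_state[iy, ix] = value
    return new_state

# Feed inputs along the bottom row, read outputs from the top row.
inputs = {(x, SIZE - 1): 1.0 for x in range(0, SIZE, 8)}
for _ in range(10):
    state = step(state, weights, inputs)
output = state[0, :]                              # monitored output pixels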

Legend



I set up a quick environment to test the idea out. Bottom left is a visualization of the neural grid. It has 4,096 neurons so it only fills up the bottom of the square. On the right is the race track. It is a super simple track where the AI only needs to turn left. If the car drives off the track (the white ring) then it kills the AI and moves on. If the AI is too slow it is also killed.

Top left are stats showing progress. I'm using natural selection to tune the neural grid since modern neural net approaches wouldn't really work.


At this moment, it's functionally identical to a normal neural network. Programming a neural network to do this with natural selection is borderline trivial. The big challenge for the AI to overcome is that the controls flip after the second lap. Left swaps with right and gas swaps with brake. A neural network with identical inputs would not be able to handle this. That AI would crash into a wall almost instantly even if it was mathematically perfect.

I have yet to test my AI on this so maybe it'll fail just as bad, but that's the goal: a neural brain that is actively thinking and not just reacting.


Current record is 11.8 radians before going off track.

the-pi-guy

I mean an equivalent to RAM. Once a specific neural net is fully trained and is just being used, it is equivalent to a black box function. Input goes in one side and output comes out the other. There is no internal state for that specific neural net.



Information only flows left to right, from one layer to the next. If the input nodes don't change, then the state of the network is static.

Cellular automata however needs to be calculated over and over again with "information" freely flowing.



My idea is to basically merge both of them. Essentially cellular automata with unique weights per pixel. The whole grid is white, then input pixels are made black, and then output pixels are monitored.
I mean I know how a neural network works.  

I'm just confused as to why you couldn't constantly train the network?  

Legend

edit:




Experimenting with a less active brain so that the changes are more visible. A.I. number 17 makes the full loop and dies as the video ends. You can see the input pixels on the bottom, showing the AI's raycast info about the track.

the-pi-guy

So I'm guessing you mean that:
with a neural network, you're training it to simulate a function f(x,y), and if you reach the same point again, the network would have to give the same output, which wouldn't be correct.

I.e., f(x,y) on the first round is different from f(x,y) on the second round, because you need different outputs.  Or in other words, the neural network has to have memory to know which round it is in.

Feedforward neural networks (FNNs) don't support memory.
But there are other types of neural networks that do.

https://en.wikipedia.org/wiki/Recurrent_neural_network

The solution you're putting forward should simulate that though.  
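
A toy illustration of that difference: a feedforward net is a pure function of its inputs, while a recurrent update carries hidden state between calls (the weights are random placeholders, not a trained network):

import numpy as np

rng = np.random.default_rng(1)
W_in, W_h, W_out = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))

def feedforward(x):
    # Same x in, same output out -- no way to know which lap you're on.
    return W_out @ np.tanh(W_in @ x)

def recurrent_step(x, h):
    # Hidden state h persists between calls, so the same x can give a
    # different output on lap one vs. lap three.
    h = np.tanh(W_in @ x + W_h @ h)
    return W_out @ h, h

x = np.array([0.2, -0.5, 0.9])
h = np.zeros(4)
for t in range(3):
    y, h = recurrent_step(x, h)
    print(t, y)          # outputs drift even though x never changes
print(feedforward(x))    # always the same for this x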

Legend

I mean I know how a neural network works.  

I'm just confused as to why you couldn't constantly train the network?  
Just trying to be as clear as possible with what I'm talking about. I know you know how a neural network works haha.

So I'm guessing you mean that:
with a neural network, you're training it to simulate a function f(x,y), and if you reach the same point again, the network would have to give the same output, which wouldn't be correct.

I.e., f(x,y) on the first round is different from f(x,y) on the second round, because you need different outputs.  Or in other words, the neural network has to have memory to know which round it is in.

Feedforward neural networks (FNNs) don't support memory.
But there are other types of neural networks that do.

https://en.wikipedia.org/wiki/Recurrent_neural_network

The solution you're putting forward should simulate that though.  
Yeah, exactly. This is focused on similar things to recurrent neural networks.

Unlike a recurrent neural network though, this is much closer to cellular automata. There are no layers and there is no directed graph. Connections are bidirectional with the whole grid updating in steps.

The goal isn't so much to make a useful AI that solves real world problems, but to make an AI that's random and chaotic with ever evolving inner thoughts.

Legend

A genetic solution for adjusting the weights doesn't work. Unlike a neural network, this is super chaotic and isn't smooth.

A few hundred generations tends to reach something okay, but then progress stops. It's mostly just getting lucky with random mutations instead of actually zeroing in on a local maximum.


It also takes a fair number of steps for the grid to become populated. The system favors AIs that can jump into action instead of those that take a while to wake up.

I'm trying to figure out a way to train the grid like a normal neural net. That way a single brain can live for millions of ticks and develop into a "mature" AI. Not sure how or if I can actually do that though haha.
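
For reference, the kind of mutate-and-select loop being described looks roughly like this; the fitness function, mutation rate, and population size are stand-ins (the real fitness would be radians travelled before going off track):

import numpy as np

rng = np.random.default_rng(2)
POP, GENS, MUT = 32, 200, 0.05

def fitness(weights):
    # Placeholder fitness: the real version would run the driving simulation
    # and return how far the car got around the track.
    return -np.sum((weights - 0.3) ** 2)

population = [rng.normal(0, 1, 64) for _ in range(POP)]
for gen in range(GENS):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: POP // 4]                       # keep the best quarter
    population = list(survivors)
    while len(population) < POP:
        parent = survivors[rng.integers(len(survivors))]
        child = parent + rng.normal(0, MUT, parent.shape)  # random mutation only
        population.append(child)

print("best fitness:", fitness(max(population, key=fitness)))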
