Programming Thread

Started by the-pi-guy, Mar 13, 2016, 10:39 PM


Legend

I tried my hand at some neural nets from scratch, but I had some issues. It felt like I was brute-forcing results instead of making predictable progress.




With just an input and output layer, I can get to this level of quality super quick. Sometimes it makes it a bit farther but never down to the next loop.



With a middle layer it reaches the same spot, but it takes a bit longer to train. This is after around 50,000 epochs.



Two middle layers don't improve the final result; they just take longer to get there. This is after 150,000 epochs.
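
For anyone who wants to poke at it, this is roughly the kind of setup I'm describing; a stripped-down numpy sketch (not my actual code, and the layer sizes here are just placeholders) where you can vary the number of middle layers:

import numpy as np

def init_net(layer_sizes, rng):
    # One weight matrix and bias vector per pair of adjacent layers.
    return [(rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(net, x):
    # tanh on every layer; the final layer's output drives the car controls.
    for w, b in net:
        x = np.tanh(x @ w + b)
    return x

rng = np.random.default_rng(0)
no_hidden  = init_net([5, 2], rng)        # input -> output only
one_hidden = init_net([5, 8, 2], rng)     # one middle layer
two_hidden = init_net([5, 8, 8, 2], rng)  # two middle layers

controls = forward(one_hidden, np.array([1.0, 0.8, 0.5, 0.8, 1.0]))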


I'm guessing I'm having serious problems with overfitting. It easily finds a method that works for the first swerve, but it can't modify itself without losing that. Maybe I need to use a less naive reinforcement method, or maybe I need to add more starting positions to the data.

It's fun seeing how much it loves riding the edges, though. The reward function is based on distance travelled, not distance along the path.
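
By distance travelled I mean the length of the path the car actually drives, something like this (toy sketch, the position logging is made up):

import numpy as np

def distance_travelled(positions):
    # Total path length of the trajectory: sum of step-to-step distances.
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

# A car hugging the track edge still racks up reward this way,
# which is why it's happy to ride edges instead of following the path.
positions = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.0), (3.0, 0.3)]
reward = distance_travelled(positions)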

Legend

I fixed it. The problem was that my input and output generalized poorly: it couldn't learn from one corner and then directly apply that to the next. I wanted it to learn how to interpret the input and output data itself, but that was asking too much for how simple the net is.

All I needed to do was rotate the velocity output to match the car's heading, and now it loops the track with ease.
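
The fix is basically just a 2D rotation of the net's output from the car's local frame into world space, roughly like this (simplified sketch; the axis convention is my guess):

import numpy as np

def local_to_world(v_local, heading):
    # Rotate a velocity expressed in the car's frame (forward, left)
    # into world space using the car's heading angle in radians.
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s],
                    [s,  c]])
    return rot @ np.asarray(v_local, dtype=float)

# The net only ever has to learn "go forward, steer a little left";
# the same output now means the same thing at every corner.
world_v = local_to_world([1.0, 0.2], heading=np.pi / 2)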

Legend

Bet yall haven't seen a neural net that looks like this before.

It doesn't do anything impressive, but it uses the cellular automata structure I've talked about before. Every node is connected to the 8 or fewer nodes around it, and all of them are updated simultaneously, not in series. The green nodes are controlled by sensors and the blue nodes are sampled as output.
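
If that's hard to picture, the update rule is basically the following (toy sketch on a plain rectangular grid, not the exact shape in the picture):

import numpy as np

def step(state, weights, input_mask, input_values):
    # Every cell looks at its 3x3 neighbourhood (itself plus up to 8
    # neighbours, zeros off the edge) and computes its new value from the
    # *old* grid, so the whole grid updates simultaneously, not in series.
    h, w = state.shape
    padded = np.pad(state, 1)
    new_state = np.empty_like(state)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]
            new_state[y, x] = np.tanh(np.sum(patch * weights[y, x]))
    # Sensor-driven (green) cells get overwritten with the sensor readings.
    new_state[input_mask] = input_values
    return new_state

rng = np.random.default_rng(0)
state   = np.zeros((6, 10))
weights = rng.standard_normal((6, 10, 3, 3)) * 0.3   # one 3x3 kernel per cell
input_mask = np.zeros((6, 10), dtype=bool)
input_mask[0, :5] = True                             # stand-in green cells
for _ in range(20):                                  # let information propagate
    state = step(state, weights, input_mask, input_values=np.ones(5))
steering = state[-1, -1]                             # stand-in blue output cell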



The structure allows the AI to have a memory and to iterate as part of problem solving. On multiple occasions when I had the reward system wrong, it was able to come to a complete stop in the road and then resume "driving." It is not given any velocity info; the 5 inputs are just 5 ray distances extending forward.
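
The rays themselves are nothing fancy; something along these lines (sketch, where on_track is a stand-in for the real track check):

import numpy as np

def ray_distances(pos, heading, on_track, n_rays=5, spread=np.pi / 2,
                  max_dist=20.0, step=0.1):
    # March each ray forward from the car until it leaves the track and
    # report how far it got. No velocity info, just these 5 numbers.
    angles = heading + np.linspace(-spread / 2, spread / 2, n_rays)
    dists = []
    for a in angles:
        direction = np.array([np.cos(a), np.sin(a)])
        d = 0.0
        while d < max_dist and on_track(pos + d * direction):
            d += step
        dists.append(d)
    return np.array(dists)

# Toy corridor track just to make this runnable.
corridor = lambda p: abs(p[1]) < 3.0
sensors = ray_distances(np.array([0.0, 0.0]), heading=0.0, on_track=corridor)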

Of course, the big negative of this structure is that information has to physically travel and overlap other information. The two green bars on the left have to send information across to the blue bar on the right, which controls steering. I purposely designed the shape of this network to show this isn't a deal breaker, but the green cells could be placed on the bottom and the blue cells on the top to improve it. A regular neural network is much nicer because all inputs have equal potential within the system. The other massive limitation of this setup is that traditional backpropagation is impossible.
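
With backprop off the table, the weights have to be tuned with some black-box search against the reward instead. The simplest version of that idea, just to illustrate (random-perturbation hill climbing on the whole weight grid, not necessarily the method I use), looks like:

import numpy as np

def hill_climb(evaluate, shape, iters=200, sigma=0.05, seed=0):
    # Perturb all weights with Gaussian noise and keep the change
    # whenever the episode reward improves. No gradients needed.
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal(shape) * 0.3
    best = evaluate(weights)
    for _ in range(iters):
        candidate = weights + rng.standard_normal(shape) * sigma
        score = evaluate(candidate)
        if score > best:
            weights, best = candidate, score
    return weights, best

# evaluate() would run the car for one episode with the given CA weights and
# return distance travelled; here a dummy stand-in just to make this runnable.
dummy_eval = lambda w: -float(np.sum(w ** 2))
w, score = hill_climb(dummy_eval, shape=(6, 10, 3, 3))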

