My Recursive Self Improvement/Neuroevolution Attempt

I got to thinking the other day: what's the simplest system I can build that can improve itself? Confident I wasn't going to set off the Singularity using IDLE, I downloaded PyBrain, the easiest-to-install neural net library I could find, and set to work.

Basically, I wanted to use neural networks to suggest modifications to neural networks. I defined each individual in my population as a list of layer sizes, and each individual could take one of these lists as its input. The output was supposed to be an "intelligence rating", describing how "smart" a network with the given layer sizes was supposed to be. The intelligence rating was a bit circular: the first generation was rated randomly, and later I would train each network on the intelligence ratings of the previous generation's population, and then assign it an intelligence based on how low the final training error was (with all the problems that training set error entails).
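The rating loop described above can be sketched roughly like this. Everything here is illustrative: the names are made up, and a plain linear regressor stands in for a PyBrain network built from the layer-size list, so this shows the train-on-the-previous-generation's-ratings idea rather than any real architecture effect:

```python
import random

MAX_LAYERS = 4  # assumed cap on network depth for a fixed-size encoding

def random_individual():
    # an individual is just a list of layer sizes, e.g. [8, 4, 2]
    return [random.randint(1, 10) for _ in range(random.randint(1, MAX_LAYERS))]

def encode(ind):
    # zero-pad to a fixed length so every network sees the same input size
    return ind + [0] * (MAX_LAYERS - len(ind))

def train_and_rate(ind, prev_pairs, epochs=200, lr=0.001):
    """Train a model on the previous generation's (encoding, rating) pairs,
    then rate `ind` by how low the final training error is.  A linear model
    trained by SGD stands in for the PyBrain net `ind` would describe."""
    w = [0.0] * MAX_LAYERS
    b = 0.0
    for _ in range(epochs):
        for x, y in prev_pairs:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    mse = sum((sum(wi * xi for wi, xi in zip(w, x)) + b - y) ** 2
              for x, y in prev_pairs) / len(prev_pairs)
    return 1.0 / (1.0 + mse)  # lower training error -> higher "intelligence"

# generation 0 is rated randomly, exactly as in the post
prev_pairs = [(encode(random_individual()), random.random()) for _ in range(6)]
fitness = train_and_rate(random_individual(), prev_pairs)
```

The circularity the post mentions is visible here: the "ratings" being fit are themselves arbitrary in generation 0, so a high fitness only means the training data was easy to memorize.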

The networks with the highest intelligence ratings got to reproduce (truncation selection), but the mutation wasn't completely random. Instead, at each iteration, I would feed each neural network randomly perturbed versions of itself, and then pick the perturbations the "parent" rated highest as the "children".
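That selection-plus-guided-mutation step might look something like the sketch below. The function names (`rate_fn`, `parent_score_fn`, `perturb`) are hypothetical interfaces, not the post's actual code:

```python
import random

def perturb(ind):
    # bump one layer size up or down by 1 (an assumed mutation operator)
    child = ind[:]
    i = random.randrange(len(child))
    child[i] = max(1, child[i] + random.choice([-1, 1]))
    return child

def next_generation(population, rate_fn, parent_score_fn,
                    top_frac=0.5, n_candidates=4):
    """Truncation selection, then parent-guided mutation: each survivor
    proposes several perturbed copies of itself and keeps the copy that
    its own network (via `parent_score_fn`) rates highest."""
    ranked = sorted(population, key=rate_fn, reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * top_frac))]
    children = []
    for parent in survivors:
        candidates = [perturb(parent) for _ in range(n_candidates)]
        # the parent's own network scores its perturbed copies
        best = max(candidates, key=lambda c: parent_score_fn(parent, c))
        children.append(best)
    return survivors + children
```

One property of this scheme worth noting: because the parent pre-screens its children, the mutation step inherits whatever biases the parent's (possibly meaningless) ratings contain, which may be part of why selection rather than mutation seems to drive the result.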

In practice I don't think it really worked at all; I do see a reasonable neural network shape coming out at the end (more neurons in the low layers, fewer in the high layers), but that's probably all attributable to the selection component. I also don't think the task I set was really well-formed: at the beginning I'm just making the networks memorize random network/intelligence pairs, with no underlying rule to learn, and once they get actual performance data to work on, the differences in performance between individuals are so slight (and my training mechanism so haphazard) that I don't know whether they pick up on anything. The fitness values also seem to swing from very high overall in one generation to very low in the next; I'm not really sure what's going on there.

Anyway, if you want to look at my code and diagnose its many flaws, it's over here on Pastebin under a don't-take-over-the-world license.

Has anyone else managed to evolve a neural network that operates on neural networks? Did yours work better?

Posted
8 years ago