Neurogenesis and serialization

I’ve decided I’m going to keep a neural network up long term and just modify it to accept new data instead of creating a new network whenever I need something new. That way, it might eventually be able to recognize patterns between different tasks.

The requirements are:

  • allowing neurogenesis and neurodegeneration (I don’t think current HTM implementations support changing the size of a layer)
  • saving to a common file type
  • the file type should still be readable even when partially corrupted. (It doesn’t strictly have to, but I really don’t want a network I spent years building to break because of a corrupted hard drive, especially since it’s too large to keep many backups of.)
  • the file should allow quick reading of large amounts of data, without an index

Allowing for neurogenesis and neurodegeneration is easier than I thought it’d be. I’m just using a random selection from 1 to a million for the neuron names, and allowing deletion/insertion from a sorted list (though I’m not sure whether a hash-map type or a tree type would be better). There are probably other ways to do it.
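
Here’s a rough sketch of what I mean (the ID range, the dict of per-neuron properties, and the class name are just placeholders I made up for illustration):

```python
import bisect
import random

class NeuronIndex:
    """Toy sketch: neurons get random integer names and live in a sorted
    list, so growth (neurogenesis) and removal (neurodegeneration) are
    just inserts and deletes."""

    def __init__(self, id_range=1_000_000):
        self.id_range = id_range
        self.ids = []    # kept sorted, so binary search works
        self.props = {}  # id -> per-neuron data (layer, segments, ...)

    def grow(self, layer):
        """Add a neuron with a fresh random name; retry on collision."""
        while True:
            nid = random.randint(1, self.id_range)
            pos = bisect.bisect_left(self.ids, nid)
            if pos == len(self.ids) or self.ids[pos] != nid:
                break
        self.ids.insert(pos, nid)  # O(n) insert into a Python list
        self.props[nid] = {"layer": layer}
        return nid

    def degenerate(self, nid):
        """Remove a neuron by name, if it exists."""
        pos = bisect.bisect_left(self.ids, nid)
        if pos < len(self.ids) and self.ids[pos] == nid:
            del self.ids[pos]
            del self.props[nid]
```

The dict gives the hash-map style O(1) lookup by name, while the sorted list gives the tree-style ordered iteration I’d want when writing the file; which one matters more probably depends on how often I look up single neurons versus walk the whole network in order.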

What I’m wondering about is the file type. I’m thinking of using two zeros the size of the largest data type to denote the beginning of a neuron, then storing a sorted list of neurons, each with info about which layer it’s on, and occasionally storing properties of that layer. However, before I create a whole new file type, are there any other file types for storing neural networks in a corruption-resistant way?
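
To make the marker idea concrete, here’s a minimal sketch, assuming an 8-byte all-zero marker (two 32-bit zeros, treating 64 bits as the largest data type) and a made-up record layout of neuron ID, layer ID, and one float property:

```python
import struct

MARKER = b"\x00" * 8             # two 32-bit zeros marking a neuron record
RECORD = struct.Struct("<IIf")   # placeholder fields: neuron id, layer id, a property

def write_neuron(f, neuron_id, layer_id, value):
    f.write(MARKER)
    f.write(RECORD.pack(neuron_id, layer_id, value))

def read_neurons(path):
    """Scan for markers; if a record is truncated or damaged, resync at
    the next marker instead of aborting the whole load. (A real format
    would also need a checksum or escaping so payload bytes can't mimic
    the marker.)"""
    neurons = []
    with open(path, "rb") as f:
        data = f.read()
    pos = data.find(MARKER)
    while pos != -1:
        start = pos + len(MARKER)
        chunk = data[start:start + RECORD.size]
        if len(chunk) == RECORD.size:
            neurons.append(RECORD.unpack(chunk))
        pos = data.find(MARKER, start)  # resync at the next marker
    return neurons
```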

I found something called FANN, and I think there’s a serialization option in HTM.Java, but I’m not sure FANN meets the requirement of being corruption-resistant, and I’m not sure HTM.Java meets the requirement of being a common file type usable in multiple languages.

Hi @SimLeek! Welcome!

I’m trying to get a sense of what you’re trying to accomplish, and I’m wondering whether the process you are describing is applicable to HTM technology. One thing to note: HTM.Java serializes its state, but not the data, so that a network can be resumed from any point in its processing, starting from the saved state. The data, however, is an application-centric item that the developer using it can save in any way they see fit.

If by FANN you mean this, then the network’s saved state would have no correlation to a FANN network, because the two formats, algorithms, and structures are vastly different. If you have a way to combine the output of a FANN network and an HTM in an application of your own making, then you would have to integrate both outputs in whatever way follows your design.

I’m sorry if my comments aren’t very helpful; we would need more information or more specific inquiries to be of any more help.

Anyway, from what I can gather, it sounds like an interesting project! :wink:

Cheers
David

Also, as a side note - HTM.Java is undergoing some major updates, and I would recommend using the newest version, to be released probably toward the end of next week.

Also, as another side note :stuck_out_tongue: Synaptogenesis and Synaptic Elimination (pruning) are not only a feature of HTM, but a cornerstone of how it functions. Whereas classic neural networks adjust weights between statically connected “neurons”, HTM neurons dynamically form and cull connections (dendritic segments with synapses) to and from each other in a process known as synaptogenesis, just as in neurobiology.
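
To make that concrete, here is a toy sketch (my own illustration, not HTM.Java’s actual API or parameter values) of a dendritic segment growing synapses to active cells and pruning the ones whose permanence decays away:

```python
import random

class Segment:
    """Toy dendritic segment: synapses are (presynaptic cell id -> permanence)."""
    CONNECTED = 0.5      # permanence above this counts as connected
    PRUNE_BELOW = 0.05   # synapses below this are removed entirely

    def __init__(self):
        self.synapses = {}

    def learn(self, active_cells, inc=0.1, dec=0.05, max_new=5):
        # Reinforce synapses to active cells, weaken the rest.
        for cell in list(self.synapses):
            if cell in active_cells:
                self.synapses[cell] = min(1.0, self.synapses[cell] + inc)
            else:
                self.synapses[cell] -= dec
                if self.synapses[cell] < self.PRUNE_BELOW:
                    del self.synapses[cell]          # synaptic elimination
        # Grow new synapses to a few active cells we aren't yet connected to.
        candidates = [c for c in active_cells if c not in self.synapses]
        for cell in random.sample(candidates, min(max_new, len(candidates))):
            self.synapses[cell] = 0.21               # synaptogenesis

    def overlap(self, active_cells):
        """Count connected synapses whose presynaptic cell is currently active."""
        return sum(1 for c, p in self.synapses.items()
                   if p >= self.CONNECTED and c in active_cells)
```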

Hello cogmission!

Sorry if I was a little confusing. Really, what I want to do is make a corruption-resistant serialization of HTM neural networks. That way, if there’s a copy error when copying the neural network files between computers, the file is still usable.

Also, I’d rather load up a file containing the data describing the neural network, as well as its state, instead of keeping that in the application. Then I can use it as a single file format shared by different systems (e.g., CPU vs. GPU vs. FPGA, different languages for access to different libraries and platforms, or different networks running on the same changed codebase).
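
For instance, one way I’m thinking of getting that copy-error tolerance (just a sketch, not a final format) is to prefix every record with its length and a CRC32, so a damaged record can be detected and skipped while the rest of the file still loads:

```python
import struct
import zlib

HEADER = struct.Struct("<II")    # payload length, CRC32 of payload

def write_record(f, payload: bytes):
    f.write(HEADER.pack(len(payload), zlib.crc32(payload)))
    f.write(payload)

def read_records(f):
    """Read records, silently dropping any whose checksum doesn't match."""
    records = []
    while True:
        head = f.read(HEADER.size)
        if len(head) < HEADER.size:
            break
        length, crc = HEADER.unpack(head)
        payload = f.read(length)
        if len(payload) == length and zlib.crc32(payload) == crc:
            records.append(payload)
        # else: record is corrupt; a marker-based framing like the one
        # I described above would let the reader resync at the next record.
    return records
```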

The only reason I mentioned FANN was that it was the first thing I could find on serializing neural networks. Also, handling neurogenesis is required if I want to change the neural network between runs while keeping the original state/network (like adding a video feed input to a system that previously only handled audio).

Anyway, it looks like I’m going to have to make this file type. Should be fun.

Thanks!
Josh
