When I look at what deep neural networks do

When I look at what deep neural networks do, what I see at each layer is a linear mapping followed by a pull to one of the attractor states. That gives decision regions based on the boundaries between the attractor states. It also seems to me there are problems with lots of unwanted boundaries that are hard to get rid of.
Associative memory alone cannot replace what deep networks do in terms of conditional decision regions, where a vector falling in one decision region in a given layer leads to a conditional choice between some attractor states in the next layer.
It could be, though, that if you gave associative memory a helping hand it could find conditional decision regions robustly and efficiently. That would be very shocking for the deep learning community. One tool to do that might be agreeing associative memories: if two (or more) memory systems agree and are wrong, do one thing; if they agree and are right, do something else; if they disagree, do something else again. It should be possible to create conditional decision structures based on that.
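A minimal sketch of the agreeing-memories idea, assuming each associative memory is just a nearest-prototype lookup (the class, the routing function, and the three labels are my illustrative assumptions, not anything specified above):

```python
import numpy as np

class AssociativeMemory:
    """Toy associative memory: recall is a pull to the nearest stored
    prototype, standing in for a basin of attraction."""
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(value)

    def recall(self, query):
        if not self.keys:
            return None
        dists = [np.linalg.norm(np.asarray(query, dtype=float) - k) for k in self.keys]
        return self.values[int(np.argmin(dists))]

def agreement_route(mem_a, mem_b, query, target=None):
    """The three-way split described above: agree-and-right, agree-but-wrong,
    and disagree (treated as unknown)."""
    a, b = mem_a.recall(query), mem_b.recall(query)
    if a != b:
        return "unknown"   # disagreement: hand off to some other mechanism
    if target is not None and a != target:
        return "wrong"     # confident but incorrect: train a conditional override
    return "known"         # agreement (and correct, when a target is supplied)
```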


I suppose then with an 'agreeing' associative memory you can split its responses into known, (both equal but) wrong, and unknown. For wrong responses you create an additional memory system to learn a conditional alternative. You know to use the conditional rather than the original because it agrees too (self-signaling).
For unknown responses you start again with a new unconditional associative memory to learn less common things than the first.

You could end up with a forest of rooted trees.
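Sketching how that forest might be organized, layered on the toy memories above (the node layout and the override rule are assumptions about one way it could work, not a definitive design):

```python
class MemoryNode:
    """One rooted tree: a pair of agreeing memories plus an optional
    conditional child that learns the exceptions they get wrong."""
    def __init__(self):
        self.mem_a = AssociativeMemory()
        self.mem_b = AssociativeMemory()
        self.conditional = None  # child MemoryNode for learned exceptions

def recall_forest(roots, query):
    """Try each root in turn; a conditional child that also agrees with
    itself overrides its parent (the self-signaling described above)."""
    for node in roots:
        a, b = node.mem_a.recall(query), node.mem_b.recall(query)
        if a is not None and a == b:
            if node.conditional is not None:
                override = recall_forest([node.conditional], query)
                if override is not None:
                    return override
            return a
    return None  # unknown everywhere: grow a new unconditional root here
```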
Such matters have a lot to do with self-organizing maps (SOMs).
Work has been done before with SOMs, but not so much on associative-memory-based SOMs, mainly because notions about extreme learning machines weren't current at that time and only poor-quality Hopfield associative memory was available.
It's time to revisit these things and have a look at the literature.


This might interest you:

This made me think about the works of William H. Calvin. He did some pioneering work on a hexagonal organization of neural information and addressed some of the exact situations you describe. If you note the similarity to the current work on grids, his work becomes very relevant to this discussion.

COMPRESSING THE CEREBRAL CODE - William H. Calvin
http://williamcalvin.com/socns94.html
http://williamcalvin.com/bk9/bk9ch6.htm
The entire book:


Thanks for the links. If deep neural networks are all about compounded decision regions (defined by basins of attraction), and nothing more, then there might be more efficient ways of doing things.

The next project I am trying is to let evolution very directly specify the decision regions, their size and location, and then combine that with what I call if-except-if decision trees.
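The post doesn't define if-except-if decision trees; one plausible reading is a decision-list structure where each later rule carves an exception out of an earlier, more general rule. A toy sketch under that assumption, flavored as next-character prediction:

```python
def if_except_if(rules, x, default=None):
    """Evaluate rules from most recent to oldest, so each later rule acts
    as an exception carved out of an earlier, more general rule."""
    for condition, response in reversed(rules):
        if condition(x):
            return response
    return default

# Usage: predict the next character after a fragment of text.
rules = [
    (lambda s: s.endswith("q"),    "u"),  # general rule: 'q' is followed by 'u'
    (lambda s: s.endswith("Iraq"), " "),  # except if the word is 'Iraq'
]
print(if_except_if(rules, "a line about Iraq"))  # -> ' '
print(if_except_if(rules, "an antiq"))           # -> 'u'
```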

I tried those special decision trees on text before and the results were quite impressive, with semi-readable text via prediction as output, always with correct spelling and a limited amount of sentence structure. However, higher-order reasoning about grammar etc. was not possible because there was no search mechanism to identify and link together concepts. Evolvable decision regions should allow that to happen, but might not be terribly efficient. Anyway, I'll try.

I split this from another topic and I don't know what to title the thread, so I just used @Sean_O_Connor's first sentence. Anyone may suggest a better title (or edit the title if you have permission).

I always wonder what it is that people expect to happen with word-generative networks without some “concept anchors” to drive the sentence construction.

Learning pairs of words leads to autocorrect at best.
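To make that concrete, a word-pair (bigram) model fits in a few lines, and all it can ever do is parrot the most frequent next word, with nothing steering it (a minimal sketch with made-up training text):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """The entire 'model': a table counting which word follows which."""
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Autocomplete-style prediction: the most frequent follower, no more."""
    return follows[word].most_common(1)[0][0] if follows[word] else None

follows = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(follows, "the"))  # -> 'cat', regardless of context
```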

There is a type of neural defect called fluent aphasia where the poor victim spouts off streams of gibberish that “sound” like real speech. The grammar sub-system is working but it has nothing to work with.

Now if you could create a chatbot using this general approach:
Use Parsey McParseface to pick out the elements of speech, post the found words onto a “blackboard”, and look for keywords.

Make a list of concept structures to manipulate these elements into a semi-intelligent-sounding conversation. Access to these structures would be based on the old frames concept, combined with some powerful data mining to build your frames.

There would have to be some overarching goals to select and combine access to the frame structure.
Add some grammar constructions primed with the output of this frame parsing and it might be a fun toy.
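A rough sketch of the blackboard step, substituting spaCy for Parsey McParseface (whose interface isn't shown here) and using a plain dict as the blackboard; the frame format and trigger keywords are invented for illustration:

```python
import spacy  # stand-in parser; the post proposes Parsey McParseface

nlp = spacy.load("en_core_web_sm")

# Invented frame structures: trigger keywords plus a canned response.
FRAMES = {
    "weather": {"triggers": {"rain", "sun", "weather"},
                "response": "Tell me more about the weather."},
}

def post_to_blackboard(utterance):
    """Parse the utterance and post the parts of speech onto a blackboard."""
    doc = nlp(utterance)
    blackboard = {"nouns": [], "verbs": [], "keywords": set()}
    for token in doc:
        if token.pos_ == "NOUN":
            blackboard["nouns"].append(token.lemma_)
        elif token.pos_ == "VERB":
            blackboard["verbs"].append(token.lemma_)
        blackboard["keywords"].add(token.lemma_.lower())
    return blackboard

def match_frame(blackboard):
    """Pick the first frame whose triggers appear among the keywords."""
    for name, frame in FRAMES.items():
        if frame["triggers"] & blackboard["keywords"]:
            return name, frame["response"]
    return None, None
```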

Furby 3.0… (or 4.0? I forget how many variations there have been)


Might end up being an expensive toy.


This chip will probably be less than $5.
http://www.tomshardware.com/news/rockchip-rk3399pro-ai-chip,36270.html
