Neural networks and rule-based AI?

https://bdtechtalks.com/2019/06/05/mit-ibm-hybrid-ai/
As was noted, ReLU activation functions are literal switches.
They switch webs of dot products together into a linear projection from input to output in a neural network.
Since each switch is either on or off, they are ideal predicates for constructing decision trees, inductive logic systems, etc. I would imagine they are quite information-rich predicates.
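To make the "switch" picture concrete, here is a minimal numpy sketch (the layer sizes and random weights are my own arbitrary illustration, not anything from the linked article): for a given input, the pattern of active ReLUs is a binary vector, and with that pattern held fixed the whole network collapses to a single linear projection from input to output. The binary pattern itself is the set of on/off predicates mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer ReLU network with arbitrary random weights (illustrative only).
W1 = rng.normal(size=(8, 4))   # hidden layer: 4 inputs -> 8 units
W2 = rng.normal(size=(3, 8))   # output layer: 8 hidden -> 3 outputs

def forward(x):
    pre = W1 @ x
    mask = (pre > 0).astype(float)   # the "switch" settings: 1 = on, 0 = off
    hidden = mask * pre              # ReLU = pass-through when on, zero when off
    return W2 @ hidden, mask

x = rng.normal(size=4)
y, mask = forward(x)

# With the switches frozen at this pattern, the whole net is one linear map:
# y = W2 @ diag(mask) @ W1 @ x
W_effective = W2 @ np.diag(mask) @ W1
assert np.allclose(y, W_effective @ x)

# The mask itself is a vector of binary predicates over the input,
# usable as features for a decision tree, rule learner, etc.
print("switch pattern:", mask.astype(int))
```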


Rule-based AI was tried by a host of players during the “Fifth Generation” period, and most seemed to follow the same path: some promising early efforts followed by a distressing grind to a halt. As you expand the scope of the project toward the unbounded world, things get progressively harder. It seems that combinatorial explosions blew up every project.


During the years before the mid-1970s, expectations of what expert systems could accomplish in many fields tended to be extremely optimistic. At the beginning of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems. People’s expectations of what computers could do were frequently too idealistic. This situation changed radically after Richard M. Karp published his breakthrough paper, “Reducibility Among Combinatorial Problems,” in the early 1970s.


Don’t throw the baby out with the bathwater!
https://phys.org/news/2016-06-video-games-artificial-intelligence-tactical.html

And as I mentioned elsewhere, there are really powerful symbolic systems you can construct, like ‘if except if’ trees that produce generalized outputs before more specialized results and default to the generalized response when further information is lacking.
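One way to read the ‘if except if’ idea, sketched as plain Python (the `classify_locomotion` function and the bird/penguin example are my own illustration, not from the linked article): a general default answer stands unless a more specialized exception has enough information to override it.

```python
# Sketch of an "if except if" structure: a general default that more
# specific exceptions can override when extra information is available.

def classify_locomotion(animal):
    # Most general answer: used whenever nothing more specific applies.
    answer = "walks"

    if animal.get("class") == "bird":
        answer = "flies"                 # more specialized rule
        if animal.get("species") == "penguin":
            answer = "swims"             # exception to the exception

    # With missing information we simply stop at the most general
    # conclusion reached so far, instead of failing.
    return answer

print(classify_locomotion({"class": "bird"}))                         # flies
print(classify_locomotion({"class": "bird", "species": "penguin"}))   # swims
print(classify_locomotion({}))                                        # walks (default)
```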
Why not look at everything, or at least the basics of everything?
What I find is that the “experts” often race ahead before working through and correctly understanding the basic (mathematical) elements they intend to use, only to construct skyscrapers on foundations of straw.

That’s the spirit - be excited about success in a limited domain like a game.

Then try to apply that success to a larger domain like a domestic robot, and experience the agony of combinatorial explosions!

What do they say about those that fail to learn the lessons of history?

I used to be excited about expert systems until I learned about the problems of applying them in any real-world setting; they fold like tissue paper. No common sense, and very brittle failures when something unexpected presents itself. One of the Achilles’ heels of rule-based systems is that the programmer has to provide rules for EVERY condition a priori. Yes, you can provide fail-safe default behavior, but when that turns out to be the wrong thing (like defaulting to freezing and blocking a hallway in an unexpected fire) you discover that you could not foresee every situation in your logic tree.
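As a toy illustration of that failure mode (my own sketch, not any particular expert system): a rule table with a “freeze in place” default behaves sensibly on every condition the programmer anticipated and does exactly the wrong thing on the one they didn’t.

```python
# Toy rule table for a hallway robot: every anticipated condition gets a rule,
# and anything unanticipated falls through to a "safe" default.
RULES = {
    "obstacle_ahead": "steer_around",
    "person_ahead": "stop_and_wait",
    "low_battery": "return_to_dock",
}
DEFAULT_ACTION = "freeze_in_place"   # fail-safe... until it isn't

def decide(condition):
    return RULES.get(condition, DEFAULT_ACTION)

print(decide("person_ahead"))   # stop_and_wait  (anticipated, fine)
print(decide("building_fire"))  # freeze_in_place (unanticipated: now the robot
                                # is blocking the only escape route)
```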

Who expected your prototype lethal battle robot to go out the fire door and end up in a densely populated neighborhood? That was not in the test plan. Mayhem ensues.


@SeanOConnor I don’t think you’re going to find much interest in investigating GOFAI strategies around here. Of course building a complete intelligent system is going to take a combination of techniques, but the hard part is the brain stuff we (humans) are still trying to understand and model. Like it says in Read this first:

They’re all Constraint Satisfaction Problem solvers anyway (e.g. ANN, HTM, symbolic AI); they just represent and refine constraints in different ways.
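To make that claim a bit more concrete, here is a toy sketch of my own (nothing HTM-specific, and the constraints are arbitrary): the same pair of constraints solved once by discrete search over symbolic assignments and once by gradient refinement of a continuous state. Both end at a satisfying assignment; they just represent and refine the constraints differently.

```python
import itertools
import numpy as np

# Constraints: x + y == 6 and x - y == 2.

# 1. Symbolic-style: discrete search over candidate assignments.
discrete = next(
    (x, y)
    for x, y in itertools.product(range(10), repeat=2)
    if x + y == 6 and x - y == 2
)

# 2. ANN-style: encode the constraints as a penalty
#    (x + y - 6)^2 + (x - y - 2)^2 and refine a continuous
#    state by gradient descent.
state = np.array([0.0, 0.0])
for _ in range(2000):
    x, y = state
    grad = np.array([
        2 * (x + y - 6) + 2 * (x - y - 2),   # d(penalty)/dx
        2 * (x + y - 6) - 2 * (x - y - 2),   # d(penalty)/dy
    ])
    state -= 0.05 * grad

print(discrete)           # (4, 2)
print(state.round(3))     # approximately [4. 2.]
```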

I sort of thought that HTM was a coincidence detector. How does that fit with what you are offering?

If one pauses these algorithms at a discrete state, one sees a combination of values. These values “model” the inputs. The combination of these values is a solution to a CSP whose constraints are the variables represented by those values at that state.

As a side note, we somehow believe that there is a super state that “generalizes” a set of inputs. My intuition, at least, is that there is likely none, because these states’ values can be mutually exclusive, just as remembering one thing can make you forget another. Therefore, I’m very interested in finding the set of states that, formed into an ensemble, would provide a more performant model.
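A rough sketch of what an “ensemble of states” might look like, under the assumption (mine, not the thread’s) that a “state” is simply one saved set of model parameters: fit several small models on different bootstrap samples of the same data, then average their predictions and compare against any single one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1-D regression data (illustrative only).
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

# Each "state" here is just the parameters of one small linear model,
# fit on its own bootstrap sample of the data.
states = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    A = np.hstack([X[idx], np.ones((len(idx), 1))])   # slope + intercept
    w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    states.append(w)

def predict(w, X):
    return X[:, 0] * w[0] + w[1]

X_test = rng.uniform(-1, 1, size=(50, 1))
y_test = 3.0 * X_test[:, 0]                 # noiseless targets for comparison

single_err = np.mean((predict(states[0], X_test) - y_test) ** 2)
ensemble_pred = np.mean([predict(w, X_test) for w in states], axis=0)
ensemble_err = np.mean((ensemble_pred - y_test) ** 2)

print(f"single state MSE: {single_err:.4f}")
print(f"ensemble MSE:     {ensemble_err:.4f}")
```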