Free Energy Principle

Does anyone have any experience thinking about the ideas in Karl Friston’s Free Energy Principle and how they relate to HTM? Seems like many of the same ideas, especially when it comes to prediction and action.

5 Likes

I watched https://www.youtube.com/watch?v=NIu_dJGyIQI to get a feel for the principle.

Seems as though they’re delineating the relationship (the feedback loop) between prediction and attention; that the brain chooses what to attend to in order to maximize its ability to predict in relation to its currently developed models (identity).
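
To check my own understanding of that loop, here is a minimal toy sketch (entirely my own, not anything from the talk, with made-up numbers): the agent holds one uncertain belief per sensory channel and attends to whichever channel an observation is expected to sharpen its predictions the most.

```python
import numpy as np

# Toy sketch (made-up numbers): one Gaussian belief per sensory channel,
# and each channel's sensor has its own noise level.
belief_var = np.array([1.0, 4.0, 0.25])  # current uncertainty per channel
sensor_var = np.array([0.5, 0.5, 0.5])   # observation noise per channel

# Standard Gaussian update: uncertainty that would remain after attending
# to a channel and incorporating one observation from it.
posterior_var = 1.0 / (1.0 / belief_var + 1.0 / sensor_var)
expected_gain = belief_var - posterior_var

# Attend where a new observation is expected to improve predictions the most.
attend_to = int(np.argmax(expected_gain))
print("attend to channel", attend_to, "expected reduction in uncertainty:", expected_gain)
```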

1 Like

While I haven’t thought about it much myself, it reminded me of this thread, which seems to be touching on a similar possible driving force.

From the perspective of someone who was just introduced to the theory, on the surface it seems fairly self-evident. I could see it being useful when weighing new theories or ideas: how well do those ideas align with the goal of reducing entropy (which ultimately means better predictions)? On the other hand, that already tends to be the goal of new AI theories anyway…
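
To make the "reducing entropy means better predictions" link concrete, here is a toy example (my own, with made-up numbers): a model that assigns higher probability to what actually happens has lower average surprise, and average surprise is exactly what an entropy-style score measures.

```python
import numpy as np

# Toy illustration (made-up data): "better predictions" = lower average surprise.
# Each entry is the probability a model assigned to the outcome that actually occurred.
p_model_a = np.array([0.9, 0.8, 0.9, 0.7, 0.9])  # confident and mostly right
p_model_b = np.array([0.5, 0.5, 0.5, 0.5, 0.5])  # hedges on everything

avg_surprise_a = -np.log(p_model_a).mean()
avg_surprise_b = -np.log(p_model_b).mean()

print(avg_surprise_a, avg_surprise_b)  # model A's lower score = better predictions
```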

2 Likes

At a high level, it feels right, but what is up with this? I took this from the slide deck of a 2016 talk by Karl Friston.

I don’t think this represents current neuroscience ideas. Are each of those blue boxes supposed to be Bayesian models?

I have no idea about the Bayes question, but the localization of functions is way off.

2 Likes

This paper, “The free energy principle for action and perception: A mathematical review”, may be of help for at least coming to grips with the math of FEP:

Also, “The Predictive Mind” by Hohwy offers an account of Friston’s work at more of a layperson’s level; Friston himself corresponded with Hohwy during the writing of the book and provided feedback to the author.
This may be a good place to start if you find Friston’s papers to be a bit overwhelming (like I do).
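
For anyone who wants the one-line version of the math before diving in, the central object in that review is (as I understand it) the variational free energy, which upper-bounds surprise. Writing s for hidden states and o for observations (the paper's own notation differs slightly):

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s\mid o)\right]}_{\ge 0} - \ln p(o)
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s)\right]}_{\text{complexity}}
  - \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o\mid s)\right]}_{\text{accuracy}}
```

Because the KL term is non-negative, F can never fall below the surprise −ln p(o): perception reduces F by making q(s) approximate the true posterior, and action reduces it by changing o so that observations become less surprising.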

And way over here there’s me saying that the brain is a motor-pattern learning and selection machine.
The body has needs and picks the best motor pattern(s) to meet those needs.

Consider this from Patricia Churchland in her book “Neurophilosophy”:
“If you root yourself to the ground, you can afford to be stupid. But if you move, you must have mechanisms for moving, and mechanisms to ensure that the movement is not utterly arbitrary and independent of what is going on outside. Consider a simple protochordate, the sea squirt. The newborn must swim about and feed itself until it finds a suitable niche, at which time it backs in and attaches itself permanently. Once attached, the sea squirt’s mechanisms for movement become excess baggage, and it wisely supplements its diet by feasting on its smartest parts.”

3 Likes

I found this blog article recently that does a good job at explaining things:

1 Like

Free energy? Where have I seen this before?

Points to this paper.
http://rstb.royalsocietypublishing.org/content/370/1668/20140169

2 Likes

Thought I would drop this link here rather than start a new thread. This piece in Wired is mostly a lengthy human-interest profile of Karl Friston, but it does make a passing attempt to describe the Free Energy Principle for a more lay audience. Once you get through the first half of the article (the biography), the author eventually starts discussing Friston’s ideas and the influence he has had on numerous other disciplines. Apparently, Friston’s Free Energy Principle was inspired, at least in part, by the work of Geoffrey Hinton.

Friston came to Queen Square in 1994, and for a few years his office at the FIL sat just a few doors down from the Gatsby Computational Neuroscience Unit. The Gatsby—where researchers study theories of perception and learning in both living and machine systems—was then run by its founder, the cognitive psychologist and computer scientist Geoffrey Hinton. While the FIL was establishing itself as one of the premier labs for neuroimaging, the Gatsby was becoming a training ground for neuroscientists interested in applying mathematical models to the nervous system.

Friston, like many others, became enthralled by Hinton’s “childlike enthusiasm” for the most unchildlike of statistical models, and the two men became friends.

Over time, Hinton convinced Friston that the best way to think of the brain was as a Bayesian probability machine. The idea, which goes back to the 19th century and the work of Hermann von Helmholtz, is that brains compute and perceive in a probabilistic manner, constantly making predictions and adjusting beliefs based on what the senses contribute. According to the most popular modern Bayesian account, the brain is an “inference engine” that seeks to minimize “prediction error.”

In 2001, Hinton left London for the University of Toronto, where he became one of the most important figures in artificial intelligence, laying the groundwork for much of today’s research in deep learning.

Before Hinton left, however, Friston visited his friend at the Gatsby one last time. Hinton described a new technique he’d devised to allow computer programs to emulate human decisionmaking more efficiently—a process for integrating the input of many different probabilistic models, now known in machine learning as a “product of experts.”

The meeting left Friston’s head spinning. Inspired by Hinton’s ideas, and in a spirit of intellectual reciprocity, Friston sent Hinton a set of notes about an idea he had for connecting several seemingly “unrelated anatomical, physiological, and psychophysical attributes of the brain.” Friston published those notes in 2005—the first of many dozens of papers he would go on to write about the free energy principle.
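
The "product of experts" idea mentioned above has a very simple special case that may help with intuition. This is just my own toy sketch, not Hinton's general formulation (which covers arbitrary probabilistic models and needs a training procedure such as contrastive divergence): when every expert is a Gaussian over the same quantity, multiplying their densities gives another Gaussian whose precision is the sum of the experts' precisions and whose mean is the precision-weighted average.

```python
import numpy as np

# Toy Gaussian "product of experts" (my own illustration, made-up numbers):
# each expert reports a mean and variance for the same quantity.
means = np.array([1.0, 3.0, 2.5])
variances = np.array([1.0, 4.0, 0.5])

precisions = 1.0 / variances
combined_var = 1.0 / precisions.sum()                      # precisions add
combined_mean = combined_var * (precisions * means).sum()  # precision-weighted mean

print(combined_mean, combined_var)
# More confident experts (lower variance) pull the combined estimate
# toward their own prediction.
```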

5 Likes

You may also be interested in this recent conversation with Karl Friston:

1 Like

I found this recent talk to be very enlightening; at least the parts that weren’t math felt like something I could grasp.

2 Likes

Sounds smart even if I was only able to understand 5% of the talk…

Mapping those equations onto neural processes is still a mystery for me! :wink:

3 Likes

:point_up:

3 Likes

When I saw that slide you posted, I had the vague impression that he was pointing out, indirectly, that the cortical circuitry is a microcosm of the entire macro circuitry of disparate brain structures.

It’s almost like evolution built the macro brain and then said, “oh, this structure’s pretty versatile, I can simplify it and shrink it down and repeat and repeat and repeat it as a society of intelligent structures on top.”

That’s the level of my understanding; all intuition and all analogy, and zero comprehension of the math. lol

1 Like

Friston’s ideas, including active inference, rest on the belief that agency is completely integrated with intelligence, or, even more strongly, that teleodynamics-like principles are the basis for how information processing is organized in human-like intelligence. Here are some thoughts on why this approach can be fundamentally wrong: Don’t mix Agency with Intelligence

1 Like

My position on agency is almost exactly opposite: Agency, Identity and Knowledge Transfer

1 Like


Posting here rather than starting a new topic.
Just listened to Sean M. Carroll’s Mindscape podcast episode 87, talking with Karl Friston about Brains, Predictions, and Free Energy. Friston posits that the brain uses very simple mechanisms to minimize surprise, and thus arrives at a homeostasis that approximates Bayesian inference.
https://www.preposterousuniverse.com/podcast/2020/03/09/87-karl-friston-on-brains-predictions-and-free-energy/
Starting around minute 56, they are talking about stuff that sounds astoundingly close to Numenta’s work. Especially at 56:55 where Carroll talks about reaching out his hand and how the mismatch between what his eye sees and what it expects allows him to correct the hand’s trajectory. This is just like Hawkins’ coffee mug example.
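
For what it’s worth, that part of the conversation can be caricatured in a few lines. This is purely my own toy sketch (not Friston’s model and not Numenta’s): the "prediction" is where the hand is expected to be, and action keeps reducing the mismatch between that expectation and what vision reports.

```python
import numpy as np

# Toy sketch of correcting a reach by cancelling prediction error
# (made-up numbers; not anyone's actual model).
expected_hand = np.array([1.0, 0.5])  # where the hand is predicted to end up
seen_hand = np.array([0.0, 0.0])      # where vision currently reports the hand
step = 0.3                            # how strongly the error drives movement

for _ in range(10):
    prediction_error = expected_hand - seen_hand      # mismatch: expectation vs. sense
    seen_hand = seen_hand + step * prediction_error   # move so the error shrinks

print(seen_hand)  # approaches the expected position as the error is minimized
```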

4 Likes

Something extremely dangerous that Prof. Karl Friston’s free energy principle does is that it almost totally materializes the “Psyche”…