A conceptual proposal similar to Reference Frames: The Abstractive Thinking Model (ATM)

Hi everyone, my name is Jonathan Monclare.

I am an AI enthusiast coming from a non-traditional background.

Since I lack professional technical skills and engineering knowledge, I attempted to approach this from a different angle.

Through a philosophical perspective, I have analyzed the limitations of current LLMs and summarized a proposal for a new conceptual architecture.

You can read the full proposal here:
https://github.com/Jonathan-Monclare/Abstractive-Thinking-Model-ATM-/tree/main

I welcome any advice or feedback in the comments. Thank you!

I quickly glossed over this. I see nothing relevant to HTM in here.
Regarding the “Abstractor”: you need to plug human brains in to teach this thing “Theme”? And you’re measuring stress, heart rate, etc.? It’s really odd that plugging humans in is part of your training process. It might be a sign you’ve gone wrong somewhere.

Thank you for the reply.

Regarding the forum:
I apologize for the misunderstanding. I assumed “HTM” was simply a general community name for AI discussions. I was not aware this forum was dedicated strictly to Hierarchical Temporal Memory theory.

Regarding the use of Bio-signals for “Theme”:
You asked why humans are “plugged in.” To clarify: I do not intend for the Abstractor to “learn” the Theme.

Here is my rationale:

  1. Subjects/Details: I believe Neural Networks are excellent at feature extraction for specific objects (Subjects) and their attributes (Details).

  2. The Theme (Holistic Vibe): A “Theme” is not simply the sum of various Subjects stitched together. It is closer to a “Gestalt”—an instantaneous emotional response or “first impression” upon seeing the whole image.

  3. The Gap: It is technically difficult for current algorithms to ignore local object features and accurately perceive this global stylistic essence.

Therefore, I designed the system to use human biological responses (brainwaves, micro-expressions) as direct input parameters. The human effectively acts as the “sensor” for the Theme, feeding data directly to the Abstractor. The Abstractor simply projects this data; it does not attempt to learn or simulate a capability (human qualia) it does not possess.

Your approach sounds more like you are trying to learn representations for concepts, but grounded in observations. Given that language is not the best foundation for this, you need tangible experience; I think you would benefit from environments. The ones used for reinforcement learning could be a good start.

A simple case would be an aliased grid world. Your system would need to learn path equivalence, i.e. it should generate representations showing that path A and path B in the world are the same. This is grounded because the world is consistent.
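To make the aliased grid world concrete, here is a minimal sketch. The `AliasedGridWorld` class, its layout, and its observation scheme are hypothetical illustrations (not from any existing library): the agent only observes the wall pattern of its current cell, so many distinct cells look identical (aliasing), while two different action sequences can provably land in the same underlying state.

```python
# Hypothetical minimal aliased grid world for illustrating path equivalence.
class AliasedGridWorld:
    """4x4 grid. The agent only observes the cell's wall pattern,
    so all interior cells emit the same observation (aliasing)."""

    def __init__(self):
        self.size = 4
        self.pos = (0, 0)  # start in the top-left corner

    def reset(self):
        self.pos = (0, 0)
        return self.observe()

    def observe(self):
        # Observation = which sides are walls; identical across many
        # cells, so the true state is hidden from the agent.
        r, c = self.pos
        return (r == 0, r == self.size - 1, c == 0, c == self.size - 1)

    def step(self, action):
        # Actions: 'U', 'D', 'L', 'R'; moving into a wall is a no-op.
        dr, dc = {'U': (-1, 0), 'D': (1, 0),
                  'L': (0, -1), 'R': (0, 1)}[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        return self.observe()

env = AliasedGridWorld()

# Two different paths from (0,0) to the same cell:
env.reset()
for a in ['D', 'R']:   # path A: down, then right
    env.step(a)
state_a = env.pos

env.reset()
for a in ['R', 'D']:   # path B: right, then down
    env.step(a)
state_b = env.pos

print(state_a == state_b)  # True: the world itself grounds path equivalence
```

A system with good grounded representations should assign path A and path B the same (or very similar) representation, even though the per-step observations along the way are ambiguous.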

I would say LLMs (or even deep learning in general) already do this, as they generate good abstractions. The thing that may be missing is the grounding, but that is an environment or training-setup problem, not a learning problem.

TLDR: the neural network is already an abstraction layer, so I am not sure what you are planning to do besides introducing ways to enable grounding.