Post here if you have a complete AGI

@matan_tsuberi I have my complete AGI model. Maybe you will like it.
Here is a synopsis:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/0rHVcqNoFG8

What did it think when it read Moby Dick?

4 Likes

It's good to get excited about AI, but one does not simply create artificial general intelligence. Maybe you have a piece of it, which would be flipping awesome, but I don't have any context in which to put your theory because it looks very different from HTM, so I don't really want to spend the time to understand it; it would be like restarting my understanding of AI. If you explain it concisely, even if not in complete detail, maybe people will be more interested.

I have all the pieces. I only show what I think you can understand.
I am not here to make you believe in it. I show all new people once and then move on.
If someone shows an interest, then I will start to explain it to them.
To make this project work in my lifetime, I need a lot of believers for my cause.

A post was merged into an existing topic: Tradeoff between generality and optimality in regards to AI alignment issues

Not there yet, if ever.

We don't need to build AGI, only the AIs that can conceive one (or more AGIs if you wish). We are just lacking a clear objective of what AGI would be capable of… or aren't we… :wink:

I would be interested in further explanation. The Google Groups link that has been floating around isn't really coherent for someone from the outside. The current status of the model is not clear, and there are lots of tangential links. Could you give a coherent summary of the main points of your model (in its current up-to-date form)?

2 Likes

The machine does 4 main things:

  1. Learns the world with neural-evolution detectors, recording what
    activates each detector.

  2. Compares the things learned in step 1 against each other using
    "simulated annealing" driven by a gradient-descent algorithm, which works
    like an edit distance. First, all objects and patterns are weighted. To
    move in N-dimensional space you select a target object and a starting
    object. The weights are changed iteratively, in small random steps, to
    make the two alike, until the most direct movement is found; then bigger
    steps can be taken. The amount of work, or number of steps, is the
    "eigen distance" (see the sketch after this list).

  3. Detected things and temporal patterns that involve less chaos, energy,
    and damage/pain, and that accelerate matching, have the highest priority;
    these make up the reward system.

  4. Uses a special physics engine to learn the physical distance between
    objects and the movement of objects.
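
To make step 2 concrete, here is a minimal sketch of the annealed weight-alignment idea as I read it: two objects are represented as weight vectors (an assumption on my part, not stated above), one is perturbed in small random steps toward the other, and the number of steps taken stands in for the "eigen distance". The loss, step size, and temperature schedule are illustrative choices, not details from the original description.

```python
# Hypothetical sketch of step 2: anneal the weights of a "start" object toward
# a "target" object and count the steps taken as a rough distance measure.
import math
import random

def annealed_distance(start, target, max_steps=10_000, temp=1.0, cooling=0.999):
    current = list(start)

    def loss(w):
        # Squared difference between the two weight vectors.
        return sum((a - b) ** 2 for a, b in zip(w, target))

    cur_loss = loss(current)
    steps = 0
    while cur_loss > 1e-3 and steps < max_steps:
        # Propose a small random change to one weight (a "small random step").
        candidate = list(current)
        i = random.randrange(len(candidate))
        candidate[i] += random.gauss(0.0, 0.1)
        new_loss = loss(candidate)

        # Metropolis acceptance: always accept improvements, sometimes accept
        # worse moves while the temperature is still high.
        if new_loss < cur_loss or random.random() < math.exp(-(new_loss - cur_loss) / max(temp, 1e-9)):
            current, cur_loss = candidate, new_loss
        temp *= cooling
        steps += 1

    # The number of steps needed to make the two alike plays the role of the
    # "eigen distance" between the objects in this sketch.
    return steps

if __name__ == "__main__":
    a = [0.2, 0.9, 0.1, 0.5]
    b = [0.7, 0.3, 0.4, 0.5]
    print("approximate distance:", annealed_distance(a, b))
```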

Neural physics engine:
http://mbchang.github.io/npe/

Gradient descent:

Simulated annealing is about moving around in N-dimensional space:

I see some problems in your algorithm.
First of all, you can't learn anything using neural evolution in a single generation of an organism. Evolution doesn't evolve the organism itself but the entire species.

Second, you cannot anneal a learning algorithm if you don't have a loss function. What is the loss function in your AGI? Also, gradient descent is a separate algorithm from simulated annealing. In fact, GD is SA guided by the gradient instead of using random values to determine the moving direction.
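
For readers unfamiliar with the distinction, here is a toy illustration on a 2-D quadratic loss (the loss function and step sizes are arbitrary choices for the example, not anything from the thread): the gradient-descent step moves directly against the gradient, while the simulated-annealing step proposes a random move and accepts it by the Metropolis rule.

```python
# Toy comparison of a gradient-descent step and a simulated-annealing step
# on an arbitrary 2-D quadratic loss with minimum at (3, -1).
import math
import random

def loss(x, y):
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

def grad(x, y):
    return 2.0 * (x - 3.0), 2.0 * (y + 1.0)

def gradient_descent_step(x, y, lr=0.1):
    # Move directly against the gradient of the loss.
    gx, gy = grad(x, y)
    return x - lr * gx, y - lr * gy

def simulated_annealing_step(x, y, temp, scale=0.5):
    # Propose a random move; no gradient information picks the direction.
    nx, ny = x + random.gauss(0, scale), y + random.gauss(0, scale)
    delta = loss(nx, ny) - loss(x, y)
    if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
        return nx, ny
    return x, y

if __name__ == "__main__":
    gd = sa = (0.0, 0.0)
    temp = 1.0
    for _ in range(200):
        gd = gradient_descent_step(*gd)
        sa = simulated_annealing_step(*sa, temp)
        temp *= 0.98
    print("gradient descent   ->", gd, "loss:", round(loss(*gd), 4))
    print("simulated annealing ->", sa, "loss:", round(loss(*sa), 4))
```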

Best,
Martin

1 Like

Well, yes, of course. The loss function goes without saying for anyone familiar with the algorithm.
There are also a lot of reward functions in my AGI model. And neural evolution is fast enough, along with the other algorithms that I have modified.

Here it is.
'Breakthrough' Algorithm Exponentially Faster Than Any Previous One:

Maybe it's just me, but it's extremely difficult to understand what exactly is novel about your model given the resources you've provided. Moreover, what does the "G" in AGI mean to you? What is it that brings AI across the threshold into AGI that your model has achieved? And how have you assessed it?

1 Like

G is for general. But it will have to start from sub-human beginnings.
It's OK if you do not understand or do not want to understand. But I have to hold my cards close while poaching season is in full swing.
Patent trolls will also sit on ideas and make people pay money to go through them, which will slow AGI work to a crawl.
I just need people to remember my words. When the poachers and trolls die off and the young take their place, and they wonder why AGI is still not here, their eyes will turn to me. And a new age will begin.

I also have a complete model of human psychology that can answer all questions about the human condition. It may not be in line with popular theories, but it will work just as well and … I can code it!

You have one year in the US to file a patent on your work after disclosing it on the internet.

4 Likes

I'm afraid you didn't answer any of my questions. I didn't ask what the G stands for. I'm asking what it means to you for an AI to transcend to the level of AGI. It's not universally agreed upon what qualifies and disqualifies something as AGI. It's crucial we understand what it is about your model that you believe qualifies it as AGI (and equivalently how other work does not qualify as AGI). Without that, there's no structure to your claims. Just as important, we also need to see and evaluate your evidence that supports said claim.

These are the fundamentals of the scientific method. You say you wish to gather "believers" for your methodology. With all due respect, how can you expect people (especially a knowledgeable community such as this) to believe you without any explanation or evidence?

3 Likes

This forum is for neuroscience-based cognitive models, not "AI" or "AGI", which have forums elsewhere on the internet that would better understand your issues. I just finished explaining how I see the difference, for another AI-related topic:

No one here that I know of has to care about whatever great ideas the patent trolls are pretending to be sitting on. I only hope that they flush when finished.

I don't think this conversation is fruitful, so I'm going to lock it.

1 Like