Post here if you have a complete AGI

agi

#1

@matan_tsuberi I have my complete AGI model. Maybe you will like it.
Here is a synopsis:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/0rHVcqNoFG8


#2

What did it think when it read Moby Dick?


#3

It’s good to get excited about AI, but one does not simply create artificial general intelligence. Maybe you have a piece of it, which would be flipping awesome, but I don’t have any context in which to put your theory. It looks very different from HTM, so I don’t really want to spend the time to understand it; it would be like restarting my understanding of AI. If you explain it concisely, even if not in complete detail, maybe people will be more interested.


#4

I have all the pieces. I only show what I think you can understand.
I am not here to make you believe in it. I show all new people once and then move on.
If someone shows an interest, then I will start to explain it to them.
To make this project work in my lifetime, I need a lot of believers for my cause.


#5

A post was merged into an existing topic: Tradeoff between generality and optimallity in regards to AI alignment issues


#6

Not there yet, if ever.

We don’t need to build AGI, only the AIs that can conceive one (or more AGIs, if you wish). We are just lacking a clear objective of what an AGI would be capable of…or aren’t we…:wink:


#7

I would be interested in further explanation. The Google Groups link that has been floating around isn’t really coherent for someone from the outside. The current status of the model is not clear, and there are lots of tangential links. Could you give a coherent summary of the main points of your model (in its current up-to-date form)?


#8

The machine does 4 main things:

  1. Learns the world with neural evolution detectors, recording what activates each detector.

  2. Compares the things learned in step 1 against each other using “simulated annealing” driven by a gradient descent algorithm, which works like an edit distance. First, all objects and patterns are weighted. To move in N-dimensional space you select a target and a starting object, and the weights are changed to make the two alike. This is done iteratively in small random steps until the most direct movement is found; then bigger steps can be taken. The amount of work, or number of steps, is the “eigen distance”. (See the sketch at the end of this post.)

  3. Gives the highest priority to detected things and temporal patterns that involve less chaos, energy, and damage/pain, and that accelerate matching. These make up the reward system.

  4. Uses a special physics engine to learn the physical distance between objects and the movement of objects.

Neural physics engine:
http://mbchang.github.io/npe/

Gradient descent:

Simulated annealing is about moving around in N-dimensional space:
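
Here is a rough sketch, in Python, of what step 2 could look like. The weight vectors, the squared-error cost, and all the names below are placeholders I picked for illustration, not the actual implementation:

```python
# A minimal sketch of the comparison in step 2, under my own assumptions:
# each "object" is just a weight vector, the cost is the squared error
# between the two vectors (an edit-distance stand-in), and the number of
# annealing steps needed to align them is treated as the "eigen distance".
# All names here are hypothetical, not taken from the model itself.
import math
import random

def eigen_distance(start, target, step_size=0.05, temperature=1.0,
                   cooling=0.99, tolerance=0.1, max_steps=10_000):
    """Count how many annealing steps it takes to morph `start` into `target`."""
    current = list(start)

    def cost(weights):
        # Squared error between the two weight vectors.
        return sum((w - t) ** 2 for w, t in zip(weights, target))

    steps = 0
    while cost(current) > tolerance and steps < max_steps:
        # Propose a small random perturbation of one weight.
        i = random.randrange(len(current))
        candidate = list(current)
        candidate[i] += random.uniform(-step_size, step_size)

        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools (standard simulated annealing).
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
        temperature *= cooling
        steps += 1
    return steps

# Example: two 4-dimensional "objects"; more steps means less alike.
print(eigen_distance([0.0, 0.0, 0.0, 0.0], [1.0, 0.5, -0.3, 0.2]))
```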


#9

I see some problems in your algorithm.
First of all, you can’t learn anything using neural evolution within a single generation of an organism. Evolution doesn’t evolve the organism itself but the entire species.

Second, you cannot anneal a learning algorithm if you don’t have a loss function. What is the loss function in your AGI? Also, gradient descent is a separate algorithm from simulated annealing; in fact, GD is essentially SA guided by the gradient instead of by random values when choosing the direction to move in.
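
To make the distinction concrete, here is a toy comparison of the two update rules on a one-dimensional loss (the loss function, step sizes, and cooling schedule are purely illustrative, not anything from your model):

```python
# Both methods need a loss function; gradient descent follows the gradient,
# while simulated annealing tries random moves and occasionally accepts bad ones.
import math
import random

def loss(x):
    return (x - 3.0) ** 2          # toy loss with its minimum at x = 3

def grad(x):
    return 2.0 * (x - 3.0)         # analytic gradient of the toy loss

def gradient_descent(x=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        x -= lr * grad(x)          # move directly along the negative gradient
    return x

def simulated_annealing(x=0.0, temperature=1.0, cooling=0.95, steps=100):
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)   # random proposal, no gradient
        delta = loss(candidate) - loss(x)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate          # sometimes accept a worse point to escape local minima
        temperature *= cooling
    return x

print(gradient_descent())      # converges to ~3.0 deterministically
print(simulated_annealing())   # wanders toward ~3.0 stochastically
```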

Best,
Martin