Post here if you have a complete AGI

agi

#1

@matan_tsuberi I have my complete AGI model. You might like it.
Here is a synopsis:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/0rHVcqNoFG8


Tradeoff between generality and optimality in regards to AI alignment issues
#2

What did it think when it read Moby Dick?


#3

It’s good to get excited about AI, but one does not simply create artificial general intelligence. Maybe you have a piece of it, which would be flipping awesome, but I don’t have any context in which to put your theory. It looks very different from HTM, so I don’t really want to spend the time to understand it; it would be like restarting my understanding of AI. If you explain it concisely, even if not in complete detail, maybe people will be more interested.


#4

I have all the pieces. I only show what I think you can understand.
I am not here to make you believe in it. I show all new people once and then move on.
If someone shows interest, then I will start to explain it to them.
To make this project work in my lifetime, I need a lot of believers for my cause.


#5

A post was merged into an existing topic: Tradeoff between generality and optimality in regards to AI alignment issues


#6

Not there yet, if ever.

We don’t need to build AGI, only the AIs that can conceive one (or more AGIs if you wish). We are just lacking a clear objective of what AGI would be capable of… or aren’t we… :wink:


#7

I would be interested in further explanation. The Google Groups link that has been floating around isn’t really coherent for someone from the outside. The current status of the model is not clear, and there are lots of tangential links. Could you give a coherent summary of the main points of your model (in its current up-to-date form)?


#8

The machine does four main things.

  1. Learns the world with neural-evolution detectors, recording what activates each detector.

  2. Compares the things learned in step one against each other using
    “simulated annealing” driven by a gradient-descent algorithm, which works
    like an edit distance. First, all objects and patterns are weighted.
    To move in N-dimensional space you select a target and a starting object. The weights are changed to make the two alike. This is done iteratively in small random steps until
    the most direct movement is found. Then bigger steps can be taken. The amount
    of work, or number of steps, is the eigen distance.

  3. Detected things and temporal patterns that involve less chaos, energy, and
    damage/pain, and that accelerate matching, have the highest priority. They make up the reward system.

  4. Uses a special physics engine to learn the physical distance between objects
    and the movement of objects.

Neural physics engine:
http://mbchang.github.io/npe/

Gradient descent:

Simulated Annealing is about moving around in N dimension space:
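Step 2 above can be sketched as a toy simulated-annealing loop: perturb the weights of a starting object in small random steps until it matches the target, and report the number of accepted steps as the "distance" between the two. This is only a minimal illustration of the described idea; all names and parameters here are assumptions, not the poster's actual code.

```python
import math
import random

def anneal_distance(start, target, max_iters=20000, step=0.05,
                    temp=1.0, cooling=0.999):
    """Move `start` toward `target` in small random steps; the number of
    accepted steps serves as the 'distance' between the two objects."""
    current = list(start)

    def cost(v):
        # Squared Euclidean distance to the target in N-dimensional space.
        return sum((a - b) ** 2 for a, b in zip(v, target))

    steps = 0
    best = cost(current)
    for _ in range(max_iters):
        if best < 1e-6:
            break  # close enough to the target
        candidate = [w + random.uniform(-step, step) for w in current]
        c = cost(candidate)
        # Always accept improvements; occasionally accept worse moves while
        # the temperature is high, to escape local minima.
        if c < best or random.random() < math.exp(-(c - best) / max(temp, 1e-12)):
            current, best = candidate, c
            steps += 1
        temp *= cooling
    return steps, best
```

With this framing, two objects whose weight vectors are already similar need few steps (a small "eigen distance"), while dissimilar ones need many.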


Functional Emergence, or so I hear the cool kids are talking about it
#9

I see some problems in your algorithm.
First of all, you can’t learn anything using neural evolution in a single generation of an organism. Evolution doesn’t evolve the organism itself but the entire species.

Second, you cannot anneal a learning algorithm if you don’t have a loss function. What is the loss function in your AGI? Also, gradient descent is a separate algorithm from simulated annealing. In fact, GD is SA guided by the gradient instead of using random values to determine the direction of movement.

Best,
Martin
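The distinction Martin draws can be made concrete on the same one-dimensional loss, f(x) = (x − 3)²: gradient descent follows the known gradient, while simulated annealing proposes random moves and accepts worse ones with a temperature-dependent probability. This is a generic illustration of the two algorithms, not code from anyone's model.

```python
import math
import random

def gradient_descent(x, lr=0.1, iters=100):
    # Follows the (known) gradient of f(x) = (x - 3)^2, i.e. f'(x) = 2*(x - 3).
    for _ in range(iters):
        x -= lr * 2 * (x - 3)
    return x

def simulated_annealing(x, temp=2.0, cooling=0.95, iters=500):
    f = lambda v: (v - 3) ** 2
    for _ in range(iters):
        cand = x + random.uniform(-0.5, 0.5)  # random move, no gradient needed
        delta = f(cand) - f(x)
        # Accept improvements always; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            x = cand
        temp *= cooling
    return x
```

Both converge near x = 3 here, but only SA would still work if the gradient were unavailable or the loss were riddled with local minima.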


#10

Well, yes, of course. The loss function goes without saying for anyone familiar with the algorithm.
There are also a lot of reward functions in my AGI model. And neural evolution is fast enough, along with the other algorithms I have modified.


#11

Here it is.
‘Breakthrough’ Algorithm Exponentially Faster Than Any Previous One:


#12

Maybe it’s just me but it’s extremely difficult to understand what exactly is novel about your model given the resources you’ve provided. Moreover, what does the “G” in AGI mean to you? What is it that brings AI across the threshold into AGI that your model has achieved? And how have you assessed it?


#13

G is for general. But it will have to start from sub-human beginnings.
It’s OK if you do not understand, or do not want to understand. But I have to hold my cards close while poaching season is in full swing.
Patent trolls will also sit on ideas and make people pay money to go through them, which will slow AGI work to a crawl.
I just need people to remember my words. When the poachers and trolls die off and the
young take their place, and they wonder why AGI is still not here, their eyes will turn to me. And a new age will begin.

I also have a complete model of human psychology that can answer all questions
about the human condition. It may not be in line with popular theories, but it will work
just as well and… I can code it!


#14

You have one year in the US to file a patent on your work after disclosing it on the internet.


#15

I’m afraid you didn’t answer any of my questions. I didn’t ask what the G stands for. I’m asking what it means to you for an AI to transcend to the level of AGI. It’s not universally agreed upon what qualifies and disqualifies something as AGI. It’s crucial we understand what it is about your model that you believe qualifies it as AGI (and equivalently how other work does not qualify as AGI). Without that, there’s no structure to your claims. Just as important, we also need to see and evaluate your evidence that supports said claim.

These are the fundamentals of the scientific method. You say you wish to gather “believers” for your methodology. With all due respect, how can you expect people (especially a knowledgeable community such as this) to believe you without any explanation or evidence?


#16

This forum is for neuroscience-based cognitive models, not “AI” or “AGI”, which have forums elsewhere on the internet that would better understand your issues. I just finished explaining how I see the difference, in another AI-related topic:

No one here that I know of has to care about what great ideas the patent trolls are pretending to be sitting on. I only hope that they flush when finished.


#18

I don’t think this conversation is fruitful, so I’m going to lock it.


#19