@matan_tsuberi I have my complete AGI model. You might like it.
Here is a synopsis:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/0rHVcqNoFG8
What did it think when it read Moby Dick?
It's good to get excited about AI, but one does not simply create artificial general intelligence. Maybe you have a piece of it, which would be flipping awesome, but I don't have any context in which to put your theory because it looks very different from HTM, so I don't really want to spend the time to understand it because it would be like restarting my understanding of AI. If you explain it concisely, even if not in complete detail, maybe people will be more interested.
I have all the pieces. I only show what I think you can understand.
I am not here to make you believe in it. I show all new people once and then move on.
If someone shows an interest, then I will start to explain it to them.
To make this project work in my lifetime, I need a lot of believers for my cause.
Not there yet, if ever.
We don't need to build AGI, only the AIs that can conceive one (or more AGIs if you wish). We are just lacking a clear objective of what AGI would be capable of… or aren't we…
I would be interested in further explanation. The Google Groups link that has been floating around isn't really coherent for someone from the outside. The current status of the model is not clear, and there are lots of tangential links. Could you give a coherent summary of the main points of your model (in its current up-to-date form)?
The machine does 4 main things:

1) Learns the world with neural evolution detectors. It records what activates a detector.

2) Compares the things learned in item one against each other using "simulated annealing" that relies on a gradient descent algorithm, which works like an edit distance. First, all objects and patterns are weighted. To move in N-dimensional space you select a target and a starting object. The weights are changed to make the two alike. This is done iteratively in small random steps until the most direct movement is found. Then bigger steps can be taken. The amount of work, or number of steps, is the "eigen distance" (a sketch follows after the links below).

3) Detected things and temporal patterns that involve less chaos, energy, and damage/pain, and that accelerate matching, have the highest priority. These make up the reward system.

4) Uses a special physics engine to learn the physical distance between objects and the movement of objects.
Neural physics engine:
http://mbchang.github.io/npe/
Gradient descent:
Simulated annealing is about moving around in N-dimensional space:
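To make item 2 easier to picture, here is a minimal sketch of that matching procedure as described: start from one weighted object, take small random steps toward a target object in N-dimensional weight space, and count the steps as the "eigen distance". The function name, relative step size, tolerance, and the plain Euclidean norm are illustrative assumptions, not details taken from the posted model.

```python
import numpy as np

def eigen_distance(start, target, rel_step=0.1, tol=1e-2, max_steps=100_000, seed=0):
    """Count how many small random steps it takes to make `start`
    resemble `target` in N-dimensional weight space; that step count
    plays the role of the "eigen distance" described above."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(start, dtype=float).copy()
    goal = np.asarray(target, dtype=float)
    best = np.linalg.norm(weights - goal)
    steps = 0
    while best > tol and steps < max_steps:
        # Propose a small random move (scaled to the remaining distance)
        # and keep it only if it makes the two objects more alike;
        # a greedy, annealing-style search with no explicit gradient.
        candidate = weights + rng.normal(scale=rel_step * best, size=weights.shape)
        dist = np.linalg.norm(candidate - goal)
        if dist < best:
            weights, best = candidate, dist
        steps += 1
    return steps, weights

# Example: two 8-dimensional weighted "objects".
steps, _ = eigen_distance(np.zeros(8), np.ones(8))
print("eigen distance (steps taken):", steps)
```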
I see some problems in your algorithm.
First of all, you can't learn anything using neural evolution in a single generation of an organism. Evolution doesn't evolve the organism itself but the entire species.
Second, you cannot anneal a learning algorithm if you don't have a loss function. What is your loss function in your AGI? Also, Gradient Descent is a separate algorithm from Simulated Annealing. In fact, GD is SA guided by the gradient instead of using random values to determine the moving direction.
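For concreteness, here is a minimal side-by-side sketch of the two update rules on a toy quadratic loss; the loss, learning rate, proposal scale, and cooling schedule are illustrative assumptions and are not taken from the model under discussion. Gradient descent follows the gradient downhill; simulated annealing proposes random moves and accepts them probabilistically.

```python
import numpy as np

# Toy quadratic loss, chosen only to make the comparison concrete.
def loss(w):
    return np.sum((w - 3.0) ** 2)

def grad(w):
    return 2.0 * (w - 3.0)

def gradient_descent_step(w, lr=0.1):
    # Deterministic: move in the direction the gradient says is downhill.
    return w - lr * grad(w)

def simulated_annealing_step(w, temperature, rng, scale=0.5):
    # Stochastic: propose a random move, accept it if it lowers the loss,
    # or with Metropolis probability exp(-delta / T) if it raises it.
    candidate = w + rng.normal(scale=scale, size=w.shape)
    delta = loss(candidate) - loss(w)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        return candidate
    return w

rng = np.random.default_rng(0)
w_gd = np.zeros(4)
w_sa = np.zeros(4)
for t in range(1, 201):
    w_gd = gradient_descent_step(w_gd)
    w_sa = simulated_annealing_step(w_sa, temperature=1.0 / t, rng=rng)

print("gradient descent loss:", loss(w_gd))
print("simulated annealing loss:", loss(w_sa))
```

Note that both variants still need the loss to decide whether a move helped, which is why the question about the loss function matters.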
Best,
Martin
Well, yes, of course. The loss function goes without saying for anyone familiar with the algorithm.
There are also a lot of reward functions with my AGI model. And neural evolution is fast enough, along with the other algorithms that I have modified.
Here it is.
"Breakthrough" Algorithm Exponentially Faster Than Any Previous One:
Maybe it's just me but it's extremely difficult to understand what exactly is novel about your model given the resources you've provided. Moreover, what does the "G" in AGI mean to you? What is it that brings AI across the threshold into AGI that your model has achieved? And how have you assessed it?
G is for general. But it will have to start from sub-human beginnings.
It's O.K. if you do not understand or do not want to understand. But I have to hold my cards close while poaching season is in full swing.
Patent trolls will also sit on ideas and make people pay money to go through them, which will slow AGI work to a crawl.
I just need people to remember my words. When the poachers and trolls die off and the young take their place, and when they wonder why AGI is still not here, then their eyes will turn to me. And a new age will begin.
I also have a complete model of human psychology that can answer all questions about the human condition. It may not be in line with popular theories, but it will work just as well and… I can code it!
You have one year in the US to file a patent on your work after disclosing it on the internet.
I'm afraid you didn't answer any of my questions. I didn't ask what the G stands for. I'm asking what it means to you for an AI to transcend to the level of AGI. It's not universally agreed upon what qualifies and disqualifies something as AGI. It's crucial we understand what it is about your model that you believe qualifies it as AGI (and equivalently how other work does not qualify as AGI). Without that, there's no structure to your claims. Just as important, we also need to see and evaluate your evidence that supports said claim.
These are the fundamentals of the scientific method. You say you wish to gather "believers" for your methodology. With all due respect, how can you expect people (especially a knowledgeable community such as this) to believe you without any explanation or evidence?
This forum is for neuroscience-based cognitive models, not "AI" or "AGI", which have forums elsewhere on the internet that would better understand your issues. I just finished explaining how I see the difference, for another AI-related topic:
No one here that I know of has to care about what great ideas the patent trolls are pretending to be sitting on. I only hope that they flush when finished.
I don't think this conversation is fruitful, so I'm going to lock it.