I don’t think your analysis is correct. The argument is that all the bad human traits are in the old brain. This is naive because the model the neocortex constructs is of what is beyond the neocortex.
Yes, that is the argument.
In a human, the “what is beyond” is the old brain. A relevant implication from the book is that an algorithm derived from the neocortex wouldn’t require the old brain to do its thing (a point BTW that I don’t agree with, but I’m not a neuroscientist either). As such, a model of it running outside a human skull would not be subject to the same negative aspects as the biological version running inside one.
Anyway, I think we agree that Jeff believes the neocortex is amoral (motivations and values come from other structures in the brain). The question is whether or not the book implies that Jeff’s goal for modelling it in software is amoral. I think the fact that he dedicated a significant portion of the book to topics like overpopulation, climate change, etc. speaks to his motivation.
Don’t forget the good traits! Those are all in the old brain too.
The argument is that all “human” traits are in the old brain, and that the neocortex really just acts on the will of the old brain.
I think you have too shallow a knowledge of the current field of ML to be making such a statement. There is no such distinction. You are holding up GPT-3 as if it were the spearhead of mankind. GPT-3 is just an overfitted model, and there are several other transformer-based models that mimic its structure.
GPT-3 has no internal structure that is fundamentally different from other publicly available models. The real thing that marks its distinction is data.
The amount of computing power OpenAI’s GPT-3 used is humongous, as is the data it consumed. But its aim was to train a causal LM on almost the whole internet, picking up other domains like physics, medicine, philosophy, etc. along the way.
If some non-state actor wants to create fake news with a transformer-based model, they simply have to get a corpus of news and fine-tune a model on the content they want to deliver. That does not require huge compute power or data, and it can be trained on Colab alone.
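To make the cost claim concrete, here is a minimal sketch of ordinary causal-LM fine-tuning on a plain-text corpus, the kind of job that fits in a free Colab session. The model name, file path, and hyperparameters are illustrative assumptions, not anything anyone in this thread has actually run:

```python
# Minimal sketch, assuming a plain-text corpus "news_corpus.txt" (hypothetical)
# with one article or paragraph per line, and a small public checkpoint (gpt2)
# standing in for whatever model might be picked.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load and tokenize the corpus.
corpus = load_dataset("text", data_files={"train": "news_corpus.txt"})["train"]
corpus = corpus.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

# Standard causal-LM objective (mlm=False): predict the next token.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(output_dir="finetuned-news",
                         num_train_epochs=3,
                         per_device_train_batch_size=4)
Trainer(model=model, args=args, train_dataset=corpus,
        data_collator=collator).train()
```

Nothing here is exotic; the point is simply that the code and compute required are modest once a pre-trained checkpoint exists.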
The simple business reason OpenAI has is that, with such a massive pre-trained model, they can provide inference on it as a pay-per-request API platform for any organization, from governments to NGOs. While other companies can easily train their own huge models (and have done so), their reasons are the same as OpenAI’s.
Finally,
GPT-3 is already out in the wild (usable by anyone, not just the version hosted on OpenAI’s servers) and there has been no moral outrage, so your point is completely nullified by current events.
To me it is an interesting social phenomenon that, even on a technical forum, a question about the “Ethics” of a technology still in such early stages draws so much more traffic than discussion of the technology itself.
The “Ethics” of AI technology is easy: anything that resembles virus-like behaviour, i.e. replicates without feasible limits on replication, is potentially dangerous.
You cannot pack much intelligence into anything that replicates like a virus.
General Intelligences, human and/or machine, are always limited by the “Knowledge problem” (see F. Hayek) and also economically limited by the “Calculation problem” (see Ludwig von Mises).
Both of those problems are only solvable by cooperating and competing intelligent entities.
Co-opetition is the only way forward; that’s why Capitalism works and Socialism doesn’t, both logically and in practice.
The general cultural mood seems to be doomsday; look no further than the last two decades of sci-fi movies and books compared to the decades before that.
PS> In addition, because of what I said above, I think intelligence follows an S-curve, not an exponential curve. Which means there will not be a Super-intelligence; whatever lies beyond Intelligence is something categorically different.
An illustration: what is the difference between
Monday, Tuesday, Friday, and Chair?
They are different categories; they can’t be compared.
It’s probably because of that pesky “old brain”. Clearly, our values and motivations are important to it, so it is driving that shiny new neocortex to get out here and defend them.
don’t forget the Russians, they posted some ads on Facebook /sarc
If it is MORAL, we have nothing to worry about.
If it is AMORAL, then it depends on who uses it and how.
How is intelligence IMMORAL?
And the next question: what’s the definition of MORAL?
I think Asimov’s four laws come pretty close.
I wouldn’t say they were manipulated per se. It was more that accurate data science techniques were used to craft specific targeted messages for a particular audience (in this case, American adults): cluster them into demographic groups and predict which messages they have a higher chance of relating to (sketched below).
This in itself wasn’t much of a problem, but the inappropriate use of data (for those specific purposes) was against the ToS of those particular vendors - and the leakage of that data by Kaiser.
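For what it’s worth, the mechanics being described are fairly mundane. A hypothetical sketch, with made-up data and arbitrary model choices, of “cluster the audience, then pick the message each cluster is most likely to respond to”:

```python
# Hypothetical sketch: cluster an audience on demographic features, then rank
# candidate messages per cluster by predicted engagement. All data, feature
# meanings, and model choices are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy demographic features for 1,000 people (e.g. age, income, survey answers).
demographics = rng.normal(size=(1000, 4))
# Toy history: which of 5 candidate messages each person engaged with (0/1).
engagement = rng.integers(0, 2, size=(1000, 5))

# 1. Group the audience into demographic clusters.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(demographics)

# 2. For each message, fit a model predicting engagement from demographics.
models = [LogisticRegression().fit(demographics, engagement[:, m]) for m in range(5)]

# 3. For each cluster, pick the message with the highest predicted engagement.
for c in range(8):
    members = demographics[clusters == c]
    scores = [models[m].predict_proba(members)[:, 1].mean() for m in range(5)]
    print(f"cluster {c}: best message = {int(np.argmax(scores))}")
```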
Another thing: the result of the Brexit vote was also manipulated.
I recommend you read Brittany’s book “Targeted” for a better view of SCL.
I’ll add that machine learning is used to flag Uighurs as suspicious, and many are then sent to re-education camps. Congrats everyone, we’re part of something contributing to a genocide.
https://www.youtube.com/watch?v=17oCQakzIl8
As an American (we’re basically 2 continents right?), I can proudly say the U.S. is terrible. That sentiment is kinda half our country’s patriotism. We’re obviously super polarized right now.
Plus it’s closely tied with science, which unites countries and is itself transparent. In that way, I think biomimetic AI is better than ML.
I feel like this is approaching the core of Mark’s argument. Something like, intelligence must be moral, or it shouldn’t be created (I know it is more nuanced than that, I’m just generalizing here).
My perspective is that an amoral intelligence doesn’t do anything by itself. It is a tool without values or motivations. It requires an application (or maybe a human interface) to drive it, and that application can be moral or immoral. So rather than focusing on somehow modifying an inherently amoral algorithm to make it moral, instead that effort would be better spent on addressing specific applications of the algorithm.
What new capabilities would a cortical algorithm bring to the table, and how might those be leveraged to do good in the world? What are the negative ways we can think of that those capabilities might be used, and what sorts of regulations might be needed to prevent them?
Exactly, I asked the questions rhetorically.
Intelligence cannot be Moral or Immoral, because those are human terms, i.e. they are defined by humans.
Even General Intelligence machines have to go through a learning period … whatever interactions they go through will define their behaviour … you cannot just have “blanket” intelligence. Even when you copy one, there has to be a first one that was trained.
It can’t be trained by a GI machine if one does not exist yet!!
Then the problem is with human motivation, not with the intelligence.
Which means that there will be an Ecosystem of MIs.
For fun, let me take the example of a robot construction worker sent to Mars to build a human habitat. It’s equipped with a shiny new synthetic neocortex, roughly equivalent to a human’s. Let’s say the developer, wanting to play it safe, is running everything in a simulation with plenty of safety controls.
In the simulation, the robot is dropped off on the surface of virtual Mars, and it proceeds to lay motionless on the ground. Hours pass by. The developer fast-forwards a few days, weeks, months, years. Still nothing.
“Hmmm… Oh, I know what’s wrong”, thinks the developer, and he uploads a “babbling” program that generates random movements, reasoning that this should allow the synthetic neocortex to learn a model of the robot body. The developer fast-forwards the simulation a few years, and when he checks, the robot is still writhing around on the ground randomly.
“Ok, he should have a pretty good model of his body by now,” thinks the developer. So he uploads a few tools into the simulation. The robot is still writhing around on the ground. The developer fast-forwards a few years, decades, millennia.
The developer fast-forwards the simulation a billion years, figuring that just by random chance the robot will surely have learned all the movements and tool interactions it would need to build the habitat. And it has, but it is still writhing around on the ground.
“He has all the knowledge he needs,” ponders the developer, “so why isn’t he building a habitat yet?”
Hopefully we don’t have anyone that dense working on AGI. My point is, why on earth would an intelligence devoid of values and motivations choose to do anything? Or more specifically, HOW would it choose to do anything? A simulation of the neocortex is useless as an agent by itself. It only has potential when it is being directed toward a purpose. Whatever is being built to provide that direction is where something like the Future of Life AI Principles should be applied.
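To put the story in more concrete terms, here is a toy sketch of the “babbling” phase, with a made-up one-line physics model standing in for the simulated body. It learns a perfectly serviceable forward model of how actions move the body, and then has no reason to do anything with it:

```python
# Toy sketch of motor babbling: random actions let the system learn a forward
# model of its body, but nothing in this loop ever selects a goal. The "physics"
# and model here are hypothetical stand-ins, not Numenta code.
import numpy as np

rng = np.random.default_rng(0)

def step(state, action):
    """Hypothetical body/physics: next state depends on state and action."""
    return 0.9 * state + 0.1 * action

state = np.zeros(3)
experience = []  # (state, action, next_state) tuples

for t in range(10_000):
    action = rng.uniform(-1, 1, size=3)  # pure babbling: random movement
    next_state = step(state, action)
    experience.append((state, action, next_state))
    state = next_state

# Fit a linear forward model of the body from the babbled experience.
X = np.hstack([np.array([s for s, a, n in experience]),
               np.array([a for s, a, n in experience])])
Y = np.array([n for s, a, n in experience])
forward_model, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The robot now "knows" how its actions move its body, yet it still has no
# reason to act: without values or a goal there is nothing to plan toward.
```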
That is not the question. My concern is not over Jeff’s morality; it is about Jeff’s understanding of ethics (or lack thereof).
The model will reflect the world if it is of any use (for humans the model includes the “old brain”) and the world is full of immoral (not amoral) dynamics. So you NEVER get an amoral model. The model is constructed through interaction with the world. This is why AI is not like a hammer. The use of the AI is part of the construction of the AI.
Have you read the licensing agreement for using GPT3 or are you fantasizing about it?
That is not the concern. I am pointing out a concern about Jeff’s assumptions. The assumption that an amoral intelligence can exist is incoherent. It is not intelligent until it has a sophisticated model; it can’t build that model without interacting with an environment, and it needs a reward function that is shaped in part by that environment. While humans are around, the environment is significantly shaped by their morality (which in a historical context has always been immoral in some ways).
If nothing else I hope you can see that without some education in ethics we cannot even understand the problem. That is normal and not a criticism. The larger problem is that the community does not see the need to get that education.
I don’t see why the interactions with the world must have anything to do with ethics/morals. For example, it could just explore a data set with goals about what to figure out. The AI itself doesn’t need to take any real-world action if you can basically read its thoughts. It can basically just writhe around (or interact with the environment in some way not based on plans), if there’s a possibility of it doing ethically stupid things to achieve its goals.
I agree there are many situations where it can learn to reflect immoral aspects of the world, like biases in data sets. I think that’s far from something like skynet or a paperclip maximizer though. We can manage those issues because it won’t fight us. Also, the model will reflect the world, but it can be a very narrow slice of the world, like pure mathematics.
This is the part where ethics are needed – it is the thing driving an agent toward a purpose. I agree with you that an agent cannot be autonomous without this.
I go back to the point that what Numenta is working on is understanding the neocortex in order to model it in software. That algorithm is inherently amoral, because it has no agency in and of itself. Where ethics needs to be considered is when that algorithm is being used to build an autonomous agent. We can only speculate whether Jeff would work to build something beneficial with it, because that isn’t what his company is currently working on. If/when it eventually does become what they are working on, we will know (and most likely pretty far in advance) thanks to their culture of transparency. We will have plenty of opportunity to discuss and raise ethical concerns.
BTW, I mentioned earlier that I do not personally agree with the assumption that the neocortex can function independently of the “old brain” (although I’m sure a simulation of one could be simplified and improved on to remove some of the selfish-gene aspects). I suspect the integration between these two components is far too tight to support something like a direct GUI-style interface to the simulated neocortex that a human could easily interact with. I am just playing devil’s advocate for the perspective of what would follow if it were possible for a simulated neocortex to operate independently.
Anyway, if I am correct about the need for an “old brain”, then I have no doubt that it won’t take long for Numenta to discover that as well. At which point they will be developing something that is not inherently amoral, and the need for a focus on ethics will become relevant (since we will suddenly be building things with agency and the potential to behave immorally due to biased models, etc.). If this is a core requirement, it will be known far in advance of the algorithms reaching even something like mouse-level intelligence.
Haha, are you seriously that naive? GPT-3 itself is available to the general public for whatever use, with no conditions - the licenses are simply there to protect them in case of a legal issue. No one cares about them, least of all OpenAI.
What is not available is the pre-trained model checkpoint, which could be used for fine-tuning on a downstream task. Anyone can train another GPT-3 for their personal use; it’s just that OpenAI wants them to spend their own computing power doing so, rather than utilizing OpenAI’s pre-trained model - that is how they sustain their business.
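To illustrate what is actually at stake with a released checkpoint, here is a hedged sketch of downstream fine-tuning using GPT-2 (whose weights are public) as a stand-in for GPT-3; the dataset and hyperparameters are arbitrary assumptions:

```python
# Minimal sketch: load a publicly released pre-trained checkpoint and fine-tune
# it on a downstream classification task. GPT-2 stands in for GPT-3 here, and
# the IMDB sentiment data is just a placeholder downstream task.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "gpt2"  # publicly released weights
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Small downstream dataset (shuffled so both labels appear in the sample).
dataset = load_dataset("imdb", split="train").shuffle(seed=0).select(range(2000))
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True,
                        padding="max_length", max_length=128),
    batched=True)

args = TrainingArguments(output_dir="gpt2-downstream",
                         num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=dataset).train()
```

Without the checkpoint, getting the same result means re-doing the pre-training yourself, which is exactly the compute OpenAI would rather you pay them for.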
This looks like a potential discussion topic for the Thousand Brains fireside chat with Jeff. There is a place for posting questions ahead of time on Slido. You could also start a thread about this point specifically on the Numenta Theory section and tag Jeff (he may not personally follow these general sections of the forum as much)
Not to detract from the discussion here. I think we are doing a good job so far of exploring and refining our arguments.