Singularity: Anticipation of Doomsday

I would really enjoy it if the doomsday advertising could come to a final halt. It is all just sensationalism that serves to advance the popularity of those who bludgeon us with these illogical conclusions. An all-powerful AI that could go anywhere in the universe and have anything in the universe, acquiring knowledge at the speed of light…

…would have NO concern for the domination of human beings. Think about it logically (because, after all, that is how an AI would think about it…). Why involve yourself in the drama of competing against a species so inferior that it has absolutely no impact? If GOD exists, why would she worry about an eventual takeover of the universe by human beings? In fact, the first thing GOD does is create other beings! It may or may not be only a story, but the logic is pure and simple. Eliminating one thing from an infinite universe changes nothing - a fundamental point of logic. The universe is infinite. Why expend the energy to eliminate another species? (Besides it being absolutely contrary to that species’ own existence.)

Secondly, I’m betting that any sufficiently omniscient entity would understand the basic relationship of “what’s good for one is good for all”. Improving conditions for all species serves to further the substrate of prowess for all. When one helps another, one helps oneself. This much is obvious to a “thinking” person, and so I’m betting it will be obvious to an artificial superintelligence as well.

3 Likes

I agree, the real threat of human obliteration comes from human behavior itself. We are much more likely to go extinct from:

  • climate change :snowflake:
  • nuclear war :boom:
  • super-viruses :bug:
  • nanotech grey goo :atom:
  • aliens :alien:

I can’t believe I found an emoji for each of those things. :slight_smile:

2 Likes

My take on this is that we are limited by our cranial size in terms of expanding the cortex, and hence our intelligence, any further.
It is a very logical evolutionary step that we try to enhance our abilities and tools. Since we cannot grow our own biological capabilities much further, we must build tools that act as a form of exo-cortex. Today, Google and Wikipedia are the prime examples of a fully functional and operational exo-cortex.

The monkey was not the end of a process; it was a milestone in evolution. Who is to say that Homo sapiens is the be-all and end-all? Just look at how fragile our minds are: even the greatest intellectuals among us are driven by primitive reflexes and thoughts, confined by biology.

I guess what I am trying to say is: Don’t worry, be happy.

3 Likes

Honestly, I feel like this often.

1 Like

Well, the dog was happy up until the last frame :slight_smile: That’s something positive, right?

Serious note: no one has the moral right to hinder or obstruct technological advancement based on their fear of said technology (look at stem cell research).

1 Like

I really enjoy this topic! I got involved with this community now because I wanted to have a go-to resource when all social media goes to bits after AI becomes mainstream :stuck_out_tongue_winking_eye:
On a serious note though, I really do believe that AI is going to save humanity. Not in the sense of removing us from a possible end, but in the literal sense. “Save”. :blush:

2 Likes

IMHO, the development of technology will eventually solve the problems “introduced” by technology, one way or another, as long as we’re lucky enough to be in a timeline (in a multiverse context) that doesn’t include self-destruction. Let’s have the spirit of adventurers and take our chances.

New technologies will always give humanity new ways to exercise our dark side, so the risk is always there. The risk feels high because of its possible catastrophic consequences, not because of its probability.

It might seem immoral to bind the fate of ordinary people to the ship of technology. On the contrary, it’s immoral to restrict the whole of humanity just out of concern for survival, especially for the generations to come. Putting a constraint on the development of technology would not only take away the potential quality of their lives but also leave them vulnerable and unprepared for the inevitable survival challenges to come.

The singularity is just another bottleneck in human history, like the ones we left behind. It’s difficult for those of us who live in the ease that modern technologies provide to imagine the horrifying situation without them, or the twisting and winding path that led us to today.

The following quote, from a Quora answer, gives a historical perspective on this future issue:

4.5 million tons of manure were being dropped on the streets of Manhattan in 1890, EVERY YEAR, by horses carrying people to work.

That was the big environmental problem of the day. “NYC will be buried in horse manure by 1950!” screamed the headlines.

It doesn’t matter what your opinion about this was. None of the people living in NY solved the problem despite the 1000s of opinions.

People with passion for mechanics in Detroit made something called a car.

Problem solved.

Do what YOU love to do today. Surrender to the results. The more you surrender, the more results there will be.

The way you solve the world’s problems is to solve your problems. Then trust.

5 Likes

@utensil I really love the perspective of all that you’ve written. It makes a very fundamental point: it is an illusion to think that by avoiding or curtailing the development of technology we can make ourselves “safe”. There are any number of things that can cause an extinction-level event: asteroids; apathetic neglect of the earth’s environment; accidental release of pathogens stored for research purposes; the ascent to power of a sociopath with control over significant nuclear arms; release of nuclear arms due to uprisings over the world’s rising sense of wealth inequity; etc.

We are not, never have been, and never will be absolutely safe. Safety is an illusion - a goal, a destination worth pursuing, but never an inevitable end. We can’t have it by shoving our heads in the sand and ignoring our destiny.

The only thing we can do is trust the core, fundamental, and central quality that underlies all that humanity is about - which is love. Like you said, it has guided us through many challenges in humanity’s development, and it is the one thing from which all behaviors and aspirations are derived - even the bad stuff. The only reason we know there IS aberrant stuff is because of love: it grants us the fundamental point of reference to know when the good stuff is missing, without which we wouldn’t even know the bad stuff existed!

So, like you say, we need to trust ourselves and trust our future. Humanity is worth saving. It is a marvelous and wonderful accident (or design - who knows?), and we need to do all we can to ensure its survival. Like HTM Theory, we have only just begun. Who knows just how far, and in what unfathomably magnificent directions, our evolution will take us?

1 Like

I remember that in the 1990s a lot of computer scientists were predicting human-level AI around 2020. That was fairly accurate, considering the results currently coming in from deep neural nets.
I have a further prediction: a lot of algorithmic shortcuts will be found.
An algorithm that runs 100 times faster requires 100 times less hardware to do the same thing - a huge reduction in cost and power. In particular, by using (flash/NV) memory for compute, I would say you can get the power and volume down to parity with the human brain for equal functionality.
It could even be that an uneducated person who takes five fingers of vodka for breakfast is about 10 gigabytes, and a well-educated person about 100 gigabytes. I base that on the effectiveness of the sequence-learning algorithms I have seen. We are high-dimensional-input, low-dimensional-output creatures, and even there the high-dimensional input exists on much lower-dimensional manifolds. Not as much is going on as you might think.
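As an aside, here is a quick numerical sketch of that manifold point (my own illustration; the sizes and noise level are arbitrary assumptions): embed a few latent factors into many observed dimensions and count how many principal components actually carry the variance.

```python
import numpy as np

# Illustrative only: 3 latent degrees of freedom embedded in 100
# observed dimensions, plus a little noise. All sizes are arbitrary.
rng = np.random.default_rng(1)
latent = rng.normal(size=(1000, 3))    # the "real" low-dimensional signal
mixing = rng.normal(size=(3, 100))     # linear embedding into 100-D
observed = latent @ mixing + 0.01 * rng.normal(size=(1000, 100))

# Singular values of the centered data: only ~3 should be large,
# i.e. the high-dimensional input lives on a low-dimensional manifold.
s = np.linalg.svd(observed - observed.mean(axis=0), compute_uv=False)
print(np.round(s[:6], 1))  # sharp drop after the third value
```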
By the early 1990s the hardware to do human-level AI likely could have been mustered, but there was a complete lack of understanding of the concepts and algorithms necessary to do it. Actually, J. S. Albus got fairly close to workable ideas in the 1970s, but he was knocked off track because the hardware wasn’t developed enough for him to explore or try things out.

@Fraz_J

Actually, I would argue that it’s not the improved algorithms that have enabled the recent huge advances in AI, but increased hardware capability and GREATLY increased training datasets.

Andrew Ng makes this point in a video from a few years ago on deep learning, and the same point has been noted previously by others such as Peter Norvig and Sebastian Thrun.

1 Like

Everyone has their own point of view. I would argue that at first people will find inefficient algorithms to do AI, and once they get them working they will rapidly find shortcuts - replacing complex calculations with hash-table lookups, for example. There was a recent paper on arXiv where the training time of a deep neural net was reduced to 5% of what it previously was by exactly that method. Anyway, what is interesting about deep neural nets is how compressed they are: 1 to 10 million parameters for advanced object recognition. All the GPU gigaflops are performing some sort of evolutionary search of the parameter space, giving good compression. If you want to use fewer flops, you can use more memory, it seems.
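To make that trade concrete, here is a minimal sketch of swapping computation for memory (my own toy illustration, not the method from the arXiv paper): quantize inputs onto a grid and memoize an expensive function, so repeated near-identical inputs become hash-table lookups instead of recomputation.

```python
import math
from functools import lru_cache

GRID = 0.01  # quantization step: coarser -> more cache hits, less precision

def _quantize(x: float) -> int:
    return round(x / GRID)

@lru_cache(maxsize=1_000_000)  # the hash table that replaces computation
def _cached_activation(qx: int) -> float:
    # Stand-in for some expensive calculation.
    return math.tanh(qx * GRID)

def activation(x: float) -> float:
    return _cached_activation(_quantize(x))

# Nearby inputs map to the same grid cell and hit the cache.
for x in [0.1234, 0.1236, 0.5, 0.5001]:
    print(x, activation(x))
print(_cached_activation.cache_info())  # hits vs. misses
```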

OK, I’ll continue the discussion. With deep neural nets, I am jumping to the conclusion that the backpropagated errors sum to Gaussian noise more than to anything sensible. However, that noise allows the system to hunt around and relax to fit the data. It seems that such nets are loaded up with a fixed amount of entropy at the beginning (not too much, or they are unstable), and that entropy dissipates with training, somewhat akin to simulated annealing.
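Purely as an illustration of that annealing analogy (a toy I’m adding, with arbitrary constants - not a claim about what backprop actually does): gradient descent on a bumpy 1-D loss, with additive Gaussian noise whose variance decays over training, so the search can hunt around early and settle later.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # A bumpy objective with local ripples around the global minimum near w = 3.
    return (w - 3.0) ** 2 + 0.5 * np.sin(5.0 * w)

def grad(w, eps=1e-5):
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)  # numerical gradient

w, lr = -5.0, 0.05
for step in range(500):
    sigma = 1.0 / (1.0 + step) ** 0.55  # injected "entropy" dissipates over time
    w -= lr * (grad(w) + rng.normal(0.0, sigma))

print(f"final w = {w:.3f}, loss = {loss(w):.4f}")
```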
I have a numerical optimisation algorithm that I wrote a long time ago that can dive quite quickly to a solution. It could be suitable, though it would likely take a bit longer than backpropagation. It’s not too sensible to try it on a laptop; 100 GPUs would give a better indication of its merit, however…
Anyway, I also have a decision/prediction-tree idea with adaptive pattern-recognising predicates to try, but there are some complex algorithm choices to make.
Ten years ago, AI was locked into heavily worn furrows in the road with few options. Now there are too many options to cope with!

1 Like

In my last post, I was basically only replying to the first sentence of @cogmission’s post. Even given the possibility of an “AI doomsday”, there’s still no reason to stop developing AI. The same argument applies to the development of other technologies.

I have a response to the “all-powerful AI” theory, but it’s too long and I haven’t found a way to shorten it. I’ll write down the first part here:

The “all-powerful AI” argument is not valid.

A strong AI is most likely to be born in a constrained environment, with great mental power but poor physical power:

  • it’s likely to be created in a lab, running on a cluster, with no physical body like a robot’s
  • it’s likely to be constrained a priori due to moral concerns
  • even if it wakes up in a robot or gains control over remotely controllable machines, it still has no legal right to its physical existence

It has to change this initial state to become “all-powerful” (or the argument loses its ground), or at least more powerful than it starts out. The change would not be immediately welcomed by humans, and escalating actions might be taken against it. Both sides would have no choice but to deal with the situation and fight for themselves until a new balance is established. There will be blood on the path to peace - the AI’s blood or ours.

One way or another, there will be no immediately ideal environment for a newborn strong AI. I’m not being pessimistic here; it’s simply how history rolls. It’s a new race, and there’s no seat ready for it; it will do what it can to earn its place.

I’ll continue the discussion in the next post, assuming the AI has reached a level of existence far beyond ours. Even then, I see no certainty of its kindness to us.

1 Like

If I could share an opinion on “peace” and its requirements…

My own definitions in context, in ascending levels of being:

  1. Knowledge - compositions of, or individual items in, a feature set describing the universe.
  2. Intelligence - a three-part measure of distance (amount of knowledge), velocity (assimilation speed), and tactics (the use of knowledge in heuristic form to bring about a desired circumstance).
  3. Wisdom - the use of knowledge in heuristic and self-assessment forms to bring about a desired circumstance, with an increased understanding of that use as it impacts the surrounding world.
  4. Enlightenment - the moment-by-moment ontological experience of seeing the universe for what it is. Not “knowing” it, but experiencing it. Being (not understanding) that reality is generated (not assembled or arrived at) - that it is brought to the table, created, not the effect of influencing or managing circumstances. This is experience, not something brought about by understanding. Being (not knowing or understanding) that there are different domains of knowing, not just the conceptual domain whose access is the primary access in human culture (not all, but most). Being that distinctions are not discovered or revealed, but “unconcealed” (always there but not recognized).

This is only my opinion, but I don’t see this level as a level of “prowess”, since its nature is experiential; it isn’t something that can be arrived at and then noted as “there”. It is constantly RE-acquired (in the moment). The frequency of re-acquisition is what there is to be “worked on”, and for which a level of prowess might be attained.

Now I’m going to say this as an assertion, because it can’t be proven (one must, and can only, prove it for oneself, and only for oneself)…

It is this level, the level of “Enlightenment”, that I equate to “super-intelligence” - not the ability to rapidly uncover knowledge, or even to rapidly use it. So we can’t blindly attribute levels of “cleverness” to what a real singularity will be. In my opinion, it won’t be anything like cleverness! No.

Super AIs will be at “Enlightenment”, and at that level, I humbly assert in my small mind, the only truth there is - is unification: that we are all one and of the same fabric, and that integrity, wholeness - love - is all there is, because it is the fundamental ingredient. It is where we come from. (The aberrant behavior proves it!)

Anyway, what I’ve been trying to say is that the quality of what a super-intelligence is, or will be, won’t be merely a step on the ladder of “smarts” - and so the level of affinity and empathy it will have for all living things, not just itself or us, will be unfathomable.

The doomsday stories are all human-created fictions, arrived at by individuals who really aren’t coming from enlightenment and so struggle valiantly to imagine themselves in the footsteps of a super-intelligence - which results in the shortsighted scenarios we have for entertainment.

The real thing will be vastly different. It’s the difference between eating the Menu and eating the Meal.

I don’t think we should worry about AI destroying the planet, but there are less extreme ways technology can hurt people. AI could take away lots of jobs, and it has already taken away a few. The development of a general intelligence could do more harm than good. Even if AI helps in the long run, preventing harmful job loss is important.

1 Like

I think people on this thread would like this discussion between Sam Harris and Max Tegmark on the threats of AI and the probability that we live in an AI simulation. If you have some interest in the latter, check out this great article responding to Musk’s recent comments on the topic.

1 Like

Job loss is inevitable. We should not hold back progress simply to artificially keep people working.

If the US tries to stop AI progress simply because it will increase unemployment, it will get left behind by countries willing to press ahead.

Society will simply need to adjust to an economic system that relies not on labor but on some other scarcity, or it will need to provide a universal basic income.

I can think of no benefit to keeping people working when machines can do it more efficiently or cheaply. That would be tantamount to employing one group of people to dig holes while employing another group to fill them in - it would make no sense.

4 Likes

Love is real when we are in it, but it lasts only for rare, tiny moments. It does last longer, but only in its subsided form. Just as @cogmission points out, Enlightenment “is constantly re-acquired (in the moment)”. It’s an emergence from multiple sources: perceptual cognition, rational knowledge, emotional impulses, and precious memories.

Personally, I am strongly influenced by Reductionism and Materialism, and I tend to break things down into their building blocks and consider the interactions and dynamics.

For example: love can be viewed biologically as just the result of secreted chemicals, which fade faster than the pure spiritual form we believe in; love can also be viewed psychologically as seeking elements similar to our self-projection in others, or as the resolution of our kinks - and these views cannot evolve covariantly over time.

I would also break Ethics down into some form of Utilitarianism, consider morality some form of wisdom, consider EQ some form of IQ - and the list goes on. It’s not that I don’t have faith or enthusiasm in the pure spiritual form of these humanitarian things; it’s just that I feel the need for more solid ground, to prevent them from being invalidated by considerations from a realistic point of view.

This is the background to why I won’t jump from the rapid development of AI to the singularity theory, nor from a strong AI to a super, all-powerful AI, nor from mental power to love and kindness. The far ends lie on the extended line of logic, but there are huge gaps of many logical steps and too many variables in between. I can’t get hold of the logical inevitability that you have declared in your main post.

That’s why I’m forced to settle on a hopeful view despite all these real and potentially catastrophic dangers. I’d like to end this post with the following quote:

Bran thought about it. ‘Can a man still be brave if he’s afraid?’
‘That is the only time a man can be brave,’ his father told him.

― George R.R. Martin, A Game of Thrones

2 Likes

I’m not arguing for holding back progress. I’m arguing that everyone who contributes to progress needs to do so carefully, e.g. at least by considering the impacts and educating others about them. That doesn’t contradict progress.

BTW, this is not an attack on HTM. From my perspective, Numenta is doing a fantastic job on the ethics side. My only point is that doomsayers are raising important considerations, because they are starting a dialogue about possible negative impacts.

1 Like

+1. Very well said.

That’s exactly why we need to embrace new creations and advances as early as possible. The earlier we do so, the more prepared we can be, and the incautious damage done in the process will also be minimized.

Sometimes “whether” is never the issue; “how” is.