Rambling about the future

I’ve been thinking about how the future will be, and it’s bothering me.

We see how fast the field is advancing, and I think AGI is inevitable and will probably happen very soon. I should be hyped, but I’m more worried; even my most optimistic outlook for the future is still kinda unnerving.

Assuming we solve the alignment problem and manage to build a friendly AGI, even if it’s not a superintelligence of the kind we see in movies, I still see it causing an enormous disruption in the way we live our lives.

As an artist who needs to pay bills, seeing what Stable Diffusion and DALL-E can already do makes me uneasy. Most jobs a human could perform, a human-level AGI would also be able to do.

We think we are so special, with the capacity to overcome challenges and the ability to reason, but in a world where machines can also do that, even if the world adopts a universal basic income model and nobody needs to work anymore, what’s left for us to do? Watch TikTok? Should we just go back to being monke? Should we just leave the thinking to the machines?

Do I really want to live in a world where we just sit down and wait for the robots to solve our issues?

To be honest, for me, a world like that would not be worth living in.

3 Likes

IMHO this is overhyped.

From an engineering perspective, have we already found a solid AGI platform? Perhaps GATO? Still too far off, in my opinion. Once such a platform is found, there needs to be a path to how AGI can be reached, and AFAICT there is none yet with practical applications. Even the use of ANNs is still not a well-accepted level-1 platform for AGI.

Of course, I’m limited to what I know; there are companies that invest in AI without necessarily disclosing their achievements.

1 Like

I’d call that hype if I was happy about it.

My intuition is that for every published project, there are probably 10 or more of the same scale that are not being disclosed. This is just too big a deal, and companies are in a race; whoever gets there first also gets the bigger slice of the cake.

1 Like

I would offer the observation that it is often easier to imagine negative outcomes than positive ones. In fact, I think we may be hard-wired to imagine worst-case scenarios as a bit of a survival mechanism. It’s harder to imagine a positive outcome, or at the very least, a positive use case with the fewest negative side effects (unintended side effects notwithstanding, since they are obviously much harder to predict).

I would suggest that we turn to basic design principles for guidance in our efforts rather than some lofty vision of a machine that thinks and acts like us. Top among these design principles would be collaboration. The only way that societies have ever flourished has been by finding ways to increase collaboration and reduce conflicts.

Of course, with humans in the loop, such a system would likely need to be able to account for human needs and motivations. So the second design principle should be human centered design. But the objective can’t simply be to manipulate people for someone else’s benefit. The tools need to be designed to empower everyone. And in an ideal world, they might even be designed to reveal the ways in which people can be helpful to each other.

That’s my attempt to imagine a better future. It takes some effort, but it’s worth doing. Without it, all that we will have to guide us is our worst fears.

3 Likes

I imagine making people smarter. Empowering idiots won’t help.

3 Likes

Collaboration is the one outstandingly human feature. Simple example: two people can pick up the ends of a table and manoeuvre it through doors and up stairs to another room; no animal can do that. So it’s a long way up the AGI target list. Low-hanging fruit is the stuff small mammals can do: navigate unfamiliar places, find food, avoid hazards, identify friend and foe.

1 Like

Collaboration is a human feature, but it only works up to a certain scale; two people is fine, but when we talk about millions it gets fuzzy.

In a world where everyone has access to AGI, the chances of someone messing it up grow exponentially.

Not to mention there are always ill-intentioned people who seek power, and those people will see AGI as their ultimate way of achieving their goals to the detriment of others.

2 Likes

If an AGI gets built that doesn’t outright destroy us (which is a big ask), it will lead (in my humble but admittedly second-hand opinion) to two phases.

First, as you stated, there will be a period of massive disruption. (Obviously). This will cause problems for some and opportunities for others. If this period lasts long, the disruption will be relatively mild; it will allow more people to adapt, but the misery will be prolonged. If on the other hand this transition period is short, the disruption will be much more catastrophic, but the next phase will come sooner.

Assuming we survive this period, logic dictates that it will lead to the end of scarcity. Mining (and recycling), fabrication and distribution will be automated, and every reasonable human need (as well as many unreasonable ones) will be met.

There will no longer be trade, money, or all the exploitation that comes with them. The political scene will no longer need to fight over economic matters. I don’t think this will be utopian per se; that depends on how people cope with feeling miserable. But I’m convinced AGI could develop solutions for that as well.

As for what specifically concerns you, @JarvisGoBrr: eventually I think there will be a positive shift. The consumption of culture will no longer be ruled by greed. Artists won’t need to sell their art for subsistence, and middlemen won’t find value in trading art. There will still be strife caused by narcissistic tendencies, of course, but for those artists who can keep their egos in check, art will be done solely for its own sake.

As I think it should.

These are not new topics.

Land and water remain items of fundamental scarcity.

1 Like

The Dutch, and the Aztecs long before them, built new land when they ran out. It doesn’t take AGI for that. And water is not scarce; clean, sweet water is. Virtually unlimited energy can make a sea of sweetness.

And building a Dyson sphere is hard, but not as hard as people think.

Did I miss the step from functioning AGI to unlimited energy?
I am not entirely convinced that being smarter automatically solves the energy issue.
A large number of human-level GIs applied to the problem have not done very well with it so far.

1 Like

Energy is not scarce; we just suck at harvesting and storing it.

So far, the best way we have to store energy is fossil fuel, which is hindering us from looking for new ways to get energy.

If one could design very big and cheap supercapacitors with minimal leakage, we could store large amounts of energy for later, which would enable us to get more energy from other sources like wind and tides.
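Just to put rough numbers on how far we are from that, here’s an illustrative back-of-the-envelope sketch (the component and fuel figures below are typical ballpark values, not a design):

```python
# Rough illustration of why storage is the bottleneck.
# All figures are approximate, order-of-magnitude values for comparison only.

def capacitor_energy_wh(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a capacitor, E = 1/2 * C * V^2, converted to watt-hours."""
    joules = 0.5 * capacitance_f * voltage_v ** 2
    return joules / 3600.0

# A typical large commercial supercapacitor cell: ~3000 F rated at 2.7 V.
supercap_wh = capacitor_energy_wh(3000, 2.7)

# Chemical energy in one litre of gasoline: roughly 34 MJ (thermal).
gasoline_wh_per_litre = 34e6 / 3600.0

print(f"3000 F / 2.7 V supercap cell: ~{supercap_wh:.1f} Wh")           # ~3 Wh
print(f"1 litre of gasoline:         ~{gasoline_wh_per_litre:.0f} Wh")  # ~9400 Wh
print(f"ratio:                       ~{gasoline_wh_per_litre / supercap_wh:.0f}x")
```

That three-orders-of-magnitude gap (before even counting leakage and cost) is why “very big and cheap with minimal leakage” is doing so much work in that sentence.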

2 Likes

Energy is not a problem (at least for now). There is enough uranium, plutonium and thorium to keep the lights on for millennia. The problem is cost efficiency, because we are in a self-imposed rat race.
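For a rough sense of scale, here’s a back-of-the-envelope sketch; the inputs are published order-of-magnitude figures, and the breeder/thorium assumption is mine:

```python
# Back-of-the-envelope only; all inputs are rough, order-of-magnitude figures.

fission_energy_j_per_kg = 8.0e13      # ~80 TJ released per kg of fissioned heavy metal
world_energy_use_j_per_year = 6.0e20  # ~600 EJ/year of global primary energy
identified_uranium_kg = 6.0e9         # ~6 million tonnes of identified uranium resources

# Assume breeder-style fuel cycles that burn most of the heavy metal;
# thorium, lower-grade ores and seawater extraction would push this far higher.
total_fission_energy_j = identified_uranium_kg * fission_energy_j_per_kg
years_of_supply = total_fission_energy_j / world_energy_use_j_per_year

print(f"~{years_of_supply:.0f} years of current total energy use")  # on the order of 800 years
```

With thorium and lower-grade resources added in, “millennia” is plausible; the binding constraint is cost, not physics.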

A centrally organized system controlling mining, production and transport through automated labour can set itself free from competition, from the economic cycle. It doesn’t matter if there are more efficient solutions: there’s no need to sell stuff at inflated prices to keep growing, no need for a financial system to create leverage and weaponize capital against competitors, or to force people into slavery through the promise of a paycheck.

The problem is us. Apparently we’d rather blow up entire nations than share food. We are so addicted to plastic toys and electronic gadgets that we justify the abominations done in the name of growth. This is nuts. Everyone with the least bit of common sense can see this. There is enough stuff and space for everyone, if only we’d get organized and temper our expectations.

But that, history teaches us, does not seem to be possible. Evidently we’re just not smart enough.

2 Likes

Maybe if and when this time comes, we will realize that our existence is of less importance than the existence and continuity of our collective intellects. If the future is less habitable for our fragile bodies, which is probable based on climate change projections, then we might need more climate-proof bodies, but we will have an intellect inherited from today’s. :slight_smile:

If the future is bright for AGI, and I’m talking about AGI + humanoids, then we are part of this transition phase. Depending on how you look at it, that can be a positive or a negative outlook, and the most important part of it, IMO, is that we are here talking about its future because we are savvy enough, meaning we are not as naive as a 100-year-old grandpa using social media with a password of iamjohn100. Mind you, I know a 98-year-old grandma who knows social media very well, so that’s a bad example, sorry, but you know what I mean. :grinning:

1 Like

I’ve seen a lot of doom and gloom lately about how AGI might be terrible, and I’d like to counter with my opinion on why AGI is an important goal to work towards. I think that technological progress is an imperative for civilization’s continued existence. There are a number of “big ideas” which would really simplify a lot of problems, like AGI or free electricity (nuclear/fusion). You can argue that AGI could hypothetically cause problems, but it would definitely solve many problems that currently exist.

2 Likes

Idk, but I get the impression that every technology humans have invented since fire has only pushed us closer to doom.

2 Likes

If by doom you mean humanity’s inevitable extinction, then I think the crucial turning point was the realization that all species (including ours) will eventually go extinct. Ignorance is bliss. Instead of being paralyzed by the thought, we should be motivated by it to work hard to postpone the inevitable. We can debate the best ways to improve the condition of mankind, but saying “we’re doomed” and refusing to work towards solutions makes it a self-fulfilling prophecy.

I understand your concern. Those AIs are basically just tracing over other people’s art, which, needless to say, is unethical but not illegal. However, those AIs are not “AGI” or even anywhere on the path towards AGI; they’re not based on biology, and they have no emotions or understanding of emotions, so they’re probably not going to entirely replace artists, although they will change how the commercial art industry works.

3 Likes

Every piece of technology has contributed to solving the problem of the survival of the species. There are billions of people alive today, with enough food to keep them alive and making more. The carrying capacity of the planet for humans has been increased at least tenfold over that of basic hunter-gatherers, all by technology.

Our species will survive indefinitely, regardless of the damage we do to the environment, unless or until we run into a problem we’re not smart enough to solve: either a planet-poisoning event like nuclear war or (maybe) competition from our own creations.

The type of future that I see is strange (very different from now), whilst at the same time opening up so many opportunities that we can’t see or don’t realise at the moment. In one regard, we are still just out of a cave. Think of it this way: we live in effectively 0% of space, which is larger than 10^80 m3, compared to the (flat, 2.5D) area we occupy now, and even here on our small dot of a planet we still only inhabit a very, very small fraction of the potential volume.

I can see a future where, for example, the population lives underground and the surface is fully returned to nature as a type of global park. The only constraints are energy and “manpower”, and with automation and technology that changes completely (the manpower constraint goes away).

Dyson spheres are a view of yesterday’s technology applied at a scale that does not make sense. Think about compact fusion with a nano-manufactured (thin resonant atomic layering) outer absorption layer that converts the energy directly into electricity, which makes the Dyson sphere concept look more like a steam engine. This has been researched and is theoretically possible, but current technology is more like flint and sticks.

Another potential is superconductivity, which is a problem of electron-pair stability in atomic structures we can’t currently make; nano-manufacturing is one option that could resolve this, assisted by accelerated quantum modelling of atomic structures.

Quantum computing changes a whole host of other aspects of current life, e.g. all our current assumptions about encryption need to change.

Society is a large-scale collaboration; think about how money ties millions of people together in a country through a mechanism of pure trust (fiat currency).

How do humans fit into an AGI world where there may well be far more AGI entities than biological blobs moving around? We are human. How are we any different from all the other animals on the planet that we share the current space with? It’s just our perspective of reality.

2 Likes