We're the last analog object in a digital universe

This here has been in the news this weekend. Mostly it's the usual rah-rah about AlphaGo, Watson, and what have you, which isn't worth watching. But at 1:08:45 they say:

[…] an unlimited amount of intelligence: The closest we can get to that is by evolving our intelligence by merging with the artificial intelligence we’re creating. Today our computers, phones, and applications give us superhuman capability. So, as the old maxim says, if you can't beat 'em, join 'em.

It’s about a human-machine partnership. I mean, we already see how our phones, for example, act as memory prostheses. I don’t have to remember your phone number anymore because it’s on my phone. It’s about machines augmenting our human abilities as opposed to completely displacing them.

If you look at all the objects that have made the leap from analog to digital over the last 20 years, it’s a lot. We’re the last analog object in a digital universe. And the problem with that of course is that the data input/output is very limited. It’s this [points to mouth], it’s these [wiggles fingers].

Our eyes are pretty good; we’re able to take in a lot of visual information. But our information output is very, very, very low. The reason this is important, if we envision a scenario where AI plays a more prominent role in society, is that we want good ways to interact with this technology, so that it ends up augmenting us.

It is incredibly important that “AI” not be “other”. It must be “us”. […] We’re either going to have to merge with AI or be left behind.

Discuss.


I would like to see some research on how human brains have been adapting neurologically to the digital species. While AI will only get smarter and more powerful, the human species has been resilient. I guess the fear is that our over-dependency on AI, or on any smart technology that makes decisions for us, will weaken humans as an adaptable species.

I’m all for transhumanism, sign me up; after all, we’ve been augmenting our abilities with technology for centuries. Regarding the direct interface, though, I’m not sure about the argument that our existing senses are huge bottlenecks: doesn’t the visual cortex already have quite high bandwidth into the rest of the neocortex?

I’m also not convinced that our information output is “very, very, very low”, particularly when you consider things like tone of voice, facial expression, body posture, etc. Isn’t information richness or density one of the major challenges limiting human-AI interaction?

It’s like I told an analog sound engineer a long time ago: digital will replace analog. He said it couldn’t be done, and he was right for the time, because people were thinking of 8-bit sound. That’s how we are with robots and AI: we are thinking in 8-bit, or even 4-bit, terms. I worked for a company whose machines had 16 colors on the screen, twice as many as the original IBM PC. Before I left the company, the PCs had 1,024 real colors for $5-something, and within several months they had 16 million colors with an add-on card.

We are at the 4-bit Texas Instruments level, just getting started.

That’s why I am concentrating on digital humans on a screen instead of a real robot. There are many, many problems to work out.

This certainly is a weekend kind of topic. I guess I should take it in order. First, this comes from a website that seems to be run by James Barrat, the author of a book titled Our Final Invention: Artificial Intelligence and the End of the Human Era. James is listed as the first “expert” among a long list of experts including Ray Kurzweil, Sebastian Thrun, and Elon Musk. The last expert is Shivon Zilis; she’s listed as Project Director at Neuralink / OpenAI / Tesla. Neuralink is Elon’s brain-link startup that opened an A round with $27M last August, but I haven’t heard anything else; I think he’s been busy, but @jimmyw might want to watch that space. So I guess I understand the other end, “merge with AI or be left behind”. They had better watch their power management, though, or it’ll be the merged who’re left behind.

In between there is some fuzzy logic to deal with, such as insisting that we must evolve faster. That might be nice, but evolution goes at its own pace. It is the tech that needs to evolve, and it is, and will, faster than some people like. @Chris has it right: we put out lots of information, but the computers aren’t paying attention to most of it. They will soon, though. Cadillac’s new autopilot watches the driver’s eyes to see that they are paying attention to the road. Situational awareness will come soon enough, as will sentiment analysis from tone of voice and facial expressions, not to mention our heart-rate sensors and glucose monitors. Along the way AI will evolve to be less and less “other”, but we still may have to worry about falling into the uncanny valley.


That’s why I want to concentrate on Unreal Engine 4 (UE4). Last night I was watching a video called “Learning Digital Humans by Capturing Real Ones” by Michael Black. I had seen some of the clips before, but not in as much detail as this.

UE4 lets real actors, captured through devices, move and talk as avatars that can cross the uncanny valley. He also gives some reasons why we would want digital humans. I think I have some scans of real people (from Black), but I would have to ask Alexa where I have put them on my drive.

Unity and UE4 don’t have real AI, but both are working on it. Meanwhile there is https://www.neuralink.com/. I think I saw this somewhere else, in connection with motion sensors.

True analog demands the continuum, the infinite, and it does not exist. Even the membrane potential is set by a finite, discrete number of particles across the membrane, at imprecise locations. Any real-world system must have such a limit, which means it can be matched by a digital system of sufficient precision; otherwise true analog would exist, and if any means of infinite precision were available, hypercomputation would be possible: you’d basically have a real-number computer.

The brain is digital and discrete, especially in the action-potential signals that travel long distances. But as I said, even the membrane and the particles across it are not truly analog; they can only represent a finite, discrete number of states with finite, quite limited precision.
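A minimal numerical sketch of that finite-precision point, with a made-up noise floor and voltage range purely for illustration: once the physical signal’s own noise is fixed, a modest number of bits already reproduces it to within that noise.

```python
# Sketch only: the -70 mV resting level, the -90..-40 mV range, and the 0.5 mV
# noise figure are assumptions for illustration, not measured values.
import numpy as np

rng = np.random.default_rng(0)

# Toy "membrane potential" in mV: a slow oscillation around -70 mV plus noise.
t = np.linspace(0.0, 1.0, 10_000)
signal = -70.0 + 5.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(t.size)

lo, hi = -90.0, -40.0  # assumed usable voltage range in mV

for bits in (4, 8, 12, 16):
    step = (hi - lo) / (2 ** bits)                         # quantization step size
    quantized = lo + step * np.round((signal - lo) / step) # nearest digital level
    rms_error = np.sqrt(np.mean((quantized - signal) ** 2))
    print(f"{bits:2d} bits: step {step:.4f} mV, RMS quantization error {rms_error:.4f} mV")

# By 8-12 bits the quantization error is already well below the assumed 0.5 mV
# noise, so the digital copy is indistinguishable from the "analog" original.
```

The RMS error of a uniform quantizer is roughly step/√12, so each extra bit halves it; past the point where it drops under the signal’s own noise, more “analog precision” buys nothing.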

I have been pushing my dumb-boss/smart-advisor model here for a while. In essence, the old lizard brain is given a heavily predigested version of the world by the cortex, and the lizard reacts by emitting a coarse command for the cortex to elaborate into a more detailed version of that action. The cortex is a layer added onto the basic lizard brain.

Now that we have computers that can fetch and present vast amounts of information and vastly amplify our commands, we can extend this model: we can add an external layer to this system and radically extend the capabilities of the individual.
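To make the layering concrete, here is a toy sketch of that loop; every class and method name is an illustrative assumption, not an existing system.

```python
# Toy sketch of the dumb-boss/smart-advisor loop with an external assistant
# layer bolted on the outside. All names and behaviors are illustrative.

class ExternalAssistant:
    """Proposed outer layer: enriches the raw world before the cortex sees it."""
    def augment(self, raw_world: str) -> str:
        return raw_world + " [assistant: that is a ripe apple you like]"

class Cortex:
    """Smart advisor: digests the world downward, elaborates commands upward."""
    def digest(self, raw_world: str) -> str:
        return "food ahead" if "apple" in raw_world else "nothing notable"

    def elaborate(self, command: str) -> str:
        return {"approach": "walk forward, reach, grasp",
                "ignore": "keep scanning"}[command]

class LizardBrain:
    """Dumb boss: reacts to the predigested summary with a coarse command."""
    def decide(self, summary: str) -> str:
        return "approach" if "food" in summary else "ignore"

def one_step(raw_world: str) -> str:
    assistant, cortex, lizard = ExternalAssistant(), Cortex(), LizardBrain()
    summary = cortex.digest(assistant.augment(raw_world))  # down: world -> summary
    command = lizard.decide(summary)                       # boss: summary -> coarse command
    return cortex.elaborate(command)                       # up: command -> detailed action

print(one_step("an apple on the table"))   # -> walk forward, reach, grasp
print(one_step("a blank wall"))            # -> keep scanning
```

The point of the sketch is only the direction of flow: the external layer never commands anything itself, it just widens what the cortex has to work with, exactly as the cortex widens what the lizard brain has to work with.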

A true digital personal assistant could travel along with you your entire life and learn what you know as you learn it. It could know what you are looking at, and know your commands well enough that if you even started to wonder about something, it could offer the degree of assistance you need. Since it knows what you know and what you want to know, it can offer a personally tailored, understandable explanation.

With some good contact-lens displays and hearing-aid-style audio devices, it could whisper names in your ear when you meet people, translate speech on the fly, and tell you as much or as little as you want to know about anything you are looking at.

With a great OS and instant global communications, the current phone would look as primitive as the old schoolroom personal chalkboard. Telepresence and good robotic devices could reduce the need to travel for work, or to travel to work at all.

Children are immensely curious; a device like this would be a powerful learning tool, letting the mind go as far as it wants to go.

There are some interesting things that could become available on the road ahead. You don’t need a direct neural interface to do great things.
