We're the last analog object in a digital universe


#1

This has been in the news this weekend. Mostly the usual rah-rah about AlphaGo, Watson, what have you, which isn’t worth watching. But at 1:08:45 they say:

[…] an unlimited amount of intelligence: The closest we can get to that is by evolving our intelligence by merging with the artificial intelligence we’re creating. Today our computers, phones, applications give us superhuman capability. So, as the old maxim says, if you can't beat 'em, join 'em.

It’s about a human-machine partnership. I mean, we already see that: our phones, for example, act as memory prostheses. I don’t have to remember your phone number anymore because it’s on my phone. It’s about machines augmenting our human abilities as opposed to completely displacing them.

If you look at all the objects that have made the leap from analog to digital over the last 20 years, it’s a lot. We’re the last analog object in a digital universe. And the problem with that of course is that the data input/output is very limited. It’s this [points to mouth], it’s these [wiggles fingers].

Our eyes are pretty good, we’re able to take in a lot of visual information. But our information output is very, very, very low. The reason this is important if we envision a scenario where AI is playing a more prominent role in society, is, we want good ways to interact with this technology, so that it ends up augmenting us.

It is incredibly important that “AI” not be “other”. It must be “us”. […] We’re either going to have to merge with AI or be left behind.

Discuss.


#2

I would like to see some research on how human brains have been adapting neurologically to this digital species. While AI will only get smarter and more powerful, the human species has been resilient. I guess the fear is that over-dependency on AI, or on any smart technology that makes decisions for us, will weaken humans as an adaptable species.


#3

I’m all for transhumanism, sign me up; after all, we’ve been augmenting our abilities with technology for centuries. Regarding the direct interface, though, I’m not sure about the argument that our existing senses are huge bottlenecks. Doesn’t the visual cortex already have quite high bandwidth to the neocortex?


#4

I’m also not convinced that our information output is “very, very, very low”… Particularly when you consider things like tone of voice, facial expression, body posture etc… Isn’t information richness or density one of the major challenges limiting human-AI interaction?
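To put some rough numbers on that debate (my own back-of-envelope, with assumed figures, not anything from the talk): deliberate symbolic output like typing really is tiny compared with often-cited order-of-magnitude estimates for visual input, even if tone of voice and body language add extra channels that aren’t counted here.

```python
# Back-of-envelope comparison of deliberate human output vs. visual input.
# Assumed figures: ~1 bit of entropy per English character (Shannon's estimate)
# and a commonly cited retinal-input estimate on the order of 10 Mbit/s.
typing_chars_per_sec = 5        # ~60 words per minute
bits_per_char = 1.0             # rough entropy of English text
output_bps = typing_chars_per_sec * bits_per_char

visual_input_bps = 1e7          # order-of-magnitude estimate only

print(f"typed output : ~{output_bps:.0f} bits/s")
print(f"visual input : ~{visual_input_bps:.0e} bits/s")
print(f"ratio        : roughly {visual_input_bps / output_bps:,.0f}:1")
```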


#5

It’s like I told an analog sound engineer a long time ago: digital will replace analog. He said it couldn’t be done. And he was right, for the time, because people were thinking of 8-bit sound. That’s where we are with robots and AI: we’re thinking in 8-bit, or even 4-bit, terms. I worked for a company that had 16 colors on the screen, twice as many as the original IBM PC. Before I left the company, the PC had 1024 real colors at $5-something, and within several months they had 16M colors with an add-on card.

We are at the 4-bit Texas Instruments level, just getting started.
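Just to make the analogy concrete (my arithmetic, not from the original post): every extra bit doubles the number of values you can represent, which is why the jump from 4-bit color to a 24-bit add-on card feels like a different world rather than an incremental step.

```python
# Each additional bit doubles the number of representable values.
for bits in (4, 8, 16, 24):
    print(f"{bits:>2}-bit -> {2 ** bits:>10,} distinct values")
# 4-bit covers the 16-color era; 24-bit gives the ~16.7M colors
# of the later add-on cards.
```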


#6

That’s why I am concentrating on digital humans on a screen instead of a real robot. There are many, many problems to work out.


#7

This certainly is a weekend kind of topic. I guess I should take it in order. First, this comes from a website that seems to be run by James Barrat, the author of a book titled Our Final Invention: Artificial Intelligence and the End of the Human Era. James is listed as the first “expert” among a long list of experts including Ray Kurzweil, Sebastian Thrun and Elon Musk. The last expert is Shivon Zilis; she’s listed as Project Director at Neuralink / OpenAI / Tesla. Neuralink is Elon’s brain-link startup that opened an A round with $27M last August, but I haven’t heard anything else; I think he’s been busy, but @jimmyw might want to watch that space. So I guess I understand the other end, “merge with AI or be left behind.” They had better watch their power management, though, or it’ll be the merged who are left behind.

In between there is some fuzzy logic to deal with, such as insisting that we must evolve faster. That might be nice, but evolution goes at its own pace. It is the tech that needs to evolve, and it is, and will, faster than some people would like. @Chris has it right: we put out lots of information, but the computers aren’t paying attention to most of it. They will soon, though. Cadillac’s new autopilot watches the driver’s eyes to see that they are paying attention to the road. Situational awareness will come soon enough, as will sentiment analysis from tone of voice and facial expressions, not to mention our heart rate sensors and glucose monitors. Along the way AI will evolve to be less and less “other”, but we may still have to worry about falling into the uncanny valley.
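For what it’s worth, here is a toy sketch of the kind of check a driver-attention monitor might run. The function names and the two-second threshold are made up for illustration; this is not how any particular carmaker’s system is actually implemented.

```python
import time

EYES_OFF_ROAD_LIMIT = 2.0   # seconds; illustrative threshold, not a real spec

def monitor_driver(gaze_on_road, alert):
    """Warn when the driver's gaze has been off the road too long.

    gaze_on_road() -> bool and alert(msg) are hypothetical callbacks
    standing in for the camera/gaze-estimation and warning hardware.
    """
    last_on_road = time.monotonic()
    while True:
        if gaze_on_road():
            last_on_road = time.monotonic()
        elif time.monotonic() - last_on_road > EYES_OFF_ROAD_LIMIT:
            alert("Eyes on the road, please.")
            last_on_road = time.monotonic()   # reset after warning
        time.sleep(0.1)                       # poll ~10x per second
```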


#8

That’s why I want to concentrate on Unreal Engine 4 (UE4). Last night I was watching a video called “Learning Digital Humans by Capturing Real Ones” by Michael Black. I had seen some of the clips before, but not in as much detail as this.

UE4 has real actors who move and talk through the devices, and avatars that can pass the uncanny valley. He also gives some reasons why we would want digital humans. I think I have some scans of real people (from Black), but I would have to ask Alexa where I have put them and my drive.

Unity and UE4 don’t have real AI, but both are working on it. Meanwhile there is https://www.neuralink.com/. I think I saw this somewhere else, dealing with motion sensors.