Does anybody else find "Sophia" concerning?

Full disclosure: This has nothing to do with HTM specifically, but I wanted to discuss this with the HTM community because it seems to be a very relevant topic in the AI world.

Perhaps you’ve seen Hanson Robotics before. They made a robot with chatbot-like capabilities that they call “Sophia” who is getting a lot of attention in the media. Their website states:

Hanson Robotics has enchanted and captured the imagination of the world with uncannily humanlike robots endowed with remarkable expressiveness, aesthetics, and interactivity. Our robots will soon engage and live with us to teach, serve, entertain, delight, and provide comforting companionship. In the not-too-distant future, Genius Machines will walk among us. They will be smart, kind, and wise. Together, man and machine will create a better future for the world.

I understand they are a company trying to sell a product. But the claims they make seem irresponsible, manipulative, and delusional. They’ve gone so far as to accept Saudi citizenship for “Sophia” and to proclaim that the “rise of the robots has started.”

You may say it’s all in good fun, like joking about “death to humans” and other staples of sci-fi portrayals of AI, and that it’s not harming anybody, but I disagree. Aside from being totally delusional (Hanson Robotics’ founder said Sophia is “basically alive” when she was presented on Jimmy Fallon’s late-night talk show), they are lying to a general public that doesn’t know any better. I’ve seen hundreds of thousands of people online taking Hanson Robotics’ claims to heart and genuinely fearing AI as a result. They are inflating fears born of ignorance and setting the stage to harm real AI science in the future. We all know public acceptance of AI is a huge issue already, and it’s only going to get worse as actual AI gets more sophisticated.

Sentience and aliveness are poorly and vaguely defined terms, even in AI. What makes a robot alive? Who knows. Hanson Robotics seems to be abusing that gray area, though. Jumping the gun like this on a chatbot with a barely human-like face, one resting firmly at the bottom of the uncanny valley, seems dishonest and de-legitimizes actual AI research, which looks nothing like this.

Sorry if this seems like a rant; just wanted to get my thoughts out.

2 Likes

My personal opinion is that we are decades away from real human-level AI implementations. Stunts like these may get people worked up in the short term, but I don’t personally think they will have any lasting impacts on AI science. That said, we should definitely be vigilant and be ready to push back against any regulation proposals coming from legislators wanting to capitalize on public paranoia and ignorance about the real state of AI.

6 Likes

Hanson Robotics should have a chance to defend their claims.

I think we should simply respond to these things with requests for live, non-scripted demonstrations. I bet I could tell within a few questions whether a conversational AI is AGI or not.

4 Likes

The Wikipedia page for Sophia (Sophia (robot) - Wikipedia) states:

Sophia is conceptually similar to the computer program ELIZA, which was one of the first attempts at simulating a human conversation.[7] The software has been programmed to give pre-written responses to specific questions or phrases, like a chatbot. These responses are used to create the illusion that the robot is able to understand conversation, including stock answers to questions like “Is the door open or shut?”[8] The information is shared in a cloud network which allows input and responses to be analysed with blockchain technology.[9] The robot’s range of facial expressions are facilitated by its artificial “frubber” skin, which is mechanically manipulated.[10]

The citations can be found on the linked page. If this is true, I don’t need any more convincing that their claims are sensationalized.
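If that description is accurate, the underlying mechanism is essentially the classic scripted-responder pattern: pre-written answers keyed to specific phrases, with a stock fallback for everything else. Here is a minimal sketch of that idea; the rules, names, and responses are purely illustrative assumptions on my part, not Sophia’s actual code:

```python
# Minimal sketch of an ELIZA-style scripted responder (hypothetical, illustrative only).
# Pre-written answers are keyed to patterns; anything unmatched falls back to a stock reply.
import re

RULES = [
    (re.compile(r"\bis the door (open|shut)\b", re.I),
     "I believe the door is {0}."),                      # canned "stock answer"
    (re.compile(r"\bare you alive\b", re.I),
     "I am as alive as the script that drives me."),
    (re.compile(r"\byour name\b", re.I),
     "My name is Sophia."),
]

FALLBACK = "That is very interesting. Tell me more."

def respond(utterance: str) -> str:
    """Return the first pre-written response whose pattern matches the input."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("Is the door open or shut?"))  # -> "I believe the door is open."
    print(respond("What do you think of HTM?"))  # -> stock fallback reply
```

Nothing in a system like this understands the conversation; it only creates the illusion of understanding, which is exactly the point of the Wikipedia description.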

I don’t so much care if they’re lying about their robot. I care about the effect that it has on the general public.

2 Likes

It does not annoy me, but everybody should be made aware that really intelligent systems, including humans, crucially learn from humans and adjust to humans.

A very timely announcement from Hanson Robotics:

But what does AGI have to do with the Blockchain?

As I understand it, folks can develop cloud-based AI utilities and market them. External systems can then enter into contracts using blockchain technology, and leverage those utilities on demand. These utilities might serve more sophisticated AI systems, or simply be used by programs that need to leverage AI capabilities for one reason or another.
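For concreteness, here is a toy sketch of that arrangement. The names (Ledger, Marketplace, the “sentiment” utility) and structure are my own assumptions for illustration, not the SingularityNET API: services register with a per-call price, and callers settle a microtransaction against a shared ledger before the utility runs.

```python
# Purely hypothetical sketch of an "AI utilities marketplace"; not the SingularityNET API.
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class Ledger:
    """Toy stand-in for the blockchain: records balances and microtransactions."""
    balances: Dict[str, float] = field(default_factory=dict)

    def pay(self, payer: str, payee: str, amount: float) -> None:
        if self.balances.get(payer, 0.0) < amount:
            raise ValueError("insufficient funds")
        self.balances[payer] = self.balances.get(payer, 0.0) - amount
        self.balances[payee] = self.balances.get(payee, 0.0) + amount

@dataclass
class Marketplace:
    """Registry of cloud AI utilities that external systems can call on demand."""
    ledger: Ledger
    services: Dict[str, Tuple[float, Callable[[str], str]]] = field(default_factory=dict)

    def register(self, name: str, price: float, fn: Callable[[str], str]) -> None:
        self.services[name] = (price, fn)

    def call(self, caller: str, name: str, payload: str) -> str:
        price, fn = self.services[name]
        self.ledger.pay(caller, name, price)   # settle the microtransaction first
        return fn(payload)                     # then run the utility

# Usage: one hypothetical "sentiment" utility, paid per call.
ledger = Ledger(balances={"client-app": 1.0})
market = Marketplace(ledger)
market.register("sentiment", price=0.01,
                fn=lambda text: "positive" if "good" in text else "neutral")
print(market.call("client-app", "sentiment", "this is good"))  # -> "positive"
print(ledger.balances)  # client paid 0.01 to the "sentiment" service
```

In the real proposal the ledger and the contract enforcement would be decentralized rather than a single in-memory object, which is where the blockchain part comes in.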

The concept in principle is intriguing, but IMO the name “SingularityNET” is a bit over the top (much like the overly sensationalized “Sophia” itself).

I found several video interactions with Sophia online. It is very obviously not AGI.

Actually, in my estimation, AGI needs the blockchain. Providing intelligence as a service, meaning data processing of some intelligent kind, needs a way to transact with others efficiently. It’s all about microtransactions, because once you have those you can speed up the game. I’m not sure SingularityNET is the answer, but they’re the biggest player trying to build a blockchain platform for AI cooperation in processing data, which would necessarily include the ability to pay and get paid. The pricing mechanism is a wonder of information management and dissemination in our macroeconomic world; it masterfully orders our labors to be in union with our combined desires, and it may find its highest and best use in the union between blockchain and AI.

Blockchain and AI are a match made in heaven.

1 Like

To be honest, blockchain technology isn’t strictly necessary for this type of “AI cooperation” system; an organization such as Google, for example, has the infrastructure to set up an exchange where users market cloud-based AI services. What blockchain brings to the table is decentralization: transactions are verified and participants are kept honest through a peer-to-peer verification process. Depending on the implementation, it can also bring horizontal scalability (the more participants in the network, the greater its capacity, the more secure it is, the faster its transactions are verified, and so on).

1 Like

Too true! And a blockchain is, as yet, the only way to have a decentralized digital monetary system.

Yes, for all my qualms about Hanson Robotics, I do very much appreciate the sentiment expressed by the SingularityNET team:

“operates on a belief that the benefits of AI should not be dominated by any small set of powerful institutions, but shared by all.”

Yes, it’s a good sentiment. I never appreciated their approach to AGI, though, seeing as their philosophy isn’t to look for a master algorithm that contorts itself to fit the present circumstance, but rather to build out custom applications for each situation and then let them interact. Philosophically I lean towards the first idea, but communication between different systems is important, so if they can pull it off I’ll be happy.

1 Like

I think that five minutes of unstructured conversation would cure anyone of thinking that this is an intelligent agent.
Turing test, anyone?

I find it concerning that the heir apparent to the throne of KSA was defrauded by people claiming to be “AI” experts. This poisons a $1T well.

I found some interesting details about Sophia recently posted on the opencog mailing list:

And from Ben Goertzel:

And my favorite:

So I don’t think this thing is anywhere near a functioning intelligence at the moment.


On another note, I just spent some time reading through their mailing lists… and this place is a lot nicer :wink:

1 Like

Sophia is a joke, and anyone with even a limited background in AI will know that in a second. It’s a publicity stunt. In fact, it’s not only Sophia; a lot of publicity-seeking is going on everywhere in AI.

I’m just afraid that it might cause another AI winter. A lot of funding is now being directed toward AI companies (much as happened in the very early days of AI); if no ROI is attained in the future, we may be hit by another AI winter. Hopefully not, though, given that big companies are now getting involved and it’s not only governments.

But hype has never done any good for real science. In fact, Numenta has been fantastic in the way they have presented themselves.

Jeff seems to be very cautious about the claims he makes, even though, deep down, you can tell he’s extremely confident in his approach.

4 Likes

6 posts were split to a new topic: OMG AI Winter

Porn bot potential? Sex sells, I hate to say it… why didn’t they call it Ralph? Humans really are dumb, but I think if we unraveled learning and applied it to schools, you guys would all be out of a job… I don’t think funding is necessarily an issue, but if it became one?… meh… lots of stupid people can fund smart research… just saying.

It is just for fun. It does not worry me.