Could a GPT-3 type system be useful in research?

We have had a few posts where there is some suspicion that the poster is, in fact, a GPT-3 agent.
The fact that it is just a suspicion and not blatantly obvious is in itself a tip of the hat to a successful Turing test. Following up on this, could a successful agent act as a rubber duck to help explore concepts the way we do in these forums, where people with varying backgrounds interact?

5 Likes

I think the key is how correct the answers are for them to be valuable.

If the question is about something that is widely understood and often discussed, then the probability that correct statements show up in the output blurb is high. But if the research is very specialized, controversial, or based on little data, then the output is going to be untrustworthy.

For example, I think that GPT-3 can count well in lower ranges, because the training data came from many sources where people did simple calculations. But in higher ranges GPT-3 makes more mistakes, and blatantly so.
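
That kind of claim is easy to probe empirically. Here is a minimal sketch of the check I mean, assuming the (pre-1.0) openai Python client; the engine name, prompt wording, and API key are placeholders, so adjust to whatever you actually have access to:

```python
# Minimal sketch: probe whether completion accuracy drops as operands grow.
# Assumes the (pre-1.0) openai Python client and a valid API key.
import random
import openai

openai.api_key = "YOUR_KEY_HERE"  # hypothetical placeholder

def ask_sum(a, b):
    """Ask the model for a single addition and return the raw completion text."""
    response = openai.Completion.create(
        engine="davinci",          # assumed engine name; substitute what you use
        prompt=f"Q: What is {a} + {b}?\nA:",
        max_tokens=8,
        temperature=0,             # deterministic-ish: we want its best guess
    )
    return response["choices"][0]["text"].strip()

for digits in (2, 4, 8):           # "lower ranges" vs "higher ranges"
    correct = 0
    trials = 10
    for _ in range(trials):
        a = random.randrange(10 ** (digits - 1), 10 ** digits)
        b = random.randrange(10 ** (digits - 1), 10 ** digits)
        answer = ask_sum(a, b)
        correct += str(a + b) in answer
    print(f"{digits}-digit addition: {correct}/{trials} correct")
```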

1 Like

:rofl: GPT-3! I doubt a human could replicate GPT-3 output this well.

1 Like

I personally find it most useful for stimulating the creative juices when I am exploring a concept. It is quite good at re-mixing ideas in new ways – you can start with the same prompts and go down completely different rabbit holes each time. You just have to go into it not with the intent to use it as an oracle, but as a tool to assist with free-flowing/unstructured thinking.
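
The "different rabbit holes from the same prompt" part mostly comes from sampling with a nonzero temperature. A rough sketch of what I do, again assuming the (pre-1.0) openai Python client and a placeholder engine name:

```python
# Minimal sketch: same prompt, several divergent continuations via sampling.
# Assumes the (pre-1.0) openai Python client and a valid API key.
import openai

openai.api_key = "YOUR_KEY_HERE"  # hypothetical placeholder

prompt = "Brainstorm: ways to use a language model as a research rubber duck.\n-"

response = openai.Completion.create(
    engine="davinci",      # assumed engine name
    prompt=prompt,
    max_tokens=120,
    temperature=0.9,       # high temperature -> varied, less 'oracle-like' output
    n=3,                   # several independent continuations of the same prompt
)

for i, choice in enumerate(response["choices"], start=1):
    print(f"--- rabbit hole {i} ---")
    print(choice["text"].strip())
```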

2 Likes

So you’re suggesting a more modern version of ELIZA, but tuned for bantering about a domain specific topic?

3 Likes

And a much (HUGELY!) bigger and better database behind the banter.

isn’t that what fine-tuning is?
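
Something like this is what I have in mind: a minimal sketch of fine-tuning a small open model on a domain corpus, using Hugging Face transformers as a stand-in for GPT-3 itself (the file name and hyperparameters are just placeholders):

```python
# Minimal sketch: fine-tune a small GPT-2 on a domain-specific text file,
# so the "banter" is grounded in that domain. All names/values are placeholders.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# domain_corpus.txt: a plain-text dump of papers/posts from the domain (hypothetical file)
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="domain_corpus.txt",
    block_size=128,
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-domain-banter",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=collator,
).train()
```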

Yes. I would find that extremely useful.

1 Like

In William Gibson’s Neuromancer, the protagonists get in contact with a saved consciousness (called a construct in the story) of the deceased genius hacker McCoy Pauley, to aid them in their mission.

Pauley agrees to help them, but asks Case (the antihero of the story) to erase him after the mission is completed.

2 Likes

We already have a GPT-like engine inside our skulls. If we turn its “fear of sounding stupid” knob down low, then once in a while it might even burp out interesting ideas from its ceaseless chatter.

2 Likes

Not to distract the thread, but I find this notion fascinating and think that once again sci-fi has predicted science. An artificial consciousness could scan all the media available about someone and then take on that persona. I have dreamed of asking Julian Jaynes questions about his bicameral mind theory; a sufficiently powerful AI could indeed make that possible.

Of course, you can read their work and discuss it with others, but at the end of the day it is all interpretation. It would be great to have a singular source, albeit artificial, that was focussed completely on, and in possession of, the sum total of that person’s knowledge.

1 Like

I said this to someone a few days ago. Years ago I realized it when my manager had favorite word sequences he liked to utter. I notice that as we get older we trust our word-sequence generators enough to just let them run. Kind of like driving without realizing you are driving: how did I get here? Did I stop at the stoplights?

3 Likes

I’m not sure if consciousness is required. And I’m not even sure if Gibson’s Pauley construct was really conscious. That’s how he was described on the wiki.

(Actually I scanned through the book, and Gibson never specifically mentioned consciousness. He called it a recording).

I found it interesting that Pauley asked to be switched off once his services were complete, a bit like Bishop, the synthetic person in Alien 3, in this scene. (Warning: not for sensitive viewers.)

1 Like

I don’t know if you have kids, but my two are both grown and they freak me out when they say one of my “favorite word sequences”, or worse, one of my wife’s :crazy_face:

1 Like

If I am going to be honest here, it seems this thread is overly glorifying GPT-3’s capability while ignoring other scaled-up multimodal language models that actually demonstrate higher levels of causal reasoning and prediction than the standard marketing “GPT-3” model.

A simple example is MUM, which is closed-source for now but still represents a formidable challenge to GPT-3, as well as more humanlike behaviour. (MUM: A new AI milestone for understanding information, https://arxiv.org/pdf/2105.02274.pdf)

Even though it’s constrained to query matching, it’s expected to overcome this with scaling (which is how it played out in “Language Models are Few-Shot Learners”), and multimodal LMs seem to be the way forward for actually doing research in DL - if we ignore the requirement for AGI…