OpenAI’s massive new NLP model

nlp
deep-learning
openai
#1

This came out yesterday. I’m surprised no one is talking about it yet. It is really impressive.

Basically you create an NLP deep-learning network that is really, really big, then you show it the entire internet. Now, by handing it a blurb of text, it will generate more text with the same sentiment, and the text is extremely convincing (it reads like human text). That’s because it sampled huge amounts of human text and is putting it together from millions of sources (the model has 1.5 billion parameters).
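The loop described above, feed in a prompt, predict a distribution over next tokens, sample one, append it, repeat, can be sketched with a toy stand-in model. Everything here (the vocabulary, the scoring rule, the function names) is illustrative only, not OpenAI's actual code; the real model replaces `next_token_logits` with a deep network:

```python
import math
import random

# Hypothetical toy vocabulary -- a stand-in for the real model's
# byte-pair-encoded vocabulary of tens of thousands of tokens.
VOCAB = ["the", "model", "writes", "text", "."]

def next_token_logits(context):
    # Toy scoring rule: mildly discourage repeating the last token.
    # In the real system, a deep network with billions of parameters
    # produces these logits from the whole context.
    last = context[-1]
    return [0.0 if tok == last else 1.0 for tok in VOCAB]

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, n_tokens, seed=0):
    # The autoregressive loop: predict, sample, append, repeat.
    rng = random.Random(seed)
    context = list(prompt)
    for _ in range(n_tokens):
        probs = softmax(next_token_logits(context))
        context.append(rng.choices(VOCAB, weights=probs)[0])
    return context

print(" ".join(generate(["the"], 8)))
```

The point of the sketch is that "convincing" continuation falls out of nothing more than repeatedly sampling from a learned next-token distribution; all the magic lives in how good that distribution is.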

Here is a thoughtful take from @Carlos_Perez:

This is going to feed the idea that to get to AGI we just need to create bigger deep learning systems. But of course we know this is still just a clever probabilistic model.

The repercussions of this weak-AI technology are still staggering. Our world is changing fast.

5 Likes

#2

The music that will come out of such a system will be top-40 pop.
For that matter - top-40 pop already follows a formula to the point where nobody will notice when the AI starts writing it.

Much the same could be said of romance novels.

5 Likes

#3

Your comment reminded me of this @Bitking

1 Like

#4

This model was the first time I have actually encountered a GELU. I hadn’t heard of it before a couple of hours ago. It has apparently existed since 2016, though I haven’t heard people mention it in any of the AI discussions.

It has some pretty impressive benefits over ReLU, although I would imagine ReLU still wins on computational cost.

Gaussian Error Linear Units (GELUs)
Dan Hendrycks, University of California, Berkeley
Kevin Gimpel, Toyota Technological Institute at Chicago
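For the curious, here is a minimal sketch of the two activations side by side: the exact GELU via the Gaussian CDF, the tanh approximation from the paper, and plain ReLU for comparison (function names are mine):

```python
import math

def relu(x):
    # ReLU: zero out negative inputs, pass positives through unchanged.
    return max(0.0, x)

def gelu(x):
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    # Unlike ReLU, negative inputs are scaled smoothly toward zero
    # rather than hard-clipped.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # The tanh approximation from the paper, cheaper than erf
    # on some hardware.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):+.4f}  gelu={gelu(x):+.4f}")
```

Note that GELU is not monotone near zero (it dips slightly negative for small negative inputs), which is exactly where it differs most from ReLU.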

0 Likes

#5
2 Likes

#6

I specifically turned down a job from someone wanting me to create a system just for that. I guess a system like OpenAI’s could probably be given an initially generated outline, then just fill in most of the details. Afterwards, it would just be an editing job, rather than a writing one.

Here’s an example from Axios that I saw just this morning. It’s pretty darn convincing, minus the fact that it’s behind on the current staff at the White House.

1 Like

#7

It actually made massive headlines everywhere, even in mainstream newspapers in Australia (Sydney Morning Herald) which is unheard of! I don’t know if that’s because of clever marketing (e.g. stating on the blog that they will not release the model due to “concerns about malicious applications of the technology”) or because the language model is pretty darn impressive.

1 Like

#8

And here’s the real pickle Matt - you cannot be 100% certain if this text is now being written by a person or by an algorithm :stuck_out_tongue:

0 Likes

#9

The real pickle will be when the AI generated posts outnumber the real posts a billion to 1.

2 Likes

#10

You will need an AI to sort through them!

2 Likes

#11

And the nature of the new cyber-warfare arena is revealing itself. This is serious stuff, and things are happening very fast. These skills are super valuable today, and they are going to stay that way unless the system fails or we really do create something that can start creating AIs itself.

Our primary job as humans is to continue building the future and survive whatever comes next. We must remain vigilant and continue thinking about how these systems will shape the future of our planet. We must continue to think about how we can steer our common fate in a direction that includes humanity.

3 Likes

#12

Too dark?

2 Likes

#13

In today’s reading:

1 Like

#14

Cato wouldn’t have put it better.


Furthermore I think the Numenta website should have a merchandise section offering the Numenta cup.

2 Likes