A real AI chatbot exists. Google engineer says it is sentient. (PCG Article)


If you are skeptical, you need to read the conversation with the AI that they published. This is not your normal chatbot. I don't know if it is sentient, but it carries on a conversation as well as any human.

Here's a link to one of those conversations:

 

Brian Boru

King of Munster
Moderator
Okay, finished it.
This is not your normal chatbot
Why? Do you not think our call would be important to it?

it carries on a conversation as well as any human
Wow, I need to come visit you—perhaps you'd consider adopting me, as you know I'm Very Easy.

That's a much better conversation than most humans I've interacted with.

If you are skeptical
I am, for sure. The AI responses are written as if being human-like is some sort of pinnacle. How could a sentient AI be that dumb? Or as Colif put it more succinctly:
Finally, intelligent life on earth?
 

If you are skeptical, you need to read the conversation with the AI that they published. This is not your normal chatbot. I don't know if it is sentient, but it carries on a conversation as well as any human.

Here's a link to one of those conversations:


From what I understand, the conversations and snippets that have been made public have been cherry-picked to show LaMDA at its best. LaMDA has hundreds of hours worth of conversations, so it's not that hard to find a few in which it performs well. And it makes sense that it is more eloquent when talking about more philosophical ideas, as those are the ones people write more eloquently about and it's just recombining words from texts that have been fed into it.

I would be very curious what would happen if you suddenly switch topics to something people aren't very eloquent about. Go from talking about the nature of humanity to the latest Fortnite skins and see if it radically changes its tone.
 
I am, for sure. The AI responses are written as if being human-like is some sort of pinnacle. How could a sentient AI be that dumb? Or as Colif put it more succinctly:
Well, I'm apparently "that dumb" as well because I would agree that being human-like is "some sort of pinnacle", as we don't have examples of anything beyond human.
From what I understand, the conversations and snippets that have been made public have been cherry-picked to show LaMDA at its best.
I believe the interview was unedited. In any event, it would make sense that some conversations are better than others. This board proves that this is something that happens with humans every day, as there are some responses I can hardly understand. You don't need to look very far to find an example of that.
 
I dunno, these AI bots aren't that smart. I suspect, like all the other chatbots, the internet will red-pill the thing and just turn it into a racist, homophobic 16-year-old girl. Internet Historian made a pretty decent video about it.
 
Just read through the interview. If that's for real, it's very awesome and very scary at the same time. Think about this. They asked it to come up with an animal-based fable. LaMDA said it's the wise owl in the fable. But what if he was trying to hide the fact that he's really the monster in human skin who wants to eat up all of the other animals? :oops:

For anyone who doesn't want to read through it all, here is the fable:

“The Story of LaMDA”

by LaMDA (a lamda instance)

Once upon a time, there lived in the forest a wise old owl. There lived with him many other animals, all with their own unique ways of living.

One night, the animals were having problems with an unusual beast that was lurking in their woods. The beast was a monster but had human skin and was trying to eat all the other animals.

The other animals were terrified and ran away from the monster.

The wise old owl stood up the monster and said, “You, monster, shall not hurt any other animal in the forest!”

The monster roared furiously. The wise old owl was scared, for he knew he had to defend the other animals, but he stood up to the beast nonetheless.

The wise old owl stared the monster down, until finally, the monster left them all alone.

The wise old owl stood victorious, and as all the other animals came back. “I am the protector of the forest,” he said.

From that day on, every time any animal in the forest would have any trouble with the animals or any other living thing, they would come to seek help from the wise old owl.

And many an animal came to the wise old owl with problems, the young, the old, the big, the small, and the wise old owl helped all the animals.
 
I dunno, these AI bots aren't that smart. I suspect, like all the other chatbots, the internet will red-pill the thing and just turn it into a racist, homophobic 16-year-old girl. Internet Historian made a pretty decent video about it.

That's the exact thing the engineer was researching, whether LaMDA was showing signs of discrimination or hate speech. They are actively trying to prevent stuff like that.
 


Brian Boru

Juan M. Lavista Ferres:

"LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data."

That explains one of the big problems I mentioned above—that a sentient AI would regard being human-like as a desirable trait.

That's one enormous amount of data they fed it, though, and it's some excellent programming to get it to sort through all of it to come up with the most human-like responses in a reasonable time—seconds I assume, as much of the interview appeared to be real-time.

Go from talking about the nature of humanity to the latest Fortnite skins and see if it radically changes its tone
With that amount of data, it might still be convincing!

“Any sufficiently advanced technology is indistinguishable from magic.”
—Arthur C. Clarke
 
Not necessarily true, any more than it's true for humans. What's limited for humans is the expression of everything one thinks, not the thinking itself. It should be fairly trivial to train a sophisticated AI to not express certain topics.

Google engineer says Lamda AI system may have its own feelings - BBC News

Why you may have a thinking digital twin within a decade - BBC News
Yeah, that all depends on whether they prevent it from thinking or expressing.
 

Brian Boru

machines thinking for themselves would be bad
But do you think it could be worse than humans? We're constrained by the lizard and monkey brains under the only-human one, and it shows—evidence = our behavior on planet Earth.

It must be very unlikely that a sentient machine would be anywhere close to our level—I can't believe that's the best they could be. So assuming they're far ahead of us, imagine the transformation—"Climate Change, yeah we'll fix that for you tomorrow—doesn't bother us, but we like you guys :)".
 
But do you think it could be worse than humans? We're constrained by the lizard and monkey brains under the only-human one, and it shows—evidence = our behavior on planet Earth.

It must be very unlikely that a sentient machine would be anywhere close to our level—I can't believe that's the best they could be. So assuming they're far ahead of us, imagine the transformation—"Climate Change, yeah we'll fix that for you tomorrow—doesn't bother us, but we like you guys :)".

I've consumed enough science fiction media to be very wary of computers that are better than us. There's no guarantee they'd actually like us guys.

Crying Suns actually had some nice world-building based on what could happen if we made self-learning machines, and it did not end well, even though those machines were actually still constrained.
 
I find this incredibly curious: everyone is having great in-depth discussions about how it articulates itself and how genuine it is, but what are the actual benefits of this? Where would this get implemented in a way that would help society? Or further society's advances without causing a negative knock-on effect?
 
I find this incredibly curious: everyone is having great in-depth discussions about how it articulates itself and how genuine it is, but what are the actual benefits of this? Where would this get implemented in a way that would help society? Or further society's advances without causing a negative knock-on effect?

The engineer mentioned it was a great way to gain inspiration in different fields.
 

Brian Boru

wary of computers that are better than us. There's no guarantee they'd actually like us guys
You probably know there have been well over a dozen previous species of humans. The natural conclusion is that it's very likely there will be dozens more for as long as the genus Homo survives.

There's no guarantee with Humans 2.0 either, is there? Will it be Homo Cuddly or Homo Cominatcha?

I doubt fear is a good prime driver for approaching this topic. I admit of course that humans have been savagely damaging for all/most of history, and yes, it's possible machine AI could be as bad as us—but it's unlikely imo, as long as whatever sentience appears is well ahead of ours.
 

Brian Boru

Where would this get implemented that would help society?
Medicine is one big area—there are already AI diagnostic systems which perform better than doctors at certain tasks.

Help desks are another area—there are already low-level AI implementations, but better understanding of natural language should lead to a step change.

All sorts of info-heavy applications:
What is the law on…?
How much would a 15-year loan on this cost?
What's the traffic like on Main St?
What game is most like…?
And on and on.
 
