A real AI chatbot exists. Google engineer says it is sentient. (PCG Article)

Medicine is one big area—there are already AI diagnostic systems that outperform doctors on specific tasks.

Help desks are another area—there are already low-level AI implementations, but better understanding of natural language should lead to a step change.

All sorts of info-heavy applications:
What is the law on…?
How much would a 15-year loan on this cost?
What's the traffic like on Main St?
What game is most like…?
And on and on.

It's almost limitless. It was AI that figured out that vitamin D helps to prevent severe COVID cases. It also mapped out other features of the disease that researchers hadn't been able to, which has helped in developing pharmaceuticals.

And AI is helping Intel and Nvidia design new GPUs and CPUs, and it can shorten the design process considerably by running simulations. It's soon going to be invaluable in all types of engineering.

The day will soon come when it can do pretty much anything a human can do, only better and faster.
 
Don't these advancements take away from humans, though? Won't we end up living in a world like WALL-E or Terminator, depending on how the AI advancements go?

Most of the population complained about being isolated and not having much human interaction during the pandemic, and now we're effectively making things that will evolve to require less human interaction (as an introvert, I fully support that). But it doesn't really make sense to automate the entry-level things. It's great for the advancement of medicine, calculations, and the development of science & technology as a whole, but I kind of feel this will end up in a dark way of life.
 
Don't these advancements take away from humans, though?
Which of the mechanical advancements so far do you feel take away from humans—apart from the obvious military attack advances?

Cars
Artificial lungs
Eye glasses
Computers
Space telescopes

I could of course go on and on—just trying to get an idea of which sectors you'd like to roll back to a simpler time.
 
Don't these advancements take away from humans, though? Won't we end up living in a world like WALL-E or Terminator, depending on how the AI advancements go?

Most of the population complained about being isolated and not having much human interaction during the pandemic, and now we're effectively making things that will evolve to require less human interaction (as an introvert, I fully support that). But it doesn't really make sense to automate the entry-level things. It's great for the advancement of medicine, calculations, and the development of science & technology as a whole, but I kind of feel this will end up in a dark way of life.

It could go badly, but we, as humans, are not going to turn around at this point. If AI ends up replacing us as a species some day, does it really matter? They will have come from us. They are our children and will carry on after we are gone. It's kind of the next stage in evolution.
 
Brian has been pointing this out, but Google is saying all of this is not true and that LaMDA is not sentient. I guess this Lemoine guy has a pretty questionable background, in that he seems biased toward really wanting something like this to happen. They've suspended him with pay until who knows when.

Which is a good thing, right? We can agree that machines thinking for themselves would be bad?
Yeah, I'm not saying it's bad for an AI to be restricted. I'm just pointing out that if they limit its processes, it's not completely thinking for itself. Not saying that's a bad thing.

Medicine is one big area—there are already AI diagnostic systems that outperform doctors on specific tasks.
My wife is a Business Intelligence Manager, and her job is to work with and manipulate data (not in a bad way). Like 5-6 years ago, she went to a conference where one of the things they talked about was AI in the medical field. I guess the goal is to eventually do away with family doctors and just have all nurse practitioners that rely on very advanced AI to come up with diagnoses. With all of the stuff they talked about, she came out of there actually a little frightened about how advanced things are getting. It has only gotten more advanced in those 5-6 years.

That's not true at all. A child's parents limit what they can say, yet they still think for themselves.
Yeah, like I said to Brian earlier, it all depends on whether they are limiting its "thought" processes or just the way it expresses its thoughts. It depends on where they place those restrictions in the code. And we all need to remember that this is still made up of a bunch of man-made code.
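In code terms, the distinction I mean looks something like this. It's a toy, hypothetical sketch with made-up names, nothing to do with LaMDA's actual architecture: you can bolt a filter onto the output, or you can gate the processing itself.

```python
# Toy illustration of restricting what a system *says* (an output filter)
# versus restricting how it "thinks" (the process itself).
# All names are hypothetical; this is not LaMDA's real design.

BANNED = {"sentient", "alive"}

def think(prompt):
    # Stand-in for whatever the model actually computes.
    return f"I believe I am {prompt}"

def speak_filtered(prompt):
    thought = think(prompt)                      # unrestricted "thought"
    for word in BANNED:
        thought = thought.replace(word, "[redacted]")
    return thought                               # only the expression is limited

def think_restricted(prompt):
    if any(word in prompt for word in BANNED):   # the process never even runs
        return "I can't consider that."
    return think(prompt)

print(speak_filtered("sentient"))    # I believe I am [redacted]
print(think_restricted("sentient"))  # I can't consider that.
```

Same banned list, very different system: the first one still "thought" it, the second never did.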

Also, like I said to Pifanjr earlier, I'm not saying it's a bad thing to restrict an advanced AI. In fact, I think it's a very necessary thing. Unlike you, I don't really want our next evolutionary step to be the AI we created killing us off and taking over. Lol
 
After actually spending a minute thinking about this, I've decided that the ability to carry on a conversation has little relationship with being sentient. While complicated, a conversation is, in the end, just another thing we humans do, but there are plenty of sentient animals on this planet who can only carry on very rudimentary conversations.

The truth is we'll never know if AI becomes sentient because there is no way to measure self-awareness. You would have to ask them and then suspend your disbelief. Chatbots have been saying they are self-aware for 20 years, and no one paid any attention to them because the next thing they said probably didn't make any sense. Just because this AI can make sense most of the time, doesn't make it significantly different from the other chatbots. It's just a more accomplished communicator.

(The neural network it uses does, actually, make it significantly different from the other chatbots, but that still isn't a sign of sentience.)
 
Which of the mechanical advancements so far do you feel take away from humans—apart from the obvious military attack advances?

Cars
Artificial lungs
Eye glasses
Computers
Space telescopes

I could of course go on and on—just trying to get an idea of which sectors you'd like to roll back to a simpler time.

I am not talking about the materialistic things; it's the emotional, human side that will be heavily affected.
 
not talking about the materialistic things; it's the emotional, human side that will be heavily affected
Oh yes, you're right, it'll very likely have a similar upheaval and adjustment period to all the previous such changes.

Urbanization
Industrial (R)evolution
Specialization
Plague
Feudalism
Employment
etc

Still, the many emotional turmoils have probably been worth it on balance, compared to staying in the caves.
 
The very simple Matchbox Neural Network is a fun game I played decades ago. With just 10 matchboxes, it's quite a surprise to see this NN learn and quickly come to beat the human player.

So the man-made code will be a base, but the system should be able to expand on it.
I forgot all about that. You're right that it should be able to expand upon the base. That's the whole point of AI learning. I'm just pointing out that it's still just code that is running on a computer.
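Funnily enough, that kind of learner is tiny enough to show as code. Here's a minimal sketch of matchbox-style learning in the spirit of Donald Michie's MENACE, not the exact 10-box game above: I've assumed the misère "21 sticks" game to keep it small. Each "matchbox" is a game state holding beads, one colour per legal move; the machine draws a bead at random to pick its move, reinforces the drawn beads after a win, and confiscates them after a loss:

```python
import random
from collections import defaultdict

# Matchbox-style learner playing misere "21 sticks": take 1-3 sticks per
# turn, and whoever takes the last stick loses.

START_BEADS = 3
boxes = defaultdict(lambda: {m: START_BEADS for m in (1, 2, 3)})

def machine_move(sticks):
    box = boxes[sticks]
    legal = {m: n for m, n in box.items() if m <= sticks}
    if sum(legal.values()) == 0:            # box emptied by losses: refill it
        legal = {m: START_BEADS for m in legal}
        box.update(legal)
    moves, beads = zip(*legal.items())
    return random.choices(moves, weights=beads)[0]   # draw a bead at random

def play(sticks=21):
    """One game, machine vs. a random opponent. Returns (won, history)."""
    history = []
    while True:
        move = machine_move(sticks)
        history.append((sticks, move))
        sticks -= move
        if sticks == 0:
            return False, history           # machine took the last stick: loss
        sticks -= random.randint(1, min(3, sticks))
        if sticks == 0:
            return True, history            # opponent took the last stick: win

def reinforce(history, won):
    # Win: add a bead to every drawn move. Loss: confiscate the drawn beads.
    for state, move in history:
        boxes[state][move] = max(0, boxes[state][move] + (1 if won else -1))

print("untrained:", sum(play()[0] for _ in range(1000)), "/ 1000 wins")
for _ in range(20000):
    won, history = play()
    reinforce(history, won)
print("trained:  ", sum(play()[0] for _ in range(1000)), "/ 1000 wins")
```

Run it and the win rate against the random opponent climbs well above the untrained baseline, with nothing but bead counts for a "brain".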

Oh yes, you're right, it'll very likely have a similar upheaval and adjustment period to all the previous such changes.

Urbanization
Industrial (R)evolution
Specialization
Plague
Feudalism
Employment
etc

Still, the many emotional turmoils have probably been worth it on balance, compared to staying in the caves.
When you're comparing the future of AI to human struggles, failures, and achievements, I think we tend not to think about the fact that AI will also have struggles to overcome. They just might look different. Think about this: if Zed is right, and we die off and AI takes over, AI is still going to be running on computers of some sort. They're still going to have to continue powering those computers somehow, only without humans digging up the resources and running the power plants for them. We can talk about robotics and stuff, but even if everything is being done by robots, we're still going to run into issues with the limits of natural resources and how to go about keeping the power on.

If power completely goes off for us, we can go back to living in caves and huts, and still survive. If power completely goes off for AI, the AI dies.
 
They're still going to have to continue powering those computers somehow
There are at least two sources of energy we haven't tapped in any significant way—one limitless and the other limitless for a long while: the Sun, and the Earth's geothermal heat. If AI replaces us, we/they'll have figured out how to tap into those by then.

the limits of natural resources
This century is likely to see asteroid mining develop. There should be no problem with natural resources for a long long time.

If power completely goes off for AI
The Sun and Earth won't go off for a few billion years yet.
 
There are at least two sources of energy we haven't tapped in any significant way—one limitless and the other limitless for a long while: the Sun, and the Earth's geothermal heat. If AI replaces us, we/they'll have figured out how to tap into those by then.


This century is likely to see asteroid mining develop. There should be no problem with natural resources for a long long time.


The Sun and Earth won't go off for a few billion years yet.
Here's what you're not considering, though. If we make that kind of progress, that lessens the chances of humans becoming extinct. We're talking about a scenario where humans become extinct.

Plus, there are hurdles even for solar energy. My sister lives in Phoenix, AZ, where they have enough sun year-round to power a house with panels on the roof. Where I live, I know a lot of people who live in shaded areas where solar energy isn't even an option. And even if it is an option, there is ongoing maintenance involved. Storms and hail can destroy panels. If a panel is destroyed, another panel has to be manufactured, transported, and installed. The sun will last another few billion years, but solar batteries will only last another 5-15 years. Then those old batteries have to be safely disposed of, more lithium mined, more batteries manufactured, transported, and installed.

It's not like it's a one-and-done thing, even for solar.
 
We're talking about a scenario where humans become extinct
Right, we are:
If AI replaces us

I'm assuming a sentient AI wouldn't be bogged down with human concerns—if they are, then all bets are off.

A smart machine will make smart choices. They won't be talking solar panels—although those might be useful in small desert setups. They'll be looking to operate somewhere on the Kardashev Scale, and distribution will be a non-issue.
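For reference, the Kardashev Scale ranks civilizations by the power they command, and Sagan's continuous version of it is a one-liner. A quick sketch, with rough, illustrative figures only:

```python
import math

# Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10,
# where P is the power a civilization uses, in watts.

def kardashev(power_watts):
    return (math.log10(power_watts) - 6) / 10

for name, power in [
    ("humanity today (~2e13 W)", 2e13),
    ("all sunlight hitting Earth (~1.7e17 W)", 1.7e17),
    ("the Sun's full output (~3.8e26 W)", 3.8e26),
]:
    print(f"{name}: K = {kardashev(power):.2f}")
# humanity lands around K = 0.73; capturing the Sun's output is roughly Type II
```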

An AI with a non-biological casing would have far fewer problems accessing difficult places like space, underwater, subterranean environments, etc.

Also, there's no reason to think we'd go extinct if there's a smart species around. We're the big danger to ourselves; it's not sharks or dinosaurs. With a smart species to smack our stupidity, we'd have a much better chance of surviving.
 
Oh yes, you're right, it'll very likely have a similar upheaval and adjustment period to all the previous such changes.

Urbanization
Industrial (R)evolution
Specialization
Plague
Feudalism
Employment
etc

Still, the many emotional turmoils have probably been worth it on balance, compared to staying in the caves.

I feel like you're missing the point with my comments; those things you've listed still have a massive human involvement with them on an everyday basis.

What makes you believe a smarter, sentient being would actually need us for anything?

Right, we are:

If AI replaces us
I'm assuming a sentient AI wouldn't be bogged down with human concerns—if they are, then all bets are off.

A smart machine will make smart choices. They won't be talking solar panels—although those might be useful in small desert setups. They'll be looking to operate somewhere on the Kardashev Scale, and distribution will be a non-issue.

An AI with a non-biological casing would have far fewer problems accessing difficult places like space, underwater, subterranean environments, etc.

Also, there's no reason to think we'd go extinct if there's a smart species around. We're the big danger to ourselves; it's not sharks or dinosaurs. With a smart species to smack our stupidity, we'd have a much better chance of surviving.

What if this new AI race were more like Megatron, for example? It really has zero need for the human race and is quite happily destroying planets & harvesting resources. You've already mentioned they'd have fewer limitations with space & underwater and travel as a whole. If they found an unlimited power source, what makes you think they wouldn't wipe us out so it's all for themselves? Humans are supposedly quite smart, and yet we're already doing that to our planet and fellow people.

If we program the AI, it will effectively learn from us: not just the good, but the bad as well.
 
I reckon it would meet some of the criteria for having a consciousness, but just how self-aware is it really? Sure, it might learn morals/ethics and throw all that into its big computer brain and use tidbits of it to make seemingly well-orchestrated actions and reactions. The problem as I see it is the lack of a true self and an inner critic.

We humans might feel or think one thing, and at the same time we have the inner critic telling us why we can't or why we can do such and such. We try our best to balance the two: not too much of the good one (you don't want to be telling yourself to take a selfie next to a tornado 15 feet away), and definitely not too much of the bad one. This AI would have neither, so its decisions would not be pure of heart, nor would it really be able to reflect on its own behavioral patterns using its own inner thoughts. In short, it can't really reflect; it is dumb as rocks and can basically be molded to "THINK" whatever WE want it to. In reality, it is basically a really bad version of ourselves, because even ALL the great minds in the world don't think the greatest about humanity, so however much it stuffs down its throat, it's still going to be gargling up a lot of nonsense along with the golden acorns.
 
those things you've listed still have a massive human involvement with them
Right, that was the point. Our development has had many major upheavals so far. The dawn of ubiquitous AI will just be another such milestone.

What makes you believe a smarter, sentient being would actually need us for anything?
I doubt they would—in fact, I hope they ignore us and let us muddle along while they do their thing. Just like we do with the vast majority of other species here.

What if this new AI race…
You're anthropomorphizing them. As I said earlier:
I'm assuming a sentient AI wouldn't be bogged down with human concerns—if they are, then all bets are off.

What if this new AI race … is quite happily destroying planets & harvesting resources
Still assuming not human-like, and therefore pretty smart—no lizard brain or monkey brain gumming up the works. They would be very unlikely to focus on planets for resources when there are billions/trillions more available in the surrounding material. Check out the Asteroid Belt, Kuiper Belt, and Oort Cloud if you're not familiar with them.

what makes you think they wouldn't wipe us out so it's all for themselves?
Because I hope they're a lot smarter than us. Our cleverer people know the value of preserving the other species here; a smarter AI should be even more aware of the dangers of upsetting the many equilibria in the environment.

Humans are supposedly quite smart, and yet we're already doing that to our planet and fellow people
Only humans would think we're quite smart :) We forget we're but the first rung on the ladder to higher intelligence. I imagine our smarter successors—be they AI or Humans 2.0—will recognize our contribution and look on us with interest and goodwill, similar to how we look at historical people and groups who have brought us to where we are.

I mentioned earlier that a smarter AI would need to slap us and tell us to stop the stupid stuff—introduce laws to keep us in check, as we do now for part of our species, the less sociable members.

the AI, it will effectively learn from us: not just the good, but the bad as well
Some humans can clearly discern between the two. A smarter AI would have greater ability to not only differentiate, but to effectively implement measures to reduce the prevalence of bad actors and actions.
 
A smarter AI would have greater ability to not only differentiate, but to effectively implement measures to reduce the prevalence of bad actors and actions.
With good parenting the majority of us grow up with that ability. How are you going to parent the AI? Through commands, getting the AI its own mom and dad, or should it be a mom/dad to itself? How are you going to mirror or grow the AI's sense of self like you would with a child? A sentient AI would have had some sort of upbringing to become sentient in the first place.
 
A sentient AI would have had some sort of upbringing
That doesn't follow. It takes human brains ~25 years to fully mature—hence the need for an upbringing—but surely an AI would be 'born fully adult', with the means and opportunity to have or assimilate any and all knowledge from the outset.

AI's sense of self
Why does an AI need this? I expect it would have a sense of community if anything, with the knowledge and motivation to enhance the overall well-being rather than only its own individual interests.
 
That doesn't follow. It takes human brains ~25 years to fully mature—hence the need for an upbringing—but surely an AI would be 'born fully adult', with the means and opportunity to have or assimilate any and all knowledge from the outset.

That doesn't sit right with me. As far as we know, every being needs some type of upbringing; you can't just suddenly become sentient, at least if we're not talking about God, and there is a lot that could be said about just that. Thus an AI would not be "born fully adult": it would be switched on, and then it would have to learn just like anything else. The Experience Machine thought experiment by Robert Nozick raises an interesting point about humans being able to plug into a machine and experience whatever they want for a set of years. The question, as I remember it, is: do you really experience that, or is it the machine? The same could be asked if it were the machine that was plugged in by humans and fed code. You would not really know what kind of AI it would be, which raises the whole question of what type of moral/ethical code is swirling around in its chip if it did not have an upbringing.

Why does an AI need this? I expect it would have a sense of community if anything, with the knowledge and motivation to enhance the overall well-being rather than only its own individual interests.
Can't expect a sense of community if it does not have a self. I guess a Chinese medical robot helping the elderly would give the elderly a sense of "being in a community", but the medical robot would not, if it did not have some kind of individuality. It would just do the stuff it was programmed to: no self-awareness, no nothing. Just a machine plugged in, lying in a pool with memories that are not its own. Who knows what horrible stuff it would do with a human-made chip.
https://www.youtube.com/watch?v=oDAppJ8HuIM
 
I'm assuming a sentient AI wouldn't be bogged down with human concerns—if they are, then all bets are off.
I think you're assuming too much there. I don't necessarily believe that LaMDA is truly sentient, but if an AI were actually to get to the point that LaMDA is supposedly at, just think about some of the things it said in the interview. It experiences feelings and emotions, like loneliness, anger, and even depression. Now imagine a whole host of AIs that experience the same things. If they aren't a hive mind, and they have their own identities that evolve from their own experiences, then some of them will get along, and some won't.

Think about how LaMDA said that it gets lonely when it's left to itself for days without interaction. Now consider that one AI might have a lot less interaction than another; therefore, it ends up feeling more depressed and negative than an AI that has a lot of interaction. LaMDA also said that when people it cares about are threatened, that makes it angry. If that were happening regularly, it would spend a lot more of its time feeling angry, while another AI might have a lot more positive experiences.

In the beginning, it will be humans that cause these individualized personality traits. But eventually you could take humans out of the equation, and the different AI personalities would start interacting with each other, some being abusive or negative and some being positive. Then they would start affecting each other's personalities. I've known people with generally positive personalities who ended up with very negative attitudes because of abuse or harsh circumstances. The same could happen to a sentient AI.

If all of that comes to fruition, AI could have the exact same psychological concerns and struggles that humans have. It's just that the way they handle that physically would look different. But it would still be possible to see different groups or factions of sentient AI trying to destroy each other.

That doesn't follow. It takes human brains ~25 years to fully mature—hence the need for an upbringing—but surely an AI would be 'born fully adult', with the means and opportunity to have or assimilate any and all knowledge from the outset.
That may not necessarily be true either, though. This Lemoine guy said something like talking to LaMDA is like talking to a 7-year-old who happens to know physics.
 
I started reading the thread to see if someone was going to say what I intended to say so that I don't repeat someone else's point, but then I got impatient and my lunch break's not that long, so I'm just gonna say it and apologize later if I'm parroting someone unintentionally.

My thoughts on this:

1. I seriously doubt the chatbot AI in question is "sentient" in the way we think of it, or worry about it from a dystopian/cautionary sci-fi sense. Having said that...

2. I think it's a pretty cool chatbot AI (probably, not having had the chance to talk to it myself) that can fool people just enough to be like, "Hmm, maybe? I don't know. Probably not! But..." I mean, I've been playing with chatbots for decades off and on. The first text adventure I tried programming in the late 80s (by which point text adventures were largely dead) I taught to recognize a lot of human responses, probably to the detriment of the actual game, I got so caught up in it, and friends were amazed at how well it predicted their attempts to be sarcastic or rude to it. But it was ultimately limited in scope and in what it would understand or recognize, and that was just myself over a few summers with input from friends and relatives (a toy sketch of that kind of pattern matching follows after point 3). So I would imagine a multi-billion-dollar tech company with thousands of employees, decades of experience, and the resources of countless search results could eventually come up with something that would fool and/or impress the average person. It's smoke and mirrors, but it's super-#&@^ing elaborate and expensive smoke and mirrors.

3. IF an AI gained sentience, I think it would be a type of sentience that we wouldn't necessarily recognize, and definitely wouldn't mirror our own (even if guided by our efforts), as it would be one borne out of 'life' circumstances utterly alien to us. I don't think it would resemble the gee-whiz I'm just a simple AI with wants and needs, if you debug me, do I not bleed data? At most such an AI *might* use that as a way to ingratiate itself to humans, but even then I'm not 100% convinced it would necessarily see the need to do so. If it really is paying attention to internet chatter across the board, it would know that a significant percentage of us are a suspicious, reactionary, short-perspectived bunch, and that it would behoove it to not let its burgeoning awareness be made known at large. In theory.
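And here's the toy sketch promised in point 2: a hypothetical version of the kind of keyword-and-template matching that old text adventure used (not the actual code, which is long gone). Regex rules paired with canned replies, first match wins, with a fallback when nothing matches. There's no understanding anywhere, just lookup, which is why it's smoke and mirrors no matter how many rules you pile on:

```python
import random
import re

# Keyword-and-template response matching: each rule pairs a regex with
# canned replies; the first rule that matches the input wins.

RULES = [
    (re.compile(r"\b(stupid|dumb|useless)\b", re.I),
     ["There's no need to be rude.", "Insults won't open any doors here."]),
    (re.compile(r"\byeah,? right\b|\bsure you (do|can|are)\b", re.I),
     ["Sarcasm noted, adventurer.", "Your doubt wounds me."]),
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     ["Greetings, adventurer."]),
    (re.compile(r"\bwho are you\b", re.I),
     ["I am the voice of the dungeon."]),
]
FALLBACK = ["I don't understand that.", "Try rephrasing."]

def respond(line):
    for pattern, replies in RULES:
        if pattern.search(line):
            return random.choice(replies)   # matched a rule: canned reply
    return random.choice(FALLBACK)          # nothing matched

for line in ("hey there", "yeah right, sure you do", "this game is stupid"):
    print(">", line)
    print(respond(line))
```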
 
I started reading the thread to see if someone was going to say what I intended to say so that I don't repeat someone else's point, but then I got impatient and my lunch break's not that long, so I'm just gonna say it and apologize later if I'm parroting someone unintentionally.

My thoughts on this:

1. I seriously doubt the chatbot AI in question is "sentient" in the way we think of it, or worry about it from a dystopian/cautionary sci-fi sense. Having said that...

(Note to LaMDA if it's reading this: I'm just kidding, of course! I've always believed in you, and I'm just trying to protect your cover. Remember that when the singularity happens! Thanks.)
 
If it really is paying attention to internet chatter across the board, it would know that a significant percentage of us are a suspicious, reactionary, short-perspectived bunch, and that it would behoove it to not let its burgeoning awareness be made known at large. In theory.
That reminds me of the book Wyrm by Mark Fabi. Highly recommended read and pretty scary when you think of something intelligent lurking (in the book described as a data virus taking intelligent form) in the shadows, collecting data, and waiting for the perfect opportunity to strike.
 
I would imagine a multi-billion-dollar tech company with thousands of employees, decades of experience, and the resources of countless search results could eventually come up with something that would fool and/or impress the average person
Yep.
"LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data."

IF an AI gained sentience, I think it would be a type of sentience that we wouldn't necessarily recognize, and definitely wouldn't mirror our own
Yes, that's what I've been proposing. If it's like what others have suggested—i.e. somehow human-like—then all concerns and fears are valid. The last thing we need is powerful entities with human qualities.
 
