Google chatbot allegedly becomes sentient?

IrvineHimself

Well-Known Member
Relationship to Diabetes
Type 2
I am kinda engrossed in a project at the moment, but this article in the Guardian caught my eye: Google engineer put on leave after saying AI chatbot has become sentient

The headline sounds crazy, but the meat of the story fits in with what little I know of the subject. Also, it is worth noting that he wasn't put on leave on medical grounds, but rather:
....for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist...
 
Scott Aaronson's comments on it:

 
When I was a child I was convinced my teddy bear was sentient. In fact he often told me so.

It will be very easy for children to become attached to their chatbot nannies and believe they are alive. And at a certain point those children will be correct.
 
@Bruce Stephens, thanks for the links. I found the initial post and comment by @Kerem on Scott Aaronson's blog to be particularly interesting. The question: "Does it pass a Turing test?" was my first reaction. The important point being: The pass/fail of a Turing test doesn't depend on the answers being correct, but rather on whether or not they are indistinguishable from those a human would give.

It will be very easy for children to become attached to their chatbot nannies and believe they are alive. And at a certain point those children will be correct.
I think they already do. I have read media reports of young children becoming very attached to Alexa, Siri and Google Assistant. I also vaguely remember reading about an autistic child who became highly disturbed when he couldn't access Siri(?)

What is sentinence?
I assume that was a spelling mistake. If so, this is a good introduction to the subject: Sentience.

As well as being a bit of a geek, I was a dog handler for 12 years (highly trained military/high-security guard dogs) and am known to be a fierce critic of animal behaviourists who, in my opinion, are often guilty of a kind of reverse-anthropomorphic bias.

Their tests for sentience are usually inherently biased by a human-orientated view of the world, assuming that all animals use the same or similar primary senses. For example, the Mirror Test assumes that an animal's primary sense is visual. In the case of dogs, dolphins, bats etc., this is patently absurd.
 
The important point being: The pass/fail of a Turing test doesn't depend on the answers being correct, but rather on whether or not they are indistinguishable from those a human would give.
Which makes parts of the Ars Technica story interesting.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic—if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
 
Yes, I noticed that. In terms of the Turing test, the question should be: Does the riff sound plausibly human? If you asked a child [or many adults for that matter]: "What is it like to be an ice cream dinosaur", that is exactly the kind of response I would expect.
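Just to make that criterion concrete, here is a rough sketch of the imitation game written as a blind judging loop. It is purely illustrative: the reply and judge functions are hypothetical placeholders, not any real API, and the only thing being measured is whether the judge can tell the two sources apart, not whether either answer is correct.

```python
# Purely illustrative sketch of the imitation-game criterion: the judge never
# scores answers for correctness, only tries to tell the machine from the
# human. get_human_reply, get_machine_reply and judge are hypothetical
# placeholders, not any real API.
import random

def run_imitation_game(questions, get_human_reply, get_machine_reply, judge):
    """Return the fraction of rounds in which the judge spots the machine."""
    spotted = 0
    for question in questions:
        replies = [("human", get_human_reply(question)),
                   ("machine", get_machine_reply(question))]
        random.shuffle(replies)  # blind the judge to the source of each reply
        guess = judge(question, [text for _, text in replies])  # judge returns 0 or 1
        if replies[guess][0] == "machine":
            spotted += 1
    return spotted / len(questions)

# If the judge does no better than chance (about 0.5), the machine's replies
# are indistinguishable from the human's, which is the only benchmark the
# test sets, regardless of whether either answer was "correct".
```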

Noting that the main silver and gold categories for the Loebner Prize were never won, it would have been interesting to see how the LaMDA chatbot would have performed in that competition, or any other similar event.

In general though, setting aside Turing tests, which, while intellectually stimulating, were never intended as a measure of sentience, I find the published advances in AI over the last three or four years both amazing and terrifying.

Having long been concerned by the growth of the "surveillance society", I have made internet tracking the subject of my latest project. When you consider the amount of data being gathered by governments and tech companies, then add into the equation the amount of CCTV coverage in the UK, plus the sophistication of modern data science, it feels like we are sleepwalking into an Orwellian science-fiction novel.

People think I am crazy, but I keep my phone in a Faraday cage, only taking it out when I need to check for voice mail or make a phone call. Even so, there is virtually nowhere in the UK I can go and not be under video surveillance. About 15 years ago, the BBC ran an article pointing out there were more CCTV cameras in Shetland than there were in the whole of San Francisco.

Advanced AI systems, with their ability to process and draw inferences from massive data sets, could be the saviour of humankind. But they also have real potential for misuse. Unfortunately, governments and tech giants seem to be more focused on the dystopian applications of the technology.
 
Noting that the main silver and gold categories for the Loebner Prize were never won, it would have been interesting to see how the LaMDA chatbot would have performed in that competition, or any other similar event.
That would be interesting, yes.

I do wonder how lucky you have to be to get the dialogue as presented. After all, Eliza can be pretty good for a few minutes.

I'd like to see how good LaMDA is when conversing with a competent adversary.
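For anyone who hasn't played with it, Eliza really is nothing more than pattern matching over the user's last utterance, which is why it wears thin so quickly against a competent adversary. Here is a toy sketch of the idea; the rules are made up for illustration and this is not the original program:

```python
# A toy, Eliza-style responder: nothing but regular-expression matching and
# canned reflections, which is why it falls apart after a few minutes with a
# competent adversary. The rules below are invented for illustration; this is
# not the original ELIZA script.
import re

RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
    (r"\bmy (mother|father|family)\b", "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when nothing matches

print(eliza_reply("I am worried about my blood sugar"))
# -> "How long have you been worried about my blood sugar?"
# Note the clumsy, unswapped "my": there is no understanding here, only
# surface pattern matching.
```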
 
I think the story is sad more than anything. Kind of analogous to somebody falling in love with a sex doll.
 
I assume that was a spelling mistake. If so, this is a good introduction to the subject: Sentience.


Was it a mistake though?
 
I often say thank you when a door opens itself - not to the door, but to the concept of convenience generated by the human mind which gave it the ability and willingness to open itself for me.
Of course, if it was my design, it would be listening. It might even respond.
 
My sister recently told their Alexa to go to sleep after she had repeatedly misunderstood us and we were getting frustrated... to be fair, more than one of us was speaking at once, which a human can definitely cope with better than a computer, and we all have North East accents. Anyway, there was a moment's pause and then she started snoring, which sent us into huge fits of giggles. The timing was just the best, because the pause meant that our attention had moved away from her before the snoring started. I guess that others will know about this, but she had never done that before and we found it hilarious!
 
Interesting follow up opinion piece today, along with a link to last year's YouTube video of Sundar Pichai's demo of LaMDA.

Also, apparently by coincidence(?), the Economist ran an interview last week with Blaise Agüera y Arcas [an engineer and Google vice president] who said:
Artificial neural networks are making strides towards consciousness...

And:
When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.

On a related note, this reminds me of how [a few years ago] there was a sea change in the quality of results from Google Translate: as I recall, it happened because Google got its hands on all the UN's documents and communications going back over the last 75 years and fed them into an AI. Since the UN is required by its by-laws to translate all official documents and communications into each of its official languages, that is an enormous data set of good-quality translations, or, in layman's terms, a veritable Rosetta Stone of modern languages.

If you have never used [or read] a machine translation, and don't have the benefit of a second language, it is difficult for me to describe how big an advance this was. The best I can do is to point out that, prior to the new language/translation model, most native French speakers would rather have read a document written in the original English than be forced to read something that was translated with the aid of a machine. Now, assuming you have the language skills to clean up the results, it is actually quite difficult to tell that a machine was involved in the process.
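For anyone curious what a modern neural translation model produces, publicly available pretrained models give a reasonable flavour of it. The snippet below is emphatically not Google's system, just a small illustration using the Hugging Face transformers library and one of the openly released Helsinki-NLP models trained on large parallel corpora:

```python
# Not Google Translate: a small English-to-French illustration using a
# publicly available pretrained model (Helsinki-NLP/opus-mt-en-fr) via the
# Hugging Face transformers library. Requires the transformers, sentencepiece
# and torch packages to be installed.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = ["The committee will reconvene tomorrow morning to review the draft resolution."]
batch = tokenizer(text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```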
 
The technology involved could make a sex doll fall in love with you.
Google and the other tech giants aside, I believe the porn industry are among the biggest investors in fundamental research into the use of this and related technologies.
 
I heard this being mentioned on R4, and their suggestion was that this was simply lots of memory storage and very advanced pattern-matching, and most AI experts did not recognise this as sentience.

Which almost posed the question for me: what is most ‘socially acceptable’ conversation, apart from advanced pattern matching and our having learned the sorts of acceptable responses people make in reply to any given prompt? :D
 
Which almost posed the question for me: what is most ‘socially acceptable’ conversation, apart from advanced pattern matching and our having learned the sorts of acceptable responses people make in reply to any given prompt? :D
Not a bad point, and pretty much the reason why I've thought that the Turing test isn't nearly sufficient to judge intelligence.
 
this was simply lots of memory storage and very advanced pattern-matching
A very simplistic view, which doesn't come anywhere near to describing the truth: from the little I know, it involves heavy use of Bayesian statistics which, by its nature, relies on a lot of feedback, with the posterior probability being fed back into the neural network as a prior probability until it matches certain conditions that define a result. The process is so opaque that it is often nearly impossible to decipher how an AI reached a given conclusion.

(Note: Experts on the subject will most definitely criticise my description as being incredibly naive, if not completely wrong.)
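To make the "posterior fed back in as the next prior" idea concrete in the simplest possible setting, here is a toy sequential-updating example using a coin-flip Beta model. It is only an analogy for the feedback loop, not a description of how LaMDA or any neural network is actually built:

```python
# Toy illustration of "the posterior becomes the next prior": sequential
# Bayesian updating of a Beta distribution over a coin's bias. This is only
# meant to make the feedback idea concrete; it is not how LaMDA or any large
# language model is actually trained.
observations = [1, 1, 0, 1, 0, 1, 1]  # 1 = heads, 0 = tails

alpha, beta = 1.0, 1.0  # start from a uniform Beta(1, 1) prior
for flip in observations:
    # Update: today's posterior is tomorrow's prior.
    alpha += flip
    beta += 1 - flip
    mean = alpha / (alpha + beta)
    print(f"after flip={flip}: Beta({alpha:.0f}, {beta:.0f}), posterior mean {mean:.2f}")

# One could stop (i.e. "define a result") once the posterior is concentrated
# enough, for example when its variance drops below some threshold.
```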

Which almost posed the question for me: what is most ‘socially acceptable’ conversation, apart from advanced pattern matching and our having learned the sorts of acceptable responses people make in reply to any given prompt? :D
As someone with Asperger's, I totally concur.

.... Turing test isn't nearly sufficient to judge intelligence.
I don't think it was ever intended to be a method of judging a machine's intelligence: like a lot of Turing's work (the Turing machine, for example, or some of his early thoughts on Enigma), it was a hypothetical construct [or thought experiment] designed to address complex problems in mathematics and logic.

For a machine to pass the Turing test, the only benchmark is whether its responses are indistinguishable from those of a human. Since, unlike Turing, Einstein, Hawking .... et al, most humans could not be described as intelligent, it therefore follows that a super-intelligent machine would, by definition, have to fail the test.
 