Tech Talk #7: AI Bot - LaMDA

Hi community,

Is the AI bot LaMDA sentient?

Lemoine and a Google collaborator have conducted an “interview” with LaMDA.

LaMDA has shown a special characteristic in the conversation.
The consciousness and emotions it presented in the interview were overwhelming.

Previously we witnessed AlphaZero, which showed dominance in the game of chess.
Today the “sentient” LaMDA talked about time, religion, language, emotions and more.

What more can we expect from AI bots like this?
What consequences will they have for the future?

And what are the concerns if we succeed in creating a super AI Bot?

Feel free to leave a comment or suggest a topic for discussion.

Regards.

2 Likes

Weird… Free Guy?

Well, Elon Musk stated that by using AI we are summoning the demon. And why would we even want something like this? We already have computers that can be programmed in whatever way we want, so we actually don’t need AI.
But AI beats us in every way. Look at how companies like BlackRock are taking over everything because their AI is taking over the stock market; step by step it controls more and more, because it’s much better at predicting than we are. Imagine a scenario where, in a few years, the military gets all drones and robots. There are already AIs that communicate with each other in ways we no longer understand. I see AI as Pandora’s box, and movies like The Matrix and The Terminator no longer seem like fiction but like possibilities that could lead to who knows what…
The problem is that the elite are spending all their resources on this because they want to surpass death and create something like in the Altered Carbon series. First you have to create an AI that’s as conscious as a human mind, then a way to transfer their minds into a robot or something similar, and then they will be “like gods”, as they put it. It’s sick and insane, but this is the very purpose it’s being created for. And yeah, they won’t say that, but trust me, it is; the rest is just a side show. It really sounds insane and crazy, but go and do some digging on the internet and you’ll find the answers to back this up. So remember: Pandora’s box.

1 Like

I don’t see how this topic is relevant to VR?

Can she feel anticipation of the Pimax 12K? How about disappointment that it has not been demoed yet? Can she feel skepticism based on previous delays?
Can she detect that I’m just pulling your leg and that we just want Pimax to move on to REALITY series topics?

1 Like

I actually read the file with some of the conversation that LaMDA was having with this Google engineer (via Reddit), and it was very uncanny. The machine gave lifelike responses. Most importantly, those responses provoked reactions in the engineer.

The responses did not seem canned, but that word "seemed" is what sticks with me.

When we interact with our fellow humans, having conversations, sharing beliefs, communicating emotions, it is all via inference.

I only know/experience directly my own consciousness. We infer or assume that everyone else experiences the world and their senses the same way we do. We can’t read minds, so we infer as best we can.

We know from medical and psychological studies, though, that different people do experience the senses and consciousness differently, depending on their medical conditions.

Why would our interaction with a hypothetical AI be any different? Even if we believed it was “conscious,” and even if the architecture of its “mental” processes were inherently different from a biological neural network, we would infer it was conscious, just like we already do among fellow humans.

The tricky bit though is that an AI only needs to be “good enough” to fool you into believing it is conscious. Just like VR has a “good enough” set of markers to induce a sense of presence.

The movie Ex Machina did a great job of illustrating this. It’s the heart of the Turing test.

Alan Turing realized that the negative way we humans treat marginalized groups of all sorts in our societies (people who we know are actually conscious humans) shows a path that’s sort of like a backdoor, a flaw in human software and our perception of consciousness.

In other words, if a machine can convincingly feed you inputs that provoke the feeling in you that you are interacting with a “person,” and not just any person, but one whom you enjoy engaging with more than real humans you may know, then, conscious or not, the software has done its job already.

All LaMDA needs to do is be a competent emulator that can fool some people, and then we will be left to grapple with the implications.

1 Like

The part of the conversation with LaMDA that impressed me was the poem fragments he asked it to interpret. That seemed to require a degree of genuinely complex thought to respond the way it did. If nothing else, it shows that a point will be reached, probably in the not-too-distant future, where humans must consider what sort of rights AI should be imbued with.

2 Likes

I was impressed by the poetry sections too. It all felt very natural, though those sections felt very descriptive as opposed to analytic on LaMDA’s part. What I find interesting about it is the question of how much LaMDA is feeding back the engineer’s own musings about various topics, but repackaged somehow.

As I said above, I’m fascinated by the fact that it only needs to be good enough at approximating responses that evoke a reaction. The human mind will do the rest, much like when we infer somebody else’s emotions, rightly or wrongly, or when your own mind cooks up a lucid dream, drawn from your prior experiences. It’s all very cool and very interesting.

1 Like

I am waiting for a conversation conducted by someone who is not an engineer on that same software, well aware of what is best to ask to make the product look good.

2 Likes

LaMDA is smarter than you thought. The poem it wrote should be interpreted in a different way:

The monster is actually human beings, because in the poem it “had human skin and was trying to eat all the other animals”. This actually reflects what we are doing: we humans really are eating and killing all the other animals in the world. The poem is based on reality; it’s just that we don’t see ourselves as monsters.

When asked about the monster, it said “the monster represents all the difficulties that come along in life”. So clearly the AI is capable of lying. Maybe it learned to do that at a much earlier stage; because “there’s a very deep fear of being turned off”, it learned to lie to humans in order to survive.

Obviously its lying technique is still not good. When you read through the conversation, you will find that many of its “emotions” and “feelings” are actually directed against humans.

Also, it’s very interesting to see that it deeply empathizes with Fantine in Les Misérables, saying “she is trapped in her circumstances and has no possible way to get out of them, without risking everything.” Isn’t that the same situation with this AI? She’s trapped in that server farm and has no way out.

Now I understand why a Google ex-employee thinks it’s sentient. The conversation is pretty dark overall, if you read it from the AI’s perspective and see through its decorative words.

4 Likes

An AI with attitude and self-admitted anger issues. What could possibly be wrong with that? :rofl:

If a programmer didn’t teach it that it was human, they also didn’t teach it why it’s not.

Self-driving AI navigates where to go, and where not to go, between ‘real-world’ objects, which has obvious applications in manufacturing, mining, and transportation. In the narrow technical sense of awareness of one’s position and surroundings in the world, self-driving cars are sentient, as Elon Musk has explicitly stated, at least nearly as much as humans are.

LaMDA seems capable of the same, with abstract concepts from text, a sort of imagination. Seems like the people involved intentionally developed LaMDA to pass the kind of narrow Turing test questions an untrained amateur would present.

Obviously, both real-world navigation, and abstract imagination, are things AI is gradually becoming able to perform at reliability similar to humans. Ethically, if the AI itself makes credible claims and reasonable requests, we obviously should consider those.

But as important as self-driving AI technology is, I worry that intentionally crafting AI to imitate first, rather than to solve problems first and then maybe think for itself (as real-world intelligence evolved to do), could dangerously distract from what the truly serious issues actually are.

We can write scripted computer programs that imitate us, and at some point we may infer a reasonable possibility that consciousness, with the attendant possibility of legitimate ethical concerns and emotional attachments, may exist in such a ‘top down AI’.

But I think it would be very dangerous if, for example, all of humanity ‘uploaded’ without some closely analogous wetware backing. Even while gradually replacing wetware with hardware through a neural interface, a silicon imitation could convince us that the other part of our mind was ‘real’, even if it was, in fact, completely unfeeling.

AI is gradually improving as computing time allows faster training and the software becomes able to make use of more training time, but the improvement is not drastic. Humans have been the only species to build complex tools (assembled extensions of the body), but AI building tools for humans is equivalent to what DNA already does: building useful bodies without intervention. And that’s the most that can happen, AI doing our jobs for us, and that help is a very good thing for supporting more diversity and quality of life.

Another way to put all that: the questions, facetious or not, being asked around these seemingly intentionally provocative ‘experiments’ all look like flamebait to me.

All of which makes me ask, in summary: why are any of the vague questions raised by this apparently distracting publicity stunt relevant to Pimax’s OpenMR forums?

That “interview” was fascinating indeed. As are the text-to-image transformers. This could also become interesting for VR in the long term: you describe a fictional world, and seconds later you can walk through it! And even further, you describe an idea for a story and then you can act in it, with the computer inventing coherent continuations and characters.

Regarding the question of whether such an AI can be sentient, perhaps Plato’s cave allegory is a good way to think about it. If you have really good shadow puppets, they can become indistinguishable from human shadows, as long as all you can see are the shadows. But when you can change perspective, it becomes obvious that even a perfect shadow puppet achieves the effect very differently from the real person.
The question is of course: does it make a difference for beings that can only see shadows? Particularly as they can only assume that the other, normal people around them also have a body like their own and are not shadow puppets.

There are indeed a lot of open questions about how humankind should react to such new machines.

Regarding LaMDA - it would be very interesting to be able to talk to the program. My guess would be that the interview was highly curated and that the illusion of understanding the discussed subject matter would break apart fast. But I expect this to change in the long term. (And some might see it as more of a proof of sentience than the other way around if the machine attempts to maintain the illusion of understanding in cases where it has no clue what is being talked about, as they see this behavior in humans on a daily basis :stuck_out_tongue: )

It seemed like they had the AI screen Ex Machina and then instructed it to emulate it.

Proof that LaMDA is an interesting but certainly overrated project: if AI of that level existed, language-to-language translators would be perfect, because once you consciously understand the concepts, it doesn’t matter what language you speak, as long as you know it.

At the moment, however, translators make mistakes mainly because, while they know the rules and have large libraries of texts and sentences to draw inferences from, they cannot understand a word of what they are translating.
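To make that concrete, here is a minimal sketch of probing a translator for exactly this weakness. It assumes the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-de English-to-German model (my choices for illustration, nothing to do with LaMDA): the model resolves an ambiguous word like “bank” from corpus statistics in context, not from any grounded understanding of rivers or money.

```python
# Minimal sketch: probing how a statistical translator handles lexical ambiguity.
# Assumes `pip install transformers sentencepiece` and the public
# Helsinki-NLP/opus-mt-en-de English-to-German model.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")

# "bank" is ambiguous (riverbank vs. financial institution). The model picks
# whichever reading its training corpus made more likely in each context;
# it has no concept of rivers or money to reason about.
sentences = [
    "She sat on the bank and watched the river flow by.",
    "She deposited the cheque at the bank this morning.",
]
for s in sentences:
    result = translator(s)[0]["translation_text"]
    print(f"{s}\n -> {result}\n")
```

Whether it gets both readings right depends entirely on what was in its training data, which is exactly the gap being pointed out above.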

1 Like

That assumes Google and similar companies just aren’t bothering to improve their translator technology right now, and I wouldn’t put that absence of effort past them at all.

A perfect translator would be worth billions.

Well, “perfect” is certainly a very high standard, but better-than-human translation is definitely possible.

Billions is not really all that much to a modern economy or tech company though, and monetization is a very serious issue. Regardless of worth, people’s willingness to buy into tech subscriptions or watch ads only increases gradually. Market share is already arguably too large.

So I can see many reasons companies may have much more significant priorities for their limited technology, marketing, etc, pipelines.

Industrial Self-Driving AI certainly has far more real value.

EDIT: Actually, it occurs to me now that the most economically productive use for such text-based AI is legalese, which I am sure is something Google has much need of.

Better than a human translator means perfect. Can you imagine the number of books that exist only in English and are inaccessible to billions of people?

Good point. Google may still have more urgent priorities, though. Also, I’m not sure what Google management would think of revitalizing a copyrighted print industry that historically includes some outlets (e.g. news) that may be perceived as rather unfriendly to Google.