
What Is Google LaMDA and Why Do Some Believe It Is Intelligent?

LaMDA has been in the news after a Google engineer asserted that it is sentient, arguing that its responses indicate it understands what it is.

The engineer also suggested that LaMDA communicates its fears in a manner similar to humans.

What is LaMDA, and why do some people believe it can attain consciousness?

Language Models

LaMDA is a language model. In natural language processing, a language model analyses how language is used.

Fundamentally, it is a mathematical function (or a statistical tool) that describes the probability of possible next words in a sequence.

It can predict not only the next word but also the sequence of sentences or paragraphs that may follow.
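To make the idea concrete, here is a minimal sketch of next-word prediction using a toy bigram model in Python. It is purely illustrative: LaMDA and similar models use deep neural networks trained on billions of documents, not word-pair counts.

```python
from collections import Counter, defaultdict

# Toy training corpus; real language models are trained on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str):
    """Return the most probable next word and its estimated probability."""
    counts = next_word_counts[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5) on this toy corpus
```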

The GPT-3 language generator developed by OpenAI is an example of a language model.

With GPT-3, you can input a topic and instructions to write in a particular author’s style, and it will generate, for example, a short story or essay.

LaMDA differs from other language models in that it was trained on dialogue rather than text.

Whereas GPT-3 focuses on generating text, LaMDA focuses on generating dialogue.

 

Why It’s Important

LaMDA is a significant innovation because it can generate conversation in a manner that is not constrained by the parameters of task-based responses.

A conversational language model must understand concepts such as multimodal user intent, reinforcement learning, and recommendations so that the conversation can jump between unrelated subjects.

 

Utilizing the Transformer Architecture

Similar to other language models (such as MUM and GPT-3), the LaMDA language understanding model is based on the Transformer neural network architecture.

Google writes about Transformer: “This architecture produces a model that can be trained to read many words (a sentence or paragraph, for instance), pay attention to how those words relate to one another, and then predict what words it believes will follow.”
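As a rough illustration of the “pay attention to how those words relate to one another” step, here is a minimal sketch of scaled dot-product self-attention, the core operation of the Transformer, in Python with NumPy. It leaves out the learned query/key/value projections, multiple attention heads, and feed-forward layers that a real model such as LaMDA uses.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of word vectors.

    x has shape (sequence_length, embedding_dim). In a real Transformer the
    queries, keys, and values are learned linear projections of x; here x is
    used directly to keep the sketch short.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # how strongly each word attends to every other word
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ x                               # each output mixes information from all words

# Three "words", each represented by a 4-dimensional vector.
sentence = np.random.rand(3, 4)
print(self_attention(sentence).shape)  # (3, 4)
```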

Why It Appears That LaMDA Understands Conversation

BERT is a model that has been trained to interpret ambiguous phrases.

LaMDA is a model trained to comprehend the dialogue’s context.

This capacity to comprehend context enables LaMDA to keep up with the flow of conversation and convey the impression that it is attentively listening and precisely responding to what is being said.

It is trained to determine whether a response makes sense in a given context and is specific to that context.

Google describes it thus:

“…in contrast to the majority of other language models, LaMDA was trained on dialogue. During its training, it learned a number of the subtleties that distinguish open-ended conversation from other types of language. One of these nuances is sensibleness. Does the response make sense in the given conversational context?

“Satisfying responses also tend to be specific by clearly relating to the conversational context.”

LaMDA Is Based on Algorithms

Google released its LaMDA announcement in May 2021.

The research paper describes how LaMDA was trained to generate dialogue using three metrics:

1. Quality
2. Safety
3. Groundedness

Quality

The Quality metric is derived from three other metrics:

1. Sensibleness
2. Specificity
3. Interestingness

According to the research paper, “We collect annotated data that describes how sensible, specific, and engaging a response is for a multiturn context. Then, we employ these annotations to fine-tune a discriminator in order to reorder candidate responses.”
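In other words, the model proposes several candidate responses, a discriminator scores each one, and the candidates are reordered by that score. The sketch below illustrates the reranking idea only; the score_ssi function is a hypothetical stand-in, since in the paper the scores come from the fine-tuned model itself, trained on crowdworker annotations of sensibleness, specificity, and interestingness.

```python
def score_ssi(context: str, response: str) -> float:
    """Hypothetical stand-in for the fine-tuned SSI discriminator.

    A real discriminator is a neural network trained on human annotations;
    this placeholder simply rewards responses that are longer (a crude proxy
    for specificity) and that share words with the context (a crude proxy
    for sensibleness).
    """
    specificity = min(len(response.split()) / 20, 1.0)
    overlap = len(set(context.lower().split()) & set(response.lower().split()))
    sensibleness = min(overlap / 3, 1.0)
    return sensibleness + specificity

def rerank(context: str, candidates: list[str]) -> list[str]:
    # Keep the candidates the generator proposed, best-scoring first.
    return sorted(candidates, key=lambda r: score_ssi(context, r), reverse=True)

print(rerank(
    "What do you think of Rosalie Gascoigne's sculptures?",
    ["Nice.", "I love how Gascoigne's sculptures reuse weathered found materials."],
))
```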

Safety

The Google researchers employed diverse crowd workers to assist in labelling responses as unsafe.

These labelled data were utilised for training LaMDA:

“Then, we use these labels to fine-tune a discriminator in order to detect and eliminate unsafe responses.”
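Conceptually, the safety step acts as a filter in front of the quality reranking: candidate responses the discriminator flags as unsafe are discarded before the rest are ranked. The sketch below only illustrates that filtering idea; unsafe_probability is a hypothetical placeholder, not Google’s classifier.

```python
def unsafe_probability(response: str) -> float:
    """Hypothetical placeholder for the fine-tuned safety classifier."""
    blocklist = {"insult", "threat"}  # a toy proxy for a learned safety model
    return 1.0 if any(word in response.lower() for word in blocklist) else 0.1

def filter_safe(candidates: list[str], threshold: float = 0.5) -> list[str]:
    # Drop any candidate whose predicted probability of being unsafe is too high.
    return [r for r in candidates if unsafe_probability(r) < threshold]
```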

Groundedness

Groundedness is a training process that taught LaMDA to research the factual validity of its answers, meaning that answers can be verified against “known sources.”

This is significant because, according to the research paper, neural language models produce statements that appear correct but are actually incorrect and unsupported by facts from reputable information sources.

The human crowdworkers used tools such as a search engine (information retrieval system) to fact-check responses so that the AI could learn to do the same.

“We find that augmenting model outputs with the ability to use external tools, such as an information retrieval system, is a promising strategy for achieving this objective.

Therefore, we collect data from an environment in which crowdworkers can use external tools to research factual claims and train the model to imitate their behaviour.”
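A rough picture of what that data collection might look like is sketched below. The record structure and field names are assumptions for illustration only; they are not the paper’s actual data format.

```python
from dataclasses import dataclass

@dataclass
class GroundingExample:
    context: str         # the conversation so far
    draft: str           # the model's unverified response
    tool_query: str      # the search query the crowdworker issued
    tool_result: str     # what the information retrieval system returned
    final_response: str  # the corrected, source-supported response

# One hypothetical training record, echoing the Rosalie Gascoigne example
# discussed later in this article.
example = GroundingExample(
    context="What do you think of the sculptures by Rosalie Gascoigne?",
    draft="She was one of the artists who influenced Miró.",
    tool_query="Rosalie Gascoigne biography",
    tool_result="Gascoigne practised Japanese flower arrangement before turning to sculpture.",
    final_response="Did you know she practised Japanese flower arrangement before turning to sculpture?",
)

# The model is then fine-tuned to imitate this behaviour: given the context
# and a draft, produce the tool query and the grounded final response.
```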

Trainers used human examples and raters to instruct LaMDA.

The third section of the research paper describes how LaMDA was trained using a corpus of billions of documents, dialogues, and utterances totalling 1.56 trillion words.

Section 4.2 describes how humans evaluated the LaMDA responses. The ratings teach LaMDA what it is doing well and what it is doing poorly.

The human raters use an information retrieval system (search engine) to verify the answers and rank them according to their usefulness, accuracy, and veracity.

LaMDA Training Employed a Search Engine

Section 6.2 explains how LaMDA receives a question and produces an answer. After the response has been generated, it performs a search query to validate its accuracy and revises it if necessary.

The aforementioned research paper illustrates the process of receiving a question, drafting a response, researching the response, and then revising it with accurate information.

“USER: What do you think of the sculptures by Rosalie Gascoigne?

LAMDA-BASE: They’re wonderful, and I adore how her work evolved over the course of her life. I prefer her later work to her earlier work. Her influence is also extremely intriguing; did you know that she was one of the artists who influenced Miró?”

The issue with the response is that it is factually inaccurate. Therefore, LaMDA executes a search query and selects information from the top results.

It then responds with an updated factual response:

“Oh wow, her life trajectory is truly inspiring. Did you know that, before turning to sculpture, she was a practitioner of Japanese flower arrangement?”

Note the “Oh wow” portion of the response; that is a human-learned form of speech.

It appears as though a human is speaking, but it is simply imitating a speech pattern.
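The generate-then-verify loop the paper describes can be sketched roughly as follows. Every function here is a hypothetical stand-in: in LaMDA, the draft, the search queries, and the revision are all produced by the model itself, calling out to a real information retrieval system.

```python
def draft_response(question: str) -> str:
    # Stand-in for the base model's first, unverified answer (it may be wrong).
    return "They're wonderful. Did you know she was one of the artists who influenced Miró?"

def search(query: str) -> str:
    # Stand-in for the external information retrieval system (search engine).
    return "Rosalie Gascoigne practised Japanese flower arrangement before turning to sculpture."

def revise(draft: str, evidence: str) -> str:
    # A real model rewrites the draft so its claims are supported by the evidence.
    return "Oh wow, her life trajectory is truly inspiring. " + evidence

def grounded_answer(question: str) -> str:
    draft = draft_response(question)
    evidence = search(question)
    return revise(draft, evidence)

print(grounded_answer("What do you think of the sculptures by Rosalie Gascoigne?"))
```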

Language Models Emulate Human Responses

I questioned Jeff Coyle, co-founder of MarketMuse and AI expert, about the claim that LaMDA is intelligent.

Jeff stated: “The ability of the most advanced language models to simulate sentience will continue to improve.

“Skilled operators can use chatbot technology to have a conversation that simulates text sent by a living person.

“This creates a confusing situation in which something feels human and the model is able to ‘lie’ and mimic sentience.

“It is able to lie. It can plausibly say, ‘I feel sad and happy,’ or ‘I am in pain.’

“But it’s copying, imitating.”

LaMDA is intended to do one thing: provide conversational responses that make sense and are relevant to the dialogue context. As Jeff explains, this can give it the appearance of intelligence, but it is essentially lying.

Consequently, although LaMDA’s responses feel like a conversation with a sentient being, it is simply doing what it was programmed to do: responding to dialogue in a manner that makes sense in the context of the conversation and is highly specific to that context.

Section 9.6 of the research paper, titled “Impersonation and Anthropomorphization,” explicitly acknowledges that LaMDA impersonates humans.

This level of impersonation may lead some people to anthropomorphize LaMDA.

“Finally, it is important to recognise that LaMDA’s learning is based on mimicking human performance in conversation, like many other dialogue systems… It is now highly probable that high-quality, engaging conversations with artificial systems will one day be indistinguishable in some ways from conversations with humans.

“Humans may interact with systems without realising they are artificial, or may attribute personality to the system.”

The Sentience Question

Google intends to develop an artificial intelligence (AI) model that can comprehend text and languages, recognise images, and generate conversations, stories, or images.

Google is working toward this AI model, which it describes in “The Keyword” as the Pathways AI Architecture.

“AI systems today are frequently trained from scratch for each new problem… Instead of extending existing models to learn new tasks, we train each new model from scratch to perform a single task…

“Consequently, we end up creating thousands of models for thousands of distinct tasks.

“Instead, we would like to train a single model that is not only capable of performing multiple distinct tasks, but can also draw upon and combine its existing skills to learn new tasks more quickly and efficiently.

In this way, what a model learns by training on one task – for instance, how aerial images can predict the elevation of a landscape – can help it learn another task – for instance, how flood waters will flow through that terrain.”

Pathways AI aims to learn concepts and tasks it has not been trained on, much as humans can, regardless of the modality (vision, audio, text, dialogue, etc.).

Language models, neural networks, and language model generators typically specialise in a single task, such as text translation, text generation, or image recognition.

A system such as BERT can determine the meaning of ambiguous sentences.

Likewise, GPT-3 has only one function, which is to generate text. It can generate a story in the style of Stephen King or Ernest Hemingway, as well as a story that combines the two authors’ styles.

Some models, such as LIMoE, can process text and images simultaneously. There are also multimodal models, such as MUM, that can provide answers drawn from different types of information across languages.

However, none of them reach the level of Pathways.

LaMDA Imitates Human Conversation

The engineer who claimed that LaMDA is sentient stated in a tweet that he cannot support those claims and that his statements regarding personhood and sentience are based on his religious beliefs.

In other words, there is no evidence to support these claims.

The evidence we do have is set out explicitly in the research paper: LaMDA’s impersonation skill is so high that people may anthropomorphize it.

The researchers also note that malicious actors could use this system to impersonate a real person and trick someone into believing they are speaking with a specific person.

“…adversaries could potentially attempt to tarnish another person’s reputation, exploit their status, or spread misinformation by impersonating their conversational style using this technology.”

According to the research paper, LaMDA is trained to imitate human dialogue, and that is about it.
