LaMDA Google AI Chatbot

Context

  • A senior engineer at Google claimed that the company’s artificial intelligence-based chatbot LaMDA (Language Model for Dialogue Applications) had become “sentient”.

What is LaMDA?

  • Google first announced LaMDA at its flagship developer conference, I/O, in 2021 as its generative language model for dialogue applications, intended to let the Google Assistant converse on virtually any topic.
  • In the company’s own words, the tool can “engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications”.
  • In simple terms, this means LaMDA can hold a discussion based on a user’s inputs, thanks to its language processing models, which have been trained on large amounts of dialogue.

    (Image: LaMDA. Photo Credit: Getty Images)
  • At this year’s I/O, Google announced LaMDA 2.0, which builds further on these capabilities.
  • The new model can take an idea and generate “imaginative and relevant descriptions”, stay on a particular topic even if a user strays off-topic, and suggest a list of things needed for a specified activity.

How is LaMDA different from other chatbots?

  • Chatbots like ‘Ask Disha’ of the Indian Railway Catering and Tourism Corporation Limited (IRCTC) are routinely used for customer engagement.
  • The repertoire of topics and chat responses is narrow.
  • The dialogue is predefined and often goal-directed. For instance, try chatting about the weather with Ask Disha or about the Ukrainian crisis with the Amazon chat app.
  • LaMDA is Google’s answer to the quest for developing a non-goal directed chatbot that dialogues on various subjects.
  • The chatbot would respond the way a family might chat over the dinner table, with topics meandering from the taste of the food to rising prices to bemoaning the war in Ukraine.
  • Such advanced conversational agents could revolutionise customer interaction and help AI-enabled internet search, Google hopes.
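The contrast above can be sketched in code. The intents and replies below are hypothetical, not IRCTC’s actual implementation; the point is that a goal-directed bot matches input against a fixed repertoire and falls back on anything off-topic:

```python
# A minimal sketch of a goal-directed chatbot: it matches user input
# against a fixed set of intents and falls back when the topic is
# outside its narrow repertoire. Intents and replies are invented
# for illustration.

INTENTS = {
    "book ticket": "Sure, which train would you like to book?",
    "cancel ticket": "Please share your PNR number to cancel.",
    "refund status": "Refunds are processed within 5-7 working days.",
}

def goal_directed_reply(user_input: str) -> str:
    text = user_input.lower()
    for intent, reply in INTENTS.items():
        if intent in text:
            return reply
    # Off-topic queries (weather, world events) hit the fallback.
    return "Sorry, I can only help with ticket-related queries."

print(goal_directed_reply("I want to book ticket to Delhi"))
print(goal_directed_reply("What do you think about the weather?"))
```

A model like LaMDA, by contrast, has no fixed intent table; it generates responses from patterns learned over large amounts of dialogue.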

Which were the first chatbots to be devised?

  • ELIZA, a computer programme developed by Joseph Weizenbaum at MIT in the mid-1960s, was one of the first with which users could chat.
  • ALICE (Artificial Linguistic Internet Computer Entity), another early chatbot, was capable of simulating human conversation.
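Early chatbots like ELIZA worked by simple pattern matching and canned reflections rather than any real understanding. The rules below are illustrative, not Weizenbaum’s actual script:

```python
import re

# A tiny ELIZA-style sketch: match the user's sentence against a
# pattern and reflect part of it back in a canned template. The
# rules here are invented for illustration.

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {}."),
]

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    # No pattern matched: fall back to a neutral prompt.
    return "Please go on."

print(eliza_reply("I am worried about my exams"))
# -> Why do you say you are worried about my exams?
```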

What is a neural network?

  • A neural network is an AI technique that attempts to mimic the web of neurons in the brain, allowing software to learn and behave like humans.
  • Early efforts in building neural networks targeted image recognition.
  • An artificial neural network (ANN) needs to be trained, much like a dog, before it can carry out a command.
    • For example, during the image recognition training, thousands of specific cat images are broken down to pixels and fed into the ANN.
    • Using complex algorithms, the ANN’s mathematical system extracts particular characteristics from each cat image: a line that curves from right to left at a certain angle, edges, or several lines that merge to form a larger shape. From these parameters, the software learns to recognise the key patterns that define what a ‘cat’ generally looks like.
  • Early machine learning software needed human assistance.
  • The training images had to be labelled as ‘cats’, ‘dogs’ and so on by humans before being fed into the system. In contrast, access to big data and powerful processors is enough for emerging deep learning software.
  • Such software learns by itself, unsupervised by humans, by sorting and sifting through the massive data and finding the hidden patterns.
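The supervised training described above can be sketched with a single artificial neuron. The toy data stands in for pixel-derived features, and the 1/0 labels play the role of the human-provided ‘cat’/‘not cat’ tags; this is an illustration, not real image recognition:

```python
# A minimal supervised-learning sketch: one artificial neuron adjusts
# its weights toward human-labelled examples, mirroring how early
# ANNs needed labelled training images. Features and labels here are
# invented toy data.

# Each example: (features, human-provided label); 1 = 'cat', 0 = 'not cat'.
TRAINING_DATA = [
    ([1.0, 0.2], 1),
    ([0.9, 0.1], 1),
    ([0.2, 0.9], 0),
    ([0.1, 1.0], 0),
]

weights = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(features):
    # Weighted sum of the features, thresholded to a yes/no decision.
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Repeatedly nudge the weights toward the human-given labels.
for _ in range(20):
    for features, label in TRAINING_DATA:
        error = label - predict(features)
        for i in range(len(weights)):
            weights[i] += LEARNING_RATE * error * features[i]
        bias += LEARNING_RATE * error

print(predict([0.95, 0.15]))  # cat-like features -> 1
print(predict([0.15, 0.95]))  # not-cat-like features -> 0
```

Deep learning replaces these hand-labelled features and single neuron with many stacked layers that discover the useful patterns on their own.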

Is the technology dangerous?

  • The challenge of AI metamorphosing into a sentient being lies far in the future; the real dangers to watch for are unethical AI perpetuating historical bias and echoing hate speech.
  • Imagine an AI software trained with past data to select the most suitable candidates from applicants for a supervisory role.
  • Women and marginalised communities would rarely have held such positions in the past, not because they were unqualified, but because they were discriminated against.
  • While we imagine the machine to have no bias, AI software learning from historical data could inadvertently perpetuate discrimination.
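The hiring example above can be made concrete with a toy model. The groups and numbers below are invented purely for illustration; the ‘model’ simply learns past selection rates, and so inherits whatever imbalance the historical records contain:

```python
# A toy sketch of how learning from biased historical data reproduces
# discrimination. Groups and records are invented for illustration.

# Historical hiring records: (group, was_selected)
HISTORY = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False),
    ("group_b", True),
]

def learned_selection_rate(group):
    # The 'training' step: the model's only knowledge is the past.
    records = [selected for g, selected in HISTORY if g == group]
    return sum(records) / len(records)

def model_recommends(group, threshold=0.5):
    # The model inherits whatever imbalance the past data contains.
    return learned_selection_rate(group) >= threshold

print(model_recommends("group_a"))  # True: favoured in the past
print(model_recommends("group_b"))  # False: discriminated against in the past
```

Nothing in the code mentions qualifications; the skew in the historical records alone is enough to drive the model’s recommendations.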

Reference:

https://indianexpress.com/article/explained/explained-senior-google-engineer-ai-based-chatbot-lamda-sentient-7967054/

https://www.thehindu.com/sci-tech/technology/can-the-new-google-chatbot-be-sentient/article65526400.ece

