How Do Computers Understand Human Language?


Photo by Alex Knight on Unsplash
Natural languages are the languages we speak and understand, with large, diverse vocabularies. Words can have several different meanings, speakers have all sorts of accents, and there is no end of interesting word play. But for the most part, humans can roll right through these challenges. The skillful use of language is a major part of what makes us human, and for this reason the desire for computers that understand or speak human language has been around since they were first conceived. This led to the creation of natural language processing, or NLP.
Natural language processing is an interdisciplinary field combining computer science and linguistics. There is an essentially infinite number of ways to arrange words in a sentence, so we can't give computers a dictionary of all possible sentences to help them understand what humans are blabbing on about. An early and fundamental NLP problem, then, was deconstructing sentences into smaller pieces that could be more easily processed.
In school you learned about nine fundamental types of English words.
  1. Nouns
  2. Pronouns
  3. Articles
  4. Verbs
  5. Adjectives
  6. Adverbs
  7. Prepositions
  8. Conjunctions
  9. Interjections
These are all called parts of speech. There are all sorts of sub-categories too, like singular vs. plural nouns and superlative vs. comparative adverbs, but we are not going into that here. Knowing a word's type is definitely useful, but unfortunately many words have multiple meanings: 'rose' and 'leaves', for example, can be used as nouns or verbs.
A digital dictionary alone is not enough to resolve this ambiguity, so computers also need to know some grammar. For this, phrase structure rules were developed, which encapsulate the grammar of a language. For example, in English there is a rule that says a sentence can consist of a noun phrase followed by a verb phrase. A noun phrase can be an article like "the" followed by a noun, or an adjective followed by a noun. You can write rules like this for an entire language. Then, using these rules, it is fairly easy to construct what is called a parse tree, which not only tags every word with a likely part of speech but also reveals how the sentence is constructed.
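
To make this concrete, here is a minimal sketch in Python using the NLTK library. The toolkit choice, the toy grammar, and the example sentence are mine for illustration; the idea is simply part-of-speech tagging followed by parsing with phrase structure rules:

    # A minimal sketch, assuming the NLTK library: tag each word's part of
    # speech, then parse the sentence with a tiny hand-written phrase
    # structure grammar.
    import nltk

    # One-time model downloads for the tokenizer and POS tagger.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    sentence = "The mongoose fought the snake"
    tokens = nltk.word_tokenize(sentence)

    # Tag each word with a likely part of speech, e.g. DT (article/determiner),
    # NN (noun), VBD (past-tense verb).
    print(nltk.pos_tag(tokens))

    # Phrase structure rules: a sentence (S) is a noun phrase (NP) followed
    # by a verb phrase (VP); a noun phrase is an article plus a noun; etc.
    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> Det N
        VP -> V NP
        Det -> 'The' | 'the'
        N  -> 'mongoose' | 'snake'
        V  -> 'fought'
    """)
    parser = nltk.ChartParser(grammar)
    for tree in parser.parse(tokens):
        tree.pretty_print()  # draws the parse tree as ASCII art

The parser prints a tree that splits the sentence into its noun phrase and verb phrase, exactly the structure the rules describe.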
Breaking language into smaller chunks lets computers more easily access, process, and respond to information. Equivalent processes happen every time you do a voice search like 'where is the nearest pizza?'. The computer recognizes that this is a 'where' question, knows you want the noun 'pizza', and that the dimension you care about is 'nearest'. The same process applies to 'what is the biggest giraffe?' or 'who sang thriller?'. By treating language almost like Lego, computers can be quite adept at natural language tasks. They can answer questions and also process commands like 'set an alarm for 2:20'. But as you have probably experienced, they fail when you start getting fancy and they can no longer parse the sentence correctly or capture your intent.
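
As a rough illustration of that Lego idea, here is a toy sketch of pulling an intent out of a simple question. The patterns and names are invented; a real voice assistant is far more sophisticated than this:

    # A toy intent extractor: find the question word, an optional modifier,
    # and the subject noun. Purely illustrative pattern matching.
    import re

    QUERY = re.compile(
        r"^(?P<wh>who|what|where|when)\s+(?:is|was|sang)?\s*(?:the\s+)?"
        r"(?P<modifier>nearest|biggest|smallest)?\s*(?P<subject>.+?)\??$",
        re.IGNORECASE,
    )

    def parse_query(query: str) -> dict:
        match = QUERY.match(query.strip())
        if not match:
            return {"error": "could not parse query"}
        # Drop the named groups that didn't match anything.
        return {k: v for k, v in match.groupdict().items() if v}

    print(parse_query("where is the nearest pizza?"))
    # {'wh': 'where', 'modifier': 'nearest', 'subject': 'pizza'}
    print(parse_query("who sang thriller?"))
    # {'wh': 'who', 'subject': 'thriller'}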
I should also mention that phrase structure rules, and similar methods that codify language, can be used by computers to generate natural language text. This works especially well when data is stored in a web of semantic information, where entities are linked to one another in meaningful relationships, providing all the ingredients you need to craft informational sentences.
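
For example, here is a hedged sketch of turning a few (subject, relationship, object) facts into sentences with a simple template; the facts and the template are chosen only for illustration:

    # Generate sentences from a tiny web of semantic triples.
    triples = [
        ("Michael Jackson", "sang", "Thriller"),
        ("Thriller", "was released in", "1982"),
    ]

    def generate_sentence(subject: str, relation: str, obj: str) -> str:
        # Fill a bare subject-relation-object template.
        return f"{subject} {relation} {obj}."

    for triple in triples:
        print(generate_sentence(*triple))
    # Michael Jackson sang Thriller.
    # Thriller was released in 1982.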
These two processes, parsing and generating text, are fundamental components of natural language chatbots: computer programs that chat with you. Early chatbots were primarily rule-based, with experts encoding hundreds of rules mapping what a user might say to how a program should reply. But this was difficult to maintain and limited how sophisticated they could be.
A famous early example was Eliza, created in the mid-1960s at MIT. This was a chatbot that took on the role of a therapist and used basic syntactic rules to identify content in written exchanges, which it would turn around and ask the user about. Sometimes it felt very much like human-to-human communication, but other times it would make simple, even comical, mistakes. Chatbots have come a long way in the last fifty years and can be quite convincing today.
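
To give a flavor of how Eliza-style rules work, here is a minimal sketch assuming simple regular-expression rules plus pronoun reflection. The original ELIZA used its own pattern-matching machinery, so treat this as an illustrative reconstruction, not the historical program:

    import re

    # Each rule pairs a pattern in the user's input with a template that
    # turns the statement back into a question, therapist-style.
    RULES = [
        (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
    ]

    # Swap first-person words for second-person ones before echoing them back.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def reply(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(reflect(match.group(1).rstrip(".!?")))
        return "Please, go on."  # fallback when no rule matches

    print(reply("I feel anxious about my exams."))
    # Why do you feel anxious about your exams?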
In parallel, the algorithms for processing natural language have moved from hand-crafted rules to machine learning techniques that learn automatically from existing datasets of human language. Today the speech recognition systems with the best accuracy use deep learning.

Related Read: Deep Learning: A Quick Overview
