
“OK Google: What is Natural Language Processing?”

How machines read, decode and understand human language

by Sandra Wartner, MSc

Not every language is created equal – while humans have created their own communication channels over thousands of years, millions of zeros and ones serve as machine code, or machine language, for computers to understand and execute commands. Natural language processing, or NLP, enables machines to read, decode and understand human language. Speech assistants, spelling correctors, email spam filters – NLP as a technology is omnipresent and already hides behind many processes and software applications deeply embedded in our everyday lives. The often hidden potential in many mountains of data is far from exhausted.

Table of contents

  • Artificial Intelligence as an interface between human and machine
  • Text analytics and use cases in a company
  • Conclusion
  • Author

The flood of data generated by us humans is growing day by day. For the year 2020 alone, growth statistics showed that 1.7 MB of data was generated per person every second. We send photos, store documents in the cloud, stream music or videos, communicate via video conferencing tools, and use many more conveniences that the Internet offers us. Approximately 90 % of the world's data was generated in the last two years alone – and the numbers continue to rise. The COVID pandemic is also contributing to a sharp rise in the growth rate, among other things due to the increased need for online communication and working from home.

A considerably large amount of the existing data consists of text data. We primarily generate these ourselves by writing emails, product reviews, tweets, or text messages, for example. At the same time, we can use the potential of the continuously growing mountains of data to create the applications that increasingly support us in our everyday lives in the first place. We use translation functions from one language to another (e.g., DeepL), programs alert us to typos when composing texts and messages, digital voice assistants such as Alexa, Cortana, Siri and co. support us in a wide range of activities, and search engines offer search completion – all these services and functions are built on one essential technology: Natural Language Processing (NLP).


Artificial Intelligence as an interface between human and machine

Machine processing of natural language is not a new field of research; however, due to the availability of higher computing power, enormous amounts of data (Big Data), and modern algorithms, recent years have brought a multitude of revolutionary achievements in the NLP environment: computers are able to read, understand, and speak. As an interdisciplinary field of linguistics, computer science, and artificial intelligence (AI), NLP enables communication between humans and machines in different forms (written and spoken) and in a variety of languages.

If we want the Google Assistant on our smartphone to explain NLP to us in a synthesized voice, a simple "OK Google" followed by the question will suffice. Ideally, we receive an answer that satisfies us and provides exactly the information we were looking for. While this task sounds relatively simple for a human, for a machine it means breaking down language into its elementary components, understanding the question and its context, and solving several different problems in sequence.

Natural Language Understanding (NLU) focuses on the extraction of information from text and thus on building an understanding of the text with respect to a certain aspect. Syntax (grammatical structure) and semantics (the meaning of words) play an important role here. Examples include:

  • grammatical analysis (e.g., Part-of-Speech (POS) tagging),
  • recognition of people, places, or other keywords in texts (e.g., Named Entity Recognition (NER)),
  • sentiment and opinion analysis, and
  • classification of texts into predefined categories.
NLP Explained
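To make the NER task above concrete, here is a deliberately minimal, dictionary-based sketch. Real NER systems use trained statistical or neural models; the gazetteer entries and function name below are illustrative assumptions, not a real library API.

```python
# Toy dictionary-based entity tagging (a "gazetteer" lookup).
# Production NER relies on trained models; this only illustrates the
# input/output shape of the task: tokens mapped to entity labels.

GAZETTEER = {
    "vienna": "LOCATION",
    "linz": "LOCATION",
    "sandra": "PERSON",
    "google": "ORGANIZATION",
}

def tag_entities(text: str) -> list[tuple[str, str]]:
    """Return (token, label) pairs for tokens found in the gazetteer."""
    entities = []
    for token in text.split():
        word = token.strip(".,!?\"'")
        if word.lower() in GAZETTEER:
            entities.append((word, GAZETTEER[word.lower()]))
    return entities

print(tag_entities("Sandra asked Google about trains from Linz to Vienna."))
# → [('Sandra', 'PERSON'), ('Google', 'ORGANIZATION'), ('Linz', 'LOCATION'), ('Vienna', 'LOCATION')]
```

A lookup like this fails on unseen names and ambiguous words, which is exactly why modern NER models learn from context instead of fixed word lists.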

Natural Language Generation (NLG) focuses on the generation of natural language and is used, among other things, for the automated creation, summarization or translation of texts.

Since NLU and NLG work exclusively with written language, a component for speech recognition (Speech-to-Text, STT for short) and speech synthesis (Text-to-Speech, TTS for short) is often required, which then act as an interface between the NLP system and the real world.
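The STT → NLU → NLG → TTS chain described above can be sketched as a pipeline of composed stages. Every function below is a stub standing in for a real component (speech recognizer, NLU model, NLG model, synthesizer); the names and the hard-coded transcript are assumptions for illustration only.

```python
# Illustrative voice-assistant pipeline: each stage is a stand-in stub.

def speech_to_text(audio: bytes) -> str:
    # A real STT engine would decode the audio signal here.
    return "what is natural language processing"

def understand(text: str) -> dict:
    # Stand-in NLU: detect that the utterance is a definition question.
    intent = "define" if text.startswith("what is") else "unknown"
    topic = text.removeprefix("what is ").strip()
    return {"intent": intent, "topic": topic}

def generate_answer(query: dict) -> str:
    # Stand-in NLG: fill a template instead of generating free text.
    if query["intent"] == "define":
        return f"Here is what I found about {query['topic']}."
    return "Sorry, I did not understand the question."

def text_to_speech(text: str) -> bytes:
    # A real TTS engine would return an audio waveform here.
    return text.encode("utf-8")

audio_in = b"\x00\x01"  # pretend microphone input
answer = generate_answer(understand(speech_to_text(audio_in)))
print(text_to_speech(answer).decode("utf-8"))
# → Here is what I found about natural language processing.
```

The point of the sketch is the interface: speech only enters and leaves the system at the STT/TTS boundary, while NLU and NLG work purely on written text.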

For the "OK Google" example, this means that the spoken request is first converted into written language by means of STT. The request, once recognized via NLU, can then be acted upon, for example by collecting and evaluating relevant search results. Depending on the type of result, the knowledge generated in this way can usually be rendered acoustically with the help of NLG and TTS, or the best hits can be displayed on the device.

NLP is considered one of the most complicated problems in computer science. Natural language in itself has no readily identifiable structure (it is often referred to as unstructured data) and is a complex system of strung-together, partially interdependent characters, and is therefore not easy to understand. German, English, Russian, Japanese, Arabic – each language has its own complex syntax and peculiarities. Further complications arise because language often does not work in a linear way, but makes use of stylistic devices, idioms, and information between the lines. Recognizing sarcasm is not always possible even for a human. Ambiguities of individual words have to be resolved by analyzing the context, e.g., to decide whether the word "bank" refers to a bench or a financial institution. Mumbling, stuttering, speaking in dialect, and background noise make it difficult for a voice assistant to evaluate the information and can lead to an incorrect answer. Algorithms have to face these and several other challenges in order to meet their requirements.
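The "bank" ambiguity just mentioned can be illustrated with a toy context-overlap heuristic, loosely in the spirit of the classic Lesk algorithm: pick the sense whose clue words share the most vocabulary with the sentence. The clue-word lists are illustrative assumptions, not an actual lexicon.

```python
# Toy word-sense disambiguation for the ambiguous word "bank".
# Each candidate sense has a small set of context clue words (an
# assumption for illustration); the sense with the largest overlap
# with the sentence's words wins.

SENSE_CLUES = {
    "financial institution": {"money", "account", "loan", "deposit", "interest"},
    "bench/seat": {"park", "wooden", "rest", "garden", "sat"},
}

def disambiguate(sentence: str) -> str:
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    # Pick the sense whose clue words overlap most with the context.
    return max(SENSE_CLUES, key=lambda sense: len(SENSE_CLUES[sense] & words))

print(disambiguate("She opened an account at the bank to deposit money."))
# → financial institution
print(disambiguate("He sat on the wooden bank in the park to rest."))
# → bench/seat
```

Real disambiguation models replace the hand-written clue sets with word embeddings or contextual representations learned from large corpora, but the underlying idea – meaning is resolved through context – is the same.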

Older systems relied on rule-based or purely statistical approaches; the breakthrough only came with machine learning (especially deep learning) and the availability of large amounts of data. Machine learning models try to infer general patterns from a set of examples (How do people use language? Which grammar rules are applied?) and apply them to decide individual cases – similar to a child learning human language. The more examples the system is provided with, and the better they reflect reality or the future application scenario, the higher its accuracy on new, unknown tasks. Currently, the most promising, state-of-the-art results for NLP tasks are obtained with deep learning algorithms, which allow more complex modeling than conventional machine learning models. Deep learning was inspired by the way the human brain works and employs multi-layered neural networks. These highly connected structures enable "deep" learning, which is essential for a construct as complex as language.
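Learning patterns from labelled examples can be shown end to end with a minimal Naive Bayes text classifier built from scratch. The tiny spam/ham training set is a toy assumption; real systems train far richer models on large corpora, but the principle – estimate word statistics from examples, then score new texts – is the same.

```python
# Minimal Naive Bayes text classifier, trained on a toy corpus,
# to illustrate "inferring general patterns from a set of examples".

import math
from collections import Counter, defaultdict

TRAIN = [
    ("win money now claim prize", "spam"),
    ("cheap prize win offer", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("project report attached see agenda", "ham"),
]

def train(examples):
    word_counts = defaultdict(Counter)  # per-label word frequencies
    label_counts = Counter()            # how often each label occurs
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus log likelihood with add-one (Laplace) smoothing.
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAIN)
print(classify("claim your prize money", *model))  # → spam
print(classify("see the meeting report", *model))  # → ham
```

Deep learning models replace these hand-counted word statistics with learned dense representations, which is what lets them capture context and word order rather than treating a text as a bag of words.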

Text analytics and use cases in a company

In order to exploit the often untapped potential in corporate data and solve business problems, existing (raw) data must be examined and the knowledge it contains derived, quantified, and visualized. Text analytics maps this process, making it possible to handle large volumes of unstructured text data and gain insights. Only if a shared understanding of the results is established among all stakeholders, and the solutions are seamlessly integrated into existing workflows and systems, can further decisions be derived from them and the company's long-term success be increased.

More and more companies from different industries rely on NLP solutions to better manage and use the accumulated, different text forms in a variety of areas. Especially when there are recurring tasks to be done, automating these tasks can be useful.
In the following, exemplary use cases are listed to illustrate the broad applicability of NLP solutions.

Document Classification (Use-Case 1)

Sentiment Analysis (Use-Case 4)

Use-Case 1: "Automated document classification"

You are working in controlling and would like to receive only those documents (or document types) for which you are responsible.

Use-Case 2: "Automated extraction of information from documents such as invoices or delivery bills"

You are the manager of the receiving department and want to control the details of the deliveries instead of entering them manually.
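A very simple form of this use case can be sketched with regular expressions that pull key fields out of invoice-like text. Real document-extraction systems combine OCR with layout-aware models; the sample text, patterns, and field names below are simplifying assumptions for illustration.

```python
# Illustrative regex-based field extraction from an invoice-like text.
# The sample invoice and the patterns are toy assumptions.

import re

INVOICE_TEXT = """
Invoice No: 2021-0042
Date: 12.05.2021
Total: 1,234.56 EUR
"""

def extract_fields(text: str) -> dict:
    patterns = {
        "invoice_number": r"Invoice No:\s*(\S+)",
        "date": r"Date:\s*(\d{2}\.\d{2}\.\d{4})",
        "total": r"Total:\s*([\d,.]+\s*[A-Z]{3})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            fields[name] = match.group(1).strip()
    return fields

print(extract_fields(INVOICE_TEXT))
# → {'invoice_number': '2021-0042', 'date': '12.05.2021', 'total': '1,234.56 EUR'}
```

Fixed patterns break as soon as suppliers format their documents differently, which is why production systems learn to locate fields from labelled example documents instead.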

Use-Case 3: "Customer Support"

You are an online mail order company and want to reduce customer service response times by automating the processing and answering of customer inquiries.

Use-Case 4: "Automated evaluation of customer feedback"

You are a marketing representative and would like to get an overview of the mood and reactions to your new advertising campaign on social media.
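The simplest version of such a mood overview is a lexicon-based sentiment scorer: count positive and negative words and compare. The word lists are illustrative assumptions; production systems use trained models that also handle negation, emojis, and (to a limited extent) sarcasm.

```python
# Toy lexicon-based sentiment scoring for short feedback texts.
# The positive/negative word lists are illustrative assumptions.

POSITIVE = {"great", "love", "awesome", "good", "excellent"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor"}

def sentiment(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love the new campaign, it looks great!"))  # → positive
print(sentiment("Terrible ad, really bad idea."))             # → negative
```

Aggregating such per-post labels over thousands of social media posts is what turns individual reactions into the campaign-level mood overview described above.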

Use-Case 5: "Social media analysis"

You are an employee of the federal office for the protection of the constitution and want to discover and monitor extremist, radical and violence-glorifying social media profiles and posts.

Use-Case 6: "Support in clinical documentation and organization"

You work as a medical specialist and would like to summarize essential information from several extensive medical histories of individual patients in order to obtain a holistic view of the course of a disease.


Progress in the NLP field is unstoppable and is continuously providing new and better solutions to a wide range of problems. The precision of the models developed and their availability to the masses continue to increase and more and more developments are making the leap from research to production. In any case, it remains exciting to see what further breakthroughs the coming years will bring – one thing is clear, they will come.

RISC Software GmbH is happy to support you in the submission and implementation of (research) projects in the field of Natural Language Processing.



    Sandra Wartner, MSc

    Data Scientist