
Friends with AI: Communication between Humans and Machines with NLP and Effects on Education

May 13, 2022

A few decades ago, watching a movie in which a machine converses with humans and answers their questions correctly was quite surprising.

To most of the world, it was just a sci-fi concept with no real-world counterpart. With the passage of time, however, Artificial Intelligence (AI) has pushed its boundaries into whole new territory. And while many AI concepts have emerged, the idea of a personal AI companion is still under the lens.

It’s like having the Computer from Dexter’s Laboratory by your side, only far more intelligent: a machine that can learn human languages through supervised and self-supervised Natural Language Processing (NLP).

But is it even possible for a machine to become as brilliant as a human mind, master languages, and communicate rationally by 2041? If so, and AI masters our language, could it go on to acquire general intelligence? What effects would that have on education, and how would it help our children’s development?

In the chapter “Twin Sparrows” of his book “AI 2041: Ten Visions for Our Future,” Kai-Fu Lee explains the advantages of having an AI teacher. He explores the future of AI education through a story in which AI teachers, rendered as virtual cartoon-like characters, help twin Korean orphans realize their true potential in learning through a branch of Artificial Intelligence known as Natural Language Processing (NLP).

Understanding Natural Language Processing (NLP)

Natural Language Processing, or NLP, is an offshoot of AI. Learning a language (or several) and then using it for communication, rational discussion, and analysis is one of the miracles of human intelligence and cognition; for AI, on the other hand, it is an enormous challenge.

The term “natural language” denotes human language, which includes verbal, non-verbal, and written communication, and which is acquired through social interaction and, of course, education. So when Kai-Fu Lee talks about challenges for AI, he is asking whether Artificial Intelligence can excel at natural language: can it speak, read, write, and communicate with humans as if the same human intelligence were in the room?

Since the early 1950s, scientists have been attempting to teach natural language to computers by feeding them data sets of grammatical rules, conjugations, and vocabulary. Deep learning, however, has overtaken all previous approaches to NLP. This newer approach can represent the tangled relationships and patterns in large data sets in ways that are digestible and upgradeable for computers. And indeed, deep learning is doing wonders.

Supervised NLP – Learning Through Output

The author describes “supervised” NLP as a way of learning in which humans feed both the questions and the answers to the AI. Put differently, for every question the AI is asked, the answer must already be in the system.

More technically, the AI receives labeled data — the “input” and the “output” — and learns to respond based on the output. The labeled output must be correct for the AI to answer a given input properly.

For example, organizations like the United Nations use AI systems trained on data sets of multilingual translations to translate languages. Given enough paired inputs and outputs, the AI can translate English sentences into French. The same approach can be used for speech recognition and optical character recognition, converting speech, handwriting, or images into text.
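To make the “labeled input and output” idea concrete, here is a minimal sketch, not real machine translation: a toy word-level English-to-French lookup learned from a handful of invented sentence pairs. The pairs and the counting trick are illustrative assumptions, not how the UN’s systems actually work.

```python
# Toy supervised learning: each labeled pair is an (input, output) example.
# The sentence pairs below are invented for demonstration only.
from collections import Counter, defaultdict

labeled_pairs = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
    ("a dog", "un chien"),
]

# "Training": count which French word co-occurs with each English word.
counts = defaultdict(Counter)
for en, fr in labeled_pairs:
    for e in en.split():
        for f in fr.split():
            counts[e][f] += 1

# Keep the most frequent co-occurring French word for each English word.
lexicon = {e: c.most_common(1)[0][0] for e, c in counts.items()}

def translate(sentence):
    """Apply the learned word-level lexicon to a new input."""
    return " ".join(lexicon.get(w, w) for w in sentence.split())

print(translate("a dog"))   # -> "un chien"
```

The point is only the shape of the data: supervised NLP needs a correct output attached to every input before it can generalize, which is exactly the labeling burden the next section discusses.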

But supervised NLP has also grown beyond simple recognition into understanding. Whereas the translation example above only requires the AI to recognize English words and map them to French, understanding requires the AI to grasp what action the words are asking for.

The author explains this with the example of Alexa. When Alexa is told to “Play Bach,” the AI must understand that you want it to play a classical piece by Johann Sebastian Bach. Likewise, when you ask an e-commerce chatbot for a refund, the bot can walk you through the procedure.

And undoubtedly, the data-labeling or “supervised NLP” approach has played an essential role in several industries over the past 20 years, such as automated airline customer service systems.

Pains and Gains of Supervised NLP

Supervised NLP is already working miracles. The drawback, however, is that this approach requires data labeling for every application: the AI has to be trained by supplying an “output for every input.” That would mean labeling data for every language and task in the world, which seems impossible. And even if it were remotely feasible, it would require an enormous amount of human labor, making it expensive and time-consuming.

Self-Supervised General NLP – The Superseding Approach

Self-supervised general NLP is the newer approach, in which the AI learns to supervise itself and pick up languages without human labeling, through “sequence transduction.”

It still involves an input and an output, but without the need for human data labeling. To train a sequence-transduction neural network, the input is the sequence of all the words up to a certain point, and the output is the sequence of words that comes after that point.
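The key idea — that raw text supplies its own labels — can be sketched without any neural network at all. The toy below uses simple bigram counts in place of a trained model; the corpus and the one-word “prediction” are stand-in assumptions to show where the input/output pairs come from.

```python
# Self-supervised labeling: slide through raw text and treat each word as
# the "label" for the word before it. No human annotation is needed.
from collections import Counter, defaultdict

corpus = ("the quick brown fox jumps over the lazy dog . "
          "the quick brown fox sleeps").split()

next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1   # the text itself provides input -> output

def predict(word):
    """Most frequent continuation seen after `word` in the corpus."""
    return next_word[word].most_common(1)[0][0]

print(predict("quick"))   # -> "brown"
```

A real sequence-transduction network conditions on the whole preceding sequence rather than one word, but the training signal is generated the same way: the text that follows is the answer key.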

One of the best everyday examples of self-supervised general NLP is Gmail: you type the first part of a sentence, and Gmail’s “Smart Compose” feature suggests the rest.

Google’s Transformer

Following the self-supervised NLP approach, researchers at Google introduced a new sequence-transduction model called the “Transformer” in 2017. A Transformer can be trained on enormous amounts of text and remember almost everything in the input that has remained relevant, through an attention mechanism and a kind of selective memory.

Since this selective memory is built up from the input itself, the system learns without human involvement, given enough data and processing power.

OpenAI’s GPT-3

In 2020, OpenAI, the research lab co-founded by Elon Musk, released a far more powerful model using the same self-supervised NLP approach and named it “GPT-3.” The Generative Pre-trained Transformer 3 (GPT-3) is a massive sequence-transduction engine that covers almost every concept humans can imagine. It was trained on more than 45 terabytes of text, which would take a human nearly 500,000 lifetimes to read. And each year, such models are fed ever more terabytes of text.

GPT-3: The Good, The Bad, and The Ugly

The Good

GPT-3 knows that every question calls for an answer. It follows the sequence of words and responds accordingly. For example, if you ask GPT-3: “A toaster is heavier than a cat. An ocean is heavier than a dust particle. Which is heavier, a toaster or a pencil?” the model will answer “a toaster.” In other words, GPT-3 picked up on the word “heavier” and answered the question correctly.

Likewise, GPT-3 can do many human-like things, such as writing a press release or poetry, and even imitating any writer’s style. GPT-3 could be the next big platform for building domain-specific applications. As a matter of fact, people started developing applications on this technology just after its release.

The Bad

Unlike humans, GPT-3 cannot refuse to answer a question, even when the answer amounts to “fake news.” For example, when asked, “When did Bill Gates work at Apple?” GPT-3 answered: “In 1980, Mr. Gates worked at Apple as a software expert during his summer break from Google.”

Furthermore, Kai-Fu Lee writes that GPT-3 is also weak at common sense, creativity, explanatory statements, abstract thinking, and causal reasoning.

The Ugly

The author also believes there may come a time when humans develop feelings for GPT-3-based AI systems. They may fall into a “romantic-adjacent relationship,” as happened in the movie “Her.” Conversely, a GPT-3 in such a role could end up unduly swaying human opinions.

Skeptics of GPT-3

Some skeptics think that GPT-3 still has a long way to go to acquire true human intelligence, while others believe computers will never actually mimic the human brain.

But Kai-Fu Lee’s take on the matter is hopeful, as he writes in his book (Chapter 3: “Twin Sparrows”):

“Perhaps in twenty years, GPT-23 will read every word ever written and watch every video ever produced and build its own model of the world.”

Education through AI – Better Days Ahead

AI can rectify shortcomings in the education system. With artificial intelligence, teachers’ routine tasks, such as lectures, exercises, and tutoring, can easily be automated. AI can assign homework, answer questions, and correct students’ errors. The technology can even recreate historical figures as virtual characters that interact with students for richer learning.

However, the greatest advantage of using artificial intelligence in the education system is individualized learning.

Nevertheless, teachers will retain the most important responsibilities, such as mentoring students and configuring the AI teachers so that they deliver according to each student’s capacity and needs.

The chapter ends with the author’s optimistic thought that AI will breathe new life into the education system and help each student realize their true learning potential.
