Researchers have discovered a method that could allow an AI to teach itself any language.

How would you like to talk to anyone, wherever you go in the world, without having to learn their language yourself?

You may have seen slick language translator devices on social media or on the market.

Or, you may have noticed Google’s Pixel Buds, which offer translation between 40 languages on top of being a pretty good pair of headphones.

There are thousands of languages in the world, yet even the largest language translation databases out there only contain a fraction of that number. Even then, the translations are never perfect. At the moment, nothing substitutes actually learning the language yourself.

Of course, you could sidestep that if an AI could learn the language for you. That’s the idea that researchers from Universidad del País Vasco (UPV) in Spain and Carnegie Mellon University (CMU) have been working with lately.

The experiments have produced surprising results, but even those aren’t perfect. According to recent research in cross-linguistics, it may take a long time and a lot of languages to build a universal translator.

But we’re getting there. Star Trek, here we come.


Universal Translators Aren’t Impossible

The idea that AI software could act as a translator isn’t that far-fetched. The human brain has to crunch a ton of data to process a known language. On top of that workload, we often think about how we speak, weighing all kinds of variables in order to speak properly in a given situation.

AI are the kings of crunching data; we just have to know how to feed it to them.


To give you an example of someone actually doing this, let’s use Google.

Currently, their machine translation uses supervised neural networks that compare parallel texts across 103 languages. Google Translate compares texts in any two languages, learns the equivalences, and voilà, it can translate between them.
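To make the supervised idea concrete, here is a minimal sketch. The sentence pairs and the co-occurrence counting are illustrative assumptions, not Google’s actual training method: the point is only that parallel texts let you learn word equivalences directly, because each source word keeps showing up alongside its translation.

```python
from collections import Counter, defaultdict

# Hypothetical aligned sentence pairs, standing in for the parallel
# corpora that supervised systems train on.
parallel = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
    ("a dog", "un chien"),
]

# Count how often each source word co-occurs with each target word
# across aligned pairs.
cooc = defaultdict(Counter)
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

# Pick the most frequent pairing for each source word.
dictionary = {s: counts.most_common(1)[0][0] for s, counts in cooc.items()}
```

On this toy data the counts are enough to pin down `the → le`, `cat → chat`, and so on; real systems learn soft alignments over millions of sentence pairs instead of hard counts.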

The approach works, but it’s severely limited by the need for supervisors, parallel texts, and the sheer number of languages out there.

To avoid the limits, researchers took a different approach. In fact, they inverted the techniques previously used. Instead of heavily supervised machine learning with parallel texts, they used somewhat unsupervised learning with random texts.

The software builds a working dictionary by guessing which words group together in a given language. From there, it works out sentence structure and evaluates its guesses by trying to translate between languages.
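The guess-a-dictionary step can be sketched with a toy example. Note the heavy assumptions: the corpora are invented, and raw word frequencies stand in for the word embeddings that the actual research aligns. The real systems match the geometry of embedding spaces trained on large monolingual corpora; this sketch only shows the core intuition that words can be paired across languages by their distributional signatures alone, with no parallel text.

```python
from collections import Counter

# Two toy monolingual corpora -- invented data, NOT parallel sentences.
corpus_en = "the cat sees the cat the dog sees the cat".split()
corpus_xx = "le chat voit le chat le chien voit le chat".split()

def by_frequency(corpus):
    # Rank each word by how often it appears in its own language.
    counts = Counter(corpus)
    return sorted(counts, key=lambda w: -counts[w])

# Guess a dictionary by pairing words whose frequency ranks line up.
dictionary = dict(zip(by_frequency(corpus_en), by_frequency(corpus_xx)))

def translate(sentence):
    # Evaluate the guesses by translating word for word.
    return " ".join(dictionary[w] for w in sentence.split())
```

Here `translate("the cat sees the dog")` comes out as `le chat voit le chien`, because the frequency ranks in the two corpora happen to line up exactly; on real data, embedding geometry does the heavy lifting that frequency alone cannot.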

As Microsoft AI expert Di He pointed out, the idea seems impossible and is nothing short of incredible.

When you consider how good AI is at crunching data and recognizing patterns, it makes sense. However, that raises the question of how long it would take to get a real universal translator working.

To help answer that, let’s look at a recent linguistic theory.

Applied Cross-Linguistics

Languages have grown alongside humankind throughout its history. That statement is important to remember because it shows how languages form over time.

It also shows how languages have similarities through common ancestors. According to Clifton Pye, an Associate Professor of Linguistics, that may be the key to understanding how languages are learned.

Pye used a new comparative method to understand three Mayan languages. What’s more, he says that the method can be used to understand any family of languages.

“You can’t compare how children acquire different languages unless you compare how they acquire the same linguistic features,” says Pye.

Let me unpack that. For those of us who have never studied linguistics, remember that it is a science, and about as meta as a field gets when it comes to language. Linguists are concerned with how language is made, how you shape it with your mouth, and how the brain produces it.

In this context, Pye is saying that to understand how kids acquire new languages, you need to know how they learn the structure of those languages. We don’t often think about language structure outside of an academic setting. For native speakers of a language, the brain handles that automatically.

According to Pye, the rules around how that happens come from what is called historical linguistics. In historical linguistics, you build a family tree for languages based on their ancestral roots. That helps you to understand their structure, which can then be compared to the structure of another language.
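The family-tree idea is easy to picture in code. The tree fragment below is a hypothetical, drastically simplified illustration (real language phylogenies are larger and more contested): finding two languages’ first shared ancestor tells you how closely related they are, and so which structures are worth comparing.

```python
# Hypothetical fragment of a language family tree: child -> parent.
parents = {
    "Spanish": "Latin",
    "French": "Latin",
    "Latin": "Proto-Indo-European",
    "English": "Proto-Germanic",
    "Proto-Germanic": "Proto-Indo-European",
}

def ancestors(lang):
    # Walk up the tree, collecting every ancestor of a language.
    chain = []
    while lang in parents:
        lang = parents[lang]
        chain.append(lang)
    return chain

def common_ancestor(a, b):
    # The first ancestor of `a` that also appears in `b`'s lineage.
    lineage_b = {b, *ancestors(b)}
    for candidate in [a, *ancestors(a)]:
        if candidate in lineage_b:
            return candidate
    return None
```

In this fragment, Spanish and French meet at Latin, while Spanish and English only meet at Proto-Indo-European, which is exactly the kind of structural distance Pye’s comparative method cares about.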

Smarter, Better Translators

This may be a stretch, but it seems that with Pye’s method it would be just a matter of time before we have universal translators.

How long it would take is unclear, but with Pye’s method, enough historical linguistic data would provide a solid understanding of how a new language is learned. Thus, an AI programmed to learn languages could become very good at it, given time. With just a bit of fine-tuning, we would be in business.

That future may be a ways off, though. Until then, you had better learn a bit of the local language if you plan on taking a long trip.

When you get a universal translator, where will you go?
