Google’s AI can translate language pairs it has never seen
Google’s AI is not just getting better at grasping languages like Mandarin; it can now translate between two languages it has never been trained on as a pair. In a research paper, Google reveals that the system builds its own internal “interlingua” to represent phrases, regardless of the language. The resulting “zero-shot” deep learning lets it translate a language pair with “reasonable” accuracy, as long as each of the two languages has already been trained against a common third language.
The company recently switched its Translate feature to the deep-learning Google Neural Machine Translation (GNMT) system. That’s an “end-to-end learning framework that learns from millions of examples,” the company says, and it has drastically improved translation quality. The problem is that Google Translate supports 103 languages, which means 5,253 possible language pairs. Multiply that by the millions of examples needed to train each pair, and the computational cost becomes prohibitive.
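The arithmetic behind that pair count is simple combinatorics: choosing 2 languages out of 103 gives 103 × 102 / 2 = 5,253 unordered pairs, and twice that many if you count each translation direction separately. A quick sanity check in Python:

```python
from math import comb

languages = 103
unordered_pairs = comb(languages, 2)          # 103 * 102 / 2
directed_pairs = languages * (languages - 1)  # each pair, both directions

print(unordered_pairs)  # 5253
print(directed_pairs)   # 10506
```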
After training the system on several language pairs, such as English-to-Japanese and English-to-Korean, the researchers wondered whether it could handle a pair it had never explicitly learned. In other words, can the system do a “zero-shot” translation between Japanese and Korean? “Impressively, the answer is yes — it can generate reasonable Korean to Japanese translations, even though it has never been taught to do so,” Google says.
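The mechanism described in the underlying paper is a single shared model that is told which language to produce by an artificial token prepended to the input sentence. Here is a minimal sketch of that input convention; the `prepare_input` function and the exact token spelling are illustrative, not Google’s actual code:

```python
def prepare_input(sentence: str, target_lang: str) -> str:
    """Prepend an artificial target-language token, as in Google's
    multilingual GNMT paper (e.g. '<2ja>' means 'translate into Japanese')."""
    return f"<2{target_lang}> {sentence}"

# Training covers English<->Japanese and English<->Korean...
trained_examples = [
    (prepare_input("Hello", "ja"), "こんにちは"),
    (prepare_input("Hello", "ko"), "안녕하세요"),
]

# ...yet at inference time the same model can be asked for a direction it
# never saw during training: Korean -> Japanese, i.e. "zero-shot".
zero_shot_query = prepare_input("안녕하세요", "ja")
print(zero_shot_query)  # <2ja> 안녕하세요
```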
Even the researchers aren’t entirely sure how it works, because deep learning networks are notoriously difficult to interpret. However, they were able to peek into a three-language model using a 3D representation of its internal data. Zooming in, they noticed that the system automatically groups sentences with the same meaning across the three languages.
In essence, the network developed its own internal “interlingua”: a shared representation for phrases and sentences with the same meaning. “This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations,” the researchers write. “We interpret this as a sign of existence of an interlingua in the network.”
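What that kind of inspection might look like in practice: take the model’s internal sentence vectors for parallel sentences in several languages, project them down to three dimensions, and see whether points cluster by meaning rather than by language. A rough sketch with scikit-learn follows; the `encode` function here is a random stand-in, since GNMT’s internals are not a public API:

```python
import hashlib

import numpy as np
from sklearn.manifold import TSNE

def encode(sentence: str, dim: int = 512) -> np.ndarray:
    """Hypothetical stand-in for the encoder's internal sentence vector."""
    seed = int(hashlib.md5(sentence.encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(seed).standard_normal(dim)

# Parallel sentences in three languages, keyed by (language, meaning).
sentences = {
    ("en", "greeting"): "Hello",
    ("ja", "greeting"): "こんにちは",
    ("ko", "greeting"): "안녕하세요",
    ("en", "farewell"): "Goodbye",
    ("ja", "farewell"): "さようなら",
    ("ko", "farewell"): "안녕히 가세요",
}

vectors = np.stack([encode(s) for s in sentences.values()])
# Project the high-dimensional vectors to 3D, as in Google's visualization.
coords = TSNE(n_components=3, perplexity=2, random_state=0).fit_transform(vectors)

for (lang, meaning), xyz in zip(sentences, coords):
    print(lang, meaning, np.round(xyz, 2))
# In the real model, points with the same meaning sit together regardless
# of language -- the clustering Google reads as an "interlingua".
```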
In one experiment, for instance, the team merged 12 language pairs into a single model the same size as one trained on a single pair. Despite the drastically reduced capacity per pair, they achieved “only slightly lower translation quality” than with a dedicated two-language model. “Our approach has been shown to work reliably in a Google-scale production setting and enables us to scale to a large number of languages quickly,” the team says. Bear in mind that Google only recently started applying deep learning to translation in earnest, so its rapid progress is pretty scary, especially if you’re a professional translator.
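In data terms, “merging 12 pairs” roughly means feeding one normal-sized network a mixed stream of examples, each tagged with its target language, rather than training 12 separate networks. A sketch of that mixing, with an illustrative subset of pairs and placeholder corpora:

```python
import random

# Illustrative subset of the merged language pairs (direction matters).
pairs = [("en", "ja"), ("ja", "en"), ("en", "ko"), ("ko", "en"),
         ("en", "es"), ("es", "en")]

# Placeholder parallel corpora; real training data has millions of examples.
corpora = {pair: [("source sentence", "target sentence")] for pair in pairs}

def mixed_batch(batch_size: int = 4):
    """Sample across all pairs so one shared model sees every direction,
    each example tagged with its target language."""
    batch = []
    for _ in range(batch_size):
        src_lang, tgt_lang = random.choice(pairs)
        src, tgt = random.choice(corpora[(src_lang, tgt_lang)])
        batch.append((f"<2{tgt_lang}> {src}", tgt))
    return batch

print(mixed_batch())
```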