Will AI Regulation Impact the Use of Neural Machine Translation?

Neural Machine Translation (NMT or MT), a form of specialized AI used in the translation industry, has remained unregulated and relatively ungoverned until now, with the primary focus being on improving performance. Until relatively recently, Language Service Providers (LSPs) and customers generally adopted a cautious approach to this technology, considering the risk profile of the content and often retaining a “human in the loop” for the final output.

However, the increased interest in AI and Large Language Models has led to a flurry of experimentation and some lowering of risk thresholds, with many customers keen to use either NMT or LLMs to reduce translation costs.

Despite this excitement, we sound a note of caution that impending AI regulation and changes to liability rules will require a pause to apply the necessary governance scaffolding to these promising but, by their nature, complex AI technologies.

In the first of this two-part series of articles, we’ll discuss the impact of AI regulation on Neural Machine Translation and what this means for language providers and the high-risk sectors they serve.

Around 2017, the language service industry started adopting next-generation Artificial Intelligence in the form of Neural Machine Translation, a type of Natural Language Processing specialized for translation using machine learning. Since then, NMT performance has improved dramatically, and its acceptance as a key tool in the translation process has grown.

NMT has become commonplace in many translation workflows, including raw MT (no human review), Post-Edited MT (PEMT, where a human reviews and corrects the MT output), and MT embedded in CAT tools, where a human translator accepts or rejects each MT suggestion.

Changing Perceptions of AI in Translation

More recently, the hype around Large Language Models (LLMs) and generative AI, which are more generalized forms of Natural Language Processing, has encouraged further interest in NMT. The remarkable performance of technologies such as ChatGPT in natural language generation tasks has led to a noticeable shift in customer attitudes toward AI in general and, by association, toward NMT.

Wariness has given way to a broader presumption that AI should be part of any solution that improves performance and reduces cost in the translation process.

Anecdotally, at least, it appears that more customers want to accelerate the adoption of raw MT and PEMT models in their translation processes.

At the same time, translation technology providers have started combining LLMs with NMT to improve performance in areas such as quality estimation and assurance, a use-case showing promising results.

READ MORE: How to Implement Machine Translation into Your Translation Process


It is a golden period for innovation and hype. But hype comes with challenges. In the immediate term, providers and customers must understand and navigate the trade-off between cost and quality when using NMT across different use cases, domains, and language pairs. This is not a trivial undertaking when dealing with high-stakes content and requires the establishment of objective quality frameworks.

Moreover, in the medium term, another substantial issue will arise with the introduction of AI regulations and standards, which will bring significant financial consequences for non-compliance. Providers and customers working on NMT solutions now should keep this firmly in their sights.

AI Regulations on the Horizon

As part of the wave of AI regulation, it is highly plausible that NMT will be subject to government regulation, certification standards and the corporate governance frameworks that will undoubtedly emerge. The simple fact is that NMT models, solutions, and workflows have not been built in line with the frameworks that are only now emerging.

Some may even require a radical rework. Future compliance requirements are likely to change the equation for NMT adoption when it comes to cost, complexity, and risk and add further dimensions to the decision-making process about how and when to use it.

Let’s take a brief look at why this may be the case.

Regulations are currently being drafted in many jurisdictions, and multiple legal frameworks will emerge over time. The most advanced of these is the EU AI Act. It is still unclear whether other jurisdictions will follow with a similar level of restrictions or opt for lighter regulation to encourage innovation.

Nevertheless, as with GDPR, any organization that wants to operate or sell in the EU must comply. It is also important to note at this point that many of the practical implications of the legislation are still unclear and, for good measure, may eventually vary between EU states.

First, the “why.”

The EU AI Act sets out the reasons for regulation: to ensure that AI systems are “safe, transparent, traceable, non-discriminatory and environmentally friendly,” and it emphasizes human supervision. Each of these is a broad topic, open to many different and subjective interpretations.

Furthermore, these requirements are fundamentally challenging for the building and management of machine learning models, which learn and process information in ways that are unpredictable and difficult to audit (and which are power-hungry).

In addition, AI systems are expected to comply with other laws, such as data protection and intellectual property, meaning more scrutiny in these areas.

Second, the “what.”

Exactly which types of AI will be subject to this regulation? The EU AI Act takes a risk-based approach when deciding which systems will be subject to different compliance obligations. These range from “unacceptable risk” to “high risk,” “limited risk,” and “minimal or no risk.” The legislation particularly focuses on high-risk products and applications: AI systems posing unacceptable risk are banned outright, while limited-risk systems face only light obligations.
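To make the tiered approach concrete, the sketch below models it as a simple lookup from application domain to compliance tier. This is purely illustrative, not legal guidance: the tier names follow the article, but the domain-to-tier mapping is a hypothetical assumption, since the actual classification depends on the Act's detailed annexes and legal interpretation.

```python
# Illustrative sketch of the EU AI Act's risk-tier idea (not legal guidance).
# The domain-to-tier mapping below is hypothetical, for illustration only.
RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

HYPOTHETICAL_DOMAIN_TIERS = {
    "social_scoring": "unacceptable",   # banned outright
    "medical_device_ui": "high",        # product-safety regulated sector
    "recruitment_screening": "high",    # employment decisions
    "customer_chatbot": "limited",      # transparency obligations apply
    "spam_filter": "minimal",           # no specific obligations
}

def risk_tier(domain: str) -> str:
    """Return the (hypothetical) risk tier for an application domain.

    Unknown domains default to "limited" here, reflecting the grey areas
    the article describes; a real assessment would require legal review.
    """
    return HYPOTHETICAL_DOMAIN_TIERS.get(domain, "limited")

print(risk_tier("medical_device_ui"))  # high
```

The point of the sketch is that the same NMT engine can land in different tiers depending on where its output is used, which is why the use case, not the model alone, drives the compliance burden.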

High-Risk Sectors and Compliance

There is a relatively long list of areas in the high-risk category, including all products subject to product safety regulation (e.g., toys, aviation, cars, medical devices) and critical infrastructure, but also education and vocational training, employment, public services, law enforcement, migration, and legal services.

These latter segments are deemed high risk in ways that go beyond traditional “safety” and extend to areas such as the risk of discrimination or the misapplication of the law.

As the legal text states, “Businesses or public authorities that develop or use AI applications that constitute a high risk for the safety or fundamental rights of citizens would have to comply with specific requirements and obligations.”

Although there is still some time before compliance requirements are fully understood, non-compliance with the EU AI Act could result in multi-million-euro fines. Furthermore, the EU’s AI Liability Directive proposes to change civil liability laws to cover harm from AI with a “presumption of causality.”

This means that EU citizens can pursue operators of AI systems for damage (any breach of their rights), and the starting legal position is that the damage is presumed to have been caused by the AI. It is for the owner or operator to prove otherwise.

So, will NMT be caught by AI regulation? At the very least, material translated by NMT must be labeled as such for transparency, possibly in all situations. Furthermore, despite not being the intended target of the legislation, there is a strong chance that the application of NMT in any of the high-risk segments above could fall under the Act, whether used as a standalone model or integrated into other systems.

There are plenty of examples where it arguably should, such as a medical device user manual or user interface, and many others where it is a grey area, such as multilingual chatbots or the translation of e-learning materials.
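As a concrete illustration of the transparency point above, one practical way to “label MT output as such” is to attach provenance metadata to every translated segment. The sketch below shows the idea; all field names and the engine identifier are invented for illustration, and the actual labeling requirements under the Act are not yet settled.

```python
# Sketch: attaching machine-translation provenance metadata to translated
# segments, one possible way to meet a "label MT output" transparency
# requirement. Field names and the engine name are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TranslatedSegment:
    source_text: str
    target_text: str
    target_lang: str
    mt_generated: bool = True        # flags the output as machine-translated
    human_reviewed: bool = False     # set to True after post-editing (PEMT)
    engine: str = "example-nmt-v1"   # hypothetical engine identifier
    translated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

seg = TranslatedSegment(
    source_text="Vor Gebrauch gut schütteln.",
    target_text="Shake well before use.",
    target_lang="en",
)
print(seg.mt_generated, seg.human_reviewed)
```

Carrying flags like `mt_generated` and `human_reviewed` through the workflow would also make it auditable whether a “human in the loop” reviewed any given segment, which maps directly onto the raw MT versus PEMT distinction discussed earlier.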

Given the potential for significant fines as well as the ”presumption of causality” under the AI Liability Directive, we expect customers will ultimately take a cautious approach. Even if the NMT models do not need to be directly regulated, we predict significantly increased governance through ISO (and equivalent) standards, customers’ AI policies, and an emerging requirement for AI insurance cover.

Want to know what’s in store for the future of NMT regulations? Stay tuned for Part Two next week!
