Quick recap: The EU AI Act will regulate artificial intelligence for the first time, particularly in relation to ‘high-risk’ uses (broadly: regulated products, biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, immigration, the administration of justice and democratic processes).
Failure to comply with the Act could result in multi-million euro fines. Furthermore, changes to the EU Product Liability Directive (possible implementation 2026-27) will introduce a presumption of causality in relation to AI systems, shifting the burden onto the defendant to prove that its AI system did not cause the harm.
In Part One, we raised the question of whether Neural Machine Translation and other AI-based language technologies will be subject to emerging AI regulation. Our view is that in certain use-cases it will be – and that the complex legislative environment will lead to more caution over its general use and higher levels of governance. Here in Part Two, we outline what this might mean in practice.
READ PART 1: Will AI Regulation Impact the Use of Neural Machine Translation?
Obligations on different types of supply chain participants
The EU AI Act differentiates between types of AI supply chain participants and imposes obligations on each category. The two main categories are ‘Provider’ and ‘Deployer’. A Provider develops an AI system and places it on the market under its own name.
A Deployer is a body using an AI system under its authority and in a professional capacity. In some circumstances, a Deployer can also become a Provider: if it markets the system under its own name, substantially modifies it, or changes its intended purpose.
The highest burdens are placed on Providers, who must register any high-risk AI systems and conform to a broad set of obligations, including:
- using high-quality data sets to minimise risks and discriminatory outcomes;
- undertaking risk assessments and putting mitigations in place;
- logging activity to ensure traceability of results;
- providing detailed documentation;
- giving clear and adequate information to the user;
- ensuring adequate human oversight; and
- achieving a high level of robustness, security and accuracy.
Deployers are also responsible for using the AI system according to its instructions, assigning human oversight, monitoring the system and keeping logs, and undertaking a wide range of transparency and reporting activities towards customers and the authorities. A sketch of what that logging might look like in practice follows below.
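To make the logging and traceability obligations more concrete, here is a minimal illustrative sketch in Python of the kind of per-request audit record a Deployer might keep. The field names, the JSON-lines file and the choice to log text lengths rather than content are our own assumptions, not requirements taken from the Act:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Illustrative audit logger: one JSON record per MT request.
audit_log = logging.getLogger("mt_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("mt_audit.jsonl"))

def log_mt_request(source_text: str, target_text: str, engine: str, use_case: str) -> str:
    """Write one traceable record for a single MT request and return its ID."""
    record_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "engine": engine,            # which NMT system produced the output
        "use_case": use_case,        # the agreed purpose, per the risk assessment
        "source_chars": len(source_text),   # lengths only, to avoid storing
        "target_chars": len(target_text),   # possibly confidential content
        "human_reviewed": False,     # updated once a reviewer signs off
    }))
    return record_id
```

Logging metadata rather than the text itself is one way to build an audit trail without creating a parallel store of confidential source material; whether that is sufficient will depend on the use-case and the risk assessment.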
What will this mean in the context of the language service industry?
The main challenges we believe the language service industry will face are as follows:
- Models will need to fit into the new governance frameworks, even though NMT solutions were not built with these guardrails designed in. There may need to be a period of cleaning, testing and documenting these models. In some cases, retrofitting them will be difficult, and model providers may stipulate a restricted set of use-cases to reduce some of these obligations. Governance becomes further complicated when multiple models are used in combination, for example when a third-party LLM is used to predict the quality of MT output (see the sketch after this list).
- Language service market participants will need to understand whether they are Providers or Deployers of NMT systems. Typically, the developers of NMT/AI systems are the Providers, while the Deployers are the users – including translators, language service providers and direct customers. However, a Deployer may also become a Provider if it makes substantial changes to the models (for example, through additional data training) or deploys the system for a use-case not explicitly agreed with the original Provider.
- The precise use-case therefore becomes a critical factor in determining both the risk category a system falls into and an organisation's role as Provider or Deployer. This will mark a significant change in the relationships between system providers, service providers and end customers as they navigate and negotiate their respective responsibilities and liabilities. “Risk assessment” will become the most commonly used phrase in relation to NMT.
- Transparency must increase across the whole language service supply chain about when, how and which NMT systems are used. This poses challenges in an industry with many supply chain layers – from freelancers to agencies and LSPs. Use of AI will also need to be flagged to end-consumers, which in some cases may change the decision to use NMT if AI-generated content is perceived to be of lower quality.
- The impact of ‘human in the loop’ mitigations will need to be understood more clearly. ‘Human oversight’ is a key feature of the expected governance framework, but it is intended to be more than just a human review of output. From a liability perspective, how much human review counts as a reasonable offset to AI risk will likely remain a grey area.
- Compliance standards will include AI elements, either embedded in existing certifications or as standalone certifications. Given the prevalence of NMT in the language services industry, these will become de facto obligations for any language service provider.
- We expect to see increasingly complex clauses in relation to AI in customer contracts, incorporating corporate policies and seeking to set out liability frameworks. This will likely require significant engagement from a range of stakeholders including legal, procurement, IT, data protection and IP, data scientists and domain specialists.
- AI insurance cover will become a prerequisite for customers as a way of managing their risk. However, these products will remain in their infancy for some time as insurers build the expertise and data sets needed to create and price them.
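To illustrate the multi-model pattern flagged in the first challenge above, here is a minimal sketch of LLM-based quality prediction for MT output, written in Python against the OpenAI chat completions API. The model name, prompt, scoring scale and review threshold are all illustrative choices of ours, not an established standard:

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

def estimate_mt_quality(source: str, translation: str) -> int:
    """Ask an LLM for a 0-100 quality score for one MT segment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a translation quality estimator. Reply with a "
                        "single integer from 0 (unusable) to 100 (publishable) "
                        "and nothing else."},
            {"role": "user",
             "content": f"Source: {source}\nTranslation: {translation}"},
        ],
    )
    return int(response.choices[0].message.content.strip())

# Illustrative routing rule: low-scoring segments go to human review.
if estimate_mt_quality("Das Gerät ist defekt.", "The device is defective.") < 70:
    print("Send to human review")
```

From a governance perspective, the point is less the code than the chain: the NMT engine, the scoring LLM and the organisation combining them may each sit in a different role under the Act, with distinct obligations attached to the same translated sentence.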
Wrap up
NMT is an exciting technology that has been driving productivity gains in the language services sector. The introduction of LLMs has given NMT a further performance boost, seemingly keeping it on its steep improvement curve and allowing it to continue closing the gap with human translation.
However, NMT deployment must be weighed in terms of risk and quality, not just cost benefits. The emergence of AI regulation and governance will introduce far greater complexity, along with new costs and obligations. In turn, this will raise the financial and reputational stakes for customers and providers choosing to deploy NMT. As a result, we predict that over the next two years the industry will enter a period of reflection and reset about how and when NMT is used.