Content in Translation

Accessing new markets the right way

February 11, 2026

AGI v1.0 Will Be Harness + Skills

Consider the trajectory of LLMs so far. Starting in Google’s translation research departments, following on from their groundbreaking neural network models, transformers were first and foremost “language” models. They have grown able to convince people of their utility, much like the market stall owner did pre-2000s, holding crowds of up to 50 people in awe with fast talking, confidence, and skill at demoing whatever they were selling. The same works on shopping channels and, more recently, YouTube. Speak fast and confidently, cut all the dead air, and you’ve got yourself the makings of a popular channel.

January 21, 2026

Council Translation: How Multi-Model Consensus Beats Single-Engine Output

Every machine translation engine has blind spots. DeepL struggles with certain idioms. Google Translate sometimes loses register. LLMs occasionally hallucinate. When your translation workflow relies on a single engine, you inherit all of its weaknesses.

The localization industry has accepted this as an unavoidable cost of automation. Post-editors spend hours fixing the same predictable errors, project after project. Quality teams develop mental checklists of “things to watch for” with each engine. It’s become so normalized that most platforms don’t even question it.
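The core idea can be operationalized as a simple majority vote across engines. A minimal sketch, assuming three interchangeable engine callables — the mock engines and the two-vote agreement threshold are illustrative, not the platform’s actual pipeline:

```python
from collections import Counter

def council_translate(segment, engines):
    """Collect candidate translations from several engines (hypothetical
    callables) and return the majority candidate. If no two engines
    agree, fall back to the first engine's output and flag the segment
    for human review."""
    candidates = [engine(segment) for engine in engines]
    best, freq = Counter(candidates).most_common(1)[0]
    if freq >= 2:
        return best, False          # consensus reached
    return candidates[0], True      # no consensus: flag for review

# Mock engines standing in for real MT backends (outputs are illustrative).
eng_a = lambda s: "Hallo Welt"
eng_b = lambda s: "Hallo Welt"
eng_c = lambda s: "Hallo, Welt!"

translation, needs_review = council_translate("Hello world", [eng_a, eng_b, eng_c])
```

The voting step is deliberately dumb here; a real council would compare normalized or embedding-level similarity rather than exact strings, but the structure — fan out, compare, fall back with a review flag — is the same.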

January 20, 2026

Cross-Lingual QA: Catching Errors Without Reading the Target Language

Translation quality assurance has a staffing problem. Finding reviewers who are native speakers of the target language, fluent in the source language, and expert in the subject matter is difficult. Finding them for 40 language pairs is nearly impossible.

Most organizations solve this by accepting lower review coverage. High-value languages get thorough QA. Lower-volume languages get spot checks or statistical sampling. The unstated assumption: some markets will receive lower-quality translations than others.
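One common workaround is back-translation checking: machine-translate the target text back into the source language and flag segments that drift too far from the original. A toy sketch using bag-of-words overlap — the threshold and the similarity measure are illustrative; production QA would use stronger metrics:

```python
def overlap_ratio(a, b):
    """Crude lexical similarity: Jaccard overlap of lowercase tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def qa_flag(source, back_translation, threshold=0.5):
    """Flag a segment when its back-translation drifts too far
    from the source text."""
    return overlap_ratio(source, back_translation) < threshold
```

The reviewer never needs to read the target language: a faithful translation should round-trip to something recognizably close to the source, and the segments that don’t are the ones worth escalating.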

January 19, 2026

Video Localization Beyond Subtitles: The Dubbing Revolution

Subtitles are a compromise. They’re how the localization industry has handled video content for decades because real dubbing—with voice actors, recording studios, and audio engineering—costs too much for most content.

The numbers tell the story. Professional dubbing runs $75-150 per minute of finished video, assuming you already have a script. A 10-minute product demo costs $750-1,500 to dub into one language. Multiply by 10 languages and you’re looking at $7,500-15,000 for a single video asset.

January 18, 2026

The Hidden Cost of Manual File Extraction Rules

Before you can translate a file, you have to extract the translatable content from it. This sounds simple until you encounter the reality of enterprise file diversity.

A single client project might include Word documents with tracked changes, Excel spreadsheets with formulas in some cells and translatable content in others, InDesign files with locked layers, XML exports from a CMS with custom namespaces, JSON files with mixed translatable and code elements, and PowerPoint decks with content in text boxes, notes, and embedded charts.
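For the JSON case alone, an extraction pass typically walks the tree and pulls out only the values a rule set marks as translatable, leaving IDs, keys, and other code-like values untouched. A minimal sketch — the `TRANSLATABLE_KEYS` rule set is hypothetical:

```python
import json

# Illustrative rule set: which JSON keys carry human-readable text.
TRANSLATABLE_KEYS = {"title", "description", "label"}

def extract(node, path=""):
    """Recursively collect (path, text) pairs for keys deemed
    translatable, skipping code-like values (ids, URLs, anything
    not in the rule set)."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            sub = f"{path}.{key}" if path else key
            if key in TRANSLATABLE_KEYS and isinstance(value, str):
                found.append((sub, value))
            else:
                found.extend(extract(value, sub))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            found.extend(extract(item, f"{path}[{i}]"))
    return found

doc = json.loads('{"id": "btn-42", "label": "Save", "meta": {"title": "Settings"}}')
pairs = extract(doc)
```

Multiply this by every format in the list above — each with its own notion of “locked,” “hidden,” or “code” — and the maintenance cost of hand-written extraction rules becomes obvious.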

January 17, 2026

Beyond DeepL: Why LLM Translation Changes Everything

DeepL is good. So is Google Translate. So is Microsoft Translator. The neural machine translation revolution of the mid-2010s genuinely improved translation quality, and these engines handle straightforward content remarkably well.

But they all share a fundamental limitation: they translate text. You give them source content, they give you target content, and that’s the entire interaction.

Large language models do something different. They understand context, follow instructions, and adapt their output to requirements. The distinction sounds subtle until you see it in practice.
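In practice the difference shows up in the request itself: an instruction-following model can take register requirements and glossary constraints alongside the source text, where a text-in/text-out engine accepts only the source. A hypothetical prompt builder sketching that — the function name, parameters, and wording are invented for illustration:

```python
def build_prompt(source, target_lang, glossary=None, register="formal"):
    """Assemble an instruction-following translation request.
    Everything beyond the source text itself (register, glossary,
    do-not-translate rules) is context a plain MT engine cannot take."""
    lines = [
        f"Translate the following text into {target_lang}.",
        f"Use a {register} register and keep product names untranslated.",
    ]
    if glossary:
        lines.append("Apply this glossary: " +
                     "; ".join(f"{s} -> {t}" for s, t in glossary.items()))
    lines.append(f"Text: {source}")
    return "\n".join(lines)

prompt = build_prompt("Click Save to continue.", "German",
                      glossary={"Save": "Speichern"})
```

The point is not the prompt template but the interface: instructions and constraints travel with the content, so the output can be adapted per project rather than post-edited after the fact.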
