Content in Translation

Accessing new markets the right way

January 21, 2026

Council Translation: How Multi-Model Consensus Beats Single-Engine Output

Every machine translation engine has blind spots. DeepL struggles with certain idioms. Google Translate sometimes loses register. LLMs occasionally hallucinate. When your translation workflow relies on a single engine, you inherit all of its weaknesses.

The localization industry has accepted this as an unavoidable cost of automation. Post-editors spend hours fixing the same predictable errors, project after project. Quality teams develop mental checklists of “things to watch for” with each engine. It’s become so normalized that most platforms don’t even question it.
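
To make the idea concrete, here is a minimal sketch of one possible consensus heuristic: collect candidates from several engines, then keep the one that agrees most closely with the rest. The engine names and outputs below are stand-ins, and the similarity-voting rule is an illustration rather than the method described in the full post.

```python
from difflib import SequenceMatcher

def consensus_pick(candidates: dict[str, str]) -> tuple[str, str]:
    """Pick the candidate that best agrees with the other engines' output."""
    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a, b).ratio()

    best_engine, best_score = "", -1.0
    for engine, text in candidates.items():
        others = [t for e, t in candidates.items() if e != engine]
        score = sum(similarity(text, o) for o in others) / len(others)
        if score > best_score:
            best_engine, best_score = engine, score
    return best_engine, candidates[best_engine]

# Stand-in outputs; in a real pipeline these come from live MT API calls.
candidates = {
    "engine_a": "Please contact our support team for help.",
    "engine_b": "Please contact our support team for assistance.",
    "engine_c": "Kindly reach the assistance squad.",  # the outlier gets outvoted
}
print(consensus_pick(candidates))
```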

Read More
January 14, 2026

XLIFF: The Universal Translation Standard Your Team Should Know

Every translation tool has its own internal format. Phrase uses one structure, Lokalise another, MemoQ another still. When you need to move content between tools—or work with external translators who use different tools—these incompatibilities create friction.

XLIFF exists to solve this problem.

What XLIFF actually is

XLIFF—XML Localization Interchange File Format—is an OASIS standard designed specifically for exchanging localization data between tools. Think of it as the PDF of translation: a format that preserves content and structure regardless of what tool created it or what tool will use it.
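
To make that concrete, here is a minimal, illustrative XLIFF 1.2 file embedded in a short Python script that reads its source/target pairs with the standard library. The content is invented for the example rather than taken from any particular tool's export.

```python
import xml.etree.ElementTree as ET

# A minimal XLIFF 1.2 document: one file element, one translation unit.
XLIFF_SAMPLE = """\
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="welcome.txt" source-language="en" target-language="de"
        datatype="plaintext">
    <body>
      <trans-unit id="1">
        <source>Welcome back</source>
        <target>Willkommen zurück</target>
      </trans-unit>
    </body>
  </file>
</xliff>
"""

ns = {"x": "urn:oasis:names:tc:xliff:document:1.2"}
root = ET.fromstring(XLIFF_SAMPLE)
for unit in root.findall(".//x:trans-unit", ns):
    source = unit.find("x:source", ns).text
    target = unit.find("x:target", ns).text
    print(f"{unit.get('id')}: {source!r} -> {target!r}")
```

Whichever tool produced a file like this, any XLIFF-aware tool can open it, present the source text for translation, and write the target back without touching the surrounding structure.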

Read More
January 13, 2026

Stop Losing Formatting in Translation: The Placeholder Revolution

The translated document arrives. You open it. Immediately you see the problems: bold text that’s no longer bold, links that point nowhere, bullet lists that became plain paragraphs.

The translator didn’t make these mistakes. The translation itself is fine. What failed was format handling—the tags and codes that carry formatting got stripped or corrupted somewhere in the process.

This happens constantly in localization workflows, and it’s almost always preventable.

Why formatting breaks

Modern documents store formatting as codes embedded in the text. A bolded word isn’t just “stored as bold”—it’s wrapped in tags:
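
In HTML, for instance, a bold word sits inside <b>…</b> markup and a link carries its destination in an attribute. The placeholder approach swaps those tags for opaque tokens before translation and restores them afterwards. A minimal sketch of that round trip, using an invented {{n}} token scheme and an illustrative segment:

```python
import re

# Illustrative segment: the formatting lives in the tags, not in the words.
segment = 'Your <b>free</b> trial ends on <a href="/billing">this page</a>.'

def protect_tags(text: str) -> tuple[str, dict[str, str]]:
    """Replace every HTML tag with a numbered token such as {{1}}."""
    mapping: dict[str, str] = {}
    def repl(match: re.Match) -> str:
        token = f"{{{{{len(mapping) + 1}}}}}"
        mapping[token] = match.group(0)
        return token
    return re.sub(r"<[^>]+>", repl, text), mapping

def restore_tags(text: str, mapping: dict[str, str]) -> str:
    """Put the original tags back after translation."""
    for token, tag in mapping.items():
        text = text.replace(token, tag)
    return text

protected, tags = protect_tags(segment)
print(protected)  # Your {{1}}free{{2}} trial ends on {{3}}this page{{4}}.
# ... translate `protected`; the tokens survive untouched ...
print(restore_tags(protected, tags))
```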

Read More
January 12, 2026

From Scanned PDFs to Translated Docs in Minutes

The document arrives as a scanned PDF. The text you need to translate is locked in images of pages, not extractable text. In a traditional workflow, someone would need to retype the entire document before translation could even begin.

This scenario plays out constantly. Legacy documents, signed contracts, historical records, printed materials that were never digital—all exist only as scans. The need to translate them doesn’t go away because they’re inconveniently formatted.
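
As a rough sketch of how the front end of such a workflow can be automated with common open-source tooling: pdf2image renders the scanned pages and pytesseract runs OCR on them (both need their underlying Poppler and Tesseract binaries installed). The file name is invented, and the translation step is left as a stub because the post doesn't tie the pipeline to a specific engine.

```python
from pdf2image import convert_from_path  # renders PDF pages as images (needs Poppler)
import pytesseract                       # OCR wrapper around the Tesseract binary

def extract_text_from_scan(pdf_path: str, lang: str = "eng") -> list[str]:
    """OCR each page of a scanned PDF and return one text block per page."""
    pages = convert_from_path(pdf_path, dpi=300)
    return [pytesseract.image_to_string(page, lang=lang) for page in pages]

def translate(text: str, target_lang: str) -> str:
    """Stub: hand the extracted text to whatever MT engine the pipeline uses."""
    raise NotImplementedError

if __name__ == "__main__":
    for page_text in extract_text_from_scan("signed_contract.pdf", lang="deu"):
        print(translate(page_text, target_lang="en"))
```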

Read More
January 11, 2026

The Post-MT Enhancement Pipeline Your Competitors Don't Have

Machine translation output in 2026 is good. It’s not good enough.

Raw MT from DeepL, Google, or any major engine produces text that’s usually comprehensible but rarely polished. The translations are technically correct but feel off: wrong register, inconsistent terminology, awkward phrasing that no native speaker would use.

Most organizations address this through human post-editing. Translators review MT output, fix problems, and produce final text. This works, but it’s essentially using humans to compensate for systematic MT limitations—the same limitations, segment after segment, project after project.
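
An automated enhancement pass targets exactly those systematic issues. As one small, illustrative example (the glossary entries and rules are invented, not taken from the post), a terminology-enforcement step can rewrite known off-glossary phrasings in raw MT output before a human ever sees it:

```python
import re

# Illustrative glossary: preferred wording keyed by the variants MT tends to produce.
GLOSSARY = {
    r"\bsign[- ]?in\b": "log in",
    r"\bcustomer care\b": "customer support",
}

def enforce_terminology(mt_output: str) -> str:
    """Rewrite known off-glossary terms in raw MT output."""
    text = mt_output
    for pattern, preferred in GLOSSARY.items():
        text = re.sub(pattern, preferred, text, flags=re.IGNORECASE)
    return text

raw = "Please sign-in and contact customer care if the issue persists."
print(enforce_terminology(raw))
# Please log in and contact customer support if the issue persists.
```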

Read More
January 10, 2026

Context-Aware Translation: Why Surrounding Segments Matter

Translation tools divide content into segments—usually sentences. Each segment gets translated independently. Segment 47 is processed without awareness of segments 46 or 48.

This architecture is efficient. It parallelizes well. It maps cleanly to translation memory matching. It’s been the industry standard for decades.

It also produces translations that feel fragmented.

The coherence problem

Read a professionally translated document closely. Often you’ll notice:

Pronoun inconsistency. “It” refers to different things in adjacent sentences, but the translations don’t maintain the reference chain.
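
A reference chain like that is exactly what surrounding context can preserve. As a minimal sketch of the alternative: instead of sending a segment alone, bundle its neighbours with it. The request shape and the window size here are assumptions for illustration; real engines expose context in their own ways.

```python
def build_context_request(segments: list[str], index: int, window: int = 1) -> dict:
    """Package one segment together with its neighbouring segments."""
    return {
        "segment": segments[index],
        "preceding": segments[max(0, index - window):index],
        "following": segments[index + 1:index + 1 + window],
    }

# Three adjacent segments; "It" in the later ones refers back to "The update".
segments = [
    "The update ships next week.",
    "It includes the new export dialog.",
    "It also fixes the crash reported in March.",
]
print(build_context_request(segments, index=1))
# A context-aware engine translating the middle segment now sees what "It" points to.
```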

Read More
January 9, 2026

Translation Memory Meets AI: The Hybrid Approach

Translation memory is a solved problem. You translate a sentence once, store it, and reuse it when the same sentence appears again. 100% matches translate instantly, at zero cost, with guaranteed consistency.

The trouble is with everything that isn’t a 100% match.

The fuzzy match problem

Real content evolves. A sentence that was “Contact our support team” in version 1 becomes “Contact our customer support team” in version 2. The TM has a 90% match. What do you do with it?
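
One common way to automate that decision is to score the similarity and route on a threshold. A minimal sketch (the TM entry, the 0.75 cut-off, and the routing rules are illustrative; note that a character-level ratio is not the same scale as a CAT tool's fuzzy-match percentage):

```python
from difflib import SequenceMatcher

# A one-entry translation memory, just for illustration.
TM = {
    "Contact our support team": "Kontaktieren Sie unser Support-Team",
}

def best_match(source: str) -> tuple[str, float]:
    """Return the closest stored source segment and its similarity score."""
    return max(
        ((stored, SequenceMatcher(None, source, stored).ratio()) for stored in TM),
        key=lambda pair: pair[1],
    )

def route(source: str, threshold: float = 0.75) -> str:
    stored, score = best_match(source)
    if score == 1.0:
        return TM[stored]                            # exact reuse, zero cost
    if score >= threshold:
        return f"FUZZY ({score:.0%}): {TM[stored]}"  # reuse, but flag for repair
    return "SEND TO MT"                              # too far off to leverage

print(route("Contact our customer support team"))
# FUZZY (84%): Kontaktieren Sie unser Support-Team
```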

Read More