
One API Key, Many Tenants: How We Isolate DeepL Translations Across Customers

Rasepi uses a single DeepL API key for all tenants. Here's how we handle per-customer glossaries, style rules, cached translations, and block-level isolation without anything leaking.

Under the Hood

There's a question that comes up every time I explain Rasepi's translation architecture to another developer: "Wait, so all your tenants share one DeepL API key? How do you keep their glossaries and style rules from leaking into each other?"

It's a fair question. And the answer involves more design work than you'd expect.

We covered the full translation pipeline in a previous post: the block-level hashing, the orchestrator, the whole flow from document save to translated output. This post zooms in on the specific problem of multi-tenancy: how to take a third-party API that has no concept of tenants and build tenant isolation on top of it.

The problem: DeepL doesn't know about your customers

DeepL's API authenticates with a single API key. Everything created under that key (glossaries, style rule lists, translation history) belongs to the same account. There's no concept of "this glossary belongs to Tenant A" on DeepL's side.

When you call GET /v2/glossaries, you get all glossaries from all tenants. When you create a style rule list, it lives in the same namespace as every other tenant's style rules. The API is flat.

For a self-hosted product where every customer runs their own instance with their own DeepL key, this is fine. For a multi-tenant SaaS where we manage the infrastructure? You need an isolation layer.

The database is the source of truth

Our core design decision: the database owns all glossary content and style rule configuration. DeepL is a runtime execution target, nothing more.

Every TenantGlossary and TenantStyleRuleList entity implements ITenantScoped, which means EF Core global query filters automatically scope all reads to the current tenant. A query for glossaries in Tenant A's request context will never return Tenant B's entries. This is the same isolation pattern we use everywhere in Rasepi, enforced at the ORM level.
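To make the effect of that scoping concrete, here's a minimal in-memory sketch of the pattern, not the actual EF Core wiring (which uses HasQueryFilter per entity). The names TenantScopedSet and the record shape are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-ins for Rasepi's tenant-scoped entities and
// EF Core's global query filters, reduced to an in-memory list.
public interface ITenantScoped { Guid TenantId { get; } }

public record TenantGlossary(Guid TenantId, string Name) : ITenantScoped;

public class TenantScopedSet<T> where T : ITenantScoped
{
    private readonly List<T> _items = new();
    private readonly Func<Guid> _currentTenant;

    public TenantScopedSet(Func<Guid> currentTenant) => _currentTenant = currentTenant;

    public void Add(T item) => _items.Add(item);

    // Every read is filtered by the ambient tenant id,
    // analogous to a global query filter on the DbSet.
    public IEnumerable<T> Query() => _items.Where(i => i.TenantId == _currentTenant());
}
```

In a request running as Tenant A, Query() is physically incapable of returning Tenant B's rows, no matter what the calling code asks for.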

Here's what makes this interesting. When a tenant edits a glossary term, we do not immediately call DeepL. We update the database row and set IsDirty = true. That's it. The actual DeepL glossary gets created (or recreated) lazily, right before the next translation needs it.

public async Task<string?> GetOrSyncDeepLGlossaryIdAsync(
    string sourceLanguage, string targetLanguage)
{
    var glossary = await _db.TenantGlossaries
        .Include(g => g.Entries)
        .FirstOrDefaultAsync(g =>
            g.SourceLanguage == sourceLanguage &&
            g.TargetLanguage == targetLanguage);

    // No glossary row for this pair, or no entries: nothing to sync.
    if (glossary is null || glossary.Entries.Count == 0) return null;

    if (!glossary.IsDirty && glossary.DeepLGlossaryId is not null)
        return glossary.DeepLGlossaryId;

    // Dirty: delete old, create new
    if (glossary.DeepLGlossaryId is not null)
        await _deepL.DeleteGlossaryAsync(glossary.DeepLGlossaryId);

    var entries = glossary.Entries
        .ToDictionary(e => e.SourceTerm, e => e.TargetTerm);

    var created = await _deepL.CreateGlossaryAsync(
        $"rasepi-{glossary.Id}",
        glossary.SourceLanguage,
        glossary.TargetLanguage,
        entries);

    glossary.DeepLGlossaryId = created.GlossaryId;
    glossary.IsDirty = false;
    glossary.LastSyncedAt = DateTime.UtcNow;
    await _db.SaveChangesAsync();

    return glossary.DeepLGlossaryId;
}

The query filter on TenantGlossaries does the isolation. The IsDirty flag does the lazy sync. And the naming convention (rasepi-{glossary.Id}) is only for debugging in the DeepL dashboard; it has no functional purpose.

Why lazy? Because DeepL v2 glossaries are immutable. You cannot edit them. Any change means delete and recreate. If a team imports a CSV with 200 terms and then fixes a typo in one entry, we do not want to delete and recreate the DeepL glossary twice. We just set IsDirty both times and the single recreate happens when the next translation runs.
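Since edits only flip a flag, any number of changes between translations collapses into a single recreate. A small sketch of that coalescing, with a RecreateCount counter standing in for the actual DeepL delete-and-create round trip (the class and member names here are illustrative, not Rasepi's real types):

```csharp
using System.Collections.Generic;

// Illustrative sketch: edits mark the glossary dirty; the DeepL
// delete-and-recreate happens lazily, once, at translation time.
public class GlossarySyncSketch
{
    public Dictionary<string, string> Entries { get; } = new();
    public bool IsDirty { get; private set; }
    public int RecreateCount { get; private set; } // stands in for DeepL API calls

    public void UpsertTerm(string source, string target)
    {
        Entries[source] = target;
        IsDirty = true; // no DeepL call here
    }

    // Called right before a translation that needs the glossary.
    public void SyncIfDirty()
    {
        if (!IsDirty) return;
        RecreateCount++;   // one delete + create, however many edits happened
        IsDirty = false;
    }
}
```

Importing 200 terms and then fixing a typo leaves RecreateCount at 1 after the next translation, not 201.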

Style rules: same pattern, different API

DeepL's style rules are newer (v3 API) and actually mutable, which is nicer. You can update configured rules in place with PUT /v3/style_rules/{style_id}/configured_rules, and custom instructions can be individually added or removed.

We still use the same IsDirty pattern though. A TenantStyleRuleList has a DeepLStyleId that maps to DeepL's runtime identifier, plus ConfiguredRulesJson for the formatting rules and a collection of TenantCustomInstruction entries for free-text translation directives.

The real power is in those custom instructions. Each one is a plain-language directive, up to 300 characters, that shapes how DeepL translates. Real examples from our tenants:

  • "Always use 'Sie' form, never 'du'" for a German law firm
  • "Translate 'deployment' as 'Bereitstellung', never 'Deployment'" for context-dependent terms that go beyond simple glossary mappings
  • "Use British English spelling (colour, organisation, licence)" for a UK company translating between English variants
  • "Put currency symbols after the numeric amount" for European conventions

Each tenant can have completely different instructions per target language, all behind the same API key. The isolation comes from the fact that every translation call includes only the glossary_id and style_id belonging to the requesting tenant. Other tenants' DeepL resources are never referenced.

The translation call: everything composes

When the orchestrator translates a block, it assembles all tenant-specific settings into a single request:

var glossaryId = await _glossaryService
    .GetOrSyncDeepLGlossaryIdAsync(sourceLang, targetLang);
var styleId = await _styleRuleService
    .GetOrSyncStyleIdAsync(targetLang);
var formality = langConfig.Formality ?? "default";

var options = new TranslationOptions
{
    GlossaryId = glossaryId,
    StyleId = styleId,
    Formality = formality,
    Context = documentContext,
    ModelType = styleId != null ? "quality_optimized" : null
};

Every parameter here is tenant-scoped. The glossaryId was resolved through a tenant-filtered query. The styleId was resolved the same way. The formality comes from TenantLanguageConfig, also tenant-scoped. Even the context (surrounding paragraphs sent to improve translation quality, not billed) comes from the tenant's own document.

One thing I want to highlight: when style_id is set, DeepL automatically uses their quality_optimized model. You cannot combine style rules with latency_optimized. That's a DeepL constraint, but honestly a reasonable one. If you're investing in custom style rules, you probably want the best quality output.

Block-level caching: the database as translation memory

We don't call DeepL for blocks that haven't changed. The caching mechanism is the TranslationBlock table itself.

Every source EntryBlock has a ContentHash, a SHA256 of its semantic content (with metadata attributes like blockId and deleted stripped out). Every TranslationBlock stores the SourceContentHash that was current when the translation was made. When the source block changes, its hash changes. The orchestrator compares hashes and only queues blocks with mismatches.
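The exact normalization isn't shown in this post, but the idea is simple: strip the attributes that don't affect meaning, then hash what's left in a stable order. A simplified sketch, assuming for illustration that block metadata lives in a plain dictionary rather than Rasepi's real block model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class ContentHashSketch
{
    // Attributes that never affect the translated output.
    private static readonly HashSet<string> Ignored = new() { "blockId", "deleted" };

    public static string Hash(string text, IDictionary<string, string> attributes)
    {
        // Keep only semantic attributes, in a stable order, so the
        // same content always canonicalizes to the same byte string.
        var semantic = attributes
            .Where(kv => !Ignored.Contains(kv.Key))
            .OrderBy(kv => kv.Key, StringComparer.Ordinal)
            .Select(kv => $"{kv.Key}={kv.Value}");

        var canonical = text + "\n" + string.Join("\n", semantic);
        var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
        return Convert.ToHexString(bytes);
    }
}
```

Two blocks with different blockId values but identical text produce the same hash, so a reorder or re-save alone never triggers a retranslation.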

The decision tree for each block looks like this:

  1. Hash matches, translation exists = skip (cached, up-to-date)
  2. Hash changed, machine-translated, not locked = retranslate automatically
  3. Hash changed, human-edited or locked = mark as Stale, do not overwrite

That third case is crucial. If your German translator manually refined a paragraph, we do not blow it away just because the English source changed. We flag it as stale so they know it needs review, but the translated text stays intact.

The practical result: editing one paragraph in a 30-paragraph document triggers exactly one DeepL API call (well, one batch that includes one block). The other 29 paragraphs, across all languages, are already cached and don't cost anything.
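The three cases above reduce to one small function. A sketch with illustrative names (the real orchestrator works against TranslationBlock rows, not bare parameters):

```csharp
public enum BlockAction { Skip, Retranslate, MarkStale }

public static class TranslationDecision
{
    // Decide what to do with one translated block when its source may have changed.
    public static BlockAction Decide(
        string sourceHash,          // current hash of the source block
        string translatedFromHash,  // SourceContentHash stored on the translation
        bool humanEdited,
        bool locked)
    {
        if (sourceHash == translatedFromHash)
            return BlockAction.Skip;        // cached, up-to-date

        if (humanEdited || locked)
            return BlockAction.MarkStale;   // flag for review, never overwrite

        return BlockAction.Retranslate;     // machine output, safe to replace
    }
}
```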

Why not use a separate key per tenant?

I seriously considered it. Give each tenant their own DeepL API key, eliminate the isolation problem entirely.

Three reasons we didn't:

  1. Billing complexity. Every tenant would need their own DeepL subscription or a way to provision sub-accounts. DeepL doesn't offer multi-tenant key management natively.
  2. Cost efficiency. Shared infrastructure means shared rate limits and volume discounts. Our aggregate usage gets better pricing.
  3. Operational simplicity. One key to rotate, one quota to monitor, one integration to maintain.

The tradeoff is that we need the isolation layer I described. But given that we already have tenant-scoped EF Core queries for everything else in the system, adding it to glossaries and style rules was straightforward. The pattern was already there.

What actually protects you

To summarize the isolation guarantees:

  • Glossary entries are stored in TenantGlossary (implements ITenantScoped), filtered by EF Core global query filters. DeepL glossary IDs are opaque references that only get resolved within tenant context.
  • Style rules and custom instructions follow the same pattern through TenantStyleRuleList.
  • Translated content lives in TranslationBlock, scoped via its parent EntryHub chain, which is also tenant-scoped.
  • The SaveChanges guard sets TenantId automatically on new entities and throws on cross-tenant writes.
  • No IgnoreQueryFilters() in production code. Ever.

The design principle is simple: DeepL sees resource IDs. Rasepi sees tenant-scoped entities. The mapping between them never crosses tenant boundaries because the query that resolves the mapping is physically incapable of returning another tenant's data.

If you're building a multi-tenant SaaS that integrates with third-party APIs without native tenant support, this pattern works well. Treat the external API as a stateless execution engine, keep all configuration in your own tenant-scoped database, sync lazily, and never trust external resource listings for isolation.
