
Performance

Fluenti’s core performance advantage is compile-time message transformation. Unlike runtime-interpreted i18n libraries that parse ICU MessageFormat strings on every render, Fluenti compiles messages to JavaScript functions during the build step.

| | Compile-time (Fluenti) | Runtime (traditional) |
| --- | --- | --- |
| Parse ICU syntax | Once, at build time | Every render cycle |
| Bundle content | Pre-compiled functions | Raw message strings + parser |
| Parser in bundle | No | Yes (~5-15 KB gzipped) |
| First render | Instant (function call) | Parse → compile → execute |

This means Fluenti has zero runtime parsing overhead and ships a smaller bundle (no parser needed in production).
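To make this concrete, here is a hypothetical sketch of the difference: the ICU string below is what a runtime library would re-parse on every render, while the function is the kind of output a compile step can emit instead (illustrative only; Fluenti's actual compiled output shape may differ).

```typescript
// ICU source message (runtime libraries re-parse this string on each render):
//   "{count, plural, one {# item} other {# items}}"

// Roughly what a compile step can emit instead (hypothetical output shape):
const itemCount = (args: { count: number }) => {
  // Plural selection uses the standard Intl API; no ICU parser is shipped
  const category = new Intl.PluralRules('en').select(args.count)
  return category === 'one' ? `${args.count} item` : `${args.count} items`
}

console.log(itemCount({ count: 1 })) // "1 item"
console.log(itemCount({ count: 3 })) // "3 items"
```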

Use `fluenti stats` to see translation coverage and catalog sizes:

```sh
fluenti stats
```

For detailed bundle analysis, use your bundler’s built-in tools:

```sh
# Vite
npx vite-bundle-visualizer

# Next.js (with @next/bundle-analyzer)
ANALYZE=true next build
```
Fluenti's runtime footprint breaks down as follows:

| Component | Size (gzipped) |
| --- | --- |
| Core runtime (`@fluenti/core`) | ~3 KB |
| Framework adapter (vue/react/solid) | ~1-2 KB |
| Per-locale compiled messages | Varies by content |

The per-locale message size depends on your catalog. Compiled messages are typically 30-50% smaller than raw ICU strings because static messages compile to plain strings (no function wrapper needed).
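For example (a hypothetical compiled catalog, not Fluenti's exact output format), static entries need no function wrapper at all:

```typescript
// Hypothetical compiled catalog: static messages become plain strings,
// only parameterized messages need a function wrapper.
const catalog = {
  welcome: 'Welcome back!', // static: just a string
  hello: (args: { name: string }) => `Hello, ${args.name}!`, // dynamic
}

console.log(typeof catalog.welcome) // "string"
console.log(catalog.hello({ name: 'Ada' })) // "Hello, Ada!"
```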

Choose the right code-splitting strategy for your app via the `splitting` config option:

| Strategy | Bundle behavior | Best for |
| --- | --- | --- |
| `false` (default) | All locales bundled together | Small apps, ≤ 3 locales |
| `'static'` | Each locale in its own chunk, all loaded upfront | Medium apps, fast locale switching |
| `'dynamic'` | Locales loaded on demand | Large apps, many locales |
```ts
// fluenti.config.ts
export default defineConfig({
  splitting: 'dynamic', // Only load the active locale
})
```

For a deep dive into splitting mechanics, see Code Splitting.

Fluenti maintains several internal caches: an LRU cache for compiled messages, plus per-type caches for `Intl.*` formatter instances. The defaults work well for most applications.

The compiled-message cache stores the result of parsing + compiling ICU messages. Default size: 500 entries.

```ts
import { setMessageCacheSize } from '@fluenti/core'

// Increase for apps with many unique messages
setMessageCacheSize(2000)

// Decrease for memory-constrained environments
setMessageCacheSize(100)
```
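For reference, a bounded cache with least-recently-used eviction can be sketched on top of `Map` insertion order (illustrative only; Fluenti's internal implementation may differ):

```typescript
// Minimal LRU cache sketch using Map's insertion-order iteration.
class LRUCache<K, V> {
  private map = new Map<K, V>()
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key)
    if (value !== undefined) {
      // Re-insert to mark this entry as most recently used
      this.map.delete(key)
      this.map.set(key, value)
    }
    return value
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key)
    } else if (this.map.size >= this.maxSize) {
      // Evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value as K)
    }
    this.map.set(key, value)
  }
}

const cache = new LRUCache<string, string>(500)
cache.set('en:greeting', 'Hello!')
```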

Each `Intl.*` formatter type has its own unbounded cache (keyed by `locale:options`). In practice these stay small — most apps use a handful of locales and format styles.
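Such a formatter cache amounts to memoizing constructor calls on a serialized key, roughly like this sketch (`getNumberFormat` is an illustrative helper, not a Fluenti export):

```typescript
// Sketch of a formatter cache keyed by `${locale}:${JSON.stringify(options)}`.
const numberFormatCache = new Map<string, Intl.NumberFormat>()

function getNumberFormat(
  locale: string,
  options: Intl.NumberFormatOptions = {},
): Intl.NumberFormat {
  const key = `${locale}:${JSON.stringify(options)}`
  let nf = numberFormatCache.get(key)
  if (!nf) {
    // Constructing Intl formatters is relatively expensive; do it once per key
    nf = new Intl.NumberFormat(locale, options)
    numberFormatCache.set(key, nf)
  }
  return nf
}

// Repeated calls with the same locale/options reuse one instance:
const a = getNumberFormat('de', { style: 'currency', currency: 'EUR' })
const b = getNumberFormat('de', { style: 'currency', currency: 'EUR' })
console.log(a === b) // true
```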

For long-running servers, clear caches periodically to bound memory:

```ts
import { clearAllCaches } from '@fluenti/core'

// Clear every 4 hours to bound memory growth
setInterval(() => clearAllCaches(), 4 * 60 * 60 * 1000)
```

For fine-grained control, clear individual caches:

```ts
import {
  clearInterpolationCache,
  clearNumberFormatCache,
  clearDateFormatCache,
  clearPluralCache,
  clearRelativeTimeFormatCache,
  clearCompileCache,
} from '@fluenti/core'
```

See the Cache Management API reference for details.

Clear caches between tests for deterministic behavior:

```ts
import { clearAllCaches } from '@fluenti/core'

afterEach(() => clearAllCaches())
```

Reduce perceived latency by preloading locales the user is likely to switch to:

```ts
const { preloadLocale } = useI18n()

// Preload on hover over the language switcher
function onLanguageHover(locale: string) {
  preloadLocale(locale)
}
```

`preloadLocale()` loads messages in the background without switching the active locale. It silently ignores errors and skips already-loaded locales.
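That skip-and-swallow behavior can be sketched as follows (`preload` is a hypothetical stand-in, not Fluenti's actual implementation):

```typescript
// Tracks which locales have already been fetched
const loaded = new Set<string>()

async function preload(
  locale: string,
  loader: (l: string) => Promise<unknown>,
): Promise<void> {
  if (loaded.has(locale)) return // skip already-loaded locales
  try {
    await loader(locale)
    loaded.add(locale)
  } catch {
    // Silently ignore load errors; preloading is best-effort
  }
}
```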

| API | Purpose | When to use |
| --- | --- | --- |
| `chunkLoader` (config) | Async function for build-time code splitting | Standard lazy loading with `splitting: 'dynamic'` |
| `loadMessages()` | Synchronous merge of messages into a locale | Runtime message injection (user-contributed, API-fetched) |
```tsx
// Config-based chunk loader (recommended)
<I18nProvider
  locale="en"
  messages={{ en }}
  lazyLocaleLoading
  chunkLoader={(locale) => import(`./locales/compiled/${locale}.js`)}
>
```

Import compiled catalogs at module level, not per-request:

```ts
// ✅ Module-level import: shared across requests
import en from './locales/compiled/en'
import ja from './locales/compiled/ja'

export function handleRequest(req: Request) {
  const locale = detectLocale(req)
  const i18n = createFluentiCore({ locale, messages: { en, ja } })
  // ...
}

// ❌ Per-request import: unnecessary overhead
export async function handleRequest(req: Request) {
  const locale = detectLocale(req)
  const messages = await import(`./locales/compiled/${locale}.js`)
  // The dynamic import adds async resolution overhead on every request
}
```

Never use a singleton i18n instance in SSR — concurrent requests with different locales will interfere with each other:

```ts
// ❌ Shared global: locale race conditions in SSR
const i18n = createFluentiCore({ locale: 'en', messages })

// ✅ Per-request instance
function handleRequest(req: Request) {
  const i18n = createFluentiCore({ locale: detectLocale(req), messages })
}
```

For projects with 5+ target locales, enable worker-thread compilation:

```ts
// fluenti.config.ts
export default defineConfig({
  parallelCompile: true,
})
```

This uses Node.js worker threads to compile multiple locales simultaneously. The overhead of spawning workers is only worthwhile for 5+ locales.

Fluenti caches extraction and compilation results. On subsequent runs, only changed files are re-processed:

```sh
# Normal run (uses cache)
fluenti extract && fluenti compile

# Force a full rebuild (bypass cache)
fluenti extract --no-cache && fluenti compile --no-cache
```

Cache files are stored alongside your catalog directory. They are safe to delete and will be regenerated on the next run.

Track these metrics to ensure i18n doesn’t regress performance:

  • Largest Contentful Paint (LCP) — should not increase after adding i18n
  • Total bundle size — compare before/after with fluenti stats
  • Time to Interactive (TTI) — watch for impact from lazy locale loading
  • Missing translation rate — use fluenti check --json in CI to track coverage
```sh
# CI coverage gate: fail if any locale is below 95%
fluenti check --threshold 95
```

See Scaling & Enterprise for CI/CD integration patterns.