DevZone Tools · Updated Apr 21, 2026 · GraalVM, Java
# CLAUDE.md — GraalVM JIT vs Native: When to Use Which

## The trade-off in one sentence

Native images give fast startup and a low memory footprint. JIT compilation (HotSpot or GraalVM JIT) gives higher peak throughput once warmed up. Pick based on workload shape, not preference.

## Use native when

- **Cold start matters**: AWS Lambda, Cloud Run, scale-to-zero, batch jobs that boot per invocation.
- **Memory is constrained**: edge runtimes, sidecars, dense container packing.
- **Single-binary distribution**: CLIs, tools shipped to users without a JRE.
- **Short-lived processes**: a 60-second batch job exits before the JIT finishes warming up, so it pays compilation overhead without ever reaping the benefit.
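As a sketch of the single-binary case above: with GraalVM installed, a CLI can be compiled ahead of time with the `native-image` tool. The jar and binary names here are placeholders.

```shell
# Compile an application jar into a standalone native binary (GraalVM required).
# app.jar and my-cli are placeholder names.
native-image -jar app.jar my-cli

# The result starts in tens of milliseconds and needs no JRE on the target machine.
./my-cli --help
```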

## Use JIT (HotSpot or GraalVM JIT) when

- **Peak throughput is the SLO**: long-running services with steady traffic.
- **The codebase relies heavily on reflection / dynamic class loading**: ORMs with proxy generation, frameworks doing runtime weaving.
- **Memory is not the bottleneck**: pods sized 1 GB+ where the JIT compiler's overhead is noise.
- **The team has more JVM ops experience than native**: tooling, profilers, debugging — JVM is the better-known path.
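The reflection point above is the usual deal-breaker. Code like the sketch below works on the JVM out of the box, but in a native image a class resolved by name must be registered in reachability metadata at build time or the lookup fails. The choice of `java.util.ArrayList` is purely illustrative.

```java
import java.lang.reflect.Method;

public class ReflectionProbe {
    public static void main(String[] args) throws Exception {
        // Resolve a class by name at runtime, the way ORM proxy layers often do.
        // On the JVM this always works; in a native image it fails unless the
        // class was registered for reflection at build time.
        Class<?> cls = Class.forName("java.util.ArrayList");
        Object list = cls.getDeclaredConstructor().newInstance();
        Method size = cls.getMethod("size");
        System.out.println("size = " + size.invoke(list)); // prints "size = 0"
    }
}
```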

## Hybrid: Tiered approach

- Some services run native in **dev/staging** for fast feedback, JVM in **production** for throughput.
- Or: native for the **gateway / edge** (cold-start sensitive), JVM for the **monolith** behind it.
- Don't optimize prematurely. Most services are fine on JVM with `-XX:+AlwaysActAsServerClassMachine` and a sane GC.
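The "JVM with sane defaults" baseline might look like the following. The flag values are illustrative starting points, not tuned recommendations, and `service.jar` is a placeholder.

```shell
# Illustrative JVM startup flags: force server-class heuristics, pick a
# mainstream low-pause collector, and cap the heap explicitly for containers.
java -XX:+AlwaysActAsServerClassMachine \
     -XX:+UseG1GC \
     -Xmx768m \
     -jar service.jar
```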

## Numbers (rough, your mileage will vary)

| Metric | JVM (HotSpot) | GraalVM JIT | Native |
|--------|---------------|-------------|--------|
| Cold start | 1–5 s | 1–5 s | 50–200 ms |
| Memory (small app) | 200–400 MB | 200–400 MB | 50–100 MB |
| Peak throughput | 100% (baseline) | 105–115% | 70–85% |
| Build time | 30 s | 30 s | 2–5 min |

## Migration path

If you might want native later:

1. Start with Spring Boot 3 / Quarkus / Micronaut. They generate AOT-friendly code by default.
2. Avoid runtime reflection in your code. Use generated code (records, sealed types) where possible.
3. Run native compilation in CI from day one — even if you don't deploy native, you'll catch incompatibilities early.
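When a dependency does need reflection, `native-image` can be told about it via reachability metadata placed under `META-INF/native-image/`. A minimal `reflect-config.json` entry looks like this; the class name is a placeholder.

```json
[
  {
    "name": "com.example.OrderDto",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```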

## Cost considerations

- Native is cheaper per invocation on serverless (faster cold start means less billed wall-clock time).
- Native is **more expensive to build** in CI (longer pipelines).
- For 24/7 services, the JIT throughput advantage usually wins on cost.

## Don't

- Don't go native because "it's faster". Cold start ≠ peak throughput.
- Don't go native to save 100 MB if your container is 1 GB. Premature optimization.
- Don't build native in every PR. Build JVM with AOT processing for fast feedback; build native nightly or on release branches.
- Don't assume third-party libraries work in native. Test integrations early.
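One way to implement the "JVM per PR, native nightly" split is a scheduled CI job. This GitHub Actions sketch assumes a Maven project with a native build profile already wired up; the profile and workflow names are placeholders.

```yaml
# Nightly native-image build; PR builds stay on the fast JVM path.
name: native-nightly
on:
  schedule:
    - cron: "0 3 * * *"   # 03:00 UTC every night
jobs:
  native:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: graalvm/setup-graalvm@v1
        with:
          java-version: '21'
          distribution: 'graalvm'
      - run: ./mvnw -Pnative package   # 'native' profile is a placeholder
```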
