Project Loom has been in development for years, and with Java 21, virtual threads are finally a production-ready feature. For backend developers, this is the most significant change to the Java platform since lambdas in Java 8.
What Are Virtual Threads?
Traditional Java threads are platform threads — thin wrappers around OS threads. They are expensive: each one reserves about 1 MB of stack memory by default, and the OS limits how many you can create (typically a few thousand).
Virtual threads are lightweight threads managed by the JVM. You can create millions of them with negligible overhead:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 100_000).forEach(i ->
        executor.submit(() -> {
            Thread.sleep(Duration.ofSeconds(1));
            return i;
        })
    );
}
This code creates 100,000 concurrent tasks. With platform threads, the JVM would exhaust memory or hit OS thread limits long before reaching that count. With virtual threads, it completes in about one second.
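Beyond the executor, Java 21 also lets you create individual virtual threads directly via the `Thread.ofVirtual()` builder. A minimal sketch (the class and method names here are illustrative, not from the original):

```java
public class VirtualThreadDemo {

    // Starts a single virtual thread and verifies the task
    // actually ran on one via Thread.isVirtual() (Java 21+).
    public static boolean runOnVirtualThread() {
        var result = new boolean[1];
        Thread vt = Thread.ofVirtual()
                .name("demo-vt")
                .start(() -> result[0] = Thread.currentThread().isVirtual());
        try {
            vt.join(); // wait for the task to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result[0];
    }
}
```

`Thread.startVirtualThread(Runnable)` is an even shorter one-liner when you do not need the builder's options.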
Impact on Spring Boot
Spring Boot 3.2+ has built-in support for virtual threads. Enable them with a single property:
spring.threads.virtual.enabled=true
This switches Tomcat's thread pool to use virtual threads. Every incoming HTTP request gets its own virtual thread, which means:
- No more thread pool tuning — you do not need to worry about server.tomcat.threads.max
- Blocking is cheap — JDBC calls, HTTP client calls, and file I/O no longer waste precious threads
- Simpler code — you can write synchronous-looking code that scales like reactive code
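The "blocking is cheap" point can be demonstrated in plain Java, without Spring. In this sketch (names are illustrative), 1,000 simulated requests each block for 100 ms, the way a JDBC or HTTP call would, yet the whole batch finishes in a fraction of the 100 seconds that sequential execution would take:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class BlockingIsCheap {

    // Runs 1,000 tasks that each block for 100 ms on virtual threads.
    // Returns the total wall-clock time in milliseconds.
    public static long simulateRequests() {
        Instant start = Instant.now();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 1_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // simulated blocking I/O
                        return i;
                    }));
        } // close() waits for all submitted tasks to complete
        return Duration.between(start, Instant.now()).toMillis();
    }
}
```

While a task sleeps, its virtual thread is unmounted from the carrier thread, so the blocked tasks cost almost nothing.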
When NOT to Use Virtual Threads
Virtual threads are not a silver bullet:
- CPU-bound tasks do not benefit — virtual threads help with I/O-bound workloads
- Synchronized blocks can still pin virtual threads to platform threads — prefer ReentrantLock
- Thread-local variables work but can cause memory issues with millions of threads — use scoped values instead (a preview feature as of Java 21)
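The ReentrantLock recommendation looks like this in practice. In Java 21, blocking inside a synchronized block pins the virtual thread to its carrier platform thread, while lock()/unlock() lets the JVM unmount it. A minimal sketch (the Counter class is a hypothetical example, not from the original):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // A virtual thread waiting on lock() parks without pinning its carrier,
    // unlike a virtual thread blocked inside a synchronized block (Java 21).
    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

The try/finally shape is essential: unlike synchronized, ReentrantLock is not released automatically when the block exits.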
My Recommendation
For most Spring Boot backend services, virtual threads are a straightforward upgrade. They remove the need for reactive frameworks (Project Reactor, WebFlux) and their complexity while delivering similar throughput for I/O-heavy workloads.
Start by enabling them in a non-critical service, monitor the behavior, and gradually roll them out across your stack.