
Building middleware in Java with LangChain4j


Cross-cutting behavior around assistants—logging, retries, safety rails, and memory compaction—without rewriting your business logic.


If you build agents with LangChain4j, you already wire a ChatModel, optional tools, and memory. What you often still want is a consistent place to observe calls, retry flaky APIs, cap tool loops, fall back to another model, or compress long histories. In other stacks that idea shows up as agent middleware. This post introduces a small Java library that brings the same pattern to LangChain4j 1.9.x, with a single interface and a fluent entry point.

Repository: https://github.com/udayogra/langchain4j-middleware


Why middleware?

Assistant code tends to mix three concerns:

  1. Product logic — prompts, tools, RAG, memory.
  2. Operational behavior — structured logs, metrics, retries, budgets.
  3. Reliability and cost — alternate models when the primary fails, summarization when context grows.

Middleware keeps (2) and (3) in reusable layers that wrap (1). You register a list of small components; the runtime applies them around each user turn and around model/tool calls. That matches how many teams structure LangChain-style agents in Python or TypeScript.


One interface: AgentMiddleware

The library centers on AgentMiddleware: one Java interface with default no-op methods. You implement only what you need—agent lifecycle hooks, LLM hooks, tool hooks, or static wrapping of the ChatModel / ToolExecutor map.

Order matters. The first middleware in the list is the outermost: it sees the turn first and sits on the outside of the composed model and tools. That is important when you combine logging, retries, and model fallback: outer layers run first on the way in (and the library defines how “after” hooks unwind).
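
To make the contract concrete, here is a minimal sketch of a custom middleware that times model calls. The hook names and parameter types below are assumptions made for illustration, not the library's exact signatures; check the AgentMiddleware Javadoc for the real methods.

import dev.langchain4j.middleware.AgentMiddleware;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

// Hypothetical example: overrides only the LLM hooks and leaves every other
// default no-op in place. Hook names are assumed, not the published API.
public class TimingMiddleware implements AgentMiddleware {

    private final ThreadLocal<Long> startNanos = new ThreadLocal<>();

    @Override
    public void beforeLlmCall(ChatRequest request) {                        // assumed hook name
        startNanos.set(System.nanoTime());
    }

    @Override
    public void afterLlmCall(ChatRequest request, ChatResponse response) {  // assumed hook name
        long elapsedMs = (System.nanoTime() - startNanos.get()) / 1_000_000;
        System.out.println("LLM call took " + elapsedMs + " ms");
    }
}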


Getting started: MiddlewareAi in three steps

The simplest path is MiddlewareAi:

  1. MiddlewareAi.create(chatModel) — start from your LangChain4j ChatModel.
  2. .middleware(…) — add built-in or custom middleware (chain calls or pass several at once).
  3. .build() — you get AgentChat, a thin API: chat(memoryId, userMessage) returns the assistant String.

Under the hood the builder still uses LangChain4j AiServices, memory, tools(...), contentRetriever(...), and friends—you do not have to declare a custom assistant interface for the common “one chat method” case.

import dev.langchain4j.middleware.AgentChat;
import dev.langchain4j.middleware.MiddlewareAi;
import dev.langchain4j.middleware.logging.LoggingMiddleware;
import dev.langchain4j.middleware.retry.RetryMiddleware;

AgentChat client =
    MiddlewareAi.create(chatModel)
        .middleware(
            new LoggingMiddleware("[app]", 200),
            new RetryMiddleware())
        .systemMessage("You are a helpful assistant.")
        .build();

String reply = client.chat("session-1", "Hello");
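
The memoryId is what keys the conversation memory: reuse an id to continue a thread, switch to a new id for a clean conversation. A small usage sketch, assuming the builder wires per-id chat memory by default:

// Same memoryId continues one conversation; a different id starts a fresh one.
String first  = client.chat("session-1", "My name is Ada.");
String second = client.chat("session-1", "What is my name?");   // can see the earlier turn
String other  = client.chat("session-2", "What is my name?");   // separate memory, cannot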

What ships in the box?

The repo includes ready-made middleware you can compose:

  • Logging — SLF4J lifecycle logs for agent, LLM, and tool phases, with optional string previews so huge payloads do not flood logs.
  • Retry — exponential backoff around failed model and tool invocations, with a configurable list of non-retryable exception names.
  • Max tool calls — per-chat cap on real tool executions; either return a synthetic tool result to the model or throw when the limit is exceeded.
  • Model fallback — if the primary model fails in the LLM hook path, try ordered fallback ChatModel instances.
  • Summarization — when a trigger (message count and/or approximate tokens) fires, summarize older turns with a separate small model and keep a verbatim tail; with MiddlewareAi, compaction is applied to stored memory so it survives the next turns.
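
Combining several of these might look like the sketch below. The LoggingMiddleware and RetryMiddleware lines match the earlier example; the remaining class names and constructor arguments are illustrative guesses, so check the README for the real parameters.

// Illustrative composition; the fallback, tool-cap, and summarization class
// names and arguments below are placeholders, not the library's documented API.
AgentChat client =
    MiddlewareAi.create(primaryModel)
        .middleware(
            new LoggingMiddleware("[prod]", 200),            // outermost: sees every phase first
            new RetryMiddleware(),                           // retries failed model/tool calls
            new ModelFallbackMiddleware(backupModel),        // placeholder: ordered fallback models
            new MaxToolCallsMiddleware(5),                   // placeholder: per-chat tool cap
            new SummarizationMiddleware(smallSummaryModel))  // placeholder: compaction model
        .systemMessage("You are a helpful assistant.")
        .build();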

Each piece is documented in the project README with its constructor and builder parameters. Runnable demos (classes with a main method) live under dev.langchain4j.middleware.demo for quick experiments, either against stubs (no production keys needed) or with an optional OPEN_AI_KEY where noted.


Maven coordinates

The project is published as a normal Maven artifact (version will be finalized before release):

GroupId: dev.langchain4j.contrib
ArtifactId: langchain4j-middleware

See the README in the repository for the current version string and dependency notes (the core langchain4j artifact is also required so that ToolExecutor is on the classpath for the tool middleware).
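
As a sketch, the dependency declaration would look like the following, with a placeholder property standing in for the version from the README:

<dependency>
    <groupId>dev.langchain4j.contrib</groupId>
    <artifactId>langchain4j-middleware</artifactId>
    <!-- placeholder: define this property with the current version from the README -->
    <version>${langchain4j-middleware.version}</version>
</dependency>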


When to dig deeper

If you need a custom assistant interface, streaming, or every AiServices knob, the same composition rules are available through MiddlewareComposition combined with wrapping your own chat method, as documented in the library Javadoc. The README stays focused on MiddlewareAi as the default story.


Summary

Middleware in Java with LangChain4j means: one AgentMiddleware contract, a clear outermost-first ordering rule, and MiddlewareAi to turn a ChatModel plus a middleware list into an AgentChat you can call like a minimal assistant. The implementation is open source and targets LangChain4j 1.9.x:

https://github.com/udayogra/langchain4j-middleware

Clone it, run mvn test, try the demos, and fold the pieces into your own agent stack.
