
Deep agents in Java with LangChain4j


A ready-made harness for planning, workspace files, sub-agents, and optional skills—so you ship a tool-using orchestrator instead of hand-wiring prompts and schemas.


If you follow LangChain’s deepagents idea—planning, a filesystem backend, and sub-agents—you get a productive shape for long-horizon work. This post introduces a LangChain4j-only Java library (Java 17+) that implements that pattern: no Spring, no LangGraph dependency, just AiServices with bounded memory and a curated tool set.

Repository: https://github.com/udayogra/langchain4j-deepagents


What you get out of the box

The orchestrator exposes tools the model is steered to use well via built-in prompts:

  • Planning — write_todos: a task checklist kept per chat session (memoryId).
  • Filesystem — list_dir, read_file, write_file, edit_file: all paths are relative to a configured workspace root and sandboxed inside that tree.
  • Sub-agents — task: delegate to named specialists you define; a general-purpose handoff is always available.
  • Skills (optional) — folders under skill roots, each with a SKILL.md; a compact catalog goes into the system prompt, and full text is fetched via read_file ("progressive disclosure").
  • Memory (optional) — Markdown files (e.g. AGENTS.md) injected into the system prompt for stable project context.

Under the hood it is a LangChain4j AiServices facade: one user message per turn, @MemoryId for MessageWindowChatMemory, and a cap on sequential tool invocations per turn (configurable).
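For orientation, here is roughly the plain-LangChain4j shape the facade wraps. This is an illustration of the pattern, not the library's actual internals; model is any tool-calling ChatModel, and the tool objects are elided:

```java
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.UserMessage;

// One user message per turn; chat memory is keyed by memoryId.
interface Orchestrator {
    String chat(@MemoryId String memoryId, @UserMessage String message);
}

Orchestrator orchestrator =
        AiServices.builder(Orchestrator.class)
                .chatModel(model) // any tool-calling ChatModel
                .chatMemoryProvider(id -> MessageWindowChatMemory.withMaxMessages(40))
                .tools(/* write_todos, filesystem tools, task */)
                .build();
```

The harness adds the curated system prompts and the per-turn cap on sequential tool invocations on top of this wiring.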


Three steps to a running agent

  1. Build a DeepAgentConfig. Required: workspace(Path) and exactly one of chatModel(ChatModel) or openAi(OpenAiChatModelConfig).
  2. Call DeepAgent.create(config) to get a DeepAgent.Orchestrator.
  3. Call agent.chat(memoryId, userMessage) — it returns the assistant reply as a String. memoryId scopes chat memory and todos; the workspace directory is shared across sessions unless you use different config roots per tenant.

In code:

import com.deepagents.langchain4j.DeepAgent;
import com.deepagents.langchain4j.config.DeepAgentConfig;

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

import java.nio.file.Path;

Path workspace = Path.of("/tmp/my-agent-workspace");

ChatModel model =
        OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o")
                .build();

DeepAgentConfig config =
        DeepAgentConfig.builder()
                .workspace(workspace)
                .chatModel(model)
                .build();

DeepAgent.Orchestrator agent = DeepAgent.create(config);
String reply = agent.chat("session-1", "List the workspace root, then summarize what you see.");

To configure OpenAI purely from environment variables, use OpenAiChatModelConfig.fromRequiredEnvironment() with openAi(...) on the builder; this avoids duplicating key and model wiring in your code.
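A sketch of that path (assuming the config class lives under com.deepagents.langchain4j.config, and that OPENAI_API_KEY plus any required model variable are set):

```java
import com.deepagents.langchain4j.DeepAgent;
import com.deepagents.langchain4j.config.DeepAgentConfig;
import com.deepagents.langchain4j.config.OpenAiChatModelConfig;

import java.nio.file.Path;

// Key and model come from the environment; code only supplies the workspace.
DeepAgentConfig config =
        DeepAgentConfig.builder()
                .workspace(Path.of("/tmp/my-agent-workspace"))
                .openAi(OpenAiChatModelConfig.fromRequiredEnvironment())
                .build();

DeepAgent.Orchestrator agent = DeepAgent.create(config);
```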


Sub-agents and isolation

Specialists are SubAgentDefinition instances: name, description (for the task tool schema), prompt (sub-agent system message; it does not inherit the orchestrator prompt), optional extraTools, and builtInFileTools (default true). Only the orchestrator has write_todos and task; sub-agents do not chain further delegation.

Merge app-wide tools with additionalTools on DeepAgentConfig; they apply to the orchestrator and every sub-agent unless you scope tools per specialist via extraTools.
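Concretely, wiring a specialist might look like the sketch below. SubAgentDefinition's fields come from the description above, but the builder methods, subAgents(...), and MySearchTool are illustrative assumptions:

```java
// Hypothetical wiring for a "researcher" specialist.
SubAgentDefinition researcher =
        SubAgentDefinition.builder()
                .name("researcher")                       // exposed via the task tool
                .description("Condenses background material from workspace files.")
                .prompt("You are a focused research assistant. Answer only from files you read.")
                .builtInFileTools(true)                   // default: keep list_dir/read_file/etc.
                .build();

DeepAgentConfig config =
        DeepAgentConfig.builder()
                .workspace(workspace)
                .chatModel(model)
                .subAgents(List.of(researcher))           // assumed registration method
                .additionalTools(new MySearchTool())      // shared app-wide tool (hypothetical)
                .build();
```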


Skills vs agent memory

  • Skills — optional playbooks; the model sees a compact catalog first and calls read_file only when it needs the full text.
  • Memory — files like AGENTS.md always contribute to the system prompt when configured (paths must live inside the workspace). Content is read at DeepAgent.create time; refresh by rebuilding the orchestrator after disk changes.
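The progressive-disclosure idea is easy to picture in plain Java. The sketch below is not the library's code; it just shows one way a compact catalog could be built from the first non-blank line of each skill folder's SKILL.md:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustration only: assemble a compact catalog from each skill's SKILL.md.
public class SkillCatalog {

    public static String buildCatalog(Path skillRoot) throws IOException {
        try (Stream<Path> entries = Files.list(skillRoot)) {
            return entries.filter(Files::isDirectory)
                    .map(dir -> dir.resolve("SKILL.md"))
                    .filter(Files::isRegularFile)
                    .sorted()
                    .map(SkillCatalog::summaryLine)
                    .collect(Collectors.joining("\n"));
        }
    }

    // "- <skill-folder>: <first non-blank line of SKILL.md>"
    private static String summaryLine(Path skillMd) {
        try {
            String summary = Files.readAllLines(skillMd).stream()
                    .filter(line -> !line.isBlank())
                    .findFirst()
                    .orElse("(empty)");
            return "- " + skillMd.getParent().getFileName() + ": " + summary;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("skills");
        Files.createDirectories(root.resolve("code-review"));
        Files.writeString(root.resolve("code-review").resolve("SKILL.md"),
                "Reviews diffs for style and correctness.\n\nFull playbook text follows...");
        System.out.println(buildCatalog(root));
        // prints: - code-review: Reviews diffs for style and correctness.
    }
}
```

The model then asks for the full SKILL.md via read_file only when a task actually needs the playbook.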

Observability

  • toolInvocationLogMode — NONE, INFO, or DEBUG for tool-call logging.
  • flowListener or recordFlowTraceToStderr(true) — timeline of prompt readiness, user turns, tools, and task delegations (see DeepAgentFlowRecorder in the repo).
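Both knobs live on the config builder. A sketch (the ToolInvocationLogMode type name and exact setter signatures are assumptions beyond the option names listed above):

```java
// Sketch: enable tool-call logging and a flow trace on the builder.
// ToolInvocationLogMode is an assumed enum name for the NONE/INFO/DEBUG modes.
DeepAgentConfig config =
        DeepAgentConfig.builder()
                .workspace(workspace)
                .chatModel(model)
                .toolInvocationLogMode(ToolInvocationLogMode.INFO)
                .recordFlowTraceToStderr(true) // timeline of turns, tools, task delegations
                .build();
```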

Maven coordinates

  • GroupId — com.deepagents
  • ArtifactId — langchain4j-deepagents
  • Version — 0.1.0-SNAPSHOT (set before publish)
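As a pom.xml dependency block:

```xml
<dependency>
    <groupId>com.deepagents</groupId>
    <artifactId>langchain4j-deepagents</artifactId>
    <version>0.1.0-SNAPSHOT</version>
</dependency>
```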

You need a chat model that supports tool calling, and a LangChain4j version aligned with this project’s pom.xml (e.g. 1.9.x).


What this is not (yet)

The README is explicit: no LangGraph checkpointing, no built-in shell execute, no automatic thread summarization—only MessageWindowChatMemory and chatMemoryMaxMessages. Those are extension points for your app or other libraries.


Summary

Deep agents in Java with LangChain4j here means: DeepAgentConfig + DeepAgent.create, a bounded tool loop, workspace sandbox, todos, task sub-agents, and optional skills and AGENTS.md-style memory—faithful to the spirit of upstream deepagents while staying idiomatic LangChain4j.

https://github.com/udayogra/langchain4j-deepagents

Clone it, run mvn test, set OPENAI_API_KEY, and try the bundled demos under com.deepagents.langchain4j.demos.
