Letting Your LLM Discover Data Dynamically with MCP Resources
When building AI-powered applications, one of the most common challenges is:
👉 How do we give the LLM access to fresh, structured data without stuffing the entire dataset into its prompt?
This is where Model Context Protocol (MCP) resources shine.
Instead of preloading everything, you can expose a pair of simple tools:
- `listResources` → lets the LLM explore what data sources are available.
- `readResource` → lets the LLM fetch the specific resource it needs, on demand.
In this post, we’ll walk through how this pattern works and why it’s a powerful way to keep your LLMs both lean and dynamic.
Why not preload resources?
One tempting option is to call listResources once at startup, dump the entire catalog into the LLM’s system prompt, and let it use those directly.
But this breaks down quickly when:
- You have thousands of users, each with their own resources (lab results, documents, reports, etc.).
- Resources are dynamic (new results come in every hour).
- Prompts become bloated with irrelevant metadata.
Instead, let the LLM discover resources only when it needs them.
The Pattern: Expose Resource Access as Tools
In this setup, the LLM doesn’t get the resource catalog up front.
Instead:
- You expose `listResources` and `readResource` as tools in your MCP client.
- The LLM decides when to call them, based on user queries.
This gives you a nice balance of flexibility and control.
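To make the setup concrete, here is a minimal sketch of the client side. The tool schemas and the `dispatch` function are illustrative assumptions for this post, not the official MCP SDK API:

```python
# Sketch: advertising listResources/readResource as tools the LLM can call.
# Tool names, schemas, and the dispatch helper are illustrative assumptions,
# not the actual MCP SDK interface.

TOOLS = [
    {
        "name": "listResources",
        "description": "List the resources available on the MCP server.",
        "inputSchema": {"type": "object", "properties": {}},
    },
    {
        "name": "readResource",
        "description": "Read one resource by its URI.",
        "inputSchema": {
            "type": "object",
            "properties": {"uri": {"type": "string"}},
            "required": ["uri"],
        },
    },
]


def dispatch(call: dict, mcp_client) -> object:
    """Route a tool call emitted by the LLM to the MCP client."""
    if call["action"] == "listResources":
        return mcp_client.list_resources()
    if call["action"] == "readResource":
        return mcp_client.read_resource(call["arguments"]["uri"])
    raise ValueError(f"Unknown tool: {call['action']}")
```

The key point is that the LLM only ever sees the two tool definitions up front; the actual catalog and data arrive later, one call at a time.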
Example Walkthrough
👤 User:
Show me today’s blood test results for patient 123.
🤖 LLM (reasoning):
I don’t know what resources exist. Let me check.
Step 1: LLM calls listResources
```json
{
  "action": "listResources",
  "arguments": {}
}
```
Step 2: MCP server responds with available resources
```json
[
  {
    "uriTemplate": "healthcare://patients/{id}/labs/today",
    "description": "Fetch today’s lab test results for a patient by ID",
    "mimeType": "application/json"
  },
  {
    "uriTemplate": "healthcare://patients/{id}/medications/current",
    "description": "List current medications for a patient by ID",
    "mimeType": "application/json"
  }
]
```
🤖 LLM (reasoning):
Perfect, I see a resource for today’s lab results. Let me read it for patient 123.
Step 3: LLM calls readResource
```json
{
  "action": "readResource",
  "arguments": {
    "uri": "healthcare://patients/123/labs/today"
  }
}
```
Step 4: MCP server returns the actual data
```json
{
  "date": "2025-09-03",
  "patientId": "123",
  "tests": [
    { "name": "Hemoglobin", "value": 13.8, "unit": "g/dL" },
    { "name": "WBC", "value": 6500, "unit": "/µL" }
  ]
}
```
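On the server side, resolving that URI might look like the following sketch. The regex route and the in-memory data store are assumptions made up for this example; a real server would query its database and match against its registered URI templates:

```python
import re

# Sketch: a server-side handler for the labs resource.
# The route pattern and FAKE_LABS store are illustrative assumptions.
LABS_ROUTE = re.compile(r"^healthcare://patients/(?P<id>[^/]+)/labs/today$")

FAKE_LABS = {  # stand-in for a real database query
    "123": [
        {"name": "Hemoglobin", "value": 13.8, "unit": "g/dL"},
        {"name": "WBC", "value": 6500, "unit": "/µL"},
    ],
}


def read_resource(uri: str) -> dict:
    """Resolve a concrete resource URI to its current data."""
    match = LABS_ROUTE.match(uri)
    if not match:
        raise ValueError(f"No handler for URI: {uri}")
    patient_id = match.group("id")
    return {"patientId": patient_id, "tests": FAKE_LABS.get(patient_id, [])}
```

Because the handler runs at request time, every call reflects whatever is in the data store at that moment, which is exactly what makes the pattern fresh by construction.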
Step 5: LLM responds to the user
👩‍⚕️ Assistant:
Patient 123’s blood tests for today are all within normal ranges.
- Hemoglobin: 13.8 g/dL
- WBC: 6500 /µL
Why This Works Well
- Dynamic discovery: The LLM doesn’t need to know about resources in advance — it can explore when needed.
- Scalability: You don’t flood prompts with thousands of URIs.
- Security: Resource access still flows through the MCP client, so the LLM never touches data directly.
- Freshness: Every `readResource` call pulls the latest version.
When to Use This Approach
This pattern is ideal when:
- You have per-user resources (like healthcare data, personal files, or private reports).
- Your resources are dynamic or time-sensitive.
- You don’t want to blow up the system prompt with a giant catalog.
If instead your resource list is small and static, you can preload them into the prompt. But for most real-world apps, letting the LLM discover and fetch resources on demand is the safer, more flexible choice.
✅ By letting the LLM call `listResources` and `readResource` as tools, you give it the power to discover and fetch the data it needs — only when it actually needs it.
This keeps your system clean, scalable, and ready for real-world data access.