2. How do LLM Agents use tools and APIs?

LLM Agents use tools and APIs by generating structured calls (typically in JSON format) that represent external actions — such as querying a database, performing a calculation, calling a weather API, or running code. These tools extend the agent’s core abilities beyond pure text generation and allow it to interact with the outside world in meaningful ways.

The agent decides which tool to use and what input to provide based on the task and context. Most modern frameworks (e.g., LangChain, OpenAI’s tool calling, CrewAI) expose functions through predefined schemas, which the LLM selects from dynamically.

🧠 Example: Tool Use in Action

Imagine the user asks: “What’s the temperature in Tokyo right now?”

  • The agent selects the getWeather function
  • It generates a tool call like:
{
  "tool": "getWeather",
  "input": { "location": "Tokyo" }
}

The agent’s runtime calls the actual API, retrieves the data, and passes it back to the LLM for final formatting into a natural language response: “It’s currently 26°C and partly cloudy in Tokyo.”
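The round trip above can be sketched in a few lines. This is a minimal illustration, not a real framework: the `getWeather` handler and its hard-coded return value are stand-ins for a live weather API.

```python
import json

# Hypothetical weather lookup -- in a real agent this would hit a live API.
def get_weather(location: str) -> dict:
    return {"location": location, "temp_c": 26, "conditions": "partly cloudy"}

# Registry mapping tool names (as the LLM emits them) to handlers.
TOOLS = {"getWeather": lambda inp: get_weather(inp["location"])}

def execute_tool_call(raw_call: str) -> dict:
    """Parse the LLM's structured tool call and dispatch it."""
    call = json.loads(raw_call)          # the JSON the model generated
    handler = TOOLS[call["tool"]]        # pick the registered handler
    return handler(call["input"])        # run the actual external action

result = execute_tool_call('{"tool": "getWeather", "input": {"location": "Tokyo"}}')
# `result` is then handed back to the LLM, which phrases the final answer.
```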

🧩 Tool Architecture

  • Tool Registry: A list of tools available to the agent, often described in JSON schemas.
  • Function Handler: A middleware that receives tool requests and calls the appropriate API, script, or function.
  • Tool Feedback Loop: Results are passed back into the LLM context for follow-up reasoning or explanation.
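The three pieces fit together roughly like this. The sketch below is illustrative, assuming a single `getWeather` tool: a registry entry described by a JSON schema, a handler that routes requests, and a feedback step that appends the result to the conversation context.

```python
# Tool Registry: JSON-schema descriptions the LLM can choose from.
TOOL_REGISTRY = [
    {
        "name": "getWeather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
]

def handle_tool_request(name, arguments, handlers):
    """Function Handler: route a tool request to its implementation."""
    if name not in handlers:
        raise ValueError(f"Unknown tool: {name}")
    return handlers[name](**arguments)

# Tool Feedback Loop: the result re-enters the LLM context as a tool message.
context = [{"role": "user", "content": "What's the weather in Tokyo?"}]
result = handle_tool_request("getWeather", {"location": "Tokyo"},
                             {"getWeather": lambda location: {"temp_c": 26}})
context.append({"role": "tool", "name": "getWeather", "content": str(result)})
```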

🛠️ Common Types of Tools Used by LLM Agents

  • Search Tools: SERP APIs, web scraping tools
  • Math Tools: Code execution, equation solvers, finance calculators
  • File Tools: Read from/write to PDFs, CSVs, DOCs, or internal databases
  • Web/HTTP Tools: Custom API calls (e.g., /getUserData, /submitReport)
  • Code Interpreters: Python execution sandboxes (e.g., GPT-4 Code Interpreter)
  • Internal Tools: CRM lookups, Slack integrations, ticket search, custom data access layers

📘 Tool Use Protocols

  • OpenAI Function Calling: Define functions with a JSON schema; the model chooses a tool and fills in its parameters.
  • LangChain Agents: Wrap tools with Python/JS interfaces; the LLM selects tools through reasoning traces (e.g., ReAct-style plans).
  • MCP (Model Context Protocol): An open standard from Anthropic for exposing tools and data sources to models through a common protocol.
  • CrewAI / AutoGen: Tools are treated as agent actions that flow through conversation-like interfaces.
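As a concrete example of the first protocol, here is the shape of a tool definition in OpenAI's function-calling format. The schema structure is real; the API request itself is only sketched in comments, and the `getWeather` tool is illustrative.

```python
# A tool definition in the format OpenAI's Chat Completions API accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Get the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    },
}

# Sketch of the request (not executed here):
# response = client.chat.completions.create(
#     model="gpt-4o", messages=messages, tools=[weather_tool])
# The model replies with tool_calls containing the chosen function name and
# JSON-encoded arguments, which the runtime parses and executes.
```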

🔄 Tool Chaining

Some agents chain tools dynamically — e.g., get data from a file, run analysis, then email the result. This requires internal memory of previous steps or a task planner module.
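A chained workflow like the file-analyze-email example can be sketched as a fixed plan with a scratchpad carrying results between steps. All tool bodies here are stubs (no real file I/O or email), and the plan is hard-coded where a real agent would generate it.

```python
# Stub tools -- stand-ins for real file access, analysis, and email sending.
def read_file(path):
    return "q1,q2\n100,140"

def analyze(csv_text):
    row = csv_text.splitlines()[1].split(",")
    return {"growth": int(row[1]) - int(row[0])}

def send_email(to, body):
    return f"sent to {to}: {body}"

# A task planner would produce a sequence like this dynamically.
plan = [("read_file", {"path": "sales.csv"}),
        ("analyze", {}),
        ("send_email", {"to": "team@example.com"})]

scratchpad = {}  # internal memory of previous steps
for step, args in plan:
    if step == "read_file":
        scratchpad["csv"] = read_file(**args)
    elif step == "analyze":
        scratchpad["summary"] = analyze(scratchpad["csv"])
    elif step == "send_email":
        scratchpad["status"] = send_email(args["to"], str(scratchpad["summary"]))
```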

🔐 Security Considerations

  • Scoped Access: Agents should only access tools they’re allowed to use (e.g., no file deletion).
  • Validation: Inputs from the LLM should be checked for correctness and safety before any API is executed.
  • Rate Limiting: Prevent tool misuse (e.g., infinite API calls in a planning loop).
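All three guardrails can be enforced in one wrapper around tool execution. This is a minimal sketch; the allowlist, limits, and tool names are illustrative assumptions, not a production policy.

```python
import time

ALLOWED_TOOLS = {"getWeather"}   # scoped access: e.g., no file-deletion tool
MAX_CALLS_PER_MINUTE = 10        # illustrative rate limit

_call_log = []  # timestamps of recent tool calls

def guarded_call(tool_name, arguments, handlers):
    # Scoped access: reject tools outside the allowlist.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool not permitted: {tool_name}")
    # Validation: require a JSON-object-shaped argument payload.
    if not isinstance(arguments, dict):
        raise ValueError("Arguments must be a JSON object")
    # Rate limiting: refuse if too many calls in the last minute.
    now = time.monotonic()
    if len([t for t in _call_log if now - t < 60]) >= MAX_CALLS_PER_MINUTE:
        raise RuntimeError("Rate limit exceeded")
    _call_log.append(now)
    return handlers[tool_name](**arguments)

result = guarded_call("getWeather", {"location": "Tokyo"},
                      {"getWeather": lambda location: f"26°C in {location}"})
```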

🚀 Summary

LLM Agents extend their capabilities by integrating with external tools and APIs. Whether it’s querying live data, running code, or completing a workflow, tool use transforms the LLM from a passive generator into an active, goal-oriented agent capable of interacting with real-world systems.