Why Don’t Agents Use Tools When They Should?
First, we need to correct a fundamental misconception.
In the past, software was built to perform deterministic tasks.
get_current_time("NYC"): no matter how many times you call it, it always reliably returns New York’s local time.
AI, however, comes with inherent uncertainty. When you ask it “What day is tomorrow?”, it might call a time tool and give you the correct answer, or rely on its general knowledge and get it wrong, or even ask you what time zone you’re in first.
Whether an agent understands a tool, knows how to call it, and uses it effectively is inherently unpredictable.
Building agents with the mindset of traditional software APIs is the root of the problem. What we need are tools tailored specifically for agents and business needs.
Approach 1: From API Porter to Workflow Designer
A common mistake is handing over raw API endpoints as tools for agents.
For example, a contacts list: if you provide a list_contacts tool that dumps the entire address book, then to find “Jane” the agent may have to print thousands of tokens of contacts and scan them one by one. That is slow, token-expensive, and still error-prone.
A better approach is to think like a human: provide a direct search tool, such as search_contacts. Don’t make the agent do grunt work—make the tool serve its “intent.”
Forget low-level APIs and think in terms of high-level workflows. Combine multiple steps (like checking a calendar, finding free time, and sending an invite) into a single, more powerful tool (e.g., schedule_event). That’s the right path forward.
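The contrast can be sketched in Python. The function and field names below (get_calendar, send_invite, schedule_event) are hypothetical, stubbed stand-ins for real calendar APIs; the point is that the agent calls one intent-level tool instead of chaining three low-level ones.

```python
from datetime import datetime, timedelta

# Hypothetical low-level calls the agent would otherwise have to chain itself.
def get_calendar(user: str) -> list[tuple[datetime, datetime]]:
    """Return the user's busy slots (stubbed for illustration)."""
    return [(datetime(2024, 5, 6, 9), datetime(2024, 5, 6, 12))]

def send_invite(user: str, start: datetime, title: str) -> str:
    """Send a calendar invite (stubbed)."""
    return f"Invite '{title}' sent to {user} for {start:%Y-%m-%d %H:%M}"

# One high-level tool: the agent expresses intent, the tool does the steps.
def schedule_event(user: str, title: str, duration_hours: int = 1) -> str:
    busy = get_calendar(user)
    # Naive free-slot search: start one hour after the last busy block ends.
    start = max(end for _, end in busy) + timedelta(hours=1)
    return send_invite(user, start, title)

print(schedule_event("jane@example.com", "Design review"))
```

With schedule_event as the tool boundary, the agent never sees the intermediate calendar data, so there is nothing for it to misread or mis-chain.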
Approach 2: A Good Name Is Half the Battle
With dozens or even hundreds of tools, agents face choice overload. Vague names like a generic search_order confuse the agent: is this really the right tool for searching Shopify orders?
Names should make tool boundaries explicit. For instance, shopify_order_search and amazon_order_search. Clear naming greatly improves the odds the agent will pick correctly at the right time.
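As a minimal sketch (the registry structure and descriptions here are illustrative, not any particular framework’s schema), compare how the two explicit names read next to each other:

```python
# Hypothetical tool registry: explicit, namespaced names make it
# obvious to the agent which backend each tool searches.
TOOLS = {
    "shopify_order_search": {
        "description": "Search orders in the Shopify store by keyword or order number.",
    },
    "amazon_order_search": {
        "description": "Search orders in the Amazon seller account by keyword or order number.",
    },
}

# A vague name like "search_order" would force the agent to guess which
# store it covers; the prefixed names above remove that ambiguity.
for name, spec in TOOLS.items():
    print(f"{name}: {spec['description']}")
```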
Approach 3: Clear Output Results
Tool outputs aren’t meant for humans or for other programs; they’re for the LLM, which works best with natural language rather than opaque IDs.
The model doesn’t care about your error codes—they don’t help it answer the user.
Replace things like user_uuid: "a1b2c3d4-..." with user_name: "Jane Doe". Cutting unnecessary output parameters will sharply reduce hallucinations and error rates.
Of course, if an ID is genuinely needed for the next tool call or to pass back to the user, then include it in the output.
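One way to apply this is a thin translation layer between the raw backend response and what the model sees. Everything below (the USERS table, field names, status codes) is invented for illustration; the pattern is resolving IDs into names while keeping only the IDs a follow-up call genuinely needs:

```python
# Hypothetical user store keyed by opaque UUIDs.
USERS = {"a1b2c3d4": {"name": "Jane Doe", "email": "jane@example.com"}}

STATUS_NAMES = {3: "shipped"}

def get_order_raw(order_id: str) -> dict:
    """What the backend API returns: IDs and codes everywhere (stubbed)."""
    return {"order_id": "ord_991", "user_uuid": "a1b2c3d4", "status_code": 3}

def get_order_for_agent(order_id: str) -> dict:
    """Translate raw fields into language the model can actually use."""
    raw = get_order_raw(order_id)
    return {
        "order_id": raw["order_id"],  # kept: needed for follow-up tool calls
        "customer": USERS[raw["user_uuid"]]["name"],
        "status": STATUS_NAMES[raw["status_code"]],
    }

print(get_order_for_agent("ord_991"))
# {'order_id': 'ord_991', 'customer': 'Jane Doe', 'status': 'shipped'}
```

The model gets “Jane Doe” and “shipped” instead of a UUID and a status code, but still has order_id to pass into the next call.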
Approach 4: The Most Important—Tool Descriptions
This is the most effective yet most overlooked point: a tool’s description and parameter notes are, in essence, the most important prompts you’re giving the agent.
Explain them as if you’re onboarding a new colleague: write down all background, terminology, scenarios, and caveats clearly. Avoid vague parameter names like user—use explicit ones like user_id or user_name.
You can also reinforce in the agent’s settings when and how each tool should be used to solve user problems.
In short, make sure your agent knows exactly when to use a tool and what outcomes to expect from it.
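A description written in that onboarding spirit might look like the sketch below. The tool, its wording, and the schema layout are all hypothetical (real frameworks use JSON Schema or similar); what matters is that the description states the scenario, the limits, and the fallback, and the parameter name is explicit:

```python
# Hypothetical tool definition: the description reads like onboarding
# notes (background, when to use it, limits, fallback), and the
# parameter is named user_name, not a vague "user".
search_contacts_tool = {
    "name": "search_contacts",
    "description": (
        "Search the company address book by a person's name. "
        "Use this whenever the user refers to a colleague by name "
        "(e.g. 'email Jane') and you need their contact details. "
        "Returns at most 5 matches; if none match, ask the user to "
        "spell the name. Do NOT use this for external customers."
    ),
    "parameters": {
        "user_name": {
            "type": "string",
            "description": "Full or partial name, e.g. 'Jane' or 'Jane Doe'.",
        }
    },
}
```

Notice how much of the agent’s decision-making the description does for it: when to call the tool, what to expect back, and what to do when the search comes up empty.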
Summary
Building tools for agents is like giving a full onboarding and training to an intern—and shifting your mindset from execution to planning.
Effective tools should be clearly described, have defined boundaries, and be designed to “communicate” well with the agent.