Tool learning aims to augment large language models (LLMs) with diverse tools, enabling them to act as agents that solve practical tasks. Due to the limited context length of tool-using LLMs, adopting information retrieval (IR) models to select useful tools from large toolsets is a critical initial step. However, the performance of IR models on tool retrieval tasks remains underexplored and unclear. Most tool-use benchmarks simplify this step by manually pre-annotating a small set of relevant tools for each task, which is far from real-world scenarios. In this paper, we propose ToolRet, a heterogeneous tool retrieval benchmark comprising 7.6k diverse retrieval tasks and a corpus of 43k tools, collected from existing datasets. We benchmark six types of models on ToolRet. Surprisingly, even models with strong performance on conventional IR benchmarks exhibit poor performance on ToolRet. This low retrieval quality degrades the task pass rate of tool-use LLMs. As a further step, we contribute a large-scale training dataset with over 200k instances, which substantially improves the tool retrieval ability of IR models.
In this work, we focus on two research questions: (i) Are existing information retrieval models good at tool retrieval? and (ii) To what extent does tool retrieval quality affect the downstream task pass rate?
To comprehensively evaluate IR models across various tool retrieval scenarios, we introduce ToolRet, the first large-scale tool retrieval benchmark, which comprises 7.6k diverse retrieval tasks and a corpus of 43k tools collected from existing dataset resources. We explain the three key processes in building ToolRet.
Statistics of ToolRet.
We evaluate a wide range of retrieval models on ToolRet. Our evaluation also supports two main settings: `w/ inst.`, where the retrieval query is paired with an instruction, and `w/o inst.`, where only the raw query is used.
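The evaluation described above can be sketched as a standard IR loop that scores each model by NDCG@k over the benchmark tasks. This is a minimal illustration, not the paper's actual evaluation code: the `retriever` callable, the task dictionary fields (`query`, `instruction`, `tools`), and binary relevance labels are all assumptions for the sketch.

```python
import math


def ndcg_at_k(retrieved, relevant, k=10):
    """NDCG@k for one task, assuming binary relevance labels.

    retrieved: ranked list of tool ids returned by a retriever.
    relevant:  set of gold tool ids for the task.
    """
    dcg = sum(1.0 / math.log2(i + 2)
              for i, t in enumerate(retrieved[:k]) if t in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0


def evaluate(retriever, tasks, k=10, with_inst=True):
    """Average NDCG@k over tasks for one retrieval model.

    In the `w/ inst.` setting the instruction is prepended to the query;
    in `w/o inst.` only the raw query is used (field names are hypothetical).
    """
    scores = []
    for task in tasks:
        if with_inst and task.get("instruction"):
            query = task["instruction"] + " " + task["query"]
        else:
            query = task["query"]
        ranked = retriever(query)  # ranked list of tool ids from the model
        scores.append(ndcg_at_k(ranked, set(task["tools"]), k))
    return sum(scores) / len(scores)
```

A retriever that ranks the single gold tool first scores 1.0; ranking it second drops the per-task score to 1/log2(3) ≈ 0.63, which is how retrieval quality differences surface in the benchmark metric.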
We also quantitatively evaluate and analyze the impact of retrieval quality on the downstream task-solving pass rate. We conduct experiments on ToolBench where the tool-use LLM is paired with a toolset either retrieved by various retrieval models or pre-annotated by the official datasets.
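The downstream experiment above reduces to comparing pass rates under different toolset providers. A minimal sketch, assuming a hypothetical `solve` callable standing in for the tool-use LLM and a `toolset_for` callable standing in for either a retrieval model or the pre-annotated gold toolset:

```python
def pass_rate(solve, tasks, toolset_for):
    """Fraction of tasks the tool-use LLM solves given a candidate toolset.

    solve(query, toolset) -> bool is a stand-in for running the agent;
    toolset_for(task) supplies either retrieved or pre-annotated tools.
    Both names are hypothetical placeholders for this sketch.
    """
    passed = sum(1 for task in tasks if solve(task["query"], toolset_for(task)))
    return passed / len(tasks)


def compare(solve, tasks, providers):
    """Pass rate per toolset provider, e.g. {'gold': ..., 'bm25': ...}."""
    return {name: pass_rate(solve, tasks, fn) for name, fn in providers.items()}
```

Holding the LLM and tasks fixed while swapping only the toolset provider isolates retrieval quality as the variable driving pass-rate differences.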