Agents


Robility flow’s Agent and MCP Tools components are critical for building agent flows. These components define the behavior and capabilities of AI agents in your flows.

How do agents work?

Agents extend Large Language Models (LLMs) by integrating tools, which are functions that provide additional context and enable autonomous task execution. These integrations make agents more specialized and powerful than standalone LLMs.

Whereas an LLM might generate acceptable, inert responses to general queries and tasks, an agent can leverage the integrated context and tools to provide more relevant responses and even take action. For example, you might create an agent that can access your company’s knowledge base, repositories, and other resources to help your team with tasks that require knowledge of your specific products, customers, and code.

Agents use LLMs as a reasoning engine to process input, determine which actions to take to address the query, and then generate a response. The response could be a typical text-based LLM response, or it could involve an action, like editing a file, running a script, or calling an external API.

In an agentic context, tools are functions that the agent can run to perform tasks or access external resources. A function is wrapped as a Tool object with a common interface that the agent understands. Agents become aware of tools through tool registration, which is when the agent is provided a list of available tools, typically at agent initialization. The Tool object's description tells the agent what the tool can do so that it can decide whether the tool is appropriate for a given request.
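
To make this concrete, here is a minimal Python sketch of the pattern, assuming a hypothetical Tool wrapper and agent class; Tool, SimpleAgent, and get_weather are invented for illustration and are not Robility flow classes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A common interface the agent understands: a name, a description
    the LLM can reason over, and the function that does the work."""
    name: str
    description: str
    func: Callable[..., str]

def get_weather(city: str) -> str:
    # Placeholder implementation; a real tool would call an external API.
    return f"It is sunny in {city}."

class SimpleAgent:
    def __init__(self, tools: list[Tool]):
        # Tool registration: the agent receives its tools at initialization
        # and indexes them by name for later lookup.
        self.tools = {tool.name: tool for tool in tools}

    def run(self, tool_name: str, **kwargs) -> str:
        # In a real agent, the LLM would pick the tool by matching the
        # request against each tool's description; here it is passed in.
        return self.tools[tool_name].func(**kwargs)

agent = SimpleAgent(tools=[Tool("get_weather", "Look up the current weather for a city", get_weather)])
print(agent.run("get_weather", city="Paris"))
```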

Examples of agent flows

For examples of flows using the Agent and MCP Tools components, see the following:

1. Robility flow QuickStart: Start with the Simple Agent template, modify its tools, and then learn how to use an agent flow in an application.

The Simple Agent template creates a basic agent flow with an Agent component that can use two other Robility flow components as tools. The LLM specified in the Agent component’s settings can use its own built-in functionality as well as the functionality provided by the connected tools when generating responses.

2. Use an agent as a tool: Create a multi-agent flow.
3. Use Robility flow as an MCP client and Use Robility flow as an MCP server: Use the Agent and MCP Tools components to implement the Model Context Protocol (MCP) in your flows.

Agent component

The Agent component is the primary agent actor in your agent flows. This component uses LLM integration to respond to input, such as a chat message or file upload.

The agent can use the tools already available in the base LLM as well as additional tools that you connect to the Agent component’s Tools port. You can connect any Robility flow component as a tool, including other Agent components and MCP servers through the MCP Tools component.

For more information about using this component, see Use Robility flow agents.

MCP Tools component

The MCP Tools component connects to a Model Context Protocol (MCP) server and exposes the MCP server’s functions as tools that Robility flow agents can use to respond to input.

In addition to publicly available MCP servers and your own custom-built MCP servers, you can connect Robility flow MCP servers, which allow your agent to use your Robility flow flows as tools. To do this, use the MCP Tools component’s SSE mode to connect to your Robility flow MCP server at the /api/v1/mcp/sse endpoint.
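
As an example of what connects to that endpoint, the sketch below uses the official MCP Python SDK (the mcp package) to list the tools exposed over SSE; the localhost URL and port are assumptions about a locally running server, not documented Robility flow defaults.

```python
# Minimal sketch using the MCP Python SDK (pip install mcp).
# Assumes a Robility flow server is running locally; adjust the URL as needed.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    url = "http://localhost:7860/api/v1/mcp/sse"  # assumed local server URL
    async with sse_client(url) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Each flow served by the MCP server appears as an MCP tool.
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```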

For more information about using this component and serving flows as MCP tools, see Use Robility flow as an MCP client and Use Robility flow as an MCP server.

Legacy Agent components

The following components are legacy components. You can still use them in your flows, but they are no longer maintained and may be removed in future releases.

Replace these components with the Agent component or other Robility flow components, depending on your use case.

a. CrewAI Hierarchical Task
b. CrewAI Sequential Task

CrewAI Agent

This component represents CrewAI agents, allowing for the creation of specialized AI agents with defined roles, goals, and capabilities within a crew. For more information, see the CrewAI agents documentation.

This component accepts the following parameters:

| Name | Display Name | Info |
|------|--------------|------|
| role | Role | Input parameter. The role of the agent. |
| goal | Goal | Input parameter. The objective of the agent. |
| backstory | Backstory | Input parameter. The backstory of the agent. |
| tools | Tools | Input parameter. The tools at the agent's disposal. |
| llm | Language Model | Input parameter. The language model that runs the agent. |
| memory | Memory | Input parameter. This determines whether the agent should have memory or not. |
| verbose | Verbose | Input parameter. This enables verbose output. |
| allow_delegation | Allow Delegation | Input parameter. This determines whether the agent is allowed to delegate tasks to other agents. |
| allow_code_execution | Allow Code Execution | Input parameter. This determines whether the agent is allowed to execute code. |
| kwargs | kwargs | Input parameter. Additional keyword arguments for the agent. |
| output | Agent | Output parameter. The constructed CrewAI Agent object. |
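
For orientation, the sketch below shows roughly how these parameters map onto the crewai package's Agent constructor; the role, goal, backstory, and model name are placeholder values, and the exact signature may vary between CrewAI versions.

```python
from crewai import Agent

# Minimal CrewAI agent built from the parameters in the table above.
researcher = Agent(
    role="Research Analyst",                            # role
    goal="Summarize recent findings on a topic",        # goal
    backstory="A careful analyst who cites sources.",   # backstory
    tools=[],                  # tools at the agent's disposal
    llm="gpt-4o-mini",         # language model that runs the agent (placeholder)
    memory=True,               # whether the agent keeps memory
    verbose=True,              # verbose output
    allow_delegation=False,    # whether the agent may delegate tasks
)
```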

CrewAI Hierarchical Crew

This component represents a group of agents and manages how they should collaborate and the tasks they should perform in a hierarchical structure. It allows for the creation of a crew with a manager overseeing task execution. For more information, see the CrewAI hierarchical crew documentation.

It accepts the following parameters:

| Name | Display Name | Info |
|------|--------------|------|
| agents | Agents | Input parameter. The list of Agent objects representing the crew members. |
| tasks | Tasks | Input parameter. The list of HierarchicalTask objects representing the tasks to be executed. |
| manager_llm | Manager LLM | Input parameter. The language model for the manager agent. |
| manager_agent | Manager Agent | Input parameter. The specific agent to act as the manager. |
| verbose | Verbose | Input parameter. This enables verbose output for detailed logging. |
| memory | Memory | Input parameter. The memory configuration for the crew. |
| use_cache | Use Cache | Input parameter. This enables caching of results. |
| max_rpm | Max RPM | Input parameter. This sets the maximum requests per minute. |
| share_crew | Share Crew | Input parameter. This determines if the crew information is shared among agents. |
| function_calling_llm | Function Calling LLM | Input parameter. The language model for function calling. |
| crew | Crew | Output parameter. The constructed Crew object with hierarchical task execution. |
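
In plain crewai terms, this corresponds roughly to a Crew created with Process.hierarchical and a manager LLM or manager agent. A minimal sketch, reusing the researcher agent from the previous example:

```python
from crewai import Crew, Process, Task

report_task = Task(
    description="Write a short research summary.",
    expected_output="A one-paragraph summary.",
    agent=researcher,  # agent defined in the previous sketch
)

crew = Crew(
    agents=[researcher],           # crew members
    tasks=[report_task],           # tasks to be executed
    process=Process.hierarchical,  # a manager oversees task execution
    manager_llm="gpt-4o-mini",     # language model for the manager agent (placeholder)
    verbose=True,
    memory=True,
)
result = crew.kickoff()
```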

CrewAI Sequential Crew

This component represents a group of agents whose tasks are executed sequentially. It allows for the creation of a crew that performs tasks in a specific order. For more information, see the CrewAI sequential crew documentation.

It accepts the following parameters:

| Name | Display Name | Info |
|------|--------------|------|
| tasks | Tasks | Input parameter. The list of SequentialTask objects representing the tasks to be executed. |
| verbose | Verbose | Input parameter. This enables verbose output for detailed logging. |
| memory | Memory | Input parameter. The memory configuration for the crew. |
| use_cache | Use Cache | Input parameter. This enables caching of results. |
| max_rpm | Max RPM | Input parameter. This sets the maximum requests per minute. |
| share_crew | Share Crew | Input parameter. This determines if the crew information is shared among agents. |
| function_calling_llm | Function Calling LLM | Input parameter. The language model for function calling. |
| crew | Crew | Output parameter. The constructed Crew object with sequential task execution. |
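
The sequential variant differs mainly in the process setting: there is no manager, and tasks run in the order they are listed. A minimal sketch, reusing the agent and task objects from the previous examples:

```python
from crewai import Crew, Process

sequential_crew = Crew(
    agents=[researcher],         # implied by the agents assigned to the tasks
    tasks=[report_task],         # executed in the listed order
    process=Process.sequential,  # no manager; tasks run one after another
    verbose=True,
)
result = sequential_crew.kickoff()
```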

CrewAI Sequential Task Agent

This component creates a CrewAI Task and its associated agent, allowing for the definition of sequential tasks with specific agent roles and capabilities. For more information, see the CrewAI sequential agents documentation.

It accepts the following parameters:

| Name | Display Name | Info |
|------|--------------|------|
| role | Role | Input parameter. The role of the agent. |
| goal | Goal | Input parameter. The objective of the agent. |
| backstory | Backstory | Input parameter. The backstory of the agent. |
| tools | Tools | Input parameter. The tools at the agent's disposal. |
| llm | Language Model | Input parameter. The language model that runs the agent. |
| memory | Memory | Input parameter. This determines whether the agent should have memory or not. |
| verbose | Verbose | Input parameter. This enables verbose output. |
| allow_delegation | Allow Delegation | Input parameter. This determines whether the agent is allowed to delegate tasks to other agents. |
| allow_code_execution | Allow Code Execution | Input parameter. This determines whether the agent is allowed to execute code. |
| agent_kwargs | Agent kwargs | Input parameter. The additional kwargs for the agent. |
| task_description | Task Description | Input parameter. The descriptive text detailing the task's purpose and execution. |
| expected_output | Expected Task Output | Input parameter. The clear definition of the expected task outcome. |
| async_execution | Async Execution | Input parameter. Boolean flag indicating asynchronous task execution. |
| previous_task | Previous Task | Input parameter. The previous task in the sequence for chaining. |
| task_output | Sequential Task | Output parameter. The list of SequentialTask objects representing the created tasks. |
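
Roughly speaking, this component pairs a CrewAI Agent with a Task; chaining through previous_task corresponds approximately to passing earlier tasks as context in crewai. A hedged sketch, reusing researcher and report_task from the earlier examples:

```python
from crewai import Task

followup_task = Task(
    description="Turn the summary into three bullet points.",  # task_description
    expected_output="Three concise bullet points.",            # expected_output
    agent=researcher,        # the agent built from role/goal/backstory above
    async_execution=False,   # async_execution flag
    context=[report_task],   # rough analogue of previous_task chaining
)
```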