Legacy Components
Legacy components are no longer supported and may be removed in a future release. You can continue to use them in existing flows, but it is recommended that you replace them with supported components as soon as possible. Suggested replacements are included in the Legacy banner on components in your flows.
The component may have been deprecated in favor of a completely new component, a similar component, or a new version of the same component in a different category.
Legacy Data components
The following Data components are in legacy status:
Load CSV
Load JSON
Replace these components with the Read File component, which supports loading CSV and JSON files, as well as many other file types.
Legacy Helper components
The following Helper components are in legacy status:
Message Store: Replaced by the Message History component
Create List: Replace with Processing components
ID Generator: Replace with a component that executes arbitrary code to generate an ID, or embed an ID generator script in your application code.
Output Parser: Replace with the Structured Output component and Parser component. The components you need depend on the data types and complexity of the parsing task.
The Output Parser component transformed the output of a language model into comma-separated values (CSV) format, such as ["item1", "item2", "item3"], using CommaSeparatedListOutputParser. The Structured Output component is a good alternative for this component because it also formats LLM responses with support for custom schemas and more complex parsing.
Parsing components only provide formatting instructions and parsing functionality. They don’t include prompts. You must connect parsers to Prompt Template components to create prompts that LLMs can use.
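To illustrate the comma-separated output described above, here is a minimal sketch of that style of parsing in plain Python. This is an illustrative example, not the component's actual implementation:

```python
def parse_comma_separated(llm_output: str) -> list[str]:
    """Split a comma-separated LLM response into a list of trimmed items."""
    return [item.strip() for item in llm_output.split(",") if item.strip()]

# A typical raw LLM response and its parsed form.
items = parse_comma_separated("item1, item2, item3")
# items == ["item1", "item2", "item3"]
```

The Structured Output component goes beyond this by letting you define a custom schema rather than assuming a flat comma-separated list.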
Legacy Logic components
Suggested replacements are also given in release notes and Robilityflow documentation whenever possible. If you are not sure how to replace a legacy component, search for components by provider, service, or component name. The component may have been deprecated in favor of a completely new component, a similar component, or a newer version of the same component in a different category.
If there is no obvious replacement, consider whether another component can be adapted to your use case. For example, many Core components provide generic functionality that can support multiple providers and use cases, such as the API Request component.
If neither of these options is viable, you can use the legacy component’s code to create your own custom component or start a discussion about the legacy component.
To discourage use of legacy components in new flows, these components are hidden by default. In the visual editor, you can click Component settings to toggle the Legacy filter.
The following Logic components are in legacy status:
Condition
As an alternative to this legacy component, see the If-Else component.
The Condition component routes JSON objects based on a condition applied to a specified key, including Boolean validation. It supports true_output and false_output for routing the results based on the condition evaluation.
This component is useful in workflows that require conditional routing of complex data structures, enabling dynamic decision-making based on data content.
It can process either a single JSON object or a list of JSON objects. The following actions occur when processing a list of JSON objects:
Each object in the list is evaluated individually.
Objects meeting the condition go to true_output.
Objects not meeting the condition go to false_output.
If all objects go to one output, the other output is empty.
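The list-processing behavior above can be sketched in plain Python. This is a simplified illustration of the evaluation loop, not the component's source; the `route` function and its `predicate` parameter are hypothetical names:

```python
def route(objects: list[dict], key: str, predicate) -> tuple[list, list]:
    """Split a list of JSON-like dicts into true_output and false_output."""
    true_output, false_output = [], []
    for obj in objects:
        # Each object is evaluated individually against the condition.
        (true_output if predicate(obj.get(key)) else false_output).append(obj)
    return true_output, false_output

data = [{"status": "ok"}, {"status": "error"}, {"status": "ok"}]
matched, unmatched = route(data, "status", lambda v: v == "ok")
# matched holds two objects, unmatched holds one
```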
The Condition component accepts the following parameters:
| Name | Description |
|---|---|
| data_input | The Data object or list of Data objects to be processed. Supports both single items and collections. |
| key_name | The key within the Data object whose value will be evaluated. |
| operator | The comparison operator to apply. Supported options include: equals, not equals, contains, starts with, ends with, and boolean validator. Default is equals. |
| compare_value | The value used for comparison. This field is not required when using the boolean validator operator. |
The operator options have the following behaviors:
1. equals: Exact match comparison between the key’s value and compare_value.
2. not equals: Inverse of exact match.
3. contains: Checks if compare_value is found within the key’s value.
4. starts with: Checks if the key’s value begins with compare_value.
5. ends with: Checks if the key’s value ends with compare_value.
6. boolean validator: Treats the key’s value as a Boolean. The following values are considered true:
Boolean true
Strings: true, 1, yes, y, on (case-insensitive)
Any other value is converted using Python’s bool() function
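The operator semantics listed above can be sketched as follows. This is an illustrative approximation, assuming string coercion before comparison; the `evaluate` function is a hypothetical name, not the component's API:

```python
TRUTHY_STRINGS = {"true", "1", "yes", "y", "on"}

def evaluate(op: str, value, compare_value=None) -> bool:
    """Apply one of the Condition component's comparison operators."""
    if op == "equals":
        return str(value) == str(compare_value)
    if op == "not equals":
        return str(value) != str(compare_value)
    if op == "contains":
        return str(compare_value) in str(value)
    if op == "starts with":
        return str(value).startswith(str(compare_value))
    if op == "ends with":
        return str(value).endswith(str(compare_value))
    if op == "boolean validator":
        if isinstance(value, bool):
            return value
        if isinstance(value, str):
            # Case-insensitive check against the truthy string set.
            return value.lower() in TRUTHY_STRINGS
        return bool(value)  # fall back to Python's bool() conversion
    raise ValueError(f"Unknown operator: {op}")
```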
Pass
The Pass component forwards the input message without modification.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| input_message | Input Message | The message that is forwarded for processing. |
| ignored_message | Ignored Message | A secondary message that is intentionally ignored and used as a workaround to maintain continuity. |
| output_message | Output Message | Returns result message after forwarding the input message. |
Flow As Tool
This component constructed a tool from a function that ran a loaded flow.
It was deprecated in Robilityflow and replaced by the Run Flow component.
Sub Flow
This component integrated entire flows as components within a larger workflow. It dynamically generated inputs based on the selected flow and executed the flow with provided parameters.
It was deprecated in Robilityflow and replaced by the Run Flow component.
Legacy Processing components
The following Processing components are in legacy status:
Alter Metadata
Replace this legacy component with the JSON Operations component.
This component modifies metadata of input objects. It can add new metadata, update existing metadata, and remove specified metadata fields. The component works with both Message and JSON objects, and can also create a new JSON object from user-provided text.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| input_value | Input | Objects that will receive additional metadata. |
| text_in | User Text | Text input value stored in the text field of the JSON object; empty entries are ignored. |
| metadata | Metadata | Metadata key-value pairs to be added to each object. |
| remove_fields | Fields to Remove | Specifies metadata fields that should be removed from the objects. |
| data | JSON | Array of input objects after metadata has been applied. |
Combine Data
Replace this legacy component with the Data Operations component or the Loop component.
This component combines multiple data sources into a single unified JSON object.
The component iterates through a list of JSON objects, merging them into a single JSON object (merged_data). If the input list is empty, it returns an empty data object. If there’s only one input data object, it returns that object unchanged.
The merging process uses the addition operator to combine data objects.
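The merging behavior described above can be approximated with a left-to-right dict merge. This is an illustrative sketch, with the component's addition operator on data objects stood in for by merging plain dicts:

```python
from functools import reduce

def combine(data_objects: list[dict]) -> dict:
    """Merge a list of JSON-like dicts left to right into one object."""
    if not data_objects:
        return {}                # empty input -> empty data object
    if len(data_objects) == 1:
        return data_objects[0]   # single input is returned unchanged
    # Later objects overwrite earlier keys, mirroring left-to-right merging.
    return reduce(lambda acc, obj: {**acc, **obj}, data_objects)

merged = combine([{"a": 1}, {"b": 2}, {"a": 3}])
# merged == {"a": 3, "b": 2}
```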
Combine Text
This component concatenates two text inputs into a single text chunk using a specified delimiter, outputting a Message object with the combined text.
Create Data
This component dynamically creates a Data object with a specified number of fields and a text key.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| number_of_fields | Number of Fields | Defines how many fields will be added to the record. |
| text_key | Text Key | Specifies the key used to identify the field containing the text content. |
| text_key_validator | Text Key Validator | Validates whether the provided Text Key exists in the input JSON when enabled. |
Data to DataFrame/Data to Message
These components converted one or more JSON objects into a Table or Message object.
For the Data to DataFrame component, each JSON object corresponds to one row in the resulting Table. Fields from the .data attribute become columns, and the .text field (if present) is placed in a text column.
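The row-to-column conversion can be sketched with plain dicts standing in for Data objects and a column-wise dict standing in for the resulting Table. This is an illustration of the mapping, not the component's implementation:

```python
def data_to_table(objects: list[dict]) -> dict[str, list]:
    """Turn a list of JSON-like objects into column-wise lists, one row each."""
    columns: dict[str, list] = {}
    keys = {k for obj in objects for k in obj}   # union of all field names
    for key in sorted(keys):
        # Missing fields become None in that row's cell.
        columns[key] = [obj.get(key) for obj in objects]
    return columns

rows = [{"name": "a", "text": "hello"}, {"name": "b", "text": "world"}]
table = data_to_table(rows)
# table == {"name": ["a", "b"], "text": ["hello", "world"]}
```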
Extract Key
This component extracts a specific key from a data object and returns the value associated with that key.
Filter Data
This component filters a data object based on a list of keys (filter_criteria), returning a new data object (filtered_data) that contains only the key-value pairs that match the filter criteria.
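The key-based filtering described above amounts to a dict comprehension. A minimal sketch, using the parameter names from the description:

```python
def filter_data(data: dict, filter_criteria: list[str]) -> dict:
    """Return a new dict containing only the keys listed in filter_criteria."""
    return {k: v for k, v in data.items() if k in filter_criteria}

filtered_data = filter_data({"id": 7, "name": "x", "debug": True}, ["id", "name"])
# filtered_data == {"id": 7, "name": "x"}
```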
Filter Values
The Filter Values component filters a list of data items based on a specified key, filter value, and comparison operator.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| input_data | Input data | A collection of items provided as input for filtering. |
| filter_key | Filter Key | The specific key used to evaluate and apply the filter condition. |
| filter_value | Filter Value | The value used to compare against the selected key. |
| operator | Comparison Operator | The operator that defines how the comparison between values is performed. |
| filtered_data | Filtered data | The resulting set of items that match the defined filter conditions. |
JSON Cleaner
This component cleans JSON strings to ensure they are fully compliant with the JSON specification.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| json_str | JSON String | A JSON string to be cleaned, typically raw or malformed, often generated by language models or external sources that may not fully comply with JSON standards. |
| remove_control_chars | Remove Control Characters | When enabled, removes control characters (ASCII 0–31 and 127) to eliminate invisible or problematic characters that may break JSON parsing. |
| normalize_unicode | Normalize Unicode | When enabled, normalizes all Unicode characters to their canonical composition form (NFC) to ensure consistent encoding across systems. |
| validate_json | Validate JSON | When enabled, validates the JSON structure before repair by attempting to parse it; raises an error if the JSON is structurally invalid. |
| output | Cleaned JSON String | The final output JSON string after cleaning, repairing, and validation, fully compliant with JSON specifications. |
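The three cleaning steps in the table can be sketched with the standard library. This is an illustrative approximation, not the component's source; note that stripping control characters this way also removes newlines between JSON tokens:

```python
import json
import unicodedata

def clean_json(json_str: str, remove_control_chars: bool = True,
               normalize_unicode: bool = True,
               validate_json: bool = True) -> str:
    """Clean a raw JSON string using the steps described above."""
    if remove_control_chars:
        # Drop ASCII control characters (0-31 and 127) that break parsers.
        json_str = "".join(ch for ch in json_str
                           if ord(ch) > 31 and ord(ch) != 127)
    if normalize_unicode:
        # Normalize to canonical composition form (NFC).
        json_str = unicodedata.normalize("NFC", json_str)
    if validate_json:
        json.loads(json_str)   # raises an error if structurally invalid
    return json_str

cleaned = clean_json('{"name": "caf\u00e9",\n "ok": true}')
```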
Message to Data
This component converts Message objects into JSON objects within the flow.
Parse DataFrame
Replace this legacy component with the DataFrame Operations component or Parser component.
This component converts DataFrame objects into plain text using templates.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| df | Table | The DataFrame that will be converted into formatted text rows. |
| template | Template | A formatting template used to define how each row is rendered, using {column_name} placeholders. |
| sep | Separator | The string used to join individual formatted rows in the final output. |
| text | Text | The final combined text containing all formatted rows. |
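The template-and-separator behavior in the table can be sketched as follows, with a plain list of dicts standing in for the DataFrame's rows. An illustrative example, not the component's implementation:

```python
def rows_to_text(rows: list[dict], template: str, sep: str = "\n") -> str:
    """Render each row with {column_name} placeholders and join with sep."""
    return sep.join(template.format(**row) for row in rows)

text = rows_to_text(
    [{"name": "Ada", "age": 36}, {"name": "Alan", "age": 41}],
    template="{name} is {age}",
)
# text == "Ada is 36\nAlan is 41"
```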
Parse JSON
This component converts and extracts fields from Message and Data objects using JQ queries, then returns filtered_data, which is a list of Data objects.
Regex Extractor
This component extracts patterns in text using regular expressions. It can be used to find and extract specific patterns or information in text.
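Pattern extraction of this kind maps directly onto Python's `re` module. A minimal sketch; the email pattern is only an example:

```python
import re

def extract_pattern(text: str, pattern: str) -> list[str]:
    """Return every non-overlapping match of pattern found in text."""
    return re.findall(pattern, text)

emails = extract_pattern("contact a@x.io or b@y.dev",
                         r"[\w.+-]+@[\w-]+\.[\w.]+")
# emails == ["a@x.io", "b@y.dev"]
```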
Select Data
This component selects a single Data object from a list.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| data_list | Data List | A list of available data items to select from. |
| data_index | Data Index | The index used to identify and select a specific item from the list. |
| selected_data | Selected Data | The JSON object retrieved from the data list at the specified index. |
Update Data
This component dynamically updates or appends data with specified fields.
It accepts the following parameters:
| Name | Type | Description |
|---|---|---|
| old_data | JSON | The original set of records that will be updated. |
| number_of_fields | Number of Fields | Specifies how many fields will be added to each record, with a maximum limit of 15. |
| text_key | Text Key | The key used to identify and access the text content within each record. |
| text_key_validator | Text Key Validator | Validates whether the specified text key exists in the input data before processing. |
| data | JSON | The resulting set of records after applying all updates. |
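The update-or-append behavior in the table can be sketched as merging a set of new fields into every record. An illustrative example using plain dicts, not the component's implementation:

```python
def update_data(old_data: list[dict], new_fields: dict) -> list[dict]:
    """Add the given fields to every record, overwriting existing keys."""
    return [{**record, **new_fields} for record in old_data]

data = update_data([{"id": 1}, {"id": 2, "status": "old"}],
                   {"status": "updated"})
# data == [{"id": 1, "status": "updated"}, {"id": 2, "status": "updated"}]
```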