Robility Ollama

The Robility Ollama integration is a secure and centralized component designed to run local Large Language Models (LLMs) within automation projects. It enables users to execute open-source AI models directly within the Robility ecosystem, ensuring that intelligent processing remains private and infrastructure-controlled.

It allows automation processes to leverage local LLMs efficiently—whether for generating responses, creating embeddings, or powering intelligent agents.

Prerequisites

Robility Ollama supports the following models for chat and embeddings.

Chat Models (Text Generation)
These models are used to generate conversational responses and natural language text.
1. gpt-oss:latest — An open-source, GPT-style model for general conversational tasks
2. llama3.1:latest — Meta’s latest Llama model, optimized for following instructions and chat use cases
3. llama3:latest — An earlier version of Meta’s Llama chat model, suitable for standard conversations
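As a sketch of how an automation step might call one of these chat models, the snippet below builds a non-streaming request for Ollama's standard `/api/chat` REST endpoint (by default served on `localhost:11434`). The model name and prompt are illustrative; the exact endpoint exposed by a Robility deployment may differ.

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust if your deployment uses another host/port.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a non-streaming chat payload for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON response instead of a token stream
    }

def chat(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Usage (requires a running Ollama server with the model pulled):
# reply = chat("llama3.1:latest", "Summarize this invoice in one sentence.")
```

Because the response is requested with `"stream": False`, the reply arrives as a single JSON object whose `message.content` field holds the generated text.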

Embedding Models (Vector Search / RAG)
These models convert text into numerical vectors for search, similarity matching, and retrieval-augmented generation (RAG).
1. mxbai-embed-large:latest — A high-quality embedding model designed for accurate search and retrieval
2. nomic-embed-text:latest — A fast and efficient embedding model for large-scale text indexing
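To illustrate how these embedding vectors are used for similarity matching, the sketch below compares vectors with cosine similarity and picks the best-matching document, which is the core ranking step in a RAG pipeline. It assumes the vectors were already produced by one of the models above (for example via Ollama's `/api/embeddings` endpoint); the helper names are illustrative.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec: list[float], doc_vecs: list[list[float]]) -> int:
    """Return the index of the stored document vector closest to the query."""
    return max(
        range(len(doc_vecs)),
        key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
    )

# Usage: embed the query and documents with the same model, then rank.
# best = top_match(query_embedding, document_embeddings)
```

Note that query and document vectors must come from the same embedding model; vectors from different models (or different dimensions) are not comparable.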

Refer to the documentation below for more detail:
1. Robility Ollama
