Glossary
Definitions for vocabulary used throughout the documentation
Chunk
A chunk is a wrapper around a piece of text that includes an optional ID and an optional metadata dictionary.
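A minimal sketch of that shape, using Python dataclasses and illustrative field names rather than the library's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Chunk:
    """A piece of text plus an optional ID and an optional metadata dictionary."""
    text: str
    id: Optional[str] = None
    metadata: Optional[dict] = None

# Example: a chunk tagged with the document it came from.
chunk = Chunk(text="Cats are small carnivorous mammals.",
              metadata={"source": "animals.txt"})
```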
Embedding
Embeddings are vector representations of text or other types of data, and are used to capture semantic meaning. Generally, the “closer” two embeddings are, the “closer” in semantic meaning the underlying text should be. For example, the vector representation for “cat” should generally be much closer to that for “kitten” than to the vector representation for “dog”.
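As an illustration of “closeness” (toy numbers, no particular embedding model assumed), cosine similarity is a common way to compare embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Returns a value near 1.0 when the vectors point in a similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
dog = [0.2, 0.8, 0.1]

print(cosine_similarity(cat, kitten))  # ~0.99, semantically close
print(cosine_similarity(cat, dog))     # ~0.35, further apart
```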
Message
A Message represents an interaction between the user and the AI agent. Messages are the basic building blocks of a continuous chat dialog.
Key aspects of a Message:
- A Message contains the raw text input by either the user or the LLM at a particular point in the conversation.
- Messages are wrapped with metadata such as speaker role (“user”, “system”, “agent”, “tool”) to provide context.
- Different message types indicate distinct roles in the dialog flow, such as user input, AI responses, tool usage info, etc.
- Messages are passed sequentially between the user interface and LLM to enable a natural back-and-forth conversation.
- Previous Messages provide contextual history and continuity for both the user and the AI agent.
In summary, a Message encapsulates a single interaction turn consisting of raw text and a label indicating the message’s role, and a history of previous Message objects is often transmitted with each request as context.
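A rough sketch of this structure, with illustrative (not official) field names:

```python
from dataclasses import dataclass

@dataclass
class Message:
    """One interaction turn: raw text plus a role label."""
    role: str  # "user", "system", "agent", or "tool"
    text: str

# The running history is sent along with each request to give the LLM context.
history = [
    Message(role="system", text="You are a helpful assistant."),
    Message(role="user", text="What is an embedding?"),
    Message(role="agent", text="An embedding is a vector representation of text..."),
]
```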
SystemMessage
A SystemMessage is a message that specifies objectives or constraints for the AI to follow when generating its response. SystemMessages are provided along with user prompts to guide the AI’s behavior.
Key functions of SystemMessages:
- Set framing and context - “You are an AI assistant…”
- Define objectives - “Explain this concept simply” or “Summarize the key points”
- Limit scope - “Focus only on the plot and characters”
- Constrain tone - “Be professional and respectful”
- Manage capabilities - “Do not make speculative claims”
SystemMessages allow conversation designers to precisely shape the AI’s responses by establishing objectives, boundaries, and tone upfront. They provide a mechanism for keeping the AI’s behavior aligned with the needs of a particular use case.
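A sketch of a SystemMessage paired with a user prompt, using simplified stand-ins for the message types described in this glossary:

```python
from dataclasses import dataclass

@dataclass
class SystemMessage:
    text: str

@dataclass
class UserMessage:
    text: str

# The system message frames behavior; the user message carries the actual query.
messages = [
    SystemMessage(text="You are an AI assistant. Be professional and respectful. "
                       "Focus only on the plot and characters."),
    UserMessage(text="Summarize the key points of the novel."),
]
```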
UserMessage
A UserMessage represents an input prompt sent to the large language model (LLM) by the user during a conversation. It contains the natural language text that the user enters in order to query the LLM.
AgentMessage
An AgentMessage is the output returned by the large language model (LLM) in response to a user’s input. AgentMessages serve two main purposes:
- Providing Content - The majority of AgentMessages will contain message content, which is the LLM’s natural language response to the user’s prompt. This generated text serves as the main conversational reply from the AI agent.
- Requesting Tools - In some cases, an AgentMessage may specify a tool request rather than directly outputting text. Here, the LLM asks to invoke a particular tool or function, passing along relevant arguments. This allows the agent to leverage external capabilities as part of the conversation flow.
AgentMessages allow the LLM to communicate with the user, either by generating natural language responses or by invoking applicable tools through requests.
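A sketch of the two shapes an AgentMessage can take; the ToolRequest structure and field names here are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolRequest:
    """A request from the LLM to invoke a client-side tool."""
    name: str
    arguments: str  # JSON string of arguments for the tool

@dataclass
class AgentMessage:
    """Either natural-language content or a request to invoke a tool."""
    content: Optional[str] = None
    tool_request: Optional[ToolRequest] = None

# A content-bearing reply:
reply = AgentMessage(content="Paris is the capital of France.")

# A tool-requesting reply:
lookup = AgentMessage(tool_request=ToolRequest(
    name="google_search",
    arguments='{"query": "capital of France"}',
))
```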
ToolMessage
A ToolMessage is a type of Message used to indicate that a tool or external function has been invoked, usually in response to an AgentMessage. Specifically:
- The LLM first determines when and which tools to execute based on the dialog flow.
- When a tool is executed on the client side, the system sends a ToolMessage to the large language model (LLM) describing the tool execution.
- The ToolMessage contains details like the tool name, parameters, and output generated.
- The LLM receives ToolMessages to understand when external capabilities have been leveraged.
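Continuing the same illustrative sketch, a ToolMessage reporting a client-side execution back to the LLM might look like:

```python
from dataclasses import dataclass

@dataclass
class ToolMessage:
    """Describes a tool execution so the LLM knows what was run and what it produced."""
    tool_name: str
    arguments: str  # the JSON arguments the tool was invoked with
    output: str     # the result produced by the tool

# After running the requested search locally, the client reports the result:
tool_msg = ToolMessage(
    tool_name="google_search",
    arguments='{"query": "capital of France"}',
    output="Paris",
)
```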
AssistantMessage
An AssistantMessage is a type of message sent from the AI agent to the user containing a direct response. Unlike AgentMessages, AssistantMessages cannot include tool requests and simply provide a text-based reply.
AssistantMessages serve as natural language responses from the AI to the user’s queries or conversation turns.
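In the same illustrative style, an AssistantMessage carries text only, with no tool request field:

```python
from dataclasses import dataclass

@dataclass
class AssistantMessage:
    """A plain text reply from the AI to the user; no tool requests."""
    text: str

reply = AssistantMessage(text="Here is a short summary of the plot...")
```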
Tool
A Tool is a modular capability that can be leveraged by a conversational AI agent to execute logic on the client side. Tools allow the agent to tap into external functions beyond just server-side text generation.
Key attributes of a Tool:
- Name - A unique identifier that will be recognized by the agent, like “google_search”.
- Description - Plain text explaining what the Tool does.
- Arguments - A JSON string containing data the Tool needs to execute, like search keywords.
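An illustrative sketch of a Tool definition, with a hypothetical search function standing in for real logic (none of this is the library's actual interface):

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A client-side capability the agent can request by name."""
    name: str                 # unique identifier, e.g. "google_search"
    description: str          # plain-text explanation of what the Tool does
    func: Callable[..., str]  # the local logic to run

    def run(self, arguments: str) -> str:
        """Execute the Tool with a JSON string of arguments."""
        return self.func(**json.loads(arguments))

# Hypothetical search function used purely for illustration.
def fake_search(query: str) -> str:
    return f"Top result for {query!r}"

search_tool = Tool(name="google_search",
                   description="Searches the web for the given query.",
                   func=fake_search)
print(search_tool.run('{"query": "capital of France"}'))
```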
When a Tool is referenced in an agent response, the client runs it locally and passes outputs back to the agent.
The flow works as follows:
- The agent response references a Tool name and arguments.
- The client parses this and executes the Tool logic locally.
- Outputs are sent back and injected into the conversation flow.
- The agent seamlessly incorporates Tool outputs into subsequent responses.
This allows for more sophisticated conversational experiences by mixing agent-generated text with modular client-side capabilities. The agent simply needs to name the Tool and provide arguments, while execution happens externally.
Tools enable agents to leverage reusable client-side functions as building blocks, improving conversation flexibility.
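Putting the pieces together, a hypothetical client-side loop could tie the sketches above into one flow; `llm_respond` and the message and Tool classes it uses are assumptions from the earlier examples, not the library's API:

```python
def run_turn(history, tools, llm_respond):
    """Send the history to the LLM, executing any requested Tools until text comes back.

    `llm_respond` is a stand-in for the call that returns an AgentMessage;
    `tools` maps Tool names to the Tool objects sketched earlier.
    """
    while True:
        agent_msg = llm_respond(history)       # AgentMessage: content or tool request
        history.append(agent_msg)
        if agent_msg.tool_request is None:
            return agent_msg.content           # plain text reply; the turn is complete
        request = agent_msg.tool_request
        tool = tools[request.name]             # look up the named Tool
        output = tool.run(request.arguments)   # execute locally on the client
        history.append(ToolMessage(            # report the result back to the LLM
            tool_name=request.name,
            arguments=request.arguments,
            output=output,
        ))
```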