The LLM (Large Language Model) Node uses the large language model of your choice, such as ChatGPT or Claude, to handle natural conversations.

The LLM Node can be placed anywhere in the conversation flow. It works by using AI Tokens to generate text, allowing for dynamic and engaging interactions with users.

Essentially, it’s like having a smart assistant that can understand and respond to users in a human-like way.

This node is important for creating conversational experiences that feel natural and intuitive for your users, without requiring extensive coding knowledge.

Options in the LLM Node

1. Model
2. Brain Vault
3. Chat History
4. System Prompt
5. User Prompt
6. Temperature
7. Max Tokens
8. Top P

1. Model

The Model option displays the models available on the platform, allowing you to pick the one the LLM Node should use.

You can easily select models from OpenAI, Anthropic Claude, Mistral, Cohere, Google Gemini, and more.

2. Brain Vault

Enabling the “Use Brain Vault” option on the LLM Node allows the node to utilize the data you have uploaded to your Brain Vault.

3. Chat History

Enabling the “Use Chat History” option on the LLM Node allows the node to utilize the chat history, enhancing the assistant’s understanding and response quality.

By enabling this option, the assistant becomes aware of previous interactions, enabling it to tailor responses more effectively.
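The platform's internal request format is not documented here, but the effect of "Use Chat History" can be sketched with a hypothetical OpenAI-style messages list: prior turns are replayed ahead of the new user message so the model can resolve references such as "it". The `build_messages` helper and the sample history below are illustrative assumptions, not the platform's actual API.

```python
# Hypothetical prior turns captured earlier in the conversation.
history = [
    {"role": "user", "content": "What is a Brain Vault?"},
    {"role": "assistant", "content": "It is your uploaded knowledge base."},
]

def build_messages(history, system_prompt, user_prompt, use_chat_history):
    """Assemble a chat-style request; include earlier turns only when
    the 'Use Chat History' option is enabled."""
    messages = [{"role": "system", "content": system_prompt}]
    if use_chat_history:
        messages.extend(history)
    messages.append({"role": "user", "content": user_prompt})
    return messages

# With history enabled, the model sees the earlier exchange and can
# tell what "it" refers to in the follow-up question.
msgs = build_messages(
    history,
    "You are a helpful assistant.",
    "How do I enable it?",
    use_chat_history=True,
)
```

With `use_chat_history=False`, only the system prompt and the new question would be sent, and the follow-up "it" would be ambiguous to the model.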

4. System Prompt

A System Prompt gives instructions to the model. It tells the model how to respond to user inputs.

For example, if you set the system prompt to “You are an expert UI designer,” the model will generate responses as if it were an expert UI designer.

The effectiveness of the responses your assistant provides relies heavily on the clarity and specificity of the system prompt you provide. It essentially sets the tone and direction for the interaction between users and your Assistant.

5. User Prompt

A User Prompt is the input sent to the model. It is how the user interacts with the model by asking questions or providing input.

When using the LLM Node, you can reference other nodes in the User Prompt, such as listen-0.

For example, if you reference a Listen node in the User Prompt, the output of that Listen node is sent to the LLM Node as the User Prompt.
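The exact substitution mechanism is platform-internal, but a minimal sketch of the idea is below. The `{{listen-0}}` placeholder syntax and the `resolve_user_prompt` helper are hypothetical; `listen-0` is the example node reference from the text.

```python
# Output previously captured by a Listen node (illustrative value).
node_outputs = {"listen-0": "How do I reset my password?"}

def resolve_user_prompt(template, outputs):
    """Replace {{node-id}} placeholders in the User Prompt with the
    referenced nodes' captured outputs."""
    for node_id, value in outputs.items():
        template = template.replace("{{" + node_id + "}}", value)
    return template

# Referencing listen-0 makes its captured text the User Prompt.
prompt = resolve_user_prompt("{{listen-0}}", node_outputs)
```

A User Prompt can also mix literal text with a reference, e.g. `"Answer briefly: {{listen-0}}"`, so the captured input is embedded inside a larger instruction.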

6. Temperature

Temperature refers to how creative or varied the responses from the model will be.

A higher temperature means more diverse responses, while a lower temperature leads to more predictable answers.
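Under the hood, temperature works by rescaling the model's token probabilities before sampling. This standard mechanism can be sketched in a few lines; the logits here are made-up example values.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature before softmax: a low temperature
    sharpens the distribution (predictable picks), a high temperature
    flattens it (more varied picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]          # example token scores
low = softmax_with_temperature(logits, 0.2)   # nearly deterministic
high = softmax_with_temperature(logits, 2.0)  # closer to uniform
```

With temperature 0.2 almost all probability mass lands on the top token, while at 2.0 the alternatives keep meaningful probability, which is exactly the predictable-versus-diverse trade-off described above.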

7. Max Tokens

Max Tokens sets the maximum number of tokens (word pieces, not whole words) the AI can generate in a response. When this limit is reached, the AI stops generating text.

Increasing the max token limit can help ensure that responses are complete and not cut off prematurely, allowing for smoother conversation flow.
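The cutoff behavior is easy to picture as a generation loop that stops once the limit is hit. This is a simplified sketch, not the platform's implementation; the token list is an example.

```python
def generate_with_limit(token_stream, max_tokens):
    """Collect generated tokens, stopping as soon as the max-token limit
    is reached, which is why a low limit can cut a reply off mid-sentence."""
    out = []
    for token in token_stream:
        if len(out) >= max_tokens:
            break
        out.append(token)
    return out

reply = ["Hello", ",", " how", " can", " I", " help", "?"]
truncated = generate_with_limit(iter(reply), 4)
# The reply is cut off after 4 tokens, before the question mark.
```

Raising `max_tokens` above the reply's length would let the full sentence through, which is the "not cut off prematurely" behavior described above.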

8. Top P

Top P (also called nucleus sampling) restricts generation to the smallest set of most probable tokens whose combined probability reaches the chosen value. By adjusting “Top P,” you control the diversity of tokens considered for the generated responses.

A higher value means more variety, while a lower value results in fewer variations.
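The mechanism behind Top P is standard nucleus sampling: keep the smallest set of highest-probability tokens whose cumulative probability reaches the threshold, then sample only from that set. The probabilities below are made-up example values.

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. Lower top_p -> fewer candidates."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {idx: p / total for idx, p in kept}

probs = [0.5, 0.3, 0.15, 0.05]   # example token probabilities
broad = top_p_filter(probs, 0.9)  # keeps several candidate tokens
narrow = top_p_filter(probs, 0.5) # keeps only the single top token
```

With `top_p=0.9` three tokens remain in play (more variety); with `top_p=0.5` only the top token survives, matching the higher-value-means-more-variation behavior described above.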