LLM
The LLM (Large Language Model) Node uses the large language model of your choice, such as ChatGPT or Claude, to handle natural conversations.
The LLM Node can be placed anywhere in the conversation flow. It consumes AI Tokens to generate text, allowing for dynamic and engaging interactions with users.
Essentially, it’s like having a smart assistant that can understand and respond to users in a human-like way.
Options in the LLM Node
Model
Brain Vault
Chat History
System Prompt
User Prompt
Temperature
Max Tokens
Top P
1. Model
The Model option lists the models available on the platform, allowing you to pick the one you want the LLM Node to use.
2. Brain Vault
Enabling the “Use Brain Vault” option on the LLM Node allows the node to utilize the data you have uploaded to your Brain Vault.
3. Chat History
Enabling the “Use Chat History” option on the LLM Node gives the node access to the conversation’s chat history, improving the assistant’s understanding and response quality.
With this option on, the assistant is aware of previous interactions and can tailor its responses more effectively.
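As a rough sketch of what this option does, the request can include earlier turns alongside the new message. The example below uses the OpenAI Python client and the common chat-message format purely as an illustration; the actual provider, model name, and request format depend on how your node is configured.

```python
# Illustrative only: earlier turns are passed along with the new message so the
# model can resolve references like "it". Assumes the OpenAI Python client and
# an OPENAI_API_KEY in the environment; the platform handles this for you.
from openai import OpenAI

client = OpenAI()

chat_history = [
    {"role": "user", "content": "My order number is 1042."},
    {"role": "assistant", "content": "Thanks! Order 1042 shipped yesterday."},
]
new_message = {"role": "user", "content": "When will it arrive?"}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=chat_history + [new_message],  # history lets the model know "it" means order 1042
)
print(response.choices[0].message.content)
```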
4. System Prompt
A System Prompt gives instructions to the model. It tells the model how to respond to user inputs.
For example, if you set the system prompt to “You are an expert UI designer”, the model will generate responses as if it were an expert UI designer.
The effectiveness of the responses your assistant provides relies heavily on the clarity and specificity of the system prompt. It essentially sets the tone and direction for the interaction between users and your Assistant.
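To make the idea concrete, here is a minimal sketch of how a system prompt is typically passed to a chat model. It uses the OpenAI-style message format as an assumed example; the node sends the equivalent for whichever model you select.

```python
# Illustrative only: the system message sets the assistant's role and tone for
# every reply; the user message is the question being answered.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "You are an expert UI designer. Give concise, practical advice."},
        {"role": "user", "content": "How much spacing should a settings page use between sections?"},
    ],
)
print(response.choices[0].message.content)  # answers in the voice of a UI design expert
```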
5. User Prompt
A User Prompt is the input sent to the model. It is how the user interacts with the model, whether by asking questions or providing other input.
When using the LLM Node, you can reference other nodes in the User Prompt, such as listen-0.
For example, if you reference a listen node in the User Prompt, the output of that listen node is sent to the LLM Node as the User Prompt.
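As an illustration, the sketch below shows a referenced node’s output being sent as the user prompt. The `listen_output` variable and its value are hypothetical stand-ins for what a listen node such as listen-0 captures; the platform wires this up for you when you reference the node.

```python
# Illustrative only: whatever the referenced listen node captured becomes the
# user message in the request. `listen_output` is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

listen_output = "I want to change my delivery address."  # stand-in for the listen-0 output

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": listen_output},  # the listen node's output is the user prompt
    ],
)
print(response.choices[0].message.content)
```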
6. Temperature
Temperature controls how creative or varied the model’s responses will be. Lower values produce more focused, predictable responses, while higher values produce more varied, creative ones.
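The toy example below shows the general mechanism: temperature rescales the model’s next-token probabilities before it picks a word. The logits and token names are made up for illustration and are not the platform’s internals.

```python
import math

# Toy illustration with made-up logits: lower temperature concentrates
# probability on the most likely token; higher temperature spreads it out.
logits = {"blue": 2.0, "green": 1.0, "red": 0.2}

def next_token_probs(logits, temperature):
    scaled = {token: value / temperature for token, value in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    return {token: round(math.exp(v) / total, 2) for token, v in scaled.items()}

print(next_token_probs(logits, 0.2))  # almost all probability on "blue": focused, predictable
print(next_token_probs(logits, 1.5))  # probabilities flatten out: more varied, creative wording
```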
7. Max Tokens
Max Tokens sets the maximum number of tokens (word pieces, roughly three-quarters of a word each on average) the AI can generate in a response. When this limit is reached, the AI stops generating text.
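For context, a minimal sketch of a capped request is shown below, again using the OpenAI Python client as an assumed example. The value of 100 is arbitrary.

```python
# Illustrative only: generation stops once the cap is reached, even mid-sentence,
# so leave enough headroom for the answers you expect.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize why onboarding flows matter."}],
    max_tokens=100,  # ~100 tokens is roughly 75 words
)
print(response.choices[0].message.content)
```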
8. Top P
Top P limits the model to the smallest set of most probable tokens whose combined probability reaches the chosen value. By adjusting “Top P”, you can control the diversity of tokens in the generated responses.
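The toy sketch below illustrates the idea, often called nucleus sampling: only the smallest set of tokens whose probabilities add up to Top P stays in play. The token probabilities are invented for the example.

```python
# Toy illustration with made-up probabilities: a lower top_p keeps only the
# most likely tokens; a higher top_p lets less likely tokens stay in play.
probs = {"blue": 0.45, "green": 0.25, "red": 0.15, "purple": 0.10, "teal": 0.05}

def nucleus(probs, top_p):
    kept, cumulative = [], 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

print(nucleus(probs, 0.9))  # ['blue', 'green', 'red', 'purple'] -> more varied wording
print(nucleus(probs, 0.5))  # ['blue', 'green'] -> safer, more predictable wording
```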