Name | Description |
---|---|
Continue if max output tokens reached | If set to Yes, generation continues past the maximum token limit by sending the prompt and the output generated so far back to the model (see the continuation sketch after this table). |
Output language | The language in which the model should generate its response. |
Model | Select the Perplexity model to use for generating responses. Available models: llama-3.1-sonar-small-128k-online, llama-3.1-sonar-small-128k-chat, llama-3.1-sonar-large-128k-online, llama-3.1-sonar-large-128k-chat, llama-3.1-sonar-huge-128k-online, llama-3.1-8b-instruct, llama-3.1-70b-instruct. |
Prompt | The prompt to send to the model. |
Temperature | Controls the randomness of the output. Higher values (for example 0.8) make responses more varied and creative; lower values (for example 0.2) make them more focused and predictable. It is best to adjust either this or Top P, but not both. |
Top P | Controls nucleus sampling: the model only considers the most probable tokens whose combined probability reaches the Top P value. A low value such as 0.1 restricts output to the likeliest tokens, keeping responses coherent and predictable. It is best to adjust either this or Temperature, but not both. |
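
To make the parameters above concrete, here is a minimal sketch of a single request, assuming Perplexity's OpenAI-compatible chat completions endpoint at https://api.perplexity.ai/chat/completions; the function name, the PERPLEXITY_API_KEY environment variable, and the example values are illustrative, so verify them against the current API reference.

```python
import os
import requests

# Assumed OpenAI-compatible Perplexity endpoint
API_URL = "https://api.perplexity.ai/chat/completions"

def ask_perplexity(prompt: str,
                   model: str = "llama-3.1-sonar-small-128k-online",
                   temperature: float = 0.2,
                   max_tokens: int = 512) -> str:
    """Send one prompt and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,  # adjust either this or top_p, not both
            "max_tokens": max_tokens,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(ask_perplexity("Summarize what nucleus sampling does."))
```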
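
The continuation option can be approximated with a loop like the one below: keep requesting output while the model stops because it hit the token limit, feeding the partial answer back each round. This is a sketch, assuming the response reports finish_reason as "length" when the limit is reached (as in OpenAI-compatible APIs); the max_rounds cap and the "Continue." follow-up message are illustrative choices, not part of the documented behavior.

```python
def ask_with_continuation(prompt: str, max_rounds: int = 5) -> str:
    """Accumulate output across rounds while generation is cut off by max_tokens."""
    messages = [{"role": "user", "content": prompt}]
    full_text = ""
    for _ in range(max_rounds):
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
            json={
                "model": "llama-3.1-sonar-small-128k-online",
                "messages": messages,
                "max_tokens": 256,
            },
            timeout=60,
        )
        response.raise_for_status()
        choice = response.json()["choices"][0]
        chunk = choice["message"]["content"]
        full_text += chunk
        # "length" is assumed to mean the token limit was hit; anything else
        # means the model finished on its own.
        if choice.get("finish_reason") != "length":
            break
        # Send the prompt and the output generated so far back to the model
        messages.append({"role": "assistant", "content": chunk})
        messages.append({"role": "user", "content": "Continue."})
    return full_text
```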