Perplexity

Input fields / Configuration

These are the fields that can be used to configure the AI system request.

Name Description
Continue if max output tokens reached: If set to yes, generation continues past the maximum output token limit by sending the prompt and the output generated so far back to the model.
Output language: The language in which the model should produce its output.
Model: Select the Perplexity model to use for generating responses.

Available models:
llama-3.1-sonar-small-128k-online
llama-3.1-sonar-small-128k-chat
llama-3.1-sonar-large-128k-online
llama-3.1-sonar-large-128k-chat
llama-3.1-sonar-huge-128k-online
llama-3.1-8b-instruct
llama-3.1-70b-instruct
Prompt: The prompt text sent to the model.
Temperature: Temperature controls randomness, like adding spice to cooking. At a high value such as 0.8 the output is full of surprises; at a low value such as 0.2 it is more focused and predictable. It is best to change either this or Top P, but not both.
Top P: Top P limits sampling to the most likely tokens, like picking only your favourites from a box of chocolates. A small value such as 0.1 keeps only the most likely options, which makes the output interesting but not too wild. It is best to change either this or Temperature, but not both.
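The fields above can be sketched as a request payload plus a continuation loop. This is a minimal illustration, not the platform's implementation: the payload shape follows Perplexity's OpenAI-compatible chat-completions API, and the `send` callable (which would perform the actual HTTP call and return the generated text and finish reason) is a placeholder assumption.

```python
# Sketch: mapping the configuration fields onto a chat-completions payload,
# plus the "Continue if max output tokens reached" behaviour described above.
# The `send` callable is a stand-in for the actual API call (an assumption).

def build_payload(prompt, model="llama-3.1-sonar-small-128k-online",
                  temperature=0.2, top_p=None, max_tokens=512):
    """Build a chat-completions payload from the configuration fields.

    Per the guidance above, set either temperature or top_p, not both:
    top_p, when given, takes the place of temperature.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    if top_p is not None:
        payload["top_p"] = top_p
    else:
        payload["temperature"] = temperature
    return payload

def generate(send, prompt, continue_on_limit=True, **opts):
    """Call send(payload) -> (text, finish_reason).

    If the model stopped because it hit the output-token limit
    (finish_reason == "length") and continuation is enabled, resend the
    prompt together with the output generated so far, as described for
    "Continue if max output tokens reached".
    """
    output = ""
    while True:
        text, finish_reason = send(build_payload(prompt + output, **opts))
        output += text
        if finish_reason != "length" or not continue_on_limit:
            return output
```

A real `send` would POST the payload to the chat-completions endpoint with an API key; injecting it as a parameter keeps the sketch testable without network access.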

Result Fields

These result fields are available in the final data and can be used in additional AI System Requests.

Name Description
Prompt result: Result of the prompt.
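Chaining the prompt result into a follow-up request can be sketched as below. This is a hypothetical illustration, not the platform's API: the `{result}` placeholder and the `send` callable are assumptions introduced for the example.

```python
# Sketch: feeding one request's "Prompt result" into the next request.
# `send` stands in for an AI system request returning the prompt result,
# and the {result} placeholder is an illustrative convention.

def chain(send, prompt_templates):
    """Run prompts in order, substituting each step's result into the
    next prompt via a {result} placeholder."""
    result = ""
    for template in prompt_templates:
        result = send(template.format(result=result))
    return result
```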
Start creating an app using this AI System and combine it with other available AI Systems and external data.