Perplexity

Input fields / Configuration

These fields configure the request sent to the AI system.

Continue if max output tokens reached: If set to yes, generation continues beyond the maximum token limit by sending the prompt and the output generated so far back to the model.
Output language: The language in which the output of the prompt is generated.
Model: The Perplexity model to use for generating responses.

Available models:
llama-3.1-sonar-small-128k-online
llama-3.1-sonar-small-128k-chat
llama-3.1-sonar-large-128k-online
llama-3.1-sonar-large-128k-chat
llama-3.1-sonar-huge-128k-online
llama-3.1-8b-instruct
llama-3.1-70b-instruct
llama-3-sonar-small-32k-chat
llama-3-sonar-small-32k-online
llama-3-sonar-large-32k-chat
llama-3-sonar-large-32k-online
mixtral-8x7b-instruct
Prompt: The prompt text sent to the model.
Temperature: Controls randomness. Lower values produce less random completions; as the temperature approaches zero, the model becomes deterministic and repetitive.
Top P: Controls diversity via nucleus sampling. A value of 0.5 means half of all likelihood-weighted options are considered.
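The configuration fields above map onto a single chat-style request. Below is a minimal sketch in Python, assuming Perplexity exposes an OpenAI-compatible chat completions API; the endpoint URL and JSON field names are assumptions, not taken from this page.

```python
import json

# Assumed endpoint for Perplexity's OpenAI-compatible API (not from this page).
API_URL = "https://api.perplexity.ai/chat/completions"

def build_request(prompt,
                  model="llama-3.1-sonar-small-128k-online",
                  temperature=0.2,
                  top_p=0.9):
    """Map the configuration fields above onto a request payload."""
    return {
        "model": model,                                     # "Model" field
        "messages": [{"role": "user", "content": prompt}],  # "Prompt" field
        "temperature": temperature,  # lower values -> less random output
        "top_p": top_p,              # nucleus sampling cutoff
    }

payload = build_request("Summarize the latest AI research news.")
print(json.dumps(payload, indent=2))
# The payload would be POSTed to API_URL with an Authorization: Bearer header.
```

In a real request the payload is sent with your API key; the response's first choice contains the generated text that becomes the "Prompt result" field below.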

Result Fields

These result fields are available in the final data and can be used in additional AI System Requests.

Prompt result: The result returned by the prompt.
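The "Continue if max output tokens reached" option described above can be sketched as a loop: whenever the model stops because it hit the token limit (the finish_reason value "length" in OpenAI-style APIs, an assumed convention here), the prompt plus the output generated so far is sent back so generation resumes. `call_model` below is a hypothetical toy stand-in for the real API call, not Perplexity's API.

```python
# Toy stand-in for the real API call: returns the next chunk of a fixed
# answer plus an OpenAI-style finish_reason ("length" if truncated).
FULL_ANSWER = "one two three four five six seven"

def call_model(prompt, generated_so_far, max_tokens=3):
    done = generated_so_far.split()
    remaining = FULL_ANSWER.split()[len(done):]
    chunk = " ".join(remaining[:max_tokens])
    finish_reason = "length" if len(remaining) > max_tokens else "stop"
    return chunk, finish_reason

def generate_with_continuation(prompt):
    """Keep re-sending the prompt plus the output so far until the model
    stops on its own instead of hitting the token limit."""
    output = ""
    while True:
        chunk, finish_reason = call_model(prompt, output)
        output = f"{output} {chunk}".strip()
        if finish_reason != "length":
            return output

print(generate_with_continuation("Count to seven."))  # full answer, in pieces
```

Each iteration here "generates" three more words, mirroring how the option stitches truncated completions into one final Prompt result.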
Start creating an app using this AI System and combine it with other available AI Systems and external data.