Name | Description |
---|---|
Continue if max output tokens reached | The models have a maximum output limit of 4096 tokens. If the output is cut off and this is set to yes, generation continues automatically: the prompt and the output generated so far are sent again so the model can carry on where it stopped. |
Frequency Penalty | Controls how strongly the model is discouraged from repeating the same words. Positive values penalize words in proportion to how often they have already appeared, keeping the output fresh and less repetitive; negative values make repetition more likely. |
Image URL | URL of an image to send along with the prompt (used with vision-capable models such as GPT-4 Vision). |
Output language | Language in which the model should write its output. |
Presence Penalty | Controls how strongly the model is encouraged to bring up new topics. Positive values penalize any word that has already appeared, nudging the output toward new subjects; negative values make the model more likely to stick to what it has already mentioned. |
Prompt | The text prompt sent to the model. |
Temperature | Controls how random the output is. A high value like 0.8 makes the text more varied and surprising; a low value like 0.2 makes it more focused and predictable. It is best to adjust either this or Top P, but not both. |
Top P | Limits the model to the most likely words (nucleus sampling). With a low value like 0.1, only the words making up the top 10% of probability are considered, so the output stays focused; higher values allow more variety. It is best to adjust either this or Temperature, but not both. |
Model | The GPT model used to generate the response. GPT is a language model that can generate text, answer questions, analyze text, or create summaries. The context size, such as 8k or 32k, indicates how much text the model can take into account at once, with 32k being a longer span than 8k. Available models: GPT-4o, GPT-4 Vision. |
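As a rough illustration of how these settings map to an API call, here is a minimal Python sketch using the OpenAI client library. The prompt text and parameter values are placeholders, and the continuation loop is an assumption about how "Continue if max output tokens reached" behaves (re-sending the prompt plus the output generated so far), not a definitive implementation of this node.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder prompt; in the node this comes from the Prompt field.
messages = [{"role": "user", "content": "Summarize the attached report."}]

# Example mapping of the node settings to API parameters.
# Adjust either temperature or top_p, not both.
params = {
    "model": "gpt-4o",
    "temperature": 0.8,        # higher -> more varied output
    "top_p": 1.0,              # left at default while tuning temperature
    "frequency_penalty": 0.0,  # positive values discourage repeated words
    "presence_penalty": 0.0,   # positive values encourage new topics
    "max_tokens": 4096,
}

# Assumed behavior of "Continue if max output tokens reached": if the reply
# is cut off (finish_reason == "length"), send the conversation again with
# the output so far and ask the model to continue.
output = ""
while True:
    response = client.chat.completions.create(messages=messages, **params)
    choice = response.choices[0]
    output += choice.message.content or ""
    if choice.finish_reason != "length":
        break
    messages.append({"role": "assistant", "content": choice.message.content})
    messages.append({"role": "user", "content": "Continue."})

print(output)
```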