A shared folder with AI prompts and code snippets
From workspace: Perplexity
Team: Main
Total snippets: 6
When using structured outputs with reasoning models like sonar-reasoning-pro, the response will include a <think> section containing reasoning tokens, immediately followed by the structured output. The response_format parameter does not strip these reasoning tokens from the output, so you will need to parse the final response manually. Sample Response:
<think> I need to provide information about France in a structured JSON format with specific fields: country, capital, population, official_language. For France: - Country: France - Capital: Paris - Population: About 67 million (as of 2023) -...
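Manually parsing such a response can be sketched as below. The helper is illustrative, not part of any Perplexity SDK; it assumes the reasoning block is delimited by a single <think>...</think> pair followed by the JSON payload, as in the sample above.

```python
import json
import re

def parse_structured_response(text: str) -> dict:
    """Strip the <think>...</think> reasoning block, then parse the
    remaining structured output as JSON. Illustrative helper only."""
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return json.loads(cleaned)

raw = '<think>Reasoning about France...</think>{"country": "France", "capital": "Paris"}'
parsed = parse_structured_response(raw)
print(parsed["capital"])  # Paris
```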
### Supported Regex
- Characters: \d, \w, \s, .
- Character classes: [0-9A-Fa-f], [^x]
- Quantifiers: *, ?, +, {3}, {2,4}, {3,}
- Alternation: |
- Group: ( ... )
- Non-capturing group: (?: ... )
- Positive lookahead: (?= ... )
- Negative...
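Since the supported constructs are a subset of standard regex syntax, a pattern can be sanity-checked locally with Python's re module before it is sent to the API. The MAC-address pattern below is a hypothetical example built only from supported constructs (a character class, the {2} and {5} quantifiers, and a non-capturing group):

```python
import re

# Hypothetical pattern using only constructs from the supported list:
# character class [0-9A-Fa-f], quantifiers {2}/{5}, non-capturing group.
mac_address = r"(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}"

print(bool(re.fullmatch(mac_address, "de:ad:be:ef:00:01")))  # True
print(bool(re.fullmatch(mac_address, "not-a-mac")))          # False
```

Validating locally like this catches typos in the pattern itself; it does not guarantee the API accepts it, so stay within the construct list above.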
Recursive JSON schemas are not supported; as a result, unconstrained objects are not supported either. Here are a few examples of unsupported schemas:
```python
# UNSUPPORTED!
from typing import Any

from pydantic import BaseModel

class UnconstrainedDict(BaseModel):
    unconstrained: dict[str, Any]

class RecursiveJson(BaseModel):
    value: str
    child: list["RecursiveJson"]
```
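One workaround, sketched below under the assumption that only recursion and unconstrained dicts are rejected, is to spell the nesting out to a fixed depth with concrete fields instead of a self-referencing model (the ChildNode/BoundedJson names are made up for illustration):

```python
from pydantic import BaseModel

# Instead of dict[str, Any] or a self-referencing model, describe the
# nested structure explicitly to a fixed depth with concrete fields.
class ChildNode(BaseModel):
    value: str

class BoundedJson(BaseModel):
    value: str
    children: list[ChildNode]  # one fixed level of nesting, no recursion

doc = BoundedJson(value="root", children=[{"value": "leaf"}])
print(doc.children[0].value)  # leaf
```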
Request

```python
import requests

url = "https://api.perplexity.ai/chat/completions"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {
    "model": "sonar",
    "messages": [
        {"role": "system", "content": "Be precise and concise."},
        ...
```
Request

```python
import requests

from pydantic import BaseModel

class AnswerFormat(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int

url = "https://api.perplexity.ai/chat/completions"
headers = ...
```
We currently support two types of structured outputs: JSON Schema and Regex. LLM responses will match the specified format except in the following cases: the output exceeds max_tokens. Enabling the structured outputs can be done by...
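Putting the pieces together, a JSON Schema request payload can be sketched as below. This is a hedged sketch: the "response_format" shape and field names mirror the request examples above, but verify them against the current Perplexity API reference before relying on them (the AnswerFormat fields here are made up for illustration).

```python
from pydantic import BaseModel

class AnswerFormat(BaseModel):
    country: str
    capital: str

# Assumed payload shape, modeled on the request snippets above;
# check the exact "response_format" fields in the API reference.
payload = {
    "model": "sonar",
    "messages": [{"role": "user", "content": "Tell me about France."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"schema": AnswerFormat.model_json_schema()},
    },
}

# The payload would then be POSTed with requests, e.g.:
# requests.post("https://api.perplexity.ai/chat/completions",
#               headers={"Authorization": "Bearer YOUR_API_KEY"},
#               json=payload)
```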