OpenAI Responses API
The Responses API is a recent addition to the OpenAI platform. It allows developers to build conversational tools by referencing a `previous_response_id` rather than sending the full message history on every request. TeXRA supports this API as an alternative to the Chat Completions API.
Key differences
- Continuations: Provide `previous_response_id` from the prior response to continue a conversation. Only the new messages are sent.
- Input types: Message parts use `input_text` or `input_image` objects.
- Output format: Instead of `choices`, text is found in `response.output[0].content[0].text` (or `output_text`).
- Instructions: The `instructions` parameter applies only to the current request. When using `previous_response_id`, you must resend any system instructions you want applied.
- No stop sequences: The Responses API does not accept a `stop` parameter. If your agent requires an end tag, handle it in post-processing rather than sending it to the API.
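The sketch below illustrates these differences using the OpenAI Node SDK (`openai` npm package): a first round that includes an image part, then a follow-up that passes `previous_response_id` and resends the `instructions`. The model name, prompts, and image URL are placeholders, not values TeXRA uses.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  // First round: send the full prompt, including an image part.
  const first = await client.responses.create({
    model: "gpt-4.1", // placeholder model name
    instructions: "You are a concise technical assistant.",
    input: [
      {
        role: "user",
        content: [
          { type: "input_text", text: "Describe this figure." },
          { type: "input_image", image_url: "https://example.com/figure.png", detail: "auto" },
        ],
      },
    ],
  });
  console.log(first.output_text);

  // Second round: send only the new message plus the previous response id.
  // `instructions` must be resent; it is not carried over between requests.
  const second = await client.responses.create({
    model: "gpt-4.1",
    previous_response_id: first.id,
    instructions: "You are a concise technical assistant.",
    input: "Now compress that description into one sentence.",
  });
  console.log(second.output_text);
}

main().catch(console.error);
```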
See the OpenAI Responses documentation for full details.
Using with TeXRA
When "texra.model.useOpenAIResponsesAPI" is enabled, the extension automatically:
- Converts chat message parts into `input_text` / `input_image` objects.
- Tracks the last `response.id` and sends it as `previous_response_id` for subsequent rounds.
- Reads the returned text from `output_text` or the `output` array.
This keeps requests small and simplifies conversation management.
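The following is an illustrative sketch of that multi-round pattern, not TeXRA's actual implementation: it tracks `previous_response_id` across rounds and trims an end tag in post-processing, since the API accepts no `stop` parameter. The `<end_output>` tag and model name are hypothetical placeholders.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Run a sequence of prompts as one continued conversation.
async function runRounds(instructions: string, prompts: string[]): Promise<string[]> {
  const replies: string[] = [];
  let previousResponseId: string | undefined;

  for (const prompt of prompts) {
    const response = await client.responses.create({
      model: "gpt-4.1",                        // placeholder model name
      instructions,                             // resent every round; not carried over
      previous_response_id: previousResponseId, // continue from the prior response
      input: prompt,                            // only the new message is sent
    });

    previousResponseId = response.id;           // remember where to continue next round

    // No `stop` parameter: strip any agent end tag in post-processing instead.
    const text = response.output_text ?? "";
    replies.push(text.split("<end_output>")[0].trimEnd());
  }

  return replies;
}
```

Because only the new input travels over the wire, the per-round payload stays roughly constant no matter how long the conversation runs.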
The open-weight models gpt-oss-120b and gpt-oss-20b (available only via OpenRouter) also use the Responses API automatically, even if the setting isn't enabled.