openai
Description
Plugin to interact with OpenAI services.
Currently supported:
get_response()
: Get a response to a prompt/question using the GPT API. It supports custom contexts and environment settings.
transcribe()
: Perform speech-to-text on an audio file. This API is also leveraged by the platypush.plugins.assistant.openai.OpenaiPlugin to provide a full-fledged voice assistant.

Through the platypush.plugins.tts.openai.OpenaiPlugin plugin, text-to-speech is also supported.
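Like any other Platypush action, these can be invoked over the service's HTTP `/execute` endpoint. As a sketch (assuming a standard Platypush HTTP backend on the default port, with an authorization token configured for your installation), a `get_response` call could be encoded like this:

```python
import json

# Build a Platypush action request for openai.get_response.
# The payload shape follows Platypush's standard /execute API;
# host, port and auth are illustrative and depend on your setup.
payload = {
    "type": "request",
    "action": "openai.get_response",
    "args": {
        "prompt": "What is a telephone?",
    },
}

# This JSON body would be POSTed to e.g. http://localhost:8008/execute
# with an Authorization header carrying your user token.
body = json.dumps(payload)
print(body)
```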
Contexts
The plugin also supports the implementation of custom assistant contexts/environment.
Contexts can be used to:
Customize the model’s behavior based on a set of inputs - going from a generic “You are a helpful assistant” to a more specific “You are a Star Trek fan”, or “You are a 16th century noble lady who talks in Shakespearean English to her peers”.
Pre-configure the model with a set of previous interactions in order to either pre-load information that we expect the model to remember, or to provide a set of previous interactions that the model can use to generate responses that are consistent with the conversation history.
The plugin provides two types of contexts:
Default context: This is a set of context entries that are provided at plugin initialization and that will be used to initialize the model with a configuration or set of previous interactions that will be remembered when generating all responses.
Runtime context: This is a set of context entries that can be passed at runtime to get_response(). All the interactions (both user prompts and assistant responses) that are processed through get_response() will also be added to the runtime context, and remembered for the next context_expiry seconds. This allows you to generate responses that are consistent with the recent conversation history.
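As an illustration of the expiry behaviour (a simplified sketch with a hypothetical helper, not the plugin's actual internals), runtime context entries can be modelled as timestamped messages that are dropped once they are older than context_expiry:

```python
import time

CONTEXT_EXPIRY = 600  # seconds, matching the plugin's default

def prune_context(entries, now=None, expiry=CONTEXT_EXPIRY):
    """Drop runtime context entries older than ``expiry`` seconds.

    ``entries`` is a list of (timestamp, message) tuples. An expiry of 0
    disables pruning, mirroring the documented behaviour.
    """
    if not expiry:
        return list(entries)
    now = now if now is not None else time.time()
    return [(ts, msg) for ts, msg in entries if now - ts <= expiry]

history = [(1000.0, {"role": "user", "content": "What is a telephone?"})]

# A follow-up within the window keeps the history; a later one starts fresh.
print(prune_context(history, now=1300.0))  # entry is 300 s old: kept
print(prune_context(history, now=2000.0))  # entry is 1000 s old: dropped
```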
Each context entry is a dictionary with the following keys:

role
: The role of the message. Can be one of:

  system
  : A system message provided to the model to set up its initial state - e.g. “you are a helpful assistant”.

  user
  : A user message, as provided by a previous (real or synthetic) user interaction.

  assistant
  : An assistant message, as provided by a previous (real or synthetic) assistant response.

content
: The content of the message.
An example of context:
context:
  - role: system
    content: >
      You are a 16th century noble lady who talks in Shakespearean
      English to her peers.
  - role: user
    content: What is a telephone?
  - role: assistant
    content: >
      Pray tell, noble companion, a telephone is a device of modern
      innovation that doth permit one to speak with a distant
      acquaintance by means of magical pink waves that do carry the
      sound of thine voice to the ear of the listener.
Given such a context, if you call get_response() with a prompt such as “How does it work?”, the model may generate a response such as “Fair lady, to use a telephone, thou must first lift the receiver and place it to thine ear. Then, thou must speak into the mouthpiece as though conversing with a companion in another room. The magical pink waves shall carry thy words to the recipient, who shall hear them on their own device. ‘Tis a wondrous invention indeed!”.

Note that the model will remember both the previous interactions and the responses it generates, so you can ask it direct questions such as “How does it work?” and it will infer what “it” is likely to mean. It will also keep providing responses in the style set up through the system context.
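Conceptually, the messages sent to the chat API are the default context, followed by the unexpired runtime context, followed by the new prompt. A rough sketch of that assembly (hypothetical helper, not the plugin's internal code):

```python
def build_messages(default_context, runtime_context, prompt):
    """Assemble the chat messages list: default context entries first,
    then the recent runtime context, then the new user prompt."""
    return (
        list(default_context)
        + list(runtime_context)
        + [{"role": "user", "content": prompt}]
    )

default_ctx = [
    {"role": "system",
     "content": "You are a 16th century noble lady who talks in "
                "Shakespearean English to her peers."},
]
runtime_ctx = [
    {"role": "user", "content": "What is a telephone?"},
    {"role": "assistant", "content": "Pray tell, noble companion, ..."},
]

messages = build_messages(default_ctx, runtime_ctx, "How does it work?")
print([m["role"] for m in messages])  # ['system', 'user', 'assistant', 'user']
```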
Configuration
openai:
# [Required]
# OpenAI API key. If not set, it will be read from the
# ``OPENAI_API_KEY`` environment variable.
api_key: # type=Optional[str]
# [Optional]
# The model to use. Default: ``gpt-3.5-turbo``.
# model: gpt-3.5-turbo # type=str
# [Optional]
# Default timeout for API requests (default: 30 seconds).
# timeout: 30 # type=float
# [Optional]
# Default context to use for completions, as a list of
# dictionaries with ``role`` and ``content`` keys. Default: None.
# context: # type=Optional[Iterable[dict]]
# [Optional]
# Default expiry time for the context in seconds.
# After this time since the last interaction, the context will be
# cleared.
#
# This means that any follow-up interactions happening within the
# expiry window will remember the past prompts, but any interaction
# that happens after the expiry window (calculated from the time of
# the last interaction) will start fresh.
#
# Note that ``context_expiry`` is only applied to the runtime
# context. The default context will never expire unless it's removed
# from the plugin configuration.
#
# Set to 0 to disable context expiry - i.e. all messages stay in the
# context until the plugin is restarted or the context is cleared
# explicitly via `OpenaiPlugin.clear_context <https://docs.platypush.tech/platypush/plugins/openai.html#platypush.plugins.openai.OpenaiPlugin.clear_context>`_. Default: 600 seconds (10
# minutes).
# context_expiry: 600 # type=Optional[float]
# [Optional]
# Maximum number of tokens to generate in the response
# (default: 500).
# max_tokens: 500 # type=int
Actions

Module reference
- class platypush.plugins.openai.ContextEntry(timestamp: datetime, role: ContextEntryRole, content: str)[source]#
Bases:
object
A context entry.
- __init__(timestamp: datetime, role: ContextEntryRole, content: str) → None#
- class platypush.plugins.openai.ContextEntryRole(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
Bases:
Enum
Roles for context entries.
- class platypush.plugins.openai.OpenaiPlugin(api_key: str | None, model: str = 'gpt-3.5-turbo', timeout: float = 30, context: Iterable[dict] | None = None, context_expiry: float | None = 600, max_tokens: int = 500, **kwargs)[source]#
Bases:
Plugin
Plugin to interact with OpenAI services. See the Description and Contexts sections above for details.
- __init__(api_key: str | None, model: str = 'gpt-3.5-turbo', timeout: float = 30, context: Iterable[dict] | None = None, context_expiry: float | None = 600, max_tokens: int = 500, **kwargs)[source]#
- Parameters:
  - api_key – OpenAI API key. If not set, it will be read from the OPENAI_API_KEY environment variable.
  - model – The model to use. Default: gpt-3.5-turbo.
  - timeout – Default timeout for API requests (default: 30 seconds).
  - max_tokens – Maximum number of tokens to generate in the response (default: 500).
  - context – Default context to use for completions, as a list of dictionaries with role and content keys. Default: None.
  - context_expiry – Default expiry time for the context in seconds. After this time since the last interaction, the context will be cleared. This means that any follow-up interactions happening within the expiry window will remember the past prompts, but any interaction that happens after the expiry window (calculated from the time of the last interaction) will start fresh. Note that context_expiry is only applied to the runtime context. The default context will never expire unless it’s removed from the plugin configuration. Set to 0 to disable context expiry - i.e. all messages stay in the context until the plugin is restarted or the context is cleared explicitly via clear_context(). Default: 600 seconds (10 minutes).
- get_response(prompt: str, model: str | None = None, context: Iterable[dict] | None = None, timeout: float | None = None, max_tokens: int | None = None) str | None [source]#
Get completions for a given prompt using ChatGPT.
- Parameters:
prompt – The prompt/question to complete/answer.
model – Override the default model to use.
context – Extend the default context with these extra messages.
max_tokens – Override the default maximum number of tokens to generate in the response.
timeout – Override the default timeout for the API request.
- Returns:
The completion for the prompt - or, better, the message associated with the highest scoring completion choice.
- transcribe(audio: str, model: str | None = 'whisper-1', timeout: float | None = None) str [source]#
Perform speech-to-text on an audio file.
- Parameters:
  - audio – The audio file to transcribe.
  - model – The model to use for speech-to-text. Default: whisper-1. If not set, the configured default model will be used.
  - timeout – Timeout for the API request. If not set, the default timeout will be used.
- Returns:
The transcribed text.
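As with get_response, transcribe is a regular Platypush action and can be encoded as an `/execute` request body. A sketch (the audio path is illustrative and must point to a file readable by the Platypush server):

```python
import json

# Hypothetical request body for the openai.transcribe action.
payload = {
    "type": "request",
    "action": "openai.transcribe",
    "args": {
        "audio": "/path/to/audio.mp3",  # illustrative path
        "model": "whisper-1",
    },
}
print(json.dumps(payload))
```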