assistant.google
Description
Google Assistant plugin.
This plugin allows you to run the Google Assistant _directly_ on your device. It requires a microphone and a speaker to be connected to the device.
If you have multiple sound devices, you can specify which one(s) to use for input and output through a ~/.asoundrc configuration file like this:

pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        slave.pcm "hw:1,0"
    }
}
You can use aplay -l and arecord -l to list the detected audio output and input devices, respectively, along with their indices.
If you are using PulseAudio instead of bare ALSA, then you can:

- Use the pavucontrol (GUI) tool to select the audio input and output devices and volumes for the assistant.
- Use a program like pamix (ncurses) or pamixer (CLI).
- Run the pactl list sources and pactl list sinks commands to list the detected audio input and output devices, respectively. Take note of their names, and specify which ones the assistant should use by starting the application with the right PULSE_SOURCE and PULSE_SINK environment variables.
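For example, you could first list the available devices and then start the application with an explicit input/output pair. The device names below are placeholders; substitute the ones reported on your system:

```shell
# List the available input (source) and output (sink) device names
pactl list sources short
pactl list sinks short

# Start Platypush with an explicit input/output device pair
# (the device names below are examples only)
PULSE_SOURCE=alsa_input.usb-0d8c_USB_Audio-00.analog-stereo \
PULSE_SINK=alsa_output.pci-0000_00_1f.3.analog-stereo \
platypush
```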
Warning
The Google Assistant library used by this backend has been deprecated by Google: https://developers.google.com/assistant/sdk/reference/library/python/. This integration still works on all of my devices, but its continued functioning is not guaranteed: Google may decide to turn off the API, the library may no longer be built for new architectures, and it's unlikely to be updated.
Note
Since the Google Assistant library hasn't been updated in several years, some of its dependencies are quite old and may break more recent Python installations. Please refer to the comments in the manifest file for more information on how to install the required dependencies if the automated methods fail.
Configuration
assistant.google:
# [Optional]
# Path to the Google OAuth credentials file.
# See
# https://developers.google.com/assistant/sdk/guides/library/python/embed/install-sample#generate_credentials
# for instructions to get your own credentials file.
# By default, it will search for the credentials file under:
#
# * ``~/.config/google-oauthlib-tool/credentials.json``: default
# location supported by the Google Assistant library
#
# * ``<PLATYPUSH_WORKDIR>/credentials/google/assistant.json``:
# recommended location, under the Platypush working directory.
# credentials_file: # type=Optional[str]
# [Optional]
# The device model ID that identifies the device
# where the assistant is running (default: Platypush). It can be a
# custom string.
# device_model_id: Platypush # type=str
# [Optional]
# If set, the assistant will use this plugin (e.g.
# ``tts``, ``tts.google`` or ``tts.mimic3``) to render the responses,
# instead of using the built-in assistant voice.
# tts_plugin: # type=Optional[str]
# [Optional]
# Optional arguments to be passed to the TTS
# ``say`` action, if ``tts_plugin`` is set.
# tts_plugin_args: # type=Optional[Dict[str, Any]]
# [Optional]
      # If set, the assistant will play this
      # audio file when it detects speech. The sound file will be played
      # on the default audio output device. If not set, the assistant won't
      # play any sound when it detects speech.
# conversation_start_sound: # type=Optional[str]
# [Optional]
# If set, the plugin will
# prevent the default assistant response when a
# `SpeechRecognizedEvent <https://docs.platypush.tech/platypush/events/assistant.html#platypush.message.event.assistant.SpeechRecognizedEvent>`_
# matches a user hook with a condition on a ``phrase`` field. This is
# useful to prevent the assistant from responding with a default "*I'm
# sorry, I can't help you with that*" when e.g. you say "*play the
# music*", and you have a hook that matches the phrase "*play the
# music*" and handles it with a custom action. If set, and you wish
# the assistant to also provide an answer if an event matches one of
# your hooks, then you should call the ``render_response`` method
# in your hook handler. If not set, then the assistant will always try
# and respond with a default message, even if a speech event matches
# the phrase of one of your hooks. In this case, if you want to prevent
# the default response, you should call ``stop_conversation``
# explicitly from your hook handler. Default: True.
# stop_conversation_on_speech_match: True # type=bool
# [Optional]
      # How often the `RunnablePlugin.loop <https://docs.platypush.tech/platypush/plugins/.html#platypush.plugins.RunnablePlugin.loop>`_ function should be
      # executed (default: 15 seconds). *NOTE*: for backward-compatibility
      # reasons, the ``poll_seconds`` argument is also supported, but it is
      # deprecated.
# poll_interval: 15 # type=Optional[float]
# [Optional]
# How long we should wait for any running
# threads/processes to stop before exiting (default: 5 seconds).
# stop_timeout: 5 # type=Optional[float]
# [Optional]
# If set to True then the plugin will not monitor
# for new events. This is useful if you want to run a plugin in
# stateless mode and only leverage its actions, without triggering any
# events. Defaults to False.
# disable_monitor: False # type=bool
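As an example of how ``stop_conversation_on_speech_match`` interacts with user hooks, here is a sketch of a YAML event hook that matches a spoken phrase and handles it with a custom action. The hook name and the ``music.mpd.play`` action are illustrative; adapt them to your setup:

```yaml
event.hook.PlayMusicOnRequest:
    if:
        type: platypush.message.event.assistant.SpeechRecognizedEvent
        phrase: "play the music"
    then:
        - action: music.mpd.play
```

With ``stop_conversation_on_speech_match`` enabled (the default), the assistant suppresses its own default answer when this hook matches, and only the hook's action runs.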
Dependencies
pip
pip install sounddevice google-auth google-assistant-library
Alpine
apk add ffmpeg py3-tenacity py3-numpy py3-urllib3 py3-google-auth py3-grpcio py3-cachetools py3-pathlib2 portaudio-dev
Debian
apt install ffmpeg python3-monotonic python3-urllib3 python3-cachetools python3-tenacity python3-google-auth portaudio19-dev python3-grpcio
Fedora
yum install ffmpeg python-cachetools python-grpcio python-urllib3 python-google-auth python-tenacity python-monotonic python-numpy portaudio-devel
Arch Linux
pacman -S ffmpeg python-cachetools portaudio python-grpcio python-sounddevice python-google-auth python-tenacity python-monotonic python-numpy python-urllib3
Post-install
yes | pip uninstall --break-system-packages enum34 click urllib3 requests google-auth
pip install -U --no-input --break-system-packages click urllib3 requests google-auth
Triggered events
Actions
Module reference
- class platypush.plugins.assistant.google.AssistantGooglePlugin(*_, **__)[source]#
Bases:
AssistantPlugin
,RunnablePlugin
Google Assistant plugin.
This plugin allows you to run the Google Assistant _directly_ on your device. It requires a microphone and a speaker to be connected to the device.
If you have multiple sound devices, you can specify which one(s) to use for input and output through a ~/.asoundrc configuration file like this:

pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        slave.pcm "hw:1,0"
    }
}
You can use aplay -l and arecord -l to list the detected audio output and input devices, respectively, along with their indices.
If you are using PulseAudio instead of bare ALSA, then you can:
- Use the pavucontrol (GUI) tool to select the audio input and output devices and volumes for the assistant.
- Use a program like pamix (ncurses) or pamixer (CLI).
- Run the pactl list sources and pactl list sinks commands to list the detected audio input and output devices, respectively. Take note of their names, and specify which ones the assistant should use by starting the application with the right PULSE_SOURCE and PULSE_SINK environment variables.
Warning
The Google Assistant library used by this backend has been deprecated by Google: https://developers.google.com/assistant/sdk/reference/library/python/. This integration still works on all of my devices, but its continued functioning is not guaranteed: Google may decide to turn off the API, the library may no longer be built for new architectures, and it's unlikely to be updated.
Note
Since the Google Assistant library hasn't been updated in several years, some of its dependencies are quite old and may break more recent Python installations. Please refer to the comments in the manifest file for more information on how to install the required dependencies if the automated methods fail.
- __init__(credentials_file: str | None = None, device_model_id: str = 'Platypush', **kwargs)[source]#
- Parameters:
credentials_file –
Path to the Google OAuth credentials file. See https://developers.google.com/assistant/sdk/guides/library/python/embed/install-sample#generate_credentials for instructions to get your own credentials file. By default, it will search for the credentials file under:

- ~/.config/google-oauthlib-tool/credentials.json: default location supported by the Google Assistant library
- <PLATYPUSH_WORKDIR>/credentials/google/assistant.json: recommended location, under the Platypush working directory.
device_model_id – The device model ID that identifies the device where the assistant is running (default: Platypush). It can be a custom string.
- mute(*_, **__)[source]#
Mute the microphone. Alias for set_mic_mute() with muted=True.
- pause_detection(*_, **__)#
Put the assistant on pause. No new conversation events will be triggered.
- publish_entities(entities: Collection[Any] | None, callback: Callable[[Entity], Any] | None = None, **kwargs) Collection[Entity] #
Publishes a list of entities. The downstream consumers include:

- The entity persistence manager
- The web server
- Any consumer subscribed to platypush.message.event.entities.EntityUpdateEvent events (e.g. web clients)

It also accepts an optional callback that will be called when each of the entities in the set is flushed to the database.
You usually don't need to override this method (but you may want to extend transform_entities() instead if your extension doesn't natively handle Entity objects).
- render_response(text: str, *_, with_follow_on_turn: bool | None = None, **__) bool #
Render a response text as audio over the configured TTS plugin.
- Parameters:
text – Text to render.
with_follow_on_turn – If set, the assistant will wait for a follow-up. By default, with_follow_on_turn will be automatically set to true if the text ends with a question mark.
- Returns:
True if the assistant is waiting for a follow-up, False otherwise.
- resume_detection(*_, **__)#
Resume the assistant hotword detection from a paused state.
- send_text_query(*_, query: str, **__)[source]#
Send a text query to the assistant.
This is equivalent to saying something to the assistant.
- Parameters:
query – Query to be sent.
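Like any other Platypush action, send_text_query can also be invoked over the HTTP API; a sketch, assuming a Platypush HTTP backend listening on the default port 8008 and a valid token in $TOKEN (both the port and the token are setup-dependent):

```shell
# Send a text query to the assistant over the Platypush /execute endpoint
curl -XPOST \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $TOKEN" \
    -d '{
        "type": "request",
        "action": "assistant.google.send_text_query",
        "args": {"query": "What time is it?"}
    }' \
    http://localhost:8008/execute
```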
- set_mic_mute(muted: bool)[source]#
Programmatically mute/unmute the microphone.
- Parameters:
muted – Set to True or False.
- start()#
Start the plugin.
- status(*_, **__)#
- Returns:
The current assistant status:
{
    "last_query": "What time is it?",
    "last_response": "It's 10:30 AM",
    "conversation_running": true,
    "is_muted": false,
    "is_detecting": true
}
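Since the status payload is plain JSON, downstream scripts can inspect it directly. A minimal sketch, using the example payload above:

```python
import json

# Example status payload, mirroring the structure returned by the status action
payload = json.loads("""
{
    "last_query": "What time is it?",
    "last_response": "It's 10:30 AM",
    "conversation_running": true,
    "is_muted": false,
    "is_detecting": true
}
""")

# React to the current assistant state
if payload["conversation_running"]:
    print(f"Assistant is handling: {payload['last_query']}")
elif payload["is_muted"]:
    print("Microphone is muted")
```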
- stop_conversation(*_, **__)#
Programmatically stops a conversation.
- toggle_mic_mute()[source]#
Deprecated alias for toggle_mute().
- transform_entities(entities: Collection[AssistantPlugin], **_)#
This method takes a list of entities in any (plugin-specific) format and converts them into a standardized collection of Entity objects. Since this method is called by publish_entities() before entity updates are published, you may usually want to extend it to pre-process the entities managed by your extension into the standard format before they are stored and published to all the consumers.
- unmute(*_, **__)[source]#
Unmute the microphone. Alias for set_mic_mute() with muted=False.
- wait_stop(timeout=None)#
Wait until a stop event is received.