VisualizerState In Voice UI #79

@ChrisFeldmeier

Description

Describe the problem

How can I call these functions from the playground app? On the agent side it works very well, but I cannot access this state from the agent's client. What do I have to change to get this done? I need it urgently.

Do I have to add it to the protocol code? If yes, where?

I want to drive the <AgentMultibandAudioVisualizer> component with this state:

type VisualizerState = "listening" | "idle" | "speaking" | "thinking";

derived from these agent events:

- user_started_speaking: the user started speaking
- user_stopped_speaking: the user stopped speaking
- agent_started_speaking: the agent started speaking
- agent_stopped_speaking: the agent stopped speaking
- user_speech_committed: the user speech was committed to the chat context
- agent_speech_committed: the agent speech was committed to the chat context
- agent_speech_interrupted: the agent speech was interrupted
- function_calls_collected: received the complete set of functions to be executed
- function_calls_finished: all function calls have been completed
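The agent events above are more fine-grained than the four visualizer states, so the playground needs to choose a mapping. A minimal sketch in TypeScript; the mapping itself is an assumption (not part of any LiveKit API), and `mapAgentEvent` is a hypothetical helper:

```typescript
type VisualizerState = "listening" | "idle" | "speaking" | "thinking";

// Hypothetical mapping from the agent event names above to a VisualizerState.
// Which event maps to which state is a design choice, not defined by LiveKit.
function mapAgentEvent(event: string): VisualizerState {
  switch (event) {
    case "user_started_speaking":
      return "listening";
    case "user_stopped_speaking":
    case "user_speech_committed":
    case "function_calls_collected":
    case "function_calls_finished":
      // the agent is presumably processing input or running tools
      return "thinking";
    case "agent_started_speaking":
      return "speaking";
    case "agent_stopped_speaking":
    case "agent_speech_committed":
    case "agent_speech_interrupted":
      return "idle";
    default:
      return "idle";
  }
}
```

The resulting state can then be passed straight into the visualizer component's props.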

https://github.com/livekit/agents/blob/39a59595c870d8822fdbf4e271b352b0521a573a/livekit-agents/livekit/agents/voice_assistant/assistant.py#L90

Describe the proposed solution

Add a VisualizerState to the Voice UI, like the ChatGPT app has. How can I receive this state in the playground code from the agent?
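One common way to bridge this gap is to have the agent publish its state over LiveKit's data channel (e.g. via `publish_data` on the agent's local participant) and decode it in the playground inside a `RoomEvent.DataReceived` handler from livekit-client. A sketch of the client-side decoding, assuming a hypothetical JSON message shape `{"type": "agent_state", "state": "..."}` that the agent would have to send:

```typescript
type VisualizerState = "listening" | "idle" | "speaking" | "thinking";

// Decode a data-channel payload into a VisualizerState, or null if the
// payload is not a state message (e.g. chat traffic on another topic).
// The {"type":"agent_state","state":...} shape is an assumption; both
// sides just need to agree on it.
function decodeStateMessage(payload: Uint8Array): VisualizerState | null {
  try {
    const msg = JSON.parse(new TextDecoder().decode(payload));
    if (msg.type === "agent_state") {
      return msg.state as VisualizerState;
    }
  } catch {
    // ignore non-JSON payloads
  }
  return null;
}
```

In the playground this would be wired up roughly as `room.on(RoomEvent.DataReceived, (payload) => { const s = decodeStateMessage(payload); if (s) setVisualizerState(s); })`, with the agent emitting a message on each of the events listed above.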

Alternatives considered

No response

Importance

I cannot use LiveKit without it

Additional Information

No response
