feat: add real-time agent activity status streaming to Slack assistant interface #31
Overview
This PR demonstrates a pattern for streaming real-time AI agent activity to Slack's assistant status interface, implemented with Google's Agent Development Kit (ADK) and OpenAI models. It is shared primarily for informational and discussion purposes.
Note: I understand this diverges from the project's current direction and may not be merged. The goal is to share the real-time status update pattern and provide an alternative implementation for those interested in Google ADK.
What Changed
Core Implementation
- ai/llm_caller.py: Refactored to emit status events alongside content events from the LLM (see the sketch below)
- listeners/assistant/message.py: Captures status events and streams them to assistant_threads_setStatus
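As a rough illustration of the two-event contract in ai/llm_caller.py, here is a minimal sketch. The agent_events iterator, its attributes, and the dict-shaped events are stand-ins assumed for illustration, not the actual Google ADK calls used in this PR.

```python
# Minimal sketch only: `agent_events` and its attributes are hypothetical
# stand-ins for the real Google ADK event stream; the {"type", "text"} event
# shape is an assumption used throughout these sketches.
from typing import Any, AsyncIterator, Dict


async def call_llm(prompt: str, agent_events: Any) -> AsyncIterator[Dict[str, str]]:
    """Yield status events (what the agent is doing) alongside content events (response text)."""
    async for event in agent_events(prompt):
        if getattr(event, "tool_name", None):
            # Surface tool calls as human-readable status updates
            yield {"type": "status", "text": f"is running the {event.tool_name} tool..."}
        elif getattr(event, "text", None):
            # Pass the model's output through as response content
            yield {"type": "content", "text": event.text}
```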
Supporting Files
- ai/agents.py: Example multi-agent system (coordinator + specialized agents)
- ai/tools.py: Example tools for the agents
- docs/MIGRATION_GUIDE.md: Migration guide
- requirements.txt: Updated dependencies
Key Features
Real-Time Agent Activity Status Updates
The Problem:
The Slack AI example showed generic status messages like "is thinking..." that don't reflect what's actually happening. Users have no visibility into tool calls, agent reasoning, or processing steps.
This Solution:
Stream real-time updates about what the AI is actually doing directly to Slack's assistant status interface.
How It Works (listeners/assistant/message.py:71-77):
The LLM caller emits two types of events:
- status events: What the AI is doing (tool calls, agent delegations, processing steps)
- content events: The actual response text
Status events are captured and streamed to assistant_threads_setStatus, as sketched below.
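A minimal sketch of that capture-and-stream step, assuming the event shape from the sketch above. The handler name is illustrative, not the exact code at listeners/assistant/message.py:71-77, and the PR itself streams the final content with chat_stream; the sketch posts it as one message to stay with APIs that are certain to exist.

```python
# Illustrative only: channel_id/thread_ts come from the assistant thread
# context, and `events` is the async iterator produced by the call_llm sketch
# above (its event shape is an assumption).
from slack_sdk.web.async_client import AsyncWebClient


async def respond(client: AsyncWebClient, channel_id: str, thread_ts: str, events):
    chunks = []
    async for event in events:
        if event["type"] == "status":
            # Push the agent's current activity to the assistant status bar
            await client.assistant_threads_setStatus(
                channel_id=channel_id,
                thread_ts=thread_ts,
                status=event["text"],
            )
        else:
            chunks.append(event["text"])
    # Post the assembled response in the thread
    await client.chat_postMessage(channel=channel_id, thread_ts=thread_ts, text="".join(chunks))
```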
Why This Matters:
Users can see exactly what the assistant is doing (tool calls, agent delegations, processing steps) instead of a generic "is thinking..." placeholder.
Pattern Applicability:
This pattern works with any LLM framework; you just need to do the following (see the sketch after this list):
- Emit status events from your LLM framework alongside content events
- Call assistant_threads_setStatus with the loading_messages parameter
- Stream the response text with chat_stream
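To make the framework-agnostic point concrete, here is a sketch of the first step using the OpenAI Python SDK directly instead of Google ADK. The model name and status wording are placeholders, and a real adapter would also surface tool calls as status events.

```python
# Sketch: adapting a different framework (the OpenAI Python SDK) to the same
# status/content event contract. The Slack-side code above does not care where
# the events come from. Model name and status text are illustrative.
from openai import AsyncOpenAI


async def openai_events(prompt: str):
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    yield {"type": "status", "text": "is asking the model..."}
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    async for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.content:
            # Forward streamed tokens as content events
            yield {"type": "content", "text": delta.content}
```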
Additional Notes
See docs/MIGRATION_GUIDE.md for migration details.
Requirements
Updated dependencies are listed in requirements.txt.
Discussion Points
I'd love to hear your thoughts on this approach.
Thank you for considering this contribution!