v1.26.0

@zimeg zimeg released this 07 Oct 02:17
5f6196f

AI-Enabled Features: Loading States, Text Streaming, and Feedback Buttons

🍿 Preview

2025-10-06-loading-state-text-streaming-feedback.mov

📚 Changelog

⚡ Getting Started

Try the AI Agent Sample app to explore the AI-enabled features and existing Assistant helper:

# Create a new AI Agent app
$ slack create slack-ai-agent-app --template slack-samples/bolt-python-assistant-template
$ cd slack-ai-agent-app/

# Initialize Python Virtual Environment
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt

# Add your OPENAI_API_KEY
$ export OPENAI_API_KEY=sk-proj-ahM...

# Run the local dev server
$ slack run

After the app starts, send a message to the "slack-ai-agent-app" bot for a unique response.

⌛ Loading States

Loading states let you not only set the status (e.g. "My app is typing...") but also add some personality by cycling through a collection of loading messages.

Bolt Assistant Class usage:

@assistant.user_message
def respond_in_assistant_thread(
    client: WebClient,
    context: BoltContext,
    get_thread_context: GetThreadContext,
    logger: Logger,
    payload: dict,
    say: Say,
    set_status: SetStatus,
):
    set_status(
        status="thinking...",
        loading_messages=[
            "Teaching the hamsters to type faster…",
            "Untangling the internet cables…",
            "Consulting the office goldfish…",
            "Polishing up the response just for you…",
            "Convincing the AI to stop overthinking…",
        ],
    )

Web Client SDK usage:

@app.message()
def handle_message(client, context, event, message):
    # Derive identifiers from the incoming message
    channel_id = context.channel_id
    thread_ts = message.get("thread_ts", message["ts"])

    client.assistant_threads_setStatus(
        channel_id=channel_id,
        thread_ts=thread_ts,
        status="thinking...",
        loading_messages=[
            "Teaching the hamsters to type faster…",
            "Untangling the internet cables…",
            "Consulting the office goldfish…",
            "Polishing up the response just for you…",
            "Convincing the AI to stop overthinking…",
        ],
    )

🔮 Text Streaming Helper

The chat_stream() helper utility streamlines calls to the three text streaming methods:

# Start a new message stream
streamer = client.chat_stream(
    channel=channel_id,
    recipient_team_id=team_id,
    recipient_user_id=user_id,
    thread_ts=thread_ts,
)

# Loop over OpenAI response stream
# https://platform.openai.com/docs/api-reference/responses/create
for event in returned_message:
    if event.type == "response.output_text.delta":
        streamer.append(markdown_text=f"{event.delta}")
    else:
        continue

feedback_block = create_feedback_block()
streamer.stop(blocks=feedback_block)
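Appending on every delta can mean one API call per token. A small batching helper can coalesce deltas into larger chunks before appending; this is an illustrative sketch, not part of the SDK:

```python
def batch_deltas(deltas, min_chars=40):
    """Yield concatenated delta strings of at least min_chars each."""
    buffer = ""
    for delta in deltas:
        buffer += delta
        if len(buffer) >= min_chars:
            yield buffer
            buffer = ""
    if buffer:  # flush whatever remains
        yield buffer
```

With the streamer above, usage might look like `for chunk in batch_deltas(deltas): streamer.append(markdown_text=chunk)`, trading a little latency per chunk for far fewer API calls.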

🔠 Text Streaming Methods

As an alternative to the Text Streaming Helper, you can call the individual methods:

1) client.chat_startStream

First, start a chat text stream to stream a response to any message:

@app.message()
def handle_message(client, context, event, message):
    # Start a new message stream
    stream_response = client.chat_startStream(
        channel=channel_id,
        recipient_team_id=team_id,
        recipient_user_id=user_id,
        thread_ts=thread_ts,
    )
    stream_ts = stream_response["ts"]

2) client.chat_appendStream

After starting a chat text stream, you can then append text to it in chunks (often from your favourite LLM SDK) to convey a streaming effect:

for event in returned_message:
    if event.type == "response.output_text.delta":
        client.chat_appendStream(
            channel=channel_id, 
            ts=stream_ts, 
            markdown_text=f"{event.delta}"
        )
    else:
        continue

3) client.chat_stopStream

Lastly, you can stop the chat text stream to finalize your message:

client.chat_stopStream(
    channel=channel_id, 
    ts=stream_ts,
    blocks=feedback_block
)

👍🏻 Feedback Buttons

Add feedback buttons to the bottom of a message, after stopping a text stream, to gather user feedback:

from typing import List

from slack_sdk.models.blocks import (
    Block,
    ContextActionsBlock,
    FeedbackButtonObject,
    FeedbackButtonsElement,
)

def create_feedback_block() -> List[Block]:
    blocks: List[Block] = [
        ContextActionsBlock(
            elements=[
                FeedbackButtonsElement(
                    action_id="feedback",
                    positive_button=FeedbackButtonObject(
                        text="Good Response",
                        accessibility_label="Submit positive feedback on this response",
                        value="good-feedback",
                    ),
                    negative_button=FeedbackButtonObject(
                        text="Bad Response",
                        accessibility_label="Submit negative feedback on this response",
                        value="bad-feedback",
                    ),
                )
            ]
        )
    ]
    return blocks

@app.message()
def handle_message(message, client):
    # ... previous streaming code ...
    
    # Stop the stream and add feedback buttons
    feedback_block = create_feedback_block()
    client.chat_stopStream(
        channel=channel_id, 
        ts=stream_ts, 
        blocks=feedback_block
    )
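Once the buttons are posted, your app still needs to listen for presses. A hypothetical handler sketch follows; in a Bolt app it would be registered with `@app.action("feedback")` (matching the `action_id` above), and the payload shape here is an assumption based on standard block action payloads:

```python
def classify_feedback(value: str) -> str:
    """Map a feedback button value to a label for logging or analytics."""
    return "positive" if value == "good-feedback" else "negative"

def handle_feedback(ack, body, logger):
    """Register in a real app with: app.action("feedback")(handle_feedback)"""
    ack()  # acknowledge the interaction promptly
    value = body["actions"][0]["value"]  # "good-feedback" or "bad-feedback"
    logger.info(
        "Received %s feedback on ts=%s",
        classify_feedback(value),
        body.get("message", {}).get("ts"),
    )
```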

Ⓜ️ Markdown Text Support

chat_postMessage now supports the markdown_text parameter:

response = client.chat_postMessage(
    channel="C111",
    markdown_text=markdown_content,
)

Learn more in slackapi/python-slack-sdk#1718

🧩 Markdown Block

📚 https://docs.slack.dev/reference/block-kit/blocks/markdown-block/

from slack_sdk.models.blocks import MarkdownBlock
...

@app.message("hello")
def message_hello(say):
    say(
        blocks=[
            MarkdownBlock(text="**let's go!**"),
        ],
        text="let's go!",
    )

Learn more in slackapi/python-slack-sdk#1748

🎞️ Workflows Featured Methods

Add support for the workflows.featured.{add|list|remove|set} methods:

app.client.workflows_featured_add(channel_id="C0123456789", trigger_ids=["Ft0123456789"])
app.client.workflows_featured_list(channel_ids="C0123456789")
app.client.workflows_featured_remove(channel_id="C0123456789", trigger_ids=["Ft0123456789"])
app.client.workflows_featured_set(channel_id="C0123456789", trigger_ids=["Ft0123456789"])

Learn more in slackapi/python-slack-sdk#1712

What's Changed

👾 Enhancements

  • feat: add ai-enabled features text streaming methods, feedback blocks, and loading state in #1387 - Thanks @zimeg!

📚 Documentation

  • docs: add ai provider token instructions in #1340 - Thanks @zimeg!
  • docs: updates for combined quickstart in #1378 - Thanks @haleychaas!
  • docs: replace links from api.slack.com to docs.slack.dev redirects in #1383 - Thanks @zimeg!

🤖 Dependencies

  • chore(deps): update pytest-cov requirement from <7,>=3 to >=3,<8 in #1365 - Thanks @dependabot[bot]!
  • chore(deps): bump actions/setup-python from 5.6.0 to 6.0.0 in #1363 - Thanks @dependabot[bot]!
  • chore(deps): bump actions/checkout from 4.2.2 to 5.0.0 in #1362 - Thanks @dependabot[bot]!
  • chore(deps): bump actions/stale from 9.1.0 to 10.0.0 in #1361 - Thanks @dependabot[bot]!
  • chore(deps): bump codecov/codecov-action from 5.4.3 to 5.5.1 in #1364 - Thanks @dependabot[bot]!
  • build: require cheroot<11 with adapter test dependencies in #1375 - Thanks @zimeg!
  • build(deps): remove pytest lower bounds from testing requirements in #1333 - Thanks @zimeg!
  • chore(deps): bump mypy from 1.17.1 to 1.18.2 in #1379 - Thanks @dependabot[bot]!

🧰 Maintenance

  • ci: post regression notifications if scheduled tests do not succeed in #1376 - Thanks @zimeg!
  • build: install dependencies needed to autogenerate reference docs in #1377 - Thanks @zimeg!
  • version 1.26.0 in #1388 - Thanks @zimeg!

Milestone: https://github.com/slackapi/bolt-python/milestone/94?closed=1
Full Changelog: v1.25.0...v1.26.0
Package: https://pypi.org/project/slack-bolt/1.26.0/