
feat: AI-powered features (translation, summary, reformulation, suggested replies)#9127

Open
sk7n4k3d wants to merge 2 commits into element-hq:develop from sk7n4k3d:feature/ai-integration

Conversation

@sk7n4k3d

Summary

Full integration of AI-powered features using a local LLM (Ollama) via OpenAI-compatible API. Privacy-first: all AI processing happens on the user's own infrastructure, no data sent to third-party services.

Features

  • Auto-translation — Async background translation of timeline messages with LRU memory + disk cache, reply quote translation, "Show original" toggle, translated room list previews
  • Conversation summary — Summarize last 50 messages via toolbar menu, bottom sheet with copy button
  • Message reformulation — Long-press context menu in composer with 4 styles (Formal, Casual, Concise, Fix Grammar)
  • Suggested replies — 2-3 contextual reply chips above the composer
  • Notification summary — Auto-summarize missed notifications after 5+ minutes of absence, leverages the translation cache

Architecture

  • All features use TranslationService.complete() → single OpenAI-compatible API endpoint
  • Shared translation cache (5000 entries LRU + JSON disk persistence)
  • Rate limiter (3 concurrent requests max)
  • Debounced timeline cache invalidation (200ms batching)
  • Dedicated settings page with per-feature toggles
  • Configurable API URL, key, model, and target language
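
The 5000-entry LRU memory cache mentioned above can be sketched with a `LinkedHashMap` in access order; the class name and API here are assumptions for illustration, not taken from the PR:

```kotlin
// Hypothetical sketch of the shared LRU memory cache (names assumed).
// LinkedHashMap with accessOrder = true evicts the least recently used entry.
class TranslationLruCache(private val maxEntries: Int = 5000) {
    private val map = object : LinkedHashMap<String, String>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<String, String>): Boolean =
            size > maxEntries
    }

    @Synchronized fun get(key: String): String? = map[key]
    @Synchronized fun put(key: String, value: String) { map[key] = value }
    @Synchronized fun size(): Int = map.size
}
```

The JSON disk persistence layer would then serialize this map on top; only the in-memory part is shown here.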

Files changed (38 files, +2668 lines)

features/ai/
    reformulate/ReformulationBottomSheet.kt
    summary/ConversationSummaryBottomSheet.kt
    suggestedreply/SuggestedReplyService.kt, SuggestedReplyView.kt
    notificationsummary/NotificationSummaryService.kt, NotificationSummaryBottomSheet.kt
features/translation/
    TranslationService.kt, TranslateConfig.kt, TranslateApi.kt
    TranslationCache.kt, TranslationRateLimiter.kt
    TimelineTranslationManager.kt, VectorSettingsTranslationFragment.kt

Key design decisions

  • Privacy-first: No cloud AI services — works with Ollama, llama.cpp, or any OpenAI-compatible local endpoint
  • Cache-based room list previews: No extra API calls, reuses translations already in cache
  • Reply quote translation: Extracts quoted text from Matrix reply fallback, translates separately, injects into blockquote HTML
  • Debounced invalidation: Groups multiple translation completions into a single Epoxy model rebuild
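
The 200 ms debounced invalidation could look roughly like the following `Timer`-based stand-in; the real code presumably uses coroutines, and the class name is invented:

```kotlin
import java.util.Timer
import java.util.TimerTask

// Hypothetical sketch: coalesce bursts of translation completions so the
// Epoxy models are rebuilt once, delayMs after the last completion.
class Debouncer(private val delayMs: Long = 200) {
    private var timer: Timer? = null

    @Synchronized fun submit(action: () -> Unit) {
        timer?.cancel()                      // drop the previously scheduled run
        timer = Timer().apply {
            schedule(object : TimerTask() {
                override fun run() = action()
            }, delayMs)
        }
    }
}
```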

Test plan

  • Enable translation in Settings → Translation & AI
  • Configure Ollama URL and model
  • Verify auto-translation of messages in a room timeline
  • Verify "Show original ▾" toggle below translated messages
  • Verify translated previews in room list
  • Test conversation summary via ⋮ menu
  • Test reformulation via long-press in composer
  • Verify suggested reply chips above composer
  • Test notification summary after 5+ minutes of absence

🤖 Generated with Claude Code

…ation, suggested replies)

Add comprehensive AI features powered by local LLM (Ollama) via OpenAI-compatible API:

## Translation
- Auto-translate timeline messages with async background translation
- Translation cache (memory LRU + disk persistence) with debounced invalidation
- Reply quote translation (extracts and translates quoted text in replies)
- Preserve reply formatting (blockquote HTML) after translation
- Strip HTML tags, Matrix IDs, and HTML entities before translation
- "Show original" toggle below translated messages
- Translated room list previews (cache-based, no extra API calls)
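
The pre-translation stripping step could be sketched as below; the regexes and entity list are assumptions for illustration, not lifted from the PR:

```kotlin
// Hypothetical sketch: remove HTML tags, Matrix user IDs (@alice:example.org),
// and common HTML entities before handing text to the model.
fun sanitizeForTranslation(html: String): String =
    html.replace(Regex("<[^>]+>"), " ")
        .replace(Regex("@[A-Za-z0-9._=\\-/]+:[A-Za-z0-9.\\-]+"), " ")
        .replace("&amp;", "&")
        .replace("&lt;", "<")
        .replace("&gt;", ">")
        .replace("&quot;", "\"")
        .replace("&nbsp;", " ")
        .replace(Regex("\\s+"), " ")
        .trim()
```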

## Conversation Summary
- Summarize last 50 messages via toolbar menu
- Bottom sheet with loading state and copy button

## Message Reformulation
- Long-press context menu in composer ("Reformuler", French for "Rephrase")
- 4 styles: Formal, Casual, Concise, Fix Grammar
- Bottom sheet with style selection and result preview

## Suggested Replies
- 2-3 contextual reply chips above composer
- Auto-hide when user starts typing
- JSON parsing with line-by-line fallback
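
The "JSON parsing with line-by-line fallback" could be sketched like this regex-based version; the real implementation may well use Moshi, which the PR uses elsewhere, and the function name is made up:

```kotlin
// Hypothetical sketch: try to read the model output as a JSON string array;
// if it is not one, treat each non-empty line as a suggestion.
fun parseSuggestions(raw: String, max: Int = 3): List<String> {
    val jsonStrings = Regex("\"((?:[^\"\\\\]|\\\\.)*)\"")
    val trimmed = raw.trim()
    if (trimmed.startsWith("[") && trimmed.endsWith("]")) {
        val items = jsonStrings.findAll(trimmed).map { it.groupValues[1] }.toList()
        if (items.isNotEmpty()) return items.take(max)
    }
    return trimmed.lines()
        .map { it.trim().removePrefix("-").trim() }
        .filter { it.isNotEmpty() }
        .take(max)
}
```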

## Notification Summary
- Auto-summarize missed notifications after 5+ minutes of absence
- Uses translation cache for already-translated messages
- Bottom sheet with dismiss option

## Settings
- Full settings page under "Translation & AI"
- Per-feature toggles
- API configuration (URL, key, model, target language)
- Cache management (size display, clear button)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Security (Critical):
- Add signature permission on ElementAppFunctionService (was exported without protection)
- Use EncryptedSharedPreferences (MasterKey) for API key storage
- Enforce HTTPS for non-localhost API URLs + input length limit (4000 chars)
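
A sketch of that URL check, assuming `localhost`, `127.0.0.1`, and the Android emulator loopback `10.0.2.2` are the permitted plain-HTTP hosts; the exact allow-list in the PR may differ:

```kotlin
import java.net.URI

// Hypothetical sketch: plain HTTP is only allowed for local endpoints,
// everything else must use HTTPS.
fun isAllowedApiUrl(url: String): Boolean {
    val uri = try { URI(url) } catch (e: Exception) { return false }
    val host = uri.host ?: return false
    return when (uri.scheme) {
        "https" -> true
        "http" -> host in setOf("localhost", "127.0.0.1", "10.0.2.2")
        else -> false
    }
}
```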

Security (Medium):
- Atomic write for translation cache (temp file + rename)
- Remove message content from error logs
- Add URL validation in settings
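
The atomic cache write amounts to a temp-file-plus-rename on the same filesystem; a minimal sketch (function name invented for illustration):

```kotlin
import java.io.File

// Hypothetical sketch: write the serialized cache to a sibling temp file,
// then rename it over the target so readers never see a half-written file.
fun writeAtomically(target: File, contents: String) {
    val tmp = File(target.parentFile, target.name + ".tmp")
    tmp.writeText(contents)
    if (!tmp.renameTo(target)) {
        // renameTo can fail (e.g. across filesystems); fall back to copy + delete.
        tmp.copyTo(target, overwrite = true)
        tmp.delete()
    }
}
```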

Performance:
- Notification translation now cache-only (no network blocking)
- Replace delay(500) with polling loop in AppFunctions readMessages
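
Replacing `delay(500)` with a polling loop means probing with a deadline rather than sleeping a fixed interval. A thread-blocking stand-in (the coroutine version would call `delay()` in the loop; the helper name is assumed):

```kotlin
// Hypothetical sketch: probe repeatedly until a result appears or the deadline
// passes, instead of sleeping a fixed 500 ms and hoping the data has arrived.
fun <T> pollUntil(timeoutMs: Long, intervalMs: Long = 50, probe: () -> T?): T? {
    val deadline = System.currentTimeMillis() + timeoutMs
    while (System.currentTimeMillis() < deadline) {
        probe()?.let { return it }
        Thread.sleep(intervalMs)
    }
    return null
}
```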

Architecture:
- Replace lambda callbacks with FragmentResult API in bottom sheets
- Use WeakReference for singleton listener (prevent Activity leak)
- Add destroy() to TranslationCache and TimelineTranslationManager
- Add withTimeout(30s) to AppFunctions suspendCoroutine
- Truncate Bundle data to 500KB to prevent TransactionTooLargeException
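
The WeakReference listener fix could look like this minimal sketch (object and method names are invented, not from the PR):

```kotlin
import java.lang.ref.WeakReference

// Hypothetical sketch: a process-wide singleton holds its UI listener weakly,
// so an Activity registered as the listener can still be garbage-collected.
object TranslationEvents {
    private var listenerRef: WeakReference<(eventId: String) -> Unit>? = null

    fun setListener(listener: (eventId: String) -> Unit) {
        listenerRef = WeakReference(listener)
    }

    fun notifyTranslated(eventId: String) {
        listenerRef?.get()?.invoke(eventId)  // no-op once the listener is collected
    }
}
```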

Code Quality:
- Change all Timber.w(TRANSLATION_DEBUG) to Timber.d()
- Use dynamic targetLanguage instead of hardcoded French
- Move hardcoded string to string resources

New: AppFunctions (Android 16 inter-app API):
- ElementAppFunctionService: searchMessages, readMessages, sendMessage, listRooms, summarizeRoom, getUnreadSummary
- Intent-based fallback API for non-AppFunctions consumers
- Data models with Moshi JSON serialization

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>