Code Review
This pull request introduces EasyPanel deployment support with new configuration files and documentation, alongside a new ChatNew event type to detect new chat initiations. The ChatNew event is triggered by incoming/outgoing messages or calls, utilizing a knownChatsCache and database lookups. Review comments suggest addressing potential memory exhaustion in knownChatsCache due to NoExpiration, improving the efficiency of preloadKnownChats and isNewChat functions which currently query the database, and handling database query errors in isNewChat to prevent missed events. Additionally, there's a recommendation to sanitize extraData in fireChatNewEvent to prevent injection vulnerabilities and to optimize the new chat detection logic to avoid performance bottlenecks.
```go
func fireChatNewEvent(mycli *MyClient, chatJID string, triggerEvent string, extraData map[string]interface{}) {
	chatNewPostmap := make(map[string]interface{})
	chatNewPostmap["type"] = "ChatNew"
	chatNewPostmap["chat"] = chatJID
	chatNewPostmap["trigger"] = triggerEvent
	for k, v := range extraData {
		chatNewPostmap[k] = v
	}
```
The fireChatNewEvent function directly passes extraData to the chatNewPostmap. Ensure that all values in extraData are properly sanitized and validated to prevent potential injection attacks or data corruption. Consider defining a strict schema for extraData to enforce type safety and prevent unexpected values.
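One way to enforce a schema is to copy only whitelisted keys into the outgoing payload before merging. A minimal sketch (the names `allowedExtraKeys` and `sanitizeExtraData` are hypothetical, not part of the PR):

```go
package main

import "fmt"

// allowedExtraKeys is a hypothetical whitelist: only these keys from
// extraData are copied into the outgoing event payload.
var allowedExtraKeys = map[string]bool{
	"phone": true, "sender": true, "pushName": true,
	"isGroup": true, "isFromMe": true, "timestamp": true, "messageID": true,
}

// sanitizeExtraData returns a copy of extraData containing only
// whitelisted keys, silently dropping anything unexpected.
func sanitizeExtraData(extraData map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(extraData))
	for k, v := range extraData {
		if allowedExtraKeys[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	in := map[string]interface{}{"phone": "123", "evil": "<script>"}
	fmt.Println(sanitizeExtraData(in)) // the "evil" key is dropped
}
```

Because the whitelist also fixes the set of reserved keys, this incidentally prevents `extraData` from overwriting `type`, `chat`, or `trigger` in the postmap.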
```go
killchannel = make(map[string](chan bool))
userinfocache = cache.New(5*time.Minute, 10*time.Minute)
lastMessageCache = cache.New(24*time.Hour, 24*time.Hour)
knownChatsCache = cache.New(cache.NoExpiration, cache.NoExpiration) // Tracks known chat JIDs per user for ChatNew event detection
```
The knownChatsCache is initialized with cache.NoExpiration. This means the cache will grow indefinitely, potentially leading to memory exhaustion. Consider setting a reasonable expiration time and/or a maximum size for the cache to prevent memory leaks. Also, consider adding a mechanism to evict entries based on least-recently-used (LRU) or other eviction policies.
```go
func preloadKnownChats(s *server, userID string) {
	var chatJIDs []string
	var query string
	if s.db.DriverName() == "sqlite" {
		query = `SELECT DISTINCT chat_jid FROM message_history WHERE user_id = ?`
	} else {
		query = `SELECT DISTINCT chat_jid FROM message_history WHERE user_id = $1`
	}
	err := s.db.Select(&chatJIDs, query, userID)
	if err != nil {
		log.Error().Err(err).Str("userID", userID).Msg("Failed to preload known chats from message_history")
		return
	}
	for _, chatJID := range chatJIDs {
		knownChatsCache.Set(userID+":"+chatJID, true, cache.NoExpiration)
	}
	log.Info().Str("userID", userID).Int("count", len(chatJIDs)).Msg("Preloaded known chats into cache")
}
```
This function preloadKnownChats directly queries the database, which can be slow, especially with a large message history. Consider using pagination or streaming to process the results in smaller chunks, preventing potential memory issues and improving performance. Also, consider adding a configurable option to disable preloading of known chats, as this might not be needed in all deployments.
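Pagination here would mean adding `LIMIT ? OFFSET ?` to the query and looping until a short page is returned. A sketch of that batching loop, with the query abstracted behind a `fetchPage` callback so the paging logic stands alone (`preloadInBatches` is a hypothetical helper, not code from the PR):

```go
package main

import "fmt"

// preloadInBatches pulls chat JIDs page by page instead of in one
// unbounded SELECT, calling register for each JID. fetchPage stands in
// for a `SELECT DISTINCT chat_jid ... LIMIT ? OFFSET ?` query.
func preloadInBatches(fetchPage func(limit, offset int) []string, pageSize int, register func(string)) int {
	total := 0
	for offset := 0; ; offset += pageSize {
		page := fetchPage(pageSize, offset)
		if len(page) == 0 {
			break
		}
		for _, jid := range page {
			register(jid)
		}
		total += len(page)
		if len(page) < pageSize {
			break // short page: no more rows
		}
	}
	return total
}

func main() {
	all := []string{"a", "b", "c", "d", "e"}
	fetch := func(limit, offset int) []string {
		if offset >= len(all) {
			return nil
		}
		end := offset + limit
		if end > len(all) {
			end = len(all)
		}
		return all[offset:end]
	}
	seen := map[string]bool{}
	n := preloadInBatches(fetch, 2, func(jid string) { seen[jid] = true })
	fmt.Println(n, len(seen)) // 5 5
}
```

With a real driver, `rows, err := db.Queryx(query, userID, limit, offset)` plus row-by-row scanning would avoid materializing the whole result set at once; a config flag could then skip the call to this function entirely.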
```go
func isNewChat(mycli *MyClient, chatJID string) bool {
	cacheKey := mycli.userID + ":" + chatJID

	// Check if already known in cache
	if _, found := knownChatsCache.Get(cacheKey); found {
		return false
	}

	// Check if chat exists in message_history (persistent storage)
	var count int
	var query string
	if mycli.db.DriverName() == "sqlite" {
		query = `SELECT COUNT(*) FROM message_history WHERE user_id = ? AND chat_jid = ? LIMIT 1`
	} else {
		query = `SELECT COUNT(*) FROM message_history WHERE user_id = $1 AND chat_jid = $2 LIMIT 1`
	}
	err := mycli.db.Get(&count, query, mycli.userID, chatJID)
	if err != nil {
		log.Error().Err(err).Str("userID", mycli.userID).Str("chatJID", chatJID).Msg("Failed to check if chat exists in history")
		// On error, register it and don't fire the event to avoid false positives
		knownChatsCache.Set(cacheKey, true, cache.NoExpiration)
		return false
	}

	if count > 0 {
		// Chat already exists in DB, register in cache and return false
		knownChatsCache.Set(cacheKey, true, cache.NoExpiration)
		return false
	}

	// Chat is new! Register it in cache
	knownChatsCache.Set(cacheKey, true, cache.NoExpiration)
	return true
}
```
The isNewChat function checks for existing chats by querying the database, which can be inefficient. Consider implementing a more efficient caching mechanism (e.g., using a Bloom filter) to reduce database load. Also, consider adding metrics to track the cache hit rate to evaluate its effectiveness.
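A Bloom filter answers "definitely never seen" without touching the database, so only "maybe seen" keys pay for a DB lookup. A toy double-hashing filter in the standard library, to illustrate the idea (the `bloom` type and its sizing are illustrative; a production deployment would more likely use an existing library):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a toy Bloom filter: lookups can give false positives but
// never false negatives, so "maybe present" still needs a DB check,
// while "definitely absent" can skip it entirely.
type bloom struct {
	bits []uint64
	m    uint64 // number of bits
	k    int    // number of hash functions
}

func newBloom(m uint64, k int) *bloom {
	return &bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
}

// indexes derives k bit positions via double hashing: h1 + i*h2.
func (b *bloom) indexes(key string) []uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	h1 := h.Sum64()
	h2 := h1>>33 | h1<<31 // cheap second hash derived from the first
	if h2 == 0 {
		h2 = 0x9e3779b97f4a7c15
	}
	idx := make([]uint64, b.k)
	for i := 0; i < b.k; i++ {
		idx[i] = (h1 + uint64(i)*h2) % b.m
	}
	return idx
}

func (b *bloom) Add(key string) {
	for _, i := range b.indexes(key) {
		b.bits[i/64] |= 1 << (i % 64)
	}
}

func (b *bloom) MaybeContains(key string) bool {
	for _, i := range b.indexes(key) {
		if b.bits[i/64]&(1<<(i%64)) == 0 {
			return false // at least one bit unset: definitely never added
		}
	}
	return true // all bits set: possibly added
}

func main() {
	f := newBloom(1<<16, 4)
	f.Add("user1:chat@s.whatsapp.net")
	fmt.Println(f.MaybeContains("user1:chat@s.whatsapp.net")) // true
}
```

Note a Bloom filter cannot replace the cache for this feature on its own: a false positive would suppress a legitimate ChatNew event unless "maybe present" still falls through to the `message_history` query.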
```go
	err := mycli.db.Get(&count, query, mycli.userID, chatJID)
	if err != nil {
		log.Error().Err(err).Str("userID", mycli.userID).Str("chatJID", chatJID).Msg("Failed to check if chat exists in history")
		// On error, register it and don't fire the event to avoid false positives
		knownChatsCache.Set(cacheKey, true, cache.NoExpiration)
		return false
```
In the isNewChat function, if an error occurs during the database query, the code registers the chat and returns false to avoid false positives. However, this could lead to missed 'ChatNew' events if the database is temporarily unavailable. Consider adding retry logic or a mechanism to re-evaluate these chats later.
```go
	chatJIDStr := evt.Info.Chat.String()
	if !evt.Info.IsGroup && strings.HasSuffix(chatJIDStr, "@s.whatsapp.net") && isNewChat(mycli, chatJIDStr) {
		extraData := map[string]interface{}{
			"phone":     evt.Info.Chat.User,
			"sender":    evt.Info.Sender.String(),
			"pushName":  evt.Info.PushName,
			"isGroup":   false,
			"isFromMe":  evt.Info.IsFromMe,
			"timestamp": evt.Info.Timestamp,
			"messageID": evt.Info.ID,
		}
		go fireChatNewEvent(mycli, chatJIDStr, "message", extraData)
	}
```
The logic to detect new chats is performed inline within the message processing logic. This could introduce performance bottlenecks, especially under high load. Consider moving this logic to a separate goroutine or using a more efficient event-driven architecture to avoid blocking the main message processing flow.
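One event-driven shape: the message handler only enqueues the chat JID on a buffered channel, and a single worker goroutine performs the cache/DB check and fires the event. A self-contained sketch of that decoupling (the `chatCheck` type and `detectNewChats` function are hypothetical stand-ins for wiring this into the real handler):

```go
package main

import "fmt"

// chatCheck carries what the detector needs so the message handler
// only performs a cheap channel send on the hot path.
type chatCheck struct {
	userID  string
	chatJID string
}

// detectNewChats drains chat JIDs through a buffered channel into a
// single worker goroutine — a stand-in for moving isNewChat and the
// event firing off the message-processing path. It returns the
// distinct chats in first-seen order.
func detectNewChats(userID string, jids []string) []string {
	checks := make(chan chatCheck, 256) // buffered: senders rarely block on a slow DB
	done := make(chan []string)
	go func() {
		seen := make(map[string]bool)
		var order []string
		for c := range checks {
			key := c.userID + ":" + c.chatJID
			if !seen[key] {
				seen[key] = true
				order = append(order, c.chatJID) // fireChatNewEvent would go here
			}
		}
		done <- order
	}()
	for _, jid := range jids {
		checks <- chatCheck{userID: userID, chatJID: jid}
	}
	close(checks)
	return <-done
}

func main() {
	got := detectNewChats("user1", []string{"a@s.whatsapp.net", "a@s.whatsapp.net", "b@s.whatsapp.net"})
	fmt.Println(got) // [a@s.whatsapp.net b@s.whatsapp.net]
}
```

A single worker also serializes the check-then-set on the cache, removing the race where two concurrent messages for the same new chat could both fire ChatNew.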