Description
Hi,
I am working on a dataset that contains a large chunk of duplicate/near-duplicate data. I am using HDBSCAN for clustering, but the duplicated data ends up split across multiple clusters.
Ideally this shouldn't happen and the whole chunk should land in a single cluster. I understand this may be an effect of running UMAP on the BERT embeddings, but dimensionality reduction is required and can't be bypassed.
The idea of deduplicating — dropping the duplicates, running HDBSCAN, and then remapping the dropped points to their designated clusters — works to a good extent, but I am bound to NOT drop any datapoints for clustering.
I also came up with a method that remaps the duplicates to the right cluster without dropping them, but this feels like a workaround rather than a real fix.
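For concreteness, a minimal sketch of the deduplicate-and-remap idea — using NumPy's `unique`/`return_inverse` to broadcast labels back to every row, and a trivial stand-in clustering function in place of an actual `HDBSCAN(...).fit_predict` call, since my exact pipeline isn't shown here:

```python
import numpy as np

def cluster_without_duplicates(embeddings, cluster_fn):
    """Cluster only the unique rows, then broadcast the labels back
    so that every original row (duplicates included) gets a label.

    cluster_fn: any callable mapping an (n_unique, d) array to
    n_unique integer labels (e.g. a wrapper around HDBSCAN's
    fit_predict -- hypothetical here).
    """
    # unique_rows: deduplicated embeddings; inverse[i] gives the index
    # of the unique row that original row i corresponds to.
    unique_rows, inverse = np.unique(embeddings, axis=0, return_inverse=True)
    unique_labels = np.asarray(cluster_fn(unique_rows))
    # Remap: identical rows automatically receive identical labels.
    return unique_labels[inverse]

# Toy example: repeated copies of two well-separated points.
# The stand-in "clusterer" just thresholds the first coordinate.
X = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [5.0, 5.0],
              [5.0, 5.0],
              [0.0, 0.0]])
labels = cluster_without_duplicates(X, lambda U: (U[:, 0] > 2.5).astype(int))
# Every duplicate of the same point shares one label.
```

This only covers exact duplicates; near-duplicates would need a tolerance-based grouping step before the `np.unique` call.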
Is there a proper fix for this issue, or is it inherent to how HDBSCAN is designed?
Thanks for your time and efforts.
Regards,
Sayyam