Welcome to the Daily Paper Reading Tracker! Stay on top of your research by keeping track of what you're reading, what's on your radar, and what you've found insightful. 📚
- ⏳ Plan to Read – Papers on the reading list
- ⭐ Good Paper – Recommended reads
- ✔️ Completed – Finished this week
- 💬 Discussion – Group meeting topic
Diffusion-Based Planning for Autonomous Driving with Flexible Guidance
Surrogate Gap Minimization Improves Sharpness-Aware Training
⭐ Open-Set Recognition: A Good Closed-Set Classifier is All You Need
A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Revisiting Test Time Adaptation under Online Evaluation
Evaluating Continual Test-Time Adaptation for Contextual and Semantic Domain Shifts
ActMAD: Activation Matching to Align Distributions for Test-Time-Training
Test-Time Training with Masked Autoencoders
TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?
Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering
Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
⭐ Better Aggregation in Test-Time Augmentation
Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models
Improved Test-time Adaptation for Domain Generalization
⭐ SODA: Robust Training of Test-Time Data Adaptors
ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation
Towards Open-Set Test-Time Adaptation Utilizing the Wisdom of Crowds in Entropy Minimization
Label Shift Adapter for Test-Time Adaptation under Covariate and Label Shifts
Uncovering Adversarial Risks of Test-Time Adaptation
Gradual Test-Time Adaptation by Self-Training and Style Transfer
⭐ Introducing Intermediate Domains for Effective Self-Training during Test-Time
Rethinking Precision of Pseudo Label: Test-Time Adaptation via Complementary Learning (done; notes not yet updated)
✔️ Adaptive Domain Generalization via Online Disagreement Minimization
✔️ Covariance-aware Feature Alignment with Pre-computed Source Statistics for Test-time Adaptation
✔️ SATA: Source Anchoring and Target Alignment Network for Continual Test Time Adaptation
✔️ CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
⭐ Towards Understanding GD with Hard and Conjugate Pseudo-labels for Test-Time Adaptation
✔️ Benchmarking Test-time Unsupervised Deep Neural Network Adaptation on Edge Devices
✔️ Learning to Adapt to Online Streams with Distribution Shifts
✔️ A Simple Test-time Adaptation Method for Source-free Domain Generalization
On Pitfalls of Test-Time Adaptation
Feature Alignment and Uniformity for Test Time Adaptation
A Probabilistic Framework for Lifelong Test-Time Adaptation
TIPI: Test time adaptation with transformation invariance
EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization
Robust mean teacher for continual and gradual test-time adaptation
Multi-step test-time adaptation with entropy minimization and pseudo-labeling
Robust Test-Time Adaptation in Dynamic Scenarios
TeSLA: Test-time self-learning with automatic adversarial augmentation
DELTA: degradation-free fully test-time adaptation
MECTA: Memory-Economic Continual Test-Time Model Adaptation
Parameter-free Online Test-time Adaptation
Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation
MixNorm: Test-Time Adaptation Through Online Normalization Estimation
Domain Alignment Meets Fully Test-Time Adaptation
Test-time Adaptation via Conjugate Pseudo-Labels
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation
Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization
Test-time Batch Statistics Calibration for Covariate Shift
Domain-agnostic Test-time Adaptation by Prototypical Training with Auxiliary Data
Test time Adaptation through Perturbation Robustness
Continual Test-Time Domain Adaptation
MEMO: Test Time Robustness via Adaptation and Augmentation
⭐ Extrapolative Continuous-time Bayesian Neural Network for Fast Training-free Test-time Adaptation
TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation
The Norm Must Go On: Dynamic Unsupervised Domain Adaptation by Normalization
Online Adaptation to Label Distribution Shift
Improving robustness against common corruptions by covariate shift adaptation
MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation
Efficient Test-Time Model Adaptation without Forgetting
⭐ Back to the Source: Diffusion-Driven Test-Time Adaptation
Test-Time Adaptation via Self-Training with Nearest Neighbor Information
NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation
Towards Stable Test-time Adaptation in Dynamic Wild World
💬 Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation
⭐ (medical) Test-Time Unsupervised Domain Adaptation
Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification
⭐ ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot
✔️ Posterior Re-calibration for Imbalanced Datasets
⭐ Does Robustness on ImageNet Transfer to Downstream Tasks?
Category Contrast for Unsupervised Domain Adaptation in Visual Tasks
Graph-Relational Domain Adaptation
Taskonomy: Disentangling Task Transfer Learning
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
f-Domain-Adversarial Learning: Theory and Algorithms
⭐ Dirichlet-based Uncertainty Calibration for Active Domain Adaptation
⭐ Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation
Cycle Self-Training for Domain Adaptation
Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning
Toalign: Task-oriented alignment for unsupervised domain adaptation
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
⭐ Upcycling Models under Domain and Category Shift
HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
Unsupervised Domain Adaptation for Semantic Segmentation using Depth Distribution
Pixel-by-Pixel Cross-Domain Alignment for Few-Shot Semantic Segmentation
TACS: Taxonomy Adaptive Cross-Domain Semantic Segmentation
⭐ Domain Transfer through Deep Activation Matching
GIPSO: Geometrically Informed Propagation for Online Adaptation in 3D LiDAR Segmentation
4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks
Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
Segmenter: Transformer for Semantic Segmentation
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
⭐ Denoising Pretraining for Semantic Segmentation
⭐ Language-driven Semantic Segmentation
Active Boundary Loss for Semantic Segmentation
Segfix: Model-agnostic boundary refinement for segmentation
⭐ Segment Anything
SWAD: Domain Generalization by Seeking Flat Minima
Domain Generalization by Learning and Removing Domain-specific Features
Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization
⭐ Sparse Mixture-of-Experts are Domain Generalizable Learners
Assaying Out-Of-Distribution Generalization in Transfer Learning
Visual Prompting via Image Inpainting
Uncertainty Modeling for Out-of-Distribution Generalization
Delving Deep into the Generalization of Vision Transformers under Distribution Shifts
A Fine-Grained Analysis on Distribution Shift
⭐ Generalization to Out-of-Distribution transformations
⭐ Agree to Disagree: Diversity through Disagreement for Better Transferability
Mitigating Neural Network Overconfidence with Logit Normalization
⭐ Beyond AUROC & Co. for Evaluating Out-of-Distribution Detection Performance
⭐ Decoupling MaxLogit for Out-of-Distribution Detection
Positive-Unlabeled Learning with Non-Negative Risk Estimator
Learning To Prompt for Continual Learning
SOLO: Segmenting Objects by Locations
SOLOv2: Dynamic and Fast Instance Segmentation
Freesolo: Learning to segment objects without annotations
Dense Contrastive Learning for Self-Supervised Visual Pre-Training
⭐ Cut and Learn for Unsupervised Object Detection and Instance Segmentation
⭐ Transformers are Sample-Efficient World Models
Revisiting the Calibration of Modern Neural Networks
⭐ Mitigating Bias in Calibration Error Estimation
⭐ LogME: Practical Assessment of Pre-trained Models for Transfer Learning
💬 Transferability Estimation Using Bhattacharyya Class Separability
💬 LEEP: A new measure to evaluate transferability of learned representations
⭐ Scalable Diverse Model Selection for Accessible Transfer Learning
⭐ Transferability Metrics for Selecting Source Model Ensembles
⭐ Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs
Dataset Distillation by Matching Training Trajectories
A Simple Framework for Open-Vocabulary Segmentation and Detection