Dequan Wang 王德泉
Adaptation
Back to the Source: Diffusion-Driven Adaptation to Test-Time Corruption
Our diffusion-driven adaptation method, DDA, shares its models for classification and generation across all domains. Both models are trained on the source domain, then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt.
Jin Gao
,
Jialing Zhang
,
Xihui Liu
,
Trevor Darrell
,
Evan Shelhamer
,
Dequan Wang
PDF
Cite
Code
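To make the self-ensembling step concrete, here is a minimal, hypothetical PyTorch sketch of confidence-weighted fusion between the prediction on the original target input and the prediction on its diffusion-projected counterpart. The function name and the exact fusion rule are illustrative assumptions, not the released DDA implementation; image guidance and the diffusion projection itself are omitted, since both the classifier and the diffusion model stay frozen.

```python
import torch

def self_ensemble(logits_orig: torch.Tensor, logits_dda: torch.Tensor) -> torch.Tensor:
    """Fuse predictions on the original input and on the input projected
    back toward the source domain by the diffusion model. Per-sample
    confidence (max softmax probability) decides how much to adapt.
    NOTE: illustrative only; the exact rule in DDA follows the paper."""
    p_orig = logits_orig.softmax(dim=1)
    p_dda = logits_dda.softmax(dim=1)
    conf_orig = p_orig.max(dim=1).values
    conf_dda = p_dda.max(dim=1).values
    w = conf_dda / (conf_orig + conf_dda + 1e-8)  # weight for the adapted view
    return (1 - w).unsqueeze(1) * p_orig + w.unsqueeze(1) * p_dda
```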
Contrastive Test-time Adaptation
We introduce AdaContrast, a novel test-time adaptation strategy that exploits pairwise information among target samples through self-supervised contrastive learning on the target domain, optimized jointly with pseudo-labeling. A minimal sketch of the joint objective follows this entry.
Dian Chen
,
Dequan Wang
,
Trevor Darrell
,
Sayna Ebrahimi
PDF
Cite
Code
Project
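As a rough illustration of the joint objective, here is a MoCo-style sketch in PyTorch: an InfoNCE term over two augmented views of each target sample against a memory queue, plus cross-entropy to pseudo-labels. All names and the queue mechanics are assumptions for exposition; the pseudo-label refinement (nearest neighbors in feature space) and the full pipeline follow the paper and released code.

```python
import torch
import torch.nn.functional as F

def adacontrast_style_loss(feat_q, feat_k, queue, logits, pseudo_labels, tau=0.07):
    """Joint test-time objective (illustrative): an InfoNCE term between two
    augmented views (q positive to k, negative to a memory queue) plus
    cross-entropy against pseudo-labels for the same target batch."""
    q = F.normalize(feat_q, dim=1)
    k = F.normalize(feat_k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)            # (N, 1) positives
    l_neg = q @ F.normalize(queue, dim=1).t()           # (N, K) negatives
    ctr_logits = torch.cat([l_pos, l_neg], dim=1) / tau
    ctr_labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    loss_ctr = F.cross_entropy(ctr_logits, ctr_labels)  # contrastive term
    loss_pl = F.cross_entropy(logits, pseudo_labels)    # pseudo-label term
    return loss_ctr + loss_pl
```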
On-target Adaptation
For most adaptation methods, the bulk of the parameter updates to the model representation and the classifier are derived from the source rather than the target. However, target accuracy is the goal, so we argue for optimizing as much as possible on the target data.
Dequan Wang
,
Shaoteng Liu
,
Sayna Ebrahimi
,
Evan Shelhamer
,
Trevor Darrell
PDF
Cite
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
Dent improves robust accuracy against AutoAttack on CIFAR-10 by more than 30% (relative) while preserving clean accuracy. Static defenses alter training, while dent alters testing; this separation of concerns makes dent compatible with many existing models and defenses. A minimal sketch of an episodic test-time defense follows this entry.
Dequan Wang
,
An Ju
,
Evan Shelhamer
,
David Wagner
,
Trevor Darrell
PDF
Cite
Code
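Here is a minimal, hypothetical PyTorch sketch of an episodic entropy-minimization defense: adapt on each incoming batch, predict, then restore the saved parameters so that static defenses and models stay untouched. This is a simplification of dent, which additionally modulates parameters sample-wise; see the released code for the real defense.

```python
import copy
import torch

class EpisodicDefense:
    """Wrap a (possibly statically defended) classifier: adapt on each test
    batch by entropy minimization, predict, then reset to the saved state."""
    def __init__(self, model, lr=1e-3, steps=1):
        self.model = model
        self.lr, self.steps = lr, steps
        self.saved = copy.deepcopy(model.state_dict())

    def __call__(self, x):
        params = [p for p in self.model.parameters() if p.requires_grad]
        opt = torch.optim.Adam(params, lr=self.lr)
        for _ in range(self.steps):
            logits = self.model(x)
            probs = logits.softmax(dim=1)
            loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            logits = self.model(x)
        self.model.load_state_dict(self.saved)  # episodic: reset after each batch
        return logits
```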
Tent: Fully Test-time Adaptation by Entropy Minimization
Tent equips a model to adapt itself to new and different data during testing ☀️ 🌧 ❄️. Tented models adapt online and batch-by-batch to reduce error on dataset shifts like corruptions, simulation-to-real discrepancies, and other differences between training and testing data.
Dequan Wang
,
Evan Shelhamer
,
Shaoteng Liu
,
Bruno Olshausen
,
Trevor Darrell
PDF
Cite
Code
Video
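A minimal PyTorch sketch of the tent update, assuming a batch-norm based classifier: only the BN affine parameters adapt, BN uses test-batch statistics, and each batch takes one gradient step on the mean prediction entropy. Helper names here are illustrative; the released code is the reference implementation.

```python
import torch
import torch.nn as nn

def configure_model(model: nn.Module) -> nn.Module:
    """Adapt only batch-norm affine parameters, with test-batch statistics."""
    model.train()                      # use batch statistics in BN
    model.requires_grad_(False)
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.requires_grad_(True)
            m.track_running_stats = False
            m.running_mean, m.running_var = None, None
    return model

def tent_step(model, x, optimizer):
    """One online step: predict, minimize mean prediction entropy, update."""
    logits = model(x)
    log_probs = logits.log_softmax(dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits
```

A typical optimizer would be SGD or Adam over the parameters left trainable by configure_model, so each test batch both yields a prediction and refines the model for the next one.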
Dynamic Scale Inference by Entropy Minimization
Receptive field scale is made dynamic and optimized according to the output at test time: we adjust receptive field scales and filter parameters to minimize the entropy of the output. This gives a modest refinement when training and testing at the same scale, and improves generalization when testing at different scales. A minimal sketch of entropy-minimizing scale inference follows this entry.
Dequan Wang
,
Evan Shelhamer
,
Bruno Olshausen
,
Trevor Darrell
PDF
Cite
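The paper optimizes receptive field scales inside the network; as a self-contained stand-in, the hypothetical sketch below optimizes a single differentiable zoom factor on the input to minimize prediction entropy, illustrating the same test-time objective with the simplest possible scale parameter.

```python
import torch
import torch.nn.functional as F

def infer_with_dynamic_scale(model, x, steps=10, lr=0.1):
    """Entropy-minimizing scale inference (illustrative): optimize one zoom
    factor for the batch via a differentiable affine warp of the input."""
    model.eval()  # the network stays fixed; only the scale adapts
    log_scale = torch.zeros(1, device=x.device, requires_grad=True)
    opt = torch.optim.SGD([log_scale], lr=lr)

    def rescaled(x, log_scale):
        inv = (-log_scale).exp()  # sampling at coords/s magnifies by s
        zero = torch.zeros_like(inv)
        theta = torch.stack([torch.cat([inv, zero, zero]),
                             torch.cat([zero, inv, zero])])
        theta = theta.unsqueeze(0).expand(x.size(0), -1, -1)
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    for _ in range(steps):
        logits = model(rescaled(x, log_scale))
        probs = logits.softmax(dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    with torch.no_grad():
        return model(rescaled(x, log_scale))
```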
VisDA: The Visual Domain Adaptation Challenge
It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when the model is presented with data from a new deployment domain which it did not see in training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains.
Xingchao Peng
,
Ben Usman
,
Neela Kaushik
,
Judy Hoffman
,
Dequan Wang
,
Kate Saenko
PDF
Cite
Dataset
Project
FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation
While performance is improving for segmentation models trained and evaluated on the same data source, there has been limited research on how well these models transfer to new, related domains. We propose the first unsupervised domain adaptation method for transferring semantic segmentation FCNs across image domains. A minimal sketch of the adversarial alignment follows this entry.
Judy Hoffman
,
Dequan Wang
,
Fisher Yu
,
Trevor Darrell
PDF
Cite
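For the pixel-level adversarial part, here is a minimal, hypothetical PyTorch sketch: a small convolutional domain discriminator scores per-location FCN features, the discriminator learns to separate source from target, and the segmenter earns a confusion loss for fooling it. The architecture and names are assumptions for illustration, and the constraint-based (label statistics) term is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """Score source vs. target from per-location FCN features."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1))

    def forward(self, feat):
        return self.net(feat)  # (N, 1, H, W) domain logits

def adversarial_losses(disc, feat_src, feat_tgt):
    """Discriminator loss separates the domains; the confusion loss trains
    the segmenter so its target features look like source features."""
    d_src = disc(feat_src.detach())   # detach: only the discriminator updates
    d_tgt = disc(feat_tgt.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    d_tgt_g = disc(feat_tgt)          # gradient flows back to the segmenter here
    loss_g = F.binary_cross_entropy_with_logits(d_tgt_g, torch.ones_like(d_tgt_g))
    return loss_d, loss_g
```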