Dequan Wang 王德泉
Publications
On-target Adaptation
In most adaptation methods, the parameter updates for the model representation and the classifier are derived largely from the source rather than the target. However, target accuracy is the goal, so we argue for optimizing as much as possible on the target data.
Dequan Wang
,
Shaoteng Liu
,
Sayna Ebrahimi
,
Evan Shelhamer
,
Trevor Darrell
PDF
Cite
arXiv
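The core recipe, optimizing on unlabeled target data by minimizing prediction entropy, can be sketched as a toy NumPy example (hypothetical: a per-class output bias stands in for the network parameters such methods actually update, and a numerical gradient replaces backprop):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(logits):
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def adapt_step(logits, bias, lr=0.2, eps=1e-4):
    # One numerical-gradient descent step on prediction entropy
    # with respect to a per-class bias; no labels are needed.
    grad = np.zeros_like(bias)
    for k in range(bias.size):
        up, down = bias.copy(), bias.copy()
        up[k] += eps
        down[k] -= eps
        grad[k] = (mean_entropy(logits + up)
                   - mean_entropy(logits + down)) / (2 * eps)
    return bias - lr * grad

# Unlabeled "target" logits: uncertain predictions for two inputs.
logits = np.array([[1.0, 0.8, 0.1],
                   [0.2, 1.1, 0.9]])
bias = np.zeros(3)
h0 = mean_entropy(logits + bias)
for _ in range(20):
    bias = adapt_step(logits, bias)
h1 = mean_entropy(logits + bias)
# predictions sharpen: entropy drops without any target labels
```

The point of the sketch is only the supervision-free objective: every update is computed from the target predictions themselves.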
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
Dent improves adversarial/robust accuracy by more than 30% (relative) against AutoAttack on CIFAR-10 while preserving natural/clean accuracy. Static defenses alter training, while dent alters testing; this separation of concerns makes dent compatible with many existing models and defenses.
Dequan Wang
,
An Ju
,
Evan Shelhamer
,
David Wagner
,
Trevor Darrell
PDF
Cite
Code
arXiv
Dynamic Scale Inference by Entropy Minimization
Dynamic receptive field scale is optimized according to the output at test time: receptive field scales and filter parameters are updated to minimize the output entropy. This gives a modest refinement when training and testing at the same scale, and improves generalization when testing at different scales.
Dequan Wang
,
Evan Shelhamer
,
Bruno Olshausen
,
Trevor Darrell
PDF
Cite
arXiv
Blurring the Line Between Structure and Learning to Optimize and Adapt Receptive Fields
We compose free-form filters and structured Gaussian filters by convolution to define a more general family of semi-structured filters than either can learn alone. Our composition makes receptive field scale, aspect, and orientation differentiable in a low-dimensional parameterization for efficient end-to-end learning.
Evan Shelhamer
,
Dequan Wang
,
Trevor Darrell
PDF
Cite
arXiv
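The composition itself is simple to sketch (a hypothetical NumPy illustration; in the paper the composed filters live inside a CNN and the Gaussian parameters are learned by backpropagation, which is omitted here):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 2D isotropic Gaussian; sigma is the low-dimensional,
    # differentiable structure parameter (scale).
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def compose(free_form, sigma, radius=4):
    # Semi-structured filter = free-form weights convolved with a
    # Gaussian: a sum of shifted Gaussians weighted by the free-form
    # entries. The support (receptive field) grows with sigma at no
    # cost in free parameters.
    g = gaussian_kernel(sigma, radius)
    kf, kg = free_form.shape[0], g.shape[0]
    out = np.zeros((kf + kg - 1, kf + kg - 1))
    for i in range(kf):
        for j in range(kf):
            out[i:i + kg, j:j + kg] += free_form[i, j] * g
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3))        # nine free-form weights
small = compose(w, sigma=0.5)      # near-delta Gaussian: ~free-form
large = compose(w, sigma=2.0)      # same weights, spread-out support
```

Both composed filters use the same nine free-form weights; only the scalar sigma changes the effective receptive field, which is what makes scale cheap to learn and adapt.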
Objects as Points
We represent objects by a single point at their bounding box center. Other properties, such as object size, dimension, 3D extent, orientation, and pose, are then regressed directly from image features at the center location.
Xingyi Zhou
,
Dequan Wang
,
Philipp Krähenbühl
PDF
Cite
Code
Video
arXiv
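Decoding detections from the center-point representation can be sketched as follows (a toy NumPy version; the actual model predicts the heatmap and size maps with a CNN and also regresses a sub-pixel center offset, omitted here):

```python
import numpy as np

def decode_centers(heatmap, size_map, k=2):
    """Pick the top-k heatmap peaks and read box sizes there.

    A peak is a location that equals the maximum of its 3x3
    window, standing in for the max-pooling "NMS" used at
    inference time.
    """
    H, W = heatmap.shape
    pad = np.pad(heatmap, 1, constant_values=-np.inf)
    windows = np.stack([pad[dy:dy + H, dx:dx + W]
                        for dy in range(3) for dx in range(3)])
    is_peak = heatmap >= windows.max(axis=0)
    scores = np.where(is_peak, heatmap, -np.inf).ravel()
    order = np.argsort(scores)[::-1][:k]
    detections = []
    for idx in order:
        y, x = divmod(idx, W)
        w, h = size_map[:, y, x]
        detections.append((x, y, float(w), float(h),
                           float(heatmap[y, x])))
    return detections

# Toy heatmap with two object centers; size map has (w, h) channels.
heat = np.zeros((8, 8))
heat[2, 3], heat[6, 5] = 0.9, 0.7
sizes = np.ones((2, 8, 8))
sizes[:, 2, 3] = (4.0, 2.0)
dets = decode_centers(heat, sizes)
```

Because detection reduces to peak picking plus a per-location readout, there is no anchor enumeration and no box-overlap NMS in this decoding step.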
VisDA: The Visual Domain Adaptation Challenge
It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when the model is presented with data from a new deployment domain which it did not see in training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains.
Xingchao Peng
,
Ben Usman
,
Neela Kaushik
,
Judy Hoffman
,
Dequan Wang
,
Kate Saenko
PDF
Cite
Dataset
Project
arXiv
FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation
While performance is improving for segmentation models trained and evaluated on the same data source, there has so far been limited research exploring the applicability of these models to new, related domains. We propose the first unsupervised domain adaptation method for transferring semantic segmentation FCNs across image domains.
Judy Hoffman
,
Dequan Wang
,
Fisher Yu
,
Trevor Darrell
PDF
Cite
arXiv
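The adversarial alignment idea can be illustrated with a toy 1-D version (a hypothetical NumPy sketch; the paper operates on FCN feature maps with a fully convolutional domain discriminator and adds category-level constraints, all omitted here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, 200)   # "source" features
tgt = rng.normal(2.0, 1.0, 200)   # "target" features, shifted domain
shift = 0.0                        # adaptation applied to target features
w, b = 0.0, 0.0                    # domain discriminator (logistic reg.)
lr = 0.1

gap0 = abs(tgt.mean() - src.mean())
for _ in range(300):
    f_t = tgt + shift
    p_s, p_t = sigmoid(w * src + b), sigmoid(w * f_t + b)
    # Discriminator step: descend BCE with labels source=0, target=1.
    gw = (p_s * src).mean() + ((p_t - 1.0) * f_t).mean()
    gb = p_s.mean() + (p_t - 1.0).mean()
    w, b = w - lr * gw, b - lr * gb
    # Adaptation step (gradient reversal): ascend the discriminator
    # loss so target features become indistinguishable from source.
    shift += lr * ((p_t - 1.0) * w).mean()
gap1 = abs((tgt + shift).mean() - src.mean())
# the domain gap in feature space shrinks as adaptation proceeds
```

The two players pull in opposite directions on the same loss; at equilibrium the discriminator can no longer tell the domains apart, which is the alignment signal the adaptation exploits.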