Low-Resource Vision Challenges for Foundation Models

Yunhua Zhang1
Hazel Doughty2
Cees G.M. Snoek1

1VIS Lab, University of Amsterdam
2Leiden University

CVPR 2024

[Paper]
[Code]
[Datasets]


Figure 1. High-Resource vs Low-Resource Vision.


Abstract

Low-resource settings are well-established in natural language processing, where many languages lack sufficient data for machine learning at scale. However, low-resource problems are under-explored in computer vision. In this paper, we strive to address this gap and explore the challenges of low-resource image tasks with vision foundation models. To this end, we first collect a benchmark of genuinely low-resource image data, covering historic maps, circuit diagrams, and mechanical drawings. These low-resource settings all share the three challenges of data scarcity, fine-grained differences, and the distribution shift from natural images to the specialized domain of interest. While existing foundation models have shown impressive generalizability, we find they cannot transfer well to our low-resource tasks. To begin to tackle the challenges of low-resource vision, we introduce one simple baseline per challenge. Specifically, we propose to i) enlarge the data space with generative models, ii) adopt the best sub-kernels to encode local regions for fine-grained difference discovery and iii) learn attention for specialized domains. Experiments on the three low-resource data sources in our benchmark demonstrate our proposals already provide a better baseline than common transfer learning, data augmentation, and fine-grained methods. This highlights the unique characteristics and challenges of low-resource vision for foundation models that warrant further investigation.

Tasks


Figure 2. Low-Resource Image Transfer Evaluation Benchmark.

Our three benchmark tasks are: (a) classifying circuit diagrams according to their function, (b) retrieving the modern satellite map given an old map of a city, and (c) retrieving the mechanical drawing corresponding to a photo of a 3D component and vice versa.


Table 1. Benchmark Statistics

Low-Resource Vision Challenges

Challenge I: Data Scarcity. The data available for training models in low-resource scenarios is extremely limited. This is demonstrated by the small amount of data we were able to find online for each low-resource task (see Table 1).

Challenge II: Fine-Grained. Data that is low-resource is also highly specialized, meaning differences between images are incredibly subtle and attention to fine-grained details is necessary to solve the task. For example, the component symbols are key to a circuit’s purpose, not its layout. Similarly, in mechanical drawings, the components may only vary in the number of holes.
Challenge III: Specialized Domain. Not only is the available data severely limited, it also has a significantly different appearance and comes from an entirely different domain than the natural images commonly used in vision tasks. This means it is difficult to bootstrap the training data for low-resource tasks with existing datasets, and models that are successful on natural images cannot be easily applied to the specialized domains of low-resource images.

Baselines for the Low-Resource Challenges

Our goal is to adapt foundation models, pre-trained on large-scale datasets, to low-resource tasks. To better handle adaptation in low-resource vision, we introduce one baseline for each challenge.

Baseline I: Generated Data for Data Scarcity

We augment images with generative models, obtaining images close to the input where the label is preserved, as well as more diverse images which break the label. We use the label-preserving images in the task loss and the label-breaking images in a contrastive loss.
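Below is a minimal PyTorch sketch of how the two kinds of generated images could enter training, assuming a classification task. The function and attribute names (baseline_i_losses, model.encode), the temperature tau, and the equal loss weighting are illustrative assumptions rather than the exact recipe from the paper.

import torch
import torch.nn.functional as F

def baseline_i_losses(model, images, labels, preserving, breaking, tau=0.07):
    # images: original batch; preserving / breaking: one generated variant per image.
    # Task loss on the originals plus the label-preserving generations (labels carry over).
    logits = model(torch.cat([images, preserving], dim=0))
    task_loss = F.cross_entropy(logits, torch.cat([labels, labels], dim=0))

    # Contrastive loss: pull each original towards its label-preserving variant
    # and push it away from the label-breaking generations.
    anchor = F.normalize(model.encode(images), dim=-1)
    pos = F.normalize(model.encode(preserving), dim=-1)
    neg = F.normalize(model.encode(breaking), dim=-1)

    pos_sim = (anchor * pos).sum(-1, keepdim=True) / tau      # (B, 1)
    neg_sim = anchor @ neg.t() / tau                          # (B, B)
    con_logits = torch.cat([pos_sim, neg_sim], dim=1)
    con_targets = torch.zeros(len(images), dtype=torch.long, device=images.device)
    contrastive_loss = F.cross_entropy(con_logits, con_targets)

    return task_loss + contrastive_loss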


Figure 3. Generated Data for Data Scarcity.

Baseline II: Tokenization for Fine-Grained

We divide the original linear projection of a pre-trained foundation model into sub-kernels. These sub-kernels can be applied to smaller areas of the image patch to attend to fine-grained details. We learn a weighting to combine the resulting features into patch-level features.
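A minimal sketch of the sub-kernel tokenization, assuming the foundation model's linear projection is a ViT-style patch-embedding Conv2d (e.g. a 16x16 kernel with stride 16) and the input resolution is divisible by the patch size. The class name SubKernelTokenizer, the 2x2 split, and the softmax weighting are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SubKernelTokenizer(nn.Module):
    def __init__(self, patch_embed: nn.Conv2d, splits: int = 2):
        super().__init__()
        D, C, K, _ = patch_embed.weight.shape               # e.g. (768, 3, 16, 16)
        self.K, self.k, self.splits = K, K // splits, splits
        # Slice the pre-trained kernel into a splits x splits grid of sub-kernels.
        w = patch_embed.weight.unfold(2, self.k, self.k).unfold(3, self.k, self.k)
        w = w.permute(2, 3, 0, 1, 4, 5).reshape(splits * splits, D, C, self.k, self.k)
        self.sub_kernels = nn.Parameter(w.contiguous())
        self.bias = None if patch_embed.bias is None else nn.Parameter(patch_embed.bias.clone())
        # Learned weighting that fuses sub-kernel responses into one token per patch.
        self.mix = nn.Parameter(torch.zeros(splits * splits))

    def forward(self, x):                                    # x: (B, C, H, W)
        responses = []
        for s in range(self.splits * self.splits):
            i, j = divmod(s, self.splits)
            # Each sub-kernel scans its own sub-region inside every image patch.
            shifted = x[:, :, i * self.k:, j * self.k:]
            r = F.conv2d(shifted, self.sub_kernels[s], stride=self.K)  # (B, D, H/K, W/K)
            responses.append(r.flatten(2).transpose(1, 2))             # (B, N, D)
        tokens = torch.stack(responses, dim=0)               # (S*S, B, N, D)
        weights = self.mix.softmax(dim=0).view(-1, 1, 1, 1)
        out = (weights * tokens).sum(dim=0)                  # (B, N, D)
        return out if self.bias is None else out + self.bias

The resulting tokens keep the shape of the original patch embedding, so they can be fed to the pre-trained transformer blocks unchanged.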


Figure 4. Tokenization for Fine-Grained.

Baseline III: Attention for Specialized Domains

We learn a set of global attention maps capturing attention patterns common to the specialized domain, such as vertical and horizontal directions for circuit diagrams. For each token, we crop the corresponding region from the global attention map according to its location.
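A minimal sketch, assuming a ViT with a 14x14 token grid and the cropped maps added as a bias to the attention logits. The module name DomainAttentionBias, the number of global maps, and the (2*grid-1)-sized canvas are illustrative assumptions.

import torch
import torch.nn as nn

class DomainAttentionBias(nn.Module):
    def __init__(self, grid: int = 14, num_maps: int = 4):
        super().__init__()
        self.grid = grid
        # Global maps on a (2*grid-1) x (2*grid-1) canvas, so every token location
        # can crop a full grid x grid window aligned with its own position.
        self.global_maps = nn.Parameter(torch.zeros(num_maps, 2 * grid - 1, 2 * grid - 1))
        self.map_weights = nn.Parameter(torch.zeros(num_maps))

    def forward(self):                          # -> (N, N) bias, with N = grid * grid
        g = (self.map_weights.softmax(dim=0).view(-1, 1, 1) * self.global_maps).sum(dim=0)
        rows = []
        for r in range(self.grid):
            for c in range(self.grid):
                # Crop the grid x grid region corresponding to this token's location.
                win = g[self.grid - 1 - r: 2 * self.grid - 1 - r,
                        self.grid - 1 - c: 2 * self.grid - 1 - c]
                rows.append(win.reshape(-1))
        return torch.stack(rows, dim=0)

# Usage inside self-attention (illustrative):
#   attn = (q @ k.transpose(-2, -1)) * scale + domain_bias()   # then softmax as usual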


Figure 5. Attention for Specialized Domains.



Paper and Supplementary Material

Yunhua Zhang, Hazel Doughty, Cees G.M. Snoek
Low-Resource Vision Challenges for Foundation Models.
(hosted on ArXiv)
[Bibtex]


Contact
[Email]
[Twitter]



Acknowledgements

This work is financially supported by the Inception Institute of Artificial Intelligence, the University of Amsterdam and the allowance Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy.

This website template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.