FairDD: Fair Dataset Distillation via Synchronized Matching
Condensing large datasets into smaller synthetic counterparts has demonstrated its promise for image classification. However, previous research has overlooked a crucial concern in …
Going Beyond Feature Similarity: Effective Dataset Distillation based on Class-aware Conditional Mutual Information
Dataset distillation (DD) aims to minimize the time and memory consumption needed for training deep neural networks on large datasets, by creating a smaller synthetic dataset that …
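For context on the objective these abstracts refer to, one common way to formalize dataset distillation (a generic textbook-style formulation, not the specific method of any paper listed here, with assumed notation D for the original dataset, S for the small synthetic set, and L for a training loss) is the bilevel problem:

```latex
% Generic bilevel objective for dataset distillation (assumed notation):
% learn a small synthetic set S such that a model trained on S
% minimizes the loss on the original dataset D.
\[
\mathcal{S}^{*} = \arg\min_{\mathcal{S}} \,
  \mathcal{L}\big(\theta^{*}(\mathcal{S});\, \mathcal{D}\big)
\quad \text{s.t.} \quad
\theta^{*}(\mathcal{S}) = \arg\min_{\theta} \,
  \mathcal{L}(\theta;\, \mathcal{S})
\]
```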
DRUPI: Dataset Reduction Using Privileged Information
Dataset reduction (DR) seeks to select or distill samples from large datasets into smaller subsets while preserving performance on target tasks. Existing methods primarily focus on …