FairDD: Fair Dataset Distillation via Synchronized Matching

Q Zhou, S Fang, S He, W Meng, J Chen - arXiv preprint arXiv:2411.19623, 2024 - arxiv.org
Condensing large datasets into smaller synthetic counterparts has shown promise
for image classification. However, previous research has overlooked a crucial concern in …

Going Beyond Feature Similarity: Effective Dataset Distillation Based on Class-Aware Conditional Mutual Information

X Zhong, B Chen, H Fang, X Gu, ST Xia… - arXiv preprint arXiv …, 2024 - arxiv.org
Dataset distillation (DD) aims to minimize the time and memory consumed in
training deep neural networks on large datasets by creating a smaller synthetic dataset that …

DRUPI: Dataset Reduction Using Privileged Information

S Wang, Y Yang, S Zhang, C Sun, W Li, X Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Dataset reduction (DR) seeks to select or distill samples from large datasets into smaller
subsets while preserving performance on target tasks. Existing methods primarily focus on …