Winning the Pruning Gamble: A Unified Approach to Joint Sample and Token Pruning for Efficient Supervised Fine-Tuning

EPIC Lab, Shanghai Jiao Tong University · Alibaba Group · Nanjing University · Beijing Jiaotong University · Hong Kong University of Science and Technology
Corresponding Authors: {shaobowang1009, zhanglinfeng}@sjtu.edu.cn

TL;DR: As supervised fine-tuning (SFT) evolves from a lightweight post-training step into a compute-intensive phase rivaling mid-training in scale, data efficiency has become critical for aligning large language models (LLMs) under tight budgets. Existing data pruning methods suffer from a fragmented design: they operate either at the sample level or the token level in isolation, failing to jointly optimize both dimensions. This disconnect leads to significant inefficiencies—high-value samples may still contain redundant tokens, while token-level pruning often discards crucial instructional or corrective signals embedded in individual examples. To address this bottleneck, we introduce the Error–Uncertainty (EU) Plane, a diagnostic framework that jointly characterizes the heterogeneous utility of training data across samples and tokens. Guided by this insight, we propose Quadrant-based Tuning (Q-Tuning), a unified framework that strategically coordinates sample pruning and token pruning. Q-Tuning employs a two-stage strategy: first, it performs sample-level triage to retain examples rich in informative misconceptions or calibration signals; second, it applies an asymmetric token-pruning policy, using a context-aware scoring mechanism to trim less salient tokens exclusively from misconception samples while preserving calibration samples in their entirety. Our method sets a new state of the art across five diverse benchmarks. Remarkably, on SmolLM2-1.7B, Q-Tuning achieves a +38% average improvement over the full-data SFT baseline using only 12.5% of the original training data. As the first dynamic pruning approach to consistently outperform full-data training, Q-Tuning provides a practical and scalable blueprint for maximizing data utilization in budget-constrained LLM SFT.



Contributions

Generalized Dynamic Data Pruning. We formalize the joint sample-token pruning problem through the Generalized Dynamic Data Pruning framework — a bilevel optimization objective for hybrid pruning strategies (one generic form is sketched after this list).

Error-Uncertainty (EU) Plane. We introduce the Error-Uncertainty (EU) Plane, a diagnostic tool that quantifies and explains why naive pruning heuristics fail, revealing the heterogeneous value of data across error and uncertainty dimensions.

Q-Tuning. We propose Q-Tuning, the first integrated, diagnosis-driven algorithm for dynamic pruning that coordinates sample and token decisions based on EU Plane insights.
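
To make the first contribution concrete, one generic way to write such a bilevel objective (our illustrative notation, not necessarily the paper's exact formulation) uses a binary sample mask $m_{\mathrm{s}}$ and per-token masks $m_{\mathrm{t}}$:

\[ \min_{m_{\mathrm{s}},\,m_{\mathrm{t}}}\ \mathcal{L}_{\mathrm{val}}\big(\theta^{*}(m_{\mathrm{s}},m_{\mathrm{t}})\big) \quad\text{s.t.}\quad \theta^{*}=\arg\min_{\theta}\sum_{j=1}^{N} m_{\mathrm{s}}^{(j)}\sum_{i\in T(x^{(j)},y^{(j)})} m_{\mathrm{t}}^{(j,i)}\Big[-\log p\big(y_i^{(j)}\mid x^{(j)},y_{<i}^{(j)};f_\theta\big)\Big], \]

subject to the budgets $\sum_j m_{\mathrm{s}}^{(j)}\le r_{\text{sample}}N$ and $\sum_i m_{\mathrm{t}}^{(j,i)}\le r_{\text{token}}\,|T(x^{(j)},y^{(j)})|$: the inner problem trains on the retained sample-token subset, while the outer problem selects the masks.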


Method

The Error-Uncertainty (EU) Plane is a diagnostic tool used to categorize each training sample based on two orthogonal metrics. The first axis is error, quantified by perplexity ($\mathrm{PPL}$), which measures how surprising a ground-truth sequence is to the model. A high $\mathrm{PPL}$ suggests the model finds the data difficult or has a misconception. The formula is:

\[ \mathrm{PPL}(x,y;f_\theta)=\exp\Bigg(\frac{\sum_{i\in T(x,y)} -\log p(y_i\mid x,y_{<i};f_\theta)}{|T(x,y)|}\Bigg) \]

where $T(x,y)$ denotes the index set of response tokens.
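
As a minimal sketch (not the paper's reference implementation), the per-sample $\mathrm{PPL}$ can be computed from a causal LM's logits; we assume the logits are already shifted to align with the labels and that non-response positions are marked with the common `-100` label convention:

```python
import torch
import torch.nn.functional as F

def sample_ppl(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """PPL = exp(mean NLL over the response positions T(x, y))."""
    # logits: (seq_len, vocab_size); labels: (seq_len,), -100 marks non-response tokens
    mask = labels != -100                                # positions in T(x, y)
    log_probs = F.log_softmax(logits[mask], dim=-1)      # (|T|, vocab_size)
    nll = -log_probs.gather(1, labels[mask].unsqueeze(1)).squeeze(1)
    return torch.exp(nll.mean())
```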

The second axis is uncertainty, quantified by predictive entropy ($\mathrm{Ent}$), which measures the model's indecision or how broadly it distributes its probability predictions, regardless of correctness. The formula is:

\[ \mathrm{Ent}(x,y;f_\theta)=\frac{\sum_{i\in T(x,y)}\Big(-\sum_{v\in\mathcal V}p(v\mid x,y_{<i};f_\theta)\log p(v\mid x,y_{<i};f_\theta)\Big)}{|T(x,y)|} \]
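
A matching sketch for the uncertainty axis, under the same assumptions and mask convention as above:

```python
def sample_entropy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean predictive entropy over the full vocabulary at each response position."""
    mask = labels != -100
    log_probs = F.log_softmax(logits[mask], dim=-1)         # (|T|, vocab_size)
    token_ent = -(log_probs.exp() * log_probs).sum(dim=-1)  # entropy per position
    return token_ent.mean()
```

Note that $\mathrm{PPL}$ only scores the ground-truth token, whereas $\mathrm{Ent}$ sums over the whole vocabulary, which is what makes the two axes complementary.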

Plotting each sample on this plane partitions the data into four distinct quadrants, enabling a principled approach to data pruning.

Sample-level pruning is the first stage of Q-Tuning and operates on the entire training batch. Using each sample's EU Plane coordinates, this stage discards uninformative data points outright: samples falling into the "Harmful Noise" (high error, high uncertainty) and "Redundant Knowledge" (low error, low uncertainty) quadrants are removed. To achieve a target sample retention ratio $r_{\text{sample}}$, a bisection search dynamically determines the quantile thresholds for perplexity and entropy that partition the batch accordingly. Only samples identified as "Valuable Misconceptions" (high error, low uncertainty) and "Calibration Data" (low error, high uncertainty) are kept for the next stage of processing.
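
The sketch below illustrates one plausible instantiation of this triage. We assume a single shared tail quantile $q$ defines both "high error" and "high uncertainty", and bisect $q$ over $[0, 0.5]$, where the retained fraction grows roughly monotonically; the paper's exact threshold parameterization may differ:

```python
import numpy as np

def quadrant_triage(ppl: np.ndarray, ent: np.ndarray, r_sample: float,
                    tol: float = 1e-3, max_iter: int = 40) -> np.ndarray:
    """Keep Q2 (high-PPL / low-Ent misconceptions) and Q4 (low-PPL / high-Ent
    calibration data); bisect a shared tail quantile q until ~r_sample is kept."""
    lo, hi = 0.0, 0.5
    keep = np.ones(ppl.shape, dtype=bool)
    for _ in range(max_iter):
        q = 0.5 * (lo + hi)
        tau_ppl = np.quantile(ppl, 1.0 - q)        # top-q tail = "high error"
        tau_ent = np.quantile(ent, 1.0 - q)        # top-q tail = "high uncertainty"
        q2 = (ppl > tau_ppl) & (ent <= tau_ent)    # valuable misconceptions
        q4 = (ppl <= tau_ppl) & (ent > tau_ent)    # calibration data
        keep = q2 | q4
        if abs(keep.mean() - r_sample) < tol:
            break
        if keep.mean() < r_sample:
            lo = q                                 # widen the tails to keep more
        else:
            hi = q
    return keep
```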

Token-level pruning is the second, more granular stage, applied selectively to the samples classified as "Valuable Misconceptions" (Q2) in the previous stage. The goal is to isolate the useful learning signal within these samples by removing locally detrimental or noisy tokens. For each token, a smoothed importance score $s_i$ is computed from the token's own perplexity and that of its immediate neighbors, which prevents pruning decisions from overreacting to isolated perplexity spikes. The formula is:

\[ s_i(x,y;f_\theta)=(1-\lambda)\,\mathrm{PPL}_i(x,y;f_\theta)+\lambda\big[\mathrm{PPL}_{i-1}(x,y;f_\theta)+\mathrm{PPL}_{i+1}(x,y;f_\theta)\big] \]

Tokens are then ranked by this score, and only the top-$r_{\text{token}}$ fraction is retained for the training update. Samples classified as "Calibration Data" (Q4) are exempt from this process and preserved in their entirety.
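
A minimal sketch of the token step follows; the value of $\lambda$ and the boundary handling (here, edge neighbors are replicated) are our assumptions, since neither is fixed by the description above:

```python
def token_keep_mask(token_ppl: np.ndarray, r_token: float, lam: float = 0.1) -> np.ndarray:
    """s_i = (1 - lam) * PPL_i + lam * (PPL_{i-1} + PPL_{i+1});
    retain the top-r_token fraction of tokens by this smoothed score."""
    left = np.concatenate([token_ppl[:1], token_ppl[:-1]])    # PPL_{i-1}, edge replicated
    right = np.concatenate([token_ppl[1:], token_ppl[-1:]])   # PPL_{i+1}, edge replicated
    scores = (1.0 - lam) * token_ppl + lam * (left + right)
    k = max(1, int(round(r_token * token_ppl.size)))
    keep = np.zeros(token_ppl.size, dtype=bool)
    keep[np.argsort(scores)[-k:]] = True                      # highest-scoring tokens
    return keep
```

In a training loop, this mask would simply zero out the loss on pruned positions of Q2 samples, while Q4 samples bypass the function entirely.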


Experimental Results

  1. Q-Tuning often matches or surpasses full-dataset fine-tuning while using only a fraction of the training budget. On LLaMA2-7B, with 25% of the samples and 70% of the tokens, Q-Tuning achieves 36.9, closely matching the full-data baseline. On Mistral-7B, the same budget yields 46.2, slightly higher than the full-data result.
  2. When compared with methods such as InfoBatch, PPL, and SparseVLM, Q-Tuning achieves higher accuracy under the same budgets. For example, with 25% samples and 50% tokens, it improves LLaMA2-7B to 36.5, well above the best baseline. On Mistral-7B under the same setting, Q-Tuning exceeds the strongest baseline by 1.63%.
  3. The advantages of Q-Tuning are consistent across model families and pruning ratios. With 12.5% samples and 50% tokens, it outperforms the best pruning baselines by 3.3 points on LLaMA2-7B and 2.7 points on Mistral-7B. At larger budgets with 50% samples and 70% tokens, Q-Tuning further widens the margin, exceeding the strongest baselines by 2.4 and 3.7 points, respectively, while closely matching full-dataset performance.
  4. Q-Tuning delivers consistent gains across reasoning benchmarks. On GSM8K, it substantially improves LLaMA3-8B, Mistral-7B, and SmolLM2-1.7B under the 25% × 70% budget, with all three surpassing their full-data counterparts. On the more challenging MATH benchmark, Q-Tuning also exceeds the strongest baselines on both LLaMA3-8B and Mistral-7B.
  5. Q-Tuning scales reliably across model sizes. Averaged over GSM8K and MATH, it reaches 21.5 on LLaMA3-8B, 26.6 on Mistral-7B, and 11.8 on SmolLM2-1.7B, all notably higher than their full-dataset counterparts.


Ablation study

    Can Q-Tuning Outperform Other Methods in Independent Sample and Token Pruning? We conducted ablation studies to evaluate the effectiveness of each pruning strategy on its own. The figure below provides a direct comparison: panel (a) shows dynamic sample pruning with all tokens retained, while panel (b) shows dynamic token pruning with all samples retained. In both cases, Q-Tuning consistently outperforms all baseline methods, demonstrating that each of its pruning components is effective even when applied in isolation.
    Effectiveness of context awareness λ. We examined the impact of the coefficient λ, which controls neighbor awareness. Moderate values of λ improved performance on GSM8K and SQuAD, while extreme values led to diminishing or unstable gains. In contrast, TriviaQA showed little sensitivity to λ, with performance remaining stable across all settings. These results suggest that Q-Tuning benefits from incorporating neighbor awareness, but only up to a moderate level, beyond which the gains become marginal.


Conclusion

    This work transforms dynamic data pruning for LLM fine-tuning from a speculative gamble into a reliable, high-performance strategy. By diagnosing the failure modes of naive, one-dimensional heuristics through the Error-Uncertainty Plane, we reveal the heterogeneous value of training data and the need for a more nuanced approach. Quadrant-based Tuning (Q-Tuning) addresses this with a principled two-stage framework that coordinates sample-level and token-level pruning decisions based on the EU Plane's diagnostic insights. This integrated strategy surgically removes harmful or redundant data while preserving valuable learning signals, achieving strong efficiency gains without sacrificing model performance, and often improving upon it, effectively turning the pruning gamble into a consistently winning bet.
