Unlearning in Tabular-to-Hypergraph Learning via Selective Distillation
DOI: https://doi.org/10.63313/JCSFT.9035

Keywords: Tabular Learning, Hypergraph Modeling, Machine Unlearning, Knowledge Distillation, Counterfactual Invariance, Membership Inference

Abstract
Machine unlearning requires efficiently removing the influence of specified information (rows/columns/values) from a deployed model to satisfy privacy and regulatory constraints. Converting tables into hypergraphs can capture high-order relations such as “same column”, “same value”, and multi-column interactions, but unlearning for hypergraph-based models is often implemented via costly retraining or gradient/Hessian-based approximations, and becomes especially challenging for column- and value-level deletion due to broad structural dependencies and limited verifiability. We propose HERMES, a fundamentally different unlearning paradigm for tabular-to-hypergraph modeling based on selective knowledge transfer rather than parameter rollback or second-order correction. HERMES freezes the original trained model as a teacher and trains a student as the released unlearned model: on the retained set, the student learns from the teacher through structure-augmented distillation and consistency regularization to preserve utility; on the forget set, the student is driven toward maximum-entropy predictions and explicitly pushed away from the teacher via anti-distillation divergence, actively erasing memorized behaviors that can be exploited by membership inference. For column/value deletion, HERMES introduces counterfactual invariance by replacing deleted attributes/values and enforcing prediction consistency, preventing the student from relying on prohibited information even through hypergraph message-passing “detours”. The framework requires neither graph partitioning nor second-order information, and can be implemented as a lightweight post-processing stage that completes in a few epochs while offering practical verification signals based on forget-set entropy, teacher–student disagreement, and membership inference risk.
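
To make the objective described above concrete, here is a rough, non-authoritative sketch in PyTorch. It is not the authors' implementation: every name below (student, teacher, retain_x, forget_x, cf_x, the temperature T, and the lam_* weights) is an illustrative assumption. The sketch combines (i) task loss plus KL distillation toward the frozen teacher on retained rows, (ii) a pull toward the uniform (maximum-entropy) distribution plus an anti-distillation push (negated KL) away from the teacher on the forget set, and (iii) a consistency penalty between predictions on original inputs and counterfactual inputs in which the deleted column/value has been replaced.

```python
# Hedged sketch of a HERMES-style selective-distillation objective.
# All names, terms, and weightings are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn.functional as F


def hermes_unlearning_loss(student, teacher, retain_x, retain_y,
                           forget_x, cf_x, T=2.0,
                           lam_kd=1.0, lam_anti=0.5, lam_cf=1.0):
    """One training step's loss for the released student model.

    retain_x, retain_y : retained examples (utility is preserved here)
    forget_x           : examples whose influence must be erased
    cf_x               : counterfactual copies of retain_x with the
                         deleted attribute/value replaced (e.g. resampled)
    """
    # Retained set: task loss + KL distillation toward the frozen teacher.
    s_retain = student(retain_x)
    with torch.no_grad():
        t_retain = teacher(retain_x)
    task = F.cross_entropy(s_retain, retain_y)
    kd = F.kl_div(F.log_softmax(s_retain / T, dim=-1),
                  F.softmax(t_retain / T, dim=-1),
                  reduction="batchmean") * (T * T)

    # Forget set: pull toward the uniform (maximum-entropy) distribution
    # and push away from the teacher via a negated KL ("anti-distillation").
    s_forget = F.log_softmax(student(forget_x), dim=-1)
    with torch.no_grad():
        t_forget = F.softmax(teacher(forget_x), dim=-1)
    uniform = torch.full_like(s_forget, 1.0 / s_forget.size(-1))
    to_uniform = F.kl_div(s_forget, uniform, reduction="batchmean")
    anti = -F.kl_div(s_forget, t_forget, reduction="batchmean")

    # Counterfactual invariance: the prediction should not change when the
    # deleted attribute/value is swapped out of the input.
    cf = F.kl_div(F.log_softmax(student(cf_x), dim=-1),
                  F.softmax(s_retain.detach(), dim=-1),
                  reduction="batchmean")

    return task + lam_kd * kd + to_uniform + lam_anti * anti + lam_cf * cf


def verification_signals(student, teacher, forget_x):
    """Two of the abstract's verification signals (the third, membership
    inference risk, requires an external attack model and is omitted)."""
    with torch.no_grad():
        s = F.softmax(student(forget_x), dim=-1)
        t = F.softmax(teacher(forget_x), dim=-1)
    entropy = -(s * s.clamp_min(1e-12).log()).sum(-1).mean()      # want: high
    disagreement = (s.argmax(-1) != t.argmax(-1)).float().mean()  # want: high
    return entropy.item(), disagreement.item()
```

The negated-KL term is unbounded below, so a real implementation would clamp or anneal lam_anti; likewise, the structure-augmented distillation over hypergraph representations mentioned in the abstract would add feature-level terms that this logit-only sketch omits.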
License
Copyright (c) 2025 by author(s) and Erytis Publishing Limited. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.