Knowledge Distillation
Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding
Knowledge Graph Embedding (KGE), which projects entities and relations into continuous vector spaces, has garnered significant …
Yichen Liu
,
Jiawei Chen
,
Yan Feng
,
Can Wang
PDF
Cite
DOI
Distillation Matters: Empowering Sequential Recommenders to Match the Performance of Large Language Models
Owing to their powerful semantic reasoning capabilities, Large Language Models (LLMs) have been effectively utilized as recommenders, …
Yu Cui
,
Feng Liu
,
Pengbo Wang
,
Bohao Wang
,
Heng Tang
,
Yi Wan
,
Jun Wang
,
Jiawei Chen
PDF
Cite
Code
DOI
Online Adversarial Knowledge Distillation for Graph Neural Networks
Knowledge distillation, a technique recently gaining popularity for enhancing model generalization in Convolutional Neural Networks …
Can Wang
,
Zhe Wang
,
Defang Chen
,
Sheng Zhou
,
Yan Feng
,
Chun Chen
PDF
Cite
Code
DOI
Accelerating Diffusion Sampling with Classifier-based Feature Distillation
Although diffusion models have shown great potential for generating higher-quality images than GANs, their slow sampling speed hinders wide …
Wujie Sun
,
Defang Chen
,
Can Wang
,
Deshi Ye
,
Yan Feng
,
Chun Chen
PDF
Cite
Code
DOI
Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning
Multi-teacher knowledge distillation provides students with additional supervision from multiple pre-trained teachers with diverse …
Hailin Zhang
,
Defang Chen
,
Can Wang
PDF
Cite
Code
DOI
Customizing Synthetic Data for Data-Free Student Learning
Data-free knowledge distillation (DFKD) aims to obtain a lightweight student model without original training data. Existing works …
Shiya Luo
,
Defang Chen
,
Can Wang
PDF
Cite
Code
DOI
Holistic Weighted Distillation for Semantic Segmentation
Channel-wise distillation for semantic segmentation has proven to be a more effective method than spatial-based distillation. By …
Wujie Sun
,
Defang Chen
,
Can Wang
,
Deshi Ye
,
Yan Feng
,
Chun Chen
PDF
Cite
Code
DOI
Knowledge Distillation with Deep Supervision
Knowledge distillation aims to enhance the performance of a lightweight student model by exploiting the knowledge from a pre-trained …
Shiya Luo
,
Defang Chen
,
Can Wang
PDF
Cite
Code
DOI
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks
Existing knowledge distillation methods on graph neural networks (GNNs) are almost offline, where the student model extracts knowledge …
Jiongyu Guo
,
Defang Chen
,
Can Wang
PDF
Cite
Code
DOI
Knowledge Distillation with the Reused Teacher Classifier
Knowledge distillation aims to compress a powerful yet cumbersome teacher model into a lightweight student model without much sacrifice …
Defang Chen
,
Jianping Mei
,
Hailin Zhang
,
Can Wang
,
Yan Feng
,
Chun Chen
PDF
Cite
DOI