- [May 2025]
Our paper, "Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation", has been accepted to ICML 2025. See you in Vancouver!
|
- [May 2025]
I'm fortunate to visit ILIAD at Stanford, hosted by Prof. Dorsa Sadigh and Amber Xie.
|
- [Jul 2024]
I'm fortunate to visit The Chinese University of Hong Kong, hosted by Prof. Irwin King and Dr. Yifei Zhang.
|
Research
I am broadly interested in robust machine learning algorithms and robot learning.
|
|
Soft Separation and Distillation: Toward Global Uniformity in Federated Unsupervised Learning
Hung-Chieh Fang, Hsuan-Tien Lin, Irwin King, Yifei Zhang
Under Review.
manuscript
We introduce SSD, a framework that enhances representation quality in federated learning by improving inter-client uniformity. SSD includes a dimension-scaled regularization term that softly separates client embeddings while preserving data structure, and a projector distillation term that transfers optimization benefits from the projector to the encoder.
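A rough illustration of the two terms follows; the exact formulations are in the manuscript, and the functional forms, shapes, and hyperparameters below are assumptions made for exposition.

```python
# Illustrative sketch only; the exact SSD objectives are defined in the manuscript.
import torch
import torch.nn.functional as F

def soft_separation(z, other_client_means, eps=1e-8):
    """Softly push this client's embeddings (z: [B, D]) away from other
    clients' mean embeddings, scaling distances per dimension by the local
    standard deviation so data structure is preserved (assumed form)."""
    scale = z.var(dim=0, unbiased=False).sqrt() + eps    # [D] per-dimension scale
    penalties = [
        torch.exp(-((z - mu) / scale).pow(2).sum(dim=1)).mean()
        for mu in other_client_means                     # each mu: [D]
    ]
    return torch.stack(penalties).mean()                 # smaller = more separated

def projector_distillation(h, projector):
    """Distill the projector's output back into the encoder feature h
    (the projector is assumed here to map D -> D), so the encoder inherits
    the projector's optimization benefits (assumed form)."""
    with torch.no_grad():
        target = projector(h)                            # stop-gradient target
    return F.mse_loss(h, target)
```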
|
|
Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation
Hung-Chieh Fang, Po-Yi Lu, Hsuan-Tien Lin
International Conference on Machine Learning (ICML), 2025
paper / code (TBA)
We identify the unsolved Extreme UniDA sub-task, highlighting the limitations of existing partial domain-matching paradigms. We propose a self-supervised loss to tackle dimensional collapse in target representations, taking a step toward more comprehensive UniDA.
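The paper's specific self-supervised objective is described there; as a generic illustration of an anti-collapse regularizer on target features, one could penalize off-diagonal feature covariance.

```python
# Generic anti-collapse regularizer (illustrative, not the paper's exact loss).
import torch

def decorrelation_loss(target_feats):
    """Penalize off-diagonal covariance of target-domain features so the
    representation spreads across the full embedding dimensionality
    instead of collapsing onto a few directions."""
    z = target_feats - target_feats.mean(dim=0)          # center: [B, D]
    cov = (z.T @ z) / (z.shape[0] - 1)                   # [D, D] covariance
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return off_diag.pow(2).sum() / target_feats.shape[1]
```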
|
|
Open-domain Conversational Question Answering with Historical Answers
Hung-Chieh Fang*, Kuo-Han Hung*, Chao-Wei Huang, Yun-Nung Chen
Asian Chapter of the Association for Computational Linguistics (AACL), 2022
paper / code
We propose combining the signal from historical answers with the noise-reduction ability of knowledge distillation to improve information retrieval and question answering.
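One hedged way to picture the idea (the paper's actual training setup may differ): a teacher retriever conditioned on gold historical answers distills its cleaner passage ranking into a student that only sees noisy predicted history.

```python
# Assumed illustrative form of the distillation objective, not the paper's exact recipe.
import torch
import torch.nn.functional as F

def kd_retrieval_loss(student_scores, teacher_scores, tau=1.0):
    """student_scores / teacher_scores: [B, num_passages] retrieval logits.
    The teacher sees gold historical answers; KL distillation transfers
    its less noisy ranking to the student."""
    p_teacher = F.softmax(teacher_scores / tau, dim=-1)
    log_p_student = F.log_softmax(student_scores / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2
```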
|
|
Zero-shot Text Behavior Retrieval
Hung-Chieh Fang*, Kuo-Han Hung*, Nai-Xuan Ye*, Shao-Syuan Huang*
Course project for "Reinforcement Learning", Fall 2023
paper
We propose a method for retrieving task-relevant data for imitation learning without requiring expert demonstrations. Our approach leverages text descriptions in combination with a vision-language model to enable zero-shot behavior retrieval.
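A minimal sketch of the retrieval step, assuming an off-the-shelf CLIP model from Hugging Face transformers (the project's actual pipeline and checkpoint may differ):

```python
# Minimal sketch of text-conditioned behavior retrieval with an off-the-shelf
# vision-language model; dataset loading and trajectory handling are omitted.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def retrieval_scores(task_description, frames):
    """Score each observation frame (PIL images) by cosine similarity to the
    task's text description; top-scoring trajectories are kept for imitation."""
    text = processor(text=[task_description], return_tensors="pt", padding=True)
    imgs = processor(images=frames, return_tensors="pt")
    t = model.get_text_features(**text)
    v = model.get_image_features(**imgs)
    t = t / t.norm(dim=-1, keepdim=True)
    v = v / v.norm(dim=-1, keepdim=True)
    return (v @ t.T).squeeze(-1)                         # [num_frames] similarities
```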
|
|
Integrating Self-supervised Speech Model with Pseudo Word-level Targets from Visually-grounded Speech Model
Hung-Chieh Fang*, Nai-Xuan Ye*, Yi-Jen Shih, Puyuan Peng, Hsuan-Fu Wang, Layne Berry, Hung-yi Lee, David Harwath
Workshop on Self-supervision in Audio, Speech and Beyond, ICASSP 2024
paper
We propose using vision as a surrogate for paired transcripts to enrich the semantic information in self-supervised speech models.
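Loosely, pseudo word-level targets could be obtained by clustering word-segment features from a visually-grounded speech model and using cluster IDs as prediction targets, HuBERT-style; this is a sketch under assumptions, and the paper's actual procedure may differ.

```python
# Illustrative sketch; segment extraction and model details are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_word_targets(vgs_segment_feats: np.ndarray, num_clusters: int = 500):
    """vgs_segment_feats: [num_segments, D] features of word-like segments
    from a visually-grounded speech model. Cluster IDs serve as pseudo
    word-level targets for training a self-supervised speech model."""
    km = KMeans(n_clusters=num_clusters, n_init="auto").fit(vgs_segment_feats)
    return km.labels_                                    # [num_segments] pseudo labels
```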
|
This template is adapted from here.