Research
I am broadly interested in robust machine learning algorithms and robot learning.
Soft Separation and Distillation: Toward Global Uniformity in Federated Unsupervised Learning
Hung-Chieh Fang, Hsuan-Tien Lin, Irwin King, Yifei Zhang
Under Review.
manuscript
We introduce SSD, a framework that enhances representation quality in federated learning by improving inter-client uniformity. SSD includes a dimension-scaled regularization term that softly separates client embeddings while preserving data structure, and a projector distillation term that transfers optimization benefits from the projector to the encoder.
Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation
Hung-Chieh Fang, Po-Yi Lu, Hsuan-Tien Lin
Preprint. Under Review.
paper
We identify the unsolved Extreme UniDA sub-task, highlighting the limitations of existing partial domain-matching paradigms. We propose using a self-supervised loss to tackle dimensional collapse in target representations and take a step toward more comprehensive UniDA.
Open-domain Conversational Question Answering with Historical Answers
Hung-Chieh Fang*, Kuo-Han Hung*, Chao-Wei Huang, Yun-Nung Chen
AACL-IJCNLP 2022
paper / code
We propose combining the signal from historical answers with the noise-reduction ability of knowledge distillation to improve retrieval and question answering in conversational settings.
Zero-shot Text Behavior Retrieval
Hung-Chieh Fang*, Kuo-Han Hung*, Nai-Xuan Ye*, Shao-Syuan Huang*
Course Project of "Reinforcement Learning", Fall 2023
paper
We propose a method for retrieving task-relevant data for imitation learning without requiring expert demonstrations. Our approach leverages text descriptions in combination with a vision-language model to enable zero-shot behavior retrieval.
Integrating Self-supervised Speech Model with Pseudo Word-level Targets from Visually-grounded Speech Model
Hung-Chieh Fang*, Nai-Xuan Ye*, Yi-Jen Shih, Puyuan Peng, Hsuan-Fu Wang, Layne Berry, Hung-yi Lee, David Harwath
ICASSP 2024 workshop: Self-supervision in Audio, Speech and Beyond
paper
We propose using vision as a surrogate for paired transcripts to enrich the semantic information in self-supervised speech models.
This template is adapted from here.