Hung-Chieh Fang

I am a senior undergraduate majoring in Computer Science at National Taiwan University, where I am fortunate to be advised by Professors Hsuan-Tien Lin, Yun-Nung (Vivian) Chen, and Shao-Hua Sun.

Currently, I am a visiting student at Stanford University, working with Prof. Dorsa Sadigh and Amber Xie.

Previously, I was a visiting student at the Chinese University of Hong Kong, where I had the privilege of working with Dr. Yifei Zhang and Prof. Irwin King.

Email  /  CV  /  Google Scholar  /  GitHub  /  LinkedIn

profile photo

Research

I am broadly interested in robust machine learning algorithms and robot learning.

Soft Separation and Distillation: Toward Global Uniformity in Federated Unsupervised Learning
Hung-Chieh Fang, Hsuan-Tien Lin, Irwin King, Yifei Zhang
Under Review.
manuscript

We introduce SSD, a framework that enhances representation quality in federated unsupervised learning by improving inter-client uniformity. SSD combines a dimension-scaled regularization term, which softly separates client embeddings while preserving data structure, with a projector distillation term that transfers optimization benefits from the projector to the encoder.

Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation
Hung-Chieh Fang, Po-Yi Lu, Hsuan-Tien Lin
Under Review.
paper

We identify the unsolved Extreme UniDA sub-task, highlighting the limitations of existing partial domain-matching paradigms. We propose using a self-supervised loss to tackle dimensional collapse in target representations, taking a step toward more comprehensive UniDA.

Open-domain Conversational Question Answering with Historical Answers
Hung-Chieh Fang*, Kuo-Han Hung*, Chao-Wei Huang, Yun-Nung Chen
AACL-IJCNLP 2022
paper / code

We propose combining the signal from historical answers with the noise-reduction ability of knowledge distillation to improve information retrieval and question answering.

Projects

Zero-shot Text Behavior Retrieval
Hung-Chieh Fang*, Kuo-Han Hung*, Nai-Xuan Ye*, Shao-Syuan Huang*
Course project for "Reinforcement Learning", Fall 2023
paper

We propose a method for retrieving task-relevant data for imitation learning without requiring expert demonstrations. Our approach leverages text descriptions in combination with a vision-language model to enable zero-shot behavior retrieval.

Integrating Self-supervised Speech Model with Pseudo Word-level Targets from Visually-grounded Speech Model
Hung-Chieh Fang*, Nai-Xuan Ye*, Yi-Jen Shih, Puyuan Peng, Hsuan-Fu Wang, Layne Berry, Hung-yi Lee, David Harwath
ICASSP 2024 Workshop: Self-supervision in Audio, Speech and Beyond
paper

We propose using vision as a surrogate for paired transcripts to enrich the semantic information in self-supervised speech models.

Teaching

Teaching Assistant, EE5100: Introduction to Generative Artificial Intelligence, Spring 2024

Teaching Assistant, CSIE5043: Machine Learning, Spring 2023

This template is adapted from here.