Dongjin Kang
Hi! I am an M.S. student at DLI Lab, advised by Jinyoung Yeo. Previously, I received my B.S. in Computer Science from Yonsei University in Feb. 2024.
My recent research interests include: (i) reasoning and planning for solving long-horizon tasks and (ii) embodied AI with a strong understanding of real-world dynamics. Additionally, I focus on analyzing language models (LMs) to identify their limitations and room for improvement. The ultimate goal of my research is to design systems that enable humans to communicate and interact with AI in a trustworthy and beneficial manner.
Topics of interest
Reasoning and Planning: Think-and-Execute, RewardMATH
Embodied AI:
Analysis of LMs: Preference Bias, Cactus, RewardMATH
Publications
‡ indicates equal contribution.
2024
Evaluating Robustness of Reward Models for Mathematical Reasoning
Sunghwan Kim‡, Dongjin Kang‡, Taeyoon Kwon, Hyungjoo Chae, Jungsoo Won, Dongha Lee, Jinyoung Yeo
arXiv preprint.
[paper] [code]
Coffee-Gym: An Environment for Evaluating and Improving Natural Language Feedback on Erroneous Code
Hyungjoo Chae‡, Taeyoon Kwon‡, Seungjun Moon‡, Yongho Song, Dongjin Kang, Kai Tzu-iunn Ong, Beong-woo Kwak, Seonghyeon Bae, Seung-won Hwang, Jinyoung Yeo
EMNLP'24: The 2024 Conference on Empirical Methods in Natural Language Processing. 2024.
[paper] [demo]
Cactus: Towards Psychological Counseling Conversations using Cognitive Behavioral Theory
Suyeon Lee‡, Sunghwan Kim‡, Minju Kim‡, Dongjin Kang, Dongil Yang, Harim Kim, Minseok Kang, Dayi Jung, Min Hee Kim, Seungbeen Lee, Kyoung-Mee Chung, Youngjae Yu, Dongha Lee, Jinyoung Yeo
EMNLP'24 Findings: The 2024 Conference on Empirical Methods in Natural Language Processing. 2024.
[paper]
Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation
Dongjin Kang‡, Sunghwan Kim‡, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo
[Outstanding Paper Award] ACL'24: The 62nd Annual Meeting of the Association for Computational Linguistics. 2024.
[paper] [code]
Coffee: Boost Your Code LLMs by Fixing Bugs with Feedback
Seungjun Moon‡, Yongho Song‡, Hyungjoo Chae‡, Taeyoon Kwon, Dongjin Kang, Kai Tzu-iunn Ong, Seung-won Hwang, Jinyoung Yeo
arXiv preprint.
[paper] [demo]
Large language models are clinical reasoners: Reasoning-aware diagnosis framework with prompt-generated rationales
Taeyoon Kwon‡, Kai Tzu-iunn Ong‡, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo
AAAI'24: The 38th Annual AAAI Conference on Artificial Intelligence. 2024.
[paper]