I am a Research Scientist in the Emergent Artificial Intelligence Lab at Intel Labs. My research focuses on three main areas:
- Characterizing latent social biases in large generative models
- Techniques for correcting these unwanted/unexpected biases
- Conducting large-scale multimodal model training and inference (on Intel accelerators)
Prior to Intel, I was a postdoc at Princeton University working with Professor Brandon Stewart, and did my PhD at the University of Oxford with Professors Andy Eggers and Raymond Duch.
News
- Dec 2024: Scholar Award for top Intel Labs Academic Author
- Dec 2024: Spotlight Paper at the Creativity and AI Workshop at NeurIPS 2024
- Oct 2024: 3 papers accepted to NeurIPS 2024 Workshops
- Sep 2024: 2 papers accepted to EMNLP 2024
- Aug 2024: ACL Outstanding Paper Award!
- Mar 2024: New model on HuggingFace: intel/llava-gemma-2b
- Feb 2024: Started at Intel Labs as an AI Research Scientist
Publications
Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Shao-Yen Tseng, Vasudev Lal, and Phillip Howard. NeurIPS 2024 SafeGenAI Workshop.
Steering Large Language Models to Evaluate and Amplify Creativity
Matthew Lyle Olson, Neale Ratzlaff, Musashi Hinck, Shao-Yen Tseng, Vasudev Lal. NeurIPS 2024 Creativity and Generative AI Workshop. ⭐Spotlight⭐
Exploring Vision Transformers for Early Detection of Climate Change Signals
Sungduk Yu, Anahita Bhiwandiwalla, Yaniv Gurwicz, Musashi Hinck, Matthew Olson, Raanan Rohekar, Vasudev Lal. NeurIPS 2024 Tackling Climate Change with Machine Learning Workshop.
AutoPersuade: A Framework for Evaluating and Explaining Persuasive Arguments
Till Raphael Saenger, Musashi Hinck, Justin Grimmer, and Brandon M Stewart. EMNLP 2024 (Main).
Why do LLaVA Vision-Language Models Reply to Images in English?
Musashi Hinck*, Carolin Holtermann, Matthew Lyle Olson, Florian Schneider, Sungduk Yu, Anahita Bhiwandiwalla, Anne Lauscher, Shao-Yen Tseng, Vasudev Lal. EMNLP 2024 (Findings).
LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model
Musashi Hinck*, Matthew L Olson*, David Cobbley, Shao-Yen Tseng, Vasudev Lal. CVPR 2024 Multimodal Foundation Models Workshop.
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
Paul Röttger, Valentin Hofmann, Valentina Pyatkin, Musashi Hinck, Hannah Rose Kirk, Hinrich Schütze, Dirk Hovy. ACL 2024 (Main). ⭐Outstanding Paper Award⭐
Using Imperfect Surrogates for Downstream Inference: Design-based Supervised Learning for Social Science Applications of Large Language Models
Naoki Egami, Musashi Hinck, Hanying Wei, and Brandon Stewart. NeurIPS 2023 (Main).
Preprints
ClimDetect: A Benchmark Dataset for Climate Change Detection and Attribution
Sungduk Yu, Brian L White, Anahita Bhiwandiwalla, Musashi Hinck, Matthew Lyle Olson, Tung Nguyen, Vasudev Lal.