Accepted Papers
We thank all authors who submitted to DMLR@ICLR 2024. All accepted manuscripts are listed below in random order. For authors who did not opt in to publishing their manuscript on the DMLR site, only the title of their work is listed. Congratulations to all author teams on their acceptance to DMLR@ICLR 2024!

Posters with manuscript

  • On the Scalability of GNNs for Molecular Graphs by Maciej Sypetkowski, Frederik Wenkel, Farimah Poursafaei, Nia Dickson, Karush Suri, Philip Fradkin, Dominique Beaini

  • Feedback-guided Data Synthesis for Imbalanced Classification by Reyhane Askari Hemmat, Mohammad Pezeshki, Florian Bordes, Michal Drozdzal, Adriana Romero-Soriano

  • Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models by Avi Singh, John D Co-Reyes, Rishabh Agarwal

  • Pretraining Probabilistic Models for Scalable Precision Agriculture by Ruhana Azam, Sang T. Truong, Samuel B. Fernandes, Andrew D.B. Leakey, Alexander Lipka, Mohammed El-Kebir, Sanmi Koyejo

  • Environment-adjusted Topic Models by Dominic Sobhani, Amir Feder, David Blei

  • FTFT: efficient and robust Fine-Tuning by transFerring Training Dynamics by Yupei Du, Albert Gatt, Dong Nguyen

  • GitChameleon: Breaking the version barrier for code generation models by Nizar Islah, Justine Gehring, Diganta Misra, Massimo Caccia, Irina Rish

  • Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift by Yihao Xue, Siddharth Joshi, Dang Nguyen, Baharan Mirzasoleiman

  • Is a picture of a bird a bird? A mixed-methods approach to understanding diverse human perspectives and ambiguity in machine vision models by Alicia Parrish, Susan Hao, Sarah Laszlo, Lora Aroyo

  • Denoising Drug Discovery ADMET Data for Improved Regression Task Performance by Matthew Adrian, Yunsie Chung, Alan C Cheng

  • Pushing the Decision Boundaries: Discovering New Classes in Audio Data by Ryuhaerang Choi, Soumyajit Chatterjee, Dimitris Spathis, Fahim Kawsar, Mohammad Malekzadeh

  • Multi-model evaluation with labeled and unlabeled data by Divya M Shanmugam, Shuvom Sadhuka, Manish Raghavan, John Guttag, Bonnie Berger, Emma Pierson

  • Exploring the Efficacy of Meta-Learning: Unveiling Superior Data Diversity Utilization of MAML Over Pre-training by Kavita Selva, Satita Vittayaareekul, Brando Miranda

  • The Science of Data Filtering: Data Curation cannot be Compute Agnostic by Sachin Goyal, Pratyush Maini, Zachary Chase Lipton, Aditi Raghunathan, J Zico Kolter

  • QuRating: Selecting High-Quality Data for Training Language Models by Alexander Wettig, Aatmik Gupta, Saumya Malik, Danqi Chen

  • Data-Efficient Multi-Modal Contrastive Learning: Prioritizing Data Quality over Quantity by Siddharth Joshi, Arnav Jain, Ali Payani, Baharan Mirzasoleiman

  • Language Models as Science Tutors by Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, Sebastian Mizera, Toni Annala, Max Aragon, Arturo Rodriguez Fanlo, Simon Frieder, Simon Machado, Akshara Prabhakar, Ellie Thieu, Jiachen T. Wang, Zirui Wang, Xindi Wu, Mengzhou Xia, Wenhan Xia, Jiatong Yu, Junjie Zhu, Zhiyong Ren, Sanjeev Arora, Danqi Chen

  • Annotation Sensitivity: Drivers of Training Data Quality by Jacob Beck, Bolei Ma, Stephanie Eckman, Christoph Kern, Rob Chew, Frauke Kreuter

  • QualEval: Qualitative Evaluation for Model Improvement by Vishvak Murahari, Ameet Deshpande, Peter Clark, Tanmay Rajpurohit, Ashish Sabharwal, Karthik R Narasimhan, Ashwin Kalyan