ContextQ: Generated Questions to Support Meaningful Parent-Child Dialogue While Co-Reading
Much of early literacy education happens at home with caretakers reading books to young children. Prior research demonstrates how having dialogue with children...
Automatic Creative Selection with Cross-Modal Matching
Application developers advertise their Apps by creating product pages with App images and bidding on search terms. It is then crucial for App...
pfl-research: Simulation Framework for Accelerating Research in Private Federated Learning
Federated Learning (FL) is an emerging ML training paradigm where clients own their data and collaborate to train a global model without revealing...
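To give a concrete sense of that training paradigm, here is a minimal federated-averaging sketch in plain NumPy. It is illustrative only, with made-up client data and a toy linear model; it is not the pfl-research API.

# Minimal FedAvg sketch: clients train locally on private data and share
# only model weights, which the server averages. Illustrative only;
# not the pfl-research API.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client holds its own private dataset (never sent to the server).
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for _round in range(20):
    local_ws = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(5):                      # a few local SGD steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)
    global_w = np.mean(local_ws, axis=0)        # server averages the updates

print("estimated weights:", global_w)

Only the weight vectors cross the client boundary in this sketch; the raw (X, y) pairs stay local, which is the property the abstract refers to.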
Generative Modeling with Phase Stochastic Bridges
This paper introduces a novel generative modeling framework grounded in phase space dynamics, taking inspiration from the principles underlying Critically Damped Langevin Dynamics...
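For orientation, "phase space" here means modeling the data variable jointly with an auxiliary velocity variable. A generic underdamped (phase-space) Langevin system, written only as background and not as the paper's exact formulation, is

\begin{aligned}
\mathrm{d}x_t &= v_t\,\mathrm{d}t,\\
\mathrm{d}v_t &= -\nabla U(x_t)\,\mathrm{d}t - \gamma\, v_t\,\mathrm{d}t + \sqrt{2\gamma}\,\mathrm{d}W_t,
\end{aligned}

where x_t is the position (data), v_t the velocity, U a potential, \gamma the friction coefficient, and W_t a Wiener process. "Critically damped" refers to choosing the friction so the coupled system relaxes to equilibrium without oscillating (for a harmonic potential with unit mass and spring constant k, this is \gamma^2 = 4k).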
Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling
This paper has been accepted at the Data Problems for Foundation Models workshop at ICLR 2024.
Large language models are trained on massive scrapes...
International Conference on Learning Representations (ICLR) 2024
ACM Conference on Human Factors in Computing Systems (CHI) 2024
Conformal Prediction via Regression-as-Classification
Conformal prediction (CP) for regression can be challenging, especially when the output distribution is heteroscedastic, multimodal, or skewed. Some of the issues can...
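As background on why those output distributions are a problem, the sketch below implements standard split conformal prediction for regression (the common baseline, not the paper's regression-as-classification method). The data, model, and alpha level are made up for illustration.

# Split conformal prediction for regression: wrap any point predictor so its
# intervals have ~(1 - alpha) marginal coverage. Illustrative baseline only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=2000)
y = np.sin(X) + 0.3 * (1 + np.abs(X)) * rng.normal(size=2000)  # heteroscedastic noise

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# Fit any point predictor; here, a cubic polynomial.
coefs = np.polyfit(X_tr, y_tr, deg=3)

def predict(x):
    return np.polyval(coefs, x)

# Calibrate: absolute residuals serve as conformity scores.
alpha = 0.1
scores = np.abs(y_cal - predict(X_cal))
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]          # (1 - alpha) conformal quantile

# Every prediction interval has the same width 2*q.
x_new = 2.5
print("interval:", predict(x_new) - q, predict(x_new) + q)

The fixed interval width is the limitation the abstract alludes to: a single global quantile cannot adapt to input-dependent or multimodal noise, which is presumably what motivates recasting regression as classification over discretized outputs.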
Compressing LLMs: The Truth is Rarely Pure and Never Simple
Despite their remarkable achievements, modern Large Language Models (LLMs) come with exorbitant computational and memory footprints. Recently, several works have shown significant success in...
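For context on what such compression looks like, here is a generic unstructured magnitude-pruning sketch, one of the simplest post-training schemes for LLM weight matrices. It is a hypothetical illustration, not a method or result from the paper.

# Generic unstructured magnitude pruning of a weight matrix: zero out the
# smallest-magnitude weights. Illustrative only.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.random.default_rng(2).normal(size=(512, 512))
w_pruned = magnitude_prune(w, sparsity=0.5)
print("fraction zeroed:", np.mean(w_pruned == 0))

Schemes like this shrink memory and compute, but how much task performance survives at a given sparsity is exactly the kind of question the paper's title flags as less simple than it appears.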
Large Language Models as Generalizable Policies for Embodied Tasks
We show that large language models (LLMs) can be adapted to be generalizable policies for embodied visual tasks. Our approach, called Large LAnguage...