Multimodal Vision-Language Models (VLMs) enable powerful applications through their fused understanding of images and language, but
many perform poorly on UI tasks due to the lack of UI training data. In this paper, we adapt a recipe for generating paired text-image
training data for VLMs to the UI domain by combining existing pixel-based methods with a Large Language Model (LLM). Unlike
prior art, our method requires no human-provided annotations, and it can be applied to any dataset of UI screenshots. We generate a
dataset of 335K conversational examples paired with UIs that cover Q&A, UI…
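As a rough illustration of the recipe described above, and not the authors' actual implementation, the pipeline can be sketched as follows: a pixel-based model detects UI elements in a screenshot, the detections are serialized to text, and an LLM turns that text into a conversation that is paired with the original image. The function names `detect_ui_elements` and `call_llm` are hypothetical placeholders for an existing pixel-based detector and an LLM API.

```python
# Hedged sketch of the data-generation recipe: pixel-based detection +
# LLM conversation generation, with no human annotations required.
# All names below are placeholders, not the paper's implementation.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class UIElement:
    role: str                       # e.g. "button", "text field"
    text: str                       # visible label, possibly empty
    bbox: Tuple[int, int, int, int] # (x, y, width, height) in pixels


def detect_ui_elements(screenshot_path: str) -> List[UIElement]:
    """Placeholder for an existing pixel-based UI detection model."""
    raise NotImplementedError


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call returning generated text."""
    raise NotImplementedError


def generate_example(screenshot_path: str) -> dict:
    # 1. Describe the screen in text using only pixel-derived detections,
    #    so no human-provided annotations are needed.
    elements = detect_ui_elements(screenshot_path)
    layout = "\n".join(
        f"- {e.role} '{e.text}' at {e.bbox}" for e in elements
    )
    # 2. Ask the LLM to produce a conversation (e.g. Q&A) about the screen.
    prompt = (
        "Here is a textual description of a UI screen:\n"
        f"{layout}\n"
        "Write a question-and-answer conversation between a user and an "
        "assistant about this screen."
    )
    conversation = call_llm(prompt)
    # 3. Pair the generated conversation with the screenshot to form one
    #    image-text training example.
    return {"image": screenshot_path, "conversation": conversation}
```

Because the only inputs are raw screenshots, a sketch like this could in principle be run over any dataset of UI screenshots, which is the property the abstract highlights.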