Technical Presentations Group 2: AI for Good + Business Impact/Industry
People can describe images in thousands of languages, but all of these languages refer to a single shared visual space. The aim of this work is to leverage the visual feature space to pass information across languages. We show that models trained to generate textual captions in more than one language, conditioned on an input image, can leverage their jointly trained feature space at inference time to pivot across languages. In particular, we demonstrate improved quality of a caption generated from an input image when an existing caption in a second language is also leveraged. More importantly, we show that even without conditioning on any visual input, the model implicitly learns to perform, to some extent, machine translation from one language to another through the shared visual feature space, even though the multilingual captions used for training were created independently.
Authors: Ziyan Yang (Rice University), Leticia Pinto-Alva (University of Southern California), Franck Dernoncourt (Adobe Research), and Vicente Ordóñez (Rice University)
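A minimal sketch of the kind of setup the abstract describes: a single visual feature space shared by caption decoders for several languages, where an optional encoding of a caption in a second language can be added as an extra conditioning signal. The class name, dimensions, GRU decoder, and the simple additive combination below are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class MultilingualCaptioner(nn.Module):
    """Sketch: captions in multiple languages decoded from one shared visual space."""

    def __init__(self, vocab_size, num_languages, dim=512):
        super().__init__()
        # Projects precomputed image features (e.g., 2048-d pooled CNN features)
        # into the single feature space shared by all languages.
        self.visual_proj = nn.Linear(2048, dim)
        # Language embedding tells the decoder which language to generate.
        self.lang_embed = nn.Embedding(num_languages, dim)
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, image_feats, lang_id, caption_tokens, pivot_feats=None):
        # Conditioning vector in the shared space; when an encoding of a caption
        # in a second language is available (pivot_feats, assumed to live in the
        # same space), it is simply added to refine the signal. Passing
        # pivot_feats alone (with zeroed image features) corresponds to the
        # "no visual input" translation-like setting mentioned in the abstract.
        ctx = self.visual_proj(image_feats) + self.lang_embed(lang_id)
        if pivot_feats is not None:
            ctx = ctx + pivot_feats
        h0 = ctx.unsqueeze(0)                    # initial decoder hidden state
        emb = self.token_embed(caption_tokens)   # (batch, seq_len, dim)
        hidden, _ = self.decoder(emb, h0)
        return self.out(hidden)                  # next-token logits

# Example usage with random tensors standing in for real features and tokens.
model = MultilingualCaptioner(vocab_size=10000, num_languages=2)
img = torch.randn(4, 2048)                 # pooled image features
lang = torch.tensor([0, 0, 1, 1])          # e.g., 0 = English, 1 = Spanish
tokens = torch.randint(0, 10000, (4, 12))  # teacher-forced caption tokens
logits = model(img, lang, tokens)          # (4, 12, 10000)
```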