When can transformers reason with abstract symbols?
We investigate the capabilities of transformer large language models (LLMs) on relational reasoning tasks involving abstract symbols. Such tasks have long been studied in the neuroscience literature as fundamental building blocks for more complex abilities in programming, mathematics, and verbal reasoning. For (i) regression tasks, we prove that transformers generalize when trained via gradient descent, but require astonishingly large quantities of training data. For (ii) next-token-prediction tasks with symbolic labels, we show an "inverse scaling law": transformers fail to generalize as their embedding dimension increases. For both settings (i) and (ii), we propose subtle transformer modifications which can reduce the amount of data needed by adding two trainable parameters per head.
6 authors · Oct 15, 2023
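The abstract does not spell out where the two parameters per head enter the architecture. As a minimal sketch (an assumption for illustration, not the authors' construction), one way to attach two trainable scalars to a standard attention head is a logit gain and an output gate:

```python
# Hypothetical sketch: an attention head with two extra trainable scalars.
# The paper only states that the fix adds two trainable parameters per head;
# the specific placement below (logit gain + output gate) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionHead(nn.Module):
    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_head, bias=False)
        self.k = nn.Linear(d_model, d_head, bias=False)
        self.v = nn.Linear(d_model, d_head, bias=False)
        # The two extra trainable parameters for this head.
        self.logit_gain = nn.Parameter(torch.ones(1))   # rescales attention logits
        self.output_gate = nn.Parameter(torch.zeros(1)) # gates the head's output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        attn = F.softmax(self.logit_gain * logits, dim=-1)
        return torch.sigmoid(self.output_gate) * (attn @ v)
```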
REBUS: A Robust Evaluation Benchmark of Understanding Symbols
We propose a new benchmark evaluating the performance of multimodal large language models on rebus puzzles. The dataset covers 333 original examples of image-based wordplay, cluing 13 categories such as movies, composers, major cities, and food. To achieve good performance on the benchmark of identifying the clued word or phrase, models must combine image recognition and string manipulation with hypothesis testing, multi-step reasoning, and an understanding of human cognition, making for a complex, multimodal evaluation of capabilities. We find that proprietary models such as GPT-4V and Gemini Pro significantly outperform all other tested models. However, even the best model has a final accuracy of just 24%, highlighting the need for substantial improvements in reasoning. Further, models rarely understand all parts of a puzzle, and are almost always incapable of retroactively explaining the correct answer. Our benchmark can therefore be used to identify major shortcomings in the knowledge and reasoning of multimodal large language models.
10 authors · Jan 10, 2024
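A hedged sketch of one way such a benchmark could be scored, counting an answer as correct only on exact match of the clued word or phrase after light normalization. The example structure and `answer_fn` interface are assumptions for illustration, not the authors' released code:

```python
# Hypothetical scoring loop for a REBUS-style benchmark.
import re

def normalize(text: str) -> str:
    # Lowercase and strip punctuation so "Mother-in-law!" matches "mother in law".
    return re.sub(r"[^a-z0-9 ]", " ", text.lower()).split().__str__() if False else \
           " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())

def accuracy(examples: list, answer_fn) -> float:
    # examples: list of (image, gold_phrase) pairs; answer_fn: model under test.
    correct = sum(
        normalize(answer_fn(image)) == normalize(gold)
        for image, gold in examples
    )
    return correct / len(examples)
```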
SemEval 2022 Task 12: Symlink - Linking Mathematical Symbols to their Descriptions
Given the increasing number of livestreaming videos, automatic speech recognition and post-processing for livestreaming video transcripts are crucial for efficient data management as well as knowledge mining. A key step in this process is punctuation restoration, which restores fundamental text structures such as phrase and sentence boundaries from the video transcripts. This work presents a new human-annotated corpus, called BehancePR, for punctuation restoration in livestreaming video transcripts. Our experiments on BehancePR demonstrate the challenges of punctuation restoration in this domain. Furthermore, we show that popular natural language processing toolkits are incapable of detecting sentence boundaries in non-punctuated transcripts of livestreaming videos, calling for more research effort to develop robust models for this area.
4 authors · Feb 19, 2022
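The claim about standard toolkits is easy to check. A quick sketch using NLTK (one such toolkit, chosen here as an assumption; the abstract does not name which toolkits were tested): its punkt sentence tokenizer splits on punctuation cues, so an unpunctuated transcript comes back as a single undivided chunk.

```python
# Demonstration: punctuation-based sentence tokenizers fail on
# unpunctuated transcripts such as raw ASR output.
import nltk
for pkg in ("punkt", "punkt_tab"):  # resource name differs across NLTK versions
    nltk.download(pkg, quiet=True)
from nltk.tokenize import sent_tokenize

punctuated = "Let's open the file. Now we pick a brush. Start with the outline."
unpunctuated = punctuated.replace(".", "")

print(sent_tokenize(punctuated))    # three sentences
print(sent_tokenize(unpunctuated))  # one undivided chunk
```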
Pseudo-differential operators associated with the gyrator transform on modulation spaces with Shubin-type symbols
We develop a theory of pseudo-differential operators associated with the gyrator transform on modulation spaces. The gyrator transform is a two-dimensional linear canonical transform which can be viewed as a rotation in the time-frequency plane and is closely related to the fractional Fourier transform. Motivated by the global structure of the gyrator kernel, we work with Shubin global symbol classes on R^4. We first recall basic properties of modulation spaces and establish continuity and invertibility of the gyrator transform on these spaces, using its representation as a metaplectic operator. Then we introduce pseudo-differential operators defined via the gyrator transform and a Shubin symbol, and we prove boundedness results on modulation spaces and on gyrator-based modulation-Sobolev spaces. Our work extends and generalises earlier results of Mahato, Arya and Prasad [MahatoGyrator] on Schwartz and Sobolev spaces to the more flexible framework of modulation spaces.
1 author · Dec 4, 2025
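For orientation, here is one common normalization of the two objects the abstract combines, the gyrator transform and the Shubin symbol classes; signs and the placement of the 2π factor vary across the literature, so treat this as a sketch of the standard conventions rather than the paper's exact definitions.

```latex
% One common normalization of the gyrator transform
% (sign and 2\pi conventions vary across the literature):
\[
  (G_\alpha f)(u,v) \;=\; \frac{1}{\lvert \sin\alpha \rvert}
  \iint_{\mathbb{R}^2} f(x,y)\,
  \exp\!\Bigl( 2\pi i \,
    \frac{(uv + xy)\cos\alpha - (xv + yu)}{\sin\alpha} \Bigr)\,
  dx\,dy, \qquad \alpha \notin \pi\mathbb{Z}.
\]

% Shubin global symbol class on \mathbb{R}^4: decay in all phase-space
% variables jointly, unlike Hörmander classes, which decay only in the
% covariable.
\[
  a \in \Gamma^{m}_{\rho}(\mathbb{R}^{4})
  \iff
  \bigl|\partial_z^{\gamma} a(z)\bigr|
  \le C_{\gamma}\, \langle z \rangle^{\,m - \rho\lvert\gamma\rvert}
  \quad \text{for every multi-index } \gamma,\; z \in \mathbb{R}^{4}.
\]
```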