In recent years, the natural language processing community has witnessed advances in neural representations of free-form text with transformer-based language models (LMs). Given the importance of the knowledge available in relational tables, recent research efforts extend LMs by developing neural representations for tabular data. In this tutorial, we present these proposals with three main goals. First, we aim to introduce the potential and limitations of current models to a database audience. Second, we want attendees to see the benefits of this line of work in a large variety of data applications. Third, we would like to empower the audience with a new set of tools and to inspire them to tackle some of the important directions for neural table representations, including model and system design, evaluation, application, and deployment. To achieve these goals, the tutorial is organized in two parts. The first part covers the background for neural table representations, including a survey of the most important systems. The second part is designed as a hands-on session, where attendees use their laptops to explore this new framework and test neural models involving text and tabular data.
Models and practice of neural table representations
SIGMOD/PODS 2023, ACM SIGMOD/PODS International Conference on Management of Data, 18–23 June 2023, Seattle, WA, USA
© ACM, 2023. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SIGMOD/PODS 2023, ACM SIGMOD/PODS International Conference on Management of Data, 18–23 June 2023, Seattle, WA, USA. https://doi.org/10.1145/3555041.3589411
PERMALINK: https://www.eurecom.fr/publication/7344