Large language model-assisted semantic communication systems

Guo, Shuaishuai; Wang, Yanhu; Feng, Biqian; Feng, Chenyuan
Book chapter N°9 in "Wireless Semantic Communications: Concepts, Principles, and Challenges," Wiley, 2025, ISBN: 9781394223305

Semantics deal with meaning, which is often context-dependent and tied to background knowledge. There is no universal standard for the meaning of words and phrases, making it hard to create a one-size-fits-all mathematical model. Large language models (LLMs) such as Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT) are trained on vast amounts of human natural-language data, enabling machines to better understand the semantics of human languages. This chapter discusses methods for leveraging LLMs to assist semantic communication systems. Specifically, it first discusses using an LLM to define the semantic loss of communications, based on which a signal shaping method is proposed to minimize the semantic loss for semantic communications with a small number of message candidates. It then proposes a more generalized method to quantify the semantic importance of a word/frame using an LLM and investigates semantic importance-aware communications to reliably convey semantics with limited communication and network resources. Finally, the chapter points out the use of LLMs for semantic correction as a future direction. Experiments are conducted to verify the effectiveness of leveraging LLMs to assist semantic communications.
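To make the two ideas in the abstract concrete, the sketch below shows one plausible way an LLM can (a) score the semantic loss between a transmitted and a received sentence and (b) rank the semantic importance of individual words. The `bert-base-uncased` checkpoint, the mean-pooled sentence embedding, the cosine-distance loss, and the leave-one-out importance proxy are all illustrative assumptions, not the chapter's exact formulation.

```python
# Illustrative sketch (assumptions noted above), not the chapter's method.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Mean-pooled BERT hidden states as a fixed-size sentence embedding."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)

def semantic_loss(tx: str, rx: str) -> float:
    """Semantic loss as 1 - cosine similarity of the two embeddings."""
    cos = torch.nn.functional.cosine_similarity(embed(tx), embed(rx), dim=0)
    return 1.0 - cos.item()

def word_importance(sentence: str) -> dict[str, float]:
    """Leave-one-out proxy: a word is semantically important if deleting
    it moves the sentence embedding far from the original."""
    words = sentence.split()
    return {
        w: semantic_loss(sentence, " ".join(words[:i] + words[i + 1:]))
        for i, w in enumerate(words)
    }

if __name__ == "__main__":
    # A paraphrase should incur a small loss, an unrelated sentence a large one.
    print(semantic_loss("the cat sat on the mat", "a cat is on the mat"))
    print(semantic_loss("the cat sat on the mat", "stock prices fell sharply"))
    # Content words should rank above function words.
    print(word_importance("the fire alarm in building two is ringing"))
```

Under such a scoring, a transmitter could allocate more power or stronger channel coding to the high-importance words, which is the intuition behind the importance-aware transmission the abstract describes.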


DOI:
10.1002/9781394223336.ch9
Type:
Book
Date:
2025-02-19
Department:
Communication Systems
Eurecom Ref:
8096
Copyright:
© Wiley. Personal use of this material is permitted. The definitive version of this paper was published as Chapter 9 in "Wireless Semantic Communications: Concepts, Principles, and Challenges," Wiley, 2025, ISBN: 9781394223305, and is available at: http://dx.doi.org/10.1002/9781394223336.ch9

PERMALINK : https://www.eurecom.fr/publication/8096