National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)
Detailed Record
Author: Pin-Sheng Chen (陳品升)
Title: Text Diffusion Model with Deep Learning Semantic Communication Systems (深度學習語義通訊系統的文本擴散模型)
Advisor: Min-Kuan Chang (張敏寬)
Committee members: Feng-Tsun Chien (簡鳳村), Po-Chyi Su (蘇柏齊)
Oral defense date: 2024-07-30
Degree: Master's
Institution: National Chung Hsing University (國立中興大學)
Department: Department of Electrical Engineering (電機工程學系所)
Discipline: Engineering
Field: Electrical and Computer Engineering
Document type: Academic thesis
Year of publication: 2024
Academic year of graduation: 112 (ROC calendar)
Language: English
Pages: 54
Keywords: Deep Learning, Semantic Communication, Diffusion Model, Transformer
Abstract:

In traditional communication systems, the main goal is to effectively transmit symbols and data. However, semantic communication focuses more on the transmission of semantic and contextual information related to the target. Inspired by previous work on diffusion-model semantic communication systems, we utilize diffusion models [1] and Transformers [4], leveraging their semantic encoding and deep learning capabilities in natural language processing (NLP) to prevent potential errors or semantic distortions. We design a model aimed at interpreting and understanding the transmitted information, thereby achieving a higher level of understanding and interpretation. Our system combines a text diffusion model with a semantic communication system, integrating them into a complete communication framework. At the level of the semantic encoder and decoder, we directly capture the features embedded in the text and apply the forward process of the diffusion model during training. Subsequently, we use the reverse process of the diffusion model to remove noise and restore the original information. According to the experimental results, performance indicators such as the BLEU score distribution and semantic similarity show that our proposed diffusion-model semantic communication system can effectively use the diffusion model for semantic recovery in degraded channel environments, especially at low signal-to-noise ratios, ensuring the integrity of information and sentences. In addition, the stability and reliability of our system have been verified, showing that it maintains a high level of performance under different transmission conditions. Further analysis shows that our model has significant advantages in processing semantically related information, especially in accurately capturing and conveying semantic information in complex contexts. This approach demonstrates new possibilities for semantic communication and is expected to find wide application in future communication systems, especially in scenarios that require high semantic accuracy and low error rates.
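The forward noising process the abstract refers to can be sketched in a few lines. This is a minimal DDPM-style illustration [2] on a toy embedding vector, not the thesis's actual code: the embedding dimension, the linear noise schedule, and the variable names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                               # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)         # cumulative product, \bar{alpha}_t

def forward_diffuse(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.standard_normal(16)           # toy 16-dim text embedding (assumed)
x_small = forward_diffuse(x0, 10)      # early step: mostly signal
x_large = forward_diffuse(x0, T - 1)   # late step: close to pure Gaussian noise
```

During training, a network is taught to invert this corruption step by step; at inference the learned reverse process denoises the channel-degraded embedding back toward the original sentence, which is the recovery mechanism the abstract describes.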
Table of Contents

1 Introduction 1
2 Related Work 4
3 System Model 6
3.1 Semantic Communication Systems 6
3.1.1 Semantic Encoder 8
3.1.2 Channel Encoder and Channel Decoder 11
3.1.3 Semantic Decoder 13
3.2 Diffusion Training 14
3.2.1 Forward Process 16
3.2.2 Reverse Process 18
3.2.3 Training Objective 19
3.3 Self-Conditioning 20
4 Performance Metrics 23
4.1 BLEU Score 23
4.2 Sentence Similarity 24
5 Simulation 25
5.1 Simulation Setting 25
5.2 Simulation Results 26
6 Conclusion 51
References 53
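The BLEU metric listed in Chapter 4 scores a recovered sentence against the transmitted one via n-gram precision plus a brevity penalty. The sketch below is an illustrative, dependency-free version of that family of metrics, not the thesis's evaluation code; the smoothing constant and tokenization by whitespace are assumptions.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_grams, ref_grams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, ref_grams[g]) for g, c in cand_grams.items())
        total = max(sum(cand_grams.values()), 1)
        # Floor zero overlaps so short sentences do not collapse to 0 (assumed smoothing).
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))  # brevity penalty
    return bp * math.exp(log_prec)

perfect = bleu("the cat sat on the mat", "the cat sat on the mat")
noisy = bleu("the cat sat on the mat", "the dog sat on the mat")
```

A perfect reconstruction scores 1.0, while a single substituted word lowers every n-gram precision that spans it, which is why the thesis reports the BLEU score distribution across channel conditions.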
References

[1] Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. DiffuSeq: Sequence to sequence text generation with diffusion models. In International Conference on Learning Representations (ICLR), 2023.
[2] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
[3] Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. Diffusion-LM improves controllable text generation. arXiv preprint arXiv:2205.14217, 2022.
[4] Yueling Liu, Shengteng Jiang, Yichi Zhang, Kuo Cao, Li Zhou, Boon-Chong Seet, Haitao Zhao, and Jibo Wei. Extended context-based semantic communication system for text transmission. Digital Communications and Networks, 2022.
[5] Robin Strudel, Corentin Tallec, Florent Altché, Yilun Du, Yaroslav Ganin, Arthur Mensch, Will Grathwohl, Nikolay Savinov, Sander Dieleman, Laurent Sifre, et al. Self-conditioned embedding diffusion for text generation. arXiv preprint arXiv:2211.04236, 2022.
[6] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[7] Huiqiang Xie, Zhijin Qin, Geoffrey Ye Li, and Biing-Hwang Juang. Deep learning enabled semantic communication systems. IEEE Transactions on Signal Processing, 69:2663–2675, 2021.