National Digital Library of Theses and Dissertations in Taiwan


Detailed Record

Author: Wei-Cheng Wu (吳韋成)
Title (Chinese): 在服務功能鏈中以深度Q網路為基礎之虛擬化網路功能佈置策略
Title (English): An Efficient VNF Deployment Mechanism for SFC in 5G using Deep Q-Network
Advisor: Shang-Juh Kao (高勝助)
Committee Members: I-En Liao, Fu-Min Chang
Oral Defense Date: 2021-07-01
Degree: Master's
Institution: National Chung Hsing University (國立中興大學)
Department: Department of Computer Science and Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis Type: Academic thesis
Publication Year: 2021
Academic Year: 109
Language: English
Pages: 33
Keywords (Chinese): 虛擬化網路功能; 服務功能鏈; 深度Q網路
Keywords (English): Virtualized network function; service function chaining; deep Q-network
Cited: 0 | Views: 114 | Downloads: 6
Virtualized Network Functions (VNFs) are virtualized network services that run on the Network Functions Virtualization Infrastructure (NFVI) to replace dedicated hardware appliances. The advantages of using VNFs include improved security, lower power consumption, and more available physical space. By integrating Software-Defined Networking (SDN) with Network Functions Virtualization (NFV), Internet Service Providers (ISPs) can chain VNFs together over virtual links, a composition called a Service Function Chain (SFC), to serve diverse network service demands. Many schemes have been proposed for VNF deployment under different objectives, e.g., energy, delay, Quality of Service (QoS), and Quality of Experience (QoE). From the perspectives of both ISPs and customers, however, energy consumption and QoE are the primary concerns, serving operating-expense (OPEX) reduction and user satisfaction, respectively.
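As a concrete illustration of the chaining idea, an SFC request can be modeled as an ordered list of VNFs plus the QoS demands the whole chain must satisfy. The field names and example VNF types below are illustrative assumptions, not the thesis's exact model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SFCRequest:
    """An SFC request: an ordered chain of VNF types plus the QoS demands
    (bandwidth and end-to-end delay) the chain must satisfy.
    Field names and VNF types are illustrative assumptions."""
    vnf_chain: List[str]       # ordered VNFs, e.g. ["firewall", "NAT", "IDS"]
    bandwidth_mbps: float      # bandwidth required on every virtual link
    max_delay_ms: float        # end-to-end delay budget for the whole chain

# A request asking for three chained VNFs with 100 Mb/s and a 20 ms budget:
req = SFCRequest(["firewall", "NAT", "IDS"],
                 bandwidth_mbps=100.0, max_delay_ms=20.0)
```

A deployment algorithm then has to map each element of `vnf_chain`, in order, onto physical servers while respecting both per-link bandwidth and the chain-wide delay budget.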
In this study, a QoS/QoE/energy-aware deep Q-network (DQN)-based scheme, called DQN-QQE, is proposed for VNF deployment; it considers both energy consumption and the QoE requirement while satisfying QoS constraints. DQN combines Q-learning with two neural networks, the Eval network (EN) and the Target network (TN). The EN selects an action using an ϵ-greedy policy, and the TN evaluates the action chosen by the EN, adjusting the network weights according to the reward. In reinforcement learning (RL), the agent is the algorithm that learns by interacting with the environment. Once the agent receives the bandwidth and delay requirements of an SFC request, it chooses an action according to the previously learned Q-values. The reward of the proposed scheme is formulated from the Weber-Fechner law (WFL) and the exponential interdependency of QoE and QoS (IQX) hypothesis. We then compared the proposed DQN-QQE with the QoS/QoE-aware approach DQN-Q2-SFC, brute force, and random placement in terms of energy, QoE, error rate, and processing time. The simulation results show that DQN-QQE is more stable and outperforms DQN-Q2-SFC and the random approach in energy, QoE, and error rate; its average processing time and energy consumption are 43% and 11% lower than those of DQN-Q2-SFC, respectively. Although brute force is best in energy, QoE, and error rate, its processing time is nearly 20 times that of the proposed DQN-QQE.
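A minimal sketch of the reward and action-selection pieces described above. The IQX/WFL coefficients and the energy/QoE weighting are illustrative assumptions; the thesis's actual constants are not given in the abstract:

```python
import math
import random

def qoe_iqx(qos_degradation: float, alpha: float = 4.5,
            beta: float = 1.5, gamma: float = 0.5) -> float:
    """IQX hypothesis: QoE decays exponentially as QoS degradation grows,
    QoE = alpha * exp(-beta * x) + gamma. Coefficients are assumed."""
    return alpha * math.exp(-beta * qos_degradation) + gamma

def qoe_wfl(stimulus: float, k: float = 1.0, threshold: float = 1.0) -> float:
    """Weber-Fechner law: perceived quality grows with the logarithm of the
    stimulus (e.g. allocated bandwidth) relative to a perception threshold."""
    return k * math.log(stimulus / threshold)

def reward(energy: float, qos_degradation: float,
           w_qoe: float = 0.5, w_energy: float = 0.5) -> float:
    """Reward trading off IQX-style QoE against energy consumption, as the
    abstract describes; the linear weighting is an assumption."""
    return w_qoe * qoe_iqx(qos_degradation) - w_energy * energy

def epsilon_greedy(q_values, epsilon: float = 0.1, rng=random) -> int:
    """EN's action selection: with probability epsilon explore a random
    placement, otherwise pick the action with the highest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

In a full DQN loop, the TN would periodically copy the EN's weights and supply the bootstrap targets for the temporal-difference update; that machinery is omitted here for brevity.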
Abstract (Chinese) i
Abstract ii
Contents iv
List of Tables vi
List of Figures vii
Chapter 1. Introduction 1
1.1. Research Motivation 1
1.2. Thesis Contributions and Structure 3
Chapter 2. Related Work 4
2.1. Service Function Chaining 4
2.2. Deep Q-Network 5
2.3. Related Studies 6
2.4. The Relationship between QoS and QoE 8
Chapter 3. System Architecture 10
3.1. VNF Deployment 11
3.2. Off-Idle-Active State 12
Chapter 4. DQN-based VNF Deployment Scheme 15
4.1. Markov Decision Process 15
4.2. DQN-QQE 17
4.2.1. Selection Phase 19
4.2.2. Learning Phase 22
Chapter 5. Simulation Environment and Performance Evaluation 24
Chapter 6. Conclusions and Future Work 31
References 32