臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

Detailed Record (詳目顯示)

Author: 徐志榮
Author (English): Chih-Jung Hsu
Title: 預測交通需求之分佈與數量—基於多重式注意力機制之AR-LSTMs模型
Title (English): Predicting Transportation Demand based on AR-LSTMs Model with Multi-Head Attention
Advisor: 陳弘軒
Degree: Master's
University: National Central University (國立中央大學)
Department: Institute of Software Engineering (軟體工程研究所)
Discipline: Computing
Field: Software Development
Thesis Type: Academic thesis
Publication Year: 2019
Academic Year of Graduation: 107 (AY 2018–19)
Language: Chinese
Pages: 52
Keywords (Chinese): 計程車需求預測、深度學習、遞歸神經網絡、長短期記憶模型、注意力模型
Keywords (English): Taxi Demand Prediction, Deep Learning, Recurrent Neural Networks, Long Short-Term Memory Network, Attention
Usage statistics:
  • Cited by: 3
  • Views: 595
  • Downloads: 76
  • Bookmarked: 1
Smart transportation has become an essential part of the smart city, and taxi demand prediction is an important topic within it. Effectively predicting the distribution of passenger demand at the next time step can reduce drivers' vacant cruising time, shorten passengers' waiting time, and increase the number of profitable trips, thereby maximizing the taxi industry's profit and mitigating the energy consumption and pollution caused by vehicles cruising for passengers.

Combining taxi trip records with a deep learning architecture, this thesis proposes an effective taxi passenger demand prediction model. It is built on the Long Short-Term Memory (LSTM) model, which is well suited to time-series data. Because traffic data are governed by long-period patterns, past approaches struggle to predict the transitions between peak and off-peak hours; we therefore use an attention mechanism to strengthen the handling of long-period traffic information, and design a multi-layer deep learning network architecture to improve prediction accuracy. We also define a custom loss function that considers both the mean squared error and the mean percentage error, because the mean squared error tends to under-estimate the number of ride requests in low-demand areas, while the mean percentage error tends to misestimate it in high-demand areas.
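As a rough illustration of this design only (PyTorch 1.9+; not the authors' exact AR-LSTMs architecture, which additionally uses residual connections and a deeper stack), an LSTM whose hidden states are re-weighted by multi-head self-attention before a final regression layer might be sketched as follows; the layer sizes, number of heads, and single-step output are illustrative assumptions.

import torch
import torch.nn as nn

class AttentionLSTMSketch(nn.Module):
    # Minimal sketch: an LSTM encodes the recent demand sequence of one grid
    # cell, multi-head self-attention re-weights all hidden states so that
    # long-period (e.g. daily or weekly) patterns can inform the forecast,
    # and a linear layer regresses the demand at the next time step.
    def __init__(self, n_features, hidden_size=64, n_heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden_size, n_heads, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):               # x: (batch, time, n_features)
        h, _ = self.lstm(x)             # hidden states for every time step
        ctx, _ = self.attn(h, h, h)     # self-attention over the whole window
        return self.out(ctx[:, -1])     # forecast for the next time step

Attending over all time steps, rather than relying only on the final LSTM state, is what lets predictions at peak/off-peak transitions borrow information from earlier parts of the input window.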

To verify the generality of the model, we validate it on two datasets: New York City taxi trip records and Taiwan Taxi's ride-hailing records in Taipei. In the experiments we compare conventional forecasting methods, shallow machine learning models, and deep learning models for predicting the distribution of taxi demand. The results show that our proposed multi-head AR-LSTMs prediction model effectively improves prediction accuracy.
Smart transportation is a crucial issue for a smart city, and forecasting taxi demand is one of the important topics in smart transportation. If we can effectively predict taxi demand in the near future, we may be able to reduce the taxi vacancy rate, reduce the waiting time of passengers, increase the number of trips per taxi, raise drivers' income, and diminish the power consumption and pollution caused by vehicle dispatches.

This paper proposes an efficient taxi demand prediction model based on a state-of-the-art deep learning architecture. Specifically, we use the LSTM model as the foundation because it is effective at predicting time-series data. We enhance the LSTM model with an attention mechanism so that traffic during peak hours and off-peak periods can be predicted more accurately, and we leverage a multi-layer architecture to further increase prediction accuracy. Additionally, we design a loss function that incorporates both the absolute mean-square error (which tends to under-estimate demand in low-demand areas) and the relative mean-square error (which tends to misestimate demand in high-demand areas).
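The exact formulation belongs to Section 3.4 of the thesis; as a hedged sketch of the idea only, a loss that mixes the absolute squared error with a relative (percentage-style) squared error could look like the following, where the weight alpha and the smoothing constant eps are illustrative assumptions rather than values from the paper.

import torch

def combined_demand_loss(pred, target, alpha=0.5, eps=1.0):
    # Absolute term: dominated by high-demand cells, penalises large errors.
    abs_mse = torch.mean((pred - target) ** 2)
    # Relative term: keeps low-demand cells from being systematically
    # under-estimated, but is noisier where demand is very high.
    rel_mse = torch.mean(((pred - target) / (target + eps)) ** 2)
    # alpha trades the two terms off; 0.5 is an arbitrary illustrative choice.
    return alpha * abs_mse + (1.0 - alpha) * rel_mse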

To validate our model, we conduct experiments on two real datasets: the NYC taxi demand dataset and Taiwan Taxi's taxi demand dataset in Taipei City. We compare the proposed model with non-machine-learning methods, traditional machine learning models, and deep learning models. Experimental results show that the proposed model outperforms the baseline models.
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Contributions
  1.4 Thesis Organization
Chapter 2  Preliminaries
  2.1 Recurrent Neural Network (RNN)
  2.2 Long Short-Term Memory (LSTM)
Chapter 3  Model and Methods
  3.1 Encoding of Raw Coordinate Data
  3.2 Residual Connections
  3.3 Attention Mechanism
    3.3.1 Scaled Dot-Product Attention
    3.3.2 Multi-Head Attention
  3.4 Loss Function
  3.5 Residual-LSTMs Prediction Model
  3.6 Attention-Residual-LSTMs Prediction Model
Chapter 4  Experimental Results and Analysis
  4.1 Datasets
    4.1.1 NYC Taxi
    4.1.2 Taiwan Taxi (台灣大車隊)
  4.2 Experimental Environment
  4.3 Baseline Models and Methods
    4.3.1 Historical Average
    4.3.2 ARIMA
    4.3.3 XGBoost
    4.3.4 Linear Regression
    4.3.5 DMVST-Net
  4.4 Evaluation Metrics
  4.5 Experimental Results
    4.5.1 Analysis of Results
    4.5.2 Comparison across Time Periods
Chapter 5  Related Work
  5.1 LSTM-MDN-Conditional
  5.2 DMVST-Net
Chapter 6  Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
References
Appendix A