Please use this permanent URL to cite or link to this document:
http://nhuir.nhu.edu.tw/handle/987654321/29214
Title: 運用直交表因素配置進行循環神經網路之最佳化研究
Other Title: Using the Factor Configuration of an Orthogonal Array to Optimize a Recurrent Neural Network
Author: GUO, YU-KUEN (郭育坤)
Contributors: Department of Information Management; Advisor: LU, HAI-WEN (陸海文)
Keywords: convolutional neural network; recurrent neural network; Taguchi quality engineering; optimized parameters
Date: 2022
Upload Date: 2022-09-06 14:40:48 (UTC+8)
Abstract: Recognition systems have become an indispensable part of contemporary society. As technology has advanced rapidly, from the early development of artificial intelligence to deep learning becoming the mainstream approach today, deep learning computation has grown increasingly complex; at the same time, recognition capability has improved as deep learning techniques have matured. This study used a convolutional neural network (CNN) for feature extraction, combined with a long short-term memory (LSTM) model for prediction; such a model is a type of recurrent neural network (RNN). The model was executed in the Colaboratory environment on Google Drive, with a GPU accelerator used to speed up code execution. A Taguchi quality-engineering experimental design was used to investigate the control factors and levels affecting accuracy. The six factors were the activation function of the convolutional layer, the pooling layer size, the activation function of the output layer, the loss function, the optimizer function, and the accuracy evaluation metric. The experiment used an L12 orthogonal array, and the signal-to-noise (S/N) ratios, mean response graphs, and response tables were computed from the results to obtain the optimal parameter configuration. A confirmation experiment showed that the optimal parameter combination was: convolutional-layer activation function elu, pooling layer size 3 × 3, output-layer activation function tanh, loss function binary_crossentropy, optimizer function adam, and accuracy evaluation metric accuracy.
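The sketch below illustrates the kind of CNN-LSTM configuration and larger-the-better S/N ratio described in the abstract. It is a minimal illustration assuming a Keras/TensorFlow implementation; the input shape, filter count, and LSTM width are illustrative assumptions and not values reported in the thesis. Only the six optimized factor levels (elu, 3 × 3 pooling, tanh, binary_crossentropy, adam, accuracy) come from the abstract.

    # Minimal sketch, assuming a Keras/TensorFlow implementation of the
    # CNN-LSTM model; input shape and layer widths are illustrative assumptions.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn_lstm(input_shape=(64, 64, 1)):
        # Optimal factor levels reported in the abstract:
        #   conv activation = elu, pooling size = 3x3, output activation = tanh,
        #   loss = binary_crossentropy, optimizer = adam, metric = accuracy
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, (3, 3), activation="elu"),  # convolutional-layer activation: elu
            layers.MaxPooling2D(pool_size=(3, 3)),        # pooling-layer size: 3x3
            layers.Reshape((-1, 32)),                     # flatten spatial dims into a sequence for the LSTM
            layers.LSTM(64),                              # LSTM stage for sequence modelling
            layers.Dense(1, activation="tanh"),           # output-layer activation: tanh
        ])
        model.compile(loss="binary_crossentropy",         # loss function
                      optimizer="adam",                    # optimizer function
                      metrics=["accuracy"])                # accuracy evaluation metric
        return model

    def sn_ratio_larger_is_better(y):
        # Taguchi larger-the-better S/N ratio, the usual choice when accuracy
        # is the response: S/N = -10 * log10( mean(1 / y_i^2) )
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

With one such S/N value per L12 run, averaging the values at each factor level yields the mean response table and graph from which the optimal level of each factor is read off.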
Appears in Collections: [Department of Information Management] Theses and Dissertations
Files in This Item:
File | Description | Size | Format | Views
110NHU00396012-001.pdf | | 3345Kb | Adobe PDF | 399
index.html | | 0Kb | HTML | 278
All items in NHUIR are protected by original copyright.