Nanhua University Institutional Repository: Item 987654321/29214
    Please use this identifier to cite or link to this item: http://nhuir.nhu.edu.tw/handle/987654321/29214


    Title: 運用直交表因素配置進行循環神經網路之最佳化研究
    Other Titles: Using the Factor Configuration of an Orthogonal Array to Optimize a Recurrent Neural Network
    Authors: 郭育坤
    GUO, YU-KUEN
    Contributors: Department of Information Management
    陸海文
    LU, HAI-WEN
    Keywords: convolutional neural network; recurrent neural network; Taguchi quality engineering; optimized parameters
    Date: 2022
    Issue Date: 2022-09-06 14:40:48 (UTC+8)
    Abstract: Recognition systems have become an indispensable part of contemporary society. As technology has advanced rapidly, from the early development of artificial intelligence to deep learning becoming today's mainstream, deep learning computation has grown increasingly complex; at the same time, recognition capability has improved as deep learning techniques have matured. This study employed a convolutional neural network (CNN) for feature extraction, combined with a long short-term memory (LSTM) model, a type of recurrent neural network (RNN), for prediction. The model was executed in the Google Colaboratory environment on Google Drive, with a GPU accelerator used to speed up code execution. A Taguchi quality-engineering experimental design was used to investigate the control factors and levels affecting accuracy. Six factors were considered: the activation function of the convolutional layer, the size of the pooling layer, the activation function of the output layer, the loss function, the optimizer, and the accuracy-evaluation metric. The experiment used an L12 orthogonal array; the signal-to-noise (S/N) ratios of the experimental results were computed, and response graphs and response tables of the mean S/N ratios were produced to obtain the optimal parameter configuration. A confirmation experiment identified the optimal parameter combination as follows: convolutional-layer activation function elu, pooling size 3 × 3, output-layer activation function tanh, loss function binary_crossentropy, optimizer adam, and evaluation metric accuracy.
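    The Taguchi analysis described above can be sketched in a few lines. For an accuracy response, a larger-the-better S/N ratio is typically used, S/N = −10·log₁₀((1/n)·Σ 1/yᵢ²), and the response table averages the S/N ratios over the runs at each level of a factor; the level with the highest mean S/N is selected. The runs, factor names, and accuracy values below are hypothetical placeholders for illustration, not the thesis's actual L12 data:

    ```python
    import math

    def sn_larger_is_better(ys):
        """Larger-the-better S/N ratio: -10 * log10(mean(1 / y_i^2))."""
        return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

    # Hypothetical runs: each records the level used for one factor
    # (here, the convolutional-layer activation) and replicate accuracies.
    runs = [
        {"conv_activation": "elu",  "accuracy": [0.91, 0.93]},
        {"conv_activation": "elu",  "accuracy": [0.90, 0.92]},
        {"conv_activation": "relu", "accuracy": [0.88, 0.89]},
        {"conv_activation": "relu", "accuracy": [0.87, 0.90]},
    ]

    def response_table(runs, factor):
        """Mean S/N ratio per level of the given factor."""
        by_level = {}
        for run in runs:
            by_level.setdefault(run[factor], []).append(
                sn_larger_is_better(run["accuracy"]))
        return {level: sum(v) / len(v) for level, v in by_level.items()}

    table = response_table(runs, "conv_activation")
    best = max(table, key=table.get)  # level with the highest mean S/N
    ```

    In a full L12 study, the same response-table computation is repeated for each of the six factors over the twelve runs, and the combination of per-factor best levels is then checked with a confirmation experiment, as the thesis describes.
    
    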
    Appears in Collections: [Department of Information Management] Dissertations and Theses

    Files in This Item:

    File                      Size     Format
    110NHU00396012-001.pdf    3345 KB  Adobe PDF
    index.html                0 KB     HTML


    All items in NHUIR are protected by copyright, with all rights reserved.


    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team