    Please use this permanent URL to cite or link to this item: http://nhuir.nhu.edu.tw/handle/987654321/29879


    Title: 基於深度學習的中醫虛證辨證
    Other Titles: Deep Learning Based Deficiency Syndrome Differentiation in Traditional Chinese Medicine
    Authors: 劉家佑 (LIU, CHIA-YU)
    Contributors: 資訊管理學系 (Department of Information Management); 邱宏彬 (CHIU, HUNG-PIN)
    Keywords: TCM symptoms; patterns; pattern identification; word vectors; deep learning; TF-IDF
    Date: 2023
    Upload time: 2023-11-16 15:31:35 (UTC+8)
    Abstract: This study addresses the classification problem in Traditional Chinese Medicine (TCM) pattern identification: inferring the cause of a disease and selecting the corresponding treatment based on a patient's symptoms and patterns. Because TCM pattern identification is nonlinear, fuzzy, and complex, studying it with information technology is challenging.
    The study proposes a TCM pattern identification model that combines symptom word vectors with a deep learning network. Raw symptom descriptions are first converted into standardized symptoms to overcome variation in TCM terminology. Word2Vec then maps the standardized symptoms into a Chinese word vector space to capture contextual information between terms. A neural network model learns the interactions between symptom sets and patterns, so that an input symptom set can be mapped accurately to its corresponding pattern.
    Experiments were conducted on a dataset of 18 deficiency patterns and 50 clinical case reports. Four models were trained on the multi-label task to evaluate the performance of deep learning networks in TCM pattern identification: one-hot encoding, an embedding layer, Word2Vec, and TF-IDF. Each model performed best with the following hyperparameters: the one-hot encoding model with an alignment length of 25 and 128 neurons; the embedding layer model with a vector dimension of 16, an alignment length of 25, and 128 neurons; the Word2Vec model with a window size of 4, a vector dimension of 16, an alignment length of 25, and 128 neurons; and the TF-IDF model with an alignment length of 25 and 128 neurons. Among the four, the model using Word2Vec pre-trained word vectors performed best, confirming its stronger representational capacity for pattern identification.
    The proposed model has auxiliary value for TCM research. It can effectively identify specific deficiency patterns and can supply the pattern information that clinical records often lack for deep learning. This helps improve the accuracy and efficiency of TCM pattern identification, gives physicians a more reliable support tool, and promotes the integration of TCM research with machine learning.
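    The abstract names the pipeline components and best hyperparameters (standardized symptom terms, Word2Vec vectors of dimension 16 with window size 4, an alignment length of 25, a 128-neuron layer, and an 18-pattern multi-label output) but not an implementation. The following is a minimal sketch, assuming a gensim/Keras setup, of how such a pipeline could be wired together; the cases, labels, and encode helper are hypothetical placeholders, and the skip-gram setting and fully connected architecture are assumptions rather than the thesis's actual code.

    # A minimal sketch, not the thesis implementation: the sample cases, labels,
    # and the encode() helper below are illustrative placeholders.
    import numpy as np
    from gensim.models import Word2Vec
    from tensorflow.keras import layers, models

    # Hypothetical standardized symptom sequences (one list of terms per case)
    # and their deficiency-pattern labels (18 patterns in the study, multi-label).
    cases = [
        ["倦怠乏力", "食慾不振", "面色萎黃"],
        ["口乾", "五心煩熱", "盜汗"],
    ]
    labels = np.array([
        [1, 0] + [0] * 16,   # e.g. a qi-deficiency pattern (illustrative)
        [0, 1] + [0] * 16,   # e.g. a yin-deficiency pattern (illustrative)
    ])

    VECTOR_DIM = 16   # best word-vector dimension reported in the abstract
    WINDOW_SIZE = 4   # best Word2Vec window size reported in the abstract
    ALIGN_LEN = 25    # best alignment (padding) length reported in the abstract
    N_PATTERNS = 18   # number of deficiency patterns in the dataset

    # 1. Learn symptom word vectors with Word2Vec (skip-gram here is an assumption).
    w2v = Word2Vec(sentences=cases, vector_size=VECTOR_DIM,
                   window=WINDOW_SIZE, min_count=1, sg=1)

    # 2. Encode each case as a fixed-length sequence of vectors, padded to ALIGN_LEN.
    def encode(case):
        vecs = [w2v.wv[t] for t in case if t in w2v.wv][:ALIGN_LEN]
        vecs += [np.zeros(VECTOR_DIM)] * (ALIGN_LEN - len(vecs))
        return np.stack(vecs)

    X = np.stack([encode(c) for c in cases])   # shape: (n_cases, 25, 16)

    # 3. A dense network with 128 neurons and a sigmoid multi-label output.
    model = models.Sequential([
        layers.Input(shape=(ALIGN_LEN, VECTOR_DIM)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(N_PATTERNS, activation="sigmoid"),  # one probability per pattern
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, labels, epochs=5, verbose=0)

    A sigmoid output trained with binary cross-entropy treats each of the 18 deficiency patterns as an independent label, matching the multi-label formulation described in the abstract; a one-hot encoding or TF-IDF baseline would replace steps 1 and 2 with the corresponding case representation while keeping the same network.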
    Appears in Collections: [Department of Information Management] Master's and Doctoral Theses

    Files in this item:

    File                      Size      Format      Views
    111NHU00396004-004.pdf    2300 KB   Adobe PDF   171
    index.html                0 KB      HTML        195


    All items in NHUIR are protected by copyright.

