
Short text automatic scoring system based on BERT-BiLSTM model

Journal of Shenzhen University Science and Engineering [ISSN: 1000-2618 / CN: 44-1401/N]

Issue:
2022 Vol.39 No.3 (237-362)
Page:
349-354
Research Field:
Electronics and Information Science

Info

Title:
Short text automatic scoring system based on BERT-BiLSTM model
Author(s):
XIA Linzhong, YE Jianfeng, LUO De’an, GUAN Mingxiang, LIU Jun and CAO Xuemei
Engineering Applications of Artificial Intelligence Technology Laboratory, Shenzhen Institute of Information Technology, Shenzhen 518172, Guangdong Province, P. R. China
Keywords:
signal and information processing; natural language processing; BERT language model; short text automatic scoring; long short-term memory network; quadratic weighted kappa coefficient
PACS:
TP18;H08
DOI:
10.3724/SP.J.1249.2022.03349
Abstract:
To address the problems of sparse features, polysemy and limited contextual information in short text automatic scoring, a short text automatic scoring model based on bidirectional encoder representations from transformers - bidirectional long short-term memory (BERT-BiLSTM) is proposed. First, the bidirectional encoder representations from transformers (BERT) language model is pre-trained on a large-scale corpus to acquire general-language semantic features. The pre-trained model is then fine-tuned on the short text data of the downstream scoring task, so that the semantic features of short texts and the meanings of keywords in their specific context are captured. Next, a bidirectional long short-term memory (BiLSTM) network captures the deeper context dependencies. Finally, the resulting feature vectors are fed into a Softmax regression model for automatic scoring. The experimental results show that, compared with the benchmark models of convolutional neural networks (CNN), character-level CNN (CharCNN), long short-term memory (LSTM) and BERT, the BERT-BiLSTM short text automatic scoring model achieves the best average quadratic weighted kappa coefficient.
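As an illustration of the pipeline described in the abstract, the following is a minimal sketch in PyTorch with the Hugging Face transformers library. The class name BertBiLSTMScorer, the hidden size, the number of score levels and the mean-pooling step are assumptions for illustration only; the paper's exact architecture and hyper-parameters may differ.

    import torch
    import torch.nn as nn
    from transformers import BertModel, BertTokenizer

    class BertBiLSTMScorer(nn.Module):
        """Illustrative BERT-BiLSTM scorer; names and sizes are assumptions."""
        def __init__(self, num_scores=4, lstm_hidden=128,
                     bert_name="bert-base-uncased"):
            super().__init__()
            # Pre-trained BERT provides general-language semantic features;
            # fine-tuning on the short-text data adapts word senses to context.
            self.bert = BertModel.from_pretrained(bert_name)
            # A BiLSTM over the token representations captures deeper,
            # bidirectional context dependencies.
            self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                                  batch_first=True, bidirectional=True)
            # A linear layer followed by softmax acts as the softmax regression
            # head that maps the feature vector to score probabilities.
            self.classifier = nn.Linear(2 * lstm_hidden, num_scores)

        def forward(self, input_ids, attention_mask):
            feats = self.bert(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
            lstm_out, _ = self.bilstm(feats)   # (batch, seq_len, 2*lstm_hidden)
            pooled = lstm_out.mean(dim=1)      # mean pooling is an assumption
            return torch.softmax(self.classifier(pooled), dim=-1)

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertBiLSTMScorer()
    batch = tokenizer(["A short answer to be scored."],
                      return_tensors="pt", padding=True, truncation=True)
    probs = model(batch["input_ids"], batch["attention_mask"])
    print(probs.shape)  # torch.Size([1, 4]): one probability per score level

In training, the unnormalized logits would normally be passed to a cross-entropy loss so that BERT and the BiLSTM are fine-tuned jointly on the downstream scoring data.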
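The evaluation metric named in the abstract, the quadratic weighted kappa (QWK) coefficient, can be computed with scikit-learn's cohen_kappa_score; the score vectors below are made-up examples, not data from the paper.

    from sklearn.metrics import cohen_kappa_score

    # Made-up human-assigned and model-predicted score levels for illustration.
    human_scores = [0, 1, 2, 3, 2, 1]
    model_scores = [0, 1, 2, 2, 2, 1]

    # weights="quadratic" penalizes a disagreement by the squared distance
    # between score levels, so being two levels off costs four times as
    # much as being one level off.
    qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
    print(f"QWK = {qwk:.3f}")  # 1.0 is perfect agreement, 0 is chance level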

References:

[1] DIKLI S. An overview of automated scoring of essays [J]. Journal of Technology, Learning, and Assessment, 2006, 5(1): 1-35.
[2] PAGE E B. The imminence of grading essays by computer [J]. Phi Delta Kappan, 1966, 48: 238-243.
[3] LEACOCK C, CHODOROW M. C-rater: automated scoring of short-answer questions [J]. Computers and the Humanities, 2003, 37(4): 389-405.
[4] DEERWESTER S, DUMAIS S T, FURNAS G W, et al. Indexing by latent semantic analysis [J]. Journal of the American Society for Information Science, 1990, 41(6): 391-407.
[5] BLEI D M, NG A Y, JORDAN M I. Latent Dirichlet allocation [J]. Journal of Machine Learning Research, 2003, 3(4/5): 993-1022.
[6] TANG D. Sentiment-specific representation learning for document-level sentiment analysis [C] // Proceedings of the 8th International Conference on Web Search and Data Mining. Shanghai, China: ACM, 2015: 447-452.
[7] PANG B, LEE L. Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales [C] // Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. Ann Arbor, USA: ACL, 2005: 115-124.
[8] LEE K, HAN S, MYAENG S H. A discourse-aware neural network-based text model for document-level text classification [J]. Journal of Information Science, 2018, 44(6): 715-735.
[9] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space [EB/OL]. (2013-09-07) [2021-08-05]. https://arxiv.org/abs/1301.3781.
[10] COLLOBERT R, WESTON J, BOTTOU L, et al. Natural language processing (almost) from scratch [J]. Journal of Machine Learning Research, 2011, 12: 2493-2537.
[11] PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing. Doha, Qatar: ACL, 2014: 1532-1543.
[12] HOCHREITER S, SCHMIDHUBER J. Long short-term memory [J]. Neural Computation, 1997, 9(8): 1735-1780.
[13] RAN Xiangdong, SHAN Zhiguang, FANG Yufei, et al. An LSTM-based method with attention mechanism for travel time prediction [J]. Sensors, 2019, 19(4): 861.
[14] GRAVES A, SCHMIDHUBER J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures [J]. Neural Networks, 2005, 18(5/6): 602-610.
[15] BIN Yi, YANG Yang, SHEN Fumin, et al. Describing video with attention-based bidirectional LSTM [J]. IEEE Transactions on Cybernetics, 2019, 49(7): 2631-2641.
[16] LIU Huan, ZHANG Zhixiong, WANG Yufei. A review on main optimization methods of BERT [J]. Data Analysis and Knowledge Discovery, 2021, 5(1): 3-15. (in Chinese)
[17] FANG Xiaodong, LIU Changhui, WANG Liya, et al. Chinese text classification based on BERT’s composite network model [J]. Journal of Wuhan Institute of Technology, 2020, 42(6): 688-692. (in Chinese)
[18] DUAN Dandan, TANG Jiashan, WEN Yong, et al. Chinese short text classification algorithm based on BERT model [J]. Computer Engineering, 2021, 47(1): 79-86. (in Chinese)
[19] DEVLIN J, CHANG Mingwei, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [C]// Proceedings of NAACL-HLT. Minneapolis, USA: ACL, 2019: 4171-4186.
[20] SU Jing, DAI Qingyun, GUERIN F, et al. BERT-hLSTMs: BERT and hierarchical LSTMs for visual storytelling [J]. Computer Speech & Language, 2021, 67: 1-14.
[21] XIA Linzhong, LUO De’an, LIU Jun, et al. Attention-based two-layer long short-term memory model for automatic essay scoring [J]. Journal of Shenzhen University Science and Engineering, 2020, 37(6): 559-566. (in Chinese)
