SB_NITK at MEDIQA 2021: Leveraging Transfer Learning for Question Summarization in Medical Domain

Date

2021

Publisher

Association for Computational Linguistics (ACL)

Abstract

Recent strides in the healthcare domain have resulted in vast quantities of streaming data available for building intelligent knowledge-based applications. However, the challenges introduced by the huge volume, velocity of generation, variety, and variability of this medical data have to be adequately addressed. In this paper, we describe the model and results for our submission to the MEDIQA 2021 Question Summarization shared task. To improve the performance of consumer health question summarization, our method explores transfer learning with pre-trained NLP Transformers such as BART, T5, and PEGASUS. The proposed models leverage the knowledge of these pre-trained Transformers to achieve improved results compared to conventional deep learning models such as LSTMs and RNNs. Our team SB_NITK ranked 12th among 22 submissions in the official final rankings. Our BART-based model achieved a ROUGE-2 F1 score of 0.139. © 2021 Association for Computational Linguistics
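The abstract reports performance as ROUGE-2 F1, which scores a generated summary by its bigram overlap with a reference summary. A minimal sketch of that computation is below; the example question/summary pair is hypothetical, and this simplified version skips the stemming and tokenization details of the official ROUGE package.

```python
from collections import Counter

def bigrams(text):
    """Return a multiset of lowercase word bigrams."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def rouge2_f1(candidate, reference):
    """ROUGE-2 F1: harmonic mean of bigram precision and recall."""
    cand, ref = bigrams(candidate), bigrams(reference)
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped bigram matches
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical consumer health question and model summary:
reference = "what are the side effects of metformin"
candidate = "side effects of metformin"
print(round(rouge2_f1(candidate, reference), 3))  # → 0.667
```

Here precision is 1.0 (all 3 candidate bigrams appear in the reference) and recall is 0.5 (3 of 6 reference bigrams are covered), giving an F1 of 2/3.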

Citation

Proceedings of the 20th Workshop on Biomedical Language Processing (BioNLP 2021), 2021, pp. 273-279
