Natural Language Inference: Detecting Contradiction and Entailment in Multilingual Text

dc.contributor.author: Sree Harsha, S.
dc.contributor.author: Krishna Swaroop, K.
dc.contributor.author: Chandavarkar, B.R.
dc.date.accessioned: 2026-02-06T06:36:10Z
dc.date.issued: 2021
dc.description.abstract: Natural Language Inference (NLI) is the task of characterising the inferential relationship between a natural language premise and a natural language hypothesis. The premise and the hypothesis can be related in three distinct ways: the hypothesis may be a logical conclusion that follows from the premise (entailment), the hypothesis may be false given the premise (contradiction), or the hypothesis and the premise may be unrelated (neutral). A robust and reliable NLI system serves as a suitable evaluation measure for true natural language understanding and enables its use in several modern-day application scenarios. We propose a novel technique for the NLI task by leveraging the recently proposed Bidirectional Encoder Representations from Transformers (BERT). We utilize a robustly optimized variant of BERT, integrate a contextualized definition embedding mechanism, and incorporate global average pooling into our proposed NLI system. We use several benchmark datasets, including a dataset containing premise-hypothesis pairs from 15 different languages, to systematically evaluate the performance of our model and show that it yields superior results. © 2021, Springer Nature Switzerland AG.
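The paper's exact architecture is not reproduced in this record; as an illustration of the global average pooling step the abstract mentions, the sketch below mean-pools per-token encoder outputs into a single sentence vector while excluding padding tokens. The function name, array shapes, and mask convention are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def global_average_pool(hidden_states, attention_mask):
    """Mean-pool token embeddings into one fixed-size sentence vector.

    hidden_states: (seq_len, hidden_dim) array of token representations
                   from a transformer encoder such as a BERT variant.
    attention_mask: (seq_len,) array, 1 for real tokens, 0 for padding.
    Padding positions are excluded from the average.
    """
    mask = attention_mask[:, None].astype(float)   # (seq_len, 1)
    summed = (hidden_states * mask).sum(axis=0)    # sum over real tokens only
    count = mask.sum()                             # number of real tokens
    return summed / count

# Toy usage: two real tokens, one padding token.
h = np.array([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]])
m = np.array([1, 1, 0])
pooled = global_average_pool(h, m)  # averages only the first two rows
```

The pooled vector would then typically feed a small classification head that predicts entailment, contradiction, or neutral.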
dc.identifier.citation: Communications in Computer and Information Science, 2021, Vol. 1483, p. 314-327
dc.identifier.issn: 1865-0929
dc.identifier.uri: https://doi.org/10.1007/978-3-030-91244-4_25
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/30306
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.subject: BERT
dc.subject: Natural Language Processing
dc.subject: Transformers
dc.title: Natural Language Inference: Detecting Contradiction and Entailment in Multilingual Text