Enhancing Scholarly Understanding: A Comparison of Knowledge Injection Strategies in Large Language Models
De Leo V.; Fenu G.; Reforgiato Recupero Diego; Secchi L.
2023-01-01
Abstract
Transformer-based models such as BERT have achieved remarkable performance across multiple natural language processing domains. However, these models face challenges when dealing with highly specialized domains, such as scientific literature. In this paper, we conduct a comprehensive analysis of knowledge injection strategies for transformers in the scientific domain, evaluating four distinct methods for injecting external knowledge into transformers. We assess these strategies on a single-label multi-class classification task involving scientific papers. To this end, we develop a public benchmark based on 12k scientific papers from the AIDA knowledge graph, categorized into three fields, and we use the Computer Science Ontology as our external knowledge source. Our findings indicate that most of the proposed knowledge injection techniques outperform the BERT baseline.
File | Access | Type | Size | Format
---|---|---|---|---
Enhancing Scholarly Understanding A Comparison of Knowledge Injection Strategies in Large Language Models - paper-7.pdf | open access | publisher's version (VoR) | 437.05 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.