
Specializing LLMs to Low-Documented Domains with RAG: An Analysis Across Models and Retrieval Depths

Mereu, Jacopo; Carcangiu, Alessandro; Artizzu, Valentino; Cau, Federico Maria; Spano, Lucio Davide
2026-01-01

Abstract

Large Language Models (LLMs) are increasingly used to support technical tasks such as software development. However, they often struggle in low-documented or fast-evolving domains, where missing training data leads to inaccurate or incomplete responses. This paper presents a reproducible pipeline based on Retrieval-Augmented Generation to specialize LLMs for such domains by integrating curated external knowledge. We detail a systematic process to build a high-quality Q&A dataset from public instructional sources and developer forums and apply it to the Unity XR Interaction Toolkit (XRIv2) as a case study. We construct a domain-specific benchmark of 101 question-answer pairs based on real learning resources and evaluate five open and proprietary LLMs (GPT-3.5-Turbo, GPT-4o Mini, LLaMA2, LLaMA3, and Mistral) under varying retrieval settings. Results show that standard automatic metrics (e.g., METEOR) struggle to detect quality differences, while LLM-as-a-Judge evaluations reveal significant model-specific improvements as more documents are retrieved. Our findings offer practical guidance for tuning retrieval strategies and highlight the potential for generalizing this approach to other technical domains requiring targeted LLM specialization.
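
As a rough illustration of the retrieval-augmented prompting idea evaluated in the paper, the following minimal Python sketch builds a prompt from the top-k retrieved documents for varying retrieval depths. The corpus, the lexical scoring function, and the prompt layout are illustrative assumptions and do not reproduce the paper's actual pipeline or dataset.

```python
# Minimal sketch of retrieval-augmented prompting with a variable
# retrieval depth (top-k). Corpus, scoring, and prompt layout are
# illustrative assumptions, not the paper's actual implementation.
from collections import Counter

corpus = [
    "XR Grab Interactable components let objects be picked up by controllers.",
    "XR Ray Interactor casts a ray to select distant interactables.",
    "Socket interactors snap objects into predefined attach points.",
]

def score(query: str, doc: str) -> int:
    """Crude lexical-overlap score between query and document terms."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def build_prompt(question: str, k: int) -> str:
    """Retrieve the top-k documents and prepend them as context."""
    ranked = sorted(corpus, key=lambda doc: score(question, doc), reverse=True)
    context = "\n".join(f"- {doc}" for doc in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    for k in (1, 2, 3):  # varying retrieval depth, as in the evaluation
        print(build_prompt("How do I grab objects with a controller?", k))
        print("---")
```

In a real setting the lexical overlap would be replaced by an embedding-based retriever and the prompt sent to each evaluated LLM, with k swept over the retrieval depths under study.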
Large language model; Question answering; Retrieval augmented generation; Domain adaptation; Extended reality; Text generation metric evaluation
Files in this record:

s42979-026-04844-6.pdf

Access: open access

Type: published version (VoR)
Size: 8.23 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/480485
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available