A Machine Learning and Large Language Model-Integrated Approach to Research Project Evaluation

Jian Ma, Zhimin Zheng, Peihu Zhu, Zhaobin Liu
Copyright © 2024 | Pages: 14
DOI: 10.4018/JDM.345400

Abstract

Research project evaluation upon completion is one of the important tasks of research management in government funding agencies and research institutions. Because of the increasing number of funded projects, it is hard to find qualified reviewers in the same research disciplines. This paper proposes a machine learning and large language model-integrated approach to provide decision support for research project evaluation. Machine learning algorithms are proposed to compute the weights and scores of key performance indicators (KPIs) based on the evaluation results of completed projects, while large language models are used to summarize the research contributions or findings reported in project reports. Domain experts are then invited to consolidate the weights and scores of the KPIs and to assess the novelty and impact of the research contributions or findings. Experiments have been conducted in practical settings, and the results show that the proposed method can greatly improve research management efficiency and provide more consistent evaluation results for funded research projects.

Introduction

Research project evaluation is an important task in government funding agencies and research institutions (Wang et al., 2013). Key performance factors used in evaluating research projects include: 1) scientific merit, that is, the novelty and impact of the research project; 2) relevance of the research to the mission and priorities of the funding scheme; and 3) outputs of the research project, that is, academic publications, awards, invention patents, and collaborations with other researchers, institutions, and industries.

Peer review methods are widely used in evaluating research projects, where domain experts are invited to evaluate the scientific merit, relevance, and research outputs of completed research projects (Bence & Oppenheim, 2004). Scientific review panels may be invited to make final decisions on undecided evaluation results. Hence, reviewer assignment plays a significant role in controlling the quality of project evaluation (Thushari et al., 2014), and many approaches have been proposed to support reviewer assignment (Liu et al., 2016), such as heuristic algorithms (Cook et al., 2005) and hybrid knowledge-and-model approaches (Sun et al., 2008). Metrics-based approaches, another main type of research project evaluation method, assess research projects against specific metrics such as the number and quality of publications, awards, and patents generated by the project.

Research information systems are decision support systems developed for government funding agencies to support research management: they track the progress of research projects, monitor research outputs, and evaluate the impact of research projects. These systems streamline the evaluation process, ensure transparency, and facilitate knowledge sharing among researchers in universities and industries. With the increasing number of funded research projects, it is becoming difficult to find relevant peer reviewers in the subject disciplines to evaluate the projects; as a result, evaluation results may be inconsistent. Current research information systems focus mainly on quantitative analysis that evaluates research projects based on statistical information about their research outputs (Donovan, 2007). Existing research evaluation approaches thus ignore the qualitative assessment of the novelty and impact of research projects and of their relevance to the funding scheme.

This paper proposes an integrated approach that leverages machine learning techniques and large language models for the evaluation of research projects. Machine learning methods are employed to calculate the weights and scores of key performance indicators (KPIs) for quantitative analysis. Large language models are utilized to process the textual content of project reports and summarize the research contributions or findings of the projects for qualitative assessment.
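This preview does not disclose the exact algorithms, but the quantitative step can be illustrated with a minimal sketch. Assuming, purely for illustration, that KPI weights are learned by regressing expert panels' historical overall scores on per-KPI scores of completed projects with a non-negativity constraint, the computation might look like the following; the KPI names and all numbers are hypothetical.

```python
# Illustrative sketch only: the paper's actual weighting algorithm is not
# specified in this preview. Here, KPI weights are estimated by non-negative
# least squares against historical expert scores, then normalised to sum to 1.
import numpy as np
from scipy.optimize import nnls

# Rows: previously evaluated projects; columns: hypothetical KPI scores
# (publications, patents, awards, collaborations), each scaled to [0, 1].
kpi_scores = np.array([
    [0.9, 0.2, 0.5, 0.7],
    [0.6, 0.8, 0.3, 0.4],
    [0.4, 0.1, 0.9, 0.6],
    [0.8, 0.5, 0.6, 0.9],
    [0.3, 0.7, 0.2, 0.5],
])

# Overall scores assigned by expert panels to the same projects (illustrative).
overall_scores = np.array([0.75, 0.55, 0.50, 0.80, 0.45])

# Solve overall approx. equal to kpi_scores @ weights, with weights >= 0.
raw_weights, _ = nnls(kpi_scores, overall_scores)
weights = raw_weights / raw_weights.sum()
print("Estimated KPI weights:", np.round(weights, 3))

# A new project's machine-generated score is the weighted sum of its KPI scores;
# domain experts would then consolidate this score rather than accept it as final.
new_project = np.array([0.7, 0.4, 0.6, 0.8])
print("Machine-generated score:", round(float(new_project @ weights), 3))
```

The qualitative step sits alongside this sketch: a large language model summarizes the research contributions or findings from the project report, and the summary is handed to domain experts for the novelty and impact assessment rather than being scored automatically.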

Experiments have been conducted in practical settings, and the results show that the proposed decision support framework can assist decision-makers in achieving more consistent evaluation results for funded research projects and significantly improves the efficiency of research management.
