Comparison of the scholarly impact of CLEF and TREC
This study examined the scholarly impact of two evaluation campaigns and their associated conferences, the Text REtrieval Conference (TREC) and the Conference and Labs of the Evaluation Forum (CLEF), using both scientometric and bibliometric analyses. The bibliometric databases OpenAlex and Semantic Scholar were used: bibliographic data and metadata were collected through their APIs and further enriched for analysis with GeneRation Of BIbliographic Data (GROBID). The hypothesis that TREC has lost relevance relative to CLEF in recent years was tested against the data collected from these databases, and the observed discrepancies were evaluated quantitatively.
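As a rough illustration of this data collection step, the following Python sketch queries the public OpenAlex API for works and reads out basic bibliometric fields. The endpoint and field names follow the OpenAlex documentation; the search query and the choice of fields are illustrative assumptions, not the study's actual harvesting pipeline.

```python
# Minimal sketch of bibliometric metadata collection via the OpenAlex API.
# The query string below is illustrative, not the study's actual query.
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"

def fetch_works(query: str, per_page: int = 25) -> list[dict]:
    """Fetch works matching a full-text search from the OpenAlex API."""
    params = {"search": query, "per-page": per_page}
    response = requests.get(OPENALEX_WORKS, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["results"]

if __name__ == "__main__":
    # Print basic bibliometric fields for each matching work.
    for work in fetch_works("Text REtrieval Conference overview"):
        print(work["id"], work.get("publication_year"), work.get("cited_by_count"))
```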
Citation analyses were conducted to identify particularly successful tracks and labs, as well as particularly successful chapters in the context of the restructuring of the proceedings in the Lecture Notes in Computer Science (LNCS) series. In addition, the scholarly impact of TREC and CLEF on various research areas was analyzed. A particular focus was placed on the differences between the TREC and CLEF communities, including an in-depth analysis of the German community. Based on the collected data, factors influencing citation success were examined to explain the observed discrepancies.
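A minimal sketch of the kind of per-track citation aggregation underlying such an analysis is shown below. The input schema, one row per paper with columns "track" and "cited_by_count" (the latter as delivered by OpenAlex), is an assumption for illustration, not the study's actual data model.

```python
# Hedged sketch: rank tracks/labs by aggregated citation counts.
import pandas as pd

def rank_tracks_by_citations(works: pd.DataFrame) -> pd.DataFrame:
    """Aggregate per-track citation counts and rank tracks by total citations."""
    return (
        works.groupby("track")["cited_by_count"]
        .agg(total="sum", mean="mean", papers="count")
        .sort_values("total", ascending=False)
    )

if __name__ == "__main__":
    # Toy data to show the shape of the result; values are invented.
    demo = pd.DataFrame({
        "track": ["Web", "Web", "QA", "QA", "QA"],
        "cited_by_count": [120, 30, 80, 10, 5],
    })
    print(rank_tracks_by_citations(demo))
```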
The results largely confirmed the hypotheses derived from Google Scholar Metrics. TREC's most successful evaluation campaigns took place between the early 2000s and 2010, and the conference has established only a few successful tracks in recent years. CLEF, in contrast, has specialized over the last ten years in four manually identified thematic areas, establishing one or more successful labs in each of them. TREC covered three of these four thematic areas over the past 22 years but has achieved only limited success there in recent times.