
On 4 December 2025, Turkey’s Council of Higher Education (Yükseköğretim Kurulu, YÖK) announced its revised list of 20 research universities, a restructuring that saw Sakarya University promoted to the main list while Dokuz Eylül University was downgraded to candidate status (YÖK, 2025). Although YÖK employs a diverse set of criteria for these designations, publication quality remains the paramount determinant. Reliance on the Web of Science (WoS), however, has long been scrutinized for two structural weaknesses: its vulnerability to predatory journals (Oviedo-García, 2021) and its inadequate coverage of Turkish-language scholarship (Furat and Yılmaz, 2023; Mongeon and Paul-Hus, 2016; Macháček and Srholec, 2022). While quantifying the impact of excluded vernacular scholarship remains methodologically elusive, the distortion caused by predatory publishing is amenable to correction. Accordingly, this paper focuses specifically on mitigating this first structural weakness to provide a more rigorous assessment of research quality.

During the same December meeting, YÖK signaled a strategic pivot, declaring that Scopus would replace WoS as the primary reference for future quality assessments. Yet, this announcement highlights a systemic uncertainty: the Turkish higher education system is struggling to identify a metric capable of reliably distinguishing high-impact science from low-quality publication volume. Merely migrating from one commercial index to another may not solve the underlying problem of predatory infiltration.

In this commentary, I argue that a superior mechanism for quality assurance already exists within the national infrastructure: the ULAKBIM UBYT incentive program. Unlike the passive acceptance of all WoS-indexed content, ULAKBIM applies an active filter, funding only a specific subset of journals while excluding thousands of others. This paper tests the efficacy of that filter. By benchmarking ULAKBIM’s list against the publishing habits of the world’s top 25 universities, I demonstrate that ULAKBIM offers a more sanitized, robust, and meritocratic basis for ranking universities than either the raw Web of Science or Scopus datasets.


Why an Adjusted Impact Factor?

The core purpose of Turkey’s academic incentive mechanisms—including TÜBİTAK’s publication reward schemes, ÜAK’s promotion criteria, and YÖK’s research university evaluations—is to enhance scientific productivity and increase the international visibility of Turkish scholarship. A persistent challenge, however, lies in determining what constitutes a “nitelikli yayın” (high-quality publication) and designing incentives that reliably differentiate prestigious, selective journals from low-quality or strategically inflated ones. For many years, Turkish institutions have leaned heavily on the Journal Impact Factor (JIF) in the Web of Science database to assess journal quality. Yet reliance on JIF alone has become increasingly problematic. High-impact journals are not always the most selective or reputable venues; citation-based metrics can be distorted through excessive self-citation, editorial manipulation, and the expansion of open-access mega-journals that publish very high volumes of articles with uneven quality. These concerns have fueled debate about the suitability of JIF-centric assessments.

Consequently, Turkey’s academic system faces a growing need for more refined quality indicators—metrics that capture not only citation performance but also journal prestige and contributor profiles. TÜBİTAK’s Incentive Program for International Scientific Publications (UBYT) attempts to address these issues by regularly updating its journal lists and adjusting reward levels. For example, journals delisted by WoS in 2024 for failing to meet indexing criteria, or those falling below the SCI-E MEP threshold of 0.6 identified in the 2025 UBYT Principles, were removed from the 2025 incentive list (UBYT, 2025). YÖK, however, does not employ UBYT’s journal-quality filtering. Instead, it relies directly on WoS data—leaving its evaluations more exposed to the weaknesses associated with predatory outlets and inflated publication counts. The complexity of UBYT’s scoring system and its selective method of delisting highlight the need for simpler, transparent, and easily replicable metrics to more accurately evaluate whether YÖK’s decisions genuinely reward research quality.

I propose a straightforward and transparent metric for distinguishing high-quality journals from the rest: the Top 25 University Benchmark (T25UB). The T25UB measures the share of a journal’s publications authored by scholars affiliated with the world’s top 25 universities (THE, 2025). Conceptually, this metric draws upon the logic of the Author Affiliation Index (AAI) proposed by Ferratt et al. (2007), which assesses journals based on “input quality” rather than output citations. The underlying premise is that the value of a publication outlet is best revealed by the institutional affiliations of the authors choosing to submit there. Elite researchers operate under high reputational stakes and possess the resources to discern genuine academic rigor from predatory mimicry; therefore, journals with higher T25UB scores are those that consistently attract submissions from these leading institutions. The metric can be used in two complementary ways: first, as a standalone indicator of journal reputation, and second, as the basis for an Adjusted Impact Factor (AIF), calculated by multiplying a journal’s JIF by its T25UB score. This adjustment produces a more meaningful measure of scholarly prestige than JIF alone by incorporating both citation performance and contributor quality. Ultimately, the T25UB functions as a forensic indicator, exposing venues that may have gamed the citation economy while failing to maintain the rigorous peer-review standards expected by the global scientific elite.
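Stated formally, with the 2020–2024 publication window used later in this paper, the two measures are:

$$\mathrm{T25UB}_j = \frac{P_j^{\text{top25}}}{P_j}, \qquad \mathrm{AIF}_j = \mathrm{JIF}_j \times \mathrm{T25UB}_j,$$

where $P_j$ is the total number of articles and reviews published by journal $j$ and $P_j^{\text{top25}}$ is the number of those items authored by scholars affiliated with one of the top 25 universities.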

Does T25UB Work?

A direct comparison of Cambridge University Press (CUP) and MDPI journals (Oviedo-García, 2021) illustrates how the prestige-sensitive AIF better captures journal quality than the raw impact factor. For CUP, the vast majority of titles move upward in the revised ranking; field-defining journals such as Behavioral and Brain Sciences, American Political Science Review, and International Organization climb hundreds of places, with an average improvement of +2,371 ranks across the sample. This aligns with CUP’s reputation for maintaining selective, rigorous review standards. In sharp contrast, MDPI titles systematically plummet under the revised metric. Flagship mega-journals like Nutrients, Sustainability, and Energies drop thousands of places, often by 3,000 to 6,000 positions, averaging a decline of 2,683 ranks. These shifts confirm that the AIF effectively discounts high-volume, low-selectivity outlets while rewarding journals that attract submissions from elite institutions, providing strong face validity for the new metric.
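The rank comparison itself is simple to reproduce. Below is a minimal sketch, assuming a journal-level file with illustrative column names (jif, t25ub, publisher) rather than the actual dataset schema:

```python
import pandas as pd

# Hypothetical journal-level file: one row per journal with its 2024 JIF,
# T25UB ratio, and publisher (file and column names are illustrative)
journals = pd.read_csv("journals.csv")

# Adjusted Impact Factor: JIF deflated by the share of top-25 authorship
journals["aif"] = journals["jif"] * journals["t25ub"]

# Rank 1 = best journal under each metric
journals["rank_jif"] = journals["jif"].rank(ascending=False, method="min")
journals["rank_aif"] = journals["aif"].rank(ascending=False, method="min")

# Positive shift = the journal climbs once prestige is taken into account
journals["rank_shift"] = journals["rank_jif"] - journals["rank_aif"]

# Average shift per publisher (CUP rises, MDPI falls in the data above)
print(journals.groupby("publisher")["rank_shift"].mean())
```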

Table 1: Quartile Distributions of MDPI and CUP Journals

WoS JCI Quartile     All WoS Journals     MDPI     CUP
Q1                   4,487                34       131
Q2                   3,830                54       109
Q3                   3,092                10       56
Q4                   2,090                0        14

T25UB Quartile       All WoS Journals     MDPI     CUP
Q1                   3,405                2        136
Q2                   3,405                17       122
Q3                   3,405                58       36
Q4                   3,409                21       18

T50UB Quartile       All WoS Journals     MDPI     CUP
Q1                   3,404                1        130
Q2                   3,404                12       113
Q3                   3,404                55       53
Q4                   3,412                30       16

Under standard WoS quartiles (Table 1), MDPI exhibits a profile of potentially inflated quality, with the vast majority of its analyzed titles (88 out of 98) securely positioned in Q1 and Q2, and zero appearing in Q4—a distribution that superficially rivals established academic presses. However, applying the T25UB metric precipitates a massive downward correction for MDPI: its prestige profile effectively inverts, with 79 journals sliding into the lower tiers (Q3 and Q4) and only two retaining Q1 status, signaling that despite high citation counts, these venues are largely avoided by scholars at top-tier institutions. In sharp contrast, CUP demonstrates robustness and stability; its strong concentration in WoS Q1 and Q2 (240 journals) is not only preserved but slightly reinforced under the T25UB metric (258 journals in Q1 and Q2). This divergence confirms that while citation-based metrics may mask the distinction between mass-publishing models and selective academic rigor, the T25UB successfully distinguishes between them by filtering for genuine engagement from the global scientific elite.
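A quartile distribution like Table 1 can be rebuilt in a few lines. This is a hedged sketch, reusing the illustrative journal-level file from the earlier snippet:

```python
import pandas as pd

journals = pd.read_csv("journals.csv")  # same hypothetical file as above

# Quartiles over the full WoS sample; ranking first breaks ties so the
# four bins stay (nearly) equal in size, as in the "All" column of Table 1
journals["t25ub_q"] = pd.qcut(
    journals["t25ub"].rank(method="first"),
    q=4,
    labels=["Q4", "Q3", "Q2", "Q1"],  # lowest ratios -> Q4, highest -> Q1
)

# Quartile counts per publisher, mirroring the Table 1 layout
print(pd.crosstab(journals["t25ub_q"], journals["publisher"]))
```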

One might assume that the prestige-sensitive AIF inherently disadvantages Area Studies and country-specific journals. The data, however, reveal a more bifurcated reality. For flagship outlets such as Journal of Asian Studies, Slavic Review, and International Journal of Middle East Studies, the AIF is highly favorable; these titles climb an average of 1,968 rank positions (and nearly 1,800 when using the top-50 benchmark), benefiting from the removal of inflated mega-journals. Yet the T25UB quartile applies a stricter definition of “elite” status than the WoS quartile does. While WoS classifies a broad 34 journals as Q1, the T25UB narrows this tier to 19, shifting the remainder into Q2 (which rises from 23 to 27) and Q4 (which expands from 3 to 13). It is important to note that the quartile distribution must be interpreted in light of the 2023 expansion of the Journal Impact Factor (JIF) to Arts & Humanities (AHCI) and Emerging Sources (ESCI) journals. Because ESCI titles now populate much of the lower quartiles, established journals in the core indices (SSCI, SCI-E, and AHCI) have migrated upward, resulting in an artificial scarcity of Q3 and Q4 outlets in the standard WoS breakdown.

Data and Key Variables

The dependent variable in this study is the monetary incentive awarded by ULAKBIM to scholars affiliated with Turkish institutions. The incentive targets research published in journals indexed within the Web of Science (WoS) Core Collection, strictly limited to the Science Citation Index Expanded (SCI-E), Social Sciences Citation Index (SSCI), and Arts & Humanities Citation Index (AHCI), while explicitly excluding the Emerging Sources Citation Index (ESCI). The financial value of this reward ranges from a minimum of 2,500 TL to a maximum of 50,000 TL per publication, depending on the regulations of the assessment year. Crucially, however, indexation in a major WoS category does not automatically guarantee eligibility. ULAKBIM enforces a selective UBYT journal list that serves as a secondary quality filter; consequently, within the scope of this analysis, funding is restricted to a curated list of 9,003 journals. Conversely, 4,617 journals, despite being indexed in the SCI-E, SSCI, or AHCI, are disqualified from the payment scheme.

The T25UB approach yields three complementary measures of journal quality: (1) the raw T25UB ratio, (2) quartile categories based on this ratio, and (3) the Adjusted Impact Factor (AIF). Constructing the raw T25UB ratio is straightforward. Using the WoS database (excluding the ESCI category), I collected the total number of research “articles” and “reviews” published by each journal between 2020 and 2024 (excluding book reviews, editorial notes, and similar non-research items). I then gathered the number of articles authored by scholars affiliated with the world’s top 25 universities over the same period. Dividing the latter by the former yields the T25UB ratio, which reflects the share of contributions originating from elite institutions. In addition, I downloaded each journal’s 2024 Journal Impact Factor. The AIF is calculated by multiplying a journal’s JIF by its T25UB score (Adjusted_JIF_25).
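A minimal sketch of this construction, assuming the per-journal counts have already been exported from WoS (file and column names are illustrative):

```python
import pandas as pd

# Hypothetical WoS export with one row per journal:
# n_total  = articles + reviews, 2020-2024 (SCI-E/SSCI/AHCI, no ESCI)
# n_top25  = the subset authored at top-25 universities
# jif_2024 = the journal's 2024 Journal Impact Factor
counts = pd.read_csv("wos_counts.csv")

counts["t25ub"] = counts["n_top25"] / counts["n_total"]           # share from elite institutions
counts["adjusted_jif_25"] = counts["jif_2024"] * counts["t25ub"]  # AIF
```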

Empirical Results

Table 2 presents the regression results analyzing the determinants of ULAKBIM reward amounts using three distinct impact metrics: the standard Web of Science Journal Impact Factor (JIF) and two Adjusted Impact Factors based on authorship affiliation with the world’s top 25 and top 50 universities. Across all three specifications, the coefficients are positive and statistically significant at the 1% level, confirming that higher impact metrics consistently translate into higher monetary incentives. In the baseline model (Model 1), a one-unit increase in the standard JIF is associated with an increase of approximately 1,422 TL in the reward payment. This establishes a baseline verifying that ULAKBIM scales its payments according to standard bibliometric prestige.

However, a comparison of the coefficients reveals that ULAKBIM’s payout structure is far more sensitive to “elite” impact than to generic citation counts. The coefficient for the Top 25 Adjusted Impact Factor (Model 2) is 2,467.30, which is approximately 74% higher than the coefficient for the standard JIF. Similarly, the Top 50 Adjusted metric (Model 3) yields a coefficient of 2,125.29. This indicates that for every unit of “quality” gained—when quality is defined by the publishing preferences of the world’s leading universities—the financial reward increases much more sharply than it does for standard impact improvements. This finding supports the hypothesis that the ULAKBIM incentive system operates as a sophisticated filter; it does not simply reward high-citation journals, but disproportionately values the specific venues where scholars from top-tier global institutions choose to publish.
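The specification behind these models is a simple journal-level bivariate regression. Below is a sketch using statsmodels, with hypothetical variable names for the merged ULAKBIM file:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged file: reward_tl plus the three journal metrics
df = pd.read_csv("ulakbim_merged.csv")

# Model 1 uses the raw JIF; Models 2 and 3 swap in the adjusted metrics
for metric in ["jif", "adjusted_jif_25", "adjusted_jif_50"]:
    fit = smf.ols(f"reward_tl ~ {metric}", data=df).fit()
    print(metric, round(fit.params[metric], 2), round(fit.pvalues[metric], 4))
```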

This magnitude difference is partly driven by the construction of the variable itself. Because the top-25 ratio acts as a deflator (ranging strictly between 0 and 1), the Adjusted JIF distribution is naturally compressed relative to the raw JIF. Consequently, a single-unit increase in Adjusted JIF represents a much larger leap in journal quality than a unit increase in raw JIF, and the coefficients are not directly comparable across models; the results therefore need to be interpreted with this scaling in mind.
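One way to put the three slopes on a comparable footing despite this compression, which is not done in the analysis above, is to standardize each metric before estimation. A minimal sketch:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ulakbim_merged.csv")  # same hypothetical file as above

# Standardize to mean 0, SD 1 so each slope reads "TL per one-SD increase"
for metric in ["jif", "adjusted_jif_25", "adjusted_jif_50"]:
    df[f"z_{metric}"] = (df[metric] - df[metric].mean()) / df[metric].std()
    beta = smf.ols(f"reward_tl ~ z_{metric}", data=df).fit().params[f"z_{metric}"]
    print(f"{metric}: {beta:,.0f} TL per one-SD increase")
```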

Bibliography

Ferratt, Thomas W., et al. 2007. “IS journal quality assessment using the author affiliation index.” Communications of the Association for Information Systems 19(1): 710-724.

Furat, Ayşe Zişan, and Mehmet Yalçın Yılmaz. 2023. “Akademide reyting kaygısı.” Star Açık Görüş, 28 August. Available at: https://www.star.com.tr/acik-gorus/akademide-reyting-kaygisi-haber-1806880/

Macháček, Vít, and Martin Srholec. 2022. “Predatory publishing in Scopus: Evidence on cross-country differences.” Quantitative Science Studies 3(3): 859-887.

Mongeon, Philippe, and Adèle Paul-Hus. 2016. “The journal coverage of Web of Science and Scopus: a comparative analysis.” Scientometrics 106(1): 213-228.

Oviedo-García, M. Ángeles. 2021. “Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI).” Research Evaluation 30(3): 405-419.

THE. 2025. World University Rankings 2026. Available at: https://www.timeshighereducation.com/world-university-rankings/latest/world-ranking

UBYT. 2025. “2025 Yılı UBYT Dergi Listesi Hakkında Bilgilendirme,” 12 May. Available at: https://cabim.ulakbim.gov.tr/ubyt/

YÖK. 2025. “Araştırma Üniversiteleri 2025 Sıralaması Açıklandı.” Available at: https://www.yok.gov.tr/tr/news/arastirma-universiteleri-2025-siralamasi-aciklandi-WJaoS
