
The core purpose of Türkiye’s academic incentive schemes—TÜBİTAK’s publication rewards, ÜAK’s academic promotions, and YÖK’s research university criteria—is to strengthen scientific productivity and increase the international visibility of Turkish scholarship. A central challenge is determining which journals constitute nitelikli yayın (high-quality publication) and designing incentives that reliably distinguish prestigious, selective outlets from low-quality or strategically inflated ones.

Historically, institutions have relied heavily on the Journal Impact Factor (JIF) to assess journal quality. However, relying solely on JIF is increasingly viewed as inadequate: high-impact journals do not always represent the best or most selective venues, and the metric can be distorted through self-citation loops, editorial manipulation, and the rise of open-access megajournals with large publication volumes. The Turkish academic system therefore needs more refined measures of journal quality, metrics that incorporate not only citation counts but also prestige, namely the scholarly profile of contributing institutions. The Adjusted Impact Factor (AIF) does this by multiplying a journal's impact by the share of its authors affiliated with top universities, producing a more meaningful measure of scholarly reputation than JIF alone.

This paper provides a large-scale quantitative test of how well ULAKBİM payments actually reflect journal quality. Using regression models that incorporate four dimensions of journal quality (citation impact via JIF, prestige via elite-university authorship, category-level quality via JCI quartile, and selectivity via publication volume), I assess whether the payment system meaningfully differentiates high-quality journals from weaker ones.

To download the dataset, please click here.

Why the Adjusted Impact Factor?

As critiques of Web of Science emphasize, JIF does not reflect journal selectivity; a high JIF does not mean a journal is preferred by top universities; JIF varies dramatically across fields, which can mislead cross-disciplinary comparisons; and JIF can be inflated through editorial policies or citation cartels. For example, the Economic Journal and the Journal of the Knowledge Economy both appear as Q1 journals, yet the former is clearly more prestigious. ULAKBİM’s metric correctly separates them because it accounts for within-field percentile ranks and applies additional quality filters. The question, then, is whether ULAKBİM’s ability to separate economics journals generalizes across fields and publication types.

The Adjusted Impact Factor (AIF) is a refined journal-quality metric designed to overcome the well-known limitations of the traditional Journal Impact Factor. Whereas JIF measures citation frequency alone, AIF multiplies a journal’s citation impact by the share of its articles authored by scholars from top-ranked global universities. This adjustment introduces a crucial dimension of prestige and selectivity: journals that attract submissions from leading research institutions are more likely to maintain rigorous peer-review standards, exercise editorial discretion, and publish work that shapes their fields. By incorporating institutional authorship profiles, AIF distinguishes genuinely influential, high-quality journals from outlets whose citation counts may be inflated through large publication volumes, editorial practices, or citation cartels. In doing so, AIF offers a more holistic, reputation-sensitive metric that better reflects the academic community’s qualitative judgments and aligns more closely with how researchers themselves evaluate journal standing.
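The core calculation can be made concrete with a short sketch. The function below illustrates the AIF idea as described here (a journal's citation impact scaled by its elite-authorship share); the journal figures are hypothetical examples, not values from the dataset.

```python
def adjusted_impact_factor(jif: float, elite_share: float) -> float:
    """AIF sketch: scale the raw JIF by the share (0..1) of a journal's
    articles authored by scholars from top-ranked universities."""
    if not 0.0 <= elite_share <= 1.0:
        raise ValueError("elite_share must lie in [0, 1]")
    return jif * elite_share

# Two hypothetical journals with identical JIF but very different
# authorship profiles:
selective = adjusted_impact_factor(jif=5.0, elite_share=0.40)    # selective outlet
high_volume = adjusted_impact_factor(jif=5.0, elite_share=0.05)  # megajournal-style outlet
```

With an identical JIF of 5.0, the journal whose articles are 40% elite-authored ends up far ahead of the outlet at 5%, which is exactly the prestige adjustment the text describes.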

A direct comparison of Cambridge University Press (CUP) and MDPI journals further illustrates how the revised, prestige-sensitive JIF measure, namely AIF, better captures journal quality than the raw impact factor. For CUP, the vast majority of titles move upward in the revised ranking: journals such as Behavioral and Brain Sciences, American Political Science Review, Psychological Medicine, International Organization and Political Analysis all climb hundreds of places, and the mean change across CUP titles is an improvement of roughly +2,255 rank positions. This pattern is consistent with CUP’s reputation for selective, rigorously reviewed journals that are widely regarded as field-defining outlets.

In sharp contrast, MDPI titles systematically move downward once the revised metric is applied. Flagship megajournals such as Nutrients, Sustainability, Energies, Polymers, Remote Sensing or International Journal of Molecular Sciences drop thousands of places—often by 3,000–6,000 positions—with an average decline of about –2,667 ranks across the MDPI sample. These shifts are exactly what we would expect if the revised measure discounts high-volume, low-selectivity outlets and rewards journals that attract submissions from established, research-intensive institutions. In other words, the Cambridge–MDPI comparison provides strong face-validity evidence that the revised JIF (and, by extension, AIF-style metrics) discriminates journal quality much more effectively than the raw impact factor alone.
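The rank-shift comparison behind these figures can be sketched as follows. The three journals and their numbers are invented for illustration; the mechanics are what matter: rank once by raw JIF, once by AIF, and take the difference.

```python
# Hypothetical journals: name -> (raw JIF, elite-authorship share)
journals = {
    "Selective Review A": (6.0, 0.50),
    "Megajournal B":      (7.0, 0.02),
    "Field Journal C":    (3.0, 0.30),
}

def rank_by(scores):
    """Return {name: rank}, where rank 1 is the highest score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

jif_rank = rank_by({n: jif for n, (jif, _) in journals.items()})
aif_rank = rank_by({n: jif * share for n, (jif, share) in journals.items()})

# Positive shift = the journal climbs under the prestige-adjusted ranking.
shift = {n: jif_rank[n] - aif_rank[n] for n in journals}
```

A positive shift means the journal climbs once prestige is taken into account, as the CUP titles do; the megajournal-style outlet, despite the highest raw JIF, falls to the bottom.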

However, one might assume that the revised JIF disadvantages area-studies and country-specific journals. The rank comparison suggests the opposite. Most flagship outlets in this field – such as Post-Soviet Affairs, China Quarterly, African Affairs, Journal of Contemporary China, International Journal of Middle East Studies, Pacific Affairs, Latin American Politics and Society, and Journal of Modern African Studies – climb dramatically once the revised metric is applied, with an average gain of about 1,688 rank positions across the sample. When the AIF is calculated using the top 50 universities, the average gain for area-studies journals reaches approximately 2,246 positions.

Data and Key Variables

Our empirical analysis draws on 13,499 journal-level observations from the 2020–2024 period. The dependent variable is:

ULAKBIM_payment: the monetary reward paid to researchers for a publication in a given journal.

Independent variables correspond to the four dimensions of journal quality discussed above:

JIF_2024: raw WoS impact factor.

THE25_average: share of authors affiliated with the world’s top 25 universities.

JCI_Quartile: a categorical ranking (Q1–Q4) of journals’ field-normalized impact.

WoSDocuments2020_24_Article: total publication volume (an inverse proxy for selectivity: higher volume signals weaker editorial selectivity).

We estimate OLS models separately for JIF-based and THE25-based specifications.
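The Model 1 estimation can be sketched with NumPy's least-squares solver. The five observations and the coefficient vector below are synthetic (generated from assumed coefficients so that the fit recovers them exactly); the actual analysis runs on the 13,499-journal dataset, where an econometrics package such as statsmodels would additionally report t-statistics and p-values.

```python
import numpy as np

# Design matrix columns: intercept, JIF_2024, JCI_Quartile (1=Q1..4=Q4),
# WoSDocuments2020_24_Article (publication volume). Values are synthetic.
X = np.array([
    [1.0, 8.0, 1.0,  200.0],
    [1.0, 5.0, 1.0,  400.0],
    [1.0, 3.0, 2.0,  600.0],
    [1.0, 2.0, 3.0,  900.0],
    [1.0, 1.0, 4.0, 3000.0],
])

# Assumed "true" coefficients, chosen for illustration only: payments rise
# with JIF and fall with quartile rank and volume.
true_beta = np.array([3000.0, 1000.0, -500.0, -0.5])
y = X @ true_beta  # synthetic ULAKBIM_payment values

# OLS fit; with noiseless data and a full-rank X, lstsq recovers true_beta.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "JIF_2024", "JCI_Quartile", "volume"], beta.round(2))))
```

The signs in this toy fit mirror the pattern reported in the results: a positive JIF coefficient and negative coefficients on quartile rank and publication volume.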

Empirical Results

Model 1: JIF_2024 + JCI_Quartile + WoSDocuments2020_24_Article

The regression results provide clear evidence that ULAKBİM’s payment scheme is structured around journal selectivity and quality signals rather than raw output. The negative and statistically significant coefficient on publication volume (t = –3, p = 0.003) indicates that journals producing larger numbers of articles receive lower payments, consistent with the notion that very high output is interpreted as a proxy for weaker editorial selectivity. This suggests that ULAKBİM implicitly discounts journals whose publication strategies may dilute quality, irrespective of their nominal impact factors.

The results also reveal a pronounced and systematic penalty for journals in lower JCI quartiles. The large negative coefficient on JCI_Quartile (t = –77.46, p < 0.001) shows that payments decrease sharply as journals move from Q1 toward Q4. This pattern demonstrates that ULAKBİM relies heavily on field-normalized prestige indicators when determining payment levels, aligning the incentive structure with international quality benchmarks rather than with citation volume alone.

At the same time, the strong positive effect of JIF (JIF_2024 coefficient = 903.195; t = 46.96, p < 0.001) confirms that journals with higher impact factors receive substantially higher payments, even after controlling for output and quartile placement. Citation performance therefore remains an important criterion, but one that operates within a broader evaluative framework that penalizes inflated output and lower-tier placement.

Model 2: THE25_average + JIF_2024  + JCI_Quartile + WoSDocuments2020_24_Article  

The full model, which includes elite-university authorship, JIF, publication volume, and JCI quartile simultaneously, provides the strongest evidence that ULAKBİM’s payment system aligns with a multidimensional understanding of journal quality. The coefficient on THE25_average is by far the largest and most statistically significant: journals that attract a higher share of authors from the world’s top universities receive dramatically higher payments. This result confirms that institutional prestige is the single strongest predictor of ULAKBİM valuations. The coefficient on JIF remains positive and highly significant, indicating that citation impact continues to matter even after controlling for prestige and quartile quality. Publication volume loses statistical significance in this model (p = 0.096). The negative and highly significant JCI quartile coefficient (–5,626) again shows that ULAKBİM sharply penalizes journals in lower WoS quality tiers (Q2–Q4), reinforcing the system’s field-normalized quality filter. With an R² of 0.608, this combined model explains over 60% of the variation in payments, demonstrating that ULAKBİM’s incentive structure implicitly mirrors an AIF-style evaluation.

Do Turkish Universities Select Quality?

Using the number of articles published by different university groups as the dependent variable allows us to assess whether Turkish institutions systematically target higher-quality journals. The two models below reveal notable differences between publication strategies across the Turkish university system as a whole and among elite institutions (Koç–Sabancı–Bilkent). In the Turkey-wide model (Model 1), higher average THE25 rank scores are associated with significantly fewer publications, and journals in lower JCI quartiles attract substantially more submissions. By contrast, journals with higher JIF scores and greater overall WoS output receive more publications from Turkish scholars, indicating a mixed strategy that combines productivity with selective quality considerations.

The elite-university model (Model 2) presents a contrasting pattern. Here, higher THE-based averages are positively associated with publication counts, suggesting that top universities purposefully target journals aligned with global prestige hierarchies. Elite institutions not only avoid lower-quartile journals but also display strong responsiveness to JIF, privileging journals with higher citation impact. Although WoS output remains a positive predictor of publication patterns in this group, its influence is considerably weaker than in the national model, implying that elite universities prioritize journal selectivity and reputation over sheer output.
