
Research results are evaluated using quantitative (bibliometric) and qualitative (peer review) methods, as well as combinations thereof (expert evaluation).
This section provides information on quantitative (bibliometric) evaluation.
KTU research publications are evaluated based on journal metrics provided in two bibliographic databases: Web of Science and Scopus.
Journal Impact Metrics
Journal impact metrics, or simply journal metrics, indicate the quality of journals, help identify leading journals, and can assist in selecting the most appropriate journal for publishing research results.
In Lithuania, formal research assessment is based on journal metrics from the Web of Science (WoS) and Scopus databases. It is supervised by the Research Council of Lithuania, in accordance with the Description of the Annual Evaluation of Research and Experimental Development, and Artistic Activities at Universities and Research Institutes (Order No. V-1593 of the Minister of Education, Science, and Sports of the Republic of Lithuania, dated September 2, 2021).
Web of Science journal metrics
One of the most widely used sources of journal metrics (as well as various other bibliometric indicators) is the Web of Science database, managed by Clarivate Analytics. It consists of a series of individual publication databases and collections, the main one being the Web of Science Core Collection.
Key journal metrics of Web of Science:
| Metric | Description |
| --- | --- |
| Journal Impact Factor (JIF) | The average number of citations received in the current year by articles the journal published in the previous two years. JIF values should not be used to compare journals across different research categories. |
| Quartile | The journal's relative position within a scientific category, based on its JIF ranking from highest to lowest. Journals are divided into four quartiles: Q1 contains the highest-ranked journals in the category and Q4 the lowest-ranked. |
| Aggregate Impact Factor (AIF) | A category-level impact factor: the number of citations to all articles in a subject category divided by the number of citable articles published in that category during the same period. |
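To make the JIF and quartile definitions concrete, here is a minimal sketch of both calculations. The journal figures are purely illustrative, not real data:

```python
from math import ceil

def impact_factor(citations_to_prev_two_years, items_published_prev_two_years):
    """Two-year JIF: citations received this year to articles published in
    the previous two years, divided by the number of citable items
    published in those two years."""
    return citations_to_prev_two_years / items_published_prev_two_years

def quartile(rank, journals_in_category):
    """Quartile from a journal's JIF rank (1 = highest) within its
    category: Q1 is the top 25%, Q4 the bottom 25%."""
    return f"Q{ceil(4 * rank / journals_in_category)}"

# Hypothetical journal: 450 citations in the current year to its articles
# from the previous two years, during which it published 150 citable items.
jif = impact_factor(450, 150)         # 3.0
print(jif, quartile(12, 100))         # ranked 12th of 100 in its category -> Q1
```

Note that the quartile depends only on the journal's rank within its own category, which is why a modest JIF can still place a journal in Q1 of a small or low-citation field.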
Scopus journal metrics
Scopus is a bibliographic database managed by Elsevier.
Scopus journal metrics are particularly relevant when selecting a journal for publishing research results in the social sciences.
Key Scopus journal metrics:
| Metric | Description |
| --- | --- |
| CiteScore | The average number of citations received over a four-year window by documents published in the journal during that window. |
| SCImago Journal Rank (SJR) | A prestige-weighted citation metric: citations are weighted according to the ranking of the citing journal. |
| Source Normalized Impact per Paper (SNIP) | Citation impact normalized for the citation practices of the journal's subject field, allowing comparison across fields. |
Author impact metrics
Author impact metrics help identify the significance of an author's research to other researchers and to the field of science as a whole. Author metrics are important for evaluating the research performance of the institution where the author works, as well as for assessing applications for research projects. The simplest author metrics are the number of published papers and the number of citations they receive. One of the most commonly used author evaluation metrics, the h-index, is derived from these.
| Database | Metrics |
| --- | --- |
| Web of Science | h-index, Total Documents, Times Cited, Author Impact |
| Scopus | h-index, Citations, Documents, Documents in top 25% journals, Field-Weighted Citation Impact |
| Google Scholar | Citations, h-index, i10-index |
Note: h-index values often differ across different databases (Web of Science, Scopus, Google Scholar), as they are calculated based only on the publications indexed in that particular database.
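The h-index (and Google Scholar's i10-index) can be computed directly from a list of per-paper citation counts. This sketch, using illustrative numbers, also shows why the value differs between databases that index different subsets of an author's papers:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers,
    each cited at least h times."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def i10_index(citations):
    """i10-index (Google Scholar): the number of papers with >= 10 citations."""
    return sum(1 for c in citations if c >= 10)

# The same author's papers as counted in two hypothetical databases:
# broader coverage yields more indexed papers and citations, hence a
# different h-index.
database_a = [25, 18, 12, 7, 3]
database_b = [31, 25, 20, 12, 9, 4]
print(h_index(database_a), h_index(database_b))  # 4 5
```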
Article Impact Metrics
Article impact metrics assess publications and other research outputs (monographs, conference proceedings, etc.) and indicate the impact they have had on other researchers. Typically, a publication is evaluated based on the number of times it has been cited and on the metrics of the journal in which it was published. Further publication-level metrics are derived by weighting citations according to the significance of the citing sources, not merely their number.
Tools for Research Assessment
Subscribed by KTU
The most widely recognized and used tools for evaluating research output are published by the commercial publishers Clarivate Analytics and Elsevier. Kaunas University of Technology has access to the following subscription-based tools:
Journal Citation Reports (JCR) – a journal metrics resource. It allows users to search for journals indexed in the WoS database by research category or publisher, view journal citation metrics, quartiles within scientific categories, and other information.
Essential Science Indicators (ESI) – a tool designed to assess the most relevant and popular research fields and trends.
InCites Benchmarking & Analytics (InCites) – an analytical tool designed to evaluate the volume and distribution of publications and citations across various dimensions and contexts.
Open Access Tools
The SCImago Journal & Country Rank tool provides citation metrics for journals and countries, calculated based on data from the Scopus database. It allows you to easily find metrics for any journal indexed in Scopus, journals published in a specific country, country-level metrics, etc.
Google Scholar provides access to publications, dissertations, books, abstracts, and other research literature found online across various fields of research. It indexes many journals and other sources that are not covered by Web of Science or Scopus; however, because its content is not selectively curated, Web of Science and Scopus are recommended for searching quality-assured research literature.
Alternative Metrics (Altmetrics)
Alternative metrics (Altmetrics) are designed to assess the impact of research output on the whole society, i.e., beyond the scientific community. These metrics are often used as supplementary indicators of research assessment alongside traditional metrics.
Alternative metrics are defined as quantitative data that show how many times users (typically online) view, react to, save, discuss, share, or otherwise engage with specific published research results. Alternative metrics are used to evaluate not only research articles, but also journals, books, datasets, presentations, videos, websites, etc.
Alternative metrics can be found on specialized platforms such as Altmetric, PlumX, and OpenAlex. PlumX groups alternative metrics into five categories:
| Category | Description |
| --- | --- |
| Citations | How many times and where the publication has been cited. This may include data from traditional databases (such as Scopus) as well as official documents, patents, etc. |
| Usage | How many times the publication has been viewed or downloaded; it may distinguish whether the full text or only the abstract was viewed. |
| Captures | How many times the publication has been saved, e.g., exported to a reference manager (such as Mendeley), downloaded to a device, emailed, or printed. Captures indicate that the user intends to return to the content later. |
| Mentions | Mentions in news articles, blog posts, comments, reviews, and Wikipedia references. |
| Social media | Shares, reactions (e.g., "likes"), and comments on social media platforms (Twitter/X, Facebook, Instagram, LinkedIn, etc.). |
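The five-category grouping above is essentially a mapping from raw engagement events to category tallies. The sketch below illustrates that idea; the event names and the mapping itself are hypothetical, not PlumX's actual data model:

```python
from collections import Counter

# Hypothetical mapping of raw engagement events to the five PlumX categories.
CATEGORY = {
    "scopus_cite": "Citations", "patent_cite": "Citations",
    "view": "Usage", "download": "Usage",
    "reference_manager_save": "Captures", "export": "Captures",
    "news": "Mentions", "blog": "Mentions", "wikipedia": "Mentions",
    "tweet": "Social media", "like": "Social media", "share": "Social media",
}

def plumx_summary(events):
    """Tally raw engagement events for one publication into categories,
    ignoring any event types the mapping does not recognize."""
    return Counter(CATEGORY[e] for e in events if e in CATEGORY)

events = (["view"] * 120 + ["download"] * 30 + ["reference_manager_save"] * 12
          + ["tweet"] * 8 + ["news"] * 2 + ["scopus_cite"] * 5)
summary = plumx_summary(events)
print(summary["Usage"], summary["Captures"])  # 150 12
```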
Qualitative Research Assessment
Overreliance on quantitative metrics in research assessment distorts the evaluation process and fosters a so-called publish-or-perish culture. This negatively impacts the research process and its quality, violates fundamental principles of research ethics, and creates an unfair competitive environment within the academic community. For these reasons, a shift toward qualitative research assessment is proposed, in which quantitative (bibliometric) methods are combined with expert evaluation. The drawback of qualitative assessment is that it is more difficult to measure and compare than quantitative assessment based on bibliometric data.
Over the past decade or so, several initiatives have been launched to promote qualitative methods in the evaluation of research output, including the San Francisco Declaration on Research Assessment (DORA, 2012), the Leiden Manifesto for Research Metrics (2015), and the Coalition for Advancing Research Assessment (CoARA, 2022).