The national public discussion about science these days exposes challenges that invite deep and critical reflection on how we value and how we measure the generation of knowledge. For most of us who do science, the process of generating knowledge culminates in a prized scientific publication. Getting an article published in a prestigious international journal means having gone through a process where new ideas were developed (which takes months or even years of work) and then judged and tested in rigorous peer review, where experts in the field assess the originality and quality of the article. What counts as a prestigious journal, and who signs the articles published in it, varies according to the area of research.

A person doing research in particle physics publishes in different journals than a person doing research in art history. Moreover, each sub-area of research works differently. In theoretical particle physics, for example, it is usual to publish papers with few authors, from one to perhaps ten people. These papers may propose new theoretical models that predict new elementary particles and suggest how to search for them. In experimental particle physics, large collaborations at CERN publish a high volume of papers with thousands of authors. This is because searching for proposed new particles involves developing and building sophisticated software and instruments, constantly monitoring and calibrating them, and then performing intense data analysis for each model, all of which depends on the transfer of specialized knowledge to operate a gigantic particle accelerator efficiently. This process requires the work of many people from many nations and institutions around the world, who sign the papers in global scientific collaboration.

This exemplifies the nature and diversity of the publications that exist in each specific area of knowledge. Understanding what determines the number of publications and their co-authorships in each specialized area is both a necessity and a responsibility of researchers and institutions, in order to protect scientific work from the abuse of monetary incentives. The hyperspecialization of knowledge, with each area of study becoming ever more narrowly defined and deeper, also feeds the debate on how we measure scientific productivity and quality, and how they are rewarded.

How is scientific productivity measured? The products associated with scientific work, and their most standard metrics, include the number of publications and the number of citations each publication receives. For each researcher, a productivity index called the h-index is defined: a researcher has an h-index of h when h of their articles have at least h citations each. If a researcher has, for example, 20 articles and each of them has at least 20 citations, their h-index is 20. The h-index, however, has the limitation that it does not allow fair comparisons between authors in different disciplines, because not all disciplines publish in the same way, with the same volume, or at the same frequency. And when the number of authors is large, it is not easy to identify individual contributions from the outside. The number of talks in which a researcher has presented his or her work at international conferences is usually added to the calculation of individual scientific productivity. Perhaps properly valuing institutional seminars, where each speaker presents his or her specific contributions to an article, would help in designing new, fairer productivity metrics.
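To make the definition concrete, here is a minimal sketch in Python of how an h-index can be computed from a list of citation counts (the citation numbers are made up for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Invented examples:
print(h_index([25] * 20))           # 20 papers with 25 citations each -> 20
print(h_index([100, 50, 3, 2, 1]))  # only 3 papers reach 3 citations -> 3
```

The second example also illustrates the limitation mentioned above: two very highly cited papers raise the h-index far less than one might expect, and the index says nothing about how many authors shared the work.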

How do we measure the quality of scientific productivity? This is not easy, since it again depends on each specific area, and the usual metrics also have limitations. The number of citations of an article reflects its usefulness or impact for the community. The "impact factor" of a scientific journal is constructed from the citations of the articles published in that journal and is used as a metric of the journal's quality. Although it does not measure the quality or impact of an individual article (but rather the average over the articles published in that journal during a certain period), it is a metric that protects, for example, against predatory journals, which also threaten science precisely because they "prey on" institutional loopholes. Every week, my institutional email filters to spam dozens of messages from these journals, in which scientists are invited to pay to publish articles that have, in some cases, already been published in open access!
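For reference, the most common convention is a two-year window: the impact factor for a given year is the number of citations received that year by the articles the journal published in the two preceding years, divided by the number of citable items published in those two years. A minimal sketch, with invented numbers:

```python
def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to articles
    from the previous two years, divided by the number of citable items
    published in those two years. An average, not a per-article measure."""
    return citations_this_year / citable_items_prev_two_years

# Invented example: 600 citations in 2024 to a journal's 2022-2023 output,
# which comprised 200 citable items -> impact factor of 3.0
print(two_year_impact_factor(600, 200))  # 3.0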

Faced with the limitations of these metrics and with these threats, a question arises: how can we, from our own scientific work, protect its quality? Measuring the quality of a scientist and her science based solely on her number of publications is an incomplete metric. Identifying the quality of an article, a piece of research, or an idea becomes very difficult, especially in the absence of reliable metrics, if we do not take the time to judge the value of both the products and the processes of research work in the generation of knowledge.

In terms of products, one common way of judging productivity is to discount an article's impact according to the number of authors who sign it. But this discounting often ignores the specific processes of each area, and it can even penalize some areas when the areas are simply not comparable. It is therefore prudent to have distinct impact metrics even for each sub-area. Perhaps, as a cross-cutting criterion, publications co-authored with students could be given greater value.
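To illustrate the kind of discounting in question, one simple scheme is fractional counting, which divides an article's credit equally among its N authors; this is only one possible rule, assumed here for illustration, and the contrast between a small theory paper and a large experimental collaboration shows how it can distort cross-area comparisons:

```python
def fractional_credit(num_authors):
    """Fractional counting: each of the N co-authors receives 1/N of the
    article's credit. A crude rule that ignores how each field actually
    organizes its collaborations."""
    return 1.0 / num_authors

# A 3-author theory paper vs. a 3,000-author experimental paper:
print(fractional_credit(3))     # ~0.333 credit per author
print(fractional_credit(3000))  # ~0.00033 credit per author
```

Under this rule, an author on a large collaboration paper receives a thousand times less credit than an author on a small theory paper, even when both results are of comparable scientific importance.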

As for the processes, these seem to me more difficult to assess, since they often depend on the ethics and seriousness of us as researchers and of our institutions. Ethical lapses and a lack of transparency and rigor in these processes damage the quality of science, and it is our responsibility to discuss them as well. One way to minimize them, in my own experience, is to discuss regularly, responsibly, and rigorously with our research groups about ideas, methodologies, results, and how to make them public.

Protocols for the use of open-science platforms such as arXiv, for example, help in my opinion to protect transparency, since the community has free access to the research and can see its process: authors can post successive versions of a manuscript and correct it before it is accepted for publication in a journal. Standardizing the publication of errata for our own articles, when necessary, also contributes to the rigor and transparency of research work.

If we add to all of the above the implementation of solid institutional policies, I believe we will be able to identify and neutralize the threats to science, and thus move from a scientific culture marked by the shadows of "publish or perish" to one where the construction of prosperous knowledge shines and prevails. In my opinion, investing time in defining, recognizing, and valuing scientific productivity, its quality, and its processes assertively will allow a publication to stop being a bargaining chip and become a precious place where a small grain of knowledge is immortalized.

Source: https://opinion.cooperativa.cl/opinion/ciencia-y-tecnologia/protejamos-la-ciencia-desde-la-ciencia/2025-11-05/101151.html