There are several ways one can assess the importance and quality of a journal. A commonly used metric is the Journal Impact Factor, but one can also use metrics from Scopus/Scimago. These all use citation data to rank journals. Another way to look at journals is to use acceptance rate as a proxy for quality.
Each of these numerical rankings can be useful, but a score based on citations or acceptance alone cannot tell the whole story of a journal's importance in its field. For more information on choosing a journal, open access initiatives, avoiding predatory publishing schemes, and public access policies, please visit the Michigan State University Libraries: Scholarly Communication Webpage.
Also remember that the ranking for a journal as a whole does not mean a similar ranking for each article within that journal.
The impact factor is a citation measure produced by Clarivate Analytics from its Web of Science database and published annually in the Journal Citation Reports database. Impact factors are available only for journals indexed in Web of Science (formerly ISI) databases, so they do not apply to all journals, or to all years of a given journal. They are in no way a universal measure.
One journal's impact factor on its own says nothing about whether it is "high" or "low". Instead, compare impact factors among journals in one particular subject area; this shows whether the journal of interest ranks high or low relative to its peers. Some fields have lower impact factors than others merely because of the citation traditions in those fields, not because of the quality of their journals.
San Francisco Declaration on Research Assessment
In 2012, editors and publishers of scientific journals met at the Annual Meeting of The American Society for Cell Biology in San Francisco, CA, and created a statement on the use of research metrics. It addresses the use of the journal impact factor, recommending: "do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions."
Impact Factor Debate
Impact factors have been much debated in the literature in terms of their value for evaluating research quality. The general consensus is that impact factors are misunderstood and misused by many institutions, which place too much weight on a measure that is neither entirely scientific nor reliable.
Impact factor manipulation is a growing concern, and there have been instances of journals advertising a fake impact factor on their websites. Always verify a journal's impact factor in Journal Citation Reports.
Further Reading on issues with JIF:
How Impact Factors are Calculated
A journal's impact factor for 2017 would be calculated by taking the number of citations in 2017 from articles that were published in 2015 and 2016 and dividing that number by the total number of articles published in that same journal in 2015 and 2016. Please see the example below.
The specific calculations for Nursing Research's 2017 impact factor are displayed below.
Citations in 2017 to articles published in 2015: 63
Citations in 2017 to articles published in 2016: 94
Total number of articles published in 2015: 61
Total number of articles published in 2016: 51
157 (citations in 2017 to articles published in 2015 and 2016)
÷ 112 (total number of articles published in 2015 and 2016)
= 1.40
The 2017 Impact Factor of 1.40 for the journal Nursing Research means that, on average, each article published in this journal in the previous two years was cited about 1.4 times in 2017.
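The two-year calculation above can be sketched in a few lines of Python (the function and variable names here are illustrative, not part of any official tool; the figures are the Nursing Research example from above):

```python
def impact_factor(citations_by_year, articles_by_year):
    """Two-year Journal Impact Factor: citations in a given year to items
    published in the two prior years, divided by the number of citable
    items published in those two years."""
    return sum(citations_by_year) / sum(articles_by_year)

# Citations in 2017 to articles published in 2015 and 2016
citations = [63, 94]   # 63 + 94 = 157
# Citable items published in 2015 and 2016
articles = [61, 51]    # 61 + 51 = 112

jif_2017 = impact_factor(citations, articles)
print(round(jif_2017, 2))  # prints 1.4
```

Note that the numerator counts citations, not cited articles, which is why the ratio can exceed the number of papers any one article set could produce.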
Factors that Influence Impact Factors
Date of Publication
The impact factor is based solely on citation data and only looks at the citation frequency of articles from a journal in their first two years of publication. Journals whose articles are cited steadily over a long period (say, ten years) rather than only immediately lose out under this calculation.
Large vs. Small Journals
Large and small journals are compared as equals, and large journals tend to have higher impact factors; this has nothing to do with their quality.
It’s important to remember that the impact factor only looks at an average citation and that a journal may have a few highly cited papers that greatly increase its impact factor, while other papers in that same journal may not be cited at all. Therefore, there is no direct correlation between an individual article’s citation frequency or quality and the journal impact factor.
Impact factors are calculated using citations not only to research articles but also to review articles (which tend to receive more citations), editorials, letters, meeting abstracts, and notes. Because some of these item types count toward the citation numerator but not toward the denominator of citable items, editors and publishers have an opportunity to manipulate the ratio and artificially inflate their journal's score.
Changing / Growing Fields
Rapidly changing and growing fields (e.g. biochemistry and molecular biology) have much higher immediate citation rates, so those journals will always have higher impact factors than nursing, for instance.
Clarivate's Indexing / Citation Focus
There is unequal depth of coverage across disciplines. In the health sciences, the Institute for Scientific Information (ISI), the company which publishes impact factors, has focused much of its attention on indexing and citation data from journals in clinical medicine and biomedical research, and has paid less attention to nursing. Very few nursing journals (around 45) are included in its calculations. This does not mean that the nursing journals it excludes are of lesser quality; in fact, no explanation is given for why some journals are included and others are not. In general, ISI focuses more heavily on journal-dependent disciplines in the sciences and provides less coverage of the social sciences and humanities, where books and other publishing formats are still common.
The Subject Field of the Journal
In some disciplines, such as areas of clinical medicine where there is no distinct separation between clinical/practitioner journals and research journals, research journals tend to have higher citation rates. This may also apply to nursing. Some fields, by tradition, simply do not cite as often as others; mathematics is one example. As a result, journals in those fields have lower impact factors because of lower citation rates, not because of lower quality.
One can see how the database Scopus ranks journals, and compare multiple titles, by using the 'Sources' tab at the top of the Scopus homepage. Each journal has a CiteScore, SNIP, and % cited.
CiteScore measures the citation impact of a journal. More information on how it is calculated can be found on the Scopus Journal Metrics webpage.
SNIP stands for Source Normalized Impact per Paper. It measures contextual citation impact by weighting citations based on the total number of citations in a subject field, which makes it useful for comparing a journal with competing journals in its subject area.
Journal acceptance rates compare the number of manuscripts accepted for publication to the total number of manuscripts submitted in one year. The exact method of calculation varies by journal, is often an approximation, changes annually, and can be tricky to find. Acceptance rates are often used as a proxy for journal quality: a low acceptance rate may indicate a more selective, higher-quality journal.
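The arithmetic behind an acceptance rate is a simple ratio, sketched below; the figures and function name are hypothetical, chosen only to illustrate the calculation:

```python
def acceptance_rate(accepted, submitted):
    """Percentage of submitted manuscripts accepted in a given year."""
    return 100 * accepted / submitted

# Hypothetical journal: 120 manuscripts accepted out of 800 submitted
print(acceptance_rate(120, 800))  # prints 15.0
```

Because journals count "submissions" differently (for example, some exclude desk rejects or resubmissions), two journals reporting the same percentage may not be directly comparable.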
There is no database that compiles this metric. To find a journal acceptance rate: