Liaisons and other librarians working with faculty should be aware of Elsevier’s recent release of a new bibliometric indicator, the CiteScore Index (CSI). This metric is a direct competitor to Thomson Reuters’ (now Clarivate Analytics’) ubiquitous Journal Impact Factor (JIF). The metrics are similar in that both purport to measure the impact of academic journals as a ratio of the citations a journal receives to the citable content it publishes.
While the JIF is based on content indexed in the Web of Science database, CSI will be based on the content in Scopus, which indexes a significantly larger number of titles (22,000 titles compared to 11,000).
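Both metrics boil down to the same arithmetic: citations received in a target year divided by the count of citable items published in a preceding window (the JIF uses a two-year window; CiteScore, at launch, a three-year window drawn from Scopus). A minimal sketch, with made-up numbers for a hypothetical journal:

```python
def ratio_metric(citations_in_year, items_in_window):
    """Citations to a journal's recent items divided by the count of those items."""
    return citations_in_year / items_in_window

# Hypothetical journal: 1,200 citations in 2016 to content published
# 2013-2015, during which the journal published 400 citable items.
citescore_like = ratio_metric(1200, 400)
print(citescore_like)  # 3.0
```

The disagreements between JIF and CiteScore rankings come not from this division but from the inputs: which database indexes the journal, how long the window is, and crucially what counts as a "citable item" in the denominator.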
If a journal’s impact is a consistent and measurable attribute, it stands to reason that its impact rank and score would be very similar regardless of who calculates the metric. However, preliminary analyses show that this is not the case. Librarians might wish to read the findings of early comparisons by Carl T. Bergstrom and Jevin West (developers of yet another metric, the Eigenfactor). Surprising no one, they report that Elsevier journals seem to enjoy a boost in ranking using the new CiteScore, while the scores for Nature and Springer journals (now owned by the same company, and a major competitor to Elsevier in the space) are lower than what you might expect given their Impact Factors. Additionally, journals published by Emerald, which performed poorly compared to journals from other publishers in the same disciplines during our own analysis, have also seen a boost from the new metric.
These findings underscore the fact that reputational metrics are neither impartial nor objective and are subject to the influences of the entities that produce them. Librarians should be prepared to engage in critical evaluation of these metrics and to answer questions from faculty.
(Thank you to Klara Maidenberg, Assessment Librarian, for providing this information.)
Thank you to all of you who attended today’s webinar on research metrics, Research impact metrics for librarians: calculation & context (May 19, 2016). It offered a great overview of the challenges of working with metrics. Although the presentation focused on the sciences, the slides may be helpful to all of us who need to become better acquainted with the benefits and limitations of key metrics tools.
You can now view the presentation on demand at your convenience with audio.
The December 4 Liaison Update Forum showcased six lightning-round presentations. Each presentation was followed by small-group discussion and an open Q&A session. Presenters kept track of the questions (which were submitted on index cards to preserve anonymity) and have kindly recorded and shared their responses for this post.
- Stephanie Orfano: Thinking beyond fair dealing: Questions facing the Scholarly Communications and Copyright Office (…and how you can help) Powerpoint || Q&A
- Caitlin Tillman: Talking to faculty about Downsview Powerpoint || Q&A
- Judith Logan: Choosing the right platform for your web content Powerpoint || Q&A
- Carey Toane: EntComp: Establishing an entrepreneurship community of practice at UTL Powerpoint || Q&A
- Dylanne Dearborn: Research data management at the U of T Powerpoint || Q&A
- Gail Nichol: I’ll follow you if you’ll follow me: How Scopus can track your research impact, connect you with others in your field and keep you up to date Powerpoint || Q&A
Gail Nichol reviewed the recent discussions between the Library and senior university members on how to support the acquisition of reputation metrics for use by faculty, departments and divisions. Several librarians shared stories of how they are currently supporting faculty and departmental requests for information.
Trends we noticed:
- Although the H-index isn’t perfect, it has become the de facto tool for inter-institutional and inter-departmental comparisons. Most understand its limitations.
- Supporting faculty and departmental requests for metrics is time-intensive, with no one-size-fits-all approach. Nevertheless, there is an important role for U of T librarians to support these kinds of requests at the divisional, departmental and individual level.
- It is not easy to construct profiles, even with tools like Web of Science and Scopus that enable automatic generation of H-indices, so wider exposure to these kinds of tools and their capabilities is an area of interest. Be patient: there’s a learning curve.
- Librarians are interested in further training and development to support their work in the area of metrics, and expressed interest in creating an information space to share information, strategies, and approaches to various requests.
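For reference, the H-index mentioned above is straightforward to compute once citation counts per paper are in hand: it is the largest h such that the author has h papers each cited at least h times. A minimal sketch:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have >= 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The hard part in practice is not this calculation but assembling a clean, disambiguated publication list in Web of Science or Scopus, which is where the learning curve noted above comes in.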
Materials from today’s session:
On December 1, 2014, Dylanne Dearborn and Stephanie Orfano presented an open session for liaison librarians on ORCID, a persistent digital identifier that disambiguates researchers’ names. As an identification system, ORCID enables all aspects of a researcher’s work to be identified, while also allowing for linkage in the scholarly communications workflow.
The presentation introduced ORCID and detailed the potential benefits and uses for different stakeholder groups including researchers, university administrators, funding bodies, publishers, and the library. Examples of ORCID integration were introduced and the prospective role of ORCID in the scholarly communication process was discussed.
This spring SPARC published a community resource, Article-Level Metrics: A SPARC Primer (PDF). This primer provides an overview of what article-level metrics (ALMs) are, why they matter, how they complement established utilities and metrics, and how they might be considered for use in the tenure and promotion process.