Introduction to Elsevier’s CiteScore

Liaisons and other librarians working with faculty should be aware of Elsevier’s recent release of a new bibliometric, called the CiteScore Index (CSI). This metric will be a direct competitor to Thomson Reuters’ (now Clarivate Analytics’) ubiquitous Journal Impact Factor (JIF). The metrics are similar in that both purport to measure the impact of academic journals based on the ratio of citations a journal receives to the citable content it publishes.
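As a rough illustration of the citations-to-citable-content ratio that both metrics rely on, here is a minimal sketch. The window lengths noted in the comments (two prior years for the JIF, three for the original CiteScore) reflect the publishers’ stated definitions, and the figures themselves are invented for illustration only:

```python
def citation_ratio(citations, citable_items):
    """Generic citations-per-document ratio underlying both the JIF and CiteScore."""
    return citations / citable_items

# Assumed windows: the JIF counts citations in year Y to items published in
# Y-1 and Y-2; the original (2016) CiteScore counted citations in year Y to
# items published in Y-1 through Y-3. The figures below are invented.
jif_like = citation_ratio(citations=1200, citable_items=400)        # 3.00
citescore_like = citation_ratio(citations=1500, citable_items=900)  # 1.67

print(f"JIF-style ratio: {jif_like:.2f}")
print(f"CiteScore-style ratio: {citescore_like:.2f}")
```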

While the JIF is based on content indexed in the Web of Science database, CSI will be based on the content in Scopus, which indexes a significantly larger number of titles (22,000 titles compared to 11,000).

If a journal’s impact is a consistent and measurable attribute, it stands to reason that its impact rank and score would be very similar regardless of who calculates the metric. Preliminary analyses, however, are showing that this is not the case. Librarians might wish to read the findings of early comparisons by Carl T. Bergstrom and Jevin West (developers of yet another metric, the Eigenfactor). Surprising no one, they report that Elsevier journals seem to enjoy a boost in ranking under the new CiteScore, while the scores for Nature and Springer journals (now owned by the same company, and a major competitor to Elsevier in this space) are lower than their Impact Factors would suggest. Additionally, journals published by Emerald, which performed poorly against journals from other publishers in the same disciplines in our own analysis, have also seen a boost from the new metric.

These findings underscore the fact that reputational metrics are neither impartial nor objective and are subject to the influences of the entities that produce them. Librarians should be prepared to engage in critical evaluation of these metrics and to answer questions from faculty.

(Thank you to Klara Maidenberg, Assessment Librarian, for providing this information.)


Webinar: Research Impact Metrics for Librarians

Thank you to all of you who attended today’s webinar on research metrics, Research impact metrics for librarians: calculation & context | May 19, 2016. It was a great overview of the challenges of working with metrics. Although the presentation focused on the sciences, the content of the slides may be helpful to all of us who need to become better acquainted with the benefits and limitations of key metrics tools.

You can now view the presentation on demand at your convenience with audio.

Additional documents:


Liaison Librarians Update Forum December 4 2015

The December 4 Liaison Update Forum showcased 6 lightning round presentations.  Each presentation was followed by small group discussion and an open Q&A session.  Presenters kept track of the questions (which were submitted on index cards to preserve anonymity) and have kindly recorded and shared their responses for this post.

  1. Stephanie Orfano: Thinking beyond fair dealing: Questions facing the Scholarly Communications and Copyright Office (…and how you can help)  Powerpoint || Q&A
  2. Caitlin Tillman: Talking to faculty about Downsview Powerpoint || Q&A
  3. Judith Logan: Choosing the right platform for your web content Powerpoint || Q&A
  4. Carey Toane: EntComp: Establishing an entrepreneurship community of practice at UTL Powerpoint || Q&A
  5. Dylanne Dearborn: Research data management at the U of T Powerpoint || Q&A
  6. Gail Nichol: I’ll follow you if you’ll follow me: How Scopus can track your research impact, connect you with others in your field and keep you up to date Powerpoint || Q&A


Talking about metrics to the University community – notes from the February 24 2015 practice exchange

Gail Nichol reviewed the recent discussions between the Library and senior university members on how to support the acquisition of reputation metrics for use by faculty, departments, and divisions. Several librarians shared stories of how they are currently supporting faculty and departmental requests for information.

Trends we noticed:

  • Although the H-index isn’t perfect, it has become the de facto tool for inter-institutional and inter-departmental comparisons, and most understand its limitations (a quick sketch of how it is calculated follows this list).
  • Supporting faculty and departmental requests for metrics is time-intensive, with no one-size-fits-all approach.  Nevertheless, there is an important role for U of T librarians to support these kinds of requests at the divisional, departmental and individual level.
  • It is not easy to construct profiles, even with tools like Web of Science and Scopus that enable automatic generation of H-indices, so wider exposure to these kinds of tools and their capabilities is an area of interest. Be patient; there’s a learning curve.
  • Librarians are interested in further training and development to support their work in the area of metrics, and expressed interest in creating an information space to share information, strategies, and approaches to various requests.
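Since the H-index came up repeatedly, here is a minimal sketch of how it is calculated. The citation counts below are invented, and real tools like Web of Science and Scopus handle the author disambiguation and data cleanup that this ignores:

```python
def h_index(citation_counts):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts: five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # 4 -> four papers with at least 4 citations each
```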

Materials from today’s session:

 


ORCID: What It Is and What It Can Do

On December 1, 2014, Dylanne Dearborn and Stephanie Orfano presented an open session for liaison librarians on ORCID, a persistent digital identifier that disambiguates researchers’ names. As an identification system, ORCID enables all aspects of a researcher’s work to be identified, while also allowing for linkage across the scholarly communications workflow.

The presentation introduced ORCID and detailed the potential benefits and uses for different stakeholder groups, including researchers, university administrators, funding bodies, publishers, and the library. Examples of ORCID integration were introduced, and the prospective role of ORCID in the scholarly communication process was discussed.
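For anyone who wants to look at an ORCID record directly, the sketch below fetches one from ORCID’s public REST API and lists the top-level sections of the record. The endpoint (pub.orcid.org, API v3.0, queried anonymously) and the iD used (ORCID’s published example identifier) are assumptions for illustration, not details from the presentation:

```python
import json
import urllib.request

# Assumptions for illustration: ORCID's public API v3.0 at pub.orcid.org,
# queried without authentication, using ORCID's well-known example iD.
ORCID_ID = "0000-0002-1825-0097"
url = f"https://pub.orcid.org/v3.0/{ORCID_ID}/record"

request = urllib.request.Request(url, headers={"Accept": "application/json"})
with urllib.request.urlopen(request) as response:
    record = json.load(response)

# Top-level sections of the record (e.g. person, activities-summary).
print(sorted(record.keys()))
```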
