The Altmetric score is a general measure of the attention that an article, book or dataset has received online. It reflects two things:
- The quantity of attention received: in general, the more people talking about an article, the higher the score.
- The quality of that attention: a news story counts for more than a Facebook post, and attention from a researcher counts for more than attention from an automated Twitter bot.
The Altmetric score is useful for ranking articles by attention; it can't tell you anything about the quality of the article itself, though reading the linked discussions might.
It is important to know that the score is based on the kinds of attention that Altmetric tracks (specifically links to or saves of scholarly articles, books and datasets) and to be mindful of potential limitations.
You should also bear in mind that different subject areas usually aren't directly comparable: a "popular" physics paper may have a far lower Altmetric score than an "average" genetics paper.
We don't use reader counts from Mendeley or CiteULike in the score calculation.
The steps we take to calculate the score are:
Collect mentions
We aggregate the different pieces of content (tweets, news stories, blog posts, Facebook wall posts, Stack Exchange threads... we call them all posts) mentioning each article.
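The aggregation step can be pictured as grouping posts by the article they mention. This is only a sketch: the field names (`article_id`, `type`) are illustrative and not Altmetric's actual schema.

```python
from collections import defaultdict

# Hypothetical posts, each mentioning one article.
posts = [
    {"article_id": "doi:10.1000/a", "type": "tweet"},
    {"article_id": "doi:10.1000/a", "type": "news"},
    {"article_id": "doi:10.1000/b", "type": "blog"},
]

def aggregate_posts(posts):
    """Group posts by the article they mention."""
    by_article = defaultdict(list)
    for post in posts:
        by_article[post["article_id"]].append(post)
    return dict(by_article)

mentions = aggregate_posts(posts)
# mentions now maps each article to the list of posts about it
```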
Weight content types
Intuitively, some forms of attention are of a 'higher quality' than others. If you ask scientists whether they'd rather have somebody tweet about their article or write a piece in the New York Times about it, they'll choose the latter most of the time.
So, all else being equal, each type of content contributes a different base score to the article's total. For example, a tweet may be worth 1 and a blog post 5. In practice these scores are usually modified by subsequent steps in the scoring algorithm.
Practical example: a news story in the NYT will, by default, contribute more to an article's final score than a single tweet.
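A minimal sketch of the base-score lookup. Only the tweet (1) and blog (5) values come from the example above; the other entries, and the function itself, are made up for illustration.

```python
# Illustrative base scores per content type. Tweet (1) and blog (5)
# come from the text; "facebook" and "news" are invented placeholders.
BASE_SCORES = {
    "tweet": 1,
    "facebook": 0.25,
    "blog": 5,
    "news": 8,
}

def base_score(post_type):
    """Base contribution of one post, before any weighting is applied."""
    return BASE_SCORES.get(post_type, 0)
```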
Collect & analyse profiles
We fetch the profile of the user who created each post whenever possible. We also scan the Altmetric database for the items those users have already mentioned.
We look at how often the user links out to scholarly content, whether they're biased towards any one publisher or journal, and what type of people follow or are friends with them.
All this information is used to produce a weighting that influences how much each post contributes to the final score.
Practical example: posts from an automated journal TOC (that posts new papers to Facebook as they are published) will contribute very little to the article's final score. Posts from a doctor who links to articles once or twice a week and is followed by other doctors will score relatively highly.
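One way to picture this weighting is as a heuristic that down-weights automated accounts and single-journal feeds while up-weighting users who regularly engage with scholarly content. Every field name and threshold below is a hypothetical stand-in, not Altmetric's actual model.

```python
def profile_weight(profile):
    """Illustrative weight for one poster's profile.

    Hypothetical fields:
      is_bot               -- account appears automated
      scholarly_link_rate  -- fraction of posts linking to scholarly content
      single_journal_bias  -- fraction of those links going to one journal
    """
    if profile.get("is_bot"):
        return 0.1  # automated feeds contribute very little
    weight = 1.0
    if profile.get("scholarly_link_rate", 0) > 0.5:
        weight += 0.5  # regularly links out to scholarly content
    if profile.get("single_journal_bias", 0) > 0.8:
        weight -= 0.5  # looks like a single-publisher TOC feed
    return max(weight, 0.1)
```

Under these made-up numbers, an automated journal feed gets weight 0.1, while a doctor who links to articles regularly gets 1.5.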
Search other datasets
For some types of attention, like blogs and the mainstream media, it doesn't make sense to look at post author profiles.
In these cases we typically measure influence by looking at how much attention the source itself gets on different social media sites.
Practical example: more people tweet or repost BBC News science stories than science articles in Le Figaro, so posts from the BBC News site contribute more to an article's final score than posts from Le Figaro.
Produce final score
We total the contributions made by each post after applying any relevant modifiers.
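Putting the steps together, the final tally can be pictured as base score times weight, summed over all posts. The values below echo the examples above (news story, ordinary tweet, bot tweet) but are otherwise made up.

```python
def final_score(weighted_posts):
    """Sum each post's base score multiplied by its weight (sketch only)."""
    return sum(p["base"] * p["weight"] for p in weighted_posts)

# Hypothetical weighted posts for one article:
weighted_posts = [
    {"base": 8, "weight": 1.5},  # NYT-style news story, influential source
    {"base": 1, "weight": 1.0},  # ordinary tweet
    {"base": 1, "weight": 0.1},  # tweet from an automated bot
]
score = final_score(weighted_posts)
```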