ImpactStory

FAQ

what is ImpactStory?

ImpactStory is an open-source, web-based tool that helps researchers explore and share the diverse impacts of all their research products—from traditional ones like journal articles, to emerging products like blog posts, datasets, and software. By helping researchers tell data-driven stories about their impacts, we're helping to build a new scholarly reward system that values and encourages web-native scholarship. We’re funded by the Alfred P. Sloan Foundation and incorporated as a nonprofit corporation.

ImpactStory delivers open metrics, with context, for diverse products:

  • Open metrics: Our data (to the extent allowed by providers’ terms of service), code, and governance are all open.
  • With context: To help researchers move from raw altmetrics data to impact profiles that tell data-driven stories, we sort metrics by engagement type and audience. We also normalize based on comparison sets: an evaluator may not know if 5 forks on GitHub is a lot of attention, but they can understand immediately if their project ranked in the 95th percentile of all GitHub repos created that year (see the sketch after this list).
  • Diverse products: Datasets, software, slides, and other research products are presented as an integrated section of a comprehensive impact report, alongside articles--each genre a first-class citizen, each making its own kind of impact.
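For a rough idea of how percentile-style context can be computed, here is a minimal sketch in Python. It is not ImpactStory's actual normalization code, and the sample fork counts are made up for illustration.

    from bisect import bisect_left

    def percentile_rank(value, comparison_set):
        """Return the percentile (0-100) of `value` within a comparison set,
        e.g. fork counts for GitHub repos created in the same year."""
        ordered = sorted(comparison_set)
        below = bisect_left(ordered, value)  # values strictly less than `value`
        return 100.0 * below / len(ordered)

    # Made-up comparison set: fork counts of ten hypothetical repos
    sample_forks = [0, 0, 0, 0, 1, 1, 2, 3, 8, 40]
    print(f"{percentile_rank(5, sample_forks):.0f}th percentile")  # -> 80th percentile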

who is it for?

  • researchers who want to know how many times their work has been downloaded, bookmarked, and blogged
  • research groups who want to look at the broad impact of their work and see what has demonstrated interest
  • funders who want to see what sort of impact they may be missing when only considering citations to papers
  • repositories who want to report on how their research products are being discussed
  • all of us who believe that people should be rewarded when their work (no matter what the format) makes a positive impact (no matter what the venue). Aggregating evidence of impact will facilitate appropriate rewards, thereby encouraging additional openness of useful forms of research output.

how should it be used?

ImpactStory data can be:

  • highlighted as indications of the minimum impact a research product has made on the community
  • explored more deeply to see who is citing, bookmarking, and otherwise using your work
  • run to collect usage information for mention in biosketches
  • included as a link in CVs
  • analyzed by downloading detailed metric information

how shouldn’t it be used?

Some of these issues relate to the early development phase of ImpactStory, some reflect our still-early understanding of altmetrics, and some are just common sense. ImpactStory reports shouldn't be used:

  • as indication of comprehensive impact

    ImpactStory is in early development. See limitations and take it all with a grain of salt.

  • for serious comparison

    ImpactStory is currently better at collecting comprehensive metrics for some products than others, in ways that are not clear in the report. Extreme care should be taken in comparisons, and numbers should be considered minimums. Even more care should be taken in comparing collections of products, since ImpactStory is currently better at finding products identified in some ways than in others. Finally, some of these metrics can be easily gamed; this is one reason we believe having many metrics is valuable.

  • as if we knew exactly what it all means

    The meaning of these metrics is not yet well understood; see the section below.

  • as a substitute for personal judgement of quality

    Metrics are only one part of the story. Look at the research product for yourself and talk about it with informed colleagues.

what do these numbers actually mean?

The short answer is: probably something useful, but we’re not sure what. We believe that dismissing the metrics as “buzz” is short-sighted: surely people bookmark and download things for a reason. The long answer, as well as a lot more speculation on the long-term significance of tools like ImpactStory, can be found in the nascent scholarly literature on “altmetrics.”

The Altmetrics Manifesto is a good, easily-readable introduction to this literature. You can check out the shared altmetrics library on Mendeley for a growing list of relevant research.

terms of use

Due to agreements we have made with data providers, you may not scrape this website -- use the embed or download functionality instead.

which metrics are measured?

Metrics are computed based on the following data sources (column names for CSV export are in parentheses):
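Whichever sources apply to your products, the downloaded CSV export can be inspected with a few lines of Python. A minimal sketch follows; the column names used here are placeholders, since the real names depend on which data sources returned metrics for your products.

    import csv

    # Sketch: read an ImpactStory CSV export and print a couple of columns.
    # "title" and "mendeley:readers" are placeholder column names -- check the
    # header row of your own export for the actual ones.
    with open("impactstory_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            print(row.get("title"), row.get("mendeley:readers"))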

where is the journal impact factor?

We do not include the Journal Impact Factor (or any similar proxy) on purpose. As has been repeatedly shown, the Impact Factor is not appropriate for judging the quality of individual research products. Individual article citations reflect much more about how useful papers actually were. Better yet are article-level metrics, as pioneered by PLoS, in which we examine traces of impact beyond citation. ImpactStory broadens this approach to product-level metrics by including preprints, datasets, presentation slides, and other research output formats.

where is my other favourite metric?

We only include open metrics here, and so far only a selection of those. We welcome contributions of plugins. Write your own and tell us about it.
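As a purely hypothetical illustration of what a small metrics plugin could do, here is a sketch that pulls one open metric (fork count) from the public GitHub API. The class name, method, and return format are assumptions made for this example, not ImpactStory's actual plugin interface.

    import json
    import urllib.request

    class GitHubForksProvider:
        """Hypothetical plugin shape: fetch the fork count for a GitHub repo."""

        def metrics(self, owner, repo):
            url = f"https://api.github.com/repos/{owner}/{repo}"
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
            return {"github:forks": data["forks_count"]}

    # Replace owner/repo with a real repository before running.
    print(GitHubForksProvider().metrics("some-owner", "some-repo"))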

Not sure ImpactStory is your cup of tea? Check out these similar tools:

you're not getting all my citations!

We'd love to display citation information from Google Scholar and Thomson Reuters' Web of Science in ImpactStory, but sadly neither Google Scholar nor Web of Science allow us to do this. We're really pleased that Scopus has been more open with their data, allowing us to display their citation data on our website. PubMed and Crossref are exemplars of open data: we display their citation counts on our website, in ImpactStory widgets, and through our API. As more citation databases open up, we'll include their data as fully as we can.

Each source of citation data gathers citations in its own way, with its own strengths and limitations. Web of Science gets citation counts by manually gathering citations from a relatively small set of "core" journals. Scopus and Google Scholar crawl a much more expansive set of publisher webpages, and Google also examines papers hosted elsewhere on the web. PubMed looks at the reference sections of papers in PubMed Central, and CrossRef looks at the reference lists it sees. Google Scholar's scraping techniques and citation criteria are the most inclusive; the number of citations found by Google Scholar is typically the highest, though the least curated. A lot of folks have looked into the differences between citation counts from different providers, comparing Google Scholar, Scopus, and Web of Science and finding many differences; if you'd like to learn more, you might start with this article.

what are the current limitations of the system?

ImpactStory is in early development and has many limitations. Some of the ones we know about:

gathering IDs sometimes misses products

  • ORCID and BibTeX import sometimes can't parse or locate all objects.

products are sometimes missing metrics

  • ImpactStory doesn’t display metrics with a zero value
  • sometimes products are received without sufficient information to use all metrics. For example, the system sometimes can't figure out all of a product's URLs from its DOI (a sketch of one workaround follows this list).
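One common workaround is to ask the DOI resolver itself for the landing page by following its redirects. The sketch below illustrates that general approach, not ImpactStory's internal code; the DOI shown is a placeholder.

    import urllib.request

    def landing_url(doi):
        """Follow the DOI resolver's redirects to the product's landing page."""
        with urllib.request.urlopen(f"https://doi.org/{doi}") as resp:
            return resp.geturl()  # final URL after all redirects

    print(landing_url("10.1234/example-doi"))  # replace with a real DOI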

metrics sometimes have values that are too low

  • some sources have multiple records for a given product. ImpactStory identifies only one copy and so reports the impact metrics for that record alone; it currently makes no attempt to aggregate across duplicates within a source.

other

  • the number of items on a report is currently limited.

Tell us about bugs! @ImpactStory (or via email to team@impactstory.org)

is this data Open?

We’d like to make all of the data displayed by ImpactStory available under CC0. Unfortunately, the terms-of-use of most of the data sources don’t allow that. We're trying to figure out how to handle this.

An option to restrict the displayed reports to Fully Open metrics — those suitable for commercial use — is on the To Do list.

The ImpactStory software itself is fully open source under an MIT license; the code is on GitHub.

who developed ImpactStory?

Concept originally hacked at the Beyond Impact Workshop, part of the Beyond Impact project funded by the Open Society Foundations (initial contributors). Here's the current team.

who funds ImpactStory?

Early development was done on personal time, plus some discretionary time while funded through DataONE (Heather Piwowar) and a UNC Royster Fellowship (Jason Priem).

In early 2012, ImpactStory was given £17,000 through the Beyond Impact project from the Open Society Foundations. As of May 2012, ImpactStory is funded through a $125k grant from the Alfred P. Sloan Foundation.

what have you learned?

  • the multitude of IDs for a given product is a bigger problem than we guessed. Even articles that have DOIs often also have urls, PubMed IDs, PubMed Central IDs, Mendeley IDs, etc. There is no one place to find all synonyms, yet the various APIs often only work with a specific one or two ID types. This makes comprehensive impact-gathering time consuming and error-prone.
  • some data is harder to get than we thought (WordPress stats, for example, aren't available without requesting consumer key information)
  • some data is easier to get than we thought (vendors willing to work out special agreements, permit web scraping for particular purposes, etc)
  • lack of an author-identifier makes us reliant on user-populated systems like Mendeley for tracking author-based work (we need ORCID and we need it now!)
  • API limits like those on PubMed Central (3 requests per second) make their data difficult to incorporate in this sort of application (see the throttled ID-lookup sketch after this list)
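To make the ID-synonym and rate-limit points above concrete, here is a sketch that looks up synonymous IDs (PMID, PMCID, DOI) through NCBI's PMC ID converter service and sleeps between calls to stay under a per-second limit. The request parameters and response shape follow NCBI's public documentation as we understand it; treat those details, and the placeholder ID, as assumptions to verify before relying on this.

    import json
    import time
    import urllib.parse
    import urllib.request

    IDCONV = "https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/"

    def id_synonyms(article_ids, pause=0.5):
        """Yield PMID/PMCID/DOI synonyms for each ID, throttling between calls."""
        for article_id in article_ids:
            query = urllib.parse.urlencode({"ids": article_id, "format": "json"})
            with urllib.request.urlopen(IDCONV + "?" + query) as resp:
                record = json.load(resp)["records"][0]
            yield {key: record.get(key) for key in ("pmid", "pmcid", "doi")}
            time.sleep(pause)  # stay well under ~3 requests per second

    # Replace with real DOIs, PMIDs, or PMCIDs before running.
    for synonyms in id_synonyms(["10.1234/example-doi"]):
        print(synonyms)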

how can I help?

  • do you have data? If it is already available in some public format, let us know so we can add it. If it isn’t, either please open it up or contact us to work out some mutually beneficial way we can work together.
  • do you have money? We need money :) We need to fund future development of the system and are actively looking for appropriate opportunities.
  • do you have ideas? Maybe enhancements to ImpactStory would fit in with a grant you are writing, or maybe you want to make it work extra-well for your institution’s research outputs. We’re interested: please get in touch (see bottom).
  • do you have energy? We need better “see what it does” documentation, better lists of collections, etc. Make some and tell us, please!
  • do you have anger that your favourite data source is missing? After you confirm that its data isn't available for open purposes like this, write to them and ask them to open it up... it might work. If the data is open but isn't included here, let us know to help us prioritize.
  • can you email, blog, post, tweet, or walk down the hall to tell a friend? See the “this is so cool” section for your vital role...

this is so cool.

Thanks! We agree :)

You can help us. Demonstrating the value of ImpactStory is key to receiving future funding.

Buzz and testimonials will help. Tweet your reports. Blog, send email, and show off ImpactStory at your next group meeting to help spread the word.

Tell us how cool it is at @ImpactStory (or via email to team@impactstory.org) so we can consolidate the feedback.

I have a suggestion!

We want to hear it. Send it to us at @ImpactStory (or via email to team@impactstory.org).