By Porter Anderson, Editor-in-Chief | @Porter_Anderson
Verses: ‘Research With an Impact on Society’

As we work to catch up on some of the relevant material released to the news media near the end of our publication year, we look now at two significant research reports from Elsevier, one on research evaluation and the other on real-world impact—both increasingly pressing interests in the world of academic publishing.
For the 30-page report “Back to Earth: Landing Real-World Impact in Research Evaluation,” Elsevier surveyed 400 academic leaders, funders, and researchers in seven countries about real-world impact as part of academic evaluation. Key findings include:
- Sixty-six percent of respondents say academia has a moral responsibility to incorporate real-world impact into standard research evaluation
- Seventy percent say they are passionate about research that has a positive real-world impact
- Fifty-three percent say a more holistic approach to evaluation would improve research cost-effectiveness
- Fifty-one percent of respondents identified at least one serious problem with current methods of research evaluation
In this report, it’s interesting to note some culture-to-culture differences on the question of how important it is for research “to aim for real-world impact.” Particularly during the COVID-19 pandemic, there could hardly have been a time when the world’s need for the most sophisticated, committed, and efficient research was more obvious.
Nevertheless, the report’s data indicates that respondents on this point came in on the affirmative side (yes, research should aim for real-world impact) at rates as high as 93 percent in the United Kingdom and as low as 64 percent in the Netherlands, the Elsevier report’s home.
Another very interesting point in this report compares the views of funders with those of researchers.
While surveyed funders seem to agree with researchers that more holistic approaches are important, the funders were more in agreement than the researchers that the current system creates vested interests.
And it’s the researchers who said they were more passionate than the funders about having “real-world impact as researchers and academic leaders.”
Topping the list of barriers to a more holistic form of research assessment, as offered by respondents overall (researchers and funders), was a lack of resources at 56 percent, with 48 percent citing a lack of consensus on what actually constitutes impact.
Also cited heavily was the lack of a common framework or methodology for holistic assessment of research’s impact, at 45 percent. A tie came in next, with 40 percent naming two more barriers: “achieving sufficient alignment between different actors” and “complexity.”
And in the 20-page report “The Future of Evaluation: Emerging Consensus on a More Holistic System,” a series of round-table discussions with the heads of funding bodies from 18 countries and 40 academic leaders explored three key questions:
- How do you view the existing evaluation system?
- How would you like to see it change?
- What is needed to get there?
Among highlights of the outcomes covered here:
- Evaluation is a priority: The subject is of high importance to academic leaders
- A primary focus is institutional-level assessment, including societal impact: Leaders are interested in evaluation of the university and its teaching, research and societal mission
- There is a strong appetite for change: The current system, with its emphasis on articles and citations, does not align with desired outcomes. There is wide support for reform towards a system that also addresses education and societal impact
- Striking the right balance between research and education will be key: This involves acknowledging that research underpins education, especially at research-intensive universities
- A holistic approach is required: Harmonization at an institutional and international level, as well as portability at the individual level, is critical to the successful development of evaluation:
  - Evaluation of universities shouldn’t be viewed as separate from evaluation of academics
  - Evaluation of universities in a specific country cannot be out of sync with global trends
- Bringing about change won’t be easy: A comprehensive, objective evaluation of societal impact is far from straightforward
- A shift in culture is necessary: A move towards a more interdisciplinary approach, emphasizing aspects such as team science, and diversity and inclusion, is seen as an important ingredient for success
- Qualitative assessment and peer review are critical for evaluation of broader impact
- Quantitative measures of broader impact are needed: While these are complex and elusive, they would enable easy aggregation and comparison of elements such as societal impact
- Artificial intelligence has an important role to play: AI will change the way we teach and do research, and it has the potential to enhance future evaluation through addressing challenges around qualitative and quantitative assessment
And around issues involving artificial intelligence, several observations from the research will sound familiar to trade publishing professionals, as academic leaders told Elsevier about opportunities they could see to use it:
- Improve how research is conceived, conducted and communicated: AI is already proving its value in a variety of areas, from sifting and analyzing data to providing personalized and predictive services
- Innovate teaching and learning: Generative AI is currently used in coursework and homework assignments, and has the potential to create personalized learning materials, as well as provide virtual mentoring and other support
- Aid peer review: Options identified included scanning manuscripts for ethical issues, such as plagiarism, and checking for alignment with journals’ aims and scope
- Optimize institutions’ impact: Studies are already underway to explore the use of AI in predicting and evaluating contributions
- Convert qualitative comments into quantitative metrics: Many felt that AI is a promising route to turn qualitative comments into practical indicators
- Build evaluation tools: These include algorithms designed to analyze case studies
Clearly, the effort to assess research and its impact on society, an undercurrent in both reports, has been heightened among the world’s biggest publishers in the field.
In her introduction to the Back to Earth study, Judy Verses, president for academic and government affairs at Elsevier, writes, “Research with an impact on society has always been important. But with increasingly stretched budgets, it’s now equally important to assess, audit and communicate this impact. Funders know this and researchers know this.
“Indeed, academics are already being increasingly called on to show the economic and societal impact of their work, and funders have systems in place to evaluate this.”
The Call for Harmonization: Not Just in Academia
Needless to say, there are stark parallels here with some of the issues seen in the international trade publishing industry’s efforts to develop better practices and results in comparative evaluation, as well.
For example, there’s been a fine pilot exercise performed by AldusUp that looks at the different ways European book markets evaluate themselves. The apples-to-pears challenges of market-to-market publishing statistics can be dizzying.
In this, we’re indebted to the researcher Owena Reinke of the Johannes Gutenberg-Universität Mainz, who pointed out in our preparation for a Frankfurter Buchmesse panel at the Guest of Honor Slovenia pavilion that after 18 months of “digging through ’50 Shades of Fruit Salat’” to find out whether some research had even a hope of being harmonized, she could confirm that there still is “surprisingly little systematic surveying done to determine basic figures about the reading habits in different European countries on a national level. And the data we have,” she continued, “is still remarkably hard to compare.”
Two of the most cogent points Reinke made after those 18 months of research—relative to “Data and Reading and Publishing Research” with Christoph Blasi and Miha Kovač—were that in the trade data comparison she was examining, (1) the entity paying for data could heavily influence the results, leading to “quite heterogeneous ideas of which data is relevant,” and (2) the “general social conditions in each country could have a strong impact on what types of texts and media are considered beneficial.”
What’s more, Reinke noted, many nations’ research is published only in their own languages, creating additional friction because of translation requirements and a lack of clarity about the accuracy of side-by-side comparisons’ parameters.
At the worldwide level beyond Aldus’ purview in Europe, Karine Pansa, president of the International Publishers Association (IPA), has made data, its collection and coherence from market to market, a major center of her attention during her term in office.
More on academic publishing is here, more on Elsevier is here, more on industry statistics is here, more on Frankfurter Buchmesse is here, more on the Guest of Honor Slovenia program at Frankfurt this year is here, and more on questions of data, its use and place in many parts of publishing, is here.
Publishing Perspectives is the world media partner of the International Publishers Association.