Elsevier’s Scopus AI Tool Rolls Out for Database Customers


With its Scopus AI search now offered to Scopus customers, Elsevier is stressing its emphasis on responsible use of artificial intelligence.

Image – Getty iStockphoto: Peshkova

By Porter Anderson, Editor-in-Chief | @Porter_Anderson

‘Powered by Responsible AI’
You may recall that we reported Elsevier’s release of its “Scopus AI” search tool for scholarly testing in August. Now that the company is offering the program to customers of its existing database of abstracts and citations, Scopus, it’s interesting to note how many times the word responsible is used. It’s one of the clearest signals yet that mistrust of, and challenges to, artificial intelligence capabilities are far from behind us, and that taking such a tool to the academic and research community requires aggressive statements of awareness on this point.

Brought into public view about a month ago, Elsevier’s fully released Scopus AI, now in selling mode, is described as “a generative AI product to help researchers and research institutions get fast and accurate summaries and research insights that support collaboration and societal impact.”

The new product is essentially housed in that database and responsive to it as an extensive search engine. It’s built to provide users with what Elsevier says testers call “foundational and influential papers that enable researchers to rapidly pinpoint seminal works, navigating academic progress and impact with precision and ease,” among other features that include expanded and enhanced summaries; a search for academic experts; and enhanced breadth of research.

Scopus AI, Elsevier says, is based on “trusted content from more than 27,000 academic journals” produced by “more than 7,000 publishers worldwide, with more than 1.8 billion citations.” It’s presented as including more than 17 million author profiles, and its content is said to be vetted by an independent board of scientists and librarians.

Maxim Khan

Soft-launched in August, the program is reported to have been tested by thousands of researchers in many parts of the world, something that Maxim Khan, senior vice-president of analytics products and data platform at Elsevier, says the company appreciates.

“Scopus AI is built on trusted knowledge and data,” Khan says in a prepared comment, “that will help accelerate understanding of new research topics, provide deeper research insights, identify relevant research and experts in a particular field, all with the aim of paving the way for academic success.”

‘Clear and Verifiable References to Document Abstracts’

Image: Elsevier, Scopus

In the above promotional graphic, the company lists five points of responsibility it sees as important in AI principles.

Perhaps more useful are these three points listed in further text about the product. Many in world publishing will spot in the second point below—the italics are ours—the answer to what might be the primary question in terms of what “responsible” means:

  • Clear and verifiable references to document abstracts used in summarizations
  • Legal and technology protections to ensure zero data exchange or use of Elsevier data to train OpenAI’s public model
  • Adherence to European GDPR that guarantees user privacy and avoids unnecessary data retention

While it may be reassuring to see such an open and pointed statement on the intent to shield Scopus content from OpenAI, there are other companies’ large language models, of course, and the European Union’s legislation relative to AI hasn’t yet been implemented. Protection of that content may well need to go far beyond the data-training of OpenAI’s LLM.

A promising element of the company’s material about the product is the source transparency built into it, in theory requiring the system to provide a reference for everything it finds and hopefully making hallucinations less likely. Nevertheless, Elsevier’s text about the product is clear: “It’s currently impossible to entirely eliminate inaccurate responses.” The system is being refined and developed, the company says, to keep reducing the chance of bad answers.

Elsevier, in its promotional approaches to the news media, likes to say that it has “used AI and machine learning responsibly in our products” for more than 10 years, combining those technologies with peer-reviewed content, data sets, and analytics. Not least because the company’s reputation could be badly damaged by a drastic error in AI usage of some kind, this is a reassuring and helpful element of its pitch for its new product.

Time, of course, provides tests that speedy search results don’t, and over time, the academic publishing world will come to know better whether Scopus AI and other products of this kind are the very helpful elements of an evolving industry that many hope they are.


More from Publishing Perspectives on artificial intelligence and its debate relative to publishing is here, more on digital publishing is here, more from Publishing Perspectives on scholarly publishing is here, and more on Elsevier is here.

About the Author

Porter Anderson


Porter Anderson has been named International Trade Press Journalist of the Year in London Book Fair's International Excellence Awards. He is Editor-in-Chief of Publishing Perspectives. He formerly was Associate Editor for The FutureBook at London's The Bookseller. Anderson was for more than a decade a senior producer and anchor with CNN.com, CNN International, and CNN USA. As an arts critic (Fellow, National Critics Institute), he was with The Village Voice, the Dallas Times Herald, and the Tampa Tribune, now the Tampa Bay Times. He co-founded The Hot Sheet, a newsletter for authors, which now is owned and operated by Jane Friedman.