Publishing’s Paper Problem and How to Future-Proof the Industry


There’s an urgent need for publishers to update legacy rights management and content creation systems, according to speakers at BISG’s “Making Information Pay” conference.

By Charlotte Abbott

Though the idea of publishing as a data-driven industry may still be anathema to its old guard, the Book Industry Study Group’s 8th annual Making Information Pay conference hammered home once again that gathering and managing the right data is critical to “future-proofing” the industry. The key is using data to improve content and product development, book discovery and rights management, as well as customer loyalty and profitable growth, said Book Industry Study Group chair Scott Lubeck in his introduction to the ten presentations packed into last Thursday morning’s meeting at the McGraw-Hill auditorium in New York.

Rights Management Systems Are Out-of-Date

The big jaw-dropper was BISG’s joint survey with the Copyright Clearance Center on the fundamental lack of rights management systems throughout the industry in the U.S. — described as “a vast problem.” The conference, which tends to focus on improving supply chain, operational and technical infrastructure, also went further than in the past in outlining how the adoption of new data-driven mechanisms will affect marketing and even editorial functions within publishers.

The crux of the industry’s rights management problem, as articulated by Heather Reid of the Copyright Clearance Center, is that while digital publishing in the new global marketplace offers new licensing opportunities not just for books but also for fragments of books, publishers are not equipped to respond promptly to rights requests. The nine publishers and six vendors in the survey said the problem is rooted in legacy rights management systems created in the 1960s. Largely made up of paper records and, in some cases, PDFs of legal contracts, these old systems fall far short of the well-structured data storage necessary for fast access and for building the automated processes needed to exploit new markets.

The paper problem is endemic, said Reid. One vendor reported that 50% of the publishers it deals with, including big ones, keep rights contracts filed on paper. Publishers themselves said that their inbound rights records were inaccessible and that outbound rights transactions take so long to process that rights seekers often give up and move on to other content providers. The sole publisher in the survey that had transitioned to a structured rights data system reported “a 100% increase in licensing revenue when we started responding faster to rights requests,” according to Reid. However, this publisher also admitted that moving to the new data system was “arduous.” To turn the industry around, Reid called for standardized terms to optimize business processes and for the alignment of rights management systems within and between publishers and vendors. To that end, BISG is crafting a taxonomy it hopes will become the foundation of data modeling for the next generation of industry rights tools.
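To make the contrast concrete, here is a minimal sketch of what a structured rights record might look like and why it changes response times. All field names and data below are hypothetical illustrations, not the BISG taxonomy or any publisher’s actual schema; the point is simply that machine-readable fields, unlike paper contracts, can answer a rights query instantly.

```python
# Hypothetical sketch: a structured rights record that can be queried
# programmatically, versus a paper contract that must be located and read.
from dataclasses import dataclass

@dataclass
class RightsRecord:
    isbn: str
    territory: str   # e.g. "US", "UK", "World" (illustrative values)
    language: str
    format: str      # e.g. "print", "ebook", "audio"
    licensable: bool

# A toy catalog standing in for a publisher's rights database.
catalog = [
    RightsRecord("978-0-000000-00-0", "US", "en", "ebook", True),
    RightsRecord("978-0-000000-00-0", "UK", "en", "ebook", False),
]

def can_license(isbn: str, territory: str, fmt: str) -> bool:
    """Answer an inbound rights query immediately, instead of after a
    manual search through filed contracts."""
    return any(
        r.licensable
        and r.isbn == isbn
        and r.territory == territory
        and r.format == fmt
        for r in catalog
    )

print(can_license("978-0-000000-00-0", "US", "ebook"))  # True
```

Once records are structured this way, the same data can drive the automated licensing processes the survey found most publishers unable to support.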

Building Efficiencies into Content Creation

“Automation,” “alignment” and “standards” were just a few of the morning’s watchwords. Keynote speaker Ken Michaels, Chief Operating Officer of the Hachette Book Group, discussed the company’s aim to “create content once for all consumption” via new “auto-sync” digital platforms and an “internal cloud infrastructure.” Reflecting a shift toward collaboration within a team culture and away from a hierarchical silo mentality, these tools have helped Hachette realign its workflow from a traditional print- and inventory-based product delivery system to an “inventory intelligence” system that responds to demand, drawing on contact management to reach influencers, marketing list management, and “event-based” campaign management.

For Bill Kasdorf of Apex Content Solutions, who reminded publishers that they “publish content, not books,” the watchword was “content chunking.” He advocated using design, semantics, workflow strategies, standards and metadata to facilitate content development, while acknowledging that repurposing content from novels, textbooks, guidebooks and manuals will lead to very different products.

Fail Forward Fast

Andrew Savikas of O’Reilly, who addressed how the tech publisher transformed itself from a print-based business model to a web-based one, introduced an edgy slogan: “Fail Forward Fast.” It underscored his emphasis on the need to “maximize learning relative to time and money: make the big mistakes early on, when you have less invested” in order to limit costly mistakes as the scale grows. While acknowledging that O’Reilly’s tech authors are particularly suited to adopting manuscript assembly software that feeds into the publisher’s system, he noted that ePub-compatible tools for research and writing aimed at novelists and screenwriters, such as Scrivener 2.0, might be worthy of pilot projects for other publishers.

Focusing on book discovery, Madi Weland Solomon, Director of Content Standards for Pearson in London, also hatched one of the day’s memorable slogans: “Only librarians want to search; readers want to find.” She focused on “smart content,” meaning content that has good metadata, carries classifications to aid discovery, and is structured for in-depth analytics, so that semantic web technologies can find it and connect it with readers. This approach requires “data-grooming,” a new skill set for the editorial process and the basis for building rules and algorithms not just for books but for “micro-content,” “learning objects,” “customized content,” “games,” etc.

Kaplan Publishing’s Brett Sandusky looked at discovery from a sales and marketing perspective. The first step, he said, is for publishers to be vigilant about the metadata that describes their content (e.g. author, title, publisher, price, BISAC category). To better understand how consumers access and use their content, publishers should also research reader-generated tags for their books on sites like GoodReads and LibraryThing, he said, and use Google Analytics to track the keywords that drive readers to their content.
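The vigilance Sandusky describes can be partly automated. The sketch below checks a title record for the core descriptive fields he names (author, title, publisher, price, BISAC category); the field names and the sample record are illustrative only, not any real ONIX or retailer feed schema.

```python
# Hypothetical sketch: flag title records missing core descriptive metadata.
# Field names are illustrative, not a real metadata-feed schema.
REQUIRED_FIELDS = ["author", "title", "publisher", "price", "bisac_category"]

def missing_metadata(record: dict) -> list:
    """Return the required fields that are absent or empty in a title record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

book = {
    "title": "Example Title",
    "author": "A. Writer",
    "publisher": "Example House",
    "price": "19.99",
    # note: no BISAC category supplied
}

print(missing_metadata(book))  # ['bisac_category']
```

Run across a catalog, a check like this surfaces the incomplete records most likely to hurt a title’s discoverability before they reach retailers.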

Internal Cultural Disconnect

The conference was just as interesting for its subtext. Almost all the speakers alluded to significant disconnects as major hurdles to overcome: between top managers and employees on the ground, between IT and operations staff and content-focused staff, and between sales/marketing and editorial. The specter of the environmental impact of the “cloud” computing on which digital publishing will depend also arose when Solomon mentioned the massive data centers that Google, Sun Microsystems and others are building on more than 600,000 acres of land around the world, at $600 million each. “Imagine the electricity it will take,” she said, eroding digital publishing’s image as a way to “go green” by reducing returns and paper waste. But that, it seems, is tomorrow’s problem.

Download the presentations here.

Charlotte Abbott is a journalist, professor and consultant specializing in e-books and digital publishing. Find her on Twitter at @charabbott, where she moderates a weekly e-publishing chat on Fridays from 4-5pm ET using the #followreader hashtag.

