
Areas of Potential Cooperation and Collaboration

In the document LIBRARIES Advances in Librarianship Vol 28.pdf (Pages 119-125)

the good of science more than strict economic rationality to ensure that highly specialized, little-used research material continues to be published and collected.

The argument against the Big Deal is that the deal protects “lesser-quality” journals. This goes to the heart of the central questions in collection development. Who is to determine what content is worth supporting, and should librarians give users what they want, or what the librarian thinks they need?

Librarians and publishers can work together to ensure that the content being published, and the methods used to acquire access to it, evolve to meet the dynamic needs of the users we both serve.

B. New Pricing Models

For some years, many librarians have been frustrated with the subscription model based on historical holdings. Many libraries have needed to reduce their collections by canceling subscriptions to journals that they perceive as only marginally adding value for their users.

With the advent of online publishing, the “Big Deal” model (with restrictions on subscription cancellations, in return for access to more journals for more users, and caps on price increases for the term of the license) has achieved significant uptake. This model was pioneered by libraries, particularly OhioLINK, the consortium of libraries in the state of Ohio.

However, by accepting the Big Deal, librarians feel that they are “locked in” to levels of spending and to subscriptions, and they want to reassert more control over the collections (Frazier, 2001).

“Usage-based pricing” is an alternative model that has been much discussed in the past few years. In this model, libraries would pay based on the articles that their patrons actually use, rather than based on subscriptions to journals whose articles might or might not be used by the library’s patrons.

However, so far there is not an accepted usage-based model with which librarians are comfortable; the amount that the library might be charged is too unpredictable. And meanwhile models such as the Big Deal have increased usage dramatically, and driven down the per-use cost that large libraries pay via subscriptions (Stange, 2003, slide 12). Wiley and other publishers are currently working closely with libraries to find a mutually agreeable method to add usage as a metric to the cost/value equation.
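The tension between the two models can be illustrated with simple arithmetic. The sketch below uses entirely hypothetical figures (the license fee, per-use price, and usage counts are illustrative, not drawn from this chapter) to show why a flat Big Deal fee drives per-use cost down as usage grows, while pure usage-based charges scale unpredictably with use:

```python
def per_use_cost(annual_fee, uses):
    """Effective cost per article use under a flat subscription fee."""
    return annual_fee / uses

# Hypothetical Big Deal license: a fixed annual fee, so heavier usage
# only lowers the effective unit cost the library pays.
big_deal_fee = 250_000.0
for uses in (50_000, 100_000, 200_000):
    print(f"Big Deal, {uses} uses: {per_use_cost(big_deal_fee, uses):.2f} per use")

# Hypothetical usage-based pricing: the same growth in usage raises
# the total bill, which is the unpredictability librarians cite.
price_per_use = 5.0
for uses in (50_000, 100_000, 200_000):
    print(f"Usage-based, {uses} uses: total {uses * price_per_use:,.0f}")
```

Under the flat fee, a doubling of use halves the unit cost; under usage-based pricing, the same doubling doubles the library's spend.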

Pricing is not an area that lends itself to formal standardization.

However, market forces have a way of evolving towards de facto standardized forms. Publishers and librarians will continue to experiment with new pricing models, and eventually the best approaches will survive. In the search for new and sustainable business models, controlling the risks of experiments is key.

In pricing experiments, both publishers and libraries will want protection from unforeseen financial consequences. For the next few years, such risk-controlled experiments will shed important light on the results of different pricing schemes. But it will likely take several more years before the experiments can be concluded, the results digested, and new models added to, or substituted for, the current ones.

C. Archiving Digital Content

Libraries have been de facto redundant archives of print copies. In the electronic world, in contrast, libraries normally access content from publishers’ servers. Who is responsible for ensuring the long-term preservation of the electronic resources that scientists use as part of their research? If libraries have carried that responsibility in the print world, should libraries continue with the responsibility in the electronic world even though libraries are not hosting the electronic content?

Or should publishers take on the responsibility? Libraries have expressed reservations that publishers cannot be relied upon to ensure long-term preservation of e-content: What if the publisher goes out of business? And if the content ceases to be a financial asset, will publishers be willing to continue bearing the costs of preserving it? Publishers have tried to reassure librarians that they will indeed preserve content, but librarians have not responded positively to what is essentially a “Trust us” argument by publishers.

As a result, publishers have agreed with librarians that long-term preservation of electronic content is important, and that solutions need to be found. At present there are a number of electronic archive projects underway that have been initiated by libraries and that include publishers’ participation:

JSTOR (http://www.jstor.org/about/earchive.html) and LOCKSS (http://www.lockss.org/), both with Mellon funding; the California Digital Library (http://www.cdlib.org/programs/digital_preservation.html); the British Library (http://www.bl.uk); and the Koninklijke Bibliotheek, the National Library of the Netherlands (http://www.kb.nl/).

There are a number of issues yet to be solved:

† Standard formats and metadata packaging for sending content to the archive.

† Agreement on what must be archived, vs. what is optional (for example, must non-article items such as meeting announcements and advertisements be e-archived? What about user interface functionality such as links, search options, and personalization?).

† How to “future-proof” the data so that today’s formats will be accessible and readable by users in the future?

† How to handle non-standard formats that often occur as supplementary material, e.g., audio and video files?

† What restrictions are there on access to the archived content?

† At what point should there be no restrictions on access?

† Who pays the costs of the e-archive?

† Can the archive exploit the archived content in ways to recover costs, i.e., to make money?

† Who “owns” the archive, and the content in it?

† Who has governance over the policies and practices of the archive?

These are difficult questions, but libraries and publishers are on the right track. By working together, step by step, they will answer them.

The Harvard University Library provides a good summary of electronic journal archiving issues (Digital Library Federation, 2003).
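The first open issue above, standard formats and metadata packaging, is commonly addressed with Dublin Core records wrapped in XML, as the OAI-PMH community does. As a minimal sketch (the article title, author, and identifier below are hypothetical; the two namespace URIs are the real Dublin Core and OAI-PMH ones), a record being packaged for deposit in an e-archive might look like this:

```python
import xml.etree.ElementTree as ET

# Real namespace URIs from the Dublin Core and OAI-PMH specifications.
DC = "http://purl.org/dc/elements/1.1/"
OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
ET.register_namespace("dc", DC)
ET.register_namespace("oai_dc", OAI_DC)

def dc_record(title, creator, identifier, media_format):
    """Package one item's descriptive metadata as an oai_dc XML record."""
    root = ET.Element(f"{{{OAI_DC}}}dc")
    for tag, value in (("title", title), ("creator", creator),
                       ("identifier", identifier), ("format", media_format)):
        ET.SubElement(root, f"{{{DC}}}{tag}").text = value
    return ET.tostring(root, encoding="unicode")

# Hypothetical example record (10.1000 is the DOI Foundation's example prefix).
xml = dc_record("An Example Article", "Doe, J.",
                "doi:10.1000/example", "application/pdf")
print(xml)
```

A shared packaging convention like this is what lets many publishers send content to one archive without per-publisher ingest code.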

Over the past 15 years, one thing has become clear: what seems intractable today will be resolved tomorrow. Stakeholders in electronic publishing have demonstrated their ability to work together to continue the evolution of electronic publishing. There is more than enough forward momentum to overcome obstacles. The more powerful user functionality of electronic publishing makes electronic content the preferred medium for scientists, so those serving the scientists (librarians and publishers) are strongly motivated to solve the problems that arise.

D. New Publishing Models

The journal is the package in which scientific research is disseminated; journal articles are the components of the package. Journals present a collection of articles that are within the journal’s defined scope and focus, and that are published with the imprimatur of the journals’ Editorial Boards, who manage the all-important peer review process. This model has existed for hundreds of years, and has been a key contributor to the advancement of science. The system has provided a validation process to help scientists sift through the overload of new information, including metrics such as the Impact Factor to provide at least some degree of quantitative measurement of value. Libraries have been the gateway to this content: organizing, preserving, and helping users get to it. New publishing models challenge not only the scientific communication method currently employed by publishers and authors, but also libraries, which must be prepared to evolve if access to information is to remain efficient.

Electronic publishing presents new possibilities. Some might ask whether it is necessary to subscribe to an entire journal, when online searching tools can bring you to the precise articles that you want, no matter what journals they are in, and you can purchase access one article at a time.

They ask whether a journal’s formal peer review process is necessary, when authors can post their own articles, or send them to servers that collect articles in a particular discipline (e.g., the physics preprint server arXiv; see http://www.arxiv.org) and anyone in the community can read and openly comment on these articles. Can technology horsepower and tools handle all of the tasks that journals and publishers perform, at less cost? Can libraries manage to find the sources their users demand when users are dispersed so broadly? Will these sources be preserved, with historical access available?

Publishers must be prepared to modify their own models, and to compete with new models; and may the best models and the best service providers win. That is the open market of ideas, models, and competition. When we think of potential alternative models for the dissemination of scientific research, there are several possibilities visible on the horizon. For each of these, there is the opportunity for publishers and libraries to work through the issues together.

1. Self-publishing

Authors can post their own articles. By adhering to the Open Archives Initiative Protocol for Metadata Harvesting (http://www.openarchives.org/OAI/openarchivesprotocol.html), the articles are discoverable via their metadata. The full text can be indexed by Google and other search engines.

By using the DOI (http://www.doi.org), authors can ensure that their articles are persistently available on the Web. By participating in CrossRef (http://www.crossref.org), authors can take advantage of reference linking, forward linking, and other features such as cross-publisher full-text searching. One can imagine standards being established so that a loose federation (e.g., of scientific societies) could organize all of these self-published articles. How would peer review, or some other form of validation, work? Either there could be zero peer review, or some new form of organized online peer review could be established, perhaps akin to the readers’ reviews in Amazon.com.

We do not know yet whether authors will be willing to let go of the validation that comes by being associated with a respected journal.
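The harvesting protocol that makes self-published articles discoverable is deliberately simple: an OAI-PMH request is an ordinary HTTP GET with a `verb` parameter. A minimal sketch of constructing such a request (the repository base URL below is hypothetical; the `ListRecords` verb and `metadataPrefix=oai_dc` argument come from the protocol itself, which requires every repository to support Dublin Core):

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **kwargs):
    """Build an OAI-PMH request URL; the protocol is plain HTTP GET."""
    params = {"verb": verb, **kwargs}
    return f"{base_url}?{urlencode(params)}"

# Hypothetical repository endpoint; a harvester would fetch this URL
# and parse the XML response listing records and their metadata.
url = oai_request("https://repository.example.org/oai",
                  "ListRecords", metadataPrefix="oai_dc")
print(url)
```

Because every compliant repository answers the same small set of verbs, a single harvester can aggregate metadata from many independent self-publishing sites.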

2. Institutional Repositories

Authors can look to their institutions to post their articles. As Crow (2002) describes well, many universities are working to organize the intellectual output of their populations, and to make it available on the Web across universities. It is not yet clear whether faculty will be willing to adhere to standards set by their universities and to hand over the management of their articles to the school, whether universities will be able to support the costs of institutional repositories, or whether commercial firms will participate in such a network.

3. Open Access

New online journals have started up that are available for free to users. Authors pay a fee to publish. This is a conventional journal publishing model, but with the economic burden shifted from the readers (or libraries, as the readers’ proxies) to the authors (who in many cases are the same people as the readers) or their proxies, the institutions that employ them. Initially, the author fees were in the $500 – 1500 range (BioMedCentral and Public Library of Science, respectively). However, as stated above (see Section II.C), the per-article publishing costs exceed that amount, and it is not clear whether authors will support this model, so the financial sustainability of Open Access publishing is not yet confirmed. Public Library of Science and BioMedCentral are the leading Open Access publishers exploring this approach (http://www.publiclibraryofscience.org and http://www.biomedcentral.com). By mid-2004, author fees from $3000 (Springer’s “Open Choice”) to $6000 (American Society of Human Genetics) had been announced. Meanwhile, government and private research funders had begun to ask for Open Access to the articles they funded.

In conclusion, these different future models are not mutually exclusive. We will likely see a mixture of models coexisting, serving different situations and needs.

E. Distributed Aggregation: CrossRef

One of the weaknesses in the present system is that each publisher is a silo of content, whereas users identify with journals, not with publishers. CrossRef is an attempt to bridge the silos via reference linking. CrossRef is a non-profit member organization including 300 scholarly publishers; see http://www.crossref.org. CrossRef was launched by a small group of leading journal publishers with the purpose of establishing a mechanism to link from references to the cited articles. This effort has been very successful. Using the DOI, CrossRef publishers now make millions of links each year, which has added an important new functionality to online journals (see http://www.crossref.org/01company/00introduction.html and click on Annual Report, letter from the executive director and the chairman).

CrossRef’s statistics as of early 2004 show that there are 10.5 million articles and 9500 journals in this metadata database; about 20 million online links are embedded in the publishers’ content per year, with over 5 million real-time end-user links (“DOI resolutions”) per month (CrossRef, 2004).

Based on this success, CrossRef now plans to add further functionality, such as forward linking, and is discussing with librarians and scientists what further functionality it would be helpful for CrossRef to provide.
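The persistent linking that underpins CrossRef rests on the DOI’s simple “prefix/suffix” syntax and a central resolver. A small sketch (the helper function is hypothetical, and the DOI shown uses 10.1000, the DOI Foundation’s own example prefix) of how a reference link is formed:

```python
def doi_url(doi):
    """Form the persistent Web link for a DOI via the global resolver.

    A DOI is 'prefix/suffix': the prefix always begins with '10.' and
    identifies the registrant. The resolver redirects each request to
    the publisher's current landing page, so links survive site moves.
    """
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10.") or not suffix:
        raise ValueError(f"not a valid DOI: {doi!r}")
    return f"https://doi.org/{doi}"

# Hypothetical example DOI under the Foundation's example prefix.
print(doi_url("10.1000/example"))
```

Each “DOI resolution” counted in the statistics above is one such redirect: a reader clicking a reference link and being forwarded to the cited article, whichever publisher hosts it.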

Having been developed by publishers, CrossRef initially met with some questions from librarians. This is a good example of how working together earlier could have reduced the potential for misunderstanding. However, four years later, CrossRef is now being asked by some librarians to broaden its scope much more widely. In meetings such as CrossRef’s Library Advisory Board, as well as in publishers’ own library advisory boards, librarians look to CrossRef as the logical entity to eliminate barriers and to improve users’ experience.

For example, each publisher’s website is different, with different interfaces, different log-ins, different navigation, and different transactional protocols. All of these differences detract from the users’ experience.

Librarians ask: why can’t publishers establish one interface? Wouldn’t CrossRef be the perfect forum?

Publishers are conflicted about this. Publishers view their online publishing engines as sources of competitive advantage. And publishers want to be differentiated from their competitors. On the other hand, there are undeniably inefficiencies for the users. How to reconcile these differences?

As usual, going step by step is the best way forward. Having achieved cross-publisher reference linking and forward linking, CrossRef is beginning to seriously consider further steps. Librarians are asking for full-text searching across all CrossRef member publishers’ content, a “one-stop shop” for interlibrary loan and document delivery, cross-platform authentication of users, more integration of discovery and other tools and content, and fewer barriers to access. Publishers are viewing these requests as opportunities to enhance their services to libraries and users.
