
Report from the Digital Public Library of America Midwest

Two years after the initial meeting for the Digital Public Library of America, another major planning and update meeting took place in Chicago at DPLA Midwest. At this meeting the steering committee handed the project over to the inaugural board, and those who have been working on the project reviewed what has happened over the past few years and the ambitious timetable for a public launch in April 2013.

In August I wrote about the DPLA and had many unanswered questions. Luckily I had the opportunity to attend the meeting and participate heavily in the backchannel (both virtual and physical). This post is a report of what happened at the general meeting (I was not able to attend the workstream meetings the day before). It is a followup to my last post about the Digital Public Library of America: then I felt like an observer, but the great thing about this project is how easy it is to become a participant.

Looking Back and Ahead

The day started with a welcome from John Palfrey, who reported that through the livestream and mailing lists there were over a thousand active participants in the process. Two years ago the project seemed to him (and still does) “completely ambitious and almost crazy,” but it is actually working out. He emphasized that everything is still “wet clay” and a participatory process, but everything is headed toward the public launch in April 2013, with an initial version of the service and a fair amount of content available. We will come back a bit later to exactly what that content is and from what sources it will come.

In this welcome, Palfrey introduced several themes that the day revolved around: that the project is still moldable despite the structure that seems to be in place (the “wet clay”), and that it is still completely participatory even though the project will recruit an Executive Director and has a new board. One of the roles of the board will be to ensure that participation remains broad. The credentials of the board and the steering committee are impressive, but they cannot get the project going without a lot of additional support, both financial and otherwise.

The rest of the day was organized around supporting the DPLA, reports from several of the “hubs” that will make up the first part of the content available, the inaugural board, and the technical and platform components of the DPLA. The complete day, including tweets and photos, was captured in a live blog. While much of interest took place that day, I want to focus on the content and the technical implementation as described during the day.

Content: What will be in the DPLA?

Emily Gore started in September of this year as the Director of Content, and has been working since then to set the plans in motion for the initial content in the DPLA. She has been working with seven existing state or regional digital libraries, the so-called “Service Hubs” and “Content Hubs,” to begin aggregating the metadata that will be harvested for the DPLA and to get people to the content. The April 2013 launch will feature exhibits showcasing some of this content; topics include civil rights, prohibition, Native Americans, and a joint presentation with Europeana about immigration.

The idea of these “hubs” is that there are already many large digital libraries with material, staff, and expertise available; as Gore put it, we all have our metadata geeks already who love massaging metadata to make it work together. Dan Cohen (director of the Roy Rosenzweig Center for History and New Media at George Mason University) gave the analogy in his blog of local institutions having ponds of content, which feed into the lakes of the service hubs, and finally into the ocean of the DPLA. The service hubs will offer a full menu of standardized digital services to local institutions, including digitization, metadata consultation, data aggregation, storage services, community outreach, and exhibit building. These collaborations are crucial for several reasons. First, they mean that great content that is already available will finally be widely accessible to the country at large: it is on the web, but often not findable or portable. Second, regional hubs will be able to work with their regions more effectively than any central DPLA staff, which simply will not have the people to maintain one-to-one relationships with every institution that has content. The pilot service hubs are Mountain West, Massachusetts, the Digital Library of Georgia, Kentucky, Minnesota, Oregon, and South Carolina. The hubs project has a two-year timeline and $1 million in funding, but by next April the hubs will prepare metadata and content previews for harvest, harvest existing metadata to make it available for launch, and develop exhibitions. After that, the project will move on to new digitization and metadata, aggregation, new services, new partners, and targeted community engagement.
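
The DPLA had not, as of the meeting, published a harvesting specification, but OAI-PMH is the protocol most state and regional digital libraries already expose, so the following is only a plausible sketch (not the DPLA's actual method) of how a hub's records might be pulled together for aggregation. The endpoint URL is a placeholder, and the code assumes simple Dublin Core records.

```python
# A minimal, hypothetical sketch of harvesting a hub's metadata over OAI-PMH.
# The endpoint is a placeholder; real hubs publish their own base URLs.
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield (identifier, title) pairs from ListRecords responses,
    following resumption tokens until the repository has no more records."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        with urlopen(base_url + "?" + urlencode(params)) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI_NS + "record"):
            header = record.find(OAI_NS + "header")
            identifier = header.findtext(OAI_NS + "identifier")
            title = record.findtext(".//" + DC_NS + "title")
            yield identifier, title
        token = tree.findtext(".//" + OAI_NS + "resumptionToken")
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

# Example (placeholder URL):
# for identifier, title in harvest("http://example.org/oai"):
#     print(identifier, title)
```

In practice each hub would normalize and clean these records before anything reaches the DPLA ingester, so a sketch like this covers only the first step of the pipeline.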

Representatives from two of the service hubs spoke about their projects and collections, which gave the best view into what types of content we can expect to see next April. Mary Molinaro from Kentucky gave a presentation called “Kentucky Digital Library: More than just tobacco, bourbon, and horse racing.” She described their earliest digitization efforts as “very boutique–every pixel was perfect,” but they weren’t cost-effective or scalable. They then moved to mass digitization, automating everything they could and tweaking workflows for volume. Their developers met with developers from Florida and ended up using DAITSS and Blacklight to manage the repository. They are now at the point where they were able to scan 300,000 pages in the last year, and they are reaching out to other libraries and archives around the state to offer them “the on-ramp to the DPLA.” She also highlighted their oral history search and transcription work with the Oral History Metadata Synchronizer and showed some historical newspapers.

Jim Butler from the Minnesota Digital Library spoke about the content in that collection from an educational and outreach point of view. They do a lot of outreach to local historical societies, libraries, and other cultural organizations to find out what collections they have and digitize them, which is the model that all the service hubs will follow. One of the important projects he highlighted was an effort to create curricular guides to facilitate educator use of the material; the example he showed was A Foot in Two Worlds: Indian Boarding Schools in Minnesota, which has modules to be used in K-12 education. He showed many other examples of material that would be available through the DPLA, including Native American history and cultural materials and images of small-town life in 19th and 20th century Minnesota. Their next steps are to work on state- and region-wide digital library metadata aggregation, major new digitization efforts, and community-sourced digital documentation, particularly self-documentation by Somali and Hmong communities.

Followup comments during the question portion of these presentations emphasized that the point of having big pockets of content is to work with the smaller pockets of content. This is a pilot business model, a test case to see how aggregating all these types of content together actually works. It is important to remember that for now the DPLA is not ingesting any content, only metadata. All the content will remain in the repositories at each hub.

An additional component is that all the metadata in the DPLA will be licensed under a CC0 (public domain) license only. This will set the tone that the DPLA is for sharing and reusing metadata and content; it is owned by everyone. This generated some discussion over lunch and via Twitter about what that would actually mean for libraries and whether it would cause tension to release material under a public domain license that for-profit entities could repackage and sell back to libraries and schools. Most people I spoke to felt this was a risk worth taking. Of course, future content in the DPLA will be there under whatever copyright or license terms the rightsholder allows. Presumably most if not all of it will be material in the public domain, but it was suggested, for instance, that authors could bequeath their copyrights to the DPLA or set up a public domain license through something like unglue.it. Either way, libraries and educators should share all the materials they create around DPLA content, and doing so will mean less duplicated effort.

Technology: How will the DPLA work?

Jeff Licht, a member of the technical development advisory board, spoke about the technical side of the DPLA. The architecture for the system (PDF overview) will have at its core a metadata repository aggregated from the sources described above. An ingester will bring in the metadata in usable form from the service hubs, which will have already cleaned up the data, and an API will expose the content and allow access by front ends or apps. There will also be functions to export the metadata for analysis that cannot easily be done through the API. The metadata schema (PDF) types they will collect are item, collection, contributor, and event.
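
Since the API was still being defined at the time, the following is only a speculative sketch of what querying such a metadata API might look like from a developer's point of view; the endpoint, query parameters, and response field names are invented for illustration and are not taken from any DPLA specification.

```python
# Hypothetical sketch of a client calling a metadata search API like the one
# described above. Endpoint, parameters, and field names are assumptions.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "http://example.org/api/items"   # placeholder, not the real DPLA endpoint

def search_items(query, page=1, per_page=10):
    """Return a list of simplified item records matching a keyword query."""
    params = urlencode({"q": query, "page": page, "per_page": per_page})
    with urlopen(API_BASE + "?" + params) as response:
        payload = json.load(response)
    # Each record carries only metadata plus a pointer back to the hub that
    # holds the actual content, since the DPLA itself stores no content.
    return [
        {
            "title": item.get("title"),
            "contributor": item.get("contributor"),
            "source_url": item.get("source_url"),
        }
        for item in payload.get("items", [])
    ]

# for record in search_items("prohibition"):
#     print(record["title"], "->", record["source_url"])
```

The point of pairing CC0 metadata with an open API is that a local catalog, a mobile app, or the contracted front end could all sit on top of the same kind of call, which is what the next point of discussion addressed.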

One of the important points that raised a lot of discussion was that while they have contracted with iFactory to have a front end available by April, this front end doesn’t have more priority or access to the API than something developed by someone else. In fact, while someone could go to dp.la to access content, the planners right now see the DPLA “brand” as subordinate to other points of access such as local public libraries or apps using the content. Again, the CC0 license makes this possible.

The initial front end prototype is due in December, and the new API is due in early November for the Appfest (see below for details). There will be an iterative process between the API and the front end from December to March before the April launch, with of course lots of other technical details to sort out. One of the things they need to work on is a good method for sharing contributed modules and code, which hopefully will be settled in the next few weeks.

Anyone can participate in this process. You can follow the Dev Portal on the DPLA wiki and the Technical Aspects workstream to participate in decision making. Attending the Appfest hackathon at the Chattanooga Public Library on November 8 and 9 will be a great way to spend time with a group creating an application that will use the metadata available from the hubs (the new API will be completed before the Appfest). This is the time to ask questions and make sure that nothing is being overlooked.

Conclusion: Looking ahead to April 2013

John Palfrey closed the day by reminding everyone that April is just the start, and not to be disappointed by what they see then. If April delivers everything promised during the DPLA Midwest meeting, it will be a remarkable achievement, but as Doron Weber from the Sloan Foundation pointed out, the DPLA has so far met every one of its milestones on time and on budget.

I found the meeting inspirational about the potential for libraries to cross boundaries and build exciting new collections. I still have many unanswered questions, but as everyone throughout the day emphasized, this will be a platform on which we can build and imagine.


PeerJ: Could it Transform Open Access Publishing?

Open access publication makes access to research free for the end reader, but in many fields it is not free for the author of the article. When I told a friend in a scientific field I was working on this article, he replied “Open access is something you can only do if you have a grant.” PeerJ, a scholarly publishing venture that started up over the summer, aims to change this and make open access publication much easier for everyone involved.

While the first publication isn’t expected until December, in this post I want to examine in greater detail the variation on the “gold” open-access business model that PeerJ states will make it financially viable 1, and the open peer review that will drive it. Both of these models are still very new in the world of scholarly publishing, and require new mindsets for everyone involved. Because PeerJ comes out of funding and leadership from Silicon Valley, it can more easily break from traditional scholarly publishing and experiment with innovative practices. 2

PeerJ Basics

PeerJ is a platform that will host a scholarly journal called PeerJ and a pre-print server (similar to arXiv) for biological and medical scientific research. Its founders are Peter Binfield (formerly of PLoS ONE) and Jason Hoyt (formerly of Mendeley), both of whom are familiar with disruptive models in academic publishing. The “J” in the title stands for Journal, but Jason Hoyt explains on the PeerJ blog that while the journal as such is no longer a necessary model for publication, we still hold on to it: “The journal is dead, but it’s nice to hold on to it for a little while.” 3 The project launched in June of this year, and while no major updates have been posted on the PeerJ website, they seem to be moving toward their goal of publishing in late 2012.

To submit a paper for consideration in PeerJ, authors must buy a “lifetime membership” starting at $99. (You can submit a paper without paying, but it costs more in the end to publish it.) This allows the author to publish one paper in the journal per year. The lifetime membership remains valid only as long as you meet certain participation requirements, which at minimum means reviewing at least one article a year; reviewing in this case can mean as little as posting a comment on a published article. Without that, the author might have to pay the $99 fee again (though as yet it is unclear how strictly PeerJ will enforce this rule). The idea behind this is to “incentivize” community participation, a practice that has met with limited success in other arenas. Each author on a paper, up to 12 authors, must pay the fee before the article can be published. The Scholarly Kitchen blog did some math and determined that for most lab setups, publication fees would come to about $1,124 4, which is comparable to other similar open access journals. Of course, some of those researchers wouldn’t have to pay the fee again; for others, it might have to be paid again if they are unable to review other articles.
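
To make the fee structure concrete, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that every author buys only the entry-level $99 membership mentioned above (higher tiers allowing more papers per year are not modeled), that no author already holds a membership, and it compares the result to the roughly $1,124 per-article figure cited from The Scholarly Kitchen.

```python
# Rough illustration of PeerJ's up-front membership model versus a per-article fee.
# Only the $99 entry-level tier is modeled; higher tiers are ignored here.
ENTRY_MEMBERSHIP = 99     # one publication per year, per author
PER_ARTICLE_FEE = 1124    # the per-article figure cited above for comparable journals

def first_paper_cost(num_authors):
    """Total cost of a first paper if every author buys the $99 tier.
    Only the first 12 authors need memberships, per the rule described above."""
    return min(num_authors, 12) * ENTRY_MEMBERSHIP

for authors in (1, 3, 6, 12):
    print(authors, "authors:", first_paper_cost(authors), "vs. one article fee of", PER_ARTICLE_FEE)
```

Even in the worst case of a twelve-author paper with all-new memberships, the up-front cost lands in the same range as a single per-article charge, and later papers by those authors cost nothing extra as long as they keep meeting the review-participation requirement.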

Peer Review: Should it be open?

PeerJ, as the name and the lifetime membership model imply, will certainly be peer-reviewed. But, keeping with its innovative practices, it will use open peer review, a relatively new model. Peter Binfield explained PeerJ’s thinking behind open peer review in this interview.

…we believe in open peer review. That means, first, reviewer names are revealed to authors, and second, that the history of the peer review process is made public upon publication. However, we are also aware that this is a new concept. Therefore, we are initially going to encourage, but not require, open peer review. Specifically, we will be adopting a policy similar to The EMBO Journal: reviewers will be permitted to reveal their identities to authors, and authors will be given the choice of placing the peer review and revision history online when they are published. In the case of EMBO, the uptake by authors for this latter aspect has been greater than 90%, so we expect it to be well received. 5

In single-blind peer review, the reviewers know the name of the author(s) of the article, but the author does not know who reviewed it. The reviewers can write whatever comments they want without the author being able to communicate with them. For obvious reasons, this lends itself to abuse: reviewers might reject articles by people they do not know or like, or tend to accept articles from people they do like. 6 Even people who are trying to be fair can accidentally fall prey to bias when they know the names of the submitters.

Double-blind peer review in theory takes away the ability of reviewers to abuse the system. A link that has been passed around library conference planning circles in the past few weeks is JSConf EU 2012, which managed to improve its ratio of female presenters by going to a double-blind system. Double blind is the gold standard of peer review for many scholarly journals. Of course, it is not a perfect system either. It can be hard to obscure the identity of a researcher in a small field in which everyone is working on unique topics. It is also a much lengthier process, with more steps involved in the review. For that reason, it is less than ideal for breaking medical or technology research that needs to be made public as soon as possible.

In open peer review, the reviewers and the authors are known to each other. Allowing direct communication between reviewer and researcher speeds up the process of revisions and allows for greater clarity 7. While open peer review does not negatively affect the quality of the reviews or the articles, it does make it more difficult to find qualified reviewers to participate, and it might make a less well-known reviewer more likely to accept the work of a senior colleague or well-known lab. 8

Given the experience of JSConf and a great deal of anecdotal evidence from women in technical fields, it seems likely that open peer review is open to the same potential abuse as single-blind review. While open peer review might allow a rejected author to challenge unfair rejections, this would require that the rejected author feel empowered enough in that community to speak up. Junior scholars who know they have been rejected by senior colleagues may not want to cause a scene that could affect future employment or publication opportunities. On the other hand, if they can get useful feedback directly from respected senior colleagues, that could make all the difference in crafting a stronger article and going forward with a research agenda. Therein lies the dilemma of open peer review.

Who pays for open access?

A related problem for junior scholars exists in open access funding models, at least in STEM publishing. As open access stands now, there are a few different models that are still being fleshed out. Green open access is free to the author and free to the reader: the author deposits a copy of the work in a repository, which is usually funded by grants, institutions, or scholarly societies. Gold open access is free to the end reader but carries a publication fee charged to the author(s).

This situation is very confusing for researchers, since when they are confronted with a gold open access journal they have to be sure the journal is legitimate (Jeffrey Beall has a list of predatory open access journals to aid in this) as well as secure funding for publication. While there are many schemes in place for paying publication fees, there are no well-defined practices that demonstrate long-term viability. Often the fees are covered by grants for the research, but not always. The UK government recently approved a report suggesting that issuing “block grants” to institutions to pay these fees would ultimately cost less due to reduced library subscription fees. As one article suggests, “block grants” and other funding strategies are unlikely to be advantageous to junior scholars or those in more marginal fields 9. A large research grant for millions of dollars with a relatively small line item for publication fees for a well-known PI is one thing; what about the junior humanities scholar who has to scramble for a research stipend of a few thousand dollars? If an institution only gets so much money for publication fees, who gets the money?

By offering a $99 lifetime membership as the lowest level of publication, PeerJ offers hope to the junior scholar or graduate student who wants to pursue projects alone or with a few partners without worrying about how to pay for open access publication. Institutions could more readily afford to pay even $250 a year for highly productive researchers who are not doing peer review than $1,000+ publication fees for several articles a year. As noted above, some are skeptical that PeerJ can afford to publish at those rates, but if it is possible, it would help make open access more fair and equitable for everyone.

Conclusion

Open access with a low cost paid up front could be very advantageous to researchers and institutional bottom lines, but only if the quality of the articles, peer reviews, and science is very good. It could provide a social model for publication that takes advantage of the web and the network effect for high quality reviewing and dissemination of information, but only if enough people participate. The network effect that made Wikipedia (for example) so successful relies on a high level of participation and engagement very early on [Davis]. A community has to build around the idea of PeerJ.

Taking almost the opposite approach, but looking to achieve the same effect, the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) announced this past week that after years of negotiations it is set to convert publishing in that field to open access starting in 2014. 10 This means that researchers (and their labs) would not have to do anything special to publish open access, and would do so by default in the twelve journals in which most particle physics articles are published. The fees for publication will be paid upfront by libraries and funding agencies.

So is it better to start a whole new platform, or to work within the existing system to create open access? If open (and, through a commenting system, ongoing) peer review makes for a lively and engaging network, and low-cost open access makes publication cheaper, then PeerJ could accomplish something extraordinary in scholarly publishing. Until then, it is encouraging that organizations are working from both sides.

  1. Brantley, Peter. “Scholarly Publishing 2012: Meet PeerJ.” PublishersWeekly.com, June 12, 2012. http://www.publishersweekly.com/pw/by-topic/digital/content-and-e-books/article/52512-scholarly-publishing-2012-meet-peerj.html.
  2. Davis, Phil. “PeerJ: Silicon Valley Culture Enters Academic Publishing.” The Scholarly Kitchen, June 14, 2012. http://scholarlykitchen.sspnet.org/2012/06/14/peerj-silicon-valley-culture-enters-academic-publishing/.
  3. Hoyt, Jason. “What Does the ‘J’ in ‘PeerJ’ Stand For?” PeerJ Blog, August 22, 2012. http://blog.peerj.com/post/29956055704/what-does-the-j-in-peerj-stand-for.
  4. “Is PeerJ Membership Publishing Sustainable?” The Scholarly Kitchen, June 14, 2012. http://scholarlykitchen.sspnet.org/2012/06/14/is-peerj-membership-publishing-sustainable/.
  5. Brantley
  6. Wennerås, Christine, and Agnes Wold. “Nepotism and sexism in peer-review.” Nature 387, no. 6631 (May 22, 1997): 341–3.
  7. For an ingenious way of demonstrating this, see Leek, Jeffrey T., Margaret A. Taub, and Fernando J. Pineda. “Cooperation Between Referees and Authors Increases Peer Review Accuracy.” PLoS ONE 6, no. 11 (November 9, 2011): e26895.
  8. Mainguy, Gaell, Mohammad R Motamedi, and Daniel Mietchen. “Peer Review—The Newcomers’ Perspective.” PLoS Biology 3, no. 9 (September 2005). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1201308/.
  9. Crotty, David. “Are University Block Grants the Right Way to Fund Open Access Mandates?” The Scholarly Kitchen, September 13, 2012. http://scholarlykitchen.sspnet.org/2012/09/13/are-university-block-grants-the-right-way-to-fund-open-access-mandates/.
  10. Van Noorden, Richard. “Open-access Deal for Particle Physics.” Nature 489, no. 7417 (September 24, 2012): 486.

The Digital Public Library of America: What Does a New Platform Mean for Academic Research?

Robert Darnton asked in the New York Review of Books blog nearly two years ago: “Can we create a National Digital Library?” 1 Anyone who recalls reference homework exercises checking bibliographic information for United States imprints versus British or French will certainly remember that the United States does not have a national library in the sense of a library that collects all the works of that country and creates a national bibliography 2. Certain libraries, such as the Library of Congress, have certain prerogatives for collection and dissemination of standards 3, but no single library creates a national bibliography. So it was for print, and so it remains, even more so, for digital. When Darnton asks that question, as he goes on to illuminate further in his article, he is asking a much larger question about libraries in the United States. European and Asian countries have created national digital libraries as part of, or in addition to, their national print libraries. The question is: if others can do it, why can’t we? Furthermore, why can’t we join those libraries with our national digital library? The DPLA has announced a collaboration with Europeana, which has already had notable successes in digitizing content and making it and its metadata freely available. This indicates that we could potentially create a useful worldwide digital library, or at least a North American/European one. The dream of Paul Otlet’s universal bibliography seems once again to be just out of reach.

In this post, I want to examine what the Digital Public Library of America claims to do, and what approaches it is taking. It is still too new, and there are still too many unanswered questions, to give any sort of final answer as to whether this will actually be the national digital library. Nonetheless, there seems to be enough traction and, perhaps more importantly, funding that we should pay close attention to what is delivered in April 2013.

Can we reach a common vision about the nature of the DPLA?

Planning for the DPLA started in the fall of 2010, when Harvard’s Berkman Center received a grant from the Sloan Foundation to begin planning the project in earnest. The initial idea was to digitize all the materials that it was legal to digitize, and create a platform that would be accessible to all people in the US. Google had already proved that it was possible, so it seemed conceivable that many libraries working together could repeat those successes, but with solely non-commercial motives 4.

The initial stages of planning brought out many different ideas and perspectives about the philosophical and practical components of the DPLA, many of which are still unresolved. The themes of debate that have emerged are whether the DPLA will be a true “public” library, and what in fact ought to be in such a library. David Rothman argues that the DPLA as described by Darnton would be a wonderful tool for making humanities research easy and viable for more people, but would not solve the problems of making popular e-books accessible through libraries or getting students up-to-date textbooks. The latter two aims are much more challenging than providing access to public domain or academic materials because a lot more money is at stake 5.

One of the projects for the Audience and Content workstream is to figure out how average Americans might actually use a digital public library of America. One of the potential use cases is a student who can use the DPLA alone to write a whole paper on the Iroquois Nations. Teachers and librarians posted some questions about this in the comments, including whether it is appropriate to tell students to use one portal for all research. We generally counsel students to check multiple sources, and getting students used to searching one place that happens to be appropriate for one topic may not work if the DPLA has nothing available on, say, the latest computer technology.

Digital content and the DPLA

What content the DPLA will provide will surely become clearer over the coming months. They have appointed Emily Gore as Director of Content, and continue to hold further working groups on content and audience. The DPLA website promises a remarkable vision for content:

The DPLA will incorporate all media types and formats including the written record—books, pamphlets, periodicals, manuscripts, and digital texts—and expanding into visual and audiovisual materials in concert with existing repositories. In order to lay a solid foundation for its collections, the DPLA will begin with works in the public domain that have already been digitized and are accessible through other initiatives. Further material will be added incrementally to this basic foundation, starting with orphan works and materials that are in copyright but out-of-print. The DPLA will also explore models for digital lending of in-copyright materials. The content that is contributed to or funded by the DPLA will be made available, including through bulk download, with no new restrictions, via a service available to libraries, museums, and archives in the United States, with use and reuse governed only by public law.  6

All of these models exist in one way or another already, however, so how is this something new?

The major purveyors of out of copyright digital book content are Google Books and HathiTrust. The potential problems with Google Books are obvious just in the name–Google is a publicly traded company with aspirations to be the hub of all world information. Privacy and availability, not to mention legality, are a few of the concerns. HathiTrust is a collective of research universities digitizing collections, many in concert with Google Books, but the full text of these books in a convenient format is generally only available to members of HathiTrust. HathiTrust faced a lawsuit from the Authors Guild about its digitization of orphan works, which is an issue the DPLA is also planning to address.

Other projects are trying to make currently in-copyright digital books more accessible, of which Unglue.it is probably the best known. This approach requires a critical mass of people actively paying to release a book into the public domain, and so it may not serve the scholar with a unique research project. Some future plans for the DPLA include obtaining funds to pay authors for use, but this may or may not include releasing books into the public domain.

DPLA is not meant to include books alone. Planning so far suggests that books make a logical jumping off point. The “Concept Note” points out that “if it takes the sky as its limit, it will never get off the ground.” Despite this caution, ideally it would eventually be a portal to all types of materials already made available by cultural institutions, including datasets and government information.

Do we need another platform?

The first element of the DPLA is code–it will use open source technologies in developing a platform, and will release all code (and the tools and services this code builds) as open source software.  The so-called “Beta Sprint” that took place last year invited people to “grapple, technically and creatively, with what has already been accomplished and what still need to be developed…” 7. The winning “betas” deal largely with issues of interoperability and linked data. Certainly if a platform could be developed that solved these problems, this would be a huge boon to the library world.

Getting involved with the DPLA and looking to the future

While the governance structure is becoming more formal, there are plenty of opportunities to become involved with the DPLA. Six working groups (called workstreams) were formed to discuss content, audience, legal issues, business models, governance, and technical issues. Becoming involved with the DPLA is as easy as signing up for an account on the wiki and noting your name and comments on the page of the working group in which you are interested. You can also sign up for mailing lists to stay involved in the project. Like many such projects, the work is done by the people who show up and speak up. If you read this and have an opinion on the direction the DPLA should take, it is not difficult to make sure your opinion gets heard by the right people.

Like all writing about the DPLA since the planning began, this post turns to a thought experiment as the next logical rhetorical step. Let’s say that the DPLA succeeds to the point where all public domain books in the United States are digitized and available in multiple formats to any person in the country, and a significant number of in-copyright works are also available. What does this mean for libraries as a whole? Does it make public libraries research libraries? How does it change the nature of research libraries? And lastly, will all this information create a new desire for knowledge among the American people?

References
  1. Darnton, Robert. “A Library Without Walls.” NYRblog, October 4, 2010. http://www.nybooks.com/blogs/nyrblog/2010/oct/04/library-without-walls/.
  2. McGowan, Ian. “National Libraries.” In Encyclopedia of Library and Information Sciences, Third Edition, 3850–3863.
  3. “Frequently Asked Questions – About the Library (Library of Congress).” n.d. http://www.loc.gov/about/faqs.html#every_book
  4. Dillon, Cy. “Planning the Digital Public Library of America.” College & Undergraduate Libraries 19, no. 1 (March 2012): 101–107.
  5. Rothman, David H. “It’s Time for a National Digital-Library System.” The Chronicle of Higher Education, February 24, 2011, sec. The Chronicle Review. http://chronicle.com/article/Its-Time-for-a-National/126489/.
  6. “Elements of the DPLA.” Digital Public Library of America, n.d. http://dp.la/about/elements-of-the-dpla/.
  7. “Digital Public Library of America Steering Committee Announces ‘Beta Sprint’ ”, May 20, 2011. http://cyber.law.harvard.edu/newsroom/Digital_Public_Library_America_Beta_Sprint.