Bridging DataRescue Events with Libraries+

We've recently had an influx of librarians interested in hosting DataRescue Events, including people wanting to hold events connected to library conferences. We're so excited about the enthusiasm these events have inspired. However, the DataRescue Event workflow used by so many wonderful events is imperfect, and we'd love to leverage the expertise of these librarians to bridge the work of events to the work going into Libraries+.

One of the most wonderful things about DataRescue Events is how they engage communities. "Hackathon" style events draw crowds and press that libraries are not used to seeing - and it's wonderful! People who attend and organize events rightly feel that their work there is a meaningful way to take action, and they learn a lot from the experience. We absolutely want to keep this invaluable component of events alive while also making the work even more meaningful by employing better practices, increasing awareness of the broader issues, and taking advantage of the expertise of librarian organizers and participants.

Working within the standard DataRescue web and data archiving workflow, there are some tweaks and additions you might consider. First, as the workflow is currently organized, Describing comes at the end. Any data curation librarian will tell you that description and documentation should happen throughout the data life cycle, starting at the beginning. One way you might tweak the workflow is to expand the Research piece, having participants add significantly more contextual information to the record for the data. You can also employ the DataRescuePDX workflow to a similar end.

Another easy and important way to connect the standard DataRescue activities to the work of the Libraries+ Network is to have a Long Trail path that considers the problems of preserving federal information and how it might be done in more sustainable ways. Insights from this activity can be shared with us (even posted on our blog) to help inform the discussion at the May meeting and beyond. More on what this might look like in a subsequent blog post soon!

There have already been events that have tried different workflows and activities in the name of DataRescue. The second DataRescueNH in Dover, NH taught web archiving skills to attendees. DataRescuePhilly and DataRescueDC held teach-ins and panels to connect the archiving activities to the larger context of the problem. Virginia Tech held a small informational DataRescue event. DataRescuePDX employed a completely different workflow, creating metadata for datasets so they'll be more discoverable, harvestable, and usable. DataRescue@UCSD will use the UCSD Library Digital Collections to make data identified by their scientists available. Many of the events being planned for Endangered Data Week could also be considered DataRescue, since they improve skills and understanding related to vulnerable digital information. The way we see it, there isn't just one way to do a DataRescue Event - it's all about what your organization can bring to the table and what's best for your community. Whatever you're thinking about doing, we'd love to chat and help you have the best event possible!

An experiment

Over the course of the last several months, as we've worked with a giant collaborative network of volunteers to save government information, we've learned about the value, vulnerability and variety of government data. As we've described elsewhere, we're now working on planning a meeting where data advocates, librarians, federal data producers, and researchers will gather to begin thinking through new approaches to safeguarding copies of federal data.

In order to further inform that conversation, we've begun an experiment with a few brave volunteer librarians to see what we can learn from one model of saving data.

This experiment is not designed to offer a solution to the problem of backing up federal information, but is instead an attempt to further understand the problem.

This experiment is based on our current understanding of the landscape of federal information. When we talk about federal data, we are, in fact, talking about data stored on plain HTML webpages, in visualized and embedded content, on FTP servers, in databases available only through query interfaces, as well as in files that conform more closely to what we might normally think of as research datasets.

In order to provide for long term backup and re-use of data, one goal might be to turn the data from webpages and query pages into datasets that include appropriate metadata and enough contextual information for future researchers to make use of them. In some cases, making a dataset will be as simple as downloading the relevant files from data.gov.
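To make the simple case concrete, here is a minimal sketch (in Python, standard library only) of pulling one dataset's file list and metadata through the public CKAN API behind catalog.data.gov and saving both together. The dataset id is hypothetical, and a real script would want error handling and safer filenames.

```python
# Minimal sketch: download one dataset's files plus metadata from
# data.gov's CKAN catalog. The dataset id below is hypothetical.
import json
import pathlib
import urllib.request

CKAN_API = "https://catalog.data.gov/api/3/action/package_show"
DATASET_ID = "example-climate-dataset"  # hypothetical id, for illustration

with urllib.request.urlopen(f"{CKAN_API}?id={DATASET_ID}") as resp:
    package = json.load(resp)["result"]

outdir = pathlib.Path(DATASET_ID)
outdir.mkdir(exist_ok=True)

# Each CKAN "resource" is one downloadable file (CSV, PDF, etc.).
for resource in package["resources"]:
    filename = resource.get("name") or resource["id"]
    print(f"downloading {filename}: {resource['url']}")
    urllib.request.urlretrieve(resource["url"], outdir / filename)

# Keep the catalog metadata alongside the files as minimal context.
(outdir / "package.json").write_text(json.dumps(package, indent=2))
```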

[Screenshot: a full dataset from data.gov]

Other sorts of data might require some compiling, where context from webpages needs to be combined with data scraped from within a query interface to create a dataset that can be usefully backed up and stored.
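Here is a hedged sketch of that compiling step, assuming a hypothetical agency query tool that can export CSV and an accompanying "about" page; the URLs and output filenames are invented for illustration. The point is that the scraped table and the explanatory page text get backed up together.

```python
# Sketch: combine rows scraped from a query interface with the webpage
# text that explains them, so the backup is self-describing.
import csv
import json
import urllib.request
from html.parser import HTMLParser

QUERY_URL = "https://example.agency.gov/tool/export?region=all&format=csv"  # hypothetical
CONTEXT_URL = "https://example.agency.gov/tool/about.html"                  # hypothetical

class TextExtractor(HTMLParser):
    """Collect the visible text of a page as minimal dataset context."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

# 1. Pull the tabular payload out of the query interface.
with urllib.request.urlopen(QUERY_URL) as resp:
    rows = list(csv.reader(resp.read().decode("utf-8").splitlines()))

# 2. Capture the webpage text that explains what the numbers mean.
extractor = TextExtractor()
with urllib.request.urlopen(CONTEXT_URL) as resp:
    extractor.feed(resp.read().decode("utf-8"))

# 3. Store the data and its context side by side.
with open("dataset.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
with open("context.json", "w") as f:
    json.dump({"data_source": QUERY_URL,
               "context_source": CONTEXT_URL,
               "description": " ".join(extractor.chunks)}, f, indent=2)
```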


The experiment

Here is what we've asked of these libraries:

  1. Identify a Designated Community for whom you are saving data.
    1. Pick a subset of data that would be useful to that community and that you'll try to save.
      For the purpose of this experiment, we are focused on data rather than webpages.
  2. In order to back up data that will be meaningful to people in the future, you'll need to decide how to chunk the data you're saving into pieces, and how to describe those pieces. Where does one dataset end and another begin? How much information do you need about the data you're backing up to make your copy re-usable and citable in the future? What files, webpages, or additional material complement the dataset you've identified? These are the key questions you will address as you create a model of the data you'll be saving so that it can be re-used.
  3. Gather the necessary files you've identified so that the federal data are effectively "backed up" in your system, and made available to the public.
    1. For those libraries with data repositories, we hope they'll make the data available through those repositories. For libraries without repositories, we can offer space in the datarefuge storage for their data.
  4. For each dataset you've backed up, create a data.json file to share with the datarefuge project for inclusion in our instance of CKAN (that is, in the datarefuge.org catalog). Make whatever changes to the standard format are necessary to point both to the original and to your copy (see the sketch after this list).
  5. Look for ways to include public involvement, advocacy and education in this process. Are there ways that some of this work could be done by volunteers, by citizen scientists?
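As an illustration of step 4, here is a sketch of a minimal data.json entry in the Project Open Data metadata schema that data.gov uses. Every name and URL below is hypothetical, and using two distributions (one for the original, one for the rescued copy) is our suggested convention rather than a prescribed part of the schema.

```python
# Sketch of step 4: write a minimal data.json record that points to
# both the original agency location and the rescued copy.
# All identifiers and URLs are hypothetical.
import json

record = {
    "title": "Example Agency Air Quality Measurements",
    "description": "Hourly readings compiled from the agency's public "
                   "query tool, with surrounding webpage context.",
    "identifier": "example-agency-air-quality-2017",
    "keyword": ["air quality", "climate"],
    "modified": "2017-03-01",
    "publisher": {"name": "Example Agency"},
    "contactPoint": {"fn": "DataRefuge Team",
                     "hasEmail": "mailto:contact@example.org"},
    "accessLevel": "public",
    "distribution": [
        {"accessURL": "https://example.agency.gov/tool/",
         "description": "original location at the agency"},
        {"downloadURL": "https://repository.example.edu/air-quality/dataset.csv",
         "mediaType": "text/csv",
         "description": "rescued copy in the library repository"},
    ],
}

with open("data.json", "w") as f:
    json.dump({"dataset": [record]}, f, indent=2)
```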

We hope that, through this experiment, we'll learn a few things:

  • What are the challenges to making government sites into datasets for backup and re-use?
  • How might these processes fit into library workflows?

There will, of course, remain a number of open questions that will need to be answered through continued collaboration and experimentation. These include:

  • Can we find ways to share our work with the data producers at agencies so that future re-use and discovery is enhanced?
  • How can we ensure that our system comprehensively addresses the data systems of the federal government?
  • How frequently should data be backed up, and what commitment will institutions make if they attempt this system?
  • What kinds of funding models and storage architecture will work best?
  • What kind of advocacy work will be most successful in continuing support for these efforts?

From DataRefuge to Libraries+

There have been nearly 30 DataRescue Events as of this writing. Can you believe it? We're so proud of and inspired by all the hard work of event organizers and attendees. You are amazing.

As you know, DataRefuge grew quickly from grassroots efforts. As we've grown, we've gotten a better view of the underlying problem that made these efforts necessary. We've discussed this in many places before, but to bring it home, I'll summarize again: government information is not archived systematically or particularly well.

This problem isn't confined to the government; it's true of most born-digital information. We take for granted that information on the web will remain on the web - especially if it's being maintained by a trusted source like the federal government. But it's never a good idea to keep all of your eggs in one basket.

This leaves a simple-sounding solution: we need to put our government information eggs into more baskets. This is where Climate Mirror and DataRescue Events come in. These basket-weaving brigades have done an amazing job collecting government information. However, in our view, these methods aren't the best way to ensure government information is archived going forward. Chickens keep laying eggs. So do other birds. And lizards. And spiders.

To address the issue of sustainability, we've been envisioning a reboot of the Federal Depository Library Program wherein libraries would take on the responsibility of archiving the data and information of specific agencies in a distributed, coordinated way. And the more we think about the problem, the more complex we find it. People in the library community have been thinking about this problem for years, as have people in the open data community, people within government agencies, and researchers from a variety of disciplines.

And yet the problem remains. Clearly this is not a problem we at DataRefuge are going to solve alone. We need to bring together the voices, viewpoints, and knowledge of these different communities. That's the aim of the blossoming Libraries+ Network, where representatives from all these communities will join forces to map out this problem and start to envision realistic solutions at the kick-off meeting May 8-9 this year.

We really can't say enough about how amazing the work of DataRescuers and Climate Mirrorists has been. DataRefuge will continue to stand with the efforts of DataRescue and support the work of Storytelling going forward. We're also excited to move forward alongside the Libraries+ Network. We hope you'll join us, however you can.

Originally posted on the PPEH Blog at http://www.ppehlab.org/blogposts/2017/3/9/datarefuge-update-quo-vadimus

A rare opportunity to make a long-term difference

This moment in history provides us with a rare opportunity to go beyond short-term data rescue and set the much needed foundation for the long-term future of preservation of government information.

Awareness of risk. At the moment, more people than ever are aware of the risk of relying solely on the government to preserve its own information. This was not true even six months ago. This awareness goes far beyond government information librarians and archivists. It includes the communities that use government information (our Designated Communities!) and the government employees who devote their careers to creating this information. It includes our colleagues, our professional organizations, and library managers.

This awareness is documented in the many stories in the popular press this year about massive “data rescue” projects drawing literally hundreds of volunteers. It is also demonstrated by the number of people nominating seeds (URLs) for the current End of Term harvest and the number of seeds nominated; both have grown by roughly an order of magnitude or more since 2012:

EOT year    Nominators    Seeds
2008        26            457
2012        31            1,476
2016        >392          11,377

Awareness of need for planning. But beyond the numbers, more people are learning first-hand that rescuing information at the end of its life-cycle can be difficult, incomplete, and subject to error and even loss. It is clear that last-minute rescue is essential in early 2017. But it is also clear that, in the future, efficient and effective preservation requires planning. This means that government agencies need to plan for the preservation of their own information and they need to do so at the beginning of the life-cycle of that information — even before it is actually created.

Opportunity to create demonstrable value. This awareness provides libraries with the opportunity to lead a movement to change government information policies that affect long-term preservation of and access to government information. By promoting this change, libraries will be laying the groundwork for the long-term preservation of information that their communities value highly. This provides an exceptional opportunity to work with motivated and inspired user communities toward a common goal. This is good news at a time when librarians are eager to demonstrate the value of libraries.

A model exists. And there is more good news. The model for a long-term government information policy not only exists, but libraries are already very familiar with it. In 2010, federal granting agencies like the NSF, the National Institutes of Health, and the Department of Energy started requiring researchers who receive federal grants to develop Data Management Plans (DMPs) for the data collected and analyzed during the research process. Thus, data gathered at government expense by researchers must have a plan for archiving that data and making it available to other researchers. The requirement for DMPs has driven a small revolution in data management in libraries.

Ironically, there is no similar requirement for government agencies to develop a plan for the long-term management of information they gather and produce. There are, of course, a variety of requirements for managing government “Records” but there are several problems with the existing regulations.

Gaps in existing regulations. The Federal Records Act and related laws and regulations cover only a portion of the huge amount of information gathered and created by the government. In the past, it was relatively easy to distinguish between “publications” and “Records,” but in the age of digital information, databases, and transactional e-government it is much more difficult to do so. Official executive agency “Records Schedules,” which are approved by the National Archives and Records Administration (NARA), define only a subset of the information gathered and created by an agency as Records suitable for deposit with NARA. (It must be noted that NARA cannot guarantee that it will provide online access even to born-digital Records deposited with it.) Further, the implementation of those Records Schedules is subject to interpretation by executive agency political appointees who may not always have preservation as their highest priority. This can make huge swaths of valuable information ineligible for deposit with NARA as Records.

Government data, documents, and publications that are not deemed official Records have no long-term preservation plan at all. In the paper-and-ink world, many agency publications that did not qualify as Records were printed by or sent to the Government Publishing Office (GPO) and deposited in Federal Depository Library Program (FDLP) libraries around the country (currently 1,147 libraries). Unfortunately, a perfect storm of policies and procedures has blocked FDLP libraries from preserving this huge class of government information. A 1983 court decision (INS v. Chadha, 462 U.S. 919, 952) makes it impossible to require agencies to deposit documents with GPO or the FDLP. The 1980 Paperwork Reduction Act (44 U.S.C. §§ 3501–3521) and the Office of Management and Budget (OMB)’s Circular A-130 have made it more difficult to distribute government information to FDLP libraries. The shift to born-digital information has decentralized publishing and distribution, and virtually eliminated best practices of metadata creation and standardization. GPO’s own Dissemination and Distribution Policy has further (and severely) limited the information it will distribute to FDLP libraries. Together, this “perfect storm” has reduced the deposit of this class of at-risk government information into FDLP libraries by ninety percent over the last twenty years.

The Solution: Information Management Plans. To plug the gaps in existing regulations, government agencies should be required to treat their own information with as much care as data gathered by researchers with government funding. What is needed is a new regulation that requires agencies to have Information Management Plans (IMPs) for all the information they collect, aggregate, and create.

We have proposed to the OMB a modification to their policy OMB Circular A-130: Managing Information as a Strategic Resource that would require every government agency to have an Information Management Plan.

Every government agency must have an “Information Management Plan” for the information it creates, collects, processes, or disseminates. The Information Management Plan must specify how the agency’s public information will be preserved for the long term, including its final deposit in a reputable, trusted government (e.g., NARA, GPO) and/or non-government digital repository to guarantee free public access to it.

Many Benefits! We believe that such a requirement would provide many benefits for agencies, libraries, archives, and the general public. We think it would do more to enhance long-term public access to government information than changes to Title 44 of the US Code (which codified the “free use of government publications”) could do.

  • It would make it possible to preserve information continuously without the need for hasty last-minute rescue efforts.
  • It would make it easier to identify and select information and preserve it outside of government control.
  • It would result in digital objects that are easier to preserve accurately and securely.
  • It would make it easy for government agencies to collaborate with digital repositories and designated communities outside the government for the long-term preservation of their information.
  • The scale of the resulting digital preservation infrastructure would provide an easy path for shared Succession Plans for Trusted Digital Repositories (TDRs) (Audit And Certification Of Trustworthy Digital Repositories [ISO Standard 16363]).

IMPs would provide these benefits through the practical response of vendors that provide software to government agencies. Those vendors would have an enormous market for flexible software solutions for creating digital government information and records, fitting the different needs of different agencies for database management, document creation, content management systems, email, and so forth. At the same time, such solutions would make it easy for agencies to output preservable digital objects, and an accurate inventory of them, ready for deposit as Submission Information Packages (SIPs) into TDRs.

Your advice?

We believe this is a reasonable suggestion with a good precedent (the DMPs), but we would appreciate hearing your opinions. Is A-130 the best target for such a regulation? What is the best way to propose, promote, and obtain such a new policy? What is the best wording for such a proposed policy?

Summary

We believe we have a singular moment of awareness of and support for the preservation of government information. This is an opportunity not just to preserve government information, but also to demonstrate the leadership of librarians and archivists and the value of libraries and archives.

(This is the second of two posts about setting long-term goals. The first post is A Long-Term Goal For Creating A Digital Government-Information Library Infrastructure.)

Authors:

James A. Jacobs, Librarian Emeritus, University of California San Diego
James R. Jacobs, Federal Government Information Librarian, Stanford University

A Long-Term Goal For Creating A Digital Government-Information Library Infrastructure

Now that so many have done so much good work to rescue so much data, it is time to reflect on our long-term goals. This is the first of two posts that suggest some steps to take. The second post is A rare opportunity to make a long-term difference.

The amount of data rescue work that has already been done by DataRefuge, ClimateMirror, Environmental Data and Governance Initiative (EDGI) projects, and the End of Term (EOT) 2016 crawl is truly remarkable. In a very practical sense, however, this is only the first stage in a long process. We still have a lot of work to do to make all the captured digital content (web pages, data, PDFs, videos, etc.) discoverable, understandable, and usable. We believe that the next step is to articulate a long-term goal to guide the next tasks.

Of course, we do already have broad goals, but up to now those goals have by necessity been more short-term than long-term. The short-term goals that have driven so much action have been either implicit (“rescue data!”) or explicit (“to document federal agencies’ presence on the World Wide Web during the transition of Presidential administrations” [EOT]). These have been sufficient to draw volunteer librarians, scientists, hackers, and members of the public, who have accomplished a lot! But, as the EOT folks will remind us, most of this work is volunteer work.

The next stages will require more resources and long-term commitments. Notable next tasks include: creating metadata, identifying and acquiring DataRefuge’s uncrawlable data, and doing Quality Assurance (QA) work on content that has been acquired. This work has begun. The University of North Texas, for example, has created a pilot crowdsourcing project to catalog a cache of EOT PDFs and is looking for volunteers. This upcoming work is essential in order to make content we rescue and acquire discoverable and usable and to ensure that the content is preserved for the long-term.

As we look to the long-term, we turn to the two main international standards for long-term preservation: OAIS (Reference Model For An Open Archival Information System) and TDR (Audit And Certification Of Trustworthy Digital Repositories). Using the terminology of those standards, our current actions have focused on “ingest.” Now we have to focus on the other functions of a TDR: management, preservation, access, and use. We might say that what we have been doing is Data Rescue; what we will do next is Data Preservation, which includes discovery, access, and use.

Given that, here is our suggestion for a long-term goal:

Create a digital government-information library infrastructure in which libraries collectively provide services for collections that are selected, acquired, organized, and preserved for specific Designated Communities (DCs).

Adopting this goal will not slow down or interrupt existing efforts. It focuses on “Designated Communities” and the life-cycle of information, and by doing so it will help prioritize our actions and attract libraries to participate in the next-stage activities. It will also make long-term participation easier and more effective by helping participants understand where their activities lead, what the outcomes will be, and what benefits they will get tomorrow by investing their resources in these activities today.

How does simply adopting a goal do all that?

First, by expressing the long-term goal in the language of OAIS and TDR, it assures participants that today’s activities will ensure long-term access to information that is important to their communities.

Second, by putting the focus on the users of the information, it demonstrates to our local communities that we are doing this for them. This will help make it practical to invest the needed resources in the necessary work. The goal focuses on users of information by explicitly saying that our actions have been, and will be, designed to provide content and services for specific user groups (Designated Communities in OAIS terminology).

Third, by focusing on an infrastructure rather than isolated projects, it provides an opportunity for libraries to benefit more by participating than by not participating.

The key to delivering these benefits lies in the concept of Designated Communities. In the paper-and-ink world, libraries were limited in who they could serve. “Users” had to be local; they had to be able to walk into our buildings. It was difficult and expensive to share either collections or services, so we limited both to members of our funding institution or a geographically-local community. In the digital world, we no longer have to operate under those constraints. This means that we can build collections for Designated Communities that are defined by discipline or subject or by how a community uses digital information. This is a big change from defining a community by its institutional affiliation or by its members’ geographical proximity to an institution or to each other.

This means that each participating institution can benefit from the contributions of all participating institutions. To use a simple example, if ten libraries each invested the cost of developing collections and services for two DCs, all ten libraries (and their local/institutional communities) would get the benefits of twenty specific collections and services. And there are more than one thousand Federal Depository Library Program (FDLP) libraries that could participate.

Even more importantly, this model means that the information-users will get better collections of the information they need and will get services that are tailored to how they look for, select, and use that information.

This approach may seem unconventional to government information specialists who are familiar with agency-based collections and services. But the digital world allows us to combine the benefits of agency-based acquisitions with DC-based collections and services.

This means that we can still use the agency-based model for much of our work while simultaneously providing collections for DCs. For example, it is probably always more efficient and effective to identify, select, and acquire information by focusing on the output of an agency. It is certainly easier to ensure comprehensiveness with this approach. It is often easier to create metadata and do QA for a single agency at a time, and information content can easily be stored and managed using the same agency-based approach. Yet information stored by agency can be viewed and served (through use of metadata and APIs) as a single “virtual” collection for a Designated Community. Any given document, dataset, or database may show up in the collections of several DCs, and any given “virtual” collection can easily contain content from many agencies.

For example, consider how this approach would affect a Designated Community of economists. A collection built to serve economists would include information from multiple agencies (e.g., Commerce, the Council of Economic Advisers, CBO, GAO, NEC, USDA, ITA, etc.). When one library built such a collection and provided services for it, every library with economists would be better able to serve its community of economists, and every economist at every institution would be able to more easily find and use the information she needs. The same advantages would hold for DCs based on kind of use (e.g., document-based reading; computational textual analysis; GIS; numeric data analysis; re-purposing and combining datasets; etc.).
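To make the "virtual collection" idea concrete, here is a small sketch with invented records and subject tags, showing how content stored by agency could be served to a Designated Community purely by filtering on metadata:

```python
# Sketch: records live in agency-based storage, but a Designated
# Community's "virtual" collection is just a metadata query across
# agencies. All records and subject tags below are invented.
AGENCY_RECORDS = [
    {"agency": "Commerce", "title": "County Business Patterns", "subjects": ["economics"]},
    {"agency": "CBO", "title": "Budget and Economic Outlook", "subjects": ["economics"]},
    {"agency": "USDA", "title": "Crop Production Report", "subjects": ["agriculture", "economics"]},
    {"agency": "EPA", "title": "Air Quality Index Data", "subjects": ["environment"]},
]

def virtual_collection(records, designated_community):
    """Serve one DC's collection without moving anything out of agency storage."""
    return [r for r in records if designated_community in r["subjects"]]

for record in virtual_collection(AGENCY_RECORDS, "economics"):
    print(f"{record['agency']}: {record['title']}")
# The same USDA dataset would also appear in an "agriculture" collection.
```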

Summary

We believe that adopting this goal will have several benefits. It will help attract more libraries to participate in the essential work that needs to be done after information is captured. It will provide a clear path for planning the long-term preservation of the information acquired. It will provide better collections and services to more users more efficiently and effectively than could be done by individual libraries working on their own. It will demonstrate the value of libraries to our local user-communities, our parent institutions, and funding agencies.

Authors:

James A. Jacobs, Librarian Emeritus, University of California San Diego
James R. Jacobs, Federal Government Information Librarian, Stanford University

Recording: Latest Lessons Learned

Laurie Allen, Assistant Director for Digital Scholarship at Penn Libraries, walks us through an overview of what our colleagues engaged in data rescue events have learned and how academic research libraries can complement those efforts.

Data rescue events are a bottom-up strategy to get as much data as we can, working with people with a wide variety of skill sets (not necessarily library-related) during a limited time-frame.

We are proposing that research libraries complement this with a top-down strategy. Librarians know how government agency data is organized and what types of information researchers need, and they can take on the work of downloading datasets as part of their routine work.

We are seeking a few research libraries willing to commit to specific agencies and to collaborate on a shared workflow as a pilot.

Emerging Ideas for Ways Libraries Can Contribute

After the initial webinar on Monday, we heard constructive feedback and good questions from colleagues at a number of libraries. The scoop: some of us have more resources than others, and finding the best way to contribute quickly and effectively isn't obvious. In the spirit of reflecting back the four levels of data rescue effort we are hearing about, below is an approach that strives to balance flexibility with interest and resources in a way that remains true to the principle of “systematically grounded action.”

Here’s an overview:

Here’s the cycle:

 

Here are some details…

 

We look forward to hearing what you think. Thank you for all you are doing!

You're Invited

We hope you can join a collaborative project that leverages the talent and energy of librarians in addressing a wicked problem: preserving born-digital government data.

Given the successes of the #DataRefuge project to rescue climate and environmental data, librarians have started to connect, ask, act, and contemplate collective action for more types of data. Let’s figure it out together!

Join us for a 30-minute webinar to kick off collaborations:
Monday, February 6 @ 12:15 pm ET

Recording available

To join the collaboration, fill out this form on behalf of your library or archives; the form feeds the US Federal Agencies Coordination spreadsheet.

Here are some background documents that led us to reach out to ARL to convene people and energy toward positive action:

Leveraging Libraries (pdf, 1/27/17)

Libraries Network Overview (pdf, 2/1/17)

Chain of Custody (github, 2/1/17)

Hope to see you online!