Cambridge University Libraries' inaugural Digital Preservation Policy

The inaugural Cambridge University Libraries Digital Preservation Policy was published last week. Somaya Langley (Cambridge Policy & Planning Fellow) provides some insight into the policy development process and announces a policy event in London, presented in collaboration with Edith (Oxford Policy & Planning Fellow), to be held in early December 2018.


In December 2016, I started the digital preservation policy development process for Cambridge University Library (CUL), which has finally culminated in a published policy.

Step one

Commencing with a ‘quick and dirty’ policy gap analysis at CUL, what I discovered was not so much that there were gaps in the existing policy landscape, but rather that there was a dearth of much-needed policies. The gap analysis at CUL found that a few key policies did exist for different audiences (some intended to guide CUL, some to guide researchers and some meant for all staff and researchers working at the University of Cambridge). While my counterpart at Oxford found duplication in their policies across the Bodleian Libraries and the University of Oxford, I mostly found chasms.

Next step

The second step in the policy development process was attempting to meet an immediate need from staff by adding some “placeholder” digital preservation statements into the Collection Care and Conservation Policy, which was then under review. In the longer term, while it might be ideal to combine preservation into a single policy (encompassing the conservation and preservation of physical and digital collection items), CUL’s digital preservation maturity and skill capabilities are too low at present. The focus really needed to be on how to manage digital content, hence the need for a separate Cambridge University Libraries Digital Preservation Policy.

That said, like everything else I’ve been doing at Cambridge, it needed to be addressed holistically. And policy is no exception. Undertaking about two full weeks of work (spread across several months in early 2017) on the review of the Collection Care and Conservation Policy meant I could include some statements in that policy that will support better care for digital (and audiovisual) content still remaining on carriers that are yet to be transferred.

Collaborative development

Then, in June 2017, we moved on to undertaking policy development collaboratively. Part of this was to do an international digital preservation policy review – looking at dozens of different policies (and some strategies). Edith wrote about the policy development process back in the middle of last year.

The absolute lion’s share of the work was carried out by my Oxford counterparts, Edith and Sarah. Due to other work priorities, I didn’t have much available time during this stage. This is why it is so important to have a team – whether this is a co-located team or distributed across an organisation or multiple organisations – when working in the digital preservation space. I really can’t thank them enough for carrying the load for this task.

Policy template

My contribution was to develop a generic policy template, for use in both our organisations. For those who know me, you will know I prefer to ‘borrow and adapt’ rather than reinvent the wheel. So I used the layout of policies from a previous workplace and constructed a template for use by CUL and the Bodleian Libraries. I was particularly keen to ensure what I developed was generic, so that it could be used for any type of policy development in future.

This template has now been provided to the Digital Preservation Coalition, who will make it available with other documents in the coming years – so that some of this groundwork doesn’t have to be carried out by every other organisation still needing to do digital preservation policy (or other policy) development. We found in our international digital preservation maturity and resourcing survey (another blog post on this is still to follow) that at least 42% of organisations internationally still do not have a digital preservation policy.

Who has a digital preservation policy?

What next?

Due to other work priorities, drafting the digital preservation policy didn’t properly commence until earlier this year. But by this point I had a good handle on my organisation’s specific:

  • Challenges and issues related to digital content (not just preservation and management concerns)
  • High-level ‘profile’ of digital collections, right across all content ‘classes’
  • Gaps in policy, standards, procedures and guidelines (PSPG) as well as strategy
  • Appreciation of a wide range of digital preservation policies (internationally)
  • Digital preservation maturity (holistic, not just technical) – based on maturity assessments using several digital preservation maturity models
  • Governance (related to policy and strategy)
  • Language relevant to my organisation
  • Responsibilities across the organisation
  • Relevant legislation (UK/EU)

This shaped my approach to drafting a digital preservation policy that would meet CUL’s needs.

Approach

I realised that CUL required a comprehensive policy that would fill the many gaps that other policies would ideally cover. I should note that there are many ways of producing a policy, and it does have to be tailored to meet the needs of your organisation. (You can compare with Edith’s digital preservation policy for the Bodleian Libraries, Oxford.)

The next steps involved:

  • Gathering requirements (this had already taken place during 2017)
  • Setting out a high-level structure/list of points to address
  • Defining the stakeholder group membership (and ways of engaging with them)
  • Setting the frame of the task ahead
  • Agreeing on the scope (this changed from ‘Cambridge University Library’ to ‘Cambridge University Libraries’ – encompassing CUL’s affiliate and dependent libraries)

Then came the iterative process of:

  1. Drafting policy statements and principles
  2. Meeting with the stakeholder group and discussing the draft
  3. Gathering feedback on the policy draft (internally and externally)
  4. Incorporating feedback
  5. Circulating a new version of the draft
  6. Developing associated documentation (to support the policy)

Once a final version had been reached, this was followed by the approvals and ratification process.

What do we have?

Last week, the inaugural Cambridge University Libraries Digital Preservation Policy was published (which was not without a few more hurdles).

It has been an ‘on again, off again’ process that has taken 23 months in total. Now we can say, for CUL and the University of Cambridge, that:

“Long-term preservation of digital content is essential to the University’s mission of contributing to society through the pursuit of education, learning, and research.”

This complements some of our other CUL policies.

What now?

This is never the end of a policy process. Policy should be a ‘living and breathing’ process, with the policy document itself purely being there to keep a record of the agreed-upon decisions and principles.

So, of course there is more to do. “But what’s that?”, I hear you say.

Join us

There is so much more that Edith and I would like to share with you about our policy development journey over the past two years of the Digital Preservation at Oxford and Cambridge (DPOC) project.

So much so that we’re running an event in London on Tuesday 4th December 2018 on Devising Your Digital Preservation Policy, hosted by the DPC. (There is one seat left – if you’re quick, that could be you).

We’re also lucky to be joined by two ‘provocateurs’ for the day:

  • Kirsty Lingstadt, Head of Digital Library and Deputy Director of Library and University Collections, University of Edinburgh
  • Jenny Mitcham, Head of Good Practice and Standards, Digital Preservation Coalition (who has just landed in her new role – congrats & welcome to Jenny!)

There is so much more I could say about policy development in relation to digital content, but I’ll leave it there. I do hope you get to hear Edith and me wax lyrical about this.

Thank-yous

Finally, I must thank my Cambridge Polonsky team members, Edith Halvarsson (my Oxford counterpart), plus Paul Wheatley and William Kilbride from the DPC. Policy can’t be developed in a void and their contributions and feedback have been invaluable.

Planning your (digital) funeral: for projects

Cambridge Policy & Planning Fellow, Somaya, writes about her paper and presentation from the Digital Cultural Heritage Conference 2017. The conference paper, Planning for the End from the Start: an Argument for Digital Stewardship, Long-Term Thinking and Alternative Capture Approaches, looks at considering digital preservation at the start of a digital humanities project and provides useful advice for digital humanities researchers to use in their current projects.


In August I presented at the Digital Cultural Heritage 2017 international conference in Berlin (incidentally, my favourite city in the whole world).

Berlin – view from the river Spree. Photo: Somaya Langley

I presented the Friday morning Plenary session on Planning for the End from the Start: an Argument for Digital Stewardship, Long-Term Thinking and Alternative Capture Approaches. Otherwise known as: ‘planning for your funeral when you are conceived’. The presentation represents challenges faced by both Oxford and Cambridge, and the thinking behind it was done collaboratively by me and my Oxford Policy & Planning counterpart, Edith Halvarsson.

We decided it was a good idea to present on this topic to an international digital cultural heritage audience, who are likely to experience challenges similar to those of our own researchers. It is based on some common digital preservation use cases that we are finding in each of our universities.

The Scenario

A Digital Humanities project receives project funding and develops a series of digital materials as part of the research project, and potentially some innovative tools as well. For one reason or another, ongoing funding cannot be secured and so the PIs/project team need to find a new home for the digital outputs of the project.

Example Cases

We have numerous examples of these situations at Cambridge and Oxford. Many projects containing digital content that needs to be ‘rehoused’ are created in the online environment, typically as websites. Some examples include:

Holistic Thinking

We believe that thinking holistically right at the start of a project can provide options further down the line, should an unfavourable funding outcome be received.

So it is important to consider holistic thinking, specifically a Digital Stewardship approach (incorporating Digital Curation & Digital Preservation).

Models for Preservation

Digital materials don’t necessarily exist in a static form and often they don’t exist in isolation. It’s important to think about digital content as being part of a lifecycle and managed by a variety of different workflows. Digital materials are also subject to many risks so these also need to be considered.

Some models to frame thinking about digital materials:

Documentation

It is incredibly important to document your project. When handing over responsibility for your digital materials and data, hand over the documentation too: whoever becomes responsible for hosting or preserving your digital project will need to rely on this information. It is also important to ensure the implementation of standards, metadata schemas, persistent identifiers and so on.

This can include providing associated materials, such as:

Data Management Plans

Some better use of Data Management Plans (DMPs) could be:

  • Submitting DMPs alongside the data
  • Writing DMPs as dot-points rather than prose
  • Including Technical Specifications such as information about code, software, software versions, hardware and other dependencies
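
On that last point about technical specifications: a machine-readable snapshot of the software environment can be generated rather than written out by hand. The sketch below is a minimal, hypothetical Python example (the output filename and choice of fields are my own assumptions, not part of any DMP template); it records the platform, Python version and installed package versions so they can be deposited alongside the data.

    import json
    import platform
    import sys
    from importlib import metadata

    # Collect basic details about the machine and the interpreter.
    environment = {
        "platform": platform.platform(),            # operating system and version
        "machine": platform.machine(),              # e.g. x86_64
        "python_version": sys.version.split()[0],
        # Record every installed package and its version as a simple mapping.
        "packages": {
            dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
        },
    }

    # Write the snapshot next to the data so it travels with it.
    with open("technical_specification.json", "w") as handle:
        json.dump(environment, handle, indent=2, sort_keys=True)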

An example of a DMP from Cambridge University’s Dr Laurent Gatto: Data Management Plan for a Biotechnology and Biological Sciences Research Council

Borrowing from Other Disciplines

Rather than having to reinvent the wheel, we should also consider borrowing from other disciplines. For example, borrowing from the performing arts we might provide similar documents and information such as:

  • Technical Rider (a list of requirements for staging a music gig or theatre show)
  • Stage Plots (layout of instruments, performers and other equipment on stage)
  • Input Lists (ordered list of the different audio channels from your instruments/microphones etc. that you’ll need to send to the mixing desk)

For digital humanities projects and other complex digital works, providing simple and straightforward information about data flows (including inputs and outputs) will greatly assist digital preservationists in determining where something has broken in the future.
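
To illustrate what ‘simple and straightforward information about data flows’ might look like in practice, here is a minimal sketch (the field names and the example pipeline are hypothetical, chosen purely for illustration). It records each processing step with its inputs, outputs and the software used, and saves that record alongside the project.

    import json

    # A minimal, hypothetical record of how data moves through a project:
    # one entry per processing step, naming its inputs, outputs and tooling.
    data_flows = [
        {
            "step": "transcribe interviews",
            "inputs": ["audio/interview_01.wav"],
            "outputs": ["transcripts/interview_01.txt"],
            "software": "manual transcription",
        },
        {
            "step": "build searchable index",
            "inputs": ["transcripts/interview_01.txt"],
            "outputs": ["site/index.db"],
            "software": "custom Python script, v1.2",
        },
    ]

    # Keep the record with the rest of the project documentation.
    with open("data_flows.json", "w") as handle:
        json.dump(data_flows, handle, indent=2)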

Several examples of Technical Riders can be found here:

Approaches

Here are some approaches to consider with regard to the interim digital preservation of digital materials:

Bundling & Bitstream Preservation

The simplest and most basic approach may be to just zip up files and undertake bitstream preservation. Bitstream preservation only ensures that the zeroes and ones that went into a ‘system’ come out as the same zeroes and ones. Nothing more.
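
As a rough illustration of what this can involve in practice, the sketch below (a minimal example with hypothetical file and directory names) bundles a project folder into a single zip file and records a SHA-256 checksum for it; re-computing and comparing that checksum later is the basic check that bitstream preservation relies on.

    import hashlib
    import shutil

    # Bundle the project directory into a single zip file.
    archive_path = shutil.make_archive("my_project_bundle", "zip", root_dir="my_project")

    # Compute a SHA-256 checksum of the bundle so the 'zeroes and ones'
    # can be verified as unchanged in the future.
    sha256 = hashlib.sha256()
    with open(archive_path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            sha256.update(chunk)

    # Store the checksum alongside the archive.
    with open(archive_path + ".sha256", "w") as handle:
        handle.write(f"{sha256.hexdigest()}  {archive_path}\n")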

Exporting / Migrating

Consider exporting digital materials and/or data plus metadata into recognised standards as a means of migrating into another system.

For databases, the SIARD (Software Independent Archiving of Relational Databases) standard may be of use.
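
Where SIARD tooling is not an option, a far simpler (and far less complete) fallback is to export each table to an open format such as CSV, together with a copy of the schema. The sketch below is a minimal, hypothetical example for a SQLite database – the filenames are placeholders and this is not a substitute for SIARD, just an illustration of the general idea.

    import csv
    import sqlite3
    from pathlib import Path

    db_path = "project.db"              # hypothetical source database
    out_dir = Path("database_export")
    out_dir.mkdir(exist_ok=True)

    connection = sqlite3.connect(db_path)
    cursor = connection.cursor()

    # List the tables in the database.
    cursor.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    tables = [row[0] for row in cursor.fetchall()]

    for table in tables:
        # Keep the table definition (schema) alongside the data.
        cursor.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?", (table,)
        )
        (out_dir / f"{table}.schema.sql").write_text(cursor.fetchone()[0])

        # Export the rows, with a header, to CSV.
        cursor.execute(f"SELECT * FROM {table}")
        headers = [description[0] for description in cursor.description]
        with open(out_dir / f"{table}.csv", "w", newline="") as handle:
            writer = csv.writer(handle)
            writer.writerow(headers)
            writer.writerows(cursor.fetchall())

    connection.close()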

Hosting Code

Consider hosting code within your own institutional repository or digital preservation system (if your organisation has access to this option), or with a service such as GitHub.

Packing it Down & ‘Putting on Ice’

You may need to consider ‘packing up’ your digital materials in a way that lets you ‘put them on ice’ – so that, when funding is secured in the future, they can be brought back to life relatively simply.

An example of this is the work that Peter Sefton, from the University of Sydney in Australia, has been trialling. Based on Omeka, he has created OzMeka, an attempt at a standardised way of handling research project digital outputs that have been presented online. One example of this is Dharmae.

Alternatively, King’s Digital Lab provides infrastructure for eResearch and Digital Humanities projects, ensuring the foundations of digital projects are stable from the get-go and mitigating risks around the longer-term sustainability of the digital content created as part of those projects.

Maintaining Access

This could be done through traditional web archiving approaches, such as using web archiving tools (Heritrix or HTTrack), or downloading video materials using Video Download Helper. Alternatively, if you are part of an institution, the Internet Archive’s Archive-It service may be something you want to consider; they can work with your institution to implement this.

Hosted Infrastructure Arrangements

Another approach is finding another organisation to take on the hosting of your service. If you do manage to negotiate this, you will need to put in place a contract or Memorandum of Understanding (MOU), as well as handing over the various documentation mentioned earlier.

Video Screen Capture

A simple way of attempting to document a journey through a complex digital work (not necessarily online; this can apply to other complex interactive digital works as well) may be to make a video screen capture.

Kymata Atlas – Video Screen Capture still

Alternatively, you can record a journey through an interactive website using Webrecorder, developed by Rhizome, which will produce WARC web archive files.
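
If you do end up with WARC files – from Webrecorder or any other crawler – it is worth checking what they actually contain before putting them into storage. The sketch below is a minimal example using the open-source warcio Python library (the filename is a placeholder); it lists the URL and content type of each captured response.

    from warcio.archiveiterator import ArchiveIterator

    # Iterate over the records in a WARC file and print what was captured.
    with open("my_capture.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                content_type = record.http_headers.get_header("Content-Type")
                print(f"{content_type or 'unknown type'}: {url}")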

Documenting in Context

Another means of understanding complex digital objects is to document the work in the context in which it was experienced. One example of this is netart.database, the work of Robert Sakrowski and Constant Dullaart.

An example of this is the work of Dutch and Belgian net.artists JODI (Joan Heemskerk & Dirk Paesmans) shown here.

JODI – netart.database

Borrowing from documenting and archiving in the arts, an approach of ‘documenting around the work’ might be suitable – for example, photographing and videoing interactive audiovisual installations.

Web Archives in Context

Another opportunity to understand websites – if they have been captured by the Internet Archive – is viewing these websites using another tool developed by Rhizome, oldweb.today.

An example of the Cambridge University Library website from 1997, shown in a Netscape 3.04 browser.

Cambridge University Library website in 1997 via oldweb.today

Conclusions

While there is no one perfect solution and each approach has its own pros and cons, combining different methods might make your digital materials available beyond the lifespan of your project. These methods will help ensure that digital material is suitably documented, preserved and potentially accessible – so that both you and others can use the data in an ongoing manner.

Consider:

  • How you want to preserve the data
  • How you want to provide access to your digital material
  • Developing a strategy that includes several different methods

Finally, I think this excerpt is relevant to how we approach digital stewardship and digital preservation:

“No man is an island entire of itself; every man is a piece of the continent, a part of the main” – Meditation XVII, John Donne

We are all in this together and rather than each having to troubleshoot alone and building our own separate solutions, it would be great if we can work to our strengths in collaborative ways, while sharing our knowledge and skills with others.

DPASSH: Getting close to producers, consumers and digital preservation

Sarah shares her thoughts after attending the DPASSH (Digital Preservation in the Arts, Social Sciences and Humanities) Conference at the University of Sussex (14 – 15 June).


DPASSH is a conference that the Digital Repository of Ireland (DRI) puts on with a host organisation. This year, it was hosted by the Sussex Humanities Lab at the University of Sussex, Brighton. What is exciting about this digital preservation conference is that it brings together creators (producers) and users (consumers) with digital preservation experts. Most digital preservation conferences end up being a bit of an echo chamber, full of practitioners and vendors only. But what about the creators and the users? What knowledge can we share? What can we learn?

DPASSH is a small conference, but it was an opportunity to see what researchers are creating and how they are engaging with digital collections. For example, in Stefania Forlini’s talk she discussed the perils of a content-centric digitisation process where unique print artefacts are all treated the same; the process flattens everything into identical objects even though they are very different. What about the materials and the physicality of the object? It has stories to tell as well.

To Forlini, books span several domains of sensory experience and our digitised collections should reflect that. With the Gibson Project, Forlini and project researchers are trying to find ways to bring some of those experiences back through the Speculative W@nderverse. They are currently experimenting with embossing different kinds of paper with a code that can be read by a computer. The computer can then bring up the science fiction pamphlets that are made of that specific material. A user can then feel the physicality of the digitised item and explore the text, themes and relationships to other items in the collection using generous interfaces. This combines a physical sensory experience with a digital experience.

For creators, the decision of what research to capture and preserve is sometimes difficult; often they lack the tools to capture the information. Other times, creators do not have the skills to perform proper archival selection. Athanasios Velios offered a tool solution for digital artists called Artivity. Artivity can capture the actions performed on a digital artwork in certain programs, like Photoshop or Illustrator. This allows the artist to record their creative process and gives future researchers the opportunity to study the creative process. Steph Taylor from CoSector suggested in her talk that creators are archivists now, because they are constantly appraising their digital collections and making selection decisions.  It is important that archivists and digital preservation practitioners empower creators to make good decisions around what should be kept for the long-term.

As a bonus to the conference, I was awarded the ‘Best Tweet’ award by the DPC and DPASSH. It was a nice way to round out two good, informative days. I plan to purchase many books with my gift voucher!

I certainly hope they hold the conference next year, as I think it is important for researchers in the humanities, arts and social sciences to engage with digital preservation experts, archivists and librarians. There is a lot to learn from each other. How often do we get our creators and users in one room with us digital preservation nerds?

Preserving research – update from the Cambridge Technical Fellow

Cambridge’s Technical Fellow, Dave, discusses some of the challenges and questions around preserving ‘research output’ at Cambridge University Library.


One of the types of content we’ve been analysing as part of our initial content survey has been labelled ‘research output’. We knew this was a catch-all term, but (according to the categories in Cambridge’s Apollo Repository), ‘research output’ potentially covers: “Articles, Audio Files, Books or Book Chapters, Chemical Structures, Conference Objects, Datasets, Images, Learning Objects, Manuscripts, Maps, Preprints, Presentations, Reports, Software, Theses, Videos, Web Pages, and Working Papers”. Oh – and of course, “Other”. Quite a bundle of complexity to hide behind one simple ‘research output’ label.

One of the categories in particular, ‘Dataset’, zooms the fractal of complexity in one step further. So far, we’ve only spoken in-depth to a small set of scientists (though our participation on Cambridge’s Research Data Management Project Group means we have a great network of people to call on). However, both meetings we’ve had indicate that ‘Datasets’ are a whole new Pandora’s box of complicated management, storage and preservation challenges.

However – if we pull back from the complexity a little, things start to clarify. One of the scientists we spoke to (Ben Steventon at the Steventon Group) presented a very clear picture of how his research ‘tiered’ the data his team produced, from 2-4 terabyte outputs from a Light Sheet Microscope (at the Cambridge Advanced Imaging Centre) via two intermediate layers of compression and modelling, to ‘delivery’ files only megabytes in size. One aspect of the challenge of preserving such research, then, would seem to be one of tiering preservation storage media to match the research design.

(I believe our colleagues at the JISC, who Cambridge are working with on the Research Data Management Shared Service Pilot Project, may be way ahead of us on this…)

Of course, tiering storage is only one part of the preservation problem for research data: the same issues of acquisition and retention that have always been part of archiving still apply… But that’s perhaps where the ‘delivery’ layer of the Steventon Group’s research design starts to play a role. In 50 or 100 years’ time, which sets of the research data might people still be interested in? It’s obviously very hard to tell, but perhaps it’s more likely to be the research that underpins the key model: the major finding?

Reaction to the ‘delivered research’ (which included papers, presentations and perhaps three or four more from the list above) plays a big role, here. Will we keep all 4TBs from every Light Sheet session ever conducted, for the entirety of a five or ten-year project? Unlikely, I’d say. But could we store (somewhere cold, slow and cheap) the 4TBs from the experiment that confirmed the major finding?

That sounds a bit more within the realms of possibility, mostly because it feels as if there might be a chance that someone might want to work with it again in 50 years’ time. One aspect of modern-day research that makes me feel this might be true is the complexity of the dependencies between pieces of modern science, and the software it uses in particular. (Blender, for example, or Fiji). One could be pessimistic here and paint a negative scenario: what if a major bug is found in one of those apps that calls into question the science ‘above it in the chain’? But there’s an optimistic view, here, too… What if someone comes up with an entirely new, more effective analysis method that replaces something current science depends on? Might there not be value in pulling the data from old experiments ‘out of the archive’ and re-running them with the new kit? What would we find?

We’ll be able to address some of these questions in a bit more detail later in the project. However, one of the more obvious things talking to scientists has revealed is that many of them seem to have large collections of images that need careful management. That seems quite relevant to some of the more ‘close to home’ issues we’re looking at right now in The Library.

IDCC 2017 – data champions among us

Outreach and Training Fellow, Sarah, provides some insight into the themes from the recent IDCC conference in Edinburgh on 21 – 22 February. The DPOC team also presented their first poster, “Parallel Auditing of the University of Cambridge and the University of Oxford’s Institutional Repositories,” which is available on the ‘Resource’ page.


Storm Doris waited to hit until after the main International Digital Curation Conference (IDCC) had ended, allowing for two days of great speakers. The conference focused on research data management (RDM) and sharing data. In Kevin Ashley’s wrap-up, he touched on data champions and the possibilities of data sharing as two of the many emerging themes from IDCC.

Getting researchers to commit to good data practice and then publish data for reuse is not easy. Many talks focused around training and engagement of researchers to improve their data management practice. Marta Teperek and Rosie Higman from Cambridge University Library (CUL) gave excellent talks on engaging their research community in RDM. Teperek found value in going to the community in a bottom-up, research-led approach. It was time-intensive, but allowed the RDM team at CUL to understand the problems Cambridge researchers faced and address them. A top-down, policy-driven approach was also used, but it has been a combination of the two that has been the most effective for CUL.

Higman went on to speak about the data champions initiative. Data champions were recruited from students, post-doctoral researchers, administrators and lecturers. What they had in common was their willingness to advocate for good RDM practices. Each of the 41 data champions was responsible for at least one training session a year. While the data champions did not always do what the team expected, their advocacy for good RDM practice has been invaluable. Researchers need strong advocates to see the value in publishing their data – it is not just about complying with policy.

On day two, I heard from researcher and data champion Dr. Niamh Moore from the University of Edinburgh. Moore finds that many researchers either think archiving their data is a waste of time or are concerned about the future use of their data. As a data champion, she believes that research data is worth sharing and thinks other researchers should be asking, ‘how can I make my data flourish?’. Moore uses Omeka to share her research data from her mid-90s project at the Clayoquot Sound peace camp, called Clayoquot Lives. For Moore, benefits to sharing research data include:

  • using it as a teaching resource for undergraduates (getting them to play with data, which many do not have a chance to do);
  • public engagement impact (for Moore it was an opportunity to engage with the people previously interviewed at Clayoquot); and
  • new articles: creating new relationships and new research where she can reuse her own data in new ways or other academics can as well.

Opening up data and archiving leads to new possibilities. The closing keynote on day one discussed the possibilities of using data to improve the visitor experience for people at the British Museum. Data scientist Alice Daish spoke of data as the unloved superhero. It can rescue organisations from questions and problems by providing answers, helping organisations make decisions and take action, and even prompting new questions. For example, Daish has been able to wrangle and utilise data at the British Museum to learn about the most popular collection items on display (the Rosetta Stone came first!).

Daish, like Teperek and Higman, touched on outreach as the only way to advocate for data – creating good data, sharing it, and using it to its fullest potential. The DPOC team welcomes this advocacy; we’d like to add to it and see that steps are also taken to preserve this data.

Also, it was great to talk about the work we have been doing and the next steps for the project—thanks to everyone who stopped by our poster!

Oxford Fellows (From left: Sarah, Edith, James) holding the DPOC poster out front of the appropriately named “Fellows Entrance” at the Royal College of Surgeons.