Electronic lab notebooks and digital preservation: part II

In her previous blog post on electronic lab notebooks (ELNs), Sarah outlined a series of research questions that she wanted to pursue to see what could be preserved from an ELN. Here are some of her results.


In my last post, I had a number of questions that I wanted to answer regarding the use of ELNs at Oxford, since IT Services is currently running a pilot with LabArchives.

Those questions were:

  1. Authenticity of research – are timestamps and IP addresses retained when the ELN is exported from LabArchives?
  2. Version/revision history – Can users export all previous versions of data? If not users, then can IT Services? Can the information on revision history be exported, even if not the data?
  3. Commenting on the ELN – are comments on the ELN exported? Are they retained if deleted in revision history?
  4. Export – What exactly can be exported by a user? What does it look like? What functionality do you have with the data? What is lost?

What did I find out?

I started by looking at the IT Services' webpage on ELNs. It mentions what you can download (HTML or PDF), but it doesn't say much about long-term retention. There's a lot of useful advice on getting started with ELNs and how to use the notebook, though.

The Professional version, which staff and academics can use, offers two modes of export:

  • Notebook to PDF
  • Offline Notebook – HTML

When you request one of these exports, LabArchives will email it to the email address associated with your account, usually within 60 minutes. You then have 24 hours to download the file. So, the question is: what do you get with each?

PDF

There are two options when you go to download your PDF: 1) including comments and 2) including empty folders.

So, this means that comments are retained in the PDF and they look something like this:

It also means that, where possible, previews of images and documents show up in the PDF, as do the latest timestamps.

What you lose is:

  • previous versions and revision history
  • the ability to use files – these will have to be downloaded and saved separately (but this was expected from a PDF)

What you get:

  • a tidy, printable version of a lab notebook in its most recent iteration (including information on who generated the PDF and when)

What the PDF cover of a lab notebook looks like.

Offline HTML version

In this version, you are delivered a zip file which contains a number of folders and documents.

All of the attachments are stored in the attachments folder, both as originals and as thumbnails (low-resolution JPEGs used by LabArchives).

How does the HTML offline version stack up? Overall, the functionality for browsing is pretty good and latest timestamps are retained. You can also directly download the attachments on each page.

In this version, you do not get the comments. You also do not get any previous versions, only the latest files, updates and timestamps. But unlike the PDF, it is easy to navigate, and the uploaded attachments can be opened; they have not been compressed or visibly changed.

I would recommend taking a copy of both versions, since each offers different functions. However, neither offers a comprehensive export. Still, the most recent timestamps are useful for authenticity, though it would be even better if checksums were generated for files on upload and delivered in a manifest file with the HTML export.
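In the absence of a vendor-supplied manifest, one can be produced after downloading the export. Here is a minimal Python sketch of the idea; the folder layout and manifest filename are assumptions, and this is a workaround rather than anything LabArchives provides:

```python
import hashlib
from pathlib import Path

def build_manifest(export_dir, manifest_name="manifest-sha256.txt"):
    """Walk an exported notebook folder and write one 'checksum  path' line per file."""
    export_dir = Path(export_dir)
    lines = []
    for path in sorted(export_dir.rglob("*")):
        if path.is_file() and path.name != manifest_name:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(f"{digest}  {path.relative_to(export_dir)}")
    # Store the manifest alongside the export so fixity can be checked later
    (export_dir / manifest_name).write_text("\n".join(lines) + "\n")
    return lines
```

Run once immediately after download, the manifest captures the state of the export; any later fixity check can compare against it.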

Site-wide backup

Neither export option open to academics or staff produces a comprehensive version of the ELN; something is always lost in the export. However, LabArchives does offer an annual site-wide backup to local IT Services as part of the Enterprise agreement. That copy contains everything: all timestamps, comments and versions. This is promising, and academics should be aware of it, because they can request a copy from IT Services and receive a full, comprehensive backup of their ELN. It also means that IT Services, like LabArchives, is preserving a copy of the ELNs.

So, we are going to follow up with IT Services, to talk about how they will preserve and provide access to these ELN backups as part of the pilot. Many of you will have similar conversations with your own IT departments over time, as you will need to work closely with them to ensure good digital preservation practices.

And these are some of the questions you may want to consider asking when talking with your IT department about the preservation of ELNs:

  • How many backups are made? Where are they stored? What media are being used? Are backups checked and restored as part of testing and maintenance? How often is the media refreshed?
  • What about fixity?
  • What about the primary storage? Is it checked or refreshed regularly? Is there any redundancy if that primary storage is online? If it is offline, how can it be requested by staff?
  • What metadata is being kept and created about the different notebooks?
  • What file formats are being retained? Is any information being recorded about the different file formats? Presumably with research data, there would be a large variety.
  • How long are these annual backups being retained?
  • Is your IT department actively going to share the ELNs with staff?
  • If it is currently the PI and department’s responsibility to store physical notebooks, what will be the arrangement with electronic ones?

Got anything else you would ask your IT department when looking into preserving ELNs? Share in the comments below.

Project update: available datasets

Oxford’s Outreach and Training Fellow, Sarah, announces the first available datasets from the DPOC project. This is part of the project’s self-archiving initiative, where they will be making sure project outputs have a permanent home.


As the project begins to come to a close (or in my case, maternity leave starts next week), we’ve begun efforts to self-archive the project. We’ll be employing a variety of methods to clean out SharePoint sites and identify records with enduring value to the project. We’ll be crawling websites and Twitter to make sure we have a record for future digital preservation projects to utilise. Most importantly, we’ll give our project outputs a long-term home so they can be reused as necessary.

That permanent home is of course our institutional repositories. Our conference papers, presentations, posters, monograph chapters and journal articles will rest there. But so will numerous datasets and records of reports and other material that will be embargoed. I’ve started depositing my datasets already, into ORA (Oxford University Research Archive).

There are two new datasets now available for download:

You can also find links to them on the Project Resources page. As more project outputs are made available through institutional repositories, we’ll be making more announcements. And at the end of the project, we’ll do a full blog post on how we self-archived the DPOC project, so that the knowledge gained will not be lost after the project ends.


Any tips for how you self-archive a project? Share them in the comments.

How I got JHOVE running in a debugger

Cambridge’s Technical Fellow, Dave, steps through how he got JHOVE running in a debugger, including the various troubleshooting steps. As for what he found when he got under the skin of JHOVE—stay tuned.


Over the years of developing apps, I have come to rely upon the tools of the trade; so rather than read programming documentation, I prefer getting code running under a debugger and stepping through it, to let it show me what an app does. In my defence, Object Oriented code tends to get quite complicated, with various methods of one class calling unexpected methods of another… To avoid this, you can use Design Patterns and write Clean Code, but it’s also very useful to let the debugger show you the path through the code, too.

This was the approach I took when I took a closer look at JHOVE. I wanted to look under the hood of this application to help James with validating a major collection of TIFFs for a digitisation project by Bodleian Libraries and The Vatican Library.

Step 1: Getting the JHOVE code into an IDE

Jargon alert: ‘IDE’ stands for ‘Integrated Development Environment’: a piece of software for writing, managing, sharing, testing and (in this instance) debugging code.

So I had to pick the correct IDE to use… I already knew that JHOVE was a Java app: the fact it’s compiled as a Java Archive (JAR) was the giveaway, though if I’d needed confirmation, checking the coloured bar on the homepage of its GitHub repository would have told me, too.

Coding language analysis in a GitHub project

My Java IDE of choice is JetBrains’s IntelliJ IDEA, so the easiest way to get the code was to start a new project by Checking Out from Version Control, selecting the GitHub option and adding the URL for the JHOVE project (https://github.com/openpreserve/JHOVE). This copied (or ‘cloned’) all the code to my local machine.

Loading a project into IntelliJ IDEA directly from GitHub

GitHub makes it quite easy to manage code branches, i.e.: different versions of the codebase that can be developed in parallel with each other – so you can, say, fix a bug and re-release the app quickly in one branch, while taking longer to add a new feature in another.

The Open Preservation Foundation (who manage JHOVE’s codebase now) have (more or less) followed a convention of ‘branching on release’ – so you can easily debug the specific version you’re running in production by switching to the relevant branch… (…though version 1.7 seems to be missing a branch?) It’s usually easy to switch branches within your IDE – doing so simply pulls the code from the different branch down and loads it into your IDE, and your local Git repository in the background.

Finding the correct code branch in GitHub. Where’s 1.7 gone?

Step 2: Finding the right starting point for the debugger

Like a lot of apps that have been around for a while, JHOVE’s codebase is quite large, and it’s therefore not immediately obvious where the ‘starting point’ is. At least, it isn’t obvious if you don’t READ the README file in the codebase’s root. Once you finally get around to doing that, there’s a clue buried quite near the bottom in the Project Structure section:

JHOVE-apps: The JHOVE-apps module contains the command-line and GUI application code and builds a fat JAR containing the entire Java application.

… so the app starts from within the jhove-apps folder somewhere. A little extra sniffing about and I found a class file in the src/main/java folder called Jhove.java, which contained the magic Java method:

public static void main(String[] args)

…which is the standard start point for any Java app (and several other languages too).

However, getting the debugger running successfully wasn’t just a case of finding the right entry point and clicking ‘run’ – I also had to set up the debugger configuration to pass the correct command-line arguments to the application, or it fell at the first hurdle. In IntelliJ IDEA, this is achieved by editing the Run / Debug configuration. I set this up initially by right-clicking on the Jhove.java file and selecting Run JHOVE.main().

Running the Jhove class to start the application

The run failed (because I hadn’t added the command-line arguments), but at least IntelliJ was clever enough to set up a new Run / Debug configuration (called Jhove, after the class I’d run) to which I could then add the Program Arguments – in this case, the same command-line arguments you’d run JHOVE with normally (e.g. the module you want to run, the handler you want to output the result with, the file you want to characterise, etc.).

Editing the Run configuration in IntelliJ

I could then add a breakpoint to the code in the Jhove.main() method and off I went… Or did I?

Step 3: setting up a config file

So this gave me what I needed to start stepping through the code. Unfortunately, my first attempt didn’t get any further than the initial Jhove.main() method… It got all the way through, but then the following error occurred:

Cannot instantiate module: com.mcgath.jhove.module.PngModule

The clue for how to fix this was actually provided by the debugger as it ran, and it provides a good example of the kind of insight you get from running code in debug mode in your IDE. Because the initial set of command-line parameters I was passing in from the Run / Debug configuration didn’t contain a “-c” parameter to set a config file, JHOVE was automagically picking up its configuration from a default location: the JHOVE/config folder in my user directory. This existed, with a config file, because I’d also installed JHOVE on my machine the easy way beforehand.

Debugger points towards the config file mix-up

A quick look at this config showed that JHOVE was expecting all sorts of modules to be available to load, one of which was the ‘external’ module for PNG characterisation mentioned in the error message. This is included in the JHOVE codebase, but in a separate folder (jhove-ext-modules): the build script that pulls JHOVE together for production deployment clearly copes with copying the PNG module from this location to the correct place, but the IDE couldn’t find it when debugging.

So the solution? Put a custom config file in place, and remove the parts that referenced the PNG module. This worked a treat, and allowed me to track the code execution all the way through for a test TIFF file.

Adding an extra -c config file parameter and a custom config file.
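For illustration, the change to the config file looked something like the fragment below. JHOVE’s config lists each module as a class entry; the surrounding details here are a hedged reconstruction of a typical config rather than the exact file:

```xml
<!-- Custom jhove.conf fragment: built-in modules stay, e.g. -->
<module>
  <class>edu.harvard.hul.ois.jhove.module.TiffModule</class>
</module>
<!-- Removed, because the jhove-ext-modules classes are not on the
     IDE's classpath when debugging:
<module>
  <class>com.mcgath.jhove.module.PngModule</class>
</module>
-->
```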

Conclusion

Really, all the above, while making it possible to get under the skin of JHOVE, is just the start. Another blog post may follow regarding what I actually found when I ran through its processes and started to get an idea of how it worked (though as a bit of a spoiler, it wasn’t exactly pretty)…

But, given that JHOVE is more or less ubiquitous in digital preservation (i.e. all the major vended solutions wrap it up in their ingest processes in one way or another), hopefully more people will be encouraged to dive into it and learn how it works in more detail. (I guess you could just ‘read the manual’ – but if you’re a developer, doing it this way is more insightful, and more fun, too).

Electronic lab notebooks and digital preservation: part I

Outreach and Training Fellow, Sarah, writes about a trial of electronic lab notebooks (ELN) at Oxford. She discusses the requirements and purpose of the ELN trial and raises lingering questions around preserving the data from ELNs. This is part I of what will be a 2-part series.


At the end of June, James and I attended a training course on electronic lab notebooks (ELNs). IT Services at the University of Oxford is currently running a trial of LabArchives’ ELN offering. This course was intended to introduce departments and researchers to the trial and to encourage them to start their own ELN.

Screenshot of a LabArchives electronic lab notebook

When selecting an ELN for Oxford, IT Services considered a number of requirements. Those that were most interesting from a preservation perspective included:

  • the ability to download the data to store in an institutional repository, like ORA-data
  • the ability to upload and download data in arbitrary formats and to have it bit-preserved
  • the ability to upload and download images without any unrequested lossy compression

Moving from paper-based lab notebooks to an ELN is intended to help with compliance as well as collaboration. For example, the government requires every scientist to keep a record of every chemical used, for their lifetime. This has a huge impact on the Chemistry Department; the best way to search for a specific chemical is to do so electronically. There are also costs associated with storing paper lab notebooks, as well as the risk of damage to a notebook in the lab. An electronic lab notebook can solve some of those issues: storage will likely cost less and the risk of damage in a lab scenario is minimised.

But how do we preserve that electronic record for every scientist for at least the duration of their life? And what about beyond that?

One of the researchers presenting on their experience using LabArchives’ ELN stated, “it’s there forever.” Even today, there’s still an assumption that data online will remain online forever, and more generally that data will last forever. In reality, without proper management this will almost certainly not be the case. IT Services will be exporting the ELNs for back-up purposes, but the management and retention periods for those exports were not detailed.

There’s also a file upload limit of 250MB per individual file, meaning that large datasets will need to be stored somewhere else. There’s no limit to the overall size of the ELN at this point, which is useful, but individual file limits may prove problematic for many researchers over time (this has already been an issue for me when uploading zip files to SharePoint).

After learning how researchers (from PIs to PhD students) are using ELNs for lab work and having a few demos on the many features of LabArchives’ ELN, we were left with a few questions. We’ve decided to create our own ELN (available to us for free during the trial period) in order to investigate these questions further.

The questions around preserving ELNs are:

  1. Authenticity of research – are timestamps and IP addresses retained when the ELN is exported from LabArchives?
  2. Version/revision history – Can users export all previous versions of data? If not users, then can IT Services? Can the information on revision history be exported, even if not the data?
  3. Commenting on the ELN – are comments on the ELN exported? Are they retained if deleted in revision history?
  4. Export – What exactly can be exported by a user? What does it look like? What functionality do you have with the data? What is lost?

ELNs have the potential to open up collaboration and curation in lab work by allowing notes and raw data to be kept together, and by facilitating sharing and fast searching. However, the long-term preservation implications are still unclear, and many still seem complacent about the associated risks.

We’re starting our LabArchives’ ELN now, with the hope of answering some of those questions. We also hope to make some recommendations for preservation and highlight any concerns we find.


Does anyone have experience preserving ELNs? What challenges and issues did you come across? What recommendations would you have for researchers or repository staff to facilitate preservation?

Digital Preservation at Oxford Open Days

Oxford Fellow, Sarah, describes the DPOC team’s pop-up exhibition “Saving Digital,” held at the Radcliffe Science Library during Oxford Open Days #OxOpenDay. The post describes the equipment and games the team showcased over the two days and some of the goals they had in mind for this outreach work.


On 27 June and 28 June, Oxford ran Open Days for prospective students. The city was alive with open doors and plenty of activity. It was the perfect opportunity for us to take our roadshow kit out and meet prospective students with a pop-up exhibition called “Saving Digital”. The Radcliffe Science Library (RSL) on Parks Road kindly hosted the DPOC team and all of our obsolete media for two days in their lounge area.

The pop-up exhibition hosted at the RSL

We set up our table with a few goals in mind:

  • to educate prospective students about the rapid pace of technological change and the concern about how we’re going to read digital data off obsolete media in the future (we educated a few parents as well!)
  • to speak with library and university staff about their digital dilemmas and what we at the digital preservation team could do about it
  • to raise awareness about the urgency and need of digital preservation in all of our lives and to inform more people about our project (#DP0C)

To achieve this, we first drew people in with two things: retro gaming and free stuff.

Last minute marketing to get people to the display. It worked!

Our two main games were the handheld game, Galaxy Invader 1000, and Frak! for the BBC Micro.

Frak! on the BBC Micro. The yellow handheld console to the right is Galaxy Invader 1000.

Galaxy Invader 1000 by CGL (1980) is a handheld game, which plays a version of Space Invaders. This game features a large multi-coloured display and 3 levels of skill. The whole game was designed to fit in 2 kilobytes of memory. 

Frak! (1984) was a game released for the BBC Micro in 1984 under the Aardvark software label. It was praised for excellent graphics and game play. In the side scrolling game, you play a caveman named Trogg. The aim of the game is to cross a series of platforms while avoiding dangers that include various monsters named Poglet and Hooter. Trogg is armed with a yo-yo for defence. 

Second, we gave them some digestible facts, both in poster form and by talking with them:

Saving Digital poster

Third, we filled the rest of the table with obsolete media and handheld devices from roughly the last forty years, just a small sample of what was available! This let visitors hold some of the media of the past and marvel over how little it could hold, but how much it could do for its time. Then we asked them how they would read the data off it today. That probably concerned the parents more than their kids, as several of them admitted to having important digital stuff still on VHS or miniDV tapes, or on 3.5-inch disks! It got everyone thinking, at least.

A lot of obsolete media all in one place.

Lastly, we had an enthusiastic team with branded t-shirts made to emulate our most popular 1st generation badge, which was pink with a 3.5-inch disk in the middle. We gave away our last one during Open Days! But don’t worry, we have some great 2nd generation badges to collect now.

An enthusiastic team always helps. Especially if they are willing to demo the equipment.


A huge thank you to the RSL for hosting us for two days—we’ll be back on the 16th of July if you missed us and want to visit the exhibition! We’ll have a few extra retro games on hand and some more obsolete storage media!

Our poster was found on display in the RSL.

Update on the training programme pilot

Sarah, Oxford’s Outreach and Training Fellow, has been busy since the new year designing and running a digital preservation training programme pilot in Oxford. It consisted of one introductory course on digital preservation and six other workshops. Below is an update on what she did for the pilot and what she has learnt over the past few months.


It’s been a busy few months for me, so I have been quiet on the blog. Most of my time and creative energy has been spent working on this training programme pilot. In total, there were seven courses and over 15 hours of material. In the end, I trialled the courses on over 157 people from Bodleian Libraries and the various Oxford college libraries and archives. Many attendees were repeats, but some were not.

The trial gave me an opportunity to test out different ideas and various topics. Attendees were good at giving feedback, both during the course and after via an online survey. It’s provided me with further ideas and given me the chance to see what works or what doesn’t. I’ve been able to improve the experience each time, but there’s still more work to be done. However, I’ve already learned a lot about digital preservation and teaching.

Below are some of the most important lessons I’ve learned from the training programme pilot.

Time: You always need more

I found that I almost always ran out of time at the end of a course; it left no time for questions or to finish that last demo. Most of my courses could have either benefited from less content, shorter exercises, or just being 30 minutes longer.

Based on feedback from attendees, I’ll be making adjustments to every course. Some will be longer. Some will have shorter exercises with more optional components and some will have slightly less content.

While you might budget 20 minutes for an activity, you will likely use 5–10 minutes more. But it varies every time with the attendees: some might have a lot of questions, while others will be quieter. It’s almost better to overestimate the time and end early than to rush to cover everything. People need a chance to process the information you give them.

Facilitation: You can’t go it alone

In only one of my courses did I have to facilitate alone. I was run off my feet for the two hours because it was just me answering questions during exercises for 15 attendees. That doesn’t sound like a lot, but I had a hoarse voice by the end from speaking for almost two hours!

Always get help with facilitation—especially for workshops. Someone to help:

  • answer questions during exercises,
  • get some of the group idea exercises/conversations started,
  • make extra photocopies or print outs, and
  • load programs and files onto computers—and then help delete them after.

It is possible to run training courses alone, but having an extra person makes things run more smoothly and saves a lot of time. Edith and James have been invaluable support!

Demos: Worth it, but things often go wrong

Demos were vital for illustrating concepts, but they were also sometimes clunky and time-consuming to manage. I wrote up demo sheets to help. The demos relied on software or the Internet, both of which can and will go wrong. Patience is key; so is accepting that sometimes things will not go right. Processes might take a long time to run, or the course concludes before the demo is over.

The more you practice on the computer you will be using, the more likely things will go right. But that’s not always an option. If it isn’t, always have a back up plan. Or just apologise, explain what should have happened and move on. Attendees are generally forgiving and sometimes it can be turned into a really good teaching moment.

Exercises: Optional is the way to go

Unless you put out a questionnaire beforehand, it is incredibly hard to judge the skill level of your attendees. It’s best to prepare for all levels. Start each exercise slowly and have a lot of optional work built in for people who work faster.

In most of my courses I was too ambitious for the time allowed. I wanted them to learn and try everything. Sometimes I wasn’t asking the right questions on the exercises either. Testing exercises and timing people is the only way to tailor them. Now that I have run the workshops and seen the exercises in action, I have a clearer picture of what I want people to learn and accomplish—now I just have to make the changes.

Future plans

There were courses I would love to run in the future (like data visualisation and digital forensics) but did not have time to develop. I’d like to place them on a roadmap for future training, as well as reaching out more to the Oxford colleges, museums and other departments. I would also like to tailor the introductory course a bit more for different audiences.

I’d like to get involved with developing courses like Digital Preservation Carpentry that the University of Melbourne is working on. The hands-on workshops excited and challenged me the most. Not only did others learn a lot, but so did I. I would like to build on that.

At the end of this pilot, I have seven courses that I will finalise and make available under a Creative Commons licence. What I learned while developing these courses is that there aren’t many good templates available on the Internet to use as a starting point—you have to ask around for people willing to share.

So, I am hoping to take the work that I’ve done and share it with the digital preservation community. I hope they will be useful resources that can be reused and repurposed. Or at the very least, I hope it can be used as a starting point for inspiration (basic speakers notes included).

These will be available via the DPOC website sometime this summer, once I have been able to make the changes necessary to the slides and exercises—along with course guidance material. It has been a rewarding experience (as well as an exhausting one); I look forward to developing and delivering more digital preservation training in the future.

Digital preservation with limited resources

What should my digital preservation strategy be, if I do not have access to repository software or a DAMS system?

At Oxford, we recently received this question from a group of information professionals working for smaller archives. This will be a familiar scenario for many – purchasing and running repository software will require a regular dedicated budget, which many archives in the UK do not currently have available to them.

So what intermediate solutions could an archive put in place to better its chances of not losing digital collection content until such a time as repository software becomes an option? This blog post summarises some key points from the meeting with the archivists, which we hope may be useful for other organisations asking the same question.


Protect yourself against human error

CC-BY KateMangoStar, Freepik

Human error is one of the major risks to digital content. It is not uncommon for users to inadvertently drag files or folders, or to delete content by mistake. It is therefore important to have strict user restrictions in place that limit who can delete, move, and edit digital collections. For this purpose, you need to ensure that you have defined an “archives directory” which is separate from any “working directories” where users can still edit and actively work with content.

If you have IT support available to you, then speak to them about setting up new user restrictions.


Monitor fixity

CC-BY Dooder, Freepik

However, even with strong user restrictions in place, human error can occur. In addition to enforcing stronger restrictions on the “archives directory”, tools like Fixity from AVP can be used to spot whether content has been moved between folders, deleted, or edited. By running regular Fixity reports, an archivist can spot any suspicious-looking changes.

We are aware that time constraints are a major factor inhibiting staff from adding tasks to their workload, but luckily Fixity can be set to run automatically on a weekly basis, providing users with an email report at the end of the week.
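Under the hood, a fixity report of this kind boils down to comparing fresh checksums against a stored manifest. The rough Python sketch below illustrates the idea; it is not AVP’s Fixity itself, and the two-column “checksum path” manifest format is an assumption:

```python
import hashlib
from pathlib import Path

def verify_manifest(archive_dir, manifest_file):
    """Compare current file checksums against a 'checksum  relative/path' manifest.
    Returns (missing, changed) lists so an archivist can review anything suspicious."""
    archive_dir = Path(archive_dir)
    missing, changed = [], []
    for line in Path(manifest_file).read_text().splitlines():
        expected, rel_path = line.split(None, 1)
        target = archive_dir / rel_path
        if not target.exists():
            missing.append(rel_path)       # file deleted or moved
        elif hashlib.sha256(target.read_bytes()).hexdigest() != expected:
            changed.append(rel_path)       # file edited or corrupted
    return missing, changed
```

A script like this could be scheduled weekly (e.g. via cron or Task Scheduler) and the results emailed, which is essentially the workflow Fixity automates for you.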


Understand how your organisation does back-ups

CC-BY Shayne_ch13, Freepik

A common IT retention period for back-ups of desktop computers is 14 days. The two week period enables disaster recovery of working environments, to ensure that business can continue as usual. However, a 14 day back-up is not the same as preservation storage and it is not a suitable solution for archival collections.

In this scenario, where content is stored on a file system with no versioning, the archivist has only 14 days to spot any issues and retrieve an older back-up before it is too late. So please don’t go on holiday or get ill for long! Even with tools like Fixity, fourteen days is an unrealistic turn-around time (if the issue is spotted at all).

If possible, try to make the case to your organisation that you require more varied types of back-ups for the “archival directory”. These should include back-ups which are retained for at least a year. Using a mix of tape storage and/or cloud service providers can be a less expensive way of storing additional back-ups which do not require ongoing access. It is an investment worth making.

As a note of warning, though: you are still only dealing in back-ups. This is not archival storage. If there are issues with multiple back-ups (due to, for example, transfer or hardware errors) you can still lose content. The longer-term goal, once better back-ups are in place, should be to monitor the fixity of multiple copies of content from the “archival directory”. (For more information about the difference between back-ups used for regular IT purposes and storage for digital preservation, see the DPC Handbook.)


Check that your back-ups work
Once you have got additional copies of your collection content, remember to check that you can retrieve them again from storage.

Many organisations have been in the position where they think they have backed up their content – only to find out that their back-ups were not created properly when they need them. By testing retrieval you can protect your collections against this particular risk.
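A simple retrieval test along these lines can also be scripted: restore a sample of files from back-up to a scratch location, then verify each against the live copy’s checksum. This is an illustrative sketch only; the function name, directory layout, and sample size are all assumptions.

```python
import hashlib
import random
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def spot_check_restore(source_root, restored_root, sample_size=10):
    """Compare a random sample of files in the live collection against the
    same files retrieved from back-up. Return a list of (path, problem)
    pairs for files missing from the restore or whose checksums differ."""
    source_root, restored_root = Path(source_root), Path(restored_root)
    files = [p for p in source_root.rglob("*") if p.is_file()]
    sample = random.sample(files, min(sample_size, len(files)))
    problems = []
    for src in sample:
        rel = src.relative_to(source_root)
        restored = restored_root / rel
        if not restored.is_file():
            problems.append((str(rel), "missing from restore"))
        elif sha256_of(src) != sha256_of(restored):
            problems.append((str(rel), "checksum mismatch"))
    return problems
```

An empty result means the sampled files came back from back-up intact; anything else is a reason to investigate before the retention window closes.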


But… what do I do if my organisation does not do back-ups at all?
Although a 14-day back-up retention period is common in many businesses, it is far from the reality within which certain types of archives operate. A small community organisation may, for example, do all of its business on a laptop or workstation shared by all staff (including the archive).

This is a dangerous position to be in, as hardware failure can cause immediate and total loss. There is no magic bullet for solving this issue, but some of the advice which Sarah (Training and Outreach Fellow at Bodleian Libraries) has provided in her Personal Digital Archiving Course could apply.

Considerations from Sarah’s course include:

  • Create back-ups on additional removable hard drive(s) and store them in a different geographical location from the main laptop/workstation
  • Make use of free cloud storage limits (do check the licenses though to see what you are agreeing to – it’s not where you would want to put your HR records!)
  • Again – remember to check your back-ups!
  • For digitized images and video, consider using the Internet Archive’s Gallery as an additional copy (note that this is open to the public, and requires assigning a CC-BY license). (If you like the work that the Internet Archive does, you can donate to them here.)
  • Apply batch-renaming tools to file names to ensure that they contain understandable metadata in case they are separated from their original folders

(Email us if you would like to get a copy of Sarah’s lecture slides with more information)
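As an illustration of the batch-renaming suggestion in the list above, a small script can prefix file names with identifying metadata so they remain understandable outside their original folders. The naming scheme below (a collection code plus the parent folder name) is purely hypothetical, and dedicated batch-renaming tools offer the same in a friendlier package.

```python
from pathlib import Path

def batch_rename(folder, prefix, dry_run=True):
    """Prefix every file in `folder` with a collection code and the folder
    name, so files stay identifiable even if separated from their original
    directory structure. With dry_run=True, only report planned renames."""
    folder = Path(folder)
    renames = []
    for p in sorted(folder.iterdir()):
        if p.is_file() and not p.name.startswith(prefix):
            target = p.with_name(f"{prefix}_{folder.name}_{p.name}")
            renames.append((p.name, target.name))
            if not dry_run:
                p.rename(target)
    return renames
```

The dry-run default lets you review the planned names (e.g. `tape1.wav` becoming `COLL01_interviews_tape1.wav`) before committing; skipping files that already carry the prefix makes repeat runs safe.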


Document all of the above

CC-BY jcomp, Freepik

Make sure to write down all the decisions you have made regarding back-ups, monitoring, and other activities. This allows for succession planning and ensures that you have a paper trail in place.


Stronger in numbers

CC-BY, Kjpargeter, Freepik

Licenses, contracts and ongoing management are expensive. Another avenue to consider is looking to peer organisations to lower some of these costs. This could include entering into joint contracts with tape storage providers, or consortium models for using repository software. One initiative which has done this is the NEA (Network Electronic Archive) group, an established repository which has supported 28 small Danish archives for over ten years.


Summary:
These are some of the considerations which may lower the risk of losing digital collections. Do you have any other ideas (or practical experience) of managing and preserving digital collections with limited resources, and without using a repository or DAMS system?

Closing the digitization gap

MS. Canon. Misc. 378, fol. 136r

Bodleian Digital Library’s Digitization Assistant, Tim, guest blogs about the treasures he finds while migrating and preparing complete, high-fidelity digitised items for Digital Bodleian. The Oxford DPOC Fellows feel lucky to sit across the office from the team that manages Digital Bodleian and so many of our amazing digitized collections.


We might spend most of our time on an industrial estate here at BDLSS, but we still get to do a bit of treasure-hunting now and then. Our kind has fewer forgotten ruins or charming wood-panelled reading rooms than we might like, admittedly – it’s more a matter of rickety MySQL databases and arcane PHP scripts. But the rewards can be great. Recent rummages have turned up a Renaissance masterpiece, a metaphysical manuscript, and the legacy of a Polish queen.

Back in October, Emma wrote about our efforts to identify digital images held by the Bodleian which would make good candidates for Digital Bodleian, but for one reason or another haven’t yet made it onto the site. Since that post was published, we have been making good progress migrating images from our legacy websites, including the Oxford Digital Library and – coming soon to Digital Bodleian – our Luna collection of digitized slides. Many of the remaining unpublished images in our archive are unsuitable for the site, as they don’t constitute full image sets: we’re trying to keep Digital Bodleian a preserve for complete, high-fidelity digitized items, rather than a dumping-ground for fragmentary facsimiles. But among the millions of images are a few sets of fully-photographed books and manuscripts still waiting to be showcased to the public on our digital platform.


A recent Digital Bodleian addition: the Notitia Dignitatum, a hugely important Renaissance copy of a late-Roman administrative text (MS. Canon. Misc. 378).

Identifying these full-colour, complete image sets isn’t as easy as we’d like, thanks to some slightly creaky legacy databases and the sheer volume of material versus limited staff time. An approach mentioned by Emma has, however, yielded some successes. Taking suggestions from our curators – and, more recently, our Twitter followers – we’ve been able to draw up a digitization wishlist, which also serves as a list of targets for when we go ferreting around in the archive. Most wishlist entries haven’t been fully photographed, but we’ve turned up a clutch of exciting finds from these efforts.

Finding the images is only half the hunt, though. To present the digital facsimiles usefully, we need to give them some descriptive metadata. Digital Bodleian isn’t intended to be a catalogue, but we like to provide some information about an item where we have it, and make our digitized collections discoverable, as well as giving context for non-experts. But as with finding images, locating useful metadata isn’t always simple.

Most of the items on Digital Bodleian sit within the Bodleian’s Special Collections. Each object is unique, requiring the careful attention of an expert to be properly catalogued. For this reason, modern cataloguing efforts focus on subsets of the collections. For items not covered by these, often the only published descriptions (if any) are in 19th-century surveys – which can be excellent, but can also be terse or out of date. Other descriptions and scholarly analyses are spread across a variety of published and unpublished material, some of it available in digital form, most of it not. This all presents a challenge when it comes to finding information to go along with items on Digital Bodleian: much as we’d like to be, Emma and I aren’t yet experts on all of the periods, areas and traditions represented in the Bodleian’s holdings.


Another item pulled from the Bodleian’s image archive: a finely decorated 16th-century Book of Hours (MS. Douce 112).

Happily, our colleagues responsible for curating these collections are engaged in constant, dogged efforts to make descriptions more accessible. Especially useful to those of us unable to pop into the Weston to rifle through printed finding aids is a set of TEI-based electronic catalogues*, developed in conjunction with BDLSS. These aim to provide systematically-structured digital catalogue entries for a variety of Western and Oriental Special Collections. They’re fantastic resources, but they represent ongoing cataloguing campaigns rather than finished products. Nor do they cover all the Special Collections.

Our most valuable resource therefore remains the ever-patient curators themselves. They kindly help us track down information about the items we’re putting on Digital Bodleian from a sometimes-daunting array of potential sources, put us in touch with other experts where required, and are always ready to answer our questions when we need something clarified. This has been enormously helpful in providing descriptions for our new additions to the site.

With this assistance, and the help of our colleagues in the Imaging Studio, who provide similar expertise in tracking down the images, and try hard to squeeze in time to photograph items from the aforementioned wishlist, we’ve managed to get 25 new treasures onto Digital Bodleian since Emma’s post, on top of all the ongoing new photography and migration projects. That comes to around 9,300 images in total, and we have more items on the way (due soon are a couple of Mesoamerican codices and an Old Sundanese text on palm leaves from Java). Slowly, we’re closing the gap.

A selection of recent items we’ve dug up from our archives:

MS. Ashmole 304
MS. Ashmole 399
MS. Auct. D. inf. 2. 11
MS. Canon. Bibl. Lat. 61
MS. Canon. Misc. 213
MS. Canon. Misc. 378
MS. Douce 112
MS. Douce 134
MS. Douce 40
MS. Holkham misc. 49
MS. Lat. liturg. e. 17
MS. Lat. liturg. f. 2
MS. Laud Misc. 108
MS. Tanner 307

 

*Currently live are catalogues of medieval manuscripts, Hebrew manuscripts, Genizah fragments, and union catalogues of Islamicate manuscripts and Shan Buddhist manuscripts in the United Kingdom. Catalogues of Georgian and Armenian manuscripts, to an older TEI standard, are still online and are currently undergoing conversion work. Similar, non-TEI-based resources for incunables and some of our Chinese Special Collections are also available.

Project update

A project update from Edith Halvarsson, Policy and Planning Fellow at Bodleian Libraries. 


Ms Arm.e.1, Folio 23v

Bodleian Libraries’ new digital preservation policy is now available to view on our website, after having been approved by Bodleian Libraries’ Round Table earlier this year.

The policy articulates Bodleian Libraries’ approach and commitment to digital preservation:

“Bodleian Libraries preserves its digital collections with the same level of commitment as it has preserved its physical collections over many centuries. Digital preservation is recognized as a core organizational function which is essential to Bodleian Libraries’ ability to support current and future research, teaching, and learning activities.”

 

Click here to read more of Bodleian Libraries’ policies and reports.

In other related news, we are currently in the process of ratifying a GLAM (Gardens, Libraries and Museums) digital preservation strategy which is due for release after the summer. Our new digitization policy is also in the pipeline and will be made publicly available. Follow the DPOC blog for future updates.

Gathering the numbers: a maturity and resourcing survey for digital preservation

The ability to compare ourselves to peer institutions is key when arguing the case for digital preservation within our own organisations. However, finding up-to-date and correct information is not always straightforward.

The Digital Preservation at Oxford and Cambridge (DPOC) project has joined forces with the Digital Preservation Coalition (DPC) to gather some of the basic numbers that can assist staff in seeking to build a business case for digital preservation in their local institution.

We need your input to make this happen!

The DPOC and the DPC have developed a survey aimed at gathering basic data about maturity levels, staff resources, and the policy and strategy landscapes of institutions currently doing or considering digital preservation activities. (The survey intentionally does not include questions about the type or size of the data organisations are required to preserve.)

Completing the survey will only take 10-20 minutes of your time, and will help us better understand the current digital preservation landscape. The survey can be taken at: https://cambridge.eu.qualtrics.com/jfe/form/SV_brWr12R8hMwfIOh

Deadline for survey responses is: Thursday 31 May 2018.

For those wanting to know upfront what questions are asked in the survey – here is the full set of Survey Questions (PDF). Please keep in mind the survey is interactive and you may not see all of the questions when filling this in online (as the questions only appear in relation to your previous responses). Responses must be submitted through the online survey.

Anonymised data gathered as part of this maturity and resourcing survey will be made available via this DPOC website.

For any questions about the survey and its content, please contact: digitalpreservation@lib.cam.ac.uk