Voltaire’s comments on Frederick II’s L’Art de la guerre, Clement Draper’s depictions of chemical processes, Herman Melville’s pencil scores, or Samuel Beckett’s reading traces… these are all what we define as marginalia: the reader’s markings in the margins of a book. These markings are difficult to pin down in terms more specific than scribbles, references, and thoughts captured on a page. There is no apparent common rule that groups them together and specifies how they should be understood as a whole, even though they are often studied as an ensemble or a genre. Furthermore, the line – if there is a line – that defines the margins themselves is not always evident, and that is why scholars are constantly questioning what marginalia are, while trying to differentiate between the primary text and its annotations. As Laura Estill acknowledges in her article ‘Encoding the edge: manuscript marginalia and the TEI’, ‘perhaps there are easier distinctions to be made when marginalia is handwritten in printed books – although even then, in the case of authorial revisions, stop-press corrections, or (say) Whitman’s notes in another book, there is no easy answer as to what is “marginal”’.
A discussion of what exactly this marginal space is and how it interacts with the text is crucial when considering the central query of the Editing and Digitising Marginalia workshop: how can the marginalia of source material be encoded as fully, accurately, and helpfully as possible? By trying to define the purpose and character of this marginalia, Nicholas Cronk, Gillian Pink, and Dan Barker (on Voltaire), Zoe Screti (on Draper), Christopher Ohge (on Melville), and Dirk Van Hulle (on Beckett) delved into the challenges of digitally editing marginalia, which requires a completely different framework of analysis from pre-digital editions or even digital facsimile editions. Following on from the OCTET colloquium on Writers’ Libraries, this workshop explored the importance of studying authors through their reading practices. It focused on the editorial choices behind digitally encoding marginalia, with the added layer of complexity that derives both from the difficulties and from the possibilities of the digital medium.
When designing a data model that could represent marginalia as a key component of Voltaire’s complete works, for example, the verbal elements were easier to encode than the non-verbal marks. Voltaire used different materials to underline, draw, and mark the pages he was reading, or he folded, licked, and stuck them together. How can these practices possibly be translated into the digital sphere? For this digital project, the source material came from the transcribed print volumes of the Corpus des notes marginales de Voltaire, which were themselves one step removed from the original source material, since they had already undergone an editorial process that transformed the original squiggles into typeset signs.
Dan Barker, the Digital Consultant at the Voltaire Foundation, explained in his presentation ‘The aim of digitising OCV’ how he had created a system of mark types to record these marks in order to reproduce the source material fully, accurately, and helpfully. He classified a mark according to its nodes (the points where lines meet or cross) and edges (uninterrupted lines) to convey its nature, presence, and relationship to the text. Even though the method does not account for the colour, medium, intensity, or even authorship of marginal marks, readers will be able to search for specific classifications of marks and see whether Voltaire used them more than once, and where. It is a process that operates within the principles proposed by Gillian Pink of what a born-digital edition of a manuscript should be: legible, containing both verbal and non-verbal elements, and searchable, taking into account the modernisation of the transcription to avoid the potential pitfalls of searching for idiosyncratic spellings.
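The idea of classifying a mark by its topology can be illustrated with a minimal sketch. Everything here is hypothetical: the class name, the ‘nodes/edges’ label format, and the page index are illustrative inventions, not the actual data model used at the Voltaire Foundation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MarkType:
    """Hypothetical classification of a non-verbal mark by its
    topology: nodes (points where lines meet or cross) and
    edges (uninterrupted line segments)."""
    nodes: int
    edges: int

    def label(self) -> str:
        # e.g. "1n4e" for one node and four edges
        return f"{self.nodes}n{self.edges}e"

# A plain underline: one uninterrupted line, no crossings.
underline = MarkType(nodes=0, edges=1)
# An 'X' mark: two strokes crossing at a single point gives
# one node with four edges radiating from it.
cross = MarkType(nodes=1, edges=4)

# With marks indexed by type, a reader could retrieve every
# page where a given shape of mark occurs.
index = {underline.label(): ["p. 12", "p. 47"],
         cross.label(): ["p. 3"]}
print(index["1n4e"])  # pages bearing an 'X'-type mark
```

The point of such a scheme is that searches operate on the shape of the mark itself, independently of colour, medium, or intensity, which matches the trade-off described above.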
The issue of searchability was further discussed by Zoe Screti, a postdoctoral researcher at the Voltaire Foundation, in her paper ‘Alchemical marginalia written in prison and cataloguing marginalia’. The quantity and diversity of Clement Draper’s marginalia, in the form of memory aids, summaries, symbols, diagrams, or eyewitness accounts, are not reflected in the catalogue entries of his archival materials. That discrepancy points towards an incompatibility between the way catalogues were built and the questions that scholars are asking now, which is why Screti is updating the system with usability and consistency in mind, both of which aim to make sources of marginalia accessible and discoverable.
She has access to a subset of Voltaire’s manuscripts and is cataloguing them from scratch, which gives her a latitude in decision-making that others might not have. The manuscripts are also small in size, allowing for a level of granularity that would be difficult to achieve when working with Draper’s notebooks, for example. But the challenge of ensuring that catalogues keep pace with research on marginalia remains, in large and small collections alike. If we want to be able to locate specific categories of marginalia, as is the case with Voltaire’s non-verbal markings, and to capture such nuances in our current search and text-analysis tools, they need to appear in the catalogue entries, and that means going beyond filters and single codes.
Finally, both Melville’s and Beckett’s marginalia are representative of common methodological issues in terms of how to create a uniform TEI data model. As Christopher Ohge explained in his talk entitled ‘Melville’s Marginalia Online, with some general provocations’, there is no solution that covers all cases of marginalia encoding, which is why current projects have very different data models. He provided an overview of those differences, showing how in Keats’s Paradise Lost, a Digital Edition or Whitman’s marginalia to Thoreau’s A Week on the Concord and Merrimack Rivers, marginalia are wedged into the hierarchy of the existing text to make them work within different structures, while the Archaeology of Reading has a bespoke XML tagging structure with a marginalia attribute.
But changing content IDs and crossing the hierarchy of line elements, or using a general term that glosses over subtleties, is not the methodological solution chosen for Melville’s Marginalia Online. This research tool uses software developed by the Whitman Project to generate page coordinates for the already uploaded facsimile images, so that a word search leads directly to the relevant page. Melville’s marginalia are encoded in a <div> element with several attribute values, so as to capture all the detail and information. The question Ohge then posed was this: how much context is needed to understand marginalia, and how much granularity?
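To make the attribute-based approach concrete, here is a minimal sketch of how a marginal annotation might be encoded as a <div> whose attributes carry the descriptive detail. The attribute names and values (hand, medium, place) are plausible TEI-style conventions chosen for illustration; the actual schema of Melville’s Marginalia Online may differ.

```python
import xml.etree.ElementTree as ET

# Build a <div> carrying hypothetical descriptive attributes
# for a piece of marginalia: who wrote it, in what medium,
# and where on the page it sits.
marginalia = ET.Element("div", {
    "type": "marginalia",
    "hand": "#melville",      # assumed hand identifier
    "medium": "pencil",
    "place": "margin-right",
})
note = ET.SubElement(marginalia, "note")
note.text = "a score beside the underlined passage"

xml = ET.tostring(marginalia, encoding="unicode")
print(xml)
```

Because the detail lives in attributes rather than in the surrounding text hierarchy, a search tool can filter on any combination of them (all pencil marks, all marks in a given hand) without disturbing the structure of the primary text.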
In an intervention entitled ‘Editing Beckett’s Marginalia’, Dirk Van Hulle answered by stating that it depends on the author, the type of marginalia they wrote, and the resources available to the digital project that provides such context. One of the key things that digital marginalia allows, as is the case with Beckett, is an insight not only into the reader himself, but also into the underlying structure of all his drafts and notebooks: a network of markings that, in turn, puts into context how his reading engendered his writing.
In order to make that network visible and searchable, one of the solutions going forward is to use IIIF (International Image Interoperability Framework) as a means of engaging with marginalia. Making resources IIIF compliant ensures that they are interoperable with other software, as well as easy to maintain as an online resource with which scholars can interact. It is also culturally inclusive, as it operates on a ‘blank canvas’ principle, meaning that non-codex objects can be presented in full.
IIIF image viewers could potentially work alongside ever-improving transcription software, such as Transkribus, to produce comprehensive resources that display an image of the page with all its marginalia, paratext, and physical attributes, together with an interactive description and a viewable transcription. Because IIIF can pinpoint areas of an image that carry their own locus of metadata, elements of a text can be described accurately and efficiently, meaning that more effort can be devoted to accurate scholarship, a point Gillian Pink made in her paper ‘Editing Voltaire’s commentary on Frederick II’s L’Art de la guerre – third time lucky?’ She proposed, for example, using different colours for the different hands that worked on the manuscript (Frederick II, his secretary, and Voltaire) as a way of taking advantage of IIIF’s annotation possibilities. However, the question remains: how can we decide which textual blocks should be transcribed as a unit in order to represent Voltaire’s marginalia properly?
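The kind of region-level annotation described above can be sketched with the W3C Web Annotation model on which IIIF builds: an annotation body (here a transcription plus a tag naming the hand) attached to a rectangular region of a canvas via a `#xywh=` media fragment. The URIs, hand label, and colour field below are placeholders for illustration, not part of any existing edition.

```python
import json

def hand_annotation(canvas_uri, xywh, text, hand, colour):
    """Build a Web Annotation (the model IIIF uses) attaching
    a transcription to a rectangular region of a canvas.
    'colour' is a non-standard hint: a viewer would normally
    apply per-hand styling through its own mechanism."""
    return {
        "type": "Annotation",
        "motivation": "commenting",
        "body": [
            {"type": "TextualBody", "format": "text/plain",
             "value": text},
            # Tagging the hand lets a viewer group or colour
            # annotations by who wrote them.
            {"type": "TextualBody", "purpose": "tagging",
             "value": hand},
        ],
        # Media fragment: x,y,width,height in canvas pixels.
        "target": f"{canvas_uri}#xywh={xywh}",
        "display_colour": colour,
    }

anno = hand_annotation(
    "https://example.org/iiif/canvas/p1",  # placeholder URI
    "120,340,480,60",
    "marginal correction to the verse",
    "Voltaire",
    "red",
)
print(json.dumps(anno, indent=2))
```

Encoding each hand as a tag on the annotation, rather than baking it into the image, is what makes a colour-per-hand display like the one Pink proposed possible without altering the underlying facsimile.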
The various contributions to the Editing and Digitising Marginalia workshop helped us sketch some answers to this question. Nonetheless, many threads were left to pull, ensuring that, hopefully, there will be another workshop to show how all the projects have built on existing methods while pushing beyond their current limits and scope, so that we keep rediscovering authors through the marginal notes that they left.
– Joana Roque