The Digitizing Enlightenment Symposium was held on 12–13 July 2016 at Western Sydney University. Convened in collaboration with Glenn Roe (ANU), the symposium brought together venerable and youthful, big and small, personal and collaborative digital humanities projects focused on 18th-century France. The meeting preceded the “launch” of the ARC project “Mapping Print, Charting Enlightenment” at Western Sydney University during the 20th George Rudé Seminar.
Along with Glenn Roe, Simon Burrows, Jason Ensor, and Katie McDonough welcomed participants to the first of a series of meetings for scholars who engage with DH as they study and teach the French 18th century.
Day one included presentations from the major 18th c. French DH projects such as ARTFL, Electronic Enlightenment, Mapping the Republic of Letters, the Comédie Française Registers Project (with its predecessor CESAR), and the French Book Trade in Enlightenment Europe.
We continued with introductions to new projects such as Alicia Montoya’s “Middlebrow Enlightenment”/MEDIATE and a digital edition of all 18th c. French novels (the third generation, if you will, of this work).
After lunch, we dove into the history of specific methods and tools currently or potentially useful in these projects:
- Glenn Roe started us off using the ARTFL Encyclopédie as a test case for the question “Is humanities data Big Data?” He walked us through the ways that Diderot also faced data model challenges and introduced new text mining technologies as a way to compare a new map of Encyclopédie article classifications with the 18th c. discipline-oriented Système Figuré des connaissances humaines. With the entire article corpus at hand, we can begin to look at the Encyclopédie based on cross-references rather than subject classifications.
- Nicole Coleman provided a brief history of work at what is now the Humanities + Design Lab at Stanford and how humanistic research questions drove the development of the visualization tools Palladio and Breve. Frustration with out-of-the-box software and even with the assumptions behind data viz science led the H+D team to develop tools that accommodate and even highlight uncertainty and incompleteness in datasets.
- Katie McDonough (that’s me) proposed a cross-project historical gazetteer for 18th c. French geodata, expanding on work that has been done by the LaDéHis group at the EHESS. Some might say that 2016 is the year of linked historical geodata. As many of these large French DH projects enter their next phases, how we conceptualize the spatial elements and capacities of our datasets and tools is a key issue. We have an opportunity now to collaborate with historical geodata research within and beyond France.
- Dan Edelstein shared work on Procope, a prosopographical metadata ontology for early modern social network analysis. In collaboration with Claude Willan, the H+D Lab is using this metadata as part of a project to explore “who” is the Enlightenment. Procope (like its sister project Fibra at the H+D Lab) would provide researchers with access to existing authority files (e.g. VIAF).
- Using the Comédie Française Registers Project as a case study, Christopher York discussed how humanists use faceted browsing for research, and what the advantages are of adopting a column-centric model over a row-centric relational database model. (Side note: If the past five years or so have seen many DH practitioners, sometimes with little initial technical expertise, critically engage with and develop specialized viz tools, are we now seeing those same kinds of researchers peer into the “black box” of the dataset and database design? Before analysis and representation comes the presentation of the evidence. If we are better informed about the form and function of databases, we can understand database work as part of scholarly communication in the humanities.)
- Matje van de Camp discussed, from her perspective as a computational linguist, what the future holds for OCR (Optical Character Recognition) of imperfectly scanned materials. Comparing the results of an experiment on 18th c. text using ABBYY FineReader, Tesseract, and Ocular software, she determined that Ocular (which was actually developed at the University of Texas for historical documents) paired with a variety of adaptations (within and post-OCR) provides the best results. Above all, having a larger corpus of training data would improve results significantly. In the meantime, expert annotation remains one of the best post-OCR options.
- Simon Burrows walked us through the history of the FBTEE project. Expanding from its origins as a digitized version of data on clients, events, and books from one publisher’s network (the STN), FBTEE is now one piece of the larger “Mapping Print, Charting Enlightenment” project that tackles data on the entire lifespan of books, from production to transmission.
- Craig Pett from Gale-Cengage introduced us to their efforts to share metadata from ECCO with researchers whose institutions subscribe to the database.
- To wrap up our afternoon, Jason Ensor bravely took us outside of 18th c. France to demonstrate how his participation in “Mapping Print, Charting Enlightenment” (including his Manuscripts tool) is informing book history in Australia and beyond.
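The cross-project gazetteer proposed above can be pictured with a toy sketch. Assuming each project exports pairs of (project, place name as written) and a shared gazetteer maps attested spellings to one stable identifier, records from different datasets become linkable. All names, URIs, and variant spellings below are invented for illustration; this is not the shape of any existing gazetteer.

```python
# Hypothetical gazetteer: attested spellings -> stable place URI.
GAZETTEER = {
    "lyon": "http://example.org/place/lyon",
    "lion": "http://example.org/place/lyon",        # variant spelling
    "marseille": "http://example.org/place/marseille",
    "marseilles": "http://example.org/place/marseille",
}

def reconcile(records):
    """Group (project, name) records by shared place URI."""
    linked = {}
    for project, name in records:
        uri = GAZETTEER.get(name.lower())
        if uri is not None:
            linked.setdefault(uri, []).append((project, name))
    return linked

records = [("FBTEE", "Lyon"), ("MEDIATE", "Lion"), ("CFRP", "Marseilles")]
linked = reconcile(records)
print(len(linked))  # 2 distinct places across three project records
```

The point of the URI layer is that each project keeps its own spellings while the identifier does the cross-project matching.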
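The contrast between column-centric and row-centric models discussed above can be shown in miniature: a facet count needs only the column being faceted on, so columnar storage scans one contiguous list instead of touching every field of every row. The records below are invented for illustration and are not drawn from the CFRP data.

```python
from collections import Counter

# Row-centric: one record (dict) per performance.
rows = [
    {"play": "Tartuffe", "author": "Molière", "season": "1680-81"},
    {"play": "Le Cid", "author": "Corneille", "season": "1680-81"},
    {"play": "Phèdre", "author": "Racine", "season": "1681-82"},
    {"play": "Tartuffe", "author": "Molière", "season": "1681-82"},
]

# Column-centric: one list per field, aligned by position.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# A facet count now reads a single column.
author_facet = Counter(columns["author"])
print(author_facet)  # Counter({'Molière': 2, 'Corneille': 1, 'Racine': 1})
```

At this scale the difference is invisible, but over millions of records a facet query that reads one column rather than whole rows is the design choice that makes interactive faceted browsing feasible.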
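OCR comparisons like the one described above are commonly scored by character error rate: the edit distance between an engine's output and a hand-checked transcription, divided by the transcription's length. This is a generic sketch of that metric, not the evaluation code from the experiment, and the sample strings are invented.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(ocr_output, ground_truth):
    """Character error rate: edits needed, per ground-truth character."""
    return edit_distance(ocr_output, ground_truth) / len(ground_truth)

truth = "les lettres persanes"
print(cer("los lettres porsanes", truth))  # 0.1 = 2 errors / 20 characters
```

A lower CER on the same test page is what "best results" means in practice when comparing engines such as ABBYY FineReader, Tesseract, and Ocular.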
Day two began with presentations from three newcomers to 18th c. French DH. Elizabeth Bond discussed a project she collaborates on with Robert Bond to perform topic modeling analysis on letters to the editor in provincial newspapers in the late 18th c. Bryan Banks described a text mining approach to the study of the “Protestant Enlightenment.” Finally, Bill Weber demonstrated how the CFRP has transformed his research on the repertoire of the Opéra and the Comédie Française in Paris. Check out the Storify collections of tweets from both days of the conference to read more about the presentations.
The Symposium concluded with a collective discussion about the future of DH for eighteenth-century studies, collaborations, and other opportunities in teaching and research. In keeping with the French tradition of end-of-conference summaries, Greg Brown (UNLV/Oxford Studies in the Enlightenment) commented on the proceedings.
What to make of the meeting? Above all, it was groundbreaking to have these researchers all in one room. It is a typical problem in many fields: it can be very hard to bring DH projects that share chronological and/or geographical ground into face-to-face conversation. We are often faced with the choice of attending French studies conferences or Digital Humanities conferences. To have a venue that brings these two interests into the same space is invaluable.
Our wrap-up discussion converged on three requests:
- To replicate this symposium in the future (Yes, this is in the works! Future events will be announced on this blog.)
- To improve DH training opportunities geared towards students and scholars of the 18th c.
- To work towards project interoperability
Finally, in terms of the goals of the projects themselves, there is the question of how we define ourselves. Are we producing datasets, methodologies, and visualization tools for scholars of the Enlightenment, the Revolution, (and/) or the 18th c.? This kind of framing question is not to be ignored as we move forward. Retaining “FBTEE” as the name of what is becoming a massive collection of book history datasets and visualizations covering the long eighteenth century, for example, raises questions about what counts as the “Enlightenment” (not to mention “European”). The ways that each researcher answers this question will differ, and we do not all have to reach the same conclusion. But we should consider these issues in the interest of future work to simplify how projects with different chronological and geographical constraints might interact.
Are you working on a project in 18th c./Enlightenment/Revolutionary studies that involves DH? Let us know on Twitter by using the #DigitizingEnlightenment hashtag or send a message to k.mcdonough (at) westernsydney.edu.au or @khetiwe24