DHBenelux, June 8-10, 2016, Belval, Luxembourg

This year, the DHBenelux Conference took place on the Belval campus in Luxembourg, an amalgam of futuristic and industrial scenery, with technology-based facilities ranging from writing on the walls to automatic book scanning.

The pre-conference workshop I attended on day 1, Exploring Old Maps, brought into discussion questions related to the digitisation of old and historical maps, georeferencing and overlaying them on modern maps, methodologies for recognising text (labels) and shapes (e.g. streets, buildings) or for analysing landscape change, as well as viewpoints on crowd-sourced map enrichment and user engagement in the geohumanities.

Days 2 and 3, framed by the two keynotes – one examining the relationship among data, algorithms and human interpretation, the other the modelling of ideas in a computational framework – included five thematic parallel sessions and a posters/demos exhibition. The parallel sessions I was able to attend featured talks on Historical Research and Network Analysis, Digital Textual Analysis and Digital Literary Analysis. The presentations in these categories covered a variety of topics such as:

  • social network analysis for studying:
    • individual relationships inside institutions for intellectual cooperation,
    • relations and interactions between producers and consumers of creative goods,
    • the role of interpersonal and group ties in the so-called “survival” networks under difficult historical conditions,
    • the disambiguation of person names in modern religious history;
  • textual analysis techniques for:
    • detecting patterns in news writing,
    • measuring the popularity of newspaper articles,
    • reconstructing the rhythm and prosody of old literary texts,
    • determining authorship in historical texts;
  • critical perspectives on large textual data collections and analytic tools in order to:
    • define and formulate the limits of an “ideal” digital corpus,
    • combine scalability and annotation-layer requirements for supporting retrieval and analysis infrastructures,
    • compare and evaluate the accuracy of different digital tools for narratological analysis;
  • tool building for:
    • supporting text normalisation, search and analysis of historical and literary texts from the distant past,
    • linking entities in “encyclopaedic” novels with corresponding disciplines in Semantic Web repositories.

In this context, my co-authored contributions, a paper and a demo, dealt, respectively, with a combination of Human-Computer Interaction and digital textual analysis for interpreting usability test responses, and with a metaphorical, didactic interpretation of historical writings on architecture via a model and interface for textual zooming.

One of the questions discussed during the theme-based lunch – proposed by the opening keynote speaker – was (approximately): What is specific to the humanities in digital humanities, apart from content? A potential subject of reflection for the next edition, DHBenelux 2017 in Utrecht, the Netherlands.

Text Encoding Initiative, Conference and Members’ Meeting 2015, October 28-31, Lyon, France

The theme of this year’s TEI Conference was “Connect. Animate. Innovate”, focusing on the idea of continuous innovation, interaction and sharing within the TEI and, on a larger scale, the Digital Humanities community.


The variety of topics and approaches presented in the parallel paper and poster sessions, the SIG meeting and the keynote I was able to attend reflected this focal idea.

Sessions such as Correspondence in the TEI, Encoding historical data in TEI (my abstract and presentation) and Encoding orality and performance centred on both:

  1. particular types of data (letters – from ancient Akkadian tablets to early 20th-century personal collections of letters in Ireland; historical documents – such as the Western European Union’s archives on armament issues or the charters of the late medieval period in Spain; conceptual artists’ notebooks; and oral music from the Maghreb);
  2. dedicated tools and methods for encoding and sharing these kinds of data (e.g. Web services for scholarly letter editions, modular and collaborative platforms for publishing historical structured data or for the transcription and annotation of oral phenomena).

Other sessions (Interoperability and the TEI, Abstracting the TEI, Interchange and the TEI, Presentation and engagement with the TEI, Hermeneutics and the TEI), besides showcasing a large array of data (dialectal and prosopographical databases, encoded plays, textual resources from Antiquity, art history collections, literary editions, etc.), addressed a set of questions such as:

  • software engineering best practices for supporting interoperability;
  • sustainability and longevity of the TEI conceptual model and the need for an abstract concept independent of its implementation (TEI versus XML);
  • sonification and its potential use in comparing texts;
  • analysis of the submerged process of scholarly annotation in a TEI archive;
  • algorithmic approaches to the visualisation of complex textual structures in TEI;
  • building a model for Interlinked (inter-related, inter-dependent) Ordered Hierarchy of Content Objects – (I)OHCO;
  • creation of connected corpora based on TEI and Linked Open Data (LOD) technologies;
  • exploring the pedagogic potential of the TEI for developing multi-cultural literary projects at high school and early university level;
  • approaching, from a conceptual perspective, the application of the critical apparatus model to variants of iconographic representation;
  • adopting an ontological model for TEI Simple that allows interoperability between different vocabularies.

The same diversity of content characterised the posters and SIG sessions, with topics ranging from tools (for stand-off markup, XML editing, visualisation, dissemination) and historical and literary digital editions to theoretical and practical considerations on TEI encoding, TEI integration with other standards (ISO) and TEI-based semantic and comparative studies on monolingual and multilingual corpora.

As the closing keynote pointed out, the TEI in particular and digital scholarship in general reveal that we are in a new world now. When people start teaching Homer with access to manuscripts, we are talking about a new kind of scholarship.

Towards an XML-TEI Edition of Diplomatic Documents – Part 2. EVT Visualisation

One of the most common formats in digital documentary editions is the side-by-side layout (Pierazzo, 2014). It allows us to compare a digital facsimile with the transcription of the original document, a potentially useful feature for researchers interested in the “evidentiary value” (Kline and Perdue, 2013) of the consulted materials.

As part of a larger research project at the CVCE, Diplomacy within the Western European Union (WEU), we are working on a digital edition using this dual layout. The edition will include XML-TEI P5 encoded documents and digital images of the original sources from the WEU archives. An overview of the collection and the corresponding workflow are presented in Part 1.

The work is ongoing, and we are currently adapting the EVT (Edition Visualization Technology) framework to the CVCE’s needs (existing and planned Web site architecture, different types of documents and projects, users’ requests, etc.). As shown in the demo, the tool provides a series of features such as side-by-side visualisation, navigation, image-text linking, highlighting of “hot” areas and zooming in and out of the image.
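
As an illustration of the kind of markup such image-text linking relies on, the sketch below uses the TEI facsimile module. It is a minimal, hypothetical example: the file name, zone coordinates, identifiers and transcribed text are invented for the purpose of illustration and do not come from the WEU corpus.

  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt><title>Sample document (illustrative only)</title></titleStmt>
        <publicationStmt><p>Unpublished draft, for illustration.</p></publicationStmt>
        <sourceDesc><p>Invented example.</p></sourceDesc>
      </fileDesc>
    </teiHeader>
    <facsimile>
      <!-- one surface per digitised page; zones define the "hot" areas on the image -->
      <surface xml:id="page1">
        <graphic url="sample_page1.jpg"/>
        <zone xml:id="page1_z1" ulx="120" uly="340" lrx="1650" lry="410"/>
      </surface>
    </facsimile>
    <text>
      <body>
        <div>
          <!-- @facs ties the transcription to the page image and to a specific zone -->
          <pb n="1" facs="#page1"/>
          <p facs="#page1_z1">Minutes of the meeting held on 12 March 1956 ...</p>
        </div>
      </body>
    </text>
  </TEI>

With markup of this kind, a viewer such as EVT can display the facsimile and the transcription side by side and use the zone coordinates to highlight the corresponding region of the image when a passage is selected.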

As part of a collaboration between the CVCE and the EVT team, a student from the University of Pisa, Italy, will be involved this summer, together with the CVCE’s DH Lab, in building the edition starting from the EVT model.

References

Kline, Mary-Jo, and Susan Holbrook Perdue. 2013. A Guide to Documentary Editing. Third edition, online version. ADE, UVA Press.

Pierazzo, Elena. 2014. “Digital Documentary Editions and the Others”. Scholarly Editing: The Annual of the Association for Documentary Editing 35.

Towards an XML-TEI Edition of Diplomatic Documents – Part 1. The Fine-tuning Challenge

The project is part of a larger CVCE research project on diplomacy within the Western European Union (WEU). The corpus includes a selection of bilingual institutional documents (French, English) on the production and standardisation of armaments (1954 to 1982) from the Archives Nationales de Luxembourg, WEU collection. The documents (in a first phase, the French ones) were encoded in XML-TEI P5 in order to enable corpus analysis of specific elements, side-by-side visualisation of facsimiles and their transcriptions, as well as particular search engine capabilities (e.g. facets).

The corpus comprises several types of documents: meeting minutes, notes from the Secretary-General or Secretariat-General, memoranda and studies. Three categories of encoding are provided: metadata (title, author, availability date, place of origin, confidentiality status, etc.), structural markup (headers, footers, sections, paragraphs, line breaks) and content-related annotations (discourse of country/institutional representatives; names of persons, organisations, places; etc.).
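
To make these three layers more concrete, the following hypothetical TEI P5 fragments sketch how they can be combined; the element choices and all values (titles, dates, names) are invented for illustration and are not taken from the project’s actual files.

  <!-- metadata layer: part of the teiHeader -->
  <fileDesc>
    <titleStmt>
      <title>Note on the standardisation of armaments (illustrative)</title>
      <author>Secretariat-General, Western European Union</author>
    </titleStmt>
    <publicationStmt>
      <availability status="restricted"><p>Confidential</p></availability>
    </publicationStmt>
    <sourceDesc>
      <bibl><date when="1956-03-12"/><pubPlace>Paris</pubPlace></bibl>
    </sourceDesc>
  </fileDesc>

  <!-- structural markup and content-related annotations: part of the text body -->
  <div type="section">
    <head>I. Standardisation of armaments</head>
    <p>The <orgName>Standing Armaments Committee</orgName> met in
       <placeName>Paris</placeName> on <date when="1956-03-12">12 March 1956</date>.
       The representative of <placeName>France</placeName>,
       <persName>[name]</persName>, recalled the decisions taken at the
       previous session.<lb/></p>
  </div>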

Building the TEI corpus involved OCR processing with ABBYY FineReader (one image file per typewritten page), Microsoft Word styling and OxGarage conversion from DOCX to XML-TEI P5, as well as semi-automatic enrichment by means of XSLT 2.0 and oXygen, Named Entity Recognition (NER) with GATE (General Architecture for Text Engineering) and manual annotation. Experiments are currently ongoing for corpus analysis, using the Textométrie TXM software (e.g. to discern specific linguistic patterns for the different country or institutional representatives), and for Web publishing (focusing on the potential adaptation of image-text edition tools such as EVT (Edition Visualization Technology) or of the query/index/browsing features provided by platforms like XTF (eXtensible Text Framework) and KILN).
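
As an example of the semi-automatic enrichment step, the minimal XSLT 2.0 sketch below copies a TEI document unchanged while wrapping occurrences of known organisation names in orgName elements. The regular expression and names are invented for illustration; the project’s actual stylesheets are more elaborate.

  <xsl:stylesheet version="2.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns="http://www.tei-c.org/ns/1.0"
      xmlns:tei="http://www.tei-c.org/ns/1.0"
      exclude-result-prefixes="tei">

    <!-- identity transform: copy every node and attribute as-is -->
    <xsl:template match="@* | node()">
      <xsl:copy>
        <xsl:apply-templates select="@* | node()"/>
      </xsl:copy>
    </xsl:template>

    <!-- wrap known organisation names occurring in paragraph text -->
    <xsl:template match="tei:p//text()">
      <xsl:analyze-string select="."
          regex="Standing Armaments Committee|Western European Union">
        <xsl:matching-substring>
          <orgName><xsl:value-of select="."/></orgName>
        </xsl:matching-substring>
        <xsl:non-matching-substring>
          <xsl:value-of select="."/>
        </xsl:non-matching-substring>
      </xsl:analyze-string>
    </xsl:template>

  </xsl:stylesheet>

In a semi-automatic setup of this kind, the markup produced by such stylesheets (and by the GATE NER pass) would still be reviewed and corrected manually, which is where the automatic/manual trade-off discussed below comes into play.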

[Figure: TEI corpus-building workflow]

In the current stage of the project (Web publication of the TEI corpus), given the particular research needs and the CVCE’s Back End/Front End architecture, an adaptation and combination of different technologies and open-source tools seems more likely than the use of a single existing platform for this purpose. Although the design of the actual TEI publication framework is still a work in progress, the main challenge of the whole process of creating the digital edition from the initial images has become apparent: fine-tuning the ratio of automatic to manual processing, and the modular, configurable mechanism this implies, based on the “relatively steady/changing”, “core/project-specific” and “mandatory/optional” dichotomies.