DHBenelux, June 8-10, 2016, Belval, Luxembourg

This year, the DHBenelux Conference took place on the Belval campus in Luxembourg, an amalgam of futuristic and industrial scenery and of technology-rich facilities, from writing on the walls to automatic book scanning.

The pre-conference workshop I attended on day 1, Exploring Old Maps, brought into discussion questions related to the digitisation of old and historical maps, georeferencing and overlaying on modern maps, methodologies for recognising text (labels) and forms (e.g. streets, buildings) or for analysing landscape change, as well as viewpoints on crowd-sourced map enrichment and users’ engagement in geohumanities.

Days 2 and 3, framed by the two keynotes – one examining the relationship among data, algorithms and human interpretation, the other the modelling of ideas in a computational framework – included five thematic parallel sessions and a posters/demos exhibition. The parallel sessions that I could attend contained talks on Historical Research and Network Analysis, Digital Textual Analysis and Digital Literary Analysis. The presentations in these categories covered a variety of topics such as:

  • social network analysis for studying:
    • individual relationships inside institutions for intellectual cooperation,
    • relations and interactions between producers and consumers of creative goods,
    • the role of interpersonal and group ties in the so-called “survival” networks under difficult historical conditions,
    • the disambiguation of person names in modern religious history;
  • textual analysis techniques for:
    • detecting patterns in news writing,
    • measuring the popularity of newspaper articles,
    • reconstructing the rhythm and prosody of old literary texts,
    • determining authorship in historical texts;
  • critical perspectives on large textual data collections and analytic tools in order to:
    • define and formulate the limits of an “ideal” digital corpus,
    • combine scalability and annotation-layer requirements to support retrieval and analysis infrastructures,
    • compare and evaluate the accuracy of different digital tools for narratological analysis;
  • tools building for:
    • supporting text normalisation, search and analysis of historical and literary texts from the distant past,
    • linking entities in “encyclopaedic” novels with corresponding disciplines in semantic Web repositories.

In this context, my co-authored contributions, a paper and a demo, dealt with a combination of Human-Computer Interaction and digital textual analysis for interpreting usability test responses, and with a metaphorical, didactic interpretation of historical writings on architecture via a model and interface for textual zooming.

One of the questions – proposed by the opening keynote speaker – discussed during the theme-based lunch was (approximately): what is specific to the humanities in digital humanities, apart from content? A potential subject of reflection for the next DHBenelux, in 2017 in Utrecht, the Netherlands.

histoGraph live demo + network teaching + Connect! workshop in Turku, Finland

Following an invitation by my colleague Kimmo Elo, a contemporary historian and social scientist with an interest in network analysis and text analysis, I went to meet members of Turku’s DH community, taught an introductory workshop on network data extraction and visualization and gave the first public live demo of the redeveloped histoGraph at the Connect! Perspectives in Digital Humanities workshop series.

Cathedral of Turku, 1814


Text Encoding Initiative, Conference and Members’ Meeting 2015, October 28-31, Lyon, France

The theme of this year’s TEI Conference was “Connect. Animate. Innovate”, focusing on the idea of continuous innovation, interaction and sharing within the TEI and, at a larger scale, the Digital Humanities community.


The variety of topics and approaches presented at the parallel paper and poster sessions, the SIG meeting and the keynote I could attend reflected this focal idea.

Sections like Correspondence in the TEI, Encoding historical data in TEI (my abstract and presentation), Encoding orality and performance were centred both on:

  1. particular types of data (letters – from ancient Akkadian tablets to early 20th-century personal collections of letters in Ireland; historical documents – such as the Western European Union’s archives on armament issues or the charters of the late medieval period in Spain; conceptual artists’ notebooks and oral music from the Maghreb);
  2. dedicated tools and methods for encoding and sharing these kinds of data (e.g. Web services for scholarly letter editions, modular and collaborative platforms for publishing historical structured data or for the transcription and annotation of oral phenomena).

Other sections, Interoperability and the TEI, Abstracting the TEI, Interchange and the TEI, Presentation and engagement with the TEI, Hermeneutics and the TEI, along with displaying a large array of data (dialectal and prosopographical databases, encoded plays, Antiquity textual resources, art history collections, literary editions, etc.) addressed a set of questions such as:

  • software engineering best practices for supporting interoperability;
  • sustainability and longevity of the TEI conceptual model and the need for an abstract model independent of its implementation (TEI versus XML);
  • sonification and its potential use in comparing texts;
  • analysis of the submerged process of scholarly annotation in a TEI archive;
  • algorithmic approaches to the visualisation of complex textual structures in TEI;
  • building a model for Interlinked (inter-related, inter-dependent) Ordered Hierarchy of Content Objects – (I)OHCO;
  • creation of connected corpora based on TEI and Linked Open Data (LOD) technologies;
  • exploring the pedagogic potential of the TEI for developing multi-cultural literary projects at high school and early university level;
  • approaching, from a conceptual perspective, the application of the critical apparatus model to variants of iconographic representation;
  • adopting an ontological model for TEI Simple that allows interoperability between different vocabularies.

The same diversity of content characterised the posters and SIG sessions, with topics varying from tools (for stand-off markup, XML editing, visualisation, dissemination), historical and literary digital editions, to theoretical and practical considerations on TEI encoding, TEI integration with other standards (ISO) or TEI-based semantic and comparative studies on monolingual and multilingual corpora.

As the closing keynote pointed out, the TEI in particular and digital scholarship in general reveal that we are in a new world now. When people start teaching Homer with access to manuscripts, we are talking about a new kind of scholarship.

Conference “(Retro)Digitalisate – Kommentarkultur – Big Data” 8-9 October

In this blog post I highlight a few aspects which were of particular interest to me during the conference (Retro)Digitalisate – Kommentarkultur – Big Data: Zum Stand des Digitalen in den Geisteswissenschaften. The concept behind the conference was quite intriguing: no classical paper presentations, in an attempt to stop people from saying what they always say. Instead the organizers opted for a combination of panel discussions and keynotes. The conference’s subtitle describes the angle: “The state of Digital in the Humanities”. I was invited to talk about tools and the (re-)use of digital materials and would like to thank the organizers Lilian Landes and Norbert Kunz for the invitation.


As far as I could tell, the conference brought together people from very diverse fields of work (libraries, publishing, humanities research, IT, lexicology…) who mixed in four panels on the topics “Digital publishing”, “Tools”, “Communication” and “Post-publications”. In the end, however, the issues which were really up for debate found their way into all of them: What counts as Digital Humanities? Which consequences does this have for the humanities? Are the rapid changes we are experiencing changes for the worse, changes for the better, or just changes?

The four panels made it very clear that “digital” is changing the humanities, and made me agree even more with Mareike König, who calls the humanities digital already. But following the discussions also made it very clear that at this stage, old models of publication, career planning and self-identification are in a state of profound transformation, which is experienced in a positive way by some and in a negative way by others. At this stage there is no way to predict where this transformation will lead the humanities, and continued cuts in funding in- and outside the universities are reasons to be concerned. This was also reflected in an angle common to all of the keynotes and panelists: sitting it out is not an option.

Conference “Europa baut auf Biographien”, Vienna 6-8 October 2015

This blog post highlights just a few aspects of the conference “Europa baut auf Biographien”, especially those which are particularly close to my field. I was there to talk about networks.

National Library of Vienna

National biography projects are seen as trusted repositories for information on selected personalities in a nation’s history. In most cases, three conditions need to be met in order to be considered for an entry: significant accomplishments, death and – since these projects typically ran/run for very long periods of time and move alphabetically – having a surname which starts with the right letter. This procedure is of course pragmatic in the pre-digital world, but it also meant that Konrad Bloch, a biochemist and Nobel Prize winner who died in the year 2000, never made it into the German national biography.

Apart from these selection biases and sometimes outdated information, the biographies remain a valuable resource for scholars and the public. It was great to see how keen national and regional biography projects in Germany, Austria and Switzerland were to interlink their entries by means of VIAF and GND IDs.

Growth and enrichment of national biographies were of particular interest: Paul Arthur’s keynote referred to a recent change in biography scholarship, a move towards biography understood as a network of intersecting lives and activities. His own efforts in this domain, for example, seek to aggregate biographical data from different parts of the population, all branded similarly: there are Indigenous Australia, Obituaries Australia, People Australia, Women Australia and Labour Australia, as well as the crowd-based project HuNI, which lets the public add relations between objects from Australia’s cultural heritage.

Piek Vossen presented the team’s work on BiographyNet and focused on the automated extraction of additional metadata by means of text analysis. Their accomplishments are impressive, and a short video on the site describes the project in detail.

In my interpretation, and tightly knit to the current wave of network thinking, a number of talks circled around the idea that biographies should no longer be thought of as isolated depictions of the great but reinterpreted as collective, explicitly European efforts. This is a big, great and, most of all, fundable idea.

#CitizenHums: My thoughts on the two-day event

From the periphery, I have always been interested in crowdsourcing and have often thought that it would be an interesting way to extend the www.bombsight.org apps that I developed back in 2012, i.e. would it be useful for the public to engage in the crowd collection and co-curation of memories, photographs and documents to add to the formal archival resources? Furthermore, here at the CVCE we have been considering whether crowdsourcing would be a helpful way of transcribing handwritten and other documents, for example from our Pierre Werner publications, or even for error detection and validation of automatic OCR and entity extraction. In our histoGraph tool, the notion of expert crowds has been implemented to annotate and validate the results of the face-recognition algorithms that processed our picture collection of European figures.

Therefore, I was interested to find out more about crowdsourcing projects and research in the humanities. At the beginning of September I was lucky to return to London to attend the Crowdsourcing for the Humanities in the 21st Century event, organised by King’s College London.

The theme of the 2 days of presentation and discussions was “Citizen Humanities Comes of Age: Crowdsourcing for the Humanities in the 21st Century”. The event was made up of a mix of interdisciplinary researchers from diverse backgrounds such as Archaeology, Ecology, Archival Research, Social Sciences and Humanities, which made for fascinating discussion.

For me, some of the points of interest collated from the discussions and speakers include:

  • There is a need for more research around the ethical considerations of crowdsourcing activities. Is it possible to recognise the skills acquired and the time contributed to such humanities projects?
  • Crowds alone do not sustain projects; it is essential to build a community from the bottom up (this reminds me of the very successful OpenStreetMap community).
  • Projects should not just be about technology and data capture – there needs to be greater emphasis on the research questions and on managing social experiences.
  • There is an increasing need to nurture and prioritise human factors to improve the likelihood of success.
  • Is it possible to develop crowdsourcing pipelines that mix automation, human intervention and machine learning to further develop crowdsourcing activities?
  • Project design should enable the framing of dynamic research questions that emerge as projects evolve.
  • All projects have difficulty in overcoming the long tail, where the highest volume of contributions is made by a small proportion of all the contributors.
  • Successful projects should consider integrating a task ecosystem that provides a mix of different tasks taking different lengths of time to complete, appealing to non-experts and experts depending on the type of task and so eliciting different types of contributions.
  • Lots of projects fail, but rarely are there discussions of why or how they fail (for me, lessons learned from failures can be as interesting as those from successes).


Digital Humanities at Oxford Summer School

This blog post presents a short summary of the CVCE DH Lab’s participation in the Digital Humanities at Oxford Summer School from 20 to 24 July 2015.

“Managing modern data for academic research” was the focus of the workshop “Humanities Data: Curation, Analysis, Access and Reuse” organized by the Digital Humanities at Oxford Summer School from 20 to 24 July 2015.

Data constituted the heart of this session, with topics such as the importance of maintaining research data and digital information in order to preserve their meaning and usefulness.

This session was an opportunity to explore data concepts and practices, with an emphasis on digital humanities data curation. Data curation can be understood as the active and ongoing management of data throughout its lifecycle of interest. After an introduction to the conceptual frameworks of humanities data curation, the workshop covered several topics including metadata normalisation with OpenRefine, information organisation, data modelling, big data and data analysis, as well as workflows (personal and institutional) and research objects. One section focused on the languages OWL and OWL 2. Case studies included examples from the HathiTrust, EEBO-TCP and the BUDDAH project.

I found the discussions around defining data and datasets, issues of provenance, the new challenges and opportunities raised by data curation, and the comparison between data curation and digital preservation particularly interesting, given their relevance to the work we are undertaking at the CVCE.

This event was also an occasion to meet and talk to the other participants and speakers from all over the world. A highlight was the peer-reviewed poster session, an exhibition of 30 posters presenting the work of researchers and highlighting the diversity and state of the art of the field.

More information on http://dhoxss.humanities.ox.ac.uk/2015/humanitiesdata.html

Making histoGraph open source

As many of you already know, histoGraph is a web platform designed to help researchers explore large multimedia archives such as the ones we have here at the CVCE. The histoGraph tool has two main goals:

  1. To enable users to find and identify the most relevant documents for research
  2. To discover connections between persons through documents.

In the last few months our designer/developer Daniele Guido has started the redesign and development of the tool: this screencast shows some of the new features of histoGraph as well as identifying the benefits and limitations of the new design.

We can now explore the “neighborhood” of a person in terms of other co-occurring people and documents, follow the paths that connect one person to another, or simply search for resources and pose questions related to a person or a document.

Still under development, the open source version of histoGraph starts from this assumption and basically serves as a recommendation system. The original database has been enriched with CVCE text documents and transposed to a graph database in order to speed up all network-related computation. Moreover, a simple and powerful text extraction chain has been added: thanks to the powerful Yago disambiguation engine, all text captions, titles and abstracts have been annotated with named entities in at least two languages.

Additionally, the dates of both pictures and text documents have been reconciled to the ISO standard; geographical places have been resolved to latitude and longitude using GeoNames and the Google Geocoding API; finally, person entities have been enriched with related DBpedia information. We will shortly be updating the data curation workflow to assess and validate the quality of the automatic extraction of these entities, and we plan to further develop our concept of expert crowdsourcing techniques.
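To give a rough idea of what this kind of enrichment looks like, here is a minimal Python sketch, not histoGraph’s actual code: it normalises a free-form date to ISO 8601 and resolves a place name to coordinates via the GeoNames web service. The function names and the placeholder username are illustrative assumptions.

import requests
from dateutil import parser

def to_iso_date(raw_date):
    # Parse a free-form date string and return it in ISO 8601 (YYYY-MM-DD).
    return parser.parse(raw_date).date().isoformat()

def geocode_place(place_name, username="demo"):
    # Look up a place name on GeoNames and return (latitude, longitude), or None.
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": place_name, "maxRows": 1, "username": username},
        timeout=10,
    )
    hits = resp.json().get("geonames", [])
    if not hits:
        return None
    return float(hits[0]["lat"]), float(hits[0]["lng"])

print(to_iso_date("18 April 1951"))   # -> 1951-04-18
print(geocode_place("Luxembourg"))    # requires a valid GeoNames username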

We will use the blog to update you on progress so watch this space.

Sneak Preview of the new histoGraph at Sunbelt 2015

We presented the first screenshots of the all-new histoGraph during the poster session at this year’s INSNA Sunbelt conference on Social Network Analysis in Brighton. histoGraph facilitates the crowd-based indexation and exploration of multimedia cultural heritage materials. Click on the poster to get an idea of how this works and looks.

We have outlined our ideas for the future development of histoGraph in this short article titled Interactive Networks for Digital Cultural Heritage Collections – Scoping the Future of HistoGraph.

histoGraph poster, Sunbelt 2015

Should I do Social Network Analysis?

Network theories and methods have recently gained widespread attention. Graph visualizations look great and will easily grab the attention of any audience. However, for those who are new to the field, there is a steep learning curve. Novices need to master a variety of skills, from network theory to systematic data collection, appropriate means to compute and visualize networks, and finally the challenge of linking the results of any analysis back to their original research questions.

This flow chart is meant to help people with the hard decision of whether or not it is worthwhile and feasible to engage with network analysis. You may also think of these questions as a simple way to test the quality of existing projects.

This is my first draft of this flow chart, so nothing is set in stone and I am happy to consider any criticism you leave in the comments.

To get a first idea of Social Network Analysis concepts, you may want to take a look at the Cheat Sheet: Social Network Analysis for Humanists.

Flow chart: Should I do Social Network Analysis?

Cheat Sheet: Social Network Analysis for Humanists

Social Network Analysis concepts and methods are extremely powerful ways to describe complex social relations. The field, however, has developed its own concepts, some of which require a little bit of translation. This cheat sheet should help with the very first steps.

cheat sheet illustration

Actor The entity which is described by a node, e.g. a person, an institution etc.
Alter/Alteri Actors in an ego network
Broker A node which is positioned e.g. between two clusters and can act as a “bottleneck”
Clique A fully interconnected group of nodes
Community A set of nodes which are relatively more connected to each other than to the rest of the graph
Co-occurrence network A network in which the edge between nodes is based on the fact that both appear together in the same context, usually in texts. An example would be two people who are mentioned in the same paragraph.
Data A set of individual pieces of information which are typically machine-readable
Data visualisation The visualisation of data in general. Network visualisation is one of many subfields.
Dyad A group of two connected nodes, the smallest possible network
Edge attribute Data which describes a certain aspect of an edge, for example how often two actors speak to each other
Edge or tie or arc or link What connects two nodes. There are slight differences between these terms but they are mostly used interchangeably
Ego network A network which contains all connections of an actor to their alteri and usually also the connections between the alteri
Graph (in a network and mathematical context) Objects which are connected by links. However, often used interchangeably with “network”, even by experts.
Graph theory The mathematical description of nodes and ties
Historical Network Research SNA + historical research methods and questions
Hub A node with a degree far higher than the average in a network
Layout algorithms Algorithms which arrange a graph according to an underlying logical principle. Many different ones exist and they can be fine-tuned with additional parameters. Layout algorithms help reveal patterns in the network data. So-called force-directed algorithms are often used: they consider ties to be “springs” which attract well-connected nodes, and they seek to avoid crossings between ties. The same data can look very different with different layout algorithms, and even the same algorithm will produce a different visualisation each time it is run; there is therefore no “one” or “right” way to visualise a network.
Network dataset Typically machine-readable data which contains information on edges between nodes, on node attributes, on the graph itself.
Network metaphor The most common way to refer to networks in the humanities. It typically describes the observation that social relations have an effect on something or somebody without specifying it further.
Network theory The theoretical approach to social networks which often inspires research and the development of algorithms
Network visualizations Visual representations of network data. They can take various forms; commonly used are node-link diagrams and matrices.
Node attribute Data which describes a certain aspect of a node, for example an actor’s age or gender
Node centrality Describes the extent to which a node is connected to other nodes within a network. Various algorithms exist to describe different aspects of such connectivity. To some extent centrality can be linked to abstract notions such as “influence”, “power” or “importance”.
Node or vertex Refers to the object which is connected to other objects in a graph. The terms are often used interchangeably
Relationships Any relationship between actors can be represented as an edge and two actors can be connected by multiple ties. When conceptualising network ties, it is important that they are defined clearly to allow comparison: What exactly is e.g. “friendship” or “collaboration” and when does it apply?
Software for network visualisation and analysis Helps to create, compute, visualise, modify network data. Many different tools exist for different purposes. Gephi, NodeXL, UCINET, Pajek are among the most well-known
Social Network Analysis (SNA) Is a cross-disciplinary field of research which is based on the axiom that the systematic study of relations between (mostly) humans (often exchanges of some kind) can help answer research questions.
Triad Similar to a dyad, a triad describes a group of three nodes. The number of completed triads in a graph can also provide insight into the structure of the network.
Unipartite (or 1-mode), bipartite (or 2-mode) and n-partite networks Unipartite networks describe relations between one type of node (e.g. people connected to people); bipartite networks describe relations between two types of nodes (e.g. people connected to organisations). A bipartite network can be projected into a unipartite network (people who were connected to the same organisation are now connected to each other). This concept can be expanded even further; in theory a network can have any number of types of actors. It is important to note that in these networks, nodes of the same type can never connect to each other, only to nodes of the other types.
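To make a few of these terms concrete, here is a tiny invented example in Python using the networkx library (one possible tool among many); the names and ties are made up for illustration.

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Anna", "Ben"), ("Anna", "Carla"), ("Ben", "Carla"),  # Anna, Ben and Carla form a clique
    ("Carla", "David"),                                     # Carla also links the clique to David
])

# Ego network of Carla: Carla, her alteri and the ties between them
print(sorted(nx.ego_graph(G, "Carla").nodes()))

# Degree centrality: one simple way to describe how connected each node is
print(nx.degree_centrality(G))

# Maximal cliques: fully interconnected groups of nodes
print(list(nx.find_cliques(G)))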

 

Towards an XML-TEI Edition of Diplomatic Documents – Part 1. The Fine-tuning Challenge

The project is part of a larger CVCE research project on diplomacy within the Western European Union (WEU). The corpus includes a selection of bilingual institutional documents (French, English) on the production and standardisation of armaments (1954 to 1982) from the Archives Nationales de Luxembourg, WEU collection. The documents (in a first phase, in French) were encoded in XML-TEI P5 in order to enable corpus analysis of specific elements, side-by-side visualisation of facsimiles and their transcriptions, as well as particular search engine capabilities (e.g. facets).

The corpus comprises several types of documents: meeting minutes, notes from the Secretary-General or Secretariat-General, memoranda and studies. Three categories of encoding are provided: metadata (title, author, availability date, place of origin, confidentiality status, etc.), structural markup (headers, footers, sections, paragraphs, line breaks) and content-related annotations (discourse of country/institutional representatives, names of persons, organisations, places, etc.).

Building the TEI corpus involved OCR processing with ABBYY FineReader (one image file per typewritten page), Microsoft Word styling and OxGarage conversion from DOCX to XML-TEI P5, as well as semi-automatic enrichment by means of XSLT 2.0 and oXygen, Named Entity Recognition (NER) with GATE – General Architecture for Text Engineering – and manual annotation. Experiments are currently ongoing for corpus analysis, using the Textométrie TXM software (e.g. to discern specific linguistic patterns for the different country or institutional representatives), and for Web publishing (focusing on the potential adaptation of image-text edition tools such as EVT – Edition Visualization Technology – or the query/index/browsing features provided by platforms like XTF – eXtensible Text Framework – and KILN).
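As an illustration of how the content-related annotations can then be reused, here is a minimal Python sketch (not the project’s actual code) that collects the person, organisation and place names encoded in each TEI file, the kind of lists that feed faceted search or TXM queries. The corpus folder name is a made-up placeholder.

from pathlib import Path
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def extract_entities(tei_file):
    # Return the title and the named entities annotated in one TEI document.
    tree = etree.parse(str(tei_file))
    return {
        "file": tei_file.name,
        "title": tree.findtext(".//tei:titleStmt/tei:title", namespaces=TEI_NS),
        "persons": [e.text for e in tree.iterfind(".//tei:persName", namespaces=TEI_NS)],
        "organisations": [e.text for e in tree.iterfind(".//tei:orgName", namespaces=TEI_NS)],
        "places": [e.text for e in tree.iterfind(".//tei:placeName", namespaces=TEI_NS)],
    }

for f in sorted(Path("weu_corpus").glob("*.xml")):   # hypothetical corpus folder
    print(extract_entities(f))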

TEI corpus-building workflow

In the current stage of the project (Web publication of the TEI corpus), given the particular research needs and the CVCE’s Back End/Front End architecture, an adaptation and combination of different technologies and open source tools seems more probable than the use of a single existing platform. Although the design of the actual TEI publication framework is still a work in progress, the main challenge of the whole process of creating the digital edition from the initial images has become apparent: fine-tuning the ratio of automatic versus manual processing, and the modular, configurable mechanism this implies, based on “relatively steady/changing”, “core/project-specific” and “mandatory/optional” dichotomies.

 

Developing personalisation tools for ePublications

The CVCE has been working on the development of digital tools that enable our users to personalise content from our virtual research infrastructure on CVCE.eu. We are developing the idea of a Digital Toolbox where users will be able to customise content from all CVCE ePublications. The mid-term goal of the Digital Toolbox is to provide a personalised research infrastructure where users can create, collaborate on and comment on their own personal ePublications.

Structure of a personal ePublication using MyPublications tool

In the latter part of 2014 we launched the first version of the Digital Toolbox in beta. We have concentrated our efforts on the development of the MyPublications authoring tool, developed using a recently implemented Scrum methodology. The MyPublications tool will enable users of CVCE.eu to write ePublications on topics of interest, based on personalised research, teaching and learning needs, by organising and structuring CVCE resources, such as historical documents, press articles, photographs and other multimedia material, alongside their own text. The tool includes a presentation viewer, a simple interface where you can sequentially step through your ePublication (like a book!). In the coming weeks the developer of the tool, Frederic Reis, will also add more features, including a share link feature which will enable a MyPublication to be shared with colleagues and friends.

To find out more have a look at our first introductory video.

What about the technology underpinning the tool?

Simplified model of MyPublications technical infrastructure

The diagram shows a simplified model of the MyPublications technology derived from the research infrastructure. The CVCE data repository uses the Alfresco content management system to manage, store and retrieve a wide variety of cultural heritage objects such as documents, photographs, cartoons and videos. These objects are then served to the Java-based open source Liferay documents and media portal, which delivers our CVCE ePublications to the web and our half a million website users.

The tool itself works on the client side and is written using a combination of HTML5, CSS and JavaScript to create an interface that calls, saves and embeds links to any type of cultural heritage object found within the CVCE collection. The resulting MyPublication is essentially a series of sequential object links and original texts (sections, sub-sections and paragraphs), all stored alongside the metadata in a MySQL database.
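As a rough illustration of this data model, and not the actual database schema, a personal ePublication can be thought of as a sequence of items that are either original text or links to CVCE objects. All field names and identifiers below are assumptions for illustration only (sketched in Python for readability).

from dataclasses import dataclass, field
from typing import List

@dataclass
class PublicationItem:
    position: int         # order within the ePublication
    kind: str             # "section", "paragraph" or "object_link"
    text: str = ""        # original text written by the user
    object_ref: str = ""  # identifier/link of a CVCE cultural heritage object (illustrative)

@dataclass
class MyPublication:
    author_id: int
    title: str
    items: List[PublicationItem] = field(default_factory=list)

pub = MyPublication(
    author_id=42,
    title="A personal ePublication",
    items=[
        PublicationItem(1, "section", text="Introduction"),
        PublicationItem(2, "paragraph", text="My own commentary goes here."),
        PublicationItem(3, "object_link", object_ref="cvce.eu/obj/example-id"),
    ],
)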

We will launch the full, tested version of the MyPublications tool in the early part of 2015. In the meantime if you want to be a beta tester please feel free to sign up.

Setting up Open Journal System for CVCE – Part I

This post gives an overview of the peculiarities of Open Journal Systems (OJS), an open source publication tool, and might be of help when considered together with OJS’s documentation. OJS, which is developed by the Public Knowledge Project, has become the chosen publishing platform for the CVCE. OJS sports a variety of features which are of relevance for us: multi-language support, XML integration and a clean and customizable look.

In this blog post I describe the process of setting up Transactions in Digital Humanities, the first publication by the CVCE, which publishes papers from last year’s DHLU conference.

I cannot really comment on the installation process itself since, thankfully, a colleague of mine in the IT department took on this task. There were a few hiccups at first: some of the labels were not displayed correctly, as shown in the screenshot below (note that this is not the final design).

OJS features a range of different user profiles, each with associated rights and access levels. Starting with “reader” and “author”, the roles include section editors, layout editors, proofreaders, editors, a journal manager and finally an admin. OJS explains their respective roles rather well. In an ideal world you would have a team of people working on such a project, each with their respective task, who would seamlessly interact with each other and pass on tasks from one step to the next. In the real world, one person needs to manage a range of tasks. Thankfully, OJS lets users take on more than one role.

This can get confusing, however: once logged in, you put on your author hat and submit an article using a detailed and nicely configurable submission process. This process includes the acknowledgement of copyright issues, metadata self-classification, file upload etc. Once an article is submitted, a section editor or an editor needs to assign a reviewer to the article. To do this, you need to click (in my mind counter-intuitively) on “My Journals” in the right navigation pane.

This brings you back to an overview page which bundles the different roles associated with your user id and provides handy updates on the progress of different publications.

As an editor you are free to assign reviewers, add their respective recommendations (accept/reject) and forward the task to layout editors and proofreaders, who are able to upload the edited versions of the files they have been working on. OJS stores these versions separately, although only the admin has access to the quite rudimentary file browser. OJS offers a quite handy, yet easily overlooked, navigation through the publishing process:

You might find it hard to locate the “Complete” commands which are associated with each task in the user interface: again in my mind counter-intuitively, they are positioned in the middle of the page and are easily overlooked.

Once everything is proofread and laid out, OJS asks you to upload the final version of the file. I chose to edit an HTML file and to upload it directly in the Layout section, selecting the “Galley” option. Note that there seems to be an annoying bug which causes problems when parsing H2 (second-level) headings. Replacing them with H3 (third-level) headings in an editor solves this.

The test file I worked with came in ODT format. Images embedded in the text and exported to HTML will contain machine-created filenames which will not be recognized by OJS. Instead I embedded images manually in an editor, for example:

<img src="figure_1.jpg" name="graphics1" width="741" height="536">

The file will be uploaded and displayed above. Click “Edit” next to the uploaded file to change the label of the link to the uploaded file (I stick with “HTML”). Albeit a bit hidden, this editing section is quite useful for any revisions. Here you can replace the uploaded file in case you run into formatting problems and, crucially, upload the pictures which accompany the text.

Note that once you close this editing section, you will jump back to the previous page but end up in the deceptively similar-looking Copyediting section above the Layout section.

Continue by “Completing” the Proofreading section and the article should be published.

Helpful links:

Detailed User Guide

Frequently Asked Questions

OJS Forum

CVCE at Talk of Europe Creative Camp with CRP Lippmann & Lab1100

The Talk of Europe Creative Camp was organised by Max Kemman and Astrid van Aggelen and featured research projects and presentations by 16 researchers in Computer Science and the Humanities – a big thanks for all their hard work!

What do you do when you have all the debates in the EU Parliament from 1992 onwards digitized, in RDF, and you are just one SPARQL query away from them? And when you have one week to work with specialists in network data visualization and management? First of all you need to bridge the gap between domain experts and ask how this data can be used to answer research questions in European studies and Computer Science. This turned out to be as hard as expected.

This week nevertheless yielded a method which helps us to detect unexpected speaker appointments by Members of the European Parliament (created by Fintan McGee of our neighbouring research institute CRP Lippmann) and an import functionality for Nodegoat which makes it easy to pull and visualise data from the Talk of Europe SPARQL endpoint (created by Pim van Bree and Geert Kessels of Lab1100). For the CVCE this was an excellent opportunity to experiment with new ways of extracting and visualising information from structured data repositories.

For starters: a little bit on networks

To explain our work, a short intro is needed: graphs like the one below are so-called 1-mode networks. This means that there is only one type of node (in this case people) connected to each other. Imagine that the people in this network are connected to each other because they are all members of the same sports club.

Example of a 1-mode network

In this example, a 1-mode network visualisation represents group affiliations by showing links between all members of a group. This works fine here but very quickly becomes cluttered when a group has a large number of members.

You can also represent this kind of information using a 2-mode network: a network which has two types of actors (people, sports clubs). It is important to mention that in 2-mode networks ties can only exist between actors of different types (person -> sports club), never within one type of actors (person -> person).

Example of a 2-mode network

2-mode networks are also sometimes called affiliation networks or bimodal networks and are particularly well suited to identifying overlaps between groups, as illustrated in the graph above. In both examples it is clear that Laura is the person who is part of both groups. 2-mode visualisations are, however, often leaner than 1-mode networks, since they represent the affiliation of a person by one tie alone and merely imply the links between members of a group.

Both 1-mode and 2-mode networks can be described mathematically as well. There are numerous ways to describe how central a node is in a network and how different nodes cluster together. Fintan’s work focuses on algorithms which are used for 2-mode networks and for networks with more than two types of actors, so-called multimodal networks. 2-mode networks can be projected into 1-mode networks. This means that instead of two types of nodes we end up with only one. A tie between these nodes is added when – in our example – two people are members of the same club. This works just fine in the above examples. But as Fintan highlights, projections can lead to a loss of information: one actor may be a member of three clubs and end up with the same connections as someone who only attends two clubs. Redundancy is one way to measure this loss of information.
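A minimal sketch in Python with networkx (not the tools built during the camp) shows the sports-club example above as a 2-mode network and its projection onto a 1-mode network of people; apart from Laura, the member names are invented.

import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
people = ["Laura", "Tom", "Anna", "Ben"]
clubs = ["Football club", "Chess club"]
B.add_nodes_from(people, bipartite=0)   # first node type: people
B.add_nodes_from(clubs, bipartite=1)    # second node type: clubs
B.add_edges_from([
    ("Laura", "Football club"), ("Tom", "Football club"), ("Anna", "Football club"),
    ("Laura", "Chess club"), ("Ben", "Chess club"),
])

# Project the 2-mode network onto the people: two people become connected
# whenever they share at least one club.
P = bipartite.projected_graph(B, people)
print(sorted(P.edges()))
# Laura ends up connected to everyone, since she is the only member of both clubs.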

Redundancy may reveal irregularities

Getting the opportunity to speak is a privilege in the EU Parliament, and very often the same people are chosen to speak on their area of expertise. But sometimes they don’t. Domain experts Frédéric Allemand (CVCE) and Bjørn Høyland (Oslo University) showed interest in detecting the latter. It turned out that redundancy, a concept developed in graph theory, might do just that, as Fintan discovered during the week.

Drastically simplified, redundancy describes how dispensable a node is in a 2-mode graph. In other words: nodes which can be removed from a graph without changing its overall structure when it is projected into a single mode are considered to be highly redundant.
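Here is a hedged sketch of the same idea using networkx’s node redundancy coefficient, rather than Fintan’s actual implementation, with invented speakers and topics and the same <0.5 filter applied to the graph below.

import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph()
speakers = ["MEP A", "MEP B", "MEP C"]
topics = ["Whaling", "Railways", "Guantanamo"]
G.add_nodes_from(speakers, bipartite=0)
G.add_nodes_from(topics, bipartite=1)
G.add_edges_from([
    ("MEP A", "Whaling"), ("MEP A", "Railways"),
    ("MEP B", "Whaling"), ("MEP B", "Railways"),
    ("MEP C", "Whaling"), ("MEP C", "Guantanamo"),
])

# The redundancy coefficient is only defined for nodes with at least two neighbours.
eligible = [n for n in speakers if G.degree(n) >= 2]
redundancy = bipartite.node_redundancy(G, eligible)

# Keep only speakers with atypical topic combinations (low redundancy, < 0.5).
print({n: round(r, 2) for n, r in redundancy.items() if r < 0.5})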

The graph below was created by Fintan and shows connections between Members of the European Parliament who spoke on certain agenda items in the year 2010 (527 nodes, 595 edges). An interactive version of this graph, realized with the sigmajs plugin for Gephi, is also available.

Network of MEPs with low redundancy who spoke on agenda items in 2010


Only those MEPs who have a low level of redundancy (<0.5) are shown, which means that speakers with very atypical combinations of topics are highlighted. An example is Mike Nattrass (United Kingdom), who spoke on contrasting subjects such as:

  • Ban on commercial whaling (debate)

  • Implementation of the first railway package directives (debate)

  • Progress made on resettling Guantanamo detainees and on closing Guantanamo (debate)

  • Protection of animals used for scientific purposes (debate)

  • Welfare of laying hens (debate)

This is where our journey ends and domain experts need to take a close look at the data to identify the meaning of the graph and to link their observations back to the contextual knowledge which is and remains essential for the humanist analysis of the parliamentary debates.

From SPARQL to Network via Nodegoat

For our second project, Geert Kessels and Pim van Bree of Lab1100 have developed a new feature for Nodegoat which allows users to load data directly from the Talk of Europe SPARQL endpoint (or any other). This will make it significantly easier for anyone who can write a query to explore the debate data. Nodegoat might well be the easiest way to date to set up a relational database, fill it with data, visualise the data as a social or spatial network and observe changes over time.

That said, it is no trivial task to write such a query. But its advantage lies in the precision with which one can download a very particular constellation of data points.
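To illustrate what being “one SPARQL query away” means in practice, here is a hedged Python sketch using SPARQLWrapper rather than Nodegoat; the endpoint URL is a placeholder and the query is deliberately generic, since a real query would use the Talk of Europe vocabulary for speeches, speakers and agenda items.

from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/talk-of-europe/sparql"   # placeholder endpoint URL

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery("""
    SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Each binding is one row of the result table.
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])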

Users can define custom import templates which determine how the data will be organised and linked in Nodegoat.

Once these two steps are sorted, the data is ready to be visualised. Nodegoat supports the visualisation of social graphs but also of combinations of social graphs and maps. You can check out a few examples on their website and use this User Guide to set up your own project. Sadly, we ran out of time to test this new feature and develop queries together with our domain experts.

All in all, this was a great opportunity to start a collaboration between the CVCE DH Lab and CVCE EIS, Fintan of CRP Lippmann, and Geert and Pim of Lab1100.