DHBenelux, June 8-10, 2016, Belval, Luxembourg

This year, the DHBenelux Conference took place on the Belval campus in Luxembourg, an amalgam of futuristic and industrial scenery and of technology-based facilities, from writing on the walls to automatic book scanning.

The pre-conference workshop I attended on day 1, Exploring Old Maps, brought into discussion questions related to the digitisation of old and historical maps, georeferencing and overlaying on modern maps, methodologies for recognising text (labels) and forms (e.g. streets, buildings) or for analysing landscape change, as well as viewpoints on crowd-sourced map enrichment and users’ engagement in geohumanities.

Days 2 and 3, framed by the two keynotes – one examining the relationship among data, algorithms and human interpretation, the other the modelling of ideas in a computational framework – included five thematic parallel sessions and a posters/demos exhibition. The parallel sessions I could attend contained talks on Historical Research and Network Analysis, Digital Textual Analysis and Digital Literary Analysis. The presentations in these categories covered a variety of topics such as:

  • social network analysis for studying:
    • individual relationships inside institutions for intellectual cooperation,
    • relations and interactions between producers and consumers of creative goods,
    • the role of interpersonal and group ties in the so-called “survival” networks under difficult historical conditions,
    • the disambiguation of person names in modern religious history;
  • textual analysis techniques for:
    • detecting patterns in news writing,
    • measuring the popularity of newspaper articles,
    • reconstructing the rhythm and prosody of old literary texts,
    • determining authorship in historical texts;
  • critical perspectives on large textual data collections and analytic tools in order to:
    • define and formulate the limits of an “ideal” digital corpus,
    • combine scalability and annotation-layer requirements for supporting retrieval and analysis infrastructures,
    • compare and evaluate the accuracy of different digital tools for narratological analysis;
  • tool building for:
    • supporting text normalisation, search and analysis of historical and literary texts from the distant past,
    • linking entities in “encyclopaedic” novels with corresponding disciplines in Semantic Web repositories.

In this context, my co-authored contributions, a paper and a demo, dealt with a combination of Human-Computer Interaction and digital textual analysis for interpreting usability test responses, and with a metaphorical didactic interpretation of historical writings on architecture via a model and interface for textual zooming.

One of the questions – proposed by the opening keynote speaker – discussed during the theme-based lunch was (approximately): What is specific to the humanities in digital humanities, apart from content? A potential subject of reflection for the next DHBenelux, in 2017 in Utrecht, the Netherlands.

histoGraph live demo + network teaching + Connect! workshop in Turku, Finland

Following an invitation by my colleague Kimmo Elo, a contemporary historian and social scientist with an interest in network analysis and text analysis, I went to meet members of Turku’s DH community, taught an introductory workshop on network data extraction and visualization and gave the first public live demo of the redeveloped histoGraph at the Connect! Perspectives in Digital Humanities workshop series.



Text Encoding Initiative, Conference and Members’ Meeting 2015, October 28-31, Lyon, France

The theme of this year’s TEI Conference was “Connect. Animate. Innovate”, focusing on the idea of continuous innovation, interaction and sharing within the TEI and, at a larger scale, the Digital Humanities community.


The variety of topics and approaches presented at the parallel paper and poster sessions, the SIG meeting and the keynote I could attend reflected this focal idea.

Sections like Correspondence in the TEI, Encoding historical data in TEI (my abstract and presentation) and Encoding orality and performance centred on both:

  1. particular types of data (letters – from ancient Akkadian tablets to early 20th-century personal collections of letters in Ireland; historical documents – such as the Western European Union’s archives on armament issues or the charters of the late medieval period in Spain; conceptual artists’ notebooks and oral music from the Maghreb);
  2. dedicated tools and methods for encoding and sharing these kinds of data (e.g. Web services for scholarly letter editions, modular and collaborative platforms for publishing historical structured data or for the transcription and annotation of oral phenomena).

Other sections, Interoperability and the TEI, Abstracting the TEI, Interchange and the TEI, Presentation and engagement with the TEI, Hermeneutics and the TEI, along with displaying a large array of data (dialectal and prosopographical databases, encoded plays, Antiquity textual resources, art history collections, literary editions, etc.) addressed a set of questions such as:

  • software engineering best practices for supporting interoperability;
  • sustainability and longevity of the TEI conceptual model and the need for an abstract concept independent of its implementation (TEI versus XML);
  • sonification and its potential use in comparing texts;
  • analysis of the submerged process of scholarly annotation in a TEI archive;
  • algorithmic approaches to the visualisation of complex textual structures in TEI;
  • building a model for Interlinked (inter-related, inter-dependent) Ordered Hierarchy of Content Objects – (I)OHCO;
  • creation of connected corpora based on TEI and Linked Open Data (LOD) technologies;
  • exploring the pedagogic potential of the TEI for developing multi-cultural literary projects at high school and early university level;
  • approaching, from a conceptual perspective, the application of the critical apparatus model to variants of iconographic representation;
  • adopting an ontological model for TEI Simple that allows interoperability between different vocabularies.

The same diversity of content characterised the posters and SIG sessions, with topics varying from tools (for stand-off markup, XML editing, visualisation, dissemination), historical and literary digital editions, to theoretical and practical considerations on TEI encoding, TEI integration with other standards (ISO) or TEI-based semantic and comparative studies on monolingual and multilingual corpora.

As the closing keynote pointed out, the TEI in particular and digital scholarship in general reveal that we are in a new world now. When people start teaching Homer with access to manuscripts, we are talking about a new kind of scholarship.

Conference “(Retro)Digitalisate – Kommentarkultur – Big Data”, 8-9 October 2015

In this blog post I highlight a few aspects which were of particular interest to me during the conference (Retro)Digitalisate – Kommentarkultur – Big Data: Zum Stand des Digitalen in den Geisteswissenschaften. The concept behind the conference was quite intriguing: no classical paper presentations, in an attempt to stop people from saying what they always say. Instead the organizers opted for a combination of panel discussions and keynotes. The conference’s subtitle describes the angle: “The state of the digital in the humanities”. I was invited to talk about tools and the (re-)use of digital materials and would like to thank the organizers Lilian Landes and Norbert Kunz for the invitation.


As far as I could tell, the conference brought together people from very diverse fields of work (libraries, publishing, humanities research, IT, lexicology…) who mixed in four panels on the topics “Digital publishing”, “Tools”, “Communication” and “Post-publications”. In the end, however, the issues which were really up for debate found their way into all of them: What counts as Digital Humanities? What consequences does this have for the humanities? Are the rapid changes we are experiencing changes for the worse, for the better, or are they just changes?

The four panels made it very clear that “digital” is changing the humanities, and made me agree even more with Mareike König, who calls the humanities digital already. But following the discussions also made it very clear that, at this stage, old models of publication, career planning and self-identification are in a state of profound transformation, which is experienced in a positive way by some and in a negative way by others. At this stage there is no way to predict where this transformation will lead the humanities, and continued cuts in funding in- and outside the universities are reasons to be concerned. This is also reflected in one angle common to all of the keynotes and panelists: sitting it out is not an option.

Conference “Europa baut auf Biographien”, Vienna 6-8 October 2015

This blog post highlights just a few aspects from the conference “Europa baut auf Biographien”, especially those which are particularly close to my field. I was there to talk about networks.

National Library of Vienna

National biography projects are seen as trusted repositories for information on selected personalities in a nation’s history. In most cases, three conditions need to be met in order to be considered for an entry: significant accomplishments, death and – since these projects typically ran/run for very long periods of time and move alphabetically – having a surname which starts with the right letter. This procedure is of course pragmatic in the pre-digital world, but it also meant that Konrad Bloch, a biochemist and Nobel Prize winner who died in the year 2000, never made it into the German national biography.

Apart from these selection biases and sometimes outdated information, the biographies remain a valuable resource for scholars and the public. It was great to see how keen national and regional biography projects in Germany, Austria and Switzerland were to interlink their entries by means of VIAF and GND IDs.

Growth and enrichment of national biographies were of particular interest: Paul Arthur’s keynote referred to a recent change in biography scholarship: a move towards biography understood as a network of intersecting lives and activities. His own efforts in this domain, for example, seek to aggregate biographical data from different parts of the population, all branded similarly: there is Indigenous Australia, Obituaries Australia, People Australia, Women Australia and Labour Australia, as well as a crowd-based project, HuNI, which lets the public add relations between objects from Australia’s cultural heritage.

Piek Vossen presented their work on BiographyNet and focused on the automated extraction of additional metadata by means of text analysis. Their accomplishments are impressive, and a short video on the site describes the project in detail.

In my interpretation, and tightly knit to the current wave of network thinking, a number of talks circled around the idea that biographies should no longer be thought of as isolated depictions of the great but reinterpreted as collective, explicitly European efforts. This is a big, great and, most of all, fundable idea.

#CitizenHums: My thoughts on the two-day event

From the periphery, I have always been interested in crowdsourcing and have often thought that it would be an interesting way to extend the www.bombsight.org apps that I developed back in 2012, i.e. would it be useful for the public to engage in the crowd collection and co-curation of memories, photographs and documents to add to the formal archival resources? Furthermore, here at the CVCE we have been considering whether crowd-sourcing would be a helpful way of transcribing handwritten and other documents from our Pierre Werner publications, or even for error detection and validation of automatic OCR and entity extraction. In our histoGraph tool, the notion of expert crowds has been implemented to annotate and validate the automatic face recognition of European figures produced by the algorithms that processed our picture collection.

Therefore, I was interested to find out more about crowd-sourcing projects and research in the Humanities. At the beginning of September I was lucky enough to return to London to attend the Crowdsourcing for the Humanities in the 21st Century event, organised by King’s College London.

The theme of the two days of presentations and discussions was “Citizen Humanities Comes of Age: Crowdsourcing for the Humanities in the 21st Century”. The event was made up of a mix of interdisciplinary researchers from diverse backgrounds such as Archaeology, Ecology, Archival Research, Social Sciences and Humanities, which made for fascinating discussion.

For me, some of the points of interest collated from the discussions and speakers include:

  • There is a need for more research around the ethical considerations of crowd-sourcing activities. Is it possible to recognise the skills acquired and the time contributed to such humanities projects?
  • Crowds alone do not sustain projects; it is essential to build a community from the bottom up (this reminds me of the very successful OpenStreetMap community).
  • Projects should not just be about technology and data capture – there needs to be greater emphasis on the research questions and on managing social experiences.
  • There is an increasing need to nurture and prioritise human factors to improve the likelihood of success.
  • Is it possible to develop crowd-sourcing pipelines that mix automation, human intervention and machine learning to further develop crowd-sourcing activities?
  • Project design should enable the framing of dynamic research questions that emerge as projects evolve.
  • All projects have difficulty in overcoming the long tail, where the highest volume of contributions is made by a small proportion of all the contributors.
  • Successful projects should consider integrating a task ecosystem that provides a mix of different tasks taking different lengths of time to complete, appealing to non-experts and experts depending on the type of task and so eliciting different types of contributions.
  • Lots of projects fail, but rarely are there discussions of why or how they fail (for me, lessons learned from failure can be as interesting as those learned from success).


Digital Humanities at Oxford Summer School

This blog post presents a short summary of the CVCE DH Lab’s participation in the Digital Humanities at Oxford Summer School from 20 to 24 July 2015.

“Managing modern data for academic research” was the focus of the workshop “Humanities Data: Curation, Analysis, Access and Reuse” organized by the Digital Humanities at Oxford Summer School from 20 to 24 July 2015.

Data constituted the heart of this session, with topics such as the importance of maintaining research data and digital information in order to preserve their meaning and usefulness.

This session was an opportunity to explore data concepts and practices, with an emphasis on digital humanities data curation. Data curation appears as the active and ongoing management of data through its lifecycle of interest. After an introduction to the Conceptual Frameworks of Humanities Data Curation, the workshop covered several topics including Metadata Normalization with OpenRefine, Information Organization, Data Modelling, Big Data and Data Analysis, as well as workflows (personal and institutional) and research objects. One section focused on the languages OWL and OWL 2. Case studies included examples from the HathiTrust, EEBO-TCP and the BUDDAH project.

I found the discussion around defining data and datasets, issues of provenance, the new challenges and opportunities raised by data curation, and the comparison between data curation and digital preservation particularly interesting, as it is relevant to the work we are undertaking at the CVCE.

This event was also an occasion to meet and talk to the other participants and speakers from all over the world. A highlight was the peer-reviewed poster session, an exhibition of 30 posters presenting work from researchers and highlighting the diversity and state of the art of the field.

More information is available at http://dhoxss.humanities.ox.ac.uk/2015/humanitiesdata.html

Towards an XML-TEI Edition of Diplomatic Documents – Part 2. EVT Visualisation

One of the most common formats in digital documentary editions is the side-by-side layout (Pierazzo, 2014). It allows us to compare a digital facsimile with the transcription of the original document, a potentially useful feature for researchers interested in the “evidentiary value” (Kline and Perdue, 2013) of the consulted materials.

As part of a larger research project at the CVCE, Diplomacy within Western European Union (WEU), we are working on a digital edition using this dual layout. The edition will include XML-TEI P5 encoded documents and digital images of the original sources from the WEU archives. An overview of the collection and the corresponding workflow are presented in Part 1.

The work is ongoing, and we are currently adapting the EVT (Edition Visualization Technology) framework to the CVCE’s needs (existing and intended Web site architecture, different types of documents and projects, users’ requests, etc.). As shown in the demo, the tool provides a series of features such as side-by-side visualisation, navigation, image-text linking, highlighting of “hot” areas and zooming in/out on the image.

Within a CVCE – EVT team collaboration, a student from the University of Pisa, Italy, will be involved this summer, together with the CVCE’s DH Lab, in building the edition starting from the EVT model.

References

Kline, Mary-Jo, and Susan Holbrook Perdue. 2013. A Guide to Documentary Editing. Third edition, online version. ADE, UVA Press.

Pierazzo, Elena. 2014. “Digital Documentary Editions and the Others”. In Scholarly Editing: The Annual of the Association for Documentary Editing, Volume 35.

Making histoGraph open source

As many of you already know, histoGraph is a web platform designed to help researchers explore large multimedia archives such as the ones we have here at the CVCE. The histoGraph tool has two main goals:

  1. To enable users to find and identify the most relevant documents for research
  2. To discover connections between persons through documents.

In the last few months, our designer/developer Daniele Guido has started the redesign and development of the tool: this screencast shows some of the new features of histoGraph and identifies the benefits and limitations of the new design.

We can now explore the “neighborhood” of a person in terms of other co-occurring people and documents, follow the paths that connect one person to another, or simply search for resources and pose questions related to a person or a document.

Still under development, the open-source version of histoGraph starts from this assumption and essentially serves as a recommendation system. The original database has been enriched with CVCE text documents and transposed to a graph database in order to speed up all network-related computation. Moreover, a simple yet powerful text extraction chain has been added: thanks to the Yago disambiguation engine, all text captions, titles and abstracts have been annotated with named entities in at least two languages.
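As a rough illustration of what such an annotation step does (this is not the actual histoGraph pipeline, which relies on Yago), here is a minimal named-entity recognition sketch in Python using spaCy as a stand-in; the caption text is invented:

```python
# Minimal sketch of a named-entity annotation step, for illustration only.
# histoGraph itself relies on the Yago disambiguation engine; spaCy is used
# here purely as a stand-in to show the general shape of such a chain.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, assumed to be installed

def annotate(text):
    """Return (surface form, entity label) pairs found in a caption or abstract."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

# Invented caption, not taken from the CVCE collection.
print(annotate("Konrad Adenauer meets Jean Monnet in Luxembourg in 1956."))
# e.g. [('Konrad Adenauer', 'PERSON'), ('Jean Monnet', 'PERSON'),
#       ('Luxembourg', 'GPE'), ('1956', 'DATE')]
```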

Additionally, the dates of both pictures and text documents have been reconciled to the ISO standard; geographical places have been resolved to latitude and longitude using GeoNames and the Google Geocoding API; finally, person entities have been enriched with related DBpedia information. We will shortly be updating the data curation workflow to assess and validate the quality of the automatic extraction of these entities, and we plan to further develop our concept of expert crowdsourcing techniques.
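Sketched very roughly, and purely as an assumption about how such a reconciliation step might look rather than histoGraph’s actual code, the date and place normalisation could work along these lines (the GeoNames account name is a placeholder):

```python
# Rough sketch of the reconciliation steps described above (not histoGraph's
# actual code): free-form dates to ISO 8601 and place names to coordinates
# via the GeoNames search API.
import requests
from dateutil import parser as dateparser

GEONAMES_USER = "demo_user"  # hypothetical account name

def to_iso(date_string):
    """Normalise a free-form date such as '17 October 1954' to ISO 8601."""
    return dateparser.parse(date_string).date().isoformat()

def geocode(place_name):
    """Resolve a place name to (latitude, longitude) using GeoNames."""
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": place_name, "maxRows": 1, "username": GEONAMES_USER},
        timeout=10,
    )
    rows = resp.json().get("geonames", [])
    if not rows:
        return None
    return float(rows[0]["lat"]), float(rows[0]["lng"])

print(to_iso("17 October 1954"))  # -> 1954-10-17
print(geocode("Luxembourg"))      # -> approximate coordinates of Luxembourg
```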

We will use the blog to update you on progress so watch this space.

Sneak Preview of the new histoGraph at Sunbelt 2015

We presented the first screenshots of the all-new histoGraph during the poster session at this year’s INSNA Sunbelt conference on Social Network Analysis in Brighton. histoGraph facilitates the crowd-based indexation and exploration of multimedia cultural heritage materials. Click on the poster to get an idea of how this works and looks.

We have outlined our ideas for the future development of histoGraph in a short article titled Interactive Networks for Digital Cultural Heritage Collections – Scoping the Future of HistoGraph.


Should I do Social Network Analysis?

Network theories and methods have recently gained widespread attention. Graph visualizations look great and will easily grab the attention of any audience. However, for those who are new to the field, there is a steep learning curve. Novices need to master a variety of skills, from network theory to systematic data collection and appropriate means to compute and visualize networks, and finally face the challenge of linking the results of any analysis back to their original research questions.

This flow chart is meant to help people with the difficult decision of whether or not it is worthwhile and feasible to engage with network analysis. You may also think of these questions as a simple way to test the quality of existing projects.

This is my first draft of this flow chart, so nothing is set in stone and I am happy to consider any criticism you leave in the comments.

To get a first idea of Social Network Analysis concepts, you may want to take a look at the Cheat Sheet: Social Network Analysis for Humanists.


DH Lab at DH Benelux

On 8–9 June 2015, the Digital Humanities Lab at the CVCE was very happy to participate in the second DH Benelux Conference, hosted by the University of Antwerp. See the presentations below. We contributed a total of four paper presentations and one poster describing our work. Dr Catherine (Kate) Jones discussed the concept of developing tools for personal narrative making by reusing CVCE cultural heritage objects, whilst Dr Marten Düring and Dr Florentina Armaselu presented the first outcomes of exploring TEI annotation and network analysis within diplomatic documents. Dr Lars Wieneke, on behalf of Dr Florentina Armaselu, described the results of our recent collaboration with the University of Pisa and their EVT (Edition Visualisation Technology) software for the creation of new TEI-encoded digital editions.

MyPublications: Enabling personal authoring and narrative making

 

Europe’s Beginnings through the Looking Glass: Publishing Historical Documents on the Web Using EVT

 

Kate also presented results from the project she led prior to joining CVCE, which developed an augmented reality application to explore historical map data from the WW2 bomb census in London. She also participated in the lively closing panel discussion on the role of digital technology within the broader humanities discipline. Finally, the Director of the CVCE, Marianne Backes, co-authored a paper exploring the role of DARIAH and the Benelux.


The conference was very well attended, with more than 150 participants, and represented the strength of digital humanities within the Benelux region and beyond. The conference showcased the depth and variety of digital humanities research through presentations in a number of related themes: (1) Cyber Culture, (2) Reflection & Criticism, (3) Distant Readings, (4) Linked Open Data, (5) Digital Scholarly Editing, (6) Curation & Collection, (7) Geospatial Applications and (8) Networks & Topics. Excellent keynotes were delivered by William Noel, Director of the Kislak Center for Special Collections, Rare Books and Manuscripts at the University of Pennsylvania, and Elena Pierazzo, Professor of Italian Studies and Digital Humanities at Stendhal–Grenoble 3 University.

Next year the CVCE is looking forward to co-hosting the conference with the University of Luxembourg.

Cheat Sheet: Social Network Analysis for Humanists

Social Network Analysis concepts and methods are extremely powerful ways to describe complex social relations. The field, however, has developed its own concepts, some of which require a little bit of translation. This cheat sheet should help with the very first steps.


Actor: the entity which is described by a node, e.g. a person, an institution, etc.
Alter/alteri: actors in an ego network.
Broker: a node which is positioned, for example, between two clusters and can act as a “bottleneck”.
Clique: a fully interconnected group of nodes.
Community: a set of nodes which are relatively more connected to each other than to the rest of the graph.
Co-occurrence network: a network in which the edge between nodes is based on the fact that both appear together in the same context, usually in texts. An example would be two people who are mentioned in the same paragraph.
Data: a set of individual pieces of information which are typically machine-readable.
Data visualisation: the visualisation of data in general. Network visualisation is one of many subfields.
Dyad: a group of two connected nodes, the smallest possible network.
Edge attribute: data which describes a certain aspect of an edge, for example how often two actors speak to each other.
Edge (or tie, arc, link): what connects two nodes. There are slight differences between these terms but they are mostly used interchangeably.
Ego network: a network which contains all connections of an actor to their alteri and usually also the connections between the alteri.
Graph: (in a network and mathematical context) objects which are connected by links. However, often used interchangeably with “network”, even by experts.
Graph theory: the mathematical description of nodes and ties.
Historical Network Research: SNA + historical research methods and questions.
Hub: a node with a degree far higher than the average in a network.
Layout algorithms: algorithms which arrange a graph according to an underlying logical principle. Many different ones exist and they can be fine-tuned by additional parameters. Layout algorithms help reveal patterns in the network data. So-called force-directed algorithms are often used: they treat ties as “springs” which attract well-connected nodes and seek to avoid crossings between ties. The same data can look very different with different layout algorithms, and even the same algorithm will produce a different visualisation each time it is run; there is therefore no single “right” way to visualise a network.
Network dataset: typically machine-readable data which contains information on edges between nodes, on node attributes and on the graph itself.
Network metaphor: the most common way to refer to networks in the humanities. It typically describes the observation that social relations have an effect on something or somebody, without specifying it further.
Network theory: the theoretical approach to social networks which often inspires research and the development of algorithms.
Network visualisations: visual representations of network data. They can take various forms; commonly used are node-link diagrams and matrices.
Node attribute: data which describes a certain aspect of a node, for example an actor’s age or gender.
Node centrality: describes the extent to which a node is connected to other nodes within a network. Various algorithms exist to describe different aspects of such connectivity. To some extent, centrality can be linked to abstract notions such as “influence”, “power” or “importance”.
Node (or vertex): the object which is connected to other objects in a graph. The terms are often used interchangeably.
Relationships: any relationship between actors can be represented as an edge, and two actors can be connected by multiple ties. When conceptualising network ties, it is important that they are defined clearly to allow comparison: what exactly is, e.g., “friendship” or “collaboration”, and when does it apply?
Software for network visualisation and analysis: helps to create, compute, visualise and modify network data. Many different tools exist for different purposes; Gephi, NodeXL, UCINET and Pajek are among the best known.
Social Network Analysis (SNA): a cross-disciplinary field of research based on the axiom that the systematic study of relations between (mostly) humans (often exchanges of some kind) can help answer research questions.
Triad: similar to a dyad, a triad describes a group of three nodes. The number of completed triads in a graph can also provide insight into the structure of the network.
Unipartite (or 1-mode), bipartite (or 2-mode) and n-partite networks: unipartite networks describe relations between one type of node (e.g. people connected to people); bipartite networks describe relations between two types of nodes (e.g. people connected to organisations). A bipartite network can be projected into a unipartite network (people who were connected to the same organisation are now connected to each other). This concept can be expanded even further; in theory a network can have any number of types of actors. It is important to note that in these networks, nodes of the same type can never connect to each other, only to nodes of the other types.
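As a minimal, hypothetical illustration of the bipartite-to-unipartite projection described in the last entry (the people, organisations and ties are invented), here is a short networkx sketch:

```python
# Minimal illustration of projecting a bipartite (2-mode) network onto a
# unipartite (1-mode) one with networkx; the people, organisations and ties
# are invented for the example.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
people = ["Adenauer", "Monnet", "Spaak"]
organisations = ["ECSC", "WEU"]
B.add_nodes_from(people, bipartite=0)         # first node type: people
B.add_nodes_from(organisations, bipartite=1)  # second node type: organisations
B.add_edges_from([
    ("Adenauer", "ECSC"), ("Monnet", "ECSC"),
    ("Monnet", "WEU"), ("Spaak", "WEU"),
])

# People who were connected to the same organisation are now connected directly.
P = bipartite.projected_graph(B, people)
print(list(P.edges()))           # [('Adenauer', 'Monnet'), ('Monnet', 'Spaak')]
print(nx.degree_centrality(P))   # Monnet has the highest degree centrality
```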

 

Towards an XML-TEI Edition of Diplomatic Documents – Part 1. The Fine-tuning Challenge

The project is part of a larger CVCE research project on diplomacy within the Western European Union (WEU). The corpus includes a selection of bilingual institutional documents (French, English) on the production and standardisation of armaments (1954 to 1982) from the Archives Nationales de Luxembourg, WEU collection. The documents (in a first phase, those in French) were encoded in XML-TEI P5 in order to enable corpus analysis on specific elements, side-by-side visualisation of facsimiles and their transcriptions, as well as particular search engine capabilities (e.g. facets).

Several types of documents compose the corpus: meeting minutes, notes from the Secretary-General or Secretariat-General, memoranda and studies. Three categories of encoding are provided: metadata (title, author, availability date, origin place, confidentiality status, etc.), structural markup (headers, footers, sections, paragraphs, line breaks) and content-related annotations (discourse of country/institutional representatives, names of persons, organisations, places, etc.).

Building the TEI corpus involved: OCR processing with ABBYY FineReader (one image file per typewritten page), Microsoft Word styling and OxGarage conversion from DOCX to XML-TEI P5, semi-automatic enrichment by means of XSLT 2.0 and oXygen, Named Entity Recognition (NER) with GATE – General Architecture for Text Engineering – and manual annotation. Experiments are currently ongoing for corpus analysis, using the Textométrie TXM software (e.g. to discern specific linguistic patterns for the different country or institutional representatives), and for Web publishing (focusing on the potential adaptation of image-text edition tools such as EVT – Edition Visualization Technology – or the query/index/browsing features provided by platforms like XTF – eXtensible Text Framework – and KILN).
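To give a rough, purely illustrative idea of what the content-related enrichment produces (this is not the project’s actual XSLT/GATE pipeline; the snippet, the name and the identifier are invented), here is a small Python/lxml sketch:

```python
# Rough illustration of the kind of content-related enrichment described above:
# wrapping a known person name in <persName> inside a TEI paragraph. The snippet,
# the name and the identifier are invented; the project itself relies on XSLT 2.0,
# GATE-based NER and manual annotation rather than this script.
from lxml import etree

TEI_NS = "http://www.tei-c.org/ns/1.0"
snippet = f'<p xmlns="{TEI_NS}">Statement by Mr Spaak on the production of armaments.</p>'

def tag_person(paragraph, name, ref):
    """Wrap the first occurrence of `name` in the paragraph text in a persName element."""
    before, found, after = (paragraph.text or "").partition(name)
    if not found:
        return
    paragraph.text = before
    pers = etree.SubElement(paragraph, f"{{{TEI_NS}}}persName", ref=ref)
    pers.text = name
    pers.tail = after

p = etree.fromstring(snippet)
tag_person(p, "Spaak", "#spaak")  # "#spaak" is a hypothetical identifier
print(etree.tostring(p, encoding="unicode"))
# <p xmlns="...">Statement by Mr <persName ref="#spaak">Spaak</persName>
# on the production of armaments.</p>
```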


In the current stage of the project (Web publication of the TEI corpus), given the particular research needs and the CVCE’s back-end/front-end architecture, an adaptation/combination of different technologies and open source tools seems more probable than the use of a single existing platform for this purpose. Although the design of the actual TEI publication framework is still a work in progress, the main challenge of the whole process of creating the digital edition from the initial images has become apparent, i.e. fine-tuning the ratio of automatic versus manual processing and the implied modular, configurable mechanism based on “relatively steady/changing”, “core/project-specific” and “mandatory/optional” dichotomies.

 

Developing personalisation tools for ePublications

The CVCE has been working on the development of digital tools that enable our users to personalise content from our virtual research infrastructure on CVCE.eu. We are developing the idea of a Digital Toolbox where users will be able to customise content from all CVCE ePublications. The mid-term goal of the Digital Toolbox is to provide a personalised research infrastructure where users can create, collaborate on and comment on their own personal ePublications.

Structure of a personal ePublication using MyPublications tool

In the latter part of 2014 we launched the first, beta version of the Digital Toolbox. We have concentrated our efforts on the development of the MyPublications authoring tool, developed using a recently implemented Scrum methodology. The MyPublications tool will enable users of CVCE.eu to write ePublications on topics of interest, based on personalised research, teaching and learning needs, by organising and structuring CVCE resources – such as historical documents, press articles, photographs and other multimedia material – alongside their own text. The tool includes a presentation viewer, a simple interface where you can sequentially step through your ePublication (like a book!). In the coming weeks the developer of the tool, Frederic Reis, will also add more features, including a share link feature which will enable a MyPublication to be shared with colleagues and friends.

To find out more have a look at our first introductory video.

What about the technology underpinning the tool?

Simplified model of MyPublications technical infrastructure

The diagram shows a simplified model of the MyPublications technology derived from the research infrastructure. The CVCE data repository uses the Alfresco content management system to manage, store and retrieve a wide variety of cultural heritage objects such as documents, photographs, cartoons and videos. These objects are then served to the Java-based open source Liferay documents and media portal, which delivers our CVCE ePublications to the web and to our half a million website users.

The tool itself works on the client side and is written using a combination of HTML5, CSS and JavaScript to create an interface that calls, saves and embeds links to any type of cultural heritage object found within the CVCE collection. The resulting MyPublication is essentially a series of sequential object links and original texts (sections, sub-sections and paragraphs), all stored alongside the metadata in a MySQL database.
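Purely as an illustration of this structure (the class and field names are invented and do not reflect the actual MySQL schema), a stored MyPublication can be pictured roughly like this:

```python
# Rough picture of what a stored MyPublication amounts to, based on the
# description above. Class and field names are invented for illustration
# and do not reflect the actual MySQL schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Section:
    title: str
    own_text: str                                           # text written by the user
    object_links: List[str] = field(default_factory=list)   # links to CVCE objects

@dataclass
class MyPublication:
    author: str
    title: str
    sections: List[Section] = field(default_factory=list)   # kept in reading order

pub = MyPublication(
    author="A. Researcher",
    title="Origins of the WEU",
    sections=[
        Section(
            title="Background",
            own_text="A short personal introduction to the topic...",
            object_links=["https://www.cvce.eu/obj/example-object"],  # hypothetical link
        )
    ],
)
print(pub.sections[0].object_links)
```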

We will launch the full, tested version of the MyPublications tool in the early part of 2015. In the meantime if you want to be a beta tester please feel free to sign up.