Network Visualisations of 38,000 Letters of 19th-Century Intellectuals

CORE Admin

Every bit of information that is entered into nodegoat can immediately be published through a public user interface. This allows the Encyclopedia of Romantic Nationalism in Europe to instantly publish articles and a wide range of research data. This data also includes a set of over 38,000 letters that can be queried through the public user interface. In this blog post we discuss the steps we took to allow visitors to dynamically explore this dataset.

The Study Platform on Interlocking Nationalisms (SPIN) at the University of Amsterdam has created a dataset of metadata on over 38,000 letters of nineteenth-century intellectuals. The data has been entered manually and imported semi-automatically (geo-referencing and the disambiguation of people were largely done by hand). Sources include a range of published letter collections, such as Breve fra og til Carl Christian Rafn, med en biographi, plus two existing datasets: (1) the metadata of over 18,000 letters of Jacob and Wilhelm Grimm, provided by the Arbeitsstelle Grimm-Briefwechsel Berlin, and (2) the metadata of over 14,000 letters of Sir Walter Scott, provided by the Millgate Union Catalogue of W. Scott Correspondence, courtesy of Professor Millgate and the National Library of Scotland. The remaining 6,000 letters were entered by hand by SPIN, based on published letters of various other intellectuals throughout Europe. The dataset is therefore a combination of a number of personal networks, with an overrepresentation of letters sent by the people at the centre of these networks.
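
To give a rough idea of what a single record in such a dataset of letter metadata looks like, here is a minimal sketch in Python. The field names (sender, recipient, date, place of sending, source) are our own illustrative choices and do not mirror the actual SPIN/nodegoat data model; the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LetterRecord:
    """One row of letter metadata (illustrative fields, not the actual SPIN data model)."""
    sender: str                       # disambiguated person, ideally backed by an external identifier
    recipient: str
    date: Optional[str] = None        # ISO date string; may be partial or unknown
    place_sent: Optional[str] = None  # geo-referenced place name
    source: Optional[str] = None      # publication or dataset the metadata was taken from

# Hypothetical example record, for illustration only
letter = LetterRecord(
    sender="Jacob Grimm",
    recipient="Wilhelm Grimm",
    date="1805-05-17",
    place_sent="Paris",
    source="Arbeitsstelle Grimm-Briefwechsel Berlin",
)
print(letter)
```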

This dataset is part of the Encyclopedia of Romantic Nationalism in Europe (ERNiE). ERNiE will include over 1,500 articles on topics and people associated with the era of romantic nationalism (e.g. 'Dress, design: Romanian', 'Karadžić, Vuk Stefanović', 'Felicia Hemans'). ERNiE also includes other materials like monuments, architecture, art, and currency. ERNiE is coordinated by SPIN and edited by Joep Leerssen.[....]


Upcoming nodegoat workshops in Ghent & Washington D.C. (and more)

CORE Admin

Next week there will be a nodegoat workshop at the 'DARIAH-EU Annual Meeting' in Ghent. This event will take place on 10-13 October. The nodegoat workshop will be on Tuesday 11 October from 14:00 to 15:30. You can find the full programme here.

There will also be a nodegoat workshop at the conference 'Creating Spatial Historical Knowledge. New Approaches, Opportunities and Epistemological Implications of Mapping History Digitally', organised by the German Historical Institute in Washington, D.C. The conference takes place on 20-22 October. The nodegoat workshop will be on Thursday 20 October from 14:15 to 16:00 and requires individual registration. The full programme of the conference can be found here.

We have proposed a session at THATCamp Amsterdam on Linked Data challenges. Together with Ingeborg van Vugt we plan to discuss the benefits and difficulties of Linked Data in the humanities.

After a stimulating Virtual Heritage Network conference last year in Maynooth, we look forward to this year's conference in Cork. The conference will take place on 8-10 December.[....]


nodegoat Community Meeting, Mundaneum 1 July

CORE Admin

There will be a nodegoat community meeting at the Mundaneum (Paul Otlet ftw) in Mons, Belgium, on 1 July. The meeting is an initiative of the TIC project at the University of Ghent, in cooperation with DARIAH-BE, and follows on from the doctoral workshop 'Tracing Mobilities & Socio-political Activism. 19th-20th centuries' that takes place at the Mundaneum from 29 June to 1 July.

The nodegoat community meeting will start with a general introduction on the current status of nodegoat and upcoming new features. Next, we will have four presentations of projects that make use of nodegoat.

See the full programme here (the nodegoat meeting is on the last page of the PDF).


Members of the US House of Representatives - Wikidata

CORE Admin

The following interactive visualisation explores the movements of 10,896 members of the United States House of Representatives, from the birth of Roger Sherman in 1721 up to the members serving in 2015. The Representatives move from their place of birth to their place of education and, where applicable, finally to their place of death. Click here to open the interactive visualisation.

Last April, we gave a talk at the tenth Historical Network Research workshop in Düsseldorf about the 'Reversed Classification' functionality in nodegoat. To illustrate what you can accomplish with this functionality, we queried Wikidata to get a dataset of all the members of the US House of Representatives, including their dates and places of birth and death, their professions, and the institutions where they were educated. We used this data to perform a reversed classification process that groups the Representatives into career politicians and politicians with a heterogeneous career. From there, you could start looking at the geographical patterns or educational backgrounds of these groups. See a graph of this network with these two 'career' nodes included here (canvas).
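
For readers who want to try a comparable extraction themselves, below is a minimal sketch that queries the public Wikidata SPARQL endpoint for members of the US House of Representatives with their dates and places of birth. It is not the exact query we used; the item and property IDs (Q13218630 'member of the United States House of Representatives', P39 'position held', P569 'date of birth', P19 'place of birth') are taken from Wikidata's documentation.

```python
import requests

# Minimal sketch: fetch members of the US House of Representatives from Wikidata,
# with (where available) their date and place of birth. Not the exact query used
# for the visualisation.
SPARQL = """
SELECT ?person ?personLabel ?birthDate ?birthPlaceLabel WHERE {
  ?person wdt:P39 wd:Q13218630 .              # position held: member of the US House of Representatives
  OPTIONAL { ?person wdt:P569 ?birthDate . }  # date of birth
  OPTIONAL { ?person wdt:P19 ?birthPlace . }  # place of birth
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "nodegoat-blog-example/0.1"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(
        row["personLabel"]["value"],
        row.get("birthDate", {}).get("value", "unknown date"),
        row.get("birthPlaceLabel", {}).get("value", "unknown place"),
    )
```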

The diachronic geographical visualisation of all this data in nodegoat turns out to be a nice bonus.


nodegoat Workshop in Düsseldorf 28-04-2016

CORE Admin
Image: Düsseldorf, Assumulator / CC BY-SA 3.0

The tenth Historical Network Research workshop will take place in Düsseldorf from 28-04-2016 to 30-04-2016. They have set up an exciting programme on the theme 'Fakten verknüpfen, Erkenntnisse gewinnen? Wissenschaftsgeschichte in Historischer Netzwerkanalyse' ('Linking facts, gaining insights? History of science in historical network analysis').

On the first day, we will host a nodegoat workshop. This workshop will last half a day and is titled 'Advanced HNR' (it will run in parallel with an introductory historical network research workshop by Martin Stark). Since we only have half a day, we encourage participants who have not used nodegoat before to watch our three tutorials that cover basic functionalities of nodegoat.[....]


A Wikidata/DBpedia Geography of Violence

CORE Admin

We have taken data available in Wikidata and DBpedia on 'Military Conflicts' to create an interactive visualisation in nodegoat.

Wikidata

From the outside, it can be a challenge to keep up with all the developments within the ever-expanding universe of wiki*/*pedia. So it's good to be reminded now and then of all the structured data that has become available thanks to their efforts.

This looks pretty neat, especially since Wikidata currently has over 947 million triples in its data store. Since battles usually have a place and a date, it would be nice to import this data into a data design in nodegoat and visualise these battles through time and space (diachronic geospatiality ftw).[....]
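
As a rough sketch of the kind of query that makes such an import possible, the snippet below asks the Wikidata SPARQL endpoint for battles that have both a coordinate location and a date. It assumes Q178561 is Wikidata's item for 'battle' and uses the properties P31 ('instance of'), P625 ('coordinate location') and P585 ('point in time'); the actual import into nodegoat works through its own linked data functionality rather than a script like this.

```python
import requests

# Sketch: battles with a coordinate location and a date, from the Wikidata SPARQL endpoint.
SPARQL = """
SELECT ?battle ?battleLabel ?coord ?date WHERE {
  ?battle wdt:P31 wd:Q178561 ;   # instance of: battle (assumed item ID)
          wdt:P625 ?coord ;      # coordinate location
          wdt:P585 ?date .       # point in time
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

rows = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "nodegoat-blog-example/0.1"},
).json()["results"]["bindings"]

for row in rows:
    print(row["battleLabel"]["value"], row["date"]["value"], row["coord"]["value"])
```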


Data modeling and database development for historians (slides)

CORE Admin

This week we gave a two-day workshop on data modeling and database development for historians. The workshop was part of the course Databases for young historians, which was sponsored by the Huizinga Instituut, the Posthumus Instituut, Huygens-ING and the Amsterdam Centre for Cultural Heritage and Identity (ACHI, UvA), and hosted by Huygens-ING.

We had a great time working with a group of historians who were eager to learn how to conceptualise data models and how to set up databases. We discussed a number of common issues that come up when historians start to think in terms of 'data':

  • How to determine the scope of your research?
  • How to deal with unknown/uncertain primary source material?
  • How to use/import 'structured' data?
  • How to reference entries in a dataset and how to deal with conflicting sources?
  • How to deal with unique/specific objects in a table/type?

These points were taken by the horns (pun intended) when every participant went on to conceptualise their own data model. To get a feel for classical database software (tables, primary keys, foreign keys, forms, etc.), they set up a database in LibreOffice Base. Finally, each participant created their own data model in nodegoat and presented their model and first bits of data.[....]
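
By way of illustration, the snippet below sketches the kind of table structure, with primary and foreign keys, that participants built, using Python's built-in sqlite3 module instead of LibreOffice Base; the table and column names are made up for the example.

```python
import sqlite3

# Illustrative relational structure with primary and foreign keys,
# analogous to what participants built in LibreOffice Base (names are invented).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE artwork (
    artwork_id INTEGER PRIMARY KEY,
    title      TEXT NOT NULL,
    creator_id INTEGER REFERENCES person(person_id)  -- foreign key to the person table
);
""")
con.execute("INSERT INTO person (person_id, name) VALUES (1, 'Rembrandt van Rijn')")
con.execute("INSERT INTO artwork (title, creator_id) VALUES ('Night Watch', 1)")

# Join the two tables to list each artwork with its creator
for row in con.execute("""
    SELECT artwork.title, person.name
    FROM artwork JOIN person ON artwork.creator_id = person.person_id
"""):
    print(row)
```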


Linked Data vs Curation Island

CORE Admin

You can now use nodegoat to query SPARQL endpoints like Wikidata, DBpedia, the Getty Vocabularies (AAT, ULAN, TGN), and the British Museum. Through the nodegoat graphical interface you can query linked data resources and store their URIs within your dataset. This means that you can search for all people in Wikidata matching the string 'Rembrandt' and select the URI of your choice (e.g. 'https://www.wikidata.org/wiki/Q5598'). By doing so, you add external identifiers to your dataset and introduce a form of authority control in your data. This helps to disambiguate objects (like persons or artworks with similar names) and also enhances the interoperability of your dataset. Both these aspects make it easier to share and reuse datasets.
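
As a generic illustration of this lookup-and-select step (not of nodegoat's actual implementation), the sketch below calls Wikidata's public wbsearchentities API to search for 'Rembrandt' and prints the candidate URIs one could store as external identifiers.

```python
import requests

# Search Wikidata for entities labelled 'Rembrandt' and list candidate URIs.
# This mirrors the lookup-and-select step described above; it is a generic
# example, not how nodegoat implements its linked data queries.
response = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbsearchentities",
        "search": "Rembrandt",
        "language": "en",
        "format": "json",
        "limit": 5,
    },
)
response.raise_for_status()

for hit in response.json()["search"]:
    # e.g. Q5598 -> https://www.wikidata.org/wiki/Q5598 (Rembrandt van Rijn)
    print(hit["id"], hit.get("description", ""), "->", f"https://www.wikidata.org/wiki/{hit['id']}")
```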

These two advantages (data disambiguation and data interoperability) are useful for researchers who work on small(-ish) but complex datasets. Researchers who feel that 'automated' research processes are unattainable for them, because their data is dispersed, heterogeneous, incomplete, or only available in analogue form, are more likely to rely on something like the old-fashioned card catalogue: a system in which all relevant objects and their varying attributes and relations are described. Luckily, we can also use digital tools to create and maintain card catalogues (databases). For a historian who is mapping the art market of a seventeenth-century Dutch town, a database is a very powerful tool to store and analyse all objects (persons, artworks, etc.) and the relations between these objects. Still, if no external identifiers are used, this dataset is nothing but a curated island (even if the data is published!).


Curation Island

Curation & Linked Data

The process we describe here aims to connect the craftsmanship of research in the humanities to the interconnected world of massive repositories, graph databases, and authority files. Other useful applications of linked data resources in the humanities have already been described extensively, such as using aggregation queries to analyse large collections, comparing and matching thesauri, or performing automated metadata reconciliation as described by the Free Your Metadata initiative.[....]
