Connect your nodegoat environment to Wikidata, BnF, Transkribus, Zotero, and others

CORE Admin

The nodegoat Guides have been extended with a new section on 'Ingestion Processes'. An Ingestion Process allows you to query an external resource and ingest the returned data into your nodegoat environment. Once the data is stored in nodegoat, it can be used for tagging, referencing, filtering, analysis, and visualisation purposes.

You can ingest data in order to gather a set of people or places that you intend to use in your research process. You can also ingest data that enriches your own research data. Any collection of primary sources or secondary sources that have been published to the web can be ingested as well. This means that you can ingest transcription data from Transkribus, or your complete (or filtered) Zotero library.

The development of the Ingestion Process was part of the project 'Dynamic Data Ingestion (DDI)' (presented in this workshop series) and builds upon the Linked Data Resource feature (initially commissioned by the TIC-project in 2015 and extended in collaboration with ADVN in 2019).

Every nodegoat user is able to make use of these features. In addition to the examples listed below, any endpoint that outputs JSON or XML can be queried. nodegoat data can be exported in CSV and ODT formats, or published via the nodegoat API as JSON and JSON-LD.
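As a minimal sketch of what such a round trip can look like outside of nodegoat, the following Python snippet queries a JSON endpoint and saves a selection of the returned fields as CSV. The endpoint URL and field names are hypothetical placeholders, not part of the nodegoat API.

```python
# Minimal sketch: query a JSON endpoint and save selected fields as CSV.
# The endpoint URL and field names below are hypothetical placeholders.
import csv
import requests

ENDPOINT = "https://example.org/api/objects"  # hypothetical endpoint

response = requests.get(ENDPOINT, params={"format": "json"}, timeout=30)
response.raise_for_status()
records = response.json()  # assuming the endpoint returns a list of objects

with open("export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name"])
    writer.writeheader()
    for record in records:
        # Keep only the fields of interest; adjust to the actual payload.
        writer.writerow({"id": record.get("id"), "name": record.get("name")})
```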

Wikidata

The first two guides deal with setting up a data model for places and people, and with ingesting geographical and biographical data from Wikidata: 'Ingest Geographical Data' and 'Ingest Biographical Data'. A number of SPARQL queries are needed to gather the selected data. As writing these queries can be challenging, we have added two commented queries (here and here) that explain the rationale behind them.
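To give an idea of the query mechanics involved, here is a small, self-contained Python sketch that runs a SPARQL query against the public Wikidata endpoint, selecting universities (Q3918) together with their coordinates (P625). It is an illustration only, not one of the commented queries from the guides.

```python
# Sketch: run a SPARQL query against the public Wikidata endpoint.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

# Select universities (instance of Q3918) with their coordinates (P625).
QUERY = """
SELECT ?university ?universityLabel ?coord WHERE {
  ?university wdt:P31 wd:Q3918 ;
              wdt:P625 ?coord .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "nodegoat-blog-example/0.1"},
    timeout=60,
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["universityLabel"]["value"], row["coord"]["value"])
```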

These first two guides illustrate a common point in working with relational data (e.g. data coming from graph databases or relational databases): you need to first ingest the referenced Objects (in this case universities) before you can make references to these Objects (in this case people attending the universities), as sketched in the snippet below.

A Chronological Visualisation allows you to explore the distribution in time of the ingested data.
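The following sketch illustrates this ordering with a plain in-memory store standing in for a nodegoat environment; the data, QIDs, and field names are invented for the example.

```python
# Sketch: referenced Objects (universities) must be ingested before the
# Objects that reference them (people). A plain list stands in for the
# nodegoat environment; the QIDs and names are placeholders.
universities = [{"qid": "Q1", "name": "Example University"}]
people = [{"name": "Example Scholar", "attended_qid": "Q1"}]

store = []       # stand-in for the stored Objects
qid_to_id = {}   # maps external ID (QID) -> internal Object ID

# Pass 1: ingest the universities and remember their internal IDs.
for uni in universities:
    store.append({"type": "University", "name": uni["name"]})
    qid_to_id[uni["qid"]] = len(store) - 1

# Pass 2: ingest the people, resolving each reference to an internal ID.
for person in people:
    store.append({
        "type": "Person",
        "name": person["name"],
        "attended": qid_to_id[person["attended_qid"]],  # resolved reference
    })

print(store)
```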

The third guide covers the importance of external identifiers. External identifiers can be added manually, as described in the guide 'Add External Identifiers', or ingested from a resource like Wikidata, as described in the newly added guide 'Ingest External Identifiers'.[....]
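As a taste of what ingesting external identifiers involves, this sketch fetches the VIAF ID (property P214) for a single Wikidata item (Q937, Albert Einstein) via the wbgetentities API. It is an illustration of the underlying lookup, not the ingestion process from the guide.

```python
# Sketch: fetch an external identifier (VIAF, property P214) for one
# Wikidata item (Q937, Albert Einstein) via the wbgetentities API.
import requests

API = "https://www.wikidata.org/w/api.php"
params = {
    "action": "wbgetentities",
    "ids": "Q937",
    "props": "claims",
    "format": "json",
}
data = requests.get(API, params=params, timeout=30).json()

claims = data["entities"]["Q937"]["claims"]
for claim in claims.get("P214", []):  # P214 = VIAF ID
    print("VIAF:", claim["mainsnak"]["datavalue"]["value"])
```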


Extended nodegoat Documentation & Guides

CORE Admin

We have added various new sections to the nodegoat documentation and have published these via a new publication platform on nodegoat.net: nodegoat.net/documentation. In addition to a revision of the existing content, this update also brings documentation on new features such as Ingestion Processes and Reconciliation Processes.

We have also republished the Guides using the same publication platform: nodegoat.net/guides. This makes publishing new Guides much easier, so expect to see new content there as well. We have added one new Guide already: following feedback on the lack of a general introduction to the basic principles of nodegoat, we have published the Guide 'Basic Principles'.

The new and existing content can now also be searched via nodegoat.net/search. Use this to find Blog Posts, Use Cases, Documentation Sections, or Guides that mention things like 'tags' or 'APIs'.[....]


Linking your Historical Sources to Open Data: workshop series organised by COST Action NEP4DISSENT

CORE Admin
Social visualisation of a subset of people in the COURAGE registry (in green) enriched with data from Wikidata: publications (in red) and publishing houses (in purple). The size of the nodes of the publishing houses is determined by their PageRank value.

The workshop series 'Linking your Historical Sources to Open Data', organised by the COST Action NEP4DISSENT, aims to help researchers connect their research data to existing Linked Open Data resources. These connections ensure that research data remains interoperable and allow for the ingestion of various relevant Linked Open Data resources.

In two workshop sessions we will discuss the basic principles of Linked Open Data and show you how your project can benefit from them. We will do this by setting up a nodegoat environment and connecting it to Linked Open Data resources. Data that has been collected in the COURAGE registry will be used to demonstrate how these connections can be set up. The COURAGE registry can be explored here; the data is available for download here. If you already have a configured nodegoat environment, you can use it during the workshop.[....]


nodegoat Workshop series organised by the SNSF SPARK project "Dynamic Data Ingestion"

CORE Admin
Geographic visualisation of a dataset collected as part of the SNSF SPARK project 'Dynamic Data Ingestion': geographical origins of medieval scholars stored in the university history databases Projet Studium Parisiense, ASFE Bologna, Repertorium Academicum Germanicum, and Ottocentenario Universita di Padova.

nodegoat has been extended with new features that allow you to ingest data from external resources. You can use this to enrich your dataset with contextual data from sources like Wikidata, or to load in publications via a library API or SPARQL endpoint. This extension of nodegoat has been developed as part of the SNSF SPARK project 'Dynamic Data Ingestion (DDI): Server-side data harmonization in historical research. A centralized approach to networking and providing interoperable research data to answer specific scientific questions'. This project was initiated and is led by Kaspar Gubler of the University of Bern.

Because this feature has been developed within nodegoat, it can be used by any nodegoat user. And because Ingestion Processes can be fully customised, they can be used to query any endpoint that publishes JSON data. This new feature allows you to use nodegoat as a graphical user interface to query, explore, and store Linked Open Data (LOD) from your own environment.

These newly developed functionalities build upon the Linked Data Resource feature that was added to nodegoat in 2015. This initial development was commissioned by the TIC-project at Ghent University and Maastricht University. The feature was further extended in 2019 during a project of the ADVN.

Workshop Series

We will organise a series of four virtual workshops to share the results of the project and explore nodegoat's data ingestion capabilities. These workshops will take place on 28-04-2021, 05-05-2021, 12-05-2021, and 26-05-2021. All sessions take place between 14:00 and 17:00 CEST. The workshops will be held via Zoom and will be recorded, so you can catch up on any session you miss.

The first two sessions will provide you with a general introduction to nodegoat: in the first session you will learn how to configure your nodegoat environment, while the second session will be devoted to importing a dataset. In the third session you will learn how to run ingestion processes in order to enrich any dataset by using external data sources. The fourth session will be used to query other data sources to ingest additional data.[....]


How to store uncertain data in nodegoat: ambiguous identities

CORE Admin

This blog post is part of a series on storing uncertain data in nodegoat: 'How to store uncertain data in nodegoat', 'Incomplete source material', 'Conflicting information', 'Ambiguous identities'.

There are many entities that share a name. This is often the case for cities (e.g. Springfield), or people (e.g. Francis Bacon). When you encounter such a name in a source, the context usually provides you with enough clues to know which of the entities is meant. However, in some cases the context is too vague or the entities too similar to be certain. In these cases you need to resort to interpretation and disambiguation. This is genuine scholarly work, since you always have to interpret your sources.

This blog post will describe a case in which disambiguation is needed. We will use the example of a research process that aims to reconstruct scholarly networks in the 17th and 18th centuries. In a research process that deals with scholarly networks, the source material will largely consist of citations and mentions in documents.

The disambiguation process will be described by means of a snippet taken from a publication by an anonymous author in 1714 with the title 'An account of the Samaritans; in a letter to J---- M------, Esq;' (ESTC Citation No. N16222).

This blog post uses the data model that was created in the nodegoat guide 'Create your first Type', and will use elements from the guide 'Add External Identifiers', and from the guide 'Add Source References'.

To store 'mentioned' statements, you can use the Type that was created in the guide 'Add Source References' and add a new Sub-Object in which mentions can be saved. To change the model, go to Model and edit the Type 'Publication'. Switch to the tab 'Sub-Object' and create a new Sub-Object with the name 'Mention'. Set the Date to 'None' and Location to 'None'. In the tab 'Description', click the green 'add' button twice to create three Sub-Object Descriptions. Name the first 'Person', the second 'Page Number', and the third 'Notes'. Set the value type for 'Person' to 'Reference: Type' and select the Type 'Person'. Set the value type for 'Page Number' to 'Integer' and set the value type for 'Notes' to 'Text'.

These settings are not set in stone. Adjust them so that they work for your project.[....]
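For illustration only, the configured 'Mention' Sub-Object could be thought of as a structure like the one below; the keys are invented for this sketch and do not reflect nodegoat's internal format.

```python
# Illustrative sketch (not nodegoat's internal format): the 'Mention'
# Sub-Object described above, with its three Sub-Object Descriptions.
mention = {
    "sub_object": "Mention",
    "date": None,      # Date set to 'None'
    "location": None,  # Location set to 'None'
    "descriptions": {
        "Person": {"value_type": "Reference: Type", "type": "Person", "value": None},
        "Page Number": {"value_type": "Integer", "value": None},
        "Notes": {"value_type": "Text", "value": ""},
    },
}
```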


How to store uncertain data in nodegoat: conflicting information

CORE Admin

This blog post is part of a series on storing uncertain data in nodegoat: 'How to store uncertain data in nodegoat', 'Incomplete source material', 'Conflicting information', 'Ambiguous identities'.

When you work your way through your source material, you might encounter two sources that deal with the same subject but contain contradictory data. In these cases, you usually have two options: you either choose one of the sources based on its reliability, or you make an interpretation that combines the data from both sources. To account for disagreement in your sources, a third option is to include both statements in your dataset. This blog post will show you how to include conflicting information in your nodegoat project.

This blog post uses the data model that was created in the nodegoat guide 'Create your first Type'. If you haven't set up a data model in your nodegoat environment yet, you can follow this guide to do so. If you already have a data model, you can apply the steps discussed below in your own data model.

The multiple births of John Chamberlayne

The Dictionary of National Biography of 1887 writes that John Chamberlayne was born 'about 1666'.


Leslie Stephen, Dictionary of National Biography, v. 10 (Smith, Elder & Co., 1887), page 9. Available at Wikisource.

In a more recent entry, in the Oxford Dictionary of National Biography (2009 version), he is said to have been born in '1668/9'.

The first source, published in 1887, does not give any details on how the statement 'about 1666' was arrived at. The second source, first published in 2004, states that he matriculated from Trinity College, Oxford, on 7 April 1685, aged sixteen. This fact has allowed the author to conclude that he must have been born in either 1668 or 1669. This conclusion rests on two premises: that the matriculation record lists an exact age (and not an estimate) and that John Chamberlayne knew his exact date of birth (which in the 17th century was not necessarily the case).[....]
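One way to account for this disagreement, sketched below with invented field names, is to record both birth statements side by side, each with its own source reference, rather than choosing between them.

```python
# Sketch: store both conflicting birth statements with their sources
# instead of choosing one. Field names are invented for this example.
birth_statements = [
    {
        "date": "about 1666",
        "source": "Dictionary of National Biography, v. 10 (1887), p. 9",
    },
    {
        "date": "1668/9",
        "source": "Oxford Dictionary of National Biography (2009 version)",
    },
]
```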


How to store uncertain data in nodegoat: incomplete source material

CORE Admin

This blog post is part of a series on storing uncertain data in nodegoat: 'How to store uncertain data in nodegoat', 'Incomplete source material', 'Conflicting information', 'Ambiguous identities'.

You are often confronted with omissions or inconclusive statements when you deal with historical source material. To let your dataset reflect the nature of your sources, it is important that you include these vague or uncertain statements in your data. This blog post will go over a number of strategies that will help you deal with these cases in your nodegoat project.

A common scenario is one in which you simply lack information. When this happens, you can decide to leave a given description, date, or location empty. This gives you the ability to create a filter that finds the Objects that have empty descriptions, dates, or locations.
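As a quick sketch of this idea outside of nodegoat, the snippet below filters a list of exported Objects for empty dates; the data and field names are invented for the example.

```python
# Sketch: find Objects whose date is still empty so they can be revisited.
# The data and field names are invented for this example.
objects = [
    {"name": "John Chamberlayne", "birth_date": None},
    {"name": "Edward Chamberlayne", "birth_date": "1616"},
]

missing_date = [obj for obj in objects if not obj["birth_date"]]
print(missing_date)  # [{'name': 'John Chamberlayne', 'birth_date': None}]
```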

In another situation you might encounter a source that provides only partial information: it does give you some information, but is inconclusive about how certain that information is. An example of this is the entry on John Chamberlayne in the Dictionary of National Biography:


Leslie Stephen, Dictionary of National Biography, v. 10 (Smith, Elder & Co., 1887), page 9. Available at Wikisource.

The first sentence of his entry reads: "CHAMBERLAYNE, JOHN (1666–1723), miscellaneous writer, a younger son of Edward Chamberlayne [q. v.], was born about 1666, probably in or near London."

We will discuss four strategies for accommodating this uncertain source: a true/false statement on certainty, a scale on the level of certainty, entering chronology statements, and entering geometries.[....]
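To make the four strategies concrete, here is an invented sketch of how they might look when attached to a single birth statement; the keys are illustrative and not nodegoat's own field names.

```python
# Illustrative sketch of the four strategies applied to one statement;
# the keys are invented and do not reflect nodegoat's own field names.
birth = {
    "value": "about 1666",
    "certain": False,                        # 1. true/false statement on certainty
    "certainty_level": 2,                    # 2. scale, e.g. 1 (low) to 5 (high)
    "chronology": {"not_before": "1664", "not_after": "1668"},  # 3. chronology statement
    "geometry": {                            # 4. geometry: 'in or near London'
        "type": "Point",
        "coordinates": [-0.1276, 51.5072],   # approximate centre of London
    },
}
```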


How to store uncertain data in nodegoat

CORE Admin

This blog post is part of a series on storing uncertain data in nodegoat: 'How to store uncertain data in nodegoat', 'Incomplete source material', 'Conflicting information', 'Ambiguous identities'.

Most scholars think about their research material in terms of nuances, vagueness, and uniqueness, whereas data is perceived as binary, strict, and repetitive. However, working with a digital tool does not mean that you can only work with binary oppositions or uncontested timestamps. On the contrary: by creating a good data model, you are able to include nuances, irregularities, contradictions, and vagueness in your database. A good data model is capable of making these insights and observations explicit. Instead of smoothing out irregularities in the data by simplifying the data model, the model should be adjusted to reflect the existing vagueness, conflicts, and ambiguities.

Before you start to adjust your data model to accommodate uncertainty, you should first try to determine the causes of uncertainty in your data. Most forms of uncertainty in data can be grouped into three categories: incomplete source material, conflicting information, and ambiguous identities.

These types of uncertainty can be dealt with in different ways. The next three blog posts will walk you through a number of possible solutions. The described strategies are not the only possible solutions: each research question is unique and may call for a solution of its own.

Incomplete source material

When the information you need is not available, incomplete, or vague, you have to decide whether you want to leave the respective parts in your data empty or enter data based on inference or conjecture. Read the blog post 'How to store uncertain data in nodegoat: incomplete source material' to learn how to deal with incomplete source material.

Conflicting information

You might encounter conflicting source material. Two sources might differ on the name of a person or the date of an event. To account for all possible perspectives, you can include the conflicting statements in your data. Read the blog post 'How to store uncertain data in nodegoat: conflicting information' to learn how to deal with conflicting information.[....]
