Connect your nodegoat environment to Wikidata, BnF, Transkribus, Zotero, and others

CORE Admin

The nodegoat Guides have been extended with a new section on 'Ingestion Processes'. An Ingestion Process allows you to query an external resource and ingest the returned data into your nodegoat environment. Once the data is stored in nodegoat, it can be used for tagging, referencing, filtering, analysis, and visualisation purposes.

You can ingest data in order to gather a set of people or places that you intend to use in your research. You can also ingest data that enriches your own research data. Any collection of primary or secondary sources that has been published to the web can be ingested as well. This means that you can ingest transcription data from Transkribus, or your complete (or filtered) Zotero library.
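To give an impression of the kind of request an Ingestion Process sends, here is a minimal Python sketch that queries the Zotero Web API for library items as JSON. The user ID, API key, and filter parameters are placeholders; within nodegoat you would configure the same endpoint and parameters through the interface rather than in code.

```python
# Minimal sketch (not a nodegoat component): fetch items from a Zotero
# library as JSON, the way an Ingestion Process would query the endpoint.
# ZOTERO_USER_ID and ZOTERO_API_KEY are placeholders.
import requests

ZOTERO_USER_ID = "123456"
ZOTERO_API_KEY = "your-api-key"

response = requests.get(
    f"https://api.zotero.org/users/{ZOTERO_USER_ID}/items",
    params={"format": "json", "limit": 25},
    headers={"Zotero-API-Version": "3", "Zotero-API-Key": ZOTERO_API_KEY},
    timeout=30,
)
response.raise_for_status()

# Each item carries its bibliographic fields in the 'data' object.
for item in response.json():
    data = item["data"]
    print(data.get("title"), "-", data.get("date"))
```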

The development of the Ingestion Process was part of the project 'Dynamic Data Ingestion (DDI)' (presented in this workshop series) and builds upon the Linked Data Resource feature (initially commissioned by the TIC-project in 2015 and extended in collaboration with ADVN in 2019).

Every nodegoat user is able to make use of these features. In addition to the examples listed below, any endpoint that outputs JSON or XML can be queried. nodegoat data can be exported in CSV and ODT formats, or published via the nodegoat API as JSON and JSON-LD.
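As a rough illustration of what "any endpoint that outputs JSON or XML can be queried" amounts to, the sketch below fetches a response and parses it according to its content type. The URL is a placeholder, not an actual nodegoat or partner endpoint.

```python
# Sketch only: query a placeholder endpoint and parse the response as
# JSON or XML, the two formats an Ingestion Process can work with.
import requests
import xml.etree.ElementTree as ET

URL = "https://example.org/api/records"  # placeholder endpoint

response = requests.get(URL, timeout=30)
response.raise_for_status()

content_type = response.headers.get("Content-Type", "")
if "json" in content_type:
    records = response.json()               # parsed JSON structure
elif "xml" in content_type:
    records = ET.fromstring(response.text)  # XML element tree
else:
    raise ValueError(f"Unexpected content type: {content_type}")
```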

Wikidata

The first two guides deal with setting up a data model for places and people, and with ingesting geographical and biographical data from Wikidata: 'Ingest Geographical Data' and 'Ingest Biographical Data'. A number of SPARQL queries are needed to gather the selected data. As writing these queries can be challenging, we have added two commented queries (here and here) that explain the rationale behind them.
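As a rough indication of what such a query looks like (these are not the exact queries used in the guides), the sketch below retrieves universities and their coordinates from the Wikidata Query Service, arbitrarily restricted to Switzerland to keep the result set small.

```python
# Sketch: geographical data from Wikidata, restricted to universities in
# Switzerland as an arbitrary example scope.
import requests

QUERY = """
SELECT ?university ?universityLabel ?coord WHERE {
  ?university wdt:P31/wdt:P279* wd:Q3918 ;  # instance of (a subclass of) university
              wdt:P625 ?coord ;             # coordinate location
              wdt:P17 wd:Q39 .              # country: Switzerland (example filter)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "ingestion-example/0.1"},  # placeholder user agent
    timeout=60,
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["universityLabel"]["value"], row["coord"]["value"])
```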

These first two guides illustrate a common point in working with relational data (e.g. data coming from graph databases or relational databases): you first need to ingest the referenced Objects (in this case universities) before you can make references to these Objects (in this case the people who attended the universities).
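Continuing the same example scope, a second sketch queries the people educated at those universities. The ?university URIs it returns have to correspond to Objects that were already ingested in the first step, otherwise the references cannot be resolved.

```python
# Sketch of the second step: people who attended the universities that
# were ingested first. Same arbitrary Swiss example scope as above.
import requests

QUERY = """
SELECT ?person ?personLabel ?university WHERE {
  ?person wdt:P31 wd:Q5 ;        # instance of: human
          wdt:P69 ?university .  # educated at
  ?university wdt:P31/wdt:P279* wd:Q3918 ;
              wdt:P17 wd:Q39 .   # same example filter as above
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 500
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "ingestion-example/0.1"},  # placeholder user agent
    timeout=60,
)
response.raise_for_status()

rows = response.json()["results"]["bindings"]
print(f"{len(rows)} person-university pairs returned")
```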

A Chronological Visualisation allows you to explore the distribution of the ingested data over time.

The third guide covers the importance of external identifiers. External identifiers can be added manually, as described in the guide 'Add External Identifiers', or ingested from a resource like Wikidata, as described in the newly added guide 'Ingest External Identifiers'.[....]
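As an illustration of how such identifiers can be retrieved in bulk (the guide describes how to configure this within nodegoat itself), here is a sketch that looks up VIAF and GND identifiers for two placeholder Wikidata items.

```python
# Sketch: external identifiers (VIAF, GND) for persons identified by
# their Wikidata QIDs. The QIDs in VALUES are placeholders.
import requests

QUERY = """
SELECT ?person ?viaf ?gnd WHERE {
  VALUES ?person { wd:Q1035 wd:Q1339 }   # placeholder QIDs
  OPTIONAL { ?person wdt:P214 ?viaf . }  # VIAF ID
  OPTIONAL { ?person wdt:P227 ?gnd . }   # GND ID
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "ingestion-example/0.1"},  # placeholder user agent
    timeout=60,
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(
        row["person"]["value"],
        row.get("viaf", {}).get("value", ""),
        row.get("gnd", {}).get("value", ""),
    )
```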


Extended nodegoat Documentation & Guides

CORE Admin

We have added various new sections to the nodegoat documentation and have published it via a new publication platform on nodegoat.net: nodegoat.net/documentation. In addition to a revision of the existing content, this update also brings documentation on new features such as Ingestion Processes and Reconciliation Processes.

We have also republished the Guides using the same publication platform: nodegoat.net/guides. This makes publishing new Guides much easier, so expect to see new content there as well. We have already added one new Guide: after feedback on the lack of a general introduction to the basic principles of nodegoat, we have published the Guide 'Basic Principles'.

The new and existing content can now also be searched via nodegoat.net/search. Use this to find Blog Posts, Use Cases, Documentation Sections, or Guides that mention things like tags or APIs.[....]


Geographic visualisation of biographies of scholars. Tobias Winnerling (Heinrich-Heine-Universität Düsseldorf), project: "Wer Wissen schafft. Gelehrter Nachruhm und Vergessenheit 1700 – 2015".

Social Network Graph of the network around Dutch engineer Cornelis Meijer. Project: "Mapping Notes and Nodes in Networks".