Linking your Historical Sources to Open Data: workshop series organised by COST Action NEP4DISSENT

CORE Admin
Social visualisation of a subset of people in the COURAGE registry (in green) enriched with data from Wikidata: publications (in red) and publishing houses (in purple). The size of the nodes of the publishing houses is determined by their PageRank value.

The workshop series ‘Linking your Historical Sources to Open Data’, organised by the COST Action NEP4DISSENT, aims to help researchers connect their research data to existing Linked Open Data resources. These connections ensure that research data remains interoperable and make it possible to ingest various relevant Linked Open Data resources.

In two workshop sessions we will discuss the basic principles of Linked Open Data and show how your project can benefit from them. We will do this by setting up a nodegoat environment and connecting it to Linked Open Data resources. Data collected in the COURAGE registry will be used to demonstrate how these connections can be set up. The COURAGE registry can be explored here; the data is available for download here. If you already have a configured nodegoat environment, you can use it during the workshop.[....]


nodegoat Workshop series organised by the SNSF SPARK project "Dynamic Data Ingestion"

CORE Admin
Geographic visualisation of a dataset collected as part of the SNSF SPARK project 'Dynamic Data Ingestion': geographical origins of medieval scholars stored in the university history databases Projet Studium Parisiense, ASFE Bologna, Repertorium Academicum Germanicum, and Ottocentenario Universita di Padova.

nodegoat has been extended with new features that allow you to ingest data from external resources. You can use this to enrich your dataset with contextual data from sources like Wikidata, or to load in publications via a library API or SPARQL endpoint. This extension of nodegoat has been developed as part of the SNSF SPARK project 'Dynamic Data Ingestion (DDI): Server-side data harmonization in historical research. A centralized approach to networking and providing interoperable research data to answer specific scientific questions'. The project was initiated and is led by Kaspar Gubler of the University of Bern.

Because this feature has been developed within nodegoat, it can be used by any nodegoat user. And because the ingestion processes can be fully customised, they can be used to query any endpoint that publishes JSON data. This new feature allows you to use nodegoat as a graphical user interface to query, explore, and store Linked Open Data (LOD) from your own environment.
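nodegoat runs these ingestion processes server-side and you configure them entirely through its graphical interface. As a rough illustration of the underlying pattern (not part of nodegoat itself), the Python sketch below flattens a Wikidata-style SPARQL JSON result into simple records; the sample payload is a trimmed, hypothetical response, but the `head`/`results`/`bindings` structure follows the standard SPARQL JSON results format.

```python
import json

# A trimmed, hypothetical example of the JSON shape returned by a
# SPARQL endpoint such as Wikidata (application/sparql-results+json).
sample_response = json.loads("""
{
  "head": {"vars": ["person", "personLabel", "birth"]},
  "results": {"bindings": [
    {"person": {"type": "uri", "value": "http://www.wikidata.org/entity/Q42"},
     "personLabel": {"type": "literal", "value": "Douglas Adams"},
     "birth": {"type": "literal", "value": "1952-03-11"}}
  ]}
}
""")

def flatten_bindings(response):
    """Map each SPARQL result binding to a flat dict of variable -> value."""
    variables = response["head"]["vars"]
    return [
        {var: binding[var]["value"] for var in variables if var in binding}
        for binding in response["results"]["bindings"]
    ]

records = flatten_bindings(sample_response)
print(records[0]["personLabel"])  # Douglas Adams
```

An ingestion process in nodegoat performs a comparable mapping step: you point it at an endpoint and tell it which parts of the returned JSON should populate which fields of your data Model.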

These newly developed functionalities build upon the Linked Data Resource feature that was added to nodegoat in 2015. This initial development was commissioned by the TIC project at Ghent University and Maastricht University. The feature was further extended in 2019 during a project of the ADVN.

Workshop Series

We will organise a series of four virtual workshops to share the results of the project and explore nodegoat's data ingestion capabilities. The workshops will take place on 28-04-2021, 05-05-2021, 12-05-2021, and 26-05-2021; all sessions run between 14:00 and 17:00 CEST. The sessions will be held via Zoom and recorded, so you can catch up on any session you miss.

The first two sessions will provide a general introduction to nodegoat: in the first session you will learn how to configure your nodegoat environment, while the second session will be devoted to importing a dataset. In the third session you will learn how to run ingestion processes in order to enrich a dataset with external data sources. The fourth session will be devoted to querying other data sources to ingest additional data.[....]


nodegoat API

CORE Admin

As a result of our cooperation with nodegoat's institutional partners, we have been able to develop a RESTful API for nodegoat.

The API provides an additional interface to query and store data in your Projects in nodegoat. We have integrated the API with nodegoat's core functionalities and have optimised it for large operations. The API can also be used to update the data Model, which allows you to update specific attributes of a Type, or to upload a whole data Model with multiple Type templates in one go.

You can use the Project settings to configure what parts of your data are exposed through the API. The API can be configured to require authentication or allow for public access.

Documentation for the API is available via the nodegoat Documentation. To learn how you can query your data to use it in other applications, see: https://nodegoat.gitbooks.io/documentation/content/usage/API/query.html. To learn about storing your data using the API, see: https://nodegoat.gitbooks.io/documentation/content/usage/API/store.html.

If you want to use the API with your own research data, get in touch!

We have enabled the API for a demo domain. You can access this domain by logging in to nodegoat.net with the username 'demo' and password 'demo'. The following cURL commands return a JSON package with the information that has been entered about the French intellectual Ernest Renan. You can also open the URLs in your web browser to view the output.

curl http://demo.nodegoat.io/project/1/data/type/4/object?search=renan -X GET

or

curl http://demo.nodegoat.io/ngAQ3A96sAJ3kMmZiAQD3 -X GET

output:[....]
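The same request can be issued from any HTTP client. A minimal Python sketch using only the standard library, building the demo query URL shown above (the helper function name is ours, not part of nodegoat, and the structure of the returned JSON is not assumed here):

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import json

def build_object_query(base, project_id, type_id, **params):
    """Build a nodegoat API object query URL like the cURL example above."""
    query = urlencode(params)
    return f"{base}/project/{project_id}/data/type/{type_id}/object?{query}"

url = build_object_query("http://demo.nodegoat.io", 1, 4, search="renan")
# url == "http://demo.nodegoat.io/project/1/data/type/4/object?search=renan"

# Uncomment to perform the request (requires network access):
# with urlopen(url) as response:
#     data = json.load(response)
```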
