Thanks to the Allmaps project, it is now possible to use any map that has been published as a IIIF image as a background map in your geographic visualisations in nodegoat.
The International Image Interoperability Framework (IIIF) is a set of open standards for publishing digital objects, maintained by a consortium of cultural institutions. The list of institutions that publish their digitised maps as IIIF images is constantly growing. This overview provides a number of examples of available resources. The David Rumsey Map Collection also contains a large number of maps that have been published as IIIF images.
nodegoat users have already been able to use (historical) maps that are published as XYZ tiles. We have now updated our Guide 'Use a Historical Map' to describe the steps you need to take to use IIIF images as a background map for your geographic visualisations in nodegoat. The Guide uses the example of a historical map published in the Digital Collections of Leiden University Libraries.
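As a quick check outside of nodegoat, the sketch below (using a placeholder image URL) fetches the info.json document that every IIIF image exposes; if this request succeeds, the map is published according to the IIIF Image API and can be used as described in the Guide. The actual configuration of the background map happens in the nodegoat interface, not in code.

```python
import requests

# Hypothetical IIIF Image API base URL; replace it with the URL of the
# digitised map you want to use (e.g. a map from an institutional IIIF
# repository such as the Digital Collections of Leiden University Libraries).
IIIF_IMAGE_BASE = "https://example.org/iiif/my-historical-map"

# Every IIIF image exposes an info.json document that describes its size and
# tiling options; fetching it confirms that the image is indeed published
# according to the IIIF Image API.
info = requests.get(f"{IIIF_IMAGE_BASE}/info.json", timeout=30).json()

print(info.get("@context"))                   # IIIF Image API context (version 2 or 3)
print(info.get("width"), info.get("height"))  # pixel dimensions of the scanned map
```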
The nodegoat Guides have been extended with a new section on 'Ingestion Processes'. An Ingestion Process allows you to query an external resource and ingest the returned data into your nodegoat environment. Once the data is stored in nodegoat, it can be used for tagging, referencing, filtering, analysis, and visualisation purposes.
You can ingest data in order to gather a set of people or places that you intend to use in your research process. You can also ingest data that enriches your own research data. Any collection of primary or secondary sources that has been published to the web can be ingested as well. This means that you can ingest transcription data from Transkribus, or your complete (or filtered) Zotero library.
Every nodegoat user is able to make use of these features. In addition to the examples listed below, any endpoint that outputs JSON or XML can be queried. nodegoat data can be exported in CSV and ODT formats, or published via the nodegoat API as JSON and JSON-LD.
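To give an impression of the kind of external resource an Ingestion Process can query, the sketch below fetches items from a public Zotero group library via the Zotero web API (the group ID is a placeholder). This only illustrates the JSON such an endpoint returns; in nodegoat itself, the query and the mapping of the returned values to your Objects are configured in the Ingestion Process interface, without writing any code.

```python
import requests

# Placeholder ID of a public Zotero group library; a personal library can be
# queried via /users/<userID>/items with an API key instead.
ZOTERO_GROUP_ID = "123456"
url = f"https://api.zotero.org/groups/{ZOTERO_GROUP_ID}/items"

# The Zotero web API returns a JSON array of items; 'limit' caps the page size.
items = requests.get(
    url,
    params={"format": "json", "limit": 25},
    headers={"Zotero-API-Version": "3"},
    timeout=60,
).json()

for item in items:
    data = item["data"]  # bibliographic fields live under the 'data' key
    print(data.get("itemType"), "-", data.get("title"))
```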
Wikidata
The first two Guides deal with setting up a data model for places and people, and ingesting geographical and biographical data from Wikidata: 'Ingest Geographical Data' and 'Ingest Biographical Data'. A number of SPARQL queries are needed to gather the selected data. As writing these queries can be challenging, we have added two commented queries (here and here) that explain the rationale behind them.
These first two Guides illustrate a common point when working with relational data (e.g. data coming from graph databases or relational databases): you first need to ingest the referenced Objects (in this case universities) before you can make references to these Objects (in this case the people who attended these universities).
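As an indication of what such queries can look like, the sketch below runs two simplified SPARQL queries against the Wikidata Query Service: the first gathers universities with their coordinates (the Objects to ingest first), the second gathers people who attended them. These queries are illustrative only and are restricted to the Netherlands to keep the result set small; they are not the commented queries used in the Guides.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
HEADERS = {"User-Agent": "nodegoat-ingestion-example/0.1"}

def run_query(query):
    """Send a SPARQL query to the Wikidata Query Service and return the result rows."""
    response = requests.get(ENDPOINT, params={"query": query, "format": "json"},
                            headers=HEADERS, timeout=120)
    return response.json()["results"]["bindings"]

# Step 1: gather the referenced Objects first, i.e. universities and their
# coordinates (P31/P279*: instance of a (subclass of) university, Q3918;
# P625: coordinate location; P17/Q55: country = the Netherlands).
universities = run_query("""
SELECT ?university ?universityLabel ?coord WHERE {
  ?university wdt:P31/wdt:P279* wd:Q3918 ;
              wdt:P625 ?coord ;
              wdt:P17 wd:Q55 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
""")

# Step 2: only now query the people who attended these universities
# (P69: educated at; P569: date of birth), so that the 'educated at'
# references can point to Objects that already exist in your environment.
people = run_query("""
SELECT ?person ?personLabel ?university ?dob WHERE {
  ?person wdt:P69 ?university .
  ?university wdt:P31/wdt:P279* wd:Q3918 ;
              wdt:P17 wd:Q55 .
  OPTIONAL { ?person wdt:P569 ?dob . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
""")

print(len(universities), "universities,", len(people), "attendance records")
```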
The third Guide covers the importance of external identifiers. External identifiers can be added manually, as described in the Guide 'Add External Identifiers', or ingested from a resource like Wikidata, as described in the newly added Guide 'Ingest External Identifiers'.[....]
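A hedged sketch of what ingesting external identifiers from Wikidata can involve: the query below retrieves VIAF (P214) and GND (P227) identifiers for two placeholder items. In practice the set of items would correspond to the people already present in your environment, and the mapping of the returned values to nodegoat's external identifier fields is again configured in the Ingestion Process itself.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Two placeholder Wikidata items (Q937: Albert Einstein, Q1339: J.S. Bach);
# OPTIONAL keeps people without a VIAF or GND identifier in the result set.
query = """
SELECT ?person ?viaf ?gnd WHERE {
  VALUES ?person { wd:Q937 wd:Q1339 }
  OPTIONAL { ?person wdt:P214 ?viaf . }   # VIAF ID
  OPTIONAL { ?person wdt:P227 ?gnd . }    # GND ID
}
"""

rows = requests.get(ENDPOINT, params={"query": query, "format": "json"},
                    headers={"User-Agent": "nodegoat-identifier-example/0.1"},
                    timeout=60).json()["results"]["bindings"]

for row in rows:
    print(row["person"]["value"],
          row.get("viaf", {}).get("value"),
          row.get("gnd", {}).get("value"))
```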
We have added various new sections to the nodegoat documentation and have published these via a new publication platform on nodegoat.net: nodegoat.net/documentation. In addition to a revision of the existing content, this update also brings documentation on new features such as Ingestion Processes and Reconciliation Processes.
We have also republished the Guides using the same publication platform: nodegoat.net/guides. This makes publishing new Guides much easier, so expect to see new content there as well. We have added one new Guide already: after feedback on the lack of a general introduction to the basic principles of nodegoat, we have published the Guide 'Basic Principles'.
The new and existing content can now also be searched via nodegoat.net/search. Use this to find Blog Posts, Use Cases, Documentation Sections, or Guides that mention things like tags or APIs.[....]