Use nodegoat to create new datasets collaboratively or alone. Explore data by means of spatial and temporal visualisations. The built-in network analysis tools reveal patterns and central nodes.
Together with the Research School Political History we will run the workshop ‘Data management and analysis for historical research in nodegoat’ on 23 October 2023. The workshop takes place between 10:00 and 17:00 at the Oost-Indisch Huis in Amsterdam. This is an in-person event and registration is required. The registration deadline is 9 October.
Thanks to the Allmaps project, it is now possible to use any map that has been published as an IIIF image as a background map in your geographic visualisations in nodegoat.
The International Image Interoperability Framework (IIIF) is a set of open standards for publishing digital objects, maintained by a consortium of cultural institutions. The list of institutions that publish their digitised maps as IIIF images is constantly growing. This overview provides a number of examples of available resources. The David Rumsey Map Collection also contains a large number of maps that have been published as IIIF images.
nodegoat users have been able to use (historical) maps published as XYZ tiles. We have now updated our Guide 'Use a Historical Map' to describe the steps needed to use IIIF images as a background map for your geographic visualisations in nodegoat. The Guide uses the example of a historical map published in the Digital Collections of Leiden University Libraries.
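As background on what such a background map is built from: a IIIF Image API request follows a fixed URL pattern, `{base}/{region}/{size}/{rotation}/{quality}.{format}`. The sketch below (the identifier URL is hypothetical; real ones come from an institution's info.json or manifest) shows how region and size parameters select a tile of the image, assuming Image API 3.0 conventions such as the `max` size keyword:

```python
def iiif_image_url(base, region="full", size="max", rotation=0,
                   quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL:
    {base}/{region}/{size}/{rotation}/{quality}.{format}"""
    return f"{base}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Hypothetical identifier URL, for illustration only.
base = "https://example.org/iiif/historical-map"

full_image = iiif_image_url(base)
# A 1024x1024 pixel region of the source image, scaled to 512 pixels wide:
tile = iiif_image_url(base, region="0,0,1024,1024", size="512,")
print(full_image)
print(tile)
```

Tools like Allmaps then georeference such images so that tiles can be drawn in the right place on a geographic visualisation.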
On Wednesday 11 October 2023 we will run a nodegoat Workshop at Stockholm University. The workshop will take place between 10:00 and 16:30 at Bergsmannen, Stockholm University. This is an in-person event and registration is required. The registration deadline is 27 September.
The Scope functionality is used throughout nodegoat to traverse your data model and select elements to be included in a visualisation, analysis, or export. With the Scope, you can limit or expand your data selection. In a prosopographical analysis, you might want to include all educational institutes related to one person, plus all the relations of these institutes, while omitting all other personal relations of that person. Follow this Guide to learn how to configure a Scope.
Chronology Statements that you make in nodegoat allow you to specify what you mean by a statement like 'circa'. Instead of using qualitative statements about vagueness, Chronology Statements provide you with a way of making quantitative statements about vagueness. Chronology Statements also allow you to make relational date statements: 'the date point is between the sending of letter X and the sending of letter Y'. Follow this Guide to learn how to store uncertain dates by using Chronology Statements and follow this Guide to learn how to store relational dates by using Chronology Statements.
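The idea behind these two kinds of Chronology Statements can be sketched in a few lines of Python. This is an illustration of the concepts only, not nodegoat's internal representation; the function names and dates are invented:

```python
from datetime import date, timedelta

def circa(d, days=365):
    """Quantify 'circa': an explicit interval around a point date,
    instead of a qualitative label."""
    return (d - timedelta(days=days), d + timedelta(days=days))

def between(event_a, event_b):
    """Relational statement: the unknown date lies between two
    other dated events."""
    return (event_a, event_b)

# 'circa 1 May 1714', quantified as plus or minus one year:
lo, hi = circa(date(1714, 5, 1))

# 'the date point is between the sending of letter X and the
# sending of letter Y':
sent_x, sent_y = date(1714, 3, 2), date(1714, 6, 18)
lo2, hi2 = between(sent_x, sent_y)
print(lo, hi)
print(lo2, hi2)
```

The point in both cases is the same: vagueness is stored as an explicit, machine-readable interval rather than as a free-text qualifier.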
The temporally-aware dynamic network analysis functionality makes the temporal options offered by Chronology Statements available on any level of a Scope. This allows you to apply and pass temporality to time-bound connections in any of a Scope's paths. The dates from Chronology Statements can be sourced from every step in the traversal: ascendant nodes, descendant nodes, or combinations of both. The selected configuration can be applied to any or all of the connections (edges), with outbound or inbound directionality, or both.
Example: Academic Connections
With this functionality it is now possible to dynamically generate networks of people who attended the same educational institute at the same time, without specifying any dates in a filter. The temporally-aware dynamic network analysis functionality applies the initial date on every other relationship that appears on a specified path:
Two of four persons shown as having an overlapping academic connection.
The obvious benefit of this approach is its scalability: it allows you to quickly scrutinise complex networks based on time-bound connections.
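The core of the academic-connections example can be sketched as an interval-overlap test: two persons are connected only when their attendance spans at the same institute overlap in time. The records below are invented for illustration; nodegoat derives the intervals from Chronology Statements rather than from hard-coded years:

```python
from itertools import combinations

# (person, institute, start_year, end_year) -- invented sample records
attendance = [
    ("Anna",  "Leiden",  1701, 1705),
    ("Bram",  "Leiden",  1704, 1708),
    ("Clara", "Leiden",  1710, 1713),
    ("Dirk",  "Utrecht", 1702, 1706),
]

def overlapping_pairs(records):
    """Pairs of persons at the same institute whose time spans overlap."""
    pairs = []
    for (p1, i1, s1, e1), (p2, i2, s2, e2) in combinations(records, 2):
        if i1 == i2 and s1 <= e2 and s2 <= e1:
            pairs.append((p1, p2))
    return pairs

print(overlapping_pairs(attendance))  # [('Anna', 'Bram')]
```

Of the four persons, only Anna and Bram attended the same institute at the same time, which is exactly the kind of edge the temporally-aware analysis generates without any dates being specified in a filter.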
This year the Leipzig Research Centre Global Dynamics (ReCentGlobe) has set up a nodegoat Grow installation to service two multi-year research projects. The project 'Die Produktion von Weltwissen im Umbruch' uses nodegoat to analyse the globalisation of knowledge production by mapping the development of Area Studies and Global Studies in the German context over the past 15 years. The project 'African non-military conflict intervention practices' uses nodegoat to build a comprehensive database of non-military interventions since 2004 by the African Union and by Regional Economic Communities.
As a result of this collaboration, the ReCentGlobe initiative organises a public nodegoat workshop within the framework of the Digital Lab infrastructure. The workshop will take place at the ReCentGlobe institute on 25 July 2023. More information about the programme and registration can be found here.
The TIC-Collaborative project of Ghent University and Maastricht University has published a dataset on international social reform congresses and organisations (1846-1914). This dataset has been created and maintained in a nodegoat installation at Ghent University since early 2014.
The data has been published as CSV files in the 'IISH Data Collection' repository and can be downloaded here. The dataset contains 1206 organisations, 1052 publications, 23247 people, 1690 conferences, and 35609 conference attendance statements. All these statements have been enriched with spatio-temporal attributes, which allows for the diachronic and geographic analysis and exploration of the relational data.
nodegoat has been extended with four new features in the past months. These new features were commissioned by three research projects from Switzerland, Slovenia, and the Netherlands. All nodegoat users can now make use of these features.
Data Model Viewer
This feature has been commissioned by the Historical Institute of the University of Bern for the REPAC project.
As the complexity of the data model that you have implemented in nodegoat grows, it can be challenging to maintain an overview of all the Object Types, Object Descriptions, Sub-Objects, Sub-Object Descriptions, and all the relationships between these elements. You can now generate an overview of all the elements of your data model that have been enabled in a Project.
Go to 'Management' and click the name of a Project that is listed in the overview of Projects. Set the 'Mode' to 'References Only' to hide all non-relational elements. Set the 'Size' to 'Full Height' to expand the height beyond the size of the window. You can specify a DPI value and download a 'png' version of the generated overview. To enhance the legibility of the graph, you can reposition elements by dragging and dropping.
The nodegoat Guides have been extended with a new section on 'Ingestion Processes'. An Ingestion Process allows you to query an external resource and ingest the returned data in your nodegoat environment. Once the data is stored in nodegoat, it can be used for tagging, referencing, filtering, analysis, and visualisation purposes.
You can ingest data in order to gather a set of people or places that you intend to use in your research process. You can also ingest data that enriches your own research data. Any collection of primary or secondary sources that has been published to the web can be ingested as well. This means that you can ingest transcription data from Transkribus, or your complete (or filtered) Zotero library.
Every nodegoat user is able to make use of these features. In addition to the examples listed below, every endpoint that outputs JSON or XML can be queried. nodegoat data can be exported in CSV and ODT formats, or published via the nodegoat API as JSON and JSON-LD.
Wikidata
The first two guides deal with setting up a data model for places and people, and ingesting geographical and biographical data from Wikidata: 'Ingest Geographical Data', 'Ingest Biographical Data'. A number of SPARQL queries are needed to gather the selected data. As writing these queries can be challenging, we have added two commented queries (here and here) that explain the rationale behind them.
These first two guides illustrate a common pattern in working with relational data (e.g. data coming from graph databases or relational databases): you first need to ingest the referenced Objects (in this case universities) before you can make references to these Objects (in this case people attending the universities).
A Chronological Visualisation that allows you to explore the distribution in time of the ingested data.
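The SPARQL queries in these guides are sent to the Wikidata Query Service, which returns results in the W3C SPARQL 1.1 JSON results format. As a rough sketch of what an ingestion process has to unpack, the snippet below flattens such a response into plain records; the sample response (including the entity URI) is invented for illustration:

```python
# A minimal, invented response in the W3C SPARQL 1.1 JSON results format,
# as returned by e.g. https://query.wikidata.org/sparql when requesting
# application/sparql-results+json.
response = {
    "head": {"vars": ["university", "universityLabel"]},
    "results": {"bindings": [
        {"university": {"type": "uri",
                        "value": "http://www.wikidata.org/entity/Q123"},
         "universityLabel": {"type": "literal",
                             "value": "Example University"}},
    ]},
}

def flatten(resp):
    """Turn SPARQL JSON bindings into plain dicts of variable -> value."""
    vars_ = resp["head"]["vars"]
    return [{v: row[v]["value"] for v in vars_ if v in row}
            for row in resp["results"]["bindings"]]

print(flatten(response))
```

Each flattened record can then be mapped onto the Object Descriptions of the Type you are ingesting into.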
The third guide covers the importance of external identifiers. External identifiers can be added manually, as described in the guide 'Add External Identifiers', or ingested from a resource like Wikidata, as described in the newly added guide 'Ingest External Identifiers'.
We have added various new sections to the nodegoat documentation and have published these via a new publication platform on nodegoat.net: nodegoat.net/documentation. In addition to a revision of the existing content, this update also brings documentation on new features such as Ingestion Processes and Reconciliation Processes.
We have also republished the Guides using the same publication platform: nodegoat.net/guides. This makes publishing new Guides much easier, so expect to see new content there as well. We have added one new Guide already: after feedback on the lack of a general introduction to the basic principles of nodegoat we have published the Guide 'Basic Principles'.
The new and existing content can now also be searched via nodegoat.net/search. Use this to find Blog Posts, Use Cases, Documentation Sections, or Guides that mention things like tags or APIs.
Social visualisation of a subset of people in the COURAGE registry (in green), enriched with data from Wikidata: publications (in red) and publishing houses (in purple). The size of the publishing-house nodes is determined by their PageRank value.
The workshop series ‘Linking your Historical Sources to Open Data’, organised by the COST Action NEP4DISSENT, aims to help researchers connect their research data to existing Linked Open Data resources. These connections ensure that research data remains interoperable and allow for the ingestion of various relevant Linked Open Data resources.
In two workshop sessions we will discuss the basic principles of Linked Open Data and show how your project can benefit from them. We will do this by setting up a nodegoat environment and connecting it to Linked Open Data resources. Data collected in the COURAGE registry will be used to demonstrate how these connections can be set up. The COURAGE registry can be explored here; the data is available for download here. If you already have a configured nodegoat environment, you can use it during the workshop.
Because this feature is developed in nodegoat, it can be used by any nodegoat user. And because ingestion processes can be fully customised, they can be used to query any endpoint that publishes JSON data. This new feature allows you to use nodegoat as a graphical user interface to query, explore, and store Linked Open Data (LOD) from your own environment.
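Conceptually, customising an ingestion process means mapping paths in an external JSON response onto fields of your own data model. The sketch below illustrates that idea only; the mapping, the dotted-path notation, and the sample record are invented and are not nodegoat's actual configuration format:

```python
def get_path(obj, path):
    """Follow a dotted path into nested JSON
    ('data.title' -> obj['data']['title'])."""
    for key in path.split("."):
        obj = obj[key]
    return obj

# Invented mapping: JSON path in the response -> field in your model
mapping = {"data.title": "Title", "data.date": "Date"}

def ingest(record, mapping):
    """Map one external JSON record onto your own field names."""
    return {field: get_path(record, path) for path, field in mapping.items()}

record = {"data": {"title": "An account of the Samaritans", "date": "1714"}}
print(ingest(record, mapping))
```

Because the endpoint, paths, and target fields are all configurable, the same mechanism works for any JSON-publishing resource.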
We will organise a series of four virtual workshops to share the results of the project and explore nodegoat's data ingestion capabilities. These workshops will take place on 28-04-2021, 05-05-2021, 12-05-2021, and 26-05-2021. All sessions take place between 14:00 and 17:00 CEST. The workshops will be held on Zoom and recorded, so you can watch a session afterwards to catch up.
The first two sessions will provide you with a general introduction to nodegoat: in the first session you will learn how to configure your nodegoat environment, while the second session will be devoted to importing a dataset. In the third session you will learn how to run ingestion processes in order to enrich any dataset by using external data sources. The fourth session will be used to query other data sources and ingest additional data.
There are many entities that share a name. This is often the case for cities (e.g. Springfield), or people (e.g. Francis Bacon). When you encounter such a name in a source, the context usually provides you with enough clues to know which of the entities is meant. However, in some cases the context is too vague or the entities too similar to be certain. In these cases you need to resort to interpretation and disambiguation. This is genuine scholarly work, since you always have to interpret your sources.
This blog post will describe a case in which disambiguation is needed. We will use the example of a research process that aims to reconstruct scholarly networks in the 17th and 18th centuries. In such a project, the source material will largely consist of citations and mentions in documents.
The disambiguation process will be described by means of a snippet taken from a publication by an anonymous author in 1714 with the title 'An account of the Samaritans; in a letter to J---- M------, Esq;' (ESTC Citation No. N16222).
To store 'mentioned' statements, you can use the Type that was created in the guide 'Add Source References' and add a new Sub-Object in which mentions can be saved. To change the model, go to Model and edit the Type 'Publication'. Switch to the tab 'Sub-Object' and create a new Sub-Object with the name 'Mention'. Set the Date to 'None' and Location to 'None'. In the tab 'Description', click the green 'add' button twice to create three Sub-Object Descriptions. Name the first 'Person', the second 'Page Number', and the third 'Notes'. Set the value type for 'Person' to 'Reference: Type' and select the Type 'Person'. Set the value type for 'Page Number' to 'Integer' and set the value type for 'Notes' to 'Text'.
These settings are not set in stone. Adjust them so that they work for your project.
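The steps above can be sketched as a simple data structure. This is an illustration of the resulting model only, not nodegoat's internal format:

```python
# Illustrative sketch (not nodegoat's internal format) of the 'Mention'
# Sub-Object configured above on the Type 'Publication'.
mention_sub_object = {
    "name": "Mention",
    "date": None,       # Date set to 'None'
    "location": None,   # Location set to 'None'
    "descriptions": [
        {"name": "Person", "value_type": "Reference: Type",
         "reference": "Person"},
        {"name": "Page Number", "value_type": "Integer"},
        {"name": "Notes", "value_type": "Text"},
    ],
}

print([d["name"] for d in mention_sub_object["descriptions"]])
```

Each 'Mention' Sub-Object thus links a Publication to a Person, records where in the publication the mention occurs, and leaves room for free-text notes on the disambiguation.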
When you work your way through your source material, you might encounter two sources that deal with the same subject but contain contradictory data. In these cases, you usually have two options: you either choose one of the sources based on its reliability, or you make an interpretation that combines the data from both sources. To account for disagreement in your sources, a third option is to include both statements in your dataset. This blog post will show you how to include conflicting information in your nodegoat project.
This blog post uses the data model that was created in the nodegoat guide 'Create your first Type'. If you haven't set up a data model in your nodegoat environment yet, you can follow this guide to do so. If you already have a data model, you can apply the steps discussed below in your own data model.
The multiple births of John Chamberlayne
The Dictionary of National Biography of 1887 writes that John Chamberlayne was born 'about 1666'.
Leslie Stephen, Dictionary of National Biography, v. 10 (Elder Smith & Co., 1887), page 9. Available at wikisource.
In a more recent entry, in the Oxford Dictionary of National Biography (2009 version), he is said to have been born in '1668/9'.
The first source, published in 1887, does not give any details on the way in which the statement 'about 1666' was formulated. The second source, first published in 2004, states that he matriculated from Trinity College, Oxford, on 7 April 1685, aged sixteen. This fact has allowed the author to conclude that he must have been born in either 1668 or 1669. This conclusion rests on two premises: that the matriculation record lists an exact age (and not an estimate), and that John Chamberlayne knew his date of birth precisely (which in the 17th century was not necessarily the case).
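The third option, keeping both statements, can be sketched as follows. Each statement is stored with its own value and source reference; reading 'about 1666' as the interval 1665-1667 is our own assumption here, made purely for illustration:

```python
# Keep both conflicting birth statements, each with its own source.
statements = [
    {"value": (1665, 1667), "note": "about 1666",
     "source": "Dictionary of National Biography, 1887"},
    {"value": (1668, 1669), "note": "1668/9",
     "source": "Oxford Dictionary of National Biography, 2009"},
]

def combined_range(stmts):
    """The widest interval compatible with any recorded statement."""
    return (min(s["value"][0] for s in stmts),
            max(s["value"][1] for s in stmts))

print(combined_range(statements))  # (1665, 1669)
```

Keeping both statements preserves the disagreement between the sources, while the combined range remains available for filtering and visualisation.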
"A relational database such as nodegoat is an excellent tool to create an overview of source material and explore all possible types of relationships that change in time and space."
On November 24, the Digital Humanities department and the Data Science Lab of the University of Bern will organise the event 'nodegoat: Show & Tell Me More'. Users of the university's nodegoat installation will present their ongoing projects.
Learning lots about the growing functionalities of @nodegoat - the relational web-based data management & network & geospatial visualization platform of @LAB1100, w/ support of @ClariahV
Full day of #nodegoat workshops at the Ghent Centre for Digital Humanities: a beginners workshop in the morning and a workshop for more advanced users in the afternoon. Great questions about data modelling, IIIF integrations, vague dates, and data publications.
We will be hosting a 2 part @nodegoat workshop on 16/11 @ugent to support researchers in using the relational web-based data management system & network & geospatial visualizations. Workshops will be given by @LAB1100 with the support of @ClariahV
Register now for our #nodegoat workshop next week at #UGent, organised by CLARIAH-VL and GhentCDH. We run a session for beginners in the morning and a session for more advanced users in the afternoon. See you there!
We're excited to host Pim van Bree & Geert Kessels (@LAB1100) for an in-person workshop, From Archives to Analysis: Data Management & Analysis for Humanities w/ @Nodegoat on Nov 2. Conveners: @SuphanKrmzltn @DJWrisley @burak_sayim
The nodegoat workshop at New York University Abu Dhabi was a lot of fun with both fellows and students joining throughout the day. Many thanks to Suphan and David for hosting. Slides can be found here 👇
Register for the nodegoat Workshop ‘Data management and analysis for historical research in nodegoat’ organised by the Research School Political History in Amsterdam on 23 October 2023. Registration deadline is 9 October.
Start of a new nodegoat Go project at DaSCH - Swiss National Data and Service Center for the Humanities. https://www.dasch.swiss/
Blog post about a new #nodegoat Guide: Use any @IIIF Published Map as a Background in your Geographic Visualisations
Register now for the virtual workshop 'An Introduction to nodegoat for Byzantinists' by Jesse W. Torgerson (Wesleyan University) on Friday, October 13, 2023, 12:00–3:00 PM EDT.