Over the past months, nodegoat has been extended with four new features. These features were commissioned by three research projects from Switzerland, Slovenia, and The Netherlands. All nodegoat users can now make use of them.
Data Model Viewer
This feature has been commissioned by the Historical Institute of the University of Bern for the REPAC project.
When the complexity of the data model that you have implemented in nodegoat grows, it might be challenging to maintain an overview of all the Object Types, Object Descriptions, Sub-Objects, Sub-Object Descriptions, and all the relationships between these elements. You can now generate an overview of all the elements of your data model that have been enabled in a Project.
Go to 'Management' and click the name of a Project that is listed in the overview of Projects. Set the 'Mode' to 'References Only' to hide all non-relational elements. Set the 'Size' to 'Full Height' to expand the height beyond the size of the window. You can specify a DPI value and download a 'png' version of the generated overview. To enhance the legibility of the graph, you can reposition elements by dragging and dropping them.[....]
The nodegoat Guides have been extended with a new section on 'Ingestion Processes'. An Ingestion Process allows you to query an external resource and ingest the returned data in your nodegoat environment. Once the data is stored in nodegoat, it can be used for tagging, referencing, filtering, analysis, and visualisation purposes.
You can ingest data in order to gather a set of people or places that you intend to use in your research process. You can also ingest data that enriches your own research data. Any collection of primary sources or secondary sources that have been published to the web can be ingested as well. This means that you can ingest transcription data from Transkribus, or your complete (or filtered) Zotero library.
Every nodegoat user is able to make use of these features. Next to the examples listed below, every endpoint that outputs JSON or XML can be queried. nodegoat data can be exported in CSV and ODT formats, or published via the nodegoat API as JSON and JSON-LD.
The first two guides deal with setting up a data model for places and people, and with ingesting geographical and biographical data from Wikidata: 'Ingest Geographical Data' and 'Ingest Biographical Data'. A number of SPARQL queries are needed to gather the selected data. As writing these queries can be challenging, we have added two commented queries (here and here) that explain the rationale behind them.
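To give a sense of what such a query looks like, here is a minimal, hypothetical sketch in Python of the kind of SPARQL request an Ingestion Process sends to the Wikidata Query Service. The query itself (cities with coordinates) is an illustration, not one of the commented queries from the guides:

```python
# Sketch of a SPARQL request for geographical data from Wikidata,
# prepared with Python's standard library only (no request is sent here).
from urllib.parse import urlencode

# Illustrative query: items that are an instance of 'city' (Q515),
# together with their coordinate location (P625), limited for testing.
query = """
SELECT ?city ?cityLabel ?coords WHERE {
  ?city wdt:P31 wd:Q515 ;   # instance of: city
        wdt:P625 ?coords .  # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

# An ingestion process would call an endpoint like this and parse the JSON response:
endpoint = "https://query.wikidata.org/sparql"
url = endpoint + "?" + urlencode({"query": query, "format": "json"})
print(url[:60])
```

Because the endpoint returns plain JSON, the response can be mapped to Object Descriptions and Sub-Object Descriptions in the same way as any other JSON source.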
These first two guides illustrate a common point in working with relational data (e.g. coming from graph databases, or relational databases): you need to first ingest the referenced Objects (in this case universities) before you can make references to these Objects (in this case people attending the universities).
We have also republished the Guides using the same publication platform: nodegoat.net/guides. This makes publishing new Guides much easier, so expect to see new content there as well. We have added one new Guide already: after feedback on the lack of a general introduction to the basic principles of nodegoat we have published the Guide 'Basic Principles'.
The new and existing content can now also be searched via nodegoat.net/search. Use this to find Blog Posts, Use Cases, Documentation Sections, or Guides that mention topics such as tags or APIs.[....]
The workshop series ‘Linking your Historical Sources to Open Data’ organised by the COST Action NEP4DISSENT aims to help researchers connect their research data to existing Linked Open Data resources. These connections will ensure that research data remains interoperable and allow for the ingestion of various relevant Linked Open Data resources.
In two workshop sessions we will discuss the basic principles of Linked Open Data and show you how your project can benefit from them. We will do this by setting up a nodegoat environment and connecting it to Linked Open Data resources. Data that has been collected in the COURAGE registry will be used to demonstrate how these connections can be set up. The COURAGE registry can be explored here; the data is available for download here. If you already have a configured nodegoat environment, you can use this during the workshop.[....]
Because this feature is developed in nodegoat, it can be used by any nodegoat user. And because the Ingestion processes can be fully customised, they can be used to query any endpoint that publishes JSON data. This new feature allows you to use nodegoat as a graphical user interface to query, explore, and store Linked Open Data (LOD) from your own environment.
We will organise a series of four virtual workshops to share the results of the project and explore nodegoat's data ingestion capabilities. These workshops will take place on 28-04-2021, 05-05-2021, 12-05-2021, and 26-05-2021. All sessions take place between 14:00 and 17:00 CEST. The workshops will take place using Zoom and will be recorded, so you can catch up on any session you miss.
The first two sessions will provide you with a general introduction to nodegoat: in the first session you will learn how to configure your nodegoat environment, while the second session will be devoted to importing a dataset. In the third session you will learn how to run ingestion processes in order to enrich any dataset by using external data sources. The fourth session will be used to query other data sources to ingest additional data.[....]
There are many entities that share a name. This is often the case for cities (e.g. Springfield), or people (e.g. Francis Bacon). When you encounter such a name in a source, the context usually provides you with enough clues to know which of the entities is meant. However, in some cases the context is too vague or the entities too similar to be certain. In these cases you need to resort to interpretation and disambiguation. This is genuine scholarly work, since you always have to interpret your sources.
This blog post will describe a case in which disambiguation is needed. We will use the example of a research process that aims to reconstruct scholarly networks in the 17th and 18th centuries. In a research process that deals with scholarly networks, the source material will largely consist of citations and mentions in documents.
The disambiguation process will be described by means of a snippet taken from a publication by an anonymous author in 1714 with the title 'An account of the Samaritans; in a letter to J---- M------, Esq;' (ESTC Citation No. N16222).
To store 'mentioned' statements, you can use the Type that was created in the guide 'Add Source References' and add a new Sub-Object in which mentions can be saved. To change the model, go to Model and edit the Type 'Publication'. Switch to the tab 'Sub-Object' and create a new Sub-Object with the name 'Mention'. Set the Date to 'None' and Location to 'None'. In the tab 'Description', click the green 'add' button until you have three Sub-Object Descriptions. Name the first 'Person', the second 'Page Number', and the third 'Notes'. Set the value type for 'Person' to 'Reference: Type' and select the Type 'Person'. Set the value type for 'Page Number' to 'Integer' and set the value type for 'Notes' to 'Text'.
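To summarise the steps above, the resulting 'Mention' Sub-Object can be sketched as a plain Python dictionary. The keys are purely illustrative — they mirror the labels in the nodegoat interface, not an actual nodegoat data model export:

```python
# Illustrative sketch of the 'Mention' Sub-Object configured above.
# Key names are hypothetical; they are not a nodegoat API payload.
mention_sub_object = {
    "name": "Mention",
    "date": None,        # Date set to 'None'
    "location": None,    # Location set to 'None'
    "descriptions": [
        {"name": "Person",      "value_type": "Reference: Type", "reference": "Person"},
        {"name": "Page Number", "value_type": "Integer"},
        {"name": "Notes",       "value_type": "Text"},
    ],
}

print("Sub-Object 'Mention' with", len(mention_sub_object["descriptions"]), "Descriptions")
```

Each mention in a publication then becomes one instance of this Sub-Object: a reference to a Person, the page on which the mention occurs, and free-text notes for your interpretation.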
These settings are not set in stone. Adjust them so that they work for your project.[....]
When you work your way through your source material, you might encounter two sources that deal with the same subject but contain contradictory data. In these cases, you usually have two options: you either choose one of the sources based on its reliability, or you make an interpretation that combines the data from both sources. To account for disagreement in your sources, a third option is to include both statements in your dataset. This blog post will show you how to include conflicting information in your nodegoat project.
This blog post uses the data model that was created in the nodegoat guide 'Create your first Type'. If you haven't set up a data model in your nodegoat environment yet, you can follow this guide to do so. If you already have a data model, you can apply the steps discussed below in your own data model.
The multiple births of John Chamberlayne
The Dictionary of National Biography of 1887 writes that John Chamberlayne was born 'about 1666'.
In a more recent lemma, in the Oxford Dictionary of National Biography (2009 version), he is said to be born in '1668/9'.
The first source, published in 1887, does not give any details on the way in which the statement 'about 1666' was formulated. The second source, first published in 2004, states that he matriculated from Trinity College, Oxford, on 7 April 1685, aged sixteen. This fact has allowed the author to conclude that he must have been born in either 1668 or 1669. This conclusion rests on two premises: that the matriculation record lists an exact age (and not an estimate) and that John Chamberlayne knew the date of his birthday precisely (which in the 17th century was not necessarily the case).[....]
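The arithmetic behind '1668/9' can be made explicit with a short Python sketch. Assuming the matriculation record gives an exact age, the date of birth must fall in the one-year window that makes 'aged sixteen on 7 April 1685' true:

```python
# Derive the possible birth dates from an exact age on a known date.
from datetime import date, timedelta

matriculation = date(1685, 4, 7)
age = 16

# A birth date b satisfies: b + 16 years <= matriculation < b + 17 years.
latest_birth = matriculation.replace(year=matriculation.year - age)                        # 7 April 1669
earliest_birth = matriculation.replace(year=matriculation.year - age - 1) + timedelta(days=1)  # 8 April 1668

print(earliest_birth, "to", latest_birth)  # 1668-04-08 to 1669-04-07
```

The window spans two calendar years, which is exactly why the Oxford Dictionary of National Biography gives '1668/9' rather than a single year.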
You are often confronted with omissions or with inconclusive statements when you deal with historical source material. To let your dataset reflect the nature of your sources, it is important that you include these vague or uncertain statements in your data. This blog post will go over a number of strategies that will help you to deal with these cases in your nodegoat project.
A common scenario is a case where you lack information. When this happens, you can decide to leave a given description, date, or location empty. This gives you the ability to create a filter that finds the objects that have empty descriptions, dates, or locations.
In another situation you might encounter a source that is only partially informative: it provides you with some information, but is inconclusive about the certainty of that information. An example of this is the entry on John Chamberlayne in the Dictionary of National Biography:
The first sentence of his entry reads: "CHAMBERLAYNE, JOHN (1666–1723), miscellaneous writer, a younger son of Edward Chamberlayne [q. v.], was born about 1666, probably in or near London."
We will discuss four strategies for accommodating this uncertain source: a true/false statement on certainty, a scale on the level of certainty, entering chronology statements, and entering geometries.[....]
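As a purely illustrative sketch (this is not nodegoat's internal format), the DNB's vague statement could be captured in data by combining certainty flags with a date range and a coarse location, along the lines of this Python snippet:

```python
# Hypothetical record structure for an uncertain biographical statement.
# Field names are invented for illustration only.
chamberlayne_birth = {
    "person": "John Chamberlayne",
    "date": {"from": "1665-01-01", "to": "1667-12-31"},  # 'about 1666'
    "date_certain": False,
    "location": "in or near London",
    "location_certain": False,
    "source": "Dictionary of National Biography (1887)",
}

# With explicit flags, filtering for uncertain statements becomes trivial:
uncertain = {k: v for k, v in chamberlayne_birth.items()
             if k.endswith("_certain") and v is False}
print(list(uncertain))  # ['date_certain', 'location_certain']
```

The true/false flag is the simplest of the four strategies; replacing it with a numeric scale, a chronology statement, or a geometry refines the same idea.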
Most scholars think about their research material in terms of nuances, vagueness, and uniqueness, whereas data is perceived as binary, strict, and repetitive. However, working with a digital tool does not mean that you can only work with binary oppositions or uncontested timestamps. On the contrary: by creating a good data model, you are able to include nuances, irregularities, contradictions, and vagueness in your database. A good data model is capable of making these insights and observations explicit. Instead of smoothing out irregularities in the data by simplifying the data model, the model should be adjusted to reflect the existing vagueness, conflicts, and ambiguities.
These types of uncertainty can be dealt with in different ways. The next three blog posts will walk you through a number of possible solutions. The described strategies are not the only possible solutions: each research question is unique and may call for a solution of its own.
You might encounter conflicting source material. Two sources might differ about the name of a person, or the date of an event. To account for all possible perspectives, you can include the conflicting statements in your data. Read the blog post 'How to store uncertain data in nodegoat: conflicting information' to learn how to deal with conflicting information.[....]
The release of nodegoat 7.3 comes with a set of new features that have been developed in collaboration with various projects and institutes.
Repertorium Academicum Germanicum: Vague and Complex Dates
The Repertorium Academicum Germanicum (RAG) at the University of Bern has commissioned a major overhaul of the nodegoat dating functionality. In this development process, the core of nodegoat's date handling has been rewritten to account for date statements that are uncertain, cyclical, or relational. These statements can be expressed using 'ChronoJSON' notation, which allows for a clean and understandable description of complex date statements. We used the EDTF format as a starting point for this development process, but had to conclude that this format was not equipped to make relational date statements or to include custom periodisations (like 'Sommersemester').
With these new features, nodegoat users can now make statements like 'letter X was sent two months after letter Y and two months before letter Z', or 'Person A graduated on a single day somewhere between two years before 1498 and two years after 1498'.
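The notation itself is documented elsewhere, but the logic of such a relational statement can be illustrated with a toy resolver in Python. This is not ChronoJSON — just a sketch of what 'letter X was sent two months after letter Y' amounts to once the anchor date of letter Y is known:

```python
# Toy resolver for relational date statements (not ChronoJSON notation).
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by a number of months, clamping the day if needed."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    # Clamp to the last valid day of the target month.
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

letter_y_sent = date(1712, 3, 15)              # known anchor date (hypothetical)
letter_x_sent = add_months(letter_y_sent, 2)   # 'two months after letter Y'
letter_z_sent = add_months(letter_x_sent, 2)   # 'two months before letter Z'

print(letter_x_sent, letter_z_sent)  # 1712-05-15 1712-07-15
```

In nodegoat the resolution happens inside the environment itself, so a statement stays relational: if the anchor date of letter Y is revised, every date that depends on it moves along.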
These features are fully integrated into all of nodegoat's functionality. This means that you can create complex filters that use relational or vague date statements, include these levels of vagueness in your visualisations, and make selections of data based on vague dates to perform network analytical calculations.[....]
In the next weeks nodegoat will be present at several conferences. Meet us in Mainz, Paris, Erfurt, or Pisa to learn more about nodegoat or discuss your nodegoat project with us.
Mainz: Networks Across Time and Space
During the 13th Workshop on Historical Network Research titled "Networks Across Time and Space" we will give a nodegoat workshop and present the recently developed analytical features of nodegoat. This event takes place on May 27th and 28th at the Akademie der Wissenschaften und der Literatur in Mainz.
Paris: Teaching History in the Digital Age – international perspectives #dhiha8
The MapModern project at the Universitat Oberta de Catalunya (UOC), which focuses on cross-border literary networks and cultural mediators in the Hispanic world between 1908 and 1939, has recently published a dataset on translations and reviews in Hispanic modernist journals. This dataset has been created in their nodegoat database and currently includes all translations and reviews from La Revista (Barcelona) from 1915 to 1936; the second period of Proa (Buenos Aires) from 1924 to 1926; and Sur (Buenos Aires) from 1931 to 1939. More data from Iberoamerican journals will be added in the future.
Laura Fólica and Ventsislav Ikoff have collected the data; Diana Roig Sanz is the PI of this project. With the help of the staff of the UOC library, the dataset has been made Dublin Core compliant. The dataset can be downloaded from the data repository of the UOC: http://hdl.handle.net/10609/86485, or from the EUDAT Collaborative Data Infrastructure, which has assigned the dataset a DOI: 10.23728/b2share.eb5c468d3dc3401c8b2fb4605d868a00. The suggested citation is: Translations and Reviews in Iberoamerican Modernist Periodicals (dataset) by Fólica, Laura; Ikoff, Ventsislav; Roig Sanz, Diana; Dec 12, 2018.[....]
Since nodegoat's conception in 2011 by LAB1100, in collaboration with Joep Leerssen of the University of Amsterdam, our web-based research environment has been used in various configurations by individual scholars as well as by large-scale collaborative research projects.
The first project that started to use nodegoat was the Study Platform on Interlocking Nationalisms for their Encyclopedia of Romantic Nationalism in Europe in 2011. Since then it has been used by over twenty institutional projects and we have provided over a thousand individual scholars with access to a free personal research environment on nodegoat.net. The institutional projects are hosted on a server of the institute, and are offered in combination with training, workshops, and support. The individual accounts are hosted on our own server, located in The Hague, The Netherlands.
Thanks to its flexibility, nodegoat can be used for a wide range of research projects. This means that there is rarely only one project at a university or research institute that wants to use nodegoat as its primary research tool. To facilitate these multi-project configurations, we have been offering various installation packages and service level agreements over the past years. To streamline our services, we have formalised these packages in three different nodegoat products: nodegoat One, nodegoat Grow, and nodegoat Go.
nodegoat One is suitable for institutes that want to run a single nodegoat project. nodegoat Grow is suitable for institutes that want to run a specific amount of nodegoat projects that each have their own database. nodegoat Go is suitable for institutes that want to offer any amount of nodegoat research environments to their staff and students.[....]
Based on this project, Pim van Bree and Geert Kessels founded the company LAB1100 to continue to work on data related topics within the realm of the humanities.
Since 2011 LAB1100 has developed an online research environment that is able to host research data as well as provide various modes of analysis and visualisation. This online research environment was initially called the 'Chrono Spatial Research Platform'. In 2013 its name changed to nodegoat (which is now a registered trademark in the EU and US).
From 2012 onwards, free individual hosted accounts have been provided to scholars who want to use nodegoat to host, analyse, and visualise their data. These accounts can be requested here. We currently provide over a thousand individual scholars with a free personal research account. You can explore a number of use cases here.[....]