nodegoat is a web-based data management, network analysis & visualisation environment.
Using nodegoat, you can create and manage any number of datasets through a graphical user interface. Your custom data model automatically configures the backbone of nodegoat's core functionalities.
Within nodegoat you can instantly analyse and visualise datasets. nodegoat allows you to enrich data with relational, geographical, and temporal attributes, so the modes of analysis are inherently diachronic and ready to use for interactive maps and extensive trailblazing.
With this poster, nodegoat will be present at this year's ADHO DH conference in Montreal, Canada.
We also present a long paper on the iterative data modelling methodology. We'll talk about the benefits of this approach in relation to teaching data modelling and data modelling as a research practice. This presentation is based on the three blog posts we published earlier this year:
As a result of our cooperation with nodegoat's institutional partners, we have been able to develop a RESTful API for nodegoat.
The API provides an additional interface for querying and storing data in your Projects in nodegoat. We have integrated the API with nodegoat's core functionalities and have optimised it for large operations. The API can also be used to update the data Model, which allows you to update specific attributes of a Type, or upload a whole data Model with multiple Type templates in one go.
You can use the Project settings to configure what parts of your data are exposed through the API. The API can be configured to require authentication or allow for public access.
In case you want to use the API with your own research data, get in touch!
We have enabled the API for a demo domain. You can access this domain by logging in to nodegoat.net with the username 'demo' and password 'demo'. The following cURL commands give you a JSON package with the information that has been entered on the French intellectual Ernest Renan. You can also click on the URL to view the output in your web browser.
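In the same spirit as the cURL commands, a client could fetch such a JSON package and flatten it into usable values. The endpoint paths and exact response layout below are assumptions for the sake of illustration, not taken from the post; a minimal Python sketch against a sample payload:

```python
import json

# Illustrative sample of what a nodegoat API response for a single object
# might look like; the key names here are assumptions, not documented fact.
sample_response = json.loads("""
{
  "data": {
    "objects": {
      "123": {
        "object": {"object_id": 123, "type_id": 10},
        "object_definitions": {
          "1": {"object_definition_value": "Ernest Renan"},
          "2": {"object_definition_value": "1823-02-28"}
        }
      }
    }
  }
}
""")

def extract_values(response):
    """Flatten each object's definition values into a simple list per object ID."""
    out = {}
    for obj_id, obj in response["data"]["objects"].items():
        out[obj_id] = [d["object_definition_value"]
                       for d in obj["object_definitions"].values()]
    return out

print(extract_values(sample_response))  # {'123': ['Ernest Renan', '1823-02-28']}
```

In a real script, the `sample_response` would come from an authenticated HTTP request to the demo domain rather than a literal string.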
In the past years, we have given various nodegoat workshops to groups of scholars and students. Even though the entry level of the participants varied from workshop to workshop, similar challenges emerged every time. These challenges can be grouped into the following three questions:
What is a relational database?
My material is very vague/ambiguous/uncertain/contradictory/unique/special, how can I use this in a database?
How do I use the nodegoat interface?
Since most of the workshops we give are nodegoat-specific, we aim to teach participants how to do data modelling from within the nodegoat interface. Because of this, and as a result of the usual time constraints (often half a day), we have to leave the first two fundamental questions largely untouched. To remedy this, we have written two blog posts in which we aim to cover the first two questions. The third question is being addressed in the nodegoat video tutorials, the FAQ & forum, and in the near future the documentation.[....]
One of the most obvious questions to start with when working with structured data in the humanities is: what is data? Miriam Posner has captured this challenge in the title of her talk on this topic: 'Humanities Data: A Necessary Contradiction'. Oftentimes, scholars think about their research material in terms of nuances, vagueness, uniqueness, whereas data is perceived as binary, strict and repetitive. The realisation that nuances, vagueness, and uniqueness can also be captured by data in a database is something that has to grow over time.
As soon as we start talking about 'data', it is important to keep two things in mind. First, we should be ready to reflect on the fact that data-oriented processes can dehumanise the people behind the data. This process has been described by Scott Weingart in his essay on digitising and storing Holocaust survivor stories. Even though we can efficiently organise large collections of data, the implications of this process have to be taken into account.[....]
At a certain moment in your research process, you might decide that you need to order your material in a structured format. A reason could be that there are too many different people in your body of research and it's becoming hard to keep track of them, let alone their different attributes. Another reason could be that you have repetitive sources, like letters or books, that you want to store and include in your analysis.
In the old days, you would get yourself a card catalogue and start reworking your notes onto these little handy cards.[....]
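The card catalogue analogy maps directly onto a relational database: one table per kind of card, foreign keys instead of cross-references scribbled in the margins. A minimal sketch using Python's built-in sqlite3 (table and column names are illustrative):

```python
import sqlite3

# An in-memory database standing in for two drawers of the card catalogue:
# one table of persons, one of letters, linked by foreign keys.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE letter (
    id           INTEGER PRIMARY KEY,
    sender_id    INTEGER REFERENCES person(id),
    recipient_id INTEGER REFERENCES person(id),
    sent_on      TEXT  -- ISO date
);
""")
con.execute("INSERT INTO person VALUES (1, 'Jacob Grimm'), (2, 'Wilhelm Grimm')")
con.execute("INSERT INTO letter VALUES (1, 1, 2, '1805-07-12')")

# One join replaces leafing back and forth between the two drawers.
row = con.execute("""
    SELECT s.name, r.name, l.sent_on
    FROM letter l
    JOIN person s ON s.id = l.sender_id
    JOIN person r ON r.id = l.recipient_id
""").fetchone()
print(row)  # ('Jacob Grimm', 'Wilhelm Grimm', '1805-07-12')
```

The point of the exercise is the structure, not the software: the same persons/letters split carries over directly to a data model in nodegoat.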
Every bit of information that is entered into nodegoat can immediately be published through a public user interface. This allows the Encyclopedia of Romantic Nationalism in Europe to instantly publish articles and a wide range of research data. This data also includes a set of over 38.000 letters that can be queried through the public user interface. In this blog post we discuss the steps we took to allow visitors to dynamically explore this dataset.
Next week there will be a nodegoat workshop at the 'DARIAH-EU Annual Meeting' in Ghent. This event will take place on 10-13 October. The nodegoat workshop will be on Tuesday 11 October from 14:00 to 15:30. You can find the full program here.
There will be a nodegoat community meeting at the Mundaneum (Paul Otlet ftw) in Mons (Belgium) on July 1. This meeting is an initiative of the TIC project at the University of Ghent in cooperation with DARIAH-BE. The meeting follows on the doctoral workshop 'Tracing Mobilities & Socio-political Activism. 19th-20th centuries' that takes place at the Mundaneum between June 29 and July 1.
The nodegoat community meeting will start with a general introduction on the current status of nodegoat and upcoming new features. Next, we will have four presentations of projects that make use of nodegoat:
The following interactive visualisation explores the movements of 10.896 Representatives of the United States Congress, from the birth of Roger Sherman in 1721 up to the members serving in 2015. The Representatives move from their place of birth to their place of education and finally to their possible place of death. Click here to open the interactive visualisation.
Last April, we gave a talk at the tenth Historical Network Research workshop in Düsseldorf about the 'Reversed Classification' functionality in nodegoat. To illustrate what you can accomplish with this functionality, we queried Wikidata to get a dataset of all the members of the US House of Representatives, including their date and place of birth and death, their professions, and the institutes where they were educated. We used this data to perform a reversed classification process that groups the representatives into career politicians or politicians with a heterogeneous career. From there, you could start looking at geographical patterns or educational backgrounds of these groups. See a graph of this network with these two 'career' nodes included here (canvas).
The diachronic geographical visualisation of all this data in nodegoat turns out to be a nice bonus.
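The grouping step of such a classification can be sketched as a simple rule over each representative's recorded professions. The rule and field names below are illustrative, not nodegoat's actual Reversed Classification mechanics:

```python
# Illustrative records: each representative with a list of professions
# as they might come back from a Wikidata query.
representatives = [
    {"name": "A", "professions": ["politician"]},
    {"name": "B", "professions": ["lawyer", "politician", "farmer"]},
]

def classify(rep):
    """Label a representative 'career politician' when politics is the only
    recorded profession, otherwise 'heterogeneous career'."""
    if rep["professions"] == ["politician"]:
        return "career politician"
    return "heterogeneous career"

groups = {}
for rep in representatives:
    groups.setdefault(classify(rep), []).append(rep["name"])
print(groups)  # {'career politician': ['A'], 'heterogeneous career': ['B']}
```

In nodegoat, the equivalent rule is configured in the interface and the resulting classes become nodes in the network, rather than being computed in a script.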
On the first day, we will host a nodegoat workshop. This workshop will last half a day and is titled 'Advanced HNR' (it will run in parallel with an introductory historical network research workshop by Martin Stark). Since we only have half a day, we encourage participants who have not used nodegoat before to watch our three tutorials that cover basic functionalities of nodegoat.[....]
From the outside, it can be a challenge to keep up with all the developments within the ever-expanding universe of wiki*/*pedia. So it's good to be reminded now and then of all the structured data that has become available thanks to their efforts:
This looks pretty neat, especially since Wikidata currently has over 947 million triples in their data store. Since battles usually have a place and a date, it would be nice to import this data into a data design in nodegoat and visualise these battles through time and space (diachronic geospatiality ftw).[....]
This week we gave a two-day workshop on data modelling and database development for historians. This workshop was part of the course Databases for young historians. This course was sponsored by the Huizinga Instituut, Posthumus Instituut, Huygens-ING and the Amsterdam Centre for Cultural Heritage and Identity (ACHI, UvA) and was hosted by Huygens-ING.
We had a great time working with a group of historians who were eager to learn how to conceptualise data models and how to set up databases. We discussed a couple of common issues that come up when historians start to think in terms of 'data':
How to determine the scope of your research?
How to deal with unknown/uncertain primary source material?
How to use/import 'structured' data?
How to reference entries in a dataset and how to deal with conflicting sources?
How to deal with unique/specific objects in a table/type?
These points were taken by the horns (pun intended) when every participant went on to conceptualise their data model. To get a feel for classical database software (tables, primary keys, foreign keys, forms, etc..), they set up a database in LibreOffice Base. Finally, each participant created their own data model in nodegoat and presented their model and first bits of data.[....]
You can now use nodegoat to query SPARQL endpoints like Wikidata, DBpedia, the Getty Vocabularies (AAT, ULAN, TGN), and the British Museum. Through the nodegoat graphic interface, you can query linked data resources and store their URIs within your dataset. This means that you can search for all people in Wikidata using the string 'Rembrandt' and select the URI of your choice (e.g. 'https://www.wikidata.org/wiki/Q5598'). By doing so, you add external identifiers to your dataset and introduce a form of authority control in your data. This helps to disambiguate objects (like persons/artworks with similar names) and also enhances the interoperability of your dataset. Both these aspects make it easier to share and reuse datasets.
These two advantages (data disambiguation and data interoperability) are useful for researchers who work on small(-ish) but complex datasets. Researchers who feel that 'automated' research processes are unattainable for them, because their data is dispersed, heterogeneous, incomplete, or only available in analogue form, are more likely to rely on something like the old-fashioned card catalogue system, in which all relevant objects and their varying attributes and relations are described. Luckily, we can also use digital tools to create and maintain card catalogues (databases). For a historian mapping the art market of a seventeenth-century Dutch town, a database is a very powerful tool to store and analyse all objects (persons, artworks, etc.) and the relations between them. Still, if no external identifiers are used, this dataset is nothing but a curated island (even if the data is published!).
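What the external identifier buys you can be shown in a few lines: two datasets that spell a name differently can still be joined on the shared URI. The record structure below is illustrative; only the Rembrandt URI comes from the example above:

```python
# Two hypothetical datasets describing the same painter under different
# local names, each carrying the shared Wikidata URI as an external identifier.
dataset_a = [{"name": "Rembrandt", "uri": "https://www.wikidata.org/wiki/Q5598"}]
dataset_b = [{"name": "Rembrandt van Rijn", "uri": "https://www.wikidata.org/wiki/Q5598"}]

# Index one dataset by URI, then match the other against it. Without the
# URIs, string matching on the names alone would be brittle and ambiguous.
by_uri = {rec["uri"]: rec for rec in dataset_a}
matches = [(rec["name"], by_uri[rec["uri"]]["name"])
           for rec in dataset_b if rec["uri"] in by_uri]
print(matches)  # [('Rembrandt van Rijn', 'Rembrandt')]
```

This is the interoperability argument in miniature: the curated island becomes linkable the moment its objects carry shared identifiers.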
Curation & Linked Data
The process we describe here aims to connect the craftsmanship of research in the humanities to the interconnected world of massive repositories, graph databases and authority files. Other useful purposes of linked data resources for the humanities have already been described extensively, like using aggregation queries to analyse large collections, thesaurus comparison/matching, or performing automated metadata reconciliation as described by the Free Your Metadata initiative.[....]
We have developed an interactive installation for the new GRIMMWELT museum in Kassel, Germany. The installation visualises and lets visitors freely interact with the full correspondence network of Jacob and Wilhelm Grimm, involving a total of 20.000 letters and 1400 correspondence partners in a timespan of 80 years. The dataset of letters has been created by the Arbeitsstelle Grimm-Briefwechsel at the Institut für deutsche Literatur of the Humboldt-Universität zu Berlin. We have developed the visualisation in cooperation with SPIN: Study Platform on Interlocking Nationalisms at the University of Amsterdam.
The installation is located in the first section of the permanent exhibition. The wooden table has a cut-out (elevated) map of Europe as its surface. The visualisation is projected by a Barco F35 projector (WQXGA resolution). Visitors can interact with the installation by means of capacitive sensors.
The installation implements a new geographical visualisation mode, 'Movement', in nodegoat, in addition to the already available line-based 'Connection' mode. The Movement mode uses WebGL rendering (GPU) to animate large collections of objects smoothly. This mode also allows for a wide range of configuration parameters to fine-tune the visualisation for various scenarios. Due to the open and generic nature of nodegoat, we can now make use of the Movement mode for any other relevant dataset.
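At its core, a movement animation interpolates each object's position between its stations over time. The actual mode runs on the GPU via WebGL; a minimal CPU-side sketch of the idea, with an illustrative route (coordinates rounded):

```python
def position_at(stations, t):
    """Linearly interpolate a (lat, lon) position at time t along a list of
    (time, lat, lon) stations sorted by time; clamp outside the range."""
    if t <= stations[0][0]:
        return stations[0][1:]
    for (t0, la0, lo0), (t1, la1, lo1) in zip(stations, stations[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
    return stations[-1][1:]

# A letter travelling Kassel -> Berlin over the course of a year
# (hypothetical example; coordinates are rounded city centres).
route = [(1830.0, 51.32, 9.50), (1831.0, 52.52, 13.40)]
print(position_at(route, 1830.5))  # halfway: roughly (51.92, 11.45)
```

In the real mode, this interpolation runs per frame for thousands of letters at once, which is why it is pushed to the GPU.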
This short clip shows the new visualisation mode from within nodegoat:
A high resolution 1440p version of this clip is available here.[....]