We've created three tutorials that cover basic functionalities of nodegoat. In coming weeks we will add more videos on various other topics. If you have any questions, you can ask them on the nodegoat forum.
Introductory clip to the tutorials:
The first tutorial shows you how to set up a project in nodegoat and how to build a simple data design.
nodegoat is developed as a collaborative research environment that supports participatory research projects. To test how well it combines various participatory roles with the capacity to digest complex and heterogeneous data, we spent two weeks in Semarang, Indonesia, working with a group of students to reveal an infrastructure of violence. These students interviewed survivors of state-sanctioned violence and entered the information they gathered directly into nodegoat. Based on these interviews, the students visited a number of sites and interviewed people who lived or worked on these sites. As the data came from personal accounts only, the visualisations produced in nodegoat can be characterised as memory landscapes. In this blog post we will describe both the process and the methodology of this project.
The Dutch Institute for War, Holocaust and Genocide Studies (NIOD) has set up a cooperation with the Universitas Katolik Soegijapranata (UNIKA) in Semarang, Indonesia that aims to address the anti-communist/leftist violence of 1965-66 in Semarang and the following years. The project that has emerged from this cooperation, ‘Memory Landscapes and the Regime Change of 1965-66 in Semarang’, is led by Dr. Martijn Eickhoff (NIOD) and has resulted in two workshops at the UNIKA University in Semarang organised by Donny Danardono. The first workshop took place in January 2013, the second workshop was held in June 2014. During these two workshops students from UNIKA collected data on anti-communist/leftist violence by combining oral history and anthropological site research. The data includes relations between people as well as locations connected to the events of 1965 and the following years (e.g. places of mob violence, temporary detention, interrogation, torture, murder and mass burial). [....]
Working with data in the humanities, we’ve noticed that the debate on classifications is often focused on the definition of the classification and not so much on what it identifies. A well-known example is of course ‘nationality’, but a (historical) occupation or capacity, and even seemingly unproblematic classifications like ‘the nineteenth century’, pose several problems as well.
Looking at data from an object-oriented perspective, using predefined classifications seems counterintuitive. Objects should define themselves by means of their varying attributes. Nodes and clusters emerge on the basis of correlation between objects.
Nevertheless, we understand the need to be able to identify these clusters in a structured manner without having to perform sequences of filters. These ‘structured clusters’ should be able to be ordered, analysed and explored. For this reason, we have taken up the challenge to equip nodegoat with a functionality that allows for the definition of these clusters by means of fuzzy filtering settings. We have defined this process as ‘reversed classification’. Although we have merely conceptualised the challenge, and have yet to implement this, we want to share the ideas behind it.
In general, classifications emphasise a convention of value and vocabulary. The direction of a classification is outward, relating to the convention unidirectionally. In effect, the classification is unable to communicate/negotiate with the network it classifies. The reversal of classification opens up the convention by disclosing its parameters. Reversal allows the classification to be scrutinised and reconfigured, and to re-evaluate the objects it classifies.
Simply put: instead of identifying classifications and assigning these to objects in a dataset (like ‘sculptor’ or ‘German’), a user defines a multi-faceted filter spanning multiple datasets in which they define any number of parameters that are associated with a classification. This will reverse the classifying process as the definition of the classification is identified by the exchange between parameters of the classification and attributes of the object. [....]
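The idea above can be sketched in code. In this hypothetical example (not nodegoat's actual implementation), a classification such as ‘German sculptor’ is not stored as a label on each object; instead it is defined as a set of filter parameters that is evaluated against object attributes on demand, so the parameters stay visible and adjustable:

```python
# Hypothetical sketch of 'reversed classification': membership is
# computed by filtering object attributes, not by assigning labels.

def matches(obj, parameters):
    """Return True if an object satisfies every filter parameter."""
    return all(obj.get(attr) in allowed for attr, allowed in parameters.items())

def classify(objects, classifications):
    """Compute classification membership by filtering, not by assignment."""
    return {
        name: [obj["id"] for obj in objects if matches(obj, params)]
        for name, params in classifications.items()
    }

objects = [
    {"id": 1, "occupation": "sculptor", "place_of_work": "Berlin"},
    {"id": 2, "occupation": "painter", "place_of_work": "Paris"},
    {"id": 3, "occupation": "sculptor", "place_of_work": "Munich"},
]

# The classification is defined by its parameters, which remain open
# to scrutiny and reconfiguration.
classifications = {
    "German sculptor": {
        "occupation": {"sculptor"},
        "place_of_work": {"Berlin", "Munich"},
    },
}

print(classify(objects, classifications))
# {'German sculptor': [1, 3]}
```

Changing the parameters (for example, adding another city) immediately re-evaluates which objects belong to the classification.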
The accessibility and flexibility of nodegoat allows for a collaborative and ongoing data entry and data curation process. Experience shows that data consistency becomes a challenge as soon as data entry processes become collaborative or are executed over longer periods of time. Especially when the data structure is complex and data sources are ambiguous, consistency becomes an increasingly prominent concern. To ensure uniform identification of each object within the dataset, the name of an object should be both consistent and inclusive.
Within nodegoat the name of each object can be a plain text field, generated dynamically, or a combination of the two. When generated dynamically, the object name can be built from its definitions for consistency and include the definitions from other named objects for inclusiveness. A rather exhaustive naming scheme for a painting could look like this:
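As a hypothetical illustration, such a naming scheme could combine the painting's own definitions (title, year) with definitions pulled from related named objects (the artist and the city where it was created):

```python
# Hypothetical naming scheme for a painting: the generated name mixes
# the object's own definitions with those of related named objects.
artist = {"name": "Rembrandt van Rijn", "year_of_birth": 1606}
city = {"name": "Amsterdam"}
painting = {"title": "The Night Watch", "year": 1642,
            "artist": artist, "city": city}

name = (f'{painting["title"]} ({painting["year"]}), '
        f'{painting["artist"]["name"]}, {painting["city"]["name"]}')
print(name)
# The Night Watch (1642), Rembrandt van Rijn, Amsterdam
```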
By generating object names dynamically, changes in named objects (such as artist and city in the example of the painting) are also reflected accordingly in the name of the objects.
Due to the unrestricted relational nature of the naming algorithm, there is a potential recursion problem. Recursion can be introduced directly (e.g. the name of a person includes the names of the person's parents) or further down the naming scheme. By limiting recursion to a single step it is possible to leverage this feature and include family ties within a person's name without running into an infinite loop.
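A minimal sketch of such a depth-limited naming algorithm (an illustration, not nodegoat's implementation): a person's generated name includes the names of their parents, but recursion stops after one step, so the parents' own generated names are never expanded again:

```python
# Depth-limited name generation: including family ties without an
# infinite loop, by stopping recursion after a single step.

def person_name(person, depth=1):
    base = person["name"]
    if depth == 0 or not person.get("parents"):
        return base
    parent_names = ", ".join(
        person_name(p, depth - 1) for p in person["parents"]
    )
    return f"{base} (child of {parent_names})"

grandparent = {"name": "Anna"}
parent = {"name": "Maria", "parents": [grandparent]}
child = {"name": "Jan", "parents": [parent]}

print(person_name(child))
# Jan (child of Maria)  -- Anna is not expanded: the depth is exhausted
```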
In a future blog post we will discuss the possibility of complementing the dynamic generation of object names with conditional formatting.
Within nodegoat we are working on combining data management functionalities with the ability to seamlessly analyse and visualise data. nodegoat can be used like any other database application, as it allows users to define, update and query multiple data models. However, as soon as data is entered into the environment, various analytical tools and visualisations become available instantly. Tools such as in-depth filtering, diachronic geographical mappings, diachronic social graphs, content-driven timelines, and shortest path calculation enable a user to explore the context of each piece of data. The explorative nature of nodegoat allows users to trailblaze through data; instead of working with static ‘pushes’ – or exports – of data, data is dynamically ‘pulled’ within its context each time a query is fired. This approach produces a number of advantages, opportunities, and challenges we plan to discuss in this and future blog posts.
To kick off, let’s consider an example: the provenance of paintings. Should an art historian decide to deal with this research question within nodegoat, they will first conceptualise a data model based on the kind of data that needs to be included (e.g. persons, studios, paintings, collections, museums) and the relevant relations (e.g. created by, sold by, inherited by, exhibited in). This data model then has to be set up in nodegoat and subsequently be filled with pieces of evidence (see the nodegoat FAQ to learn more about this). As soon as the first objects have been entered and their relations have been identified, these objects can be plotted on a map, be viewed in a social graph, or simply: they become part of the network. Now, a question such as ‘how is an artist connected to a specific museum via an art dealership?’ becomes tangible by using functionalities such as shortest path calculation between objects and in-depth filtering.
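To make the shortest path idea concrete, here is a sketch over a tiny, made-up provenance network (the node names and relations are invented for illustration; nodegoat's own calculation works on the user's data model). A question like ‘how is an artist connected to a museum?’ becomes a breadth-first search over the relation graph:

```python
from collections import deque

# Hypothetical provenance network: nodes are objects (persons,
# dealerships, museums, paintings); edges are the relations between them.
relations = {
    "Artist A": ["Painting P"],
    "Painting P": ["Artist A", "Dealership D"],
    "Dealership D": ["Painting P", "Museum M"],
    "Museum M": ["Dealership D"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search returning one shortest path, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path(relations, "Artist A", "Museum M"))
# ['Artist A', 'Painting P', 'Dealership D', 'Museum M']
```

Because the path is recomputed on each query, it always reflects the current state of the network rather than a static export.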
nodegoat runs in a web browser, making it accessible from any device connected to the internet. Working in a web-based environment allows for the implementation of collaborative projects and simultaneous access to the same dataset. Multiple users (who have been assigned varying clearance levels) can enter, update and inspect data. Using this approach, a researcher or research group can decide to design a data model in nodegoat and start entering data into this data model alone, together or with a larger group. [....]