Publish your project with the new data publication module. nodegoat users can now select any Project to generate a data publication that is web-accessible and downloadable as a ZIP file. By generating a new publication, a Project's data model and all of its data are published and archived. A publication remains accessible even when new publications are generated at a later stage.
Publications are stand-alone, self-contained archives that include both the HTML interface to the data model and all of its data in JSON and CSV formats.
The Scope functionality is used throughout nodegoat to traverse your data model and select elements to be included in a visualisation, analysis, or export. With the Scope, you can limit or expand your data selection. In a prosopographical analysis you might want to include all educational institutes related to one person, plus all the relations of these institutes, while omitting all other personal relations of that person. Follow this Guide to learn how to configure a Scope.
Chronology Statements that you make in nodegoat allow you to specify what you mean by a statement like 'circa'. Instead of using qualitative statements about vagueness, Chronology Statements provide you with a way of making quantitative statements about vagueness. Chronology Statements also allow you to make relational date statements: 'the date point is between the sending of letter X and the sending of letter Y'. Follow this Guide to learn how to store uncertain dates by using Chronology Statements and follow this Guide to learn how to store relational dates by using Chronology Statements.
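As a minimal illustration of what such a relational statement amounts to (using made-up letter dates and a simplified dictionary representation, not nodegoat's own notation), the Python sketch below resolves 'between the sending of letter X and the sending of letter Y' to an earliest/latest range:

```python
from datetime import date

# Hypothetical, simplified representation of a relational Chronology Statement:
# the event took place between two other dated events.
known_dates = {
    "sending of letter X": date(1624, 3, 10),
    "sending of letter Y": date(1624, 7, 2),
}

statement = {"after": "sending of letter X", "before": "sending of letter Y"}

def resolve(statement, known_dates):
    """Resolve a relational statement to an earliest/latest date range."""
    earliest = known_dates[statement["after"]]
    latest = known_dates[statement["before"]]
    return earliest, latest

earliest, latest = resolve(statement, known_dates)
print(f"The event took place between {earliest} and {latest}")
```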
The temporally-aware dynamic network analysis functionality makes the temporal options offered by the Chronology Statements available on any level of a Scope. This allows you to apply and pass temporality to time-bound connections in any of a Scope's paths. The dates from Chronology Statements can be sourced from every step in the traversal: ascendant nodes, descendant nodes, or combinations of the two. Selected configurations can be applied to any or all of the connections (edges): outbound directionality, inbound directionality, or combinations of the two.
Example: Academic Connections
With this functionality it is now possible to dynamically generate networks of people who attended the same educational institute at the same time, without specifying any dates in a filter. The temporally-aware dynamic network analysis functionality applies the initial date to every other relationship that appears on a specified path:
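As a rough illustration of the underlying logic, entirely outside nodegoat and with made-up enrolment data, the sketch below links people whose time-bound attendance at the same institute overlaps:

```python
from datetime import date
from itertools import combinations

# Hypothetical enrolment data: (person, institute, start, end).
enrolments = [
    ("Person A", "University of Bern", date(1890, 10, 1), date(1894, 7, 31)),
    ("Person B", "University of Bern", date(1893, 10, 1), date(1897, 7, 31)),
    ("Person C", "University of Bern", date(1898, 10, 1), date(1902, 7, 31)),
]

def overlaps(start_a, end_a, start_b, end_b):
    """Two time-bound connections overlap if neither ends before the other starts."""
    return start_a <= end_b and start_b <= end_a

# Build edges between people who attended the same institute at the same time.
edges = [
    (a[0], b[0])
    for a, b in combinations(enrolments, 2)
    if a[1] == b[1] and overlaps(a[2], a[3], b[2], b[3])
]
print(edges)  # [('Person A', 'Person B')] -- Person C never overlapped with A or B
```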
The obvious benefit of this approach is its scalability, as it allows you to quickly scrutinise complex networks based on time-bound connections: [....]
nodegoat has been extended with four new features over the past months. These new features were commissioned by three research projects from Switzerland, Slovenia, and the Netherlands. All nodegoat users can now make use of these features.
Data Model Viewer
This feature has been commissioned by the Historical Institute of the University of Bern for the REPAC project.
As the complexity of the data model that you have implemented in nodegoat grows, it can become challenging to maintain an overview of all the Object Types, Object Descriptions, Sub-Objects, Sub-Object Descriptions, and the relationships between these elements. You can now generate an overview of all the elements of your data model that have been enabled in a Project.
Go to 'Management' and click the name of a Project that is listed in the overview of Projects. Set the 'Mode' to 'References Only' to hide all non-relational elements. Set the 'Size' to 'Full Height' to expand the height beyond the size of the window. You can specify a DPI value and download a PNG version of the generated overview. To enhance the legibility of the graph, you can reposition elements by dragging and dropping them. [....]
The release of nodegoat 7.3 comes with a set of new features that have been developed in collaboration with various projects and institutes.
Repertorium Academicum Germanicum: Vague and Complex Dates
The Repertorium Academicum Germanicum (RAG) at the University of Bern has commissioned a major overhaul of the nodegoat dating functionality. In this development process, the core of nodegoat's date handling has been rewritten to account for date statements that are uncertain, cyclical, or relational. These statements can be expressed using the 'ChronoJSON' notation, which allows for a clean and understandable description of complex date statements. We used the EDTF format as a starting point for this development process, but had to conclude that this format was not equipped to make relational date statements or to include custom periodisations (like 'Sommersemester').
With these new features, nodegoat users can now make statements like 'letter X was sent two months after letter Y and two months before letter Z', or 'Person A graduated on a day somewhere between two years before 1498 and two years after 1498'.
These features are fully integrated into all of nodegoat's functionalities. This means that you can create complex filters that use relational or vague date statements, include these levels of vagueness in your visualisations, and make selections of data based on vague dates to perform network analytical calculations. [....]
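As a minimal sketch of how such a filter can treat vagueness (using a simplified earliest/latest representation that is not nodegoat's internal format), a vague date matches a filter range as soon as the two ranges can intersect:

```python
from datetime import date

# Hypothetical representation of a vague date: a possible range around a point
# (e.g. 'graduated within two years of 1498').
vague_graduation = {
    "earliest": date(1496, 1, 1),
    "latest": date(1500, 12, 31),
}

def could_fall_within(vague, filter_start, filter_end):
    """A vague date matches a filter if its possible range intersects the filter range."""
    return vague["earliest"] <= filter_end and filter_start <= vague["latest"]

# Filter for graduations between 1495 and 1497: the vague date *could* match.
print(could_fall_within(vague_graduation, date(1495, 1, 1), date(1497, 12, 31)))  # True
```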
As a result of our cooperation with nodegoat's institutional partners, we have been able to develop a RESTful API for nodegoat.
The API provides an additional interface to query and store data in your Projects in nodegoat. We have integrated the API with nodegoat's core functionalities and have optimised it for large operations. The API can also be used to update the data Model, which allows you to update specific attributes of a Type, or to upload a whole data Model with multiple Type templates in one go.
You can use the Project settings to configure which parts of your data are exposed through the API. The API can be configured to require authentication or to allow public access.
In case you want to use the API with your own research data, get in touch!
We have enabled the API for a demo domain. You can access this domain by logging in to nodegoat.net with the username 'demo' and password 'demo'. The following cURL commands give you a JSON package with the information that has been entered on the French intellectual Ernest Renan. You can also click on the URL to view the output in your web browser.
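As a rough sketch of how such a request can also be scripted, the Python example below performs a comparable GET request. The host, endpoint pattern, identifiers, and token are placeholders and assumptions, not the actual demo values; consult the nodegoat API documentation for the exact routes and your own identifiers.

```python
import requests

# Placeholder values -- substitute your own Project, Type, and Object identifiers
# and access token; the endpoint pattern below is an assumed example, not a
# guaranteed route.
BASE_URL = "https://api.nodegoat.net"
PROJECT_ID = 1
TYPE_ID = 10
OBJECT_ID = 100

response = requests.get(
    f"{BASE_URL}/project/{PROJECT_ID}/data/type/{TYPE_ID}/object/{OBJECT_ID}",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # JSON package describing the requested Object
```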
You can now use nodegoat to query SPARQL endpoints like Wikidata, DBpedia, the Getty Vocabularies (AAT, ULAN, TGN), and the British Museum. Through the nodegoat graphical interface you can query linked data resources and store their URIs within your dataset. This means that you can search for all people in Wikidata matching the string 'Rembrandt' and select the URI of your choice (e.g. 'https://www.wikidata.org/wiki/Q5598'). By doing so, you add external identifiers to your dataset and introduce a form of authority control in your data. This helps to disambiguate objects (like persons or artworks with similar names) and also enhances the interoperability of your dataset. Both these aspects make it easier to share and reuse datasets.
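This kind of lookup can also be reproduced outside nodegoat. As a small illustration (of the general type of query involved, not of nodegoat's internals), the sketch below asks Wikidata's public search API for entities matching 'Rembrandt' and prints their persistent identifiers:

```python
import requests

# Search Wikidata for entities matching 'Rembrandt'.
response = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbsearchentities",
        "search": "Rembrandt",
        "language": "en",
        "format": "json",
    },
    timeout=30,
)
response.raise_for_status()

for hit in response.json()["search"][:5]:
    # Each hit carries a persistent identifier (e.g. Q5598) and a concept URI.
    print(hit["id"], hit.get("description", ""), hit["concepturi"])
```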
These two advantages (data disambiguation and data interoperability) are useful for researchers who work on small(-ish) but complex datasets. Researchers who feel that 'automated' research processes are unattainable for them, because their data may be dispersed, heterogeneous, incomplete, or only available in an analogue format, are more likely to rely on something like the old-fashioned card catalogue system in which all relevant objects and their varying attributes and relations are described. Luckily, we can also use digital tools to create and maintain card catalogues (databases). For a historian who is mapping the art market of a seventeenth-century Dutch town, a database is a very powerful tool to store and analyse all objects (persons, artworks, etc.) and the relations between these objects. Still, if no external identifiers are used, this dataset is nothing but a curated island (even if the data is published!).
Curation & Linked Data
The process we describe here aims to connect the craftsmanship of research in the humanities to the interconnected world of massive repositories, graph databases, and authority files. Other useful applications of linked data resources in the humanities have already been described extensively, such as using aggregation queries to analyse large collections, thesaurus comparison and matching, or performing automated metadata reconciliation as described by the Free Your Metadata initiative. [....]
The accessibility and flexibility of nodegoat allow for a collaborative and ongoing data entry and data curation process. Experience shows that data consistency becomes a challenge as soon as data entry processes become collaborative or are carried out over longer periods of time. Especially when the data structure is complex and data sources are ambiguous, consistency becomes an increasingly prominent concern. To ensure the uniform identification of each object within the dataset, the name of an object should be both consistent and inclusive.
Within nodegoat the name of each object can be a plain text field, generated dynamically, or a combination of the two. When generated dynamically, the object name can be built from its definitions for consistency and can include the definitions of other named objects for inclusiveness. A rather exhaustive naming scheme for a painting could look like this:
This produces:
By generating object names dynamically, changes in named objects (such as the artist and city in the example of the painting) are also reflected in the names of the objects that reference them.
Due to the unrestricted relational nature of the naming algorithm, there is a potential for recursion. Recursion can be introduced directly (e.g. the name of a person includes the name of the person's parents) or further down the naming scheme. By limiting recursion to a single step, it is possible to leverage this feature and include family ties in a person's name without running into an infinite loop.
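A minimal sketch of this idea, with illustrative names and outside nodegoat's own implementation, could look as follows: a mutually referring relation (here a spouse tie) is followed one step at most, so the name never loops.

```python
# Hypothetical data with a circular relation: each spouse refers to the other.
people = {
    "p1": {"first_name": "Jan", "surname": "Steen", "spouse": "p2"},
    "p2": {"first_name": "Margriet", "surname": "van Goyen", "spouse": "p1"},
}

def object_name(object_id, depth=0, max_depth=1):
    """Build a dynamic object name, following relations one step deep at most."""
    person = people[object_id]
    name = f"{person['first_name']} {person['surname']}"
    if depth < max_depth and person.get("spouse"):
        name += f" (married to {object_name(person['spouse'], depth + 1)})"
    return name

print(object_name("p1"))  # Jan Steen (married to Margriet van Goyen)
```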
In a future blog post we will discuss the possibility of complementing the dynamic generation of object names with conditional formatting.
Within nodegoat we are working on combining data management functionalities with the ability to seamlessly analyse and visualise data. nodegoat can be used like any other database application, as it allows users to define, update and query multiple data models. However, as soon as data is entered into the environment, various analytical tools and visualisations become available instantly. Tools such as in-depth filtering, diachronic geographical mappings, diachronic social graphs, content-driven timelines, and shortest path calculation enable a user to explore the context of each piece of data. The explorative nature of nodegoat allows users to trailblaze through data; instead of working with static ‘pushes’ – or exports – of data, data is dynamically ‘pulled’ within its context each time a query is fired. This approach produces a number of advantages, opportunities, and challenges that we plan to discuss in this and future blog posts.
To kick off, let’s consider an example: the provenance of paintings. Should an art historian decide to deal with this research question within nodegoat, they will first conceptualise a data model based on the kind of data that needs to be included (e.g. persons, studios, paintings, collections, museums) and the relevant relations (e.g. created by, sold by, inherited by, exhibited in). This data model then has to be set up in nodegoat and subsequently be filled with pieces of evidence (see the nodegoat FAQ to learn more about this). As soon as the first objects have been entered and their relations have been identified, these objects can be plotted on a map, be viewed in a social graph, or simply: they become part of the network. Now, a question such as ‘how is an artist connected to a specific museum via an art dealership?’ becomes tangible by using functionalities such as shortest path calculation between objects and in-depth filtering.
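As a small illustration of the kind of calculation involved (using a made-up three-node network, not nodegoat's own implementation), a breadth-first search returns the shortest chain of objects connecting an artist to a museum via a dealership:

```python
from collections import deque

# Hypothetical network: artist <-> dealership <-> museum.
relations = {
    "Rembrandt": ["Art dealership Uylenburgh"],
    "Art dealership Uylenburgh": ["Rembrandt", "Rijksmuseum"],
    "Rijksmuseum": ["Art dealership Uylenburgh"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search returning the shortest chain of objects between two nodes."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path(relations, "Rembrandt", "Rijksmuseum"))
# ['Rembrandt', 'Art dealership Uylenburgh', 'Rijksmuseum']
```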
nodegoat runs in a web browser, making it accessible from any device connected to the internet. Working in a web-based environment allows for the implementation of collaborative projects and simultaneous access to the same dataset. Multiple users (who have been assigned varying clearance levels) can enter, update and inspect data. Using this approach, a researcher or research group can decide to design a data model in nodegoat and start entering data into this data model alone, together, or with a larger group. [....]