'Handmade research data' presented by @kaspargubler at the #XIHeloise conference organised by @PirehP1 and @LeCnam
@kaspargubler @network_heloise @LeCnam @nodegoat
RT @kaspargubler: Now at @network_heloise in Paris @LeCnam @LAB1100 talking about dealing with uncertain data in @nodegoat...
#XIHeloise…
@rensbod @tla @true_mxp Certainly a hot topic at the moment https://twitter.com/kaspargubler/status/1524672770956738561
RT @FGHO_eu: 📌It's almost time: our annual #Summerschool takes place from 22 to 26 August!
💡With @nodegoat, @unifr and @kieluni …
RT @MoHu_Centre: Our Bo2022 Project, a database on the history of academic mobility, will be presented on 12 May! A reading with actors And…
RT @FGHO_eu: Busy week ahead for the FGHO team!
@vivien_popken will present our Citizen Science Programme at @laBnF Paris, @angelalinghuang is…
👀 https://twitter.com/Dirk_van_Miert/status/1522568102080397312
#nodegoat has been extended by four new features:
- Data Model Viewer, for @RAG_online
- Music Notation, for @ZrcSazu
- Dynamic Repositioning of labels, for @RAG_online
- Find & Replace, for @SPIN_ERNiE
All nodegoat users can now use these features
➡️ https://nodegoat.net/blog.s/59/release-of-new-nodegoat-features
RT @kaspargubler: Prague Talks on Digital Humanities
Tomorrow, Wednesday, 2 pm, via Zoom.
Info and link:
https://histdata.hypotheses.org/
#datadr…
RT @LuaracdP: Slow but steady. Women, men, and territories connected through epistolary literature using #nodegoat 💜 #redesociolitera…
RT @HuygensING: @SBvanderVeen uses Nodegoat for her research on the Jewish elite around 1900 in the Netherlands. This object-oriented databas…
RT @SBvanderVeen: Interested in reading about how I use @nodegoat and @BioPortaal in my research? I wrote this month's report for the Digit…
RT @eltonteb: We'll hear about @GretaHawes's work on https://www.manto-myth.org/, an initiative to model the spatial dynamics of Greek myth usi…
RT @network_heloise: 📢New publication📢
Fonti per la storia delle popolazioni accademiche in Europa / Sources for the History of European Ac…
RT @rafdebont1: The data of @species360, and the expertise of @JohannaStaerk, Joep Leersen and @monica_vasile, enabled @animalsmoving to st…
RT @kaspargubler: @nodegoat power week at the @unibe: Tuesday to Friday daily workshops of research projects from History (Medieval and Mod…
RT @kaspargubler: Here are the direct links to the @nodegoat tutorials:
13: https://histdata.hypotheses.org/nodegoat-tutorials#import_data_import_module
14: https://histdata.hypotheses.org/nodegoat-tutorials#import_data_dynamic_data_ingestion
15: https://t.…
RT @nep4dissent: #Nep4Dissent scholars from 37 European countries strongly stand for Ukraine and urge the @EU_Commission @COST_Academy @ERC…
We run over 30 @nodegoat installations at institutes in Europe, Australia and the US. Some installations are used for one project, others are used for many.
See this breakdown by Sebastian Borkowski on how @DH_unibe uses their @nodegoat installation: https://tube.switch.ch/videos/JvWJU6mrSr
#nodegoat job alert! 👇 https://twitter.com/Stefanie_Mahrer/status/1496857538133852161
Consult the #nodegoat Documentation to learn how to configure your own Linked Data Resources and Ingestion Processes:
➡️ https://nodegoat.net/documentation.s/64/linked-data
➡️ https://nodegoat.net/documentation.s/118/ingestion
8/8
All #nodegoat users can use these features, and any endpoint that outputs well-formed JSON or XML can be queried. #nodegoat data can be exported as CSV and ODT, or published via the #nodegoat API as JSON and JSON-LD.
7/8
The Ingestion Processes were developed as part of the @snsf_ch project 'Dynamic Data Ingestion' of @kaspargubler @unibern. They build upon the LD Resource feature, commissioned in 2015 by @GhentCDH and extended in 2019 by @ADVNarchief.
6/8
Learn how to query the @zotero API and how to ingest bibliographic data:
➡️ https://nodegoat.net/guide.s/137/ingest-bibliographic-data-from-zotero
5/8
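As a rough illustration of what the guide's Zotero query amounts to outside of nodegoat (the group ID below is a placeholder, not a real library), a Zotero Web API v3 request for a library's items can be assembled like this:

```python
import urllib.request

# Hypothetical group ID -- replace with the ID of your own Zotero library.
GROUP_ID = "123456"

# Zotero's web API serves bibliographic items as JSON; format and limit
# are query parameters, and the API version is pinned via a header.
url = f"https://api.zotero.org/groups/{GROUP_ID}/items?format=json&limit=25"
req = urllib.request.Request(url, headers={"Zotero-API-Version": "3"})
```

In nodegoat itself, the same URL and header would be configured as a Linked Data Resource rather than written as code.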
Learn how to query the @Transkribus API and how to ingest transcription data:
➡️ https://nodegoat.net/guide.s/136/ingest-transcription-data-from-transkribus
4/8
Learn how to query the #SPARQL endpoint of @laBnF and how to ingest publication data:
➡️ https://nodegoat.net/guide.s/135/ingest-publication-data
3/8
Learn how to query the #SPARQL endpoint of @wikidata and how to ingest geographical data, biographical data, or external identifiers:
➡️ https://nodegoat.net/guide.s/133/ingest-geographical-data
➡️ https://nodegoat.net/guide.s/134/ingest-biographical-data
➡️ https://nodegoat.net/guide.s/138/ingest-external-identifiers
2/8
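The Wikidata guides above configure SPARQL queries inside nodegoat; as a minimal standalone sketch, a biographical query against the public Wikidata endpoint could be built like this (the helper names are ours; P19 is Wikidata's 'place of birth' property, and Q42, Douglas Adams, stands in for whichever person you are ingesting data about):

```python
import urllib.parse

ENDPOINT = "https://query.wikidata.org/sparql"

def build_query(qid):
    # Select the place of birth (P19) and its English label for one item.
    return (
        "SELECT ?place ?placeLabel WHERE { "
        f"wd:{qid} wdt:P19 ?place . "
        'SERVICE wikibase:label { bd:serviceParam wikibase:language "en". } '
        "}"
    )

def build_url(qid):
    # format=json asks the endpoint for SPARQL JSON results.
    params = urllib.parse.urlencode({"query": build_query(qid), "format": "json"})
    return f"{ENDPOINT}?{params}"

url = build_url("Q42")
```

The resulting JSON bindings are what an Ingestion Process would then map onto object descriptions.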
Learn how to connect your nodegoat environment to
@wikidata 🔗
@laBnF 📚
@Transkribus ✍️
@zotero 🔖
Today we launch a new section of the #nodegoat Guides on 'Ingestion Processes':
1/8
@kol_t @archaeoklammt @kembellec And if you use the header "Accept: application/ld+json" the API outputs JSON-LD 🤓
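In Python that content negotiation is just a request header; the URL below is a placeholder, not a real published endpoint:

```python
import urllib.request

def jsonld_request(url):
    # Same URL as for plain JSON -- only the Accept header asks the
    # server to respond with JSON-LD instead.
    return urllib.request.Request(url, headers={"Accept": "application/ld+json"})

req = jsonld_request("https://example.org/project/api/data")  # placeholder URL
```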