#rdf


Hey @doriantaylor, I have some more dumb semantic web questions.

1. If I *don't* want to use a full existing ontology, because I intentionally want a simpler model during the early phases of a project, is there a… I guess a "sanctioned" way to enhance my ontology as time goes on without losing fidelity?

I know OWL's sameAs can declare that an individual resource is the same as some other resource, but this would be something like that at the ontology level, maybe.
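For what it's worth, the usual schema-level counterparts of owl:sameAs are rdfs:subClassOf / owl:equivalentClass for classes and rdfs:subPropertyOf / owl:equivalentProperty for properties. A minimal rdflib sketch, with purely hypothetical namespaces, of starting with a deliberately simple vocabulary and later bridging it to a richer ontology without rewriting the instance data:

```python
# Hypothetical sketch: a small local vocabulary, later linked to a fuller ontology
# with schema-level mapping axioms instead of touching the data itself.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS

MY = Namespace("http://example.org/my#")     # your early, simple ontology
EXT = Namespace("http://example.org/rich#")  # the richer ontology adopted later

g = Graph()
# Weaker, safer claim: everything you called my:Document is (at least) an ext:Work.
g.add((MY.Document, RDFS.subClassOf, EXT.Work))
# Stronger claim, only if the two classes really coincide:
g.add((MY.Document, OWL.equivalentClass, EXT.Work))
# Same idea for properties:
g.add((MY.author, RDFS.subPropertyOf, EXT.creator))

print(g.serialize(format="turtle"))
```

A reasoner that sees these bridging statements will treat existing my:Document data as ext:Work data, so the early model keeps its meaning as the ontology grows.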

Tomorrow, we will dive deeper into ontologies with OWL, the Web Ontology Language. However, I've been giving OWL lectures for almost 20 years now, and neither OWL nor the lecture has changed much. So I'm afraid I'm going to surprise/disappoint the students tomorrow when I switch off the presentation and start improvising a random OWL ontology with them on the blackboard ;-)

#ise2025 #OWL #semanticweb #semweb #RDF #knowledgegraphs @fiz_karlsruhe @fizise @tabea @sourisnumerique @enorouzi


#SPARQL (SPARQL Protocol and RDF Query Language) is a language designed to query and manipulate data in the #RDF (Resource Description Framework) format, a model used to represent data as a graph and a key building block of the Semantic Web. All of Wikidata's data can thus be described as triples: subject – predicate – object.
This workshop offers an introduction to SPARQL and the Wikidata Query Service.
#LibreABC2025

Thomas Kerboul, from the Bibliothèque de Genève, will run a workshop at the #LibreABC2025 event on datasets in #Wikidata.
#SPARQL #RDF

Description:
Wikidata is a free and open knowledge base providing access to structured data. Beyond looking up items one by one, Wikidata makes it possible to visualise large amounts of data as tables, maps or timelines, and to build your own datasets using the SPARQL query language.
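A minimal sketch of the kind of query such a workshop starts from, run against the public Wikidata Query Service with Python's SPARQLWrapper (P31 = "instance of", Q146 = "house cat"):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Each pattern in the WHERE clause is a subject–predicate–object triple.
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .   # subject ?item, predicate "instance of", object "house cat"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```

The same query can be pasted directly into the Wikidata Query Service web interface, which also renders the results as tables, maps or timelines.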

OK @doriantaylor (or anybody), one more beginner question. When does a reasoner actually do its work? So, I've got a pile of data in whatever form, and I formulate a query. Then… what? Does the reasoner look at the classes in the query and say, e.g., "if you're asking for publications, does that also mean books and articles?" Or does it create dummy nodes for the existing data, so there's a publication node for every book and every article?
#rdf #SemanticWeb
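Broadly, there are two common answers: forward chaining (materialisation), where the reasoner adds the entailed triples to the data, at load time or on demand, so an existing book node simply gains an extra rdf:type triple rather than any dummy node; and backward chaining (query rewriting), where the query for publications is expanded at query time into a union over books, articles, and so on. A toy rdflib sketch of the materialisation style, with a hypothetical vocabulary:

```python
from rdflib import Graph
from rdflib.namespace import RDF, RDFS

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Book    rdfs:subClassOf ex:Publication .
ex:Article rdfs:subClassOf ex:Publication .
ex:moby    a ex:Book .
""", format="turtle")

# One step of rdfs:subClassOf entailment, materialised by hand; a real reasoner
# (or a library such as owlrl) applies rules like this until nothing new appears.
inferred = []
for cls, _, parent in g.triples((None, RDFS.subClassOf, None)):
    for inst in g.subjects(RDF.type, cls):
        inferred.append((inst, RDF.type, parent))
for triple in inferred:
    g.add(triple)

# A plain query now finds the book as a publication; no new nodes were invented,
# only extra rdf:type triples on the existing ones.
rows = g.query("SELECT ?p WHERE { ?p a <http://example.org/Publication> }")
print([str(row[0]) for row in rows])   # ['http://example.org/moby']
```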

Hey, do you know things about RDF and/or have references to work on building machine-usable ontologies? Because I'd like to talk to you! (And if you don't, could you give this a boost?)

I'm looking into it because, for media-thing, I want to identify ways to give people meta-descriptive power over their media metadata.

#rdf #ontology #ontologies #mediathing

As Heiko Paulheim notes in his tutorial at ISWS 2025, his presentation on RDF2vec is not an individual contribution but the effort of a great team of PhDs, postdocs and students from his research group at the University of Mannheim.

I might be overdoing this whole #RDF and #LinkedData thing, but… here are the first steps with #Trinja, an RDF-to-HTML mapper and #SSG:

codeberg.org/Taganak/trinja/sr

The idea is: take *any* resource described as RDF (e.g. from #Wikidata or an #ActivityPub action), link a #Jinja template to it or to its rdf:type in your own set of statements, and there you have your visualisation!

Based on #TaganakNet, the #Rust #RDF development kit by @codecraft and me. We are collecting real-world examples at a good rate!
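A rough Python illustration of the dispatch idea (this is not Trinja's actual API, just the concept: pick a Jinja template based on a resource's rdf:type and render it to HTML):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF
from jinja2 import Template

EX = Namespace("http://example.org/")   # hypothetical data and vocabulary

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice a ex:Person ; ex:name "Alice" .
""", format="turtle")

# Trinja keeps the type-to-template link as RDF statements; here it is just a dict.
templates = {EX.Person: Template("<h1>{{ name }}</h1>")}

resource = EX.alice
rdf_type = g.value(resource, RDF.type)
print(templates[rdf_type].render(name=g.value(resource, EX.name)))   # <h1>Alice</h1>
```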


@libreabc

Our next talk at #LibreABC, on 9 September, is titled "Comment gérer ses fonds d'archives sous forme de données liées au moyen de la plateforme libre ResearchSpace" ("How to manage your archival fonds as linked data using the open-source ResearchSpace platform").

It will be an opportunity to promote the use of native #RDF for managing collections and to make the case for pooling development costs in this area.

performing-arts.ch/

researchspace.org/

#LOD
#ResearchSpace
#RecordsInContexts
#archives
#archivCH

www.performing-arts.ch – Swiss Performing Arts Platform

I have been thinking about how to combine RDF models when one of them is too large to load into memory.

The answer will involve copying a connected subgraph into a temporary model, a "view" if you will. This will be centred on a start node and then use some specified number of breadth-first search steps.

The problem is that if you hit a node with a large number of links, such as a class or type node, the process will crash just getting a count of the links (if that were even possible).

I think the answer involves setting a boundary fence: a set of nodes, or tests, that, if reached, will prevent the search from including that node in subsequent steps.

Some thoughts:
ocratato-sassy.sourceforge.io/

Exploring RDF

What techniques are there for getting a small subgraph from large RDF models?

Let's assume I can use SPARQL to locate some RDF statement (triple). I then need to get all the connected statements within some defined range (e.g. 4 levels). The problem is that if I hit a node that is the class name for millions of objects, the process will stall or crash.
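A minimal rdflib sketch of the bounded, fenced breadth-first copy described above (the function name, cut-offs and fence rules are hypothetical, not the SASSY code):

```python
from collections import deque
from rdflib import Graph, URIRef
from rdflib.namespace import RDF

def bounded_view(source: Graph, start: URIRef, max_depth: int = 4,
                 fence: frozenset = frozenset(), max_fanout: int = 1000) -> Graph:
    """Copy a bounded neighbourhood of `start` into a small in-memory 'view' graph."""
    view = Graph()
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for i, (s, p, o) in enumerate(source.triples((node, None, None))):
            if i >= max_fanout:      # cap fan-out instead of counting links up front
                break
            view.add((s, p, o))
            # Boundary fence: copy the edge, but never expand rdf:type targets,
            # explicitly fenced nodes, or anything that is not an IRI.
            if p == RDF.type or o in fence or not isinstance(o, URIRef):
                continue
            if o not in seen:
                seen.add(o)
                queue.append((o, depth + 1))
    return view
```

With a store-backed rdflib Graph (for example one wrapping a SPARQL endpoint or an on-disk store), the triples() calls stream results, so the large model never has to be loaded in full; only the view lives in memory.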