A couple of years ago I stumbled upon The DITA RDF Project by Colin Maudry. As Colin describes it:
"The objective of this project is to develop an ontology to describe DITA XML objects and to publish tools to generate RDF triples based on that ontology. In the end, it enables the publication of the metadata of a DITA documentation set to the Semantic Web and consequently its linking with other data types (product, people, sales, non-DITA document metadata, etc.)."
You can find it here: http://colinmaudry.github.io/dita-rdf/
At the time I didn't appreciate the purpose and value of Colin's contribution as an open-source plugin to the DITA Open Toolkit. Suffice it to say, it is brilliant, but it was too far ahead of its time for most of us to grasp. After all, many of us are pragmatic types seeking to apply technology for practical use (at least I'm in that group; I bow down and pay homage to the brilliant think-tank types who can abstract across multiple layers of indirection).
To be fair, how could most of us in the content practice have understood the significance of The DITA RDF Project? You'd have needed to understand ontologies and knowledge graphs - back in 2015! Remember, knowledge graphs didn't enter the mainstream until Google introduced its Knowledge Graph in 2012, only three years earlier.
Why am I falling all over myself over this project? Later to it than others, perhaps, but imagine if we could use this DITA ontology (or an updated version of it) to graph our entire DITA content corpus. Take it further: could we not only map the corpus, but also add edge data from, say, a content map (an instance of a Content Use Model) to gain deep insight into the corpus and to enhance graph-based search, retrieval, and assembly?
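To make that idea concrete, here's a minimal sketch in Python using rdflib. Everything here is illustrative: the dita:, cm:, and doc: namespaces, the topic IRIs, and the cm:servesStage predicate are hypothetical stand-ins I invented for the example, not terms from Colin's actual ontology. The point is simply to show corpus triples and content-map edges living in one graph, queried together:

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical namespaces -- stand-ins, not the real DITA RDF vocabulary
DITA = Namespace("http://example.org/dita-rdf/")
CM = Namespace("http://example.org/content-map/")
DOC = Namespace("http://example.org/docs/")

g = Graph()
g.bind("dita", DITA)
g.bind("cm", CM)

# Corpus structure: a DITA map and one of its topics
g.add((DOC.userGuide, RDF.type, DITA.Map))
g.add((DOC.installTopic, RDF.type, DITA.Topic))
g.add((DOC.userGuide, DITA.containsTopic, DOC.installTopic))
g.add((DOC.installTopic, DITA.title, Literal("Installing the product")))

# Edge data from a content map: where this topic serves the customer journey
g.add((DOC.installTopic, CM.servesStage, Literal("onboarding")))

# Graph-based retrieval: find every onboarding topic and the map that carries it
query = """
    SELECT ?topic ?map WHERE {
        ?topic a dita:Topic ;
               cm:servesStage "onboarding" .
        ?map dita:containsTopic ?topic .
    }
"""
for topic, dita_map in g.query(query, initNs={"dita": DITA, "cm": CM}):
    print(topic, "is used in", dita_map)
```

Once corpus structure and content-map edges share one graph, a single query can answer questions that would otherwise require stitching together the CMS, the map hierarchy, and a spreadsheet of content-use data.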
This, I think, deserves a thorough discussion.
Michael