Linked Ontology Web

April 1, 2009

I’ve been thinking a bit at work about how we should publish OWL ontologies on the semantic web, and whether this can be done in a way analogous to the linked data web. I want to quickly blog my thoughts before I head to the pub :)

There are currently a number of great tutorials on how to publish RDF as linked data. Without going into too much detail, every URI in the published RDF is dereferenceable, which very roughly speaking means that it returns some information when you visit it. A URI such as http://os.rkbexplorer.com/id/osr7000000000037256 will return some RDF/XML if the client requests RDF, or some HTML if you are visiting from, say, a web browser. There are a number of ways to modularise the data, but typically the information returned for a URI will be the result of a SPARQL DESCRIBE query, or will be the triples in which the URI appears as either the subject or the object. Apologies for the quick and dirty description of linked data; more information can be found in the tutorials linked above.
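The second of those two conventions (return every triple in which the URI appears as subject or object) can be sketched in a few lines of plain Python. This is an illustrative toy, not a real RDF store: triples are just tuples, and the shortened `ex:` names are hypothetical.

```python
def describe(triples, uri):
    """Return the triples that mention `uri` as subject or object,
    i.e. the simple describe-style subgraph discussed above."""
    return {t for t in triples if t[0] == uri or t[2] == uri}

# Hypothetical example data (URIs shortened for readability).
triples = {
    ("ex:osr37256", "rdf:type", "ex:Region"),
    ("ex:osr37256", "rdfs:label", "'Somewhere'"),
    ("ex:osr99999", "ex:contains", "ex:osr37256"),
    ("ex:osr99999", "rdfs:label", "'Elsewhere'"),
}

# Dereferencing ex:osr37256 would return the three triples that
# mention it; the unrelated label of ex:osr99999 is left out.
subgraph = describe(triples, "ex:osr37256")
```

A real server would of course serialise that subgraph as RDF/XML (or HTML) rather than returning Python tuples, but the selection logic is the same.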

RDF vocabularies and ontologies are typically just published on the web as dumps of RDF/XML, and only in some cases are the classes and properties dereferenceable. In other words, the whole file is simply uploaded in bulk. There are guidelines for publishing RDF vocabularies here.

It seems to me that this will be inadequate for publishing larger and more complex ontologies on the web. Do we need a way to publish large ontologies on the web in a linked data style? I think it would certainly be useful.

The dereferenceable URI bit is easy enough and can be done as per linked data. For example, the HTML page for the URI http://www.ontology.com/River could provide a description of the class River using some controlled natural language. The question then is what to put in the OWL/RDF file that is retrieved for the class River from that URI. What is the class, or OWL, equivalent of a SPARQL DESCRIBE?
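The content-negotiation half of this is standard linked data practice: inspect the HTTP Accept header and redirect the client to either an RDF or an HTML representation. Here is a minimal sketch; the redirect targets (`/data/River.rdf`, `/doc/River.html`) are hypothetical paths, and a production server would parse Accept headers properly (q-values and all) rather than doing a substring check.

```python
def negotiate(accept_header):
    """Choose a representation for a class URI from the Accept header.
    A deliberately simplified sketch of HTTP content negotiation:
    real servers should parse the header fully, including q-values."""
    if "application/rdf+xml" in accept_header:
        # A machine client asked for RDF: redirect to the data document.
        return ("303 See Other", "/data/River.rdf")
    # Default to the human-readable HTML description.
    return ("303 See Other", "/doc/River.html")

# A browser typically sends text/html; an RDF crawler asks for RDF/XML.
status, location = negotiate("application/rdf+xml")
```

The 303 redirect is the usual way to distinguish the (non-retrievable) class itself from the documents describing it.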

The problem to me seems to be similar to the problem of ontology modularisation discussed here. Suppose I am building an ontology about animals and I need to use the concepts Farm and Zoo from a buildings ontology. When I import the class Zoo, how can I be sure that I include all the relevant axioms to describe a Zoo, and only the relevant axioms to describe a Zoo? I won’t describe the hows here, as it has been discussed in this tutorial (and numerous supporting papers). The point is that there are tools (try one online) for extracting the correct axioms from an ontology for describing a given class. Should these tools be used in the linked data community as a means to enable us to publish detailed ontologies on the linked data web? So to be clear:

1) We publish the full OWL file on the web (in an analogous way to a dump of RDF data in the linked data web) – this would be, say, the complete buildings ontology.

2) We make each URI dereferenceable and use content negotiation to retrieve either RDF/XML or HTML as required, just as we do for linked data.

3) When we dereference a class URI (e.g. http://www.ontology.com/Zoo), the axioms contained in the RDF file returned for “Zoo” are determined by the ontology modularisation tools described here, rather than some perhaps more naive approach (whereas for linked data this would be, say, a SPARQL DESCRIBE on the URI).
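To give a feel for what step 3 means in practice, here is a toy sketch of collecting the axioms relevant to a class. It is emphatically not the locality-based module extraction the real tools implement; it just follows each class’s definition upwards through the classes it mentions. The buildings-ontology axioms below are hypothetical, modelled as a mapping from a class to the set of classes its axioms refer to.

```python
def upward_module(axioms, seed):
    """Collect the axioms defining each class in `seed`, plus those of
    every class mentioned on the right-hand side, recursively.
    A crude stand-in for proper locality-based module extraction."""
    module, todo = {}, set(seed)
    while todo:
        cls = todo.pop()
        if cls in module or cls not in axioms:
            continue
        referenced = axioms[cls]
        module[cls] = referenced      # keep this class's axioms
        todo |= referenced            # and chase the classes they mention
    return module

# Hypothetical buildings ontology: class -> classes its axioms mention.
axioms = {
    "Zoo": {"Building"},
    "Farm": {"Building"},
    "Cinema": {"Building"},
    "Building": {"Structure"},
    "Structure": set(),
}

# Dereferencing .../Zoo would return axioms for Zoo, Building and
# Structure, but nothing about Farm or Cinema.
zoo_module = upward_module(axioms, {"Zoo"})
```

The real modularisation tools are cleverer than this: they guarantee that the extracted module preserves all entailments over the seed signature, which this naive walk does not.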

I’d love to know if there are any links to similar work and to know what people think about this proposal.
