Posts Tagged ‘mash-up’

Some quick linked data hacks

June 16, 2010

In previous posts I discussed the work I’d been doing on my family tree linked data. I decided it might be interesting to plot the places of birth of my ancestors on a map to get a true idea of where they all came from. The result, a faceted browser that lets me filter by family name or birthplace, can be seen here. This mashup was very easy to achieve using linked data and a tool called Exhibit. To quote: “Exhibit lets you easily create web pages with advanced text search and filtering functionalities, with interactive maps, timelines, and other visualizations…”.

As I explained in a previous post, the places of birth for family members were recorded in my family tree linked data by linking to place resources in DBpedia, for example: http://www.johngoodwin.me.uk/family/event1917. In order to perform the mashup I needed lat/long values for each place of birth. One option might have been to do some kind of geocoding on the place names using an API. However, I didn’t relish the world of pain I’d get from retrieving data in some arbitrary XML format, or the issues with ambiguities in place names. The easiest way to get that information was to enrich my family tree data by consuming the linked data I’d connected to. This is how I did it…

First I ran a simple SPARQL query to find all the places referenced:

select distinct ?place
where { ?a <http://purl.org/NET/c4dm/event.owl#place> ?place . }

(match on all triples of the form ?a <http://purl.org/NET/c4dm/event.owl#place> ?place, and then return all distinct values of ?place).
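As an aside, if you would rather script this step than type queries into a form, Python’s SPARQLWrapper library will run the query and hand back the bindings. A minimal sketch (the endpoint URL is a placeholder for wherever your triplestore lives):

from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint URL: point this at your own triplestore.
sparql = SPARQLWrapper("http://localhost:8890/sparql")
sparql.setQuery("""
select distinct ?place
where { ?a <http://purl.org/NET/c4dm/event.owl#place> ?place . }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Pull the bare URIs out of the SPARQL JSON results structure.
places = [b["place"]["value"] for b in results["results"]["bindings"]]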

The results are URIs of the form http://dbpedia.org/resource/Luton. I then used cURL (a command-line tool for transferring data with URL syntax) to retrieve the RDF/XML behind each of the URIs:

curl -H "Accept: application/rdf+xml" http://dbpedia.org/resource/Luton

This basically says give me back RDF/XML for the resource http://dbpedia.org/resource/Luton. It was then easy to insert this RDF/XML into my triplestore (RDF database). I can do this because my family tree data was in linked data format (RDF) and linked to existing resources that are also in RDF, so there was no problem with integrating data in different schemas/formats.
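Incidentally, if you would rather not shell out to cURL, rdflib can fetch and load the linked data in one go. A minimal sketch, assuming an in-memory graph, a made-up filename for my family tree dump, and the places list from the query above:

import rdflib

g = rdflib.Graph()
g.parse("family-tree.rdf", format="xml")  # made-up filename for my own data
for uri in places:  # the place URIs returned by the earlier query
    # rdflib negotiates for RDF/XML, much like the cURL call above
    g.parse(uri)
g.serialize(destination="enriched.rdf", format="xml")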

Now all I had to do was retrieve the information needed for the mashup. This was done using a SPARQL query along these lines (a sketch: I’ve assumed FOAF for names, the bio vocabulary for birth events, and DBpedia’s georss:point for the coordinates):

# NB: the WHERE clause is a reconstruction; the FOAF, bio and georss property names are assumptions
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX bio: <http://purl.org/vocab/bio/0.1/>
select ?a ?name ?familyname ?birthdate ?birthplacename ?latlong
where { ?a foaf:name ?name ; foaf:familyName ?familyname ; bio:event ?b .
?b a bio:Birth ; bio:date ?birthdate ; <http://purl.org/NET/c4dm/event.owl#place> ?birthplace .
?birthplace <http://www.w3.org/2000/01/rdf-schema#label> ?birthplacename ; <http://www.georss.org/georss/point> ?latlong .
FILTER langMatches( lang(?birthplacename), "EN" ) }
ORDER BY ?birthdate

Given that Exhibit works really well with JSON, I opted to have the query return its results in that format (SPARQL results are typically serialised as XML or JSON). It was then a simple matter of massaging the resulting JSON into a form that Exhibit can process.
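For the curious, that massaging is little more than flattening the SPARQL JSON bindings into Exhibit’s items array. A minimal Python sketch (the output property names are my own choice; note that Exhibit’s map view wants a "lat,lng" string while DBpedia’s georss:point value is space separated):

def sparql_json_to_exhibit(sparql_json):
    # Flatten SPARQL SELECT results into Exhibit's {"items": [...]} shape.
    items = []
    for row in sparql_json["results"]["bindings"]:
        items.append({
            "label": row["name"]["value"],
            "familyname": row["familyname"]["value"],
            "birthdate": row["birthdate"]["value"],
            "birthplace": row["birthplacename"]["value"],
            # georss:point is "lat long"; Exhibit's map view wants "lat,lng"
            "latlng": row["latlong"]["value"].replace(" ", ","),
        })
    return {"items": items}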

I did another simple mashup using the BBC linked data here. This followed a similar process, except that the BBC had already enhanced their data by following links to DBpedia. This BBC mashup basically lets you find episodes of radio brands that play your favourite artists/genres. The BBC data contains links between artists and radio shows. There are ‘sameAs’ links from the BBC artist data to DBpedia. It is DBpedia that then provides the connection between artists and their genre(s).

Hopefully this shows the power of linked data in a simple way. There is a straightforward pattern to follow…

1) Make data, and make that data available in RDF. People can then link to you, and you can link to other people who have data in RDF. So I made family tree data in RDF, and the BBC made music/programme data in RDF.

2) Link to linked data resources on the web (in this case we both linked to DBpedia).

3) Enhance your data by consuming the data behind those links – this is trivial because both datasets are in RDF, the linked data format.

4) Make something cool/useful :)

In fact, it will be even easier to build useful services once the linked data API is in use, as this will bypass the need for SPARQL in many cases. As more and more people provide linked data we will have an easy way to provide services built on top of combined data sources, and the linked data API will make it web 2.0 friendly for those (understandably?) put off by SPARQL.

The Guardian Open Platform and Data Store

March 10, 2009

Today the Guardian launched their Open Platform. According to their website, “The Open Platform is the suite of services that make it possible for our partners to build applications with the Guardian.” The Open Platform contains two products: the Content API and the Data Store. The Content API provides a REST (-ish, apparently) mechanism for querying a vast amount of documents and content from the Guardian. The Data Store is “a collection of important and high quality data sets curated by Guardian journalists”. This is all very cool!

Currently the Data Store provides a large number of datasets on subjects as diverse as military spending, carbon emissions and university rankings. The data is provided as spreadsheets that have been uploaded to Google Docs, allowing easy access. Again, all very cool.

This work will obviously result in a lot of cool applications and mash-ups. However, the semantic web geek in me can’t help thinking that mash-ups are so last week :) It seems obvious (?) that the next step for the Guardian Data Store is to provide the data in RDF and host it as linked data. These datasets would be a fantastic addition to the linked data web, allowing mesh-ups where data from various linked data sources can be fused in different ways.

Time to convince the Guardian that this is the next logical step for this already great piece of work.
