
A Crude BBC Places Linked Data mashup


Last night I did some more experimenting with the Python rdflib library. This time I did a crude (it’s not that pretty or polished yet) mashup of some of the BBC linked data and DBpedia linked data.

The Beeb have been in the linked data business for a while and their initial efforts were around programmes and music (but you should also check out the great linked-data-powered wildlife finder).

Recently they’ve started to experiment with tagging their programmes with relevant people, places and organisations.

I decided it might be quite nice to have a simple mashup showing TV and radio shows about different places. To this end I did a quick linked data mashup to produce some KML showing this information.

To do this I again used Python’s rdflib. It was a simple case of following links from a place to a TV/radio programme and loading the RDF into a graph, then executing a simple SPARQL query over that graph to produce a KML file containing programme details and a lat/long coordinate for plotting on a map. The BBC place data did not contain lat/long for all the places, but luckily it did include a ‘sameAs’ link to the place information in DBpedia, so all we had to do was follow the ‘sameAs’ link and load in the DBpedia data as well.
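Roughly, the graph-building and query steps look something like the sketch below. This isn’t the exact script: the place URL, the po:Programme pattern and the exact shape of the lat/long data are assumptions about how the BBC and DBpedia data are modelled.

```python
from rdflib import Graph
from rdflib.namespace import OWL

# Hypothetical BBC place URL -- substitute the RDF URL of a real place resource
place_uri = "http://www.bbc.co.uk/places/example-place.rdf"

g = Graph()
g.parse(place_uri)  # load the BBC RDF describing the place and its tagged programmes

# The BBC data doesn't always carry coordinates, so follow owl:sameAs
# links into DBpedia and load that RDF into the same graph
for _, _, same_as in g.triples((None, OWL.sameAs, None)):
    if "dbpedia.org" in str(same_as):
        g.parse(str(same_as))

# Pull programme labels plus the place's lat/long out of the merged graph.
# po:Programme comes from the BBC Programmes Ontology; treating the place's
# coordinates as plain geo:lat/geo:long is an assumption here.
query = """
PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX po:   <http://purl.org/ontology/po/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?programme ?label ?lat ?long
WHERE {
    ?programme a po:Programme ;
               rdfs:label ?label .
    ?place geo:lat ?lat ;
           geo:long ?long .
}
"""
rows = list(g.query(query))
for row in rows:
    print(row.programme, row.label, row.lat, row.long)
```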

I explained how to use rdflib for this sort of thing in my last post; in the meantime, here is the source code and here is the KML. The KML can be used with a mapping API of your choice, but for a quick look drop the KML URL into the search box on Google Maps or open it in Google Earth.
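Serialising the SPARQL results as KML needs nothing more than a little string templating; something along these lines (again only a sketch: the write_kml helper and output filename are made up for illustration):

```python
from xml.sax.saxutils import escape

def write_kml(rows, path="bbc_places.kml"):
    """Write one KML Placemark per SPARQL result row (label, programme URI, lat, long)."""
    placemarks = []
    for row in rows:
        placemarks.append(
            "  <Placemark>\n"
            "    <name>{name}</name>\n"
            "    <description>{uri}</description>\n"
            "    <Point><coordinates>{lon},{lat}</coordinates></Point>\n"
            "  </Placemark>".format(
                name=escape(str(row.label)),
                uri=escape(str(row.programme)),
                lon=row.long,
                lat=row.lat,
            )
        )
    with open(path, "w") as f:
        f.write(
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            "<Document>\n" + "\n".join(placemarks) + "\n</Document>\n</kml>\n"
        )

write_kml(rows)
```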

At the moment this is a bit clunky, but it’s just a start…

  1. Chris Wallace
    January 24, 2011 at 12:55 am | #1

    Nice idea, but I wonder if the conversion to RDF is worthwhile in this case. I had a bash at doing this in XQuery using the XML files – here is the working:

    http://184.73.216.20/exist/rest/db/apps/utils/taskhtml.xq?tasks=/db/apps/BBC/info.xml

    On the EC2 micro instance used here the scrape takes about 30 secs so caching is needed.

    Chris
