SEMHE ’11 – Call for papers

The 3rd SemHE workshop (Semantic Web Applications in Higher Education) will be co-located with EC-TEL’11 this September in Palermo.

The workshop organisers are hoping that it will attract people working on semantic web applications in HE and those who have been developing linked data infrastructures for the HE sector. The workshop will address the following themes:

– Semantic Web applications for Learning and Teaching Support in Higher Education.
– Use of linked data in repositories inside or across institutions.
– Collaborative learning and critical thinking enabled by semantic Web applications.
– Interoperability among Universities based on Semantic Web standards.
– Ontologies and reasoning to support pedagogical models.
– Transition from soft semantics and lightweight knowledge modelling to machine processable, hard semantics.
– University workflows using semantic Web applications and standards.

The deadline for submissions is 8 July and full information is available at the workshop website.

2nd Linked Data Meetup London

Co-located with dev8D, the JISC Developer Days event, this week I gathered at UCL, along with about 150 others, for the 2nd Linked Data Meetup London.

Over the past year or so the concept and use of linked data seems to be gaining more and more traction. At CETIS we’ve been skirting around the edges of semantic technologies for some time – trying to explore realisation of the vision, particularly for the teaching and learning community, most recently with our semantic technologies working group. Lorna’s blog post from the last meeting of the group summarised some potential activity areas we could be involved in.

The day started with a short presentation from Tom Heath, Talis, who set the scene by giving an overview of the linked data view of the web. He described it as a move away from the document-centric view to a more exploratory one – the web of things. These “things” are commonly described, identified and shared. He outlined ten tasks with potential for linked data and put forward a case for how linked data could enhance each one. For example, locating: just now we can find a place, say Aberdeen, but linked data allows us to begin to disambiguate the concept of Aberdeen for our own context(s). Similarly with sharing content: with a linked data approach we just need to be able to share and link to (persistent) identifiers, and not worry about how we move content around. According to Tom, the document-centric metaphor of the web hides information in documents and limits our imagination in terms of what we could do with, and how we could use, that information.

The next presentation was from Tom Scott, BBC, who illustrated some key linked data concepts being exploited by the BBC’s Wildlife Finder website. The site allows people to make their own “wildlife journeys” by letting them explore the natural world in their own context. It also allows the BBC to, in the nicest possible way, “pimp” their own programme archives. Almost all the data on the site comes from other sources, either on the BBC or the wider web (e.g. WWF, Wikipedia). As well as using Wikipedia, their editorial team are feeding back into the Wikipedia knowledge base – a virtuous circle of information sharing. That worked well in this instance and subject area, but I have a feeling it might not always be the case. I know I’ve had my run-ins with Wikipedia editors over content.

They have used DBPedia as a controlled vocabulary. However, as it only provides identifiers and no structure, they have built their own graph to link content and concepts together. There should be RDF available from their site now – it was going live yesterday. Their ontology is available online.
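The pattern here – borrow identifiers from DBPedia, add your own structure – can be sketched in a few lines. This is an illustrative toy, not the BBC’s actual model; the programme identifier and predicate names below are made up for the example.

```python
# Concepts are identified by shared DBpedia URIs; a local triple graph
# supplies the structure that DBpedia identifiers alone don't provide.
DBPEDIA = "http://dbpedia.org/resource/"

# (subject, predicate, object) triples linking local content to concepts.
# "bbc:programmes/example" and the predicates are hypothetical names.
graph = [
    ("bbc:programmes/example", "features", DBPEDIA + "Lion"),
    ("bbc:programmes/example", "features", DBPEDIA + "Serengeti"),
    (DBPEDIA + "Lion", "livesIn", DBPEDIA + "Serengeti"),
]

def objects(subject, predicate):
    """All objects for a given subject/predicate pair."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Which shared concepts does this programme feature?
print(objects("bbc:programmes/example", "features"))
```

Because the concept URIs are shared, anyone else’s graph that mentions the same DBpedia resources links to this one for free.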

Next we had John Sheridan and Jeni Tennison from data.gov.uk. They very aptly conceptualised their presentation around a wild-west pioneer theme. They took us through how they are staking their claim, laying tracks for others to follow and outlined the civil wars they don’t want to fight. As they pointed out we’re all pioneers in this area and at early stages of development/deployment.

The data.gov.uk project wants to:
* develop social capital and improve delivery of public services
* make progress and leave a legacy for the future
* use open standards
* look at approaches to publishing data in a distributed way

Like most people (and from my perspective, the teaching and learning community in particular) they are looking for, to continue with the western theme, the “Winchester ’73” for linked data. Just now they are investigating creating (simple) design patterns for linked data publishing to see what can be easily reproduced. I really liked their “brutally pragmatic and practical” approach, particularly in terms of developing simple patterns which can be re-tooled to allow the “rich seams” of government data to be used, e.g. tools to create linked data from Excel. Provenance and trust are recognised as critical, and they are working with the W3C provenance group. Jeni also pointed out that data needs to be easy to query and process – we all neglect usability of data at our peril. There was quite a bit of discussion about trust, and John emphasised that the data.gov.uk initiative was about public, not personal, data.

Lin Clark then gave an overview of the RDF capabilities of the Drupal content management system. For example, it has default RDF settings and FOAF capability built in. The latest version now has an RDF mapping user interface and can be set up to offer SPARQL endpoints. A nice example of the “out of the box” functionality which is needed for general uptake of linked data principles.

The morning finished with a panel session where some of the key issues raised through the morning presentations were discussed in a bit more depth. In terms of technical barriers, Ian Davies (CEO, Talis) said that there needs to be a mind shift in application development, from one centralised database to multiple apps accessing multiple data stores. But as Tom Scott pointed out, if you start with things people care about and create URIs for them, then a linked approach is much more intuitive – it is “insanely easy to convert HTML into RDF”. It was generally agreed that identifying real-world “things”, and modelling and linking the data, is the really hard bit. After that, publishing is relatively straightforward.
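To give a flavour of the “HTML into RDF” point: if a page already carries RDFa-style attributes, harvesting triples from it really is straightforward. The sketch below uses Python’s standard-library HTML parser and only handles the simplest `about`/`property` case – real RDFa processing is much richer, and the subject/property names are invented for the example.

```python
from html.parser import HTMLParser

class TripleHarvester(HTMLParser):
    """Collect (subject, predicate, object) triples from RDFa-style markup."""
    def __init__(self):
        super().__init__()
        self.triples = []
        self._pending = None  # (subject, predicate) awaiting element text

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "about" in a and "property" in a:
            self._pending = (a["about"], a["property"])

    def handle_data(self, data):
        if self._pending and data.strip():
            s, p = self._pending
            self.triples.append((s, p, data.strip()))
            self._pending = None

html = '<p about="#aberdeen" property="rdfs:label">Aberdeen</p>'
harvester = TripleHarvester()
harvester.feed(html)
print(harvester.triples)  # [('#aberdeen', 'rdfs:label', 'Aberdeen')]
```

The hard bit, as the panel said, isn’t this mechanical step – it’s deciding which real-world things get URIs in the first place.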

The afternoon consisted of a number of themed workshops, mainly discussions around the issues people are grappling with just now. I think for me the human/cultural issues are crucial, particularly provenance and trust. If linked data is to gain more traction in any kind of organisation, we need to foster a “good data in, good data out” philosophy and move away from the fear of exposing data. We also need to ensure that people understand that taking a linked data approach doesn’t automatically presume that you are going to make that data available outwith your organisation; it can help with internal information sharing and knowledge building too. Of course what we need are more killer examples, or Winchester ’73s. Hopefully over the past couple of days at dev8D progress will have been made towards those killer apps, or at least some lethal bullets.

The meet up was a great opportunity to share experiences with people from a range of sectors about their ideas and approaches to linked data. My colleague Wilbert Kraan has also blogged about his experiments with some of our data about JISC funded projects.

For an overview of the current situation in UK HE, it was timely that Paul Miller’s Linked Data Horizon Scan for JISC was published on Wednesday too.

Are there compelling use cases for using semantic technologies in teaching and learning?

On Monday the SemTech project had a face to face meeting in London to update on progress with the project and their survey of semantic technologies being used in education.

The day started with Thanassis Tiropanis giving an overview of the project to date and in particular the survey site (see previous blog post), which has collated 40 semantic applications that can be, or are being, used in teaching and learning. The team are now grappling with trying to make sense of the data collected. Some early findings, perhaps not surprisingly, show that there is most activity around information collection, publishing and data gathering. However, there are some examples of more collaborative activities being supported through semantic technologies. There is still time to contribute to the survey if you want to add anything or share your experiences.

I found the afternoon group discussions the most interesting part of the day. I chaired a group looking at the institutional perspective around using/adopting semantic technologies in respect of the following four questions:

1. What are the most important challenges in HE today?
2. How might semantic technologies be part of the solution?
3. What are the current barriers to semantic technology adoption?
4. What areas of semantic technology require investing additional effort in?

As you would expect we had a fairly wide-ranging discussion, but ultimately agreed that the key to getting some institutional traction would be to have examples/use cases of how semantic technologies could help with key institutional concerns such as student retention. We came to the consensus that if data was more rigorously defined, categorized and normalized, i.e. in RDF/triple stores, then it would be easier to query disparate data sources with added intelligence and so provide more tailored feedback and early warning signs to teachers and administrators. However, at the moment most institutions suffer from having numerous data empires which don’t see the need to communicate with each other and don’t always have the most rigorous approach to data quality. Understanding data workflow within the institution is central to this. It will be interesting to see if any of the current JISC Curriculum Design projects decide to adopt a more semantic approach to workflow issues.
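The retention argument can be made concrete with a toy example. Assuming two hypothetical “data empires” (a student registry and a VLE) have normalised their facts into triples, a single pattern query can span both – the system names, predicates and threshold below are all invented for illustration.

```python
# Two separate systems, once normalised into triples, share one store.
registry = [
    ("student:101", "enrolledOn", "course:infx"),
    ("student:102", "enrolledOn", "course:infx"),
]
vle = [
    ("student:101", "weeksSinceLogin", 4),
    ("student:102", "weeksSinceLogin", 0),
]
store = registry + vle  # disparate sources, one queryable graph

def match(store, s=None, p=None, o=None):
    """Simple triple-pattern match; None acts as a wildcard."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Early-warning query spanning both sources: enrolled students
# who haven't logged in to the VLE for three weeks or more.
at_risk = [s for s, _, weeks in match(store, p="weeksSinceLogin")
           if weeks >= 3 and match(store, s=s, p="enrolledOn")]
print(at_risk)  # ['student:101']
```

In practice this is what a triple store plus SPARQL gives you; the point is that the cross-silo query only becomes cheap once the data empires agree on identifiers and predicates.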

So the answers we came up with were:

1. External influences, e.g. HEFC, student retention, recruitment, course provisioning, research profile.
2. Letting us think of the questions we haven’t thought of yet.
3. Data empires; lack of knowledge, use cases and good examples in practice.
4. Demonstrators to show the value of adding a semantic layer to existing data.

Semantic Technologies in education survey site now available

The next stage of the SemTech project (as reported earlier in Lorna’s blog) is now underway. The team are now conducting an online survey of relevant semantic tools and services. The survey website provides a catalogue of relevant semantic tools and services and information on how they relate to education.

If you have an interest in the use of semantic technologies in teaching and learning, you can register on the site and add any relevant technologies you are using, or add tags to the ones already documented. As the project is due for completion by the end of February, the project team are looking for feedback by 2 February.

Semantic technologies in teaching and learning working group – first meeting

The first meeting of the semantic technologies in teaching and learning working group took place at the University of Strathclyde on Friday 3 October.

The SemTech project outlined their project and there was a general discussion regarding the proposed methodology, scope and community engagement. A Twine group has been established for the working group (if you want an invitation, let me know). The next WG meeting will be sometime in December, dates and location to be confirmed, followed by a public meeting in early 2009.

More information on the working group is available on the intrawiki.

Overview of semantic technologies

Read/Write Web have produced a really concise guide to the use of semantic technologies – Semantic Web patterns: a guide to semantic technologies. They have also just introduced a new monthly podcast feature called “The Semantic Web Gang”. The first episode is called “readiness for the semantic web”. Although it takes a primarily business view of things, I’m sure there will be lots of crossover with the e-learning community, and it’s a good way to keep abreast of developments in the use of semantic technologies.