NoTube blog – the future of television is social and semantic

Filming our NoTube demo: Hollywood here we come

Posted in Demos by vickybuser on April 30, 2010

Within the NoTube project, the BBC, together with colleagues from VU University Amsterdam, Pro-netics and other project partners, is exploring the theme of “Internet TV in the Social Web”. Over the past few months we have been planning and building our first prototypes in preparation for the first NoTube Project Review.

Watching TV in a traditional setting is core to these initial prototypes. In the project’s spirit of reusing and adapting existing TV software, Libby installed the open-source media centre MythTV on her TV at home, along with the various other bits of hardware and software needed to set up the demo. It works in Libby’s front room (and you don’t get a more realistic setting than that), but the set-up doesn’t transport very well to Amsterdam, where the Review will take place. This is mostly because it depends on Freeview (DVB-T) – and the selection of free-to-air DVB-T channels in The Netherlands doesn’t include the BBC channels that our initial demo requires.

For this reason (and because the BBC’s iPlayer content is not available outside the UK), we decided to make a video of this part of our demo to include in the Review presentation – and to share with our NoTube and BBC colleagues.

NoTube demo

Using the NoTube iPhone app to control the TV: interactions on the iPhone translate into Jabber requests to the MythTV backend using the XMPP 'buttons' protocol
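As a rough sketch of that remote-control path, the snippet below sends a single button press as a Jabber chat message using the slixmpp library. The JIDs, password and command string are placeholders, and the actual payload format of the 'buttons' protocol used in the demo may well differ.

```python
from slixmpp import ClientXMPP

class RemoteButton(ClientXMPP):
    """Minimal sketch: send one 'button press' to the MythTV backend over XMPP."""

    def __init__(self, jid, password, target_jid, button):
        super().__init__(jid, password)
        self.target_jid = target_jid
        self.button = button
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        # Placeholder payload: the real 'buttons' protocol message may be structured differently.
        self.send_message(mto=self.target_jid, mbody=self.button, mtype="chat")
        self.disconnect()

if __name__ == "__main__":
    # All identifiers below are invented for illustration.
    xmpp = RemoteButton("phone@example.org", "secret",
                        "mythtv@example.org", "BUTTON:channel_up")
    xmpp.connect()
    xmpp.process(forever=False)
```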

Libby and I did a trial run a few weeks ago: we used Libby’s digital camera for filming, iMovie for editing and our own voices for the narration. We decided to make two versions of the video using the same content but different voice-overs: one describing the end-user experience and the other explaining the back-end processes. We showed the results to our NoTube colleagues at the last Project Meeting in Turin.

NoTube iPhone app

Using a smart phone to see more information about a programme from the Linked Open Data cloud
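For a flavour of the kind of lookup involved, here is a minimal sketch that fetches a programme description from DBpedia’s public SPARQL endpoint. The resource URI, property and endpoint are assumptions for illustration; the app’s actual Linked Open Data queries are not shown in this post.

```python
import json
import urllib.parse
import urllib.request

# Illustrative only: the resource, property and endpoint are assumptions,
# not the lookups the NoTube app actually performs.
SPARQL_ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Doctor_Who> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

params = urllib.parse.urlencode({
    "query": QUERY,
    "format": "application/sparql-results+json",
})
with urllib.request.urlopen(f"{SPARQL_ENDPOINT}?{params}") as response:
    results = json.load(response)

for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```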

This was our first attempt at making a video and we learnt a number of things:

  • We shouldn’t underestimate how long it takes to make a good video
  • Pacing is crucial – some of the processes we explain are quite complex, and viewers can’t be expected to concentrate on what they’re seeing and what they’re hearing at the same time
  • Our DIY narration was too fast for people to follow: we needed a professional to record the voice-over for us
  • There were too many freeze frames in places where we were describing the back-end processes: we needed to include some diagrams to help explain what’s going on and link it to the underlying architecture
  • Showing the same film twice with different voice-overs didn’t work: it was too hard for the audience to tell the difference between them

We came back from the Turin meeting with useful feedback and set to work on organising our ‘production’. We liked the suggestion from our NoTube colleague, Lora Aroyo, that we video a selection of our various planning diagrams, drawings and wireframes to illustrate the end-user story. Lora helped us write a script for this, and I gathered the relevant materials and made a trial video to see if the idea could work. We also revisited the script for the demo video, trying to make it more concise, and we made some simple box-and-arrow diagrams to show how the various NoTube services interact behind the scenes.

Filming day arrived. Our cameraman did a great job with lights and camera angles, and making sure that we had plenty of footage at different ranges to choose from. Editing proved more challenging: some of the bits of filming that we really liked just didn’t work with our script and our initial edits were too long. We soon realised that we had to be really ruthless with our script editing and we cut it by half. We also discovered that what read well on paper didn’t sound so good when it was read out loud. And, again, we underestimated how long the editing process would take…

In the end we just had to be pragmatic and do the best we could in the time we had (one day) before the voice recording was scheduled. For editing purposes we used our own voice-overs, but it wasn’t until we heard the voice of our professional narrator that we really appreciated how much better it sounded!

You can see the videos on Vimeo here – please do let us know what you think:

http://vimeo.com/11232681

http://vimeo.com/11231965

Brokering EPG & enrichment services

Posted in Demos, Thinking Out Loud by Stefan on April 22, 2010

Libby’s post already mentioned some of the current activities in NoTube related to EPG metadata, particularly EPG harvesting, enrichment and filtering. The Semantic Web Services (SWS) broker, one of the central components in NoTube, aims at brokering distributed services, and scenarios involving EPG-related services are a perfect use case for showcasing its role. This is because, from an application point of view, EPG processing usually involves orchestrating a set of services which (1) harvest EPG metadata, (2) enrich it with additional information (e.g. harvested from the Linked Data cloud), and (3) filter it according to the specific preferences of a user.

Each of these steps requires the application developer to know where the actual services are (i.e. the invocation endpoints), how to invoke them and process their responses, and in particular how to mediate the mismatches that inevitably occur when orchestrating such distributed services. For example, during step (1) an application needs to be aware of each available EPG service, its particular capabilities and the kind of EPG it provides – e.g. for which channel and in which language.

SWS aim to take that strain away from the application (developer) by providing an abstract interface – the broker’s API – through which application-oriented goals can be requested from a single endpoint. The broker makes it possible to expose much more complex functionality to application interfaces and to return responses in whatever structure and format the specific requester desires. This is usually achieved on the basis of formal semantic descriptions of services, which allow for automated discovery, orchestration and execution of Web services.
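To make the contrast concrete, here is a minimal sketch of what requesting such an application-oriented goal could look like from the application developer’s side. The endpoint URL, goal name and input parameters are invented for illustration; the actual IRS-III broker API used in NoTube is not shown in this post.

```python
import json
import urllib.request

# Hypothetical broker endpoint and goal name, purely for illustration.
BROKER_ENDPOINT = "http://broker.example.org/achieve-goal"

def get_personalised_epg(user_id: str, language: str) -> dict:
    """Ask the broker for a single application-oriented goal instead of
    invoking the harvesting, enrichment and filtering services separately."""
    goal = {
        "goal": "get-filtered-enriched-epg",          # placeholder goal name
        "inputs": {"user": user_id, "language": language},
    }
    request = urllib.request.Request(
        BROKER_ENDPOINT,
        data=json.dumps(goal).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # The broker returns the result in the requester's preferred format.
        return json.load(response)

# Usage (against the hypothetical endpoint above):
# epg = get_personalised_epg("libby", "en")
```

The point is simply that the application states what it wants (a filtered, enriched EPG for a given user and language) and leaves the discovery, orchestration and mediation of the underlying services to the broker.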


(A higher resolution version of the video is available here.)

In our current prototype, we provided an implementation of the broker component based on the IRS-III SWS environment. The brief demo video below illustrates how the broker automatically selects suitable sets of EPG-harvesting services by reasoning over semantic annotations of the EPG services and their underlying channels, identifying the services/channels that match a specific user’s language. These are then orchestrated in one go, together with enrichment (mainly DBpedia-based) and filtering functionality that pre-selects suitable metadata records.
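As a very rough illustration of the selection step only, the toy function below filters a hard-coded list of EPG-harvesting services by an annotated language. In the prototype this decision is made by the IRS-III broker reasoning over formal semantic descriptions, not over plain dictionaries, and the service entries and annotation keys here are assumptions.

```python
# Toy stand-in for the broker's service-selection step; entries are invented.
EPG_SERVICES = [
    {"name": "bbc-epg", "channels": ["BBC One", "BBC Two"], "language": "en"},
    {"name": "ned-epg", "channels": ["Nederland 1"],        "language": "nl"},
    {"name": "rai-epg", "channels": ["Rai Uno"],            "language": "it"},
]

def select_harvesting_services(user_language: str) -> list[dict]:
    """Keep only the EPG-harvesting services whose annotated language
    matches the user's language preference."""
    return [svc for svc in EPG_SERVICES if svc["language"] == user_language]

if __name__ == "__main__":
    for svc in select_harvesting_services("en"):
        print(svc["name"], svc["channels"])
```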

That’s just one of the first examples of SWS brokerage within NoTube; we are constantly working on exposing further functionality as added-value SWS goals. As a side note, since it turned out that more human-oriented structured service annotations are also required within NoTube, we are currently implementing a more lightweight service annotation store based on the iServe repository, which we might introduce in another post soonish.