LinkedTV Core Ontology

IRI:
http://data.linkedtv.eu/ontologies/core
This version:
2014-03-23 - v0.2 [OWL] [TTL]
History:
2014-03-23 - v0.2 [OWL] [TTL]
2013-01-12 - v0.1 [OWL]
Authors:
José Luis Redondo García, Raphaël Troncy

This work is licensed under a Creative Commons Attribution License. This copyright applies to the LinkedTV Ontology and the accompanying documentation in RDF. Regarding the underlying technology, LinkedTV uses W3C's RDF technology, an open Web standard that can be freely used by anyone.

This work is partially funded by the European Union’s 7th Framework Programme via the project LinkedTV (GA 287911).


Abstract

The LinkedTV ontology has been developed within the scope of the European FP7 project LinkedTV. This project aims to make TV content and Web information seamlessly interconnected, so that watching television while accessing Web content is smooth and transparent to the viewer. Technologically speaking, this vision requires systems that can represent television information as is done today in the Web of Data: interlinked with other resources at different granularities, combined with any other kind of information, searchable, and accessible anywhere at any time. This is where the LinkedTV ontology plays its role.

The LinkedTV ontology defines a list of classes that are relevant in the vast domain of television content, such as Chapters, Scenes, Concepts, and Objects. The model relies on other established and well-known ontologies such as the Open Annotation Core Data Model, the Ontology for Media Resources, the NERD ontology, and the Programmes Ontology.

In this ontology, television content can be annotated not only at the level of the entire programme but also at different degrees of granularity, thanks to the use of the Media Fragments URI 1.0 specification. The instances of the MediaFragment class are the anchors to which all other information is attached: legacy metadata from the providers, results obtained by automatic analysis of the video file, and, even more importantly, links to other resources on the Web where extra information about the content can be found.
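
As a rough sketch of this anchoring mechanism (the media URIs below are invented for illustration), a temporal fragment addressed with Media Fragments URI 1.0 syntax and its link to the parent resource could be expressed in Turtle as:

```turtle
@prefix ma: <http://www.w3.org/ns/ma-ont#> .

# Hypothetical programme and a temporal fragment covering seconds 120-180,
# addressed with the Media Fragments URI 1.0 syntax (#t=120,180)
<http://data.linkedtv.eu/media/example> a ma:MediaResource .

<http://data.linkedtv.eu/media/example#t=120,180>
    a ma:MediaFragment ;
    ma:isFragmentOf <http://data.linkedtv.eu/media/example> .
```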


Status of this Document

The LinkedTV ontology is expected to evolve gradually over the duration of the LinkedTV EU project. We present here an initial list of classes that is expected to grow during the following months. In parallel, the documentation may be adjusted to reflect the emerging features that LinkedTV intends to implement.

Also, new inferred axioms and changes may be added at any time according to the evolution of the other ontologies involved in the data model. The LinkedTV namespace URI, by contrast, is fixed and its identifier is not expected to change. Furthermore, efforts are underway to ensure the long-term preservation of the http://www.linkedtv.eu domain.

Comments are very welcome, please send them to redondo@eurecom.fr. Thank you.


Table of Contents

  1. LinkedTV ontology at a glance
  2. LinkedTV metamodel infrastructure
  3. LinkedTV classes

1. LinkedTV ontology at a glance

An alphabetical index of LinkedTV terms, by class and by property (relationships, attributes), is given below. All terms are hyperlinked to their detailed descriptions for quick reference.

Classes: ASR, Chapter, Concept, Entity, Face, Keyword, Organization, RelatedContent, Scene, Shot, SpatialObject

Properties: hasConfidence, hasMediaResource, hasRelevance, hasSubtitle

2. LinkedTV metamodel infrastructure

The following ontologies have been selected as the basis for the metadata infrastructure in LinkedTV:

a) Ontologies for feature descriptions of multimedia metadata:

b) Ontologies selected as a large-scale resource for linking detected concepts:

These ontologies are interlinked and make use of each other's features. The BBC Programmes ontology uses FOAF for actor descriptions and the Event Ontology as the provider of superconcepts for events such as broadcasts. The PROV-O ontology relies on the others in a similar way, and all of them together create the LinkedTV data model as shown in the figure.

3. LinkedTV classes

Class: linkedtv:ASR

URI: http://data.linkedtv.eu/ontologies/core#ASR

ASR - Annotates the automatic transcription of the spoken words of the video into text. This information is especially important when subtitles are not available for a particular video.

[back to top]

Class: linkedtv:Chapter

URI: http://data.linkedtv.eu/ontologies/core#Chapter

Chapter - This class represents a certain part of a TV content that elaborates on a specific topic or subject. In the LinkedTV data model, Chapters are always attached to a MediaFragment that specifies their temporal references. A TV content can be composed of zero, two, or more different Chapters, and every Chapter is composed of different Scenes. In most cases, the information about Chapters is given directly by the broadcasters.
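
A minimal sketch of this attachment pattern, with invented instance URIs and assuming the Open Annotation vocabulary used elsewhere in this model:

```turtle
@prefix linkedtv: <http://data.linkedtv.eu/ontologies/core#> .
@prefix oa:       <http://www.openannotation.org/spec/core#> .

# Hypothetical annotation attaching a Chapter to the temporal fragment
# covering the first five minutes of a programme
<http://data.linkedtv.eu/annotation/chapter1> a oa:Annotation ;
    oa:hasBody   [ a linkedtv:Chapter ] ;
    oa:hasTarget <http://data.linkedtv.eu/media/example#t=0,300> .
```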

[back to top]

Class: linkedtv:Concept

URI: http://data.linkedtv.eu/ontologies/core#Concept

Concept - This class represents a general idea, thought, or notion, derived or inferred from specific instances that appear in a MediaFragment. In the LinkedTV project, those concepts are classified according to the hierarchy defined in the LSCOM ontology (http://vocab.linkeddata.es/lscom/) and automatically extracted using specific classifiers.

Equivalent Class:
lscom:Thing

[back to top]

Class: linkedtv:Entity

URI: http://data.linkedtv.eu/ontologies/core#Entity

Entity - This class is used to represent the atomic elements that appear in the text, such as persons, organizations, locations, expressions of time, quantities, etc. In the context of the LinkedTV project, those entities are recognized using the NERD web framework, which unifies numerous named entity extractors (http://nerd.eurecom.fr/).

Equivalent Class:
nerd:Thing

[back to top]

Class: linkedtv:Face

URI: http://data.linkedtv.eu/ontologies/core#Face

Face - Represents the appearance of a person's face inside a certain MediaFragment of the TV content. Usually, those faces are detected by running automatic face recognition processes over the video. The spatial references for the bounding boxes where the faces are shown are encoded in the URL using Media Fragments 1.0.

Equivalent Class:
foaf:Person
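
As an illustration of such a spatio-temporal anchor (all instance URIs are invented), the xywh dimension of Media Fragments 1.0 encodes the bounding box:

```turtle
@prefix linkedtv: <http://data.linkedtv.eu/ontologies/core#> .
@prefix oa:       <http://www.openannotation.org/spec/core#> .

# Hypothetical face detection: a 60x80 px bounding box at pixel (120,80),
# visible between seconds 10 and 15 of the video
<http://data.linkedtv.eu/annotation/face1> a oa:Annotation ;
    oa:hasBody   [ a linkedtv:Face ] ;
    oa:hasTarget <http://data.linkedtv.eu/media/example#t=10,15&xywh=120,80,60,80> .
```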

[back to top]

Class: linkedtv:Keyword

URI: http://data.linkedtv.eu/ontologies/core#Keyword

Keyword - A word that is relevant within the context defined by a certain MediaFragment. Instances of this class are attached to the corresponding MediaFragment by using the property ma:hasKeyword.
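
A minimal sketch of this attachment (the fragment URI is invented, and the use of rdfs:label to carry the word itself is an assumption):

```turtle
@prefix ma:       <http://www.w3.org/ns/ma-ont#> .
@prefix rdfs:     <http://www.w3.org/2000/01/rdf-schema#> .
@prefix linkedtv: <http://data.linkedtv.eu/ontologies/core#> .

# Hypothetical keyword attached to a temporal fragment
<http://data.linkedtv.eu/media/example#t=30,60>
    ma:hasKeyword [ a linkedtv:Keyword ; rdfs:label "election" ] .
```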

[back to top]

Class: linkedtv:Organization

URI: http://data.linkedtv.eu/ontologies/core#Organization

Organization - This class represents the agents responsible for generating oa:Annotations inside the LinkedTV knowledge base. Examples of organizations are the different partners involved in the project: EURECOM, CERTH, RBB, etc.

[back to top]

Class: linkedtv:RelatedContent

URI: http://data.linkedtv.eu/ontologies/core#RelatedContent

Related Content - Content that is related to the main topic featured in the television content. These items may be determined manually (by the content editor) or populated automatically.

[back to top]

Class: linkedtv:Scene

URI: http://data.linkedtv.eu/ontologies/core#Scene

Scene - A part of a TV content that occurs in a single location and in continuous time. In the LinkedTV context, a scene is composed of a set of linkedtv:Shots and normally belongs to a certain linkedtv:Chapter in which other related scenes are included too.

[back to top]

Class: linkedtv:Shot

URI: http://data.linkedtv.eu/ontologies/core#Shot

Shot - Represents a series of frames that runs for an uninterrupted period of time. It can also be seen as the time between the beginning and end of a capturing process, or the continuous footage between two camera edits.

[back to top]

Class: linkedtv:SpatialObject

URI: http://data.linkedtv.eu/ontologies/core#SpatialObject

Spatial Object - This class represents an object that appears in a TV content during a certain period of time and has spatial references attached to it. In the context of the LinkedTV project, those spatial references are basically a list of bounding boxes that specify exactly where the object appears.

[back to top]

Property: linkedtv:hasConfidence

URI: http://data.linkedtv.eu/ontologies/core#hasConfidence

hasConfidence - This property specifies the degree of trust that a linkedtv:Organization agent has in the fact that an instance of the oa:Annotation class is correct and that the information it introduces into the data model is reliable. It is also used for indicating the degree of confidence of a NER extractor when spotting a certain linkedtv:Entity.

OWL Type:
DatatypeProperty
Domain:
http://www.openannotation.org/spec/core#Annotation, http://data.linkedtv.eu/ontologies/core#Entity
Range:
xsd:float
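
A one-triple sketch of its use, with an invented annotation URI:

```turtle
@prefix linkedtv: <http://data.linkedtv.eu/ontologies/core#> .
@prefix xsd:      <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical annotation with a confidence score of 0.85
<http://data.linkedtv.eu/annotation/chapter1>
    linkedtv:hasConfidence "0.85"^^xsd:float .
```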

[back to top]

Property: linkedtv:hasRelevance

URI: http://data.linkedtv.eu/ontologies/core#hasRelevance

hasRelevance - This property expresses the relevance score assigned by a NER extractor when spotting a certain linkedtv:Entity.

OWL Type:
DatatypeProperty
Domain:
http://www.openannotation.org/spec/core#Annotation
Range:
xsd:float

[back to top]

Property: linkedtv:hasMediaResource

URI: http://data.linkedtv.eu/ontologies/core#hasMediaResource

hasMediaResource - This property relates every instance of the po:Version class to the top-level ma:MediaFragment of the television content that it describes.

OWL Type:
ObjectProperty
Domain:
http://purl.org/ontology/po#Version
Range:
http://www.w3.org/ns/ma-ont#MediaFragment
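
A minimal sketch of this relation, with invented instance URIs:

```turtle
@prefix linkedtv: <http://data.linkedtv.eu/ontologies/core#> .
@prefix po:       <http://purl.org/ontology/po#> .

# Hypothetical po:Version linked to the top-level media fragment it describes
<http://data.linkedtv.eu/programme/example/version1> a po:Version ;
    linkedtv:hasMediaResource <http://data.linkedtv.eu/media/example#t=0,1800> .
```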

[back to top]

Property: linkedtv:hasSubtitle

URI: http://data.linkedtv.eu/ontologies/core#hasSubtitle

hasSubtitle - For a particular ma:MediaFragment, this property specifies the text of the corresponding subtitle, which is represented by an instance of the str:String class.

OWL Type:
ObjectProperty
Domain:
http://www.w3.org/ns/ma-ont#MediaFragment
Range:
http://nlp2rdf.lod2.eu/schema/string#String
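
A minimal sketch of its use (the fragment URI is invented, and carrying the subtitle text with rdfs:label is an assumption):

```turtle
@prefix linkedtv: <http://data.linkedtv.eu/ontologies/core#> .
@prefix str:      <http://nlp2rdf.lod2.eu/schema/string#> .
@prefix rdfs:     <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical subtitle attached to a short temporal fragment
<http://data.linkedtv.eu/media/example#t=120,125>
    linkedtv:hasSubtitle [ a str:String ; rdfs:label "Good evening." ] .
```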

[back to top]