Asset Description

From SAM

Asset Description refers to the characterization of media content (images, videos, audio files, tweets, etc.) both at the meta-level (e.g. generic details about a music file, such as artist, title and length) and at the content level (e.g. the lyrics at a given point in a song, the context of a book chapter, or the articles and actors in a film at a point in time). Described in this way, assets can afterwards be aggregated, syndicated and finally consumed.

Introduction

Recently, there has been a tremendous increase in the amount of digital multimedia information distributed over the Web, because nowadays anyone can take part in the production of digital multimedia content. Production companies shoot high-quality video in digital format; organizations that hold multimedia content (such as TV channels, film archives, museums and libraries) convert analog material to digital so as to preserve, manage and distribute it easily and in a unified way. What is more, amateurs and novices use digital devices to produce image and video content in MPEG and JPEG formats. [1]

Annotating multimedia content is not a trivial task and, although there are plenty of standards for describing content, they have not been widely used. The main reason is that manually annotating multimedia content is difficult, time-consuming and thus expensive. Moreover, multimedia applications often require metadata from a variety of different vocabularies. [2] These problems could be solved by merging and aligning existing good practices in the multimedia industry with the current technological advances of the Semantic Web. [3]

Relevance to SAM

One of the three main pillars of SAM is Content Syndication: the concept of creating content once and delivering it to many different places at the same time. Media content must first be annotated before it can be syndicated. In the context of SAM, an Asset may include:

  • Existing database records from a content partner
  • Widgets/Social Media connections (Twitter, Facebook, Google+, YouTube, Flickr, Instagram)
  • SAM internal Assets and Social Media connected services
  • Links to relevant products such as books, DVDs, Blu-rays, CDs or video games
  • Links to relevant digital downloads or streams
  • Relevant Tweets, Posts, Videos, Photos, Groups, Lists and User Accounts
  • Relevant blogs, reviews, articles, images and websites from the open web

All these assets should be enriched with metadata attributes reflecting their subject matter, so as to build a strong context-aware environment in which a wider range of information can be delivered to the consumer.

The Asset Description in the context of SAM is described in T5.1 Asset Description and Composition Techniques, in WP5 Content Syndication and Delivery. The results of this task will serve as inputs for T5.2 Content Gateways, T5.3 Assets Aggregation and Composition and T5.5 Assets Marketplace.

State of the Art Analysis

The recent advances in ubiquitous and mobile computing have stimulated a Universal Multimedia Access (UMA) model as an emerging component for the next generation of multimedia applications. The basic concept underlying UMA is universal or seamless access to multimedia content, by automatic selection and adaptation of content based on the user’s environment (Mohan, Smith and Li, 1999). This boosts a series of new applications in the area of media search and retrieval, summarisation and multimedia content annotation [WIKI01]. Methods for adaptation include rate reduction, adaptive spatial and temporal sampling, quality reduction, summarisation, personalisation, and reediting of the multimedia content.

Many different standards and vocabularies exist for annotating data, describing different aspects of multimedia that range from media types, viewed from a software engineering perspective, to graphic design characteristics.

Data Models

The Semantic Web is an initiative led by the World Wide Web Consortium (W3C). It aims at converting the World Wide Web to a Web of linked data by including semantic content in web pages. In this way, people are able to create data stores on the Web, build vocabularies, and write rules for handling data [4]. Under the Semantic Web umbrella, the following technologies have been developed:

EDM

"Europeana is a service, largely funded by the European Commission, that aggregates and gives access to the online cultural heritage of Europe. Metadata relating to cultural heritage objects is harvested from libraries, museums, archives and audio-visual collections – many of whom are aggregators themselves. This has necessitated the development of the Europeana Data Model (EDM), a cross-domain data model that will accommodate the specific requirements of these different domains. It has been developed with the principles of the semantic web as a fundamental requirement to allow enrichment of the data by extensive linking."[5] EDM can include any element (i.e. class or property) found in a content provider's description. For this reason, EDM not only introduces new elements, but also re-uses elements from other namespaces: RDF, RDFS, Dublin Core, SKOS[6] and ORE[7]. [8]

Schema.org

This site provides a collection of schemas that webmasters can use to mark up HTML pages in ways recognized by major search providers, and that can also be used for structured data interoperability (e.g. in JSON). Search engines including Bing, Google, Yahoo! and Yandex rely on this markup to improve the display of search results, making it easier for people to find the right Web pages. Many sites are generated from structured data, which is often stored in databases; when this data is formatted into HTML, it becomes very difficult to recover the original structure. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, and can also enable new tools and applications that make use of the structure.[9]
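
As an illustration, a page about a film could embed schema.org terms as JSON-LD in a script element. This is a hypothetical example (the film and person are invented), but Movie, Person, name and director are actual schema.org types and properties:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Movie",
  "name": "A Film About Assets",
  "director": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```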

RDF

The Resource Description Framework (RDF) is the W3C's standard model for data interchange on the Web. RDF aims at facilitating data merging even if the underlying schemas differ, and supports the evolution of schemas over time without requiring all data consumers to be changed[10]. In RDF, descriptions of resources are expressed as sets of subject-predicate-object triples (RDF graphs). The elements of an RDF graph can be IRIs, blank nodes or datatyped literals. In this way, structured and semi-structured data can be mixed, exposed, and shared across different applications.[11]
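
The triple model can be sketched in a few lines of plain Python. This is only an illustration of the subject-predicate-object structure (the example.org namespace and the asset names are invented); real applications would use an RDF library rather than raw tuples:

```python
# RDF triples modelled as plain (subject, predicate, object) tuples.
EX = "http://example.org/"          # hypothetical namespace
DC = "http://purl.org/dc/terms/"    # Dublin Core terms namespace

triples = [
    (EX + "film42", DC + "title",   "A Film About Assets"),
    (EX + "film42", DC + "creator", EX + "director7"),
    (EX + "director7", "http://xmlns.com/foaf/0.1/name", "Jane Doe"),
]

def objects(graph, subject, predicate):
    """Return all objects of triples matching the given subject and predicate."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects(triples, EX + "film42", DC + "title"))
```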

Dublin Core Element Set

"The Dublin Core Metadata Element Set is a vocabulary of fifteen properties for use in resource description".[12] Dublin Core metadata can be assigned to several types of resources, such as web resources (video, images, web pages, etc.) as well as physical resources (books, CDs, artworks, etc.)[13]. The original set of 15 classic metadata terms, known as the Dublin Core Metadata Element Set, is[14]:

  1. Title
  2. Creator
  3. Subject
  4. Description
  5. Publisher
  6. Contributor
  7. Date
  8. Type
  9. Format
  10. Identifier
  11. Source
  12. Language
  13. Relation
  14. Coverage
  15. Rights
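
The fifteen elements above can be applied to a media asset as simple key-value pairs. The record below is a hypothetical sketch (all values are invented); in practice the same pairs would be emitted as dc:title, dc:creator and so on in RDF or embedded metadata:

```python
# A hypothetical Dublin Core record for an invented film asset.
dc_record = {
    "title":       "A Film About Assets",
    "creator":     "Jane Doe",
    "subject":     "media syndication",
    "description": "A hypothetical film used to illustrate Dublin Core.",
    "publisher":   "Example Studios",
    "contributor": "John Smith",
    "date":        "2014-05-01",
    "type":        "MovingImage",
    "format":      "video/mp4",
    "identifier":  "http://example.org/film42",
    "source":      "http://example.org/archive",
    "language":    "en",
    "relation":    "http://example.org/book7",
    "coverage":    "Europe",
    "rights":      "Copyright Example Studios",
}

print(len(dc_record))  # one entry per Dublin Core element
```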

FOAF

FOAF is a project devoted to linking people and information using the Web. Regardless of whether information is in people's heads, in physical or digital documents, or in the form of factual data, it can be linked. FOAF integrates three kinds of network: social networks of human collaboration, friendship and association; representational networks that describe a simplified view of a cartoon universe in factual terms, and information networks that use Web-based linking to share independently published descriptions of this inter-connected world. FOAF does not compete with socially-oriented Web sites; rather it provides an approach in which different sites can tell different parts of the larger story, and by which users can retain some control over their information in a non-proprietary format.[15]

ORE

Open Archives Initiative Object Reuse and Exchange (OAI-ORE) defines standards for the description and exchange of aggregations of Web resources. These aggregations, sometimes called compound digital objects, may combine distributed resources with multiple media types including text, images, data, and video. The goal of these standards is to expose the rich content in these aggregations to applications that support authoring, deposit, exchange, visualization, reuse, and preservation. Although a motivating use case for the work is the changing nature of scholarship and scholarly communication, and the need for cyberinfrastructure to support that scholarship, the intent of the effort is to develop standards that generalize across all web-based information including the increasing popular social networks of “web 2.0”.[16]

OWL

"The Web Ontology Language is a semantic markup language for publishing and sharing ontologies on the World Wide Web, designed for use by applications that need to process the content of information. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly-expressive sublanguages: OWL Lite, OWL DL, and OWL Full". [17]

SKOS

SKOS provides a standard way to represent knowledge organization systems using the Resource Description Framework (RDF). Encoding this information in RDF allows it to be passed between computer applications in an interoperable way. Using RDF also allows knowledge organization systems to be used in distributed, decentralised metadata applications. Decentralised metadata is becoming a typical scenario, where service providers want to add value to metadata harvested from multiple sources.[18]

Serialization Formats

JSON-LD

JSON-LD is a lightweight Linked Data format. It is easy for humans to read and write. It is based on the already successful JSON format and provides a way to help JSON data interoperate at Web-scale. JSON-LD is an ideal data format for programming environments, REST Web services, and unstructured databases such as CouchDB and MongoDB.[19]
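
A JSON-LD document is ordinary JSON plus a few reserved keys: @context maps plain JSON keys to vocabulary IRIs, @id names the resource, and @type states its class. The sketch below builds a hypothetical asset description with only the standard library (the example.org identifiers are invented):

```python
import json

# A hypothetical JSON-LD description of a media asset.
doc = {
    "@context": {
        "title":   "http://purl.org/dc/terms/title",    # map "title" to dc:title
        "creator": "http://purl.org/dc/terms/creator",  # map "creator" to dc:creator
    },
    "@id": "http://example.org/asset/42",
    "@type": "http://example.org/Asset",
    "title": "A Film About Assets",
    "creator": "http://example.org/person/7",
}

serialized = json.dumps(doc, indent=2)
print(serialized)
```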

Turtle

Turtle (Terse RDF Triple Language) is a format for expressing data in the Resource Description Framework (RDF) data model, with a syntax similar to that of SPARQL. RDF, in turn, represents information using "triples", each of which consists of a subject, a predicate, and an object, expressed as Web URIs (or, in object position, literal values).[20]
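
A hypothetical asset description in Turtle (the example.org namespace and names are invented); @prefix declarations abbreviate the full URIs, and the semicolon repeats the subject for a second predicate-object pair:

```turtle
@prefix dc: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/> .

ex:film42 dc:title   "A Film About Assets" ;
          dc:creator ex:director7 .
```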

Manchester

The Manchester OWL Syntax, developed by the CO-ODE project at The University of Manchester, is a new syntax designed for writing OWL class expressions. It was influenced by both the OWL Abstract Syntax and the DL-style syntax, which uses description logic symbols such as the universal quantifier (∀) or the existential quantifier (∃).[21]

RDA (Resource Description and Access)

RDA is the planned replacement for AACR2 as the predominant content standard in the library community. It is intended to be useful beyond the library community as well. While primarily focused on descriptive metadata, some instructions exist that cover technical, rights, and structural metadata. RDA pushes the boundaries of a content standard, referring to sets of rules as “elements” which makes it closer to a structure standard than AACR2. Different communities will likely find either RDA’s rules aspect or its data element aspect more interesting than the other. The standard is currently in draft; the initial version of RDA is scheduled for release in the summer of 2010. The initial release will have placeholders for several planned chapters[22].

RDF Schema (RDFS)

Resource Description Framework Schema (RDFS) is a set of classes and properties that constitute a data-modelling vocabulary for RDF data. It provides mechanisms for describing groups of related resources and the relationships between them. These resources are used to determine characteristics of other resources, such as the domains and ranges of properties.[23]

RDFa

RDFa is a W3C Recommendation that adds a set of attribute-level extensions to HTML, XHTML and various XML-based document types for embedding rich metadata within Web documents. The RDF data-model mapping enables its use for embedding RDF subject-predicate-object expressions within XHTML documents. It also enables the extraction of RDF model triples by compliant user agents.[24]
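
A minimal hypothetical example (the resource and values are invented): the RDFa 1.1 attributes vocab, resource and property embed Dublin Core statements directly in the HTML:

```html
<div vocab="http://purl.org/dc/terms/" resource="http://example.org/film42">
  <span property="title">A Film About Assets</span>
  by <span property="creator">Jane Doe</span>
</div>
```

A compliant processor extracts two triples from this fragment, one for dc:title and one for dc:creator, both with http://example.org/film42 as subject.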

Microformats

A microformat (sometimes abbreviated μF) is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata and other attributes in web pages and other contexts that support (X)HTML such as RSS. This approach allows software to process information intended for end-users (such as contact information, geographic coordinates, calendar events, and similar information) automatically.[25]

Ontology for Media Resource

The Ontology for Media Resource is a W3C Working Draft designed to provide a vocabulary for media resources, especially those on the Web. A “media resource” is defined as either a tangible, retrievable resource or the abstract work represented by a tangible thing. The Ontology defines a relatively small number of core properties in RDF, including properties for basic description, technical information, and user ratings. The specification also provides mappings to a wide variety of related standards[26].

XMP

Extensible Metadata Platform (XMP) is an XML-based schema for storing image metadata. XMP uses W3C's RDF (Resource Description Framework) standard for tagging files. The XMP specification includes predefined schemas with hundreds of properties for common document and image characteristics.

Standards

MPEG-7

MPEG-7 (Multimedia content description interface) is a content description standard for multimedia assets. It aims to "standardise a core set of quantitative measures of audio-visual features, called Descriptors (D), and structures of descriptors and their relationships, called Description Schemes (DS) in MPEG-7 parlance".[27] It also standardises a language to specify the Description Schemes, called the Description Definition Language (DDL). One of the main objectives of MPEG-7 is to facilitate interoperability and globalisation of data resources and also to provide flexibility of data management. MPEG-7 supports multiple ways of structuring annotations and gives the possibility to add domain dependent descriptors, if the default descriptors do not provide enough detail.

MPEG-21

MPEG-21 is a standard created by the Moving Picture Experts Group that "aims at defining a normative open framework for multimedia delivery and consumption for use by all the players in the delivery and consumption chain". Two essential concepts of this standard are[28]:

  • the definition of a fundamental unit of distribution and transaction (the Digital Item)
  • the concept of Users interacting with Digital Items

VRA

"The VRA Core is a data standard for the description of works of visual culture as well as the images that document them".[29] As this standard is mostly used to describe visual representations of physical objects, a clear distinction is made between annotation records about the object and those about its digital representation, in order to avoid confusion. What is more, during annotation VRA suggests using terms from other vocabularies, such as the Art and Architecture Thesaurus (AAT)[30].

EXIF

Exchangeable image file format (Exif) is a standard that specifies formats for images, sound and auxiliary tags used by digital cameras, smartphones, scanners and other devices that handle image and sound files. The metadata tags defined in the Exif standard cover a broad spectrum:[31]

  • Date and time information
  • Camera settings (camera model and manufacturer, orientation (rotation), aperture, shutter speed, focal length, metering mode, and ISO speed information)
  • A thumbnail for previewing the picture on the camera's LCD screen, in file managers, or in photo manipulation software
  • Descriptions
  • Copyright information
  • Location information

IPTC IIM

The Information Interchange Model (IIM) is a metadata standard for text, images and other media types, created by the International Press Telecommunications Council (IPTC) for exchanging news. The full IIM specification includes a complex data structure and a set of metadata definitions. IIM's file structure technology has largely been superseded by the Extensible Metadata Platform (XMP), which is much more flexible. IPTC and Adobe together developed the IPTC Core Schema for XMP, which is essentially a transfer of IIM properties to XMP. The IPTC Core standard includes 32 metadata elements. [32]

SAM Approach

Following the study of the state of the art and the analysis of the requirements from the user stories and scenarios, as well as the component dependencies on Asset Description, this section focuses on the description of the specification regarding SAM Assets. The SAM Asset Description approach was inspired by the Europeana and the Europeana Data Model not only conceptually, following the same design principles, but also technically, following the same schema structure and reusing where possible concepts and properties from EDM.

Asset Description Dependencies

The components which have direct or indirect dependencies on Asset Description are the following:

AssetDependencies.png

Asset Description Schema

The proposed SAM Asset Description Ontology has two main aspects. On the one hand, the ontology should incorporate the classes, properties and information needed to fulfil the SAM scenario and component requirements; on the other hand, it should maintain the complete information set of the original asset without compromising the information quality, structure and relationships. Therefore, the SAM ontology introduces some new concepts for the purposes of SAM, reusing concepts and properties from Europeana, Dublin Core and Schema.org, while at the same time maintaining backward compatibility with the original asset imported into the SAM environment. These concepts are implemented in the ontology as classes and are analysed below:

  • Asset Class is the main concept of the ontology and includes a list of elements (as object and data properties) to describe the asset information in a structure that can be effectively interpreted by the SAM components. This class incorporates the other classes described in this list and reuses, where possible, terms and concepts from EDM. In addition, the SAM Asset has references to the other SAM classes
  • Semantics Class includes all information regarding the semantic annotation and characterization of assets
  • Social Media Class describes the social media aspects of an asset, such as the related Facebook pages or Twitter accounts
  • Linking Class incorporates information about basic and advanced asset linking and compositions
  • Syndication Class refers to SAM processes for the syndication of assets and their presentation to end users
  • Voice Control Class holds specific information and grammar for personalized control of the devices that present the particular asset
  • Original Asset Class includes a reference to the original asset, which is also stored in the SAM environment, so that information that can be transformed to the aforementioned concepts remains usable in the future.
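
The class structure above can be sketched as follows. This is a hypothetical Python rendering for illustration only; the actual SAM ontology is expressed as an OWL ontology, not as code, and all field names here are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SocialMedia:
    """Social media aspects of an asset (hypothetical fields)."""
    facebook_pages: List[str] = field(default_factory=list)
    twitter_accounts: List[str] = field(default_factory=list)

@dataclass
class Asset:
    """Sketch of the main Asset concept referencing the other classes."""
    identifier: str
    title: str
    semantics: dict = field(default_factory=dict)           # Semantics Class
    social_media: SocialMedia = field(default_factory=SocialMedia)
    linking: List[str] = field(default_factory=list)        # linked asset identifiers
    original_asset: Optional[str] = None                    # reference to the imported original

asset = Asset(identifier="http://example.org/asset/42", title="A Film About Assets")
asset.social_media.twitter_accounts.append("@examplefilm")
```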

SamOntology.png

Latest Developments

The final version of the Asset Description has evolved in order to describe the data of BDS (http://www.bdslive.com/) more efficiently. The main structure of the ontology remains the same, but it is enriched with many more fields to cover all the data that BDS uses. As depicted in the following image, there are three major classes:

  • Extension: contains all the data needed for the SAM components
  • External concepts: contains all the BDS data
  • SAM_Asset: contains all the essential data about an Asset, regardless of its origin

Ontology sam.png

References

  1. Stamou, Giorgos, et al. "Multimedia annotations on the semantic web." MultiMedia, IEEE 13.1 (2006): 86-90.
  2. Geurts, Joost, Jacco Van Ossenbruggen, and Lynda Hardman. "Requirements for practical multimedia annotation." Workshop on Multimedia and the Semantic Web. 2005.
  3. Stamou, Giorgos, et al. "Multimedia annotations on the semantic web." MultiMedia, IEEE 13.1 (2006): 86-90.
  4. W3C, Semantic Web, http://www.w3.org/standards/semanticweb/
  5. The Europeana Data Model and Europeana Libraries, Robina Clayphan, http://dcevents.dublincore.org/BibData/fyo/paper/view/115/45
  6. Simple Knowledge Organization System, http://www.w3.org/2004/02/skos/intro
  7. Open Archives Initiative Object Reuse and Exchange, http://www.openarchives.org/ore/
  8. Definition of the Europeana Data Model v5.2.4 http://pro.europeana.eu/documents/900548/0d0f6ec3-1905-4c4f-96c8-1d817c03123c
  9. http://schema.org/
  10. Resource Description Framework (RDF), http://www.w3.org/RDF/
  11. RDF 1.1 Concepts and Abstract Syntax, http://www.w3.org/TR/rdf11-concepts/
  12. Dublin Core Metadata Element Set, Version 1.1, http://dublincore.org/documents/dces/
  13. DCMI Metadata Terms, http://dublincore.org/documents/dcmi-type-vocabulary/index.shtml
  14. DCMI Metadata Terms, http://dublincore.org/documents/dces/
  15. FOAF Vocabulary Specification, http://xmlns.com/foaf/spec/
  16. Open Archives Initiative Object Reuse and Exchange, http://www.openarchives.org/ore/
  17. OWL Web Ontology Language, http://www.w3.org/TR/owl-features/
  18. Introduction to SKOS, http://www.w3.org/2004/02/skos/intro
  19. JSON for Linking Data, http://json-ld.org/
  20. Terse RDF Triple Language, http://www.w3.org/TeamSubmission/turtle/
  21. http://protegewiki.stanford.edu/wiki/Manchester_OWL_Syntax
  22. http://www.rdatoolkit.org/
  23. RDF Schema 1.1 http://www.w3.org/TR/rdf-schema/
  24. Rich Structured Data Markup for Web Documents, http://www.w3.org/TR/xhtml-rdfa-primer/
  25. http://microformats.org/wiki/Main_Page
  26. http://www.w3.org/TR/2009/WD-mediaont-10-20090618/
  27. Everything you wanted to know about MPEG-7: Part 1, Frank Nack, Adam T. Lindsay
  28. http://mpeg.chiariglione.org/standards/mpeg-21
  29. Visual Resources Association, http://www.vraweb.org
  30. Geurts, Joost, Jacco Van Ossenbruggen, and Lynda Hardman. "Requirements for practical multimedia annotation." Workshop on Multimedia and the Semantic Web. 2005.
  31. http://www.cipa.jp/exifprint/index_e.html
  32. International Press Telecommunications Council, Information Interchange Model, http://www.iptc.org/site/Photo_Metadata/IIM/