TRACK 1

Tutorial 1A Title: Introduction to Linked Open Data (LOD) 9:30–13:00 — Monday, 2 September 2013

Tutorial Lead: Ivan Herman
Ivan Herman is the Semantic Web Activity Lead at W3C. He graduated as a mathematician from the Eötvös Loránd University of Budapest, Hungary, in 1979. After a brief scholarship at the Université Paris VI, he joined the Hungarian research institute for computer science (SZTAKI), where he worked for six years. He left Hungary in 1986 and, after a few years in industry, joined the Centre for Mathematics and Computer Sciences (CWI) in Amsterdam, where he has held a tenured position since 1988. He received a PhD in Computer Science from Leiden University, in the Netherlands, in 1990. Ivan joined the W3C team as Head of Offices in January 2001 while maintaining his position at CWI. He served as Head of Offices until June 2006, when he was asked to take the Semantic Web Activity Lead position.
Abstract: The goal of the tutorial is to introduce the audience to the basics of the technologies used for Linked Data. These include RDF, RDFS, the main elements of SPARQL, SKOS, and OWL. General guidelines on publishing data as Linked Data will also be provided, along with real-life usage examples of the various technologies.
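To give a flavor of what the tutorial covers, here is a minimal, purely illustrative sketch of Linked Data in RDF's Turtle serialization; the example URIs and resources are invented and are not part of the tutorial materials:

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix dct:  <http://purl.org/dc/terms/> .
    @prefix ex:   <http://example.org/> .

    # Each RDF triple links a subject URI to a value via a property
    # drawn from a shared vocabulary (here Dublin Core Terms).
    ex:book1 dct:title   "Metadata for Digital Collections" ;
             dct:subject ex:concept1 .

    # SKOS models the subject as a concept in a controlled vocabulary.
    ex:concept1 a skos:Concept ;
                skos:prefLabel "Metadata"@en ;
                rdfs:label     "Metadata" .

A SPARQL query over such data could then, for example, select all books whose subject is ex:concept1.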
Who Should Attend: The tutorial requires general knowledge of computer science and information technology, but does not rely on any knowledge of XML, programming languages, or knowledge representation.
Tutorial 1B Title: Introduction to Ontology Concepts and Terminology 14:30–18:00 — Monday, 2 September 2013

Tutorial Lead: Steven Miller
Steven Miller is a Senior Lecturer at the University of Wisconsin-Milwaukee School of Information Studies. He teaches graduate courses on Metadata, Information Architecture, and RDF and Ontologies for the Semantic Web. Steven worked as a professional cataloger and later as Head of the Monographs Department at the UWM Libraries before moving into teaching full time. He is a member of the Editorial Board of the Journal of Library Metadata and the author of the book Metadata for Digital Collections, published by Neal-Schuman in 2011. He has previously served as Chair of the ALA ALCTS Metadata Interest Group and Co-Chair of the Wisconsin Heritage Online Metadata Working Group. He has taught numerous metadata and cataloging workshops and created training resources for the Library of Congress and OCLC.
Abstract: This tutorial will provide a beginning-level introduction to basic RDFS and OWL ontology concepts and terminology. It will approach ontology modeling within the context of the RDF data model and the Linked Data and Semantic Web visions, viewing RDFS and OWL ontologies as methods of providing machine-actionable structure to RDF triples. The tutorial will focus on the concepts of classes and subclasses, properties and subproperties, property domains and ranges, class inheritance, and the logical inferencing that these specifications enable over RDF instance data. It will include illustrative examples and, if possible, briefly show the use of the Protégé ontology editor. It will also include an overview of OWL, with its greater potential inferencing power based on various property and class specifications, as listed in the outline.
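As a taste of the domain-and-range reasoning described above, here is a small illustrative Turtle sketch; the class, property, and instance URIs are hypothetical:

    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/> .

    # Ontology: a property with a declared domain and range.
    ex:authorOf rdfs:domain ex:Person ;
                rdfs:range  ex:Book .

    # Instance data: a single triple using that property.
    ex:miller ex:authorOf ex:book1 .

    # From the domain and range declarations, an RDFS reasoner
    # can infer two new triples:
    #   ex:miller rdf:type ex:Person .
    #   ex:book1  rdf:type ex:Book .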
Who Should Attend: This tutorial is intended for information professionals who have little or no prior familiarity with ontologies, RDFS, or OWL and who want to gain an introductory-level understanding of basic ontology concepts and terminology. Many working information professionals fall within this intended audience and competency level.
Learning Outcomes:
At the conclusion of the tutorial, participants will:
- Understand basic RDFS ontology concepts such as classes, properties, instances, domain and range.
- Understand how ontologies provide structure to RDF triples.
- Be able to create a basic, beginning-level RDFS-compatible ontology.
- Determine logical inferencing capabilities based on specific class, property, domain and range specifications.
- Gain initial exposure to more complex OWL property and class specifications and their greater potential inferencing power (see the sketch after this list).
- Better understand existing RDF-based ontologies such as BIBO, BIBFRAME, the BBC ontologies, and the Europeana Data Model; the DCMI Metadata Terms specifications; and conceptual models such as the Dublin Core Abstract Model.
- Be better able to understand and contribute to professional discussions about ontologies, ontology concepts, and ontology terminology on discussion lists, at conferences, and the like.
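The OWL sketch referenced above: a brief, hedged illustration of the kinds of property and class specifications OWL adds beyond RDFS (all ex: names are invented):

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/> .

    # owl:inverseOf: asserting one direction of a relation lets a
    # reasoner infer the other direction.
    ex:authorOf owl:inverseOf ex:hasAuthor .

    # owl:TransitiveProperty: part-whole chains are inferred
    # automatically (a part of a part is a part of the whole).
    ex:partOf a owl:TransitiveProperty .

    # Class inheritance: every instance of ex:Monograph is also
    # inferred to be an instance of ex:Book.
    ex:Monograph rdfs:subClassOf ex:Book .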
TRACK 2

Tutorial 2A Title: Metadata Provenance 9:30–13:00 — Monday, 2 September 2013

Tutorial Lead: Kai Eckert
Kai Eckert is a research associate at the University of Mannheim, in the chair of Chris Bizer, where he leads infrastructure development for the EU-funded project DM2E (Digitised Manuscripts to Europeana). He is a computer and information scientist with master's degrees from the University of Mannheim (Computer Science, Business Informatics) and the Humboldt University of Berlin (MA LIS). He worked for several years as a software developer before returning to the university to pursue a doctorate with a thesis on usage-driven maintenance of knowledge organization systems. From 2010 to 2012, he worked for the Mannheim University Library as a subject specialist and deputy head of the IT department, where he developed the library's Linked Data service, the first publication of a library catalogue as Linked Data in Germany. He was a member of the W3C Provenance Incubator Group and the W3C Library Linked Data Incubator Group. Currently, he participates in the W3C Provenance Working Group and co-chairs the DCMI Metadata Provenance Task Group.
Abstract:
When metadata is distributed, combined, and enriched as Linked Data, tracking its provenance becomes a hard problem. Using data encumbered with licenses that require attribution of authorship may eventually become impractical as more and more data sets are aggregated; this is one of the main motivations for the call to open data under permissive licenses like CC0. Nonetheless, there are important scenarios where keeping track of provenance information is a necessity. A typical example is the enrichment of existing data with automatically obtained data, for instance as a result of automatic indexing. Ideally, the origins, conditions, rules, and other means of production of every statement are known and can be used to put that statement into the right context.
In RDF, the mere representation of provenance (i.e., statements about statements) is challenging. We explore the possibilities, from the unloved reification mechanism and other proposed Linked Data practices through to named graphs and recent developments in the upcoming version of RDF. The session closes with a brief overview of vocabularies that can be used to actually express the provenance. This lays the groundwork for the PROV tutorial in the afternoon, where the two most interesting and at the same time most divergent approaches, W3C PROV and Dublin Core as a provenance vocabulary, will be introduced in detail.
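To make the "statements about statements" problem concrete, here is a hedged Turtle sketch of classic RDF reification, with the named-graph alternative shown in a comment; all ex: URIs are invented:

    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix dct: <http://purl.org/dc/terms/> .
    @prefix ex:  <http://example.org/> .

    # The triple whose provenance we want to record:
    ex:book1 dct:subject ex:conceptA .

    # RDF reification: four extra triples to describe one statement.
    ex:stmt1 a rdf:Statement ;
        rdf:subject   ex:book1 ;
        rdf:predicate dct:subject ;
        rdf:object    ex:conceptA ;
        dct:creator   ex:indexingService .   # provenance attached here

    # Named-graph alternative (TriG syntax): provenance is attached
    # to a graph URI rather than to a reified statement.
    #   ex:g1 { ex:book1 dct:subject ex:conceptA . }
    #   ex:g1 dct:creator ex:indexingService .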
There will be time to discuss use cases and open challenges contributed by the participants. If you would like to contribute a case, please contact the organizer for details.
Who Should Attend: The tutorial is intended for Linked Data practitioners who know the basic concepts of RDF and Linked Data and are interested in possible ways to publish data about the Linked Data itself.
Learning Outcomes: Participants will understand the general problems that arise when provenance information for Linked Data is to be represented, and will get an overview of existing solutions and best practices, with their respective advantages and disadvantages.
Tutorial 2B Title: PROV-O: The W3C Provenance Ontology 14:30–18:00 — Monday, 2 September 2013

Tutorial Lead: Daniel Garijo
Daniel Garijo is a Ph.D. student in the Ontology Engineering Group at the Artificial Intelligence Department of the Computer Science Faculty of Universidad Politécnica de Madrid. His research focuses on e-Science and the Semantic Web, specifically on how to increase the understandability of scientific workflows using provenance, metadata, intermediate results, and Linked Data. He has participated in the W3C Provenance Incubator Group and the Dublin Core Metadata Provenance Task Group, and is currently a member of the W3C Provenance Working Group.
Abstract: Provenance is key for describing the evolution of a resource, the entity responsible for its changes, and how these changes affect its final state. A proper description of a resource's provenance shows to whom it is attributed and can help determine whether or not it can be trusted. This tutorial will provide an overview of the W3C PROV data model and its serialization as an OWL ontology. The tutorial will incrementally explain the features of the PROV data model, from the core starting-point terms to the most complex concepts. Finally, the tutorial will show the relation between PROV-O and the Dublin Core metadata terms.
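As a flavor of the model, here is a hedged Turtle sketch built from PROV-O's three starting-point classes; the prov: terms are from the ontology, while all ex: resources are invented:

    @prefix prov: <http://www.w3.org/ns/prov#> .
    @prefix ex:   <http://example.org/> .

    # The three starting-point classes: Entity, Activity, Agent.
    ex:record2 a prov:Entity ;
        prov:wasGeneratedBy  ex:enrichment ;     # the activity that produced it
        prov:wasDerivedFrom  ex:record1 ;        # the entity it came from
        prov:wasAttributedTo ex:cataloger .      # the agent responsible for it

    ex:enrichment a prov:Activity ;
        prov:wasAssociatedWith ex:cataloger ;
        prov:used              ex:record1 .

    ex:cataloger a prov:Agent .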
Who Should Attend: This tutorial is intended for information professionals who are not familiar with the W3C standard for provenance on the Web (PROV), or who want to learn more about the specific concepts and properties of the model and its relation to the Dublin Core terms. A basic knowledge of RDF and OWL is recommended for following the tutorial, although it is not critical.
Summary Outline:
- Introduction and background: provenance and the W3C Provenance Working Group.
- The PROV data model:
  - A simple example of PROV
  - PROV starting-point terms: basic terms for describing resources
  - PROV extended terms: advanced terms for enriching provenance descriptions (how do we assert the provenance of provenance?)
  - PROV qualified classes and properties: classes and properties for attaching further detail to relationships
- The PROV-O ontology: starting points, extended terms, and qualified classes in OWL.
- Mapping Dublin Core to PROV (a small illustrative sketch follows this outline):
  - Relation of Dublin Core to provenance
  - PROV entities and Dublin Core resources
  - Direct mappings
  - Complex mappings
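The sketch referenced in the outline: a hedged Turtle illustration of the kind of direct correspondence the tutorial discusses between Dublin Core Terms and PROV. The specific pairings shown are illustrative, and the ex: resources are invented:

    @prefix dct:  <http://purl.org/dc/terms/> .
    @prefix prov: <http://www.w3.org/ns/prov#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex:   <http://example.org/> .

    # A Dublin Core description of a document:
    ex:doc1 dct:creator ex:alice ;
            dct:created "2013-09-02T00:00:00Z"^^xsd:dateTime .

    # A simplified PROV reading of the same facts:
    ex:doc1 a prov:Entity ;
        prov:wasAttributedTo ex:alice ;                              # ~ dct:creator
        prov:generatedAtTime "2013-09-02T00:00:00Z"^^xsd:dateTime .  # ~ dct:created

The complex mappings go further, expanding a single Dublin Core term into a richer PROV pattern involving activities and agents.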