Full Paper Abstracts
Paper Author | Paper Title & Abstract |
Kai Eckert | OCS: 154 TITLE: Provenance and Annotations for Linked Data ABSTRACT: Provenance tracking for Linked Data requires the identification of Linked Data resources. Annotating Linked Data on the level of single statements requires the identification of these statements. The concept of a Provenance Context is introduced as the basis for a consistent data model for Linked Data that incorporates current best practices and creates identity for every published Linked Dataset. A comparison of this model with the Dublin Core Abstract Model is provided to gain a further understanding of how Linked Data affects the traditional view of metadata and of the extent to which our approach could help to mediate. Finally, a linking mechanism based on RDF reification is developed to annotate single statements within a Provenance Context (an illustrative reification sketch follows these abstracts). |
Mi Tian, György Fazekas, Dawn Black & Mark Sandler | OCS: 160 TITLE: Towards the Representation of Chinese Traditional Music: A State of the Art Review of Music Metadata Standards ABSTRACT: This paper examines existing metadata standards for describing music related information in the context of Chinese music tradition. With most research attention focussing on Western music, research into computational methods and knowledge representation for world music is still in its infancy. Following the introduction of symbolic elements in the Chinese traditional system, a comparison between these elements and the expressiveness of some prevailing metadata models and standards including Semantic Web ontologies is presented. |
Jian Qin & Kai Li | OCS: 162 TITLE: How Portable Are the Metadata Standards for Scientific Data? A Proposal for a Metadata Infrastructure ABSTRACT: The one-covers-all approach in current metadata standards for scientific data has serious limitations in keeping up with the ever-growing data and in being built as part of a metadata infrastructure. This paper reports preliminary findings from a survey of metadata standards in the scientific data domain and argues for the need for a metadata infrastructure. The survey collected 4,400+ unique elements from 16 standards and categorized these elements into 9 categories. Preliminary findings from the data include inconsistent naming of elements across standards, a fraction of single-word element names, and varying linguistic forms of elements. The limitations of large, complex standards and widely varied naming practices are the major hurdles for building a metadata infrastructure. The paper articulates three principles for a metadata infrastructure: the least effort principle is the premise on which the metadata infrastructure argument operates; being portable is the essential condition or prerequisite for metadata schemes to be "infrastructurized"—a word coined to denote the state of being built into or as part of the infrastructure; and the infrastructure service principle means that metadata elements, vocabularies, entities, and other metadata artifacts are established as the underlying foundation upon which the tools and applications as well as the functions of metadata services are built. |
Antoine Isaac, Valentine Charles, Kate Fernie, Costis Dallas, Dimitris Gavrilis & Stavros Angelis | OCS: 171 TITLE: Achieving Interoperability between the CARARE Schema for Monuments and Sites and the Europeana Data Model ABSTRACT: Mapping between different data models in a data aggregation context always presents significant interoperability challenges. In this paper, we describe the challenges faced and solutions developed when mapping the CARARE schema designed for archaeological and architectural monuments and sites to the Europeana Data Model (EDM), a model based on Linked Data principles, for the purpose of integrating more than two million metadata records from national monument collections and databases across Europe into the Europeana digital library. |
Gordon Dunsire, Mirna Willer & Predrag Perožić | OCS: 173 TITLE: Representation of the UNIMARC Bibliographic Data Format in Resource Description Framework ABSTRACT: This paper describes the history and role of the UNIMARC bibliographic data formats, as background to a discussion of preliminary outcomes of a project to represent the formats in Resource Description Framework and map them to related standards, including the International Standard Bibliographic Description. These include the testing of the strategy for namespace and URI design and the methodology for populating them with content, identification of alignment inconsistencies, and preliminary mappings to Dublin Core, RDA: resource description and access, and MARC 21. The paper discusses the relevance of these standards to the aims of universal bibliographic control and user-focused applications in the Semantic Web. |
Mariana Curado Malta & Ana Alice Baptista | OCS: 178 TITLE: A Method for the Development of Dublin Core Application Profiles (Me4DCAP V0.1): Detailed Description ABSTRACT: This paper is framed within a research-in-progress project whose goal is the development of a method for the development of Dublin Core Application Profiles (Me4DCAP). The development of the first version of Me4DCAP has been published and is in the process of evaluation. This paper describes Me4DCAP V0.1 in detail, showing the sources used to justify its design. Me4DCAP was based on a Design Science Research methodological approach. It takes as its starting point the Singapore Framework for Dublin Core Application Profiles (DCAP) and the Rational Unified Process, and it also integrates knowledge from (i) software development processes and techniques, focusing on the early stages of the processes that deal with data modeling, and (ii) the practices of the metadata community concerning DCAP development. Me4DCAP establishes the way through DCAP development: it establishes when activities must take place, how they interconnect, and which artifacts they will bring about; it also suggests which techniques should be used to build these artifacts. |
Jihee Beak & Richard P. Smiraglia | OCS: 179 TITLE: With a Focused Intent: Evolution of DCMI as a Research Community ABSTRACT: The Dublin Core Metadata Initiative (DCMI) has played a pivotal role in developing and nurturing a metadata domain. DCMI's conference has become an international venue for metadata researchers and professionals. The purpose of this study is to discover the epistemological consensus and social semantics, if any, of a metadata domain based in DCMI conferences. Specifically, we will identify the patterns of emergent and evolving themes over time. To do so we use bibliometric tools including co-word analysis and author co-citation analysis of the DCMI conferences from 2001-2012. The results show a domain with an underlying teleology (Dublin Core metadata) and with social semantics, represented by semantic coherence in the use of terms. Social semantics also demonstrates shared epistemology as revealed by the co-citation perceptions of the domain. The domain clearly has a focused intent, albeit with a limited focus. User groups are missing from the domain's definition as it emerges in this analysis. Also, there is much room for the domain to nurture so-far under-represented research topics. |
Hannah Tarver & Mark Phillips | OCS: 183 TITLE: Lessons Learned in Implementing the Extended Date/Time Format in a Large Digital Library ABSTRACT: In 2012, the University of North Texas (UNT) Libraries implemented the Library of Congress Extended Date/Time Format (EDTF) in the metadata guidelines for their digital holdings, which now contain 460,000 records. This paper discusses the evaluation process used to identify the number of previously-existing dates that meet EDTF standards and those that need to be edited for conformance. It also outlines practical steps used for implementing the standard, such as date validation for metadata creators and changes to date displays for public users (a simplified validation sketch follows these abstracts). Finally, it presents some of the challenges encountered during the implementation process and considerations for other institutions that may want to adopt the standard. |
Diane Ileana Hillmann, Gordon Dunsire & Jon Phipps | OCS: 185 TITLE: Maps and Gaps: Strategies for Vocabulary Design and Development ABSTRACT: In this paper we discuss changes in the vocabulary development landscape, their origins, and future implications, via analysis of several existing standards. We examine the role of semantics and mapping in future development, as well as some newer vocabulary building activities and their strategies. |
Liddy Nevile | OCS: 166 TITLE: DC Metadata is Alive and Well (and has Influenced a New Standard for Education) ABSTRACT: The Dublin Core Metadata Initiative [DCMI], as a community, has collaboratively developed 'standards' for twenty years. DCMI recommendations have become 'international standards' by being adopted, for example, by the United States' National Information Standards Organization [NISO] and then promoted by NISO to the international level [ISO/IEC JTC1]. This has led to wider, formal implementation along one dimension, still shepherded by DCMI. Different dimensions have emerged from significant developments within other entities and communities, such as the World Wide Web Consortium [W3C]. The deliberately open nature of DCMI work has meant that people with no known connection to DCMI can nevertheless take advantage of that work. Further, the paper asserts that 'DC Metadata' is, as a result of work done by outsiders, in fact thriving in the global environment. This paper considers a development where DCMI specifications have been adopted into an ISO/IEC standard. The new standard is to be used to develop other standards. This development has led in turn, for example, to the adoption of the ISO/IEC version as a normative standard for European education. It is yet another example of communities independently doing work on DC Metadata, alongside similar work done by library and other communities. To celebrate what might be considered the commendable 'viral' nature of DC Metadata, the emerging ISO/IEC standard 19788 (Metadata for Learning Resources) is described. |
Tsunagu Honma, Mitsuharu Nagamori & Shigeo Sugimoto | OCS: 144 TITLE: Find and Combine Vocabularies to Design Metadata Application Profiles using Schema Registries and LOD Resources ABSTRACT: A metadata schema, which defines constraints on metadata records, is a fundamental resource for metadata interoperability. Building interoperable metadata schemas has been a main topic of the Dublin Core community since its early days. It is important to make use of existing metadata schemas when developing a new schema in order to minimize newly defined metadata vocabularies, which is a very basic consensus that DCMI has developed. In order to improve the usability of existing metadata schemas for developing new schemas, it is important to improve the usability of information about metadata schemas publicly available on the Internet. This study aims to develop a technology that helps metadata schema designers find useful metadata schemas and use those schemas for metadata schema development. Key concepts used in this study are Description Set Profiles (DSP) as a formal basis of metadata schemas and Linked Open Data (LOD) as a framework to connect metadata schema resources. In this paper, we first analyse requirements for metadata schema search and reuse, following an introduction and a discussion of related work. Then, we present a set of guidelines to find and combine metadata vocabularies and a technology to help develop metadata schemas. |
Jane Greenberg, Shea Swauger & Elena Feinstein | OCS: 189 TITLE: Metadata Capital in a Data Repository ABSTRACT: This paper reports on a study using collaborative modeling and content analysis methods to examine metadata reuse as a form of metadata capital. A sample of 20 cases for two workflows, identified as Case A and Case B, captured 100 instantiations (60 metadata objects, 40 metadata activities). Results indicate that Dryad's overall workflow has a substantial amount of metadata reuse, with 10 of the 12 metadata properties demonstrating metadata capital via reuse at 70% or greater. Metadata reuse was common for basic verbal properties such as author, title, and subject, and it was lacking, and in some cases non-existent, for more complex verbal properties, such as taxon, spatial, and temporal information, as well as for numerical or identifier properties such as date.issued and dc.identifier.citation. System design priority areas are identified to promote the generation of more accurate metadata earlier in the metadata workflow. Contributions to the study of metadata capital are also discussed. |
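Eckert's paper (OCS 154) builds its statement-level annotation mechanism on RDF reification. As a point of reference only, the following minimal sketch shows how a single statement can be reified and linked to a provenance resource using Python's rdflib; all URIs, the provenanceContext1 resource and the choice of Dublin Core properties are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of statement-level annotation via RDF reification, the
# mechanism named in OCS 154. All URIs and the choice of Dublin Core
# properties are illustrative placeholders, not the paper's actual model.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, DCTERMS

EX = Namespace("http://example.org/")

g = Graph()

# The statement to be annotated.
s, p, o = EX.book1, DCTERMS.creator, Literal("Jane Doe")
g.add((s, p, o))

# Reify the statement so that it can be referred to by its own URI.
stmt = EX.statement1
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, s))
g.add((stmt, RDF.predicate, p))
g.add((stmt, RDF.object, o))

# Attach annotations to the reified statement, here a link to a
# (hypothetical) provenance context and a source dataset.
g.add((stmt, DCTERMS.isPartOf, EX.provenanceContext1))
g.add((stmt, DCTERMS.source, EX.sourceDataset1))

print(g.serialize(format="turtle"))
```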
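Tarver and Phillips (OCS 183) mention date validation for metadata creators as one practical step in adopting EDTF. The sketch below, assuming only a handful of simple EDTF patterns (plain dates, season codes, an unspecified final year digit, and '?'/'~' qualifiers), gives a flavour of such validation; it is not the UNT implementation and does not cover the full specification.

```python
# Rough sketch of the kind of date validation OCS 183 describes for metadata
# creators. It accepts only a few simple EDTF patterns (calendar dates,
# season codes 21-24, 'u' for an unspecified final year digit, '?'/'~'
# qualifiers, and plain intervals) and is not a complete implementation.
import re

SIMPLE_EDTF = re.compile(
    r"""^
    \d{3}[\du]                   # year, last digit possibly unspecified (e.g. 198u)
    (-(0[1-9]|1[0-2]|2[1-4])     # month, or season code 21-24
      (-(0[1-9]|[12]\d|3[01]))?  # optional day
    )?
    [?~]?                        # optional uncertain/approximate qualifier
    $""",
    re.VERBOSE,
)

def looks_like_edtf(value: str) -> bool:
    """Accept single dates and simple intervals such as '1990/1995'."""
    parts = value.split("/")
    return 1 <= len(parts) <= 2 and all(SIMPLE_EDTF.match(p) for p in parts)

for sample in ["1923", "198u", "2012-09", "2012-21", "1984?", "1990/1995", "Sept. 2012"]:
    print(sample, looks_like_edtf(sample))
```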
Project Report Abstracts
Project Report Author | Project Report Title & Abstract |
Alejandro Villar Fernández & Ana Santurtún Zarrabeitia | OCS: 133 TITLE: Implementation of a Linked Open Data solution for the Statistics Agency of Cantabria's metadata and data bank ABSTRACT: Statistics is a fundamental piece of the Open Government philosophy, being a basic tool for citizens to understand and make informed decisions about the society in which they participate. Due to the great number of organizations and agencies that collect, process and publish statistical data all over the world, several standards and methodologies for information exchange have been created in recent years in order to improve interoperability between data producers and consumers, of which SDMX is one of the most renowned examples. Despite having been developed independently of this, the global Semantic Web effort (backed mainly by the W3C-driven Linked Open Data initiatives) presents itself as an extremely useful tool for publishing both fully contextualized metadata and data, therefore making them easily understandable and processable by third parties. This report details the changes made to the IT systems of the Statistical Agency of Cantabria (Instituto Cántabro de Estadística, ICANE) to implement a Linked Open Data solution for its website and statistical data bank, making all data and metadata published by this Agency available not only to humans but to automated consumers as well. Multiple standards, recommendations and vocabularies were used for this task, ranging from Dublin Core metadata RDFa tagging, through the creation of several SKOS concept schemes, to providing statistical data using the RDF Data Cube vocabulary (an illustrative Data Cube sketch follows these project reports). |
Hsueh-hua Chen, Yu Lin & Cynthia Chen | OCS: 149 TITLE: Approaches to Building Metadata for Data Curation ABSTRACT: In National Taiwan University (NTU), the Library aims to provide data curation services for university researchers from different research fields, particularly focusing on those from small sciences. In this paper, we will first investigate existing metadata schemas used for data curation services in North America and Europe. Next, we will attempt to develop an application profile, proposing metadata fields to be applied to data curation services in NTU. Finally, we will discuss our findings in this study, and take further action to develop a repository platform. |
Biligsaikhan Batjargal, Takeo Kuyama, Fuminori Kimura & Akira Maeda | OCS: 150 TITLE: Linked Data Driven Dynamic Web Services for Providing Multilingual Access to Diverse Japanese Humanities Databases ABSTRACT: Several cultural domain resources in different languages have become available as Linked Open Data (LOD) in the last few years. However, there is little re-use of this data in multilingual information retrieval applications. The paper discusses Linked Data driven approaches to providing integrated multilingual access to diverse Japanese humanities databases by linking and re-using LOD resources dynamically. It proposes a method which dynamically generates links across databases using Linked Data when a user performs a keyword search. We built a prototype information retrieval system based on LOD resources, personal name authority data, subject headings, and links to other Linked Data resources. Furthermore, we demonstrate how this approach is integrated in real-life retrieval systems and how linking and accessing diverse databases can be enhanced to make use of the available LOD resources. The proposed method also enables access to multiple databases in different languages by using notations in various languages obtained from the authority data resources. It allows access to additional data not only in Japanese databases but also in multilingual databases in other countries, regardless of the language and format of each database. |
Nuno Freire & Markus Muhr | OCS: 151 TITLE: Use of Authorities Open Data in the ARROW Rights Infrastructure ABSTRACT: The ARROW rights infrastructure provides the means to support mass digitisation projects by finding automated ways to clear the rights situation of books to be digitised. ARROW provides seamless interoperability across a distributed network of national data sources, which contain essential information for determining the rights status of works, including national bibliographies from national libraries, books-in-print databases, and rights-holders databases. This paper presents how open data about authors from the Virtual International Authority File (VIAF) is being used in ARROW to support data interoperability across ARROW data sources, and how it is being used for the outputs of the rights clearance process. |
Dominique Ritze & Katarina Boland | OCS: 156 TITLE: Integration of Research Data and Research Data Links into Library Catalogues ABSTRACT: Traditionally, research data and publications are held in separate systems. This results in a disadvantageous situation for researchers, as they need to use a variety of different systems to find relevant information about a topic. We therefore face the challenge of overcoming the boundaries between bibliographic records and research data by providing an integrated search environment for publications and research data. Because of the inherently different system structures and the diverse metadata for publications and datasets, one type of data cannot easily be integrated into information systems for the other data type. We present the challenges that arise when adapting a bibliographic library system to include the additional data and give recommendations for an efficient implementation. By presenting our enhanced prototype, we show the applicability and practicability of our proposed solutions. Since our library catalogue prototype features links between publications and underlying research datasets, we provide direct access to research data metadata stored in remote research data repositories and thus connect both types of information systems. |
Alexander Haffner | OCS: 157 TITLE: IN2N: Cross-institutional Authority Collaboration ABSTRACT: The paper describes the efforts taking place in the Cross-institutional Authority Collaboration (Institutionenübergreifende Integration von Normdaten, IN2N) project. This pilot project, executed in cooperation between the German National Library and the German Film Institute, aims to establish new collaboration models to improve cross-domain authority maintenance. The paper outlines applied strategies for making a shared infrastructure available, as well as workflows for exchanging data about persons; interface enhancements to benefit from innovative web approaches; and cross-institutional data search and representation solutions. Furthermore, we discuss specific boundary conditions for an interoperable cataloguing environment. |
Rene Kelpin, Antje von Schmidt, Michael Hepting, Petra Kokus, Alexandra Leipold & Thorsten Schäfer | OCS: 159 TITLE: Using Dublin Core Standard for the Metadata Description of Transport Statistics—Practical Experience from a Project Dedicated to the Set-Up of an Interlinked Statistics Portal ABSTRACT: The analysis of data on transport developments is one major objective of the transport research focus of the German Aerospace Center (DLR). Following this objective, DLR's Institute for Transport Research (VF) offers, with the Clearing House of Transport and Mobility, a unique collection of publicly funded travel and mobility surveys for Germany, and is the official provider of German household surveys and statistics. Additionally, information about similar statistical data sets and data portals in Europe is made available. The Institute of Air Transport and Airport Research (FW), the second DLR institute in this field, provides detailed information and statistics concerning air transport developments through the "MONITOR" portal. Using a set of indicators, the institute furthermore analyses the global long-term development of air transport with regard to air traffic, financial performance and sustainability issues. As both institutes use the Dublin Core Standard to describe the data sources in use, in 2011 the idea arose to realise a common (meta-)data repository for users who need to combine and investigate different transport statistics. Accordingly, the project "Search TRAnsport DAta @ DLR" was launched in cooperation with DLR's Facility on Simulations and Software Technology (SC) to create an external search and analysis system allowing directed access to the mentioned data repositories. In this context, the presented project report discusses the usage of the Dublin Core Standard in both institutes, as well as the organisational challenges and the technical approach taken to elaborate a harmonised metadata scheme for the implementation of the project portal. |
Stijn Goedertier, Nikolaos Loutas, João Rodrigues Frade, Christophe Colas, Michiel De Keyzer, Debora Di Giacomo, Makx Dekkers & Vassilios Peristeras | OCS: 164 TITLE: Realising a Federation of Repositories of Reusable Metadata ABSTRACT: Semantic assets and the agreements associated with them are essential elements for organisations to understand the meaning of the information they exchange; without them this information would be of little use. In order to facilitate the access of public administrations in Europe to reusable semantic assets, the Interoperability Solutions for European Public Administrations (ISA) Programme of the European Commission has for the last three years been running an action on syndicating content from different semantic asset repositories and making it available through a single point of access. In this paper we present the current state of the ADMS-based federation of semantic asset repositories on Joinup, namely a set of online collections of semantic assets maintained by public administrations, standardisation organisations and businesses, which currently comprises more than 1,500 semantic assets from 20 partner organisations. |
João Aguiar Castro, Cristina Ribeiro, João Rocha da Silva | OCS: 180 TITLE: Designing an Application Profile Using Qualified Dublin Core: A Case Study with Fracture Mechanics Datasets ABSTRACT: Metadata production for research datasets is not a trivial problem. Standardized descriptors are convenient for interoperability, but each area requires specific descriptors in order to guarantee metadata comprehensiveness and accuracy. In this paper, we report on an ongoing research data management experience at U.Porto that relies on prior data survey results and also on a set of tools for uploading and describing datasets. We presented two curation tools to a group of researchers from mechanical engineering, to help them manage and describe their datasets. After monitoring their interactions with the solutions and analyzing the needs of the group, we were able to select a subset of qualified Dublin Core, as well as a series of complementary descriptors, to capture the main aspects of their experiments. The resulting application profile combines generic, standardized DC descriptors with descriptors from a different experimental standard, and introduces extra domain-specific ones. The profile has been validated by the researchers and is now being used in the description of their datasets. |
Inna Kouper, Stacy R Konkiel, Jennifer A Liss & Juliet L Hardesty | OCS: 181 TITLE: Collaborate, Automate, Prepare, Prioritize: Creating Metadata for Legacy Research Data ABSTRACT: Data curation projects frequently deal with data that were not created for the purposes of long-term preservation and re-use. How can curation of such legacy data be improved by supplying necessary metadata? In this report, we address this and other questions by creating robust metadata for twenty legacy research datasets. We report on quantitative and qualitative metrics of creating domain-specific metadata and propose a four-prong framework of metadata creation for legacy research data. Our findings indicate that there is a steep learning curve in encoding metadata using the FGDC content standard for digital geospatial metadata. Our project also demonstrates that data curators who are handed research data "as is" and are tasked with incorporating such data into a data sharing environment can be very successful in creating descriptive metadata—particularly, in conducting subject analysis and assigning keywords based on controlled vocabularies and thesauri. At the same time, they need to be aware of limitations in their efforts when it comes to structural and administrative metadata. |
Muriel Foulonneau, Eric Ras, Elie Abou Zeid & Talar Atéchian | OCS: 190 TITLE: Reusing Textual Resources in Educational Assessment: Adding Text Readability Metrics to Learning Metadata ABSTRACT: Many digital libraries have identified learners as a core audience. Indeed, many of their resources can be reused in educational contexts. Nevertheless, the search criteria used for retrieving texts as a specific multimedia type are limited. They often do not include properties specific to educational contexts. Assigning LOM metadata to a theatre play or a painting is difficult, since it was not created for a particular learning context. However, it is possible to assign metadata to textual resources based on their characteristics and to map these characteristics to an IEEE LOM or DCMI Audience metadata element. Text readability metrics, for instance, can be mapped to educational audiences. In the scope of the iCase project, we are developing an assessment item generation system. We have therefore analyzed metadata models for assessment resources and defined a set of metadata which should be assigned to the multimedia components of assessment items. A major challenge lies in relating multimedia resources to specific audience metadata. In order to include external resources such as texts, we developed a component, available as a Web service, to assign metrics related to text readability. In this paper, we present metadata for assessment items and introduce readability metrics (an illustrative readability computation follows these project reports). |
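The ICANE report (OCS 133) publishes statistical data with the RDF Data Cube vocabulary. The following sketch shows the general shape of a single Data Cube observation in rdflib; the dataset, dimension and measure URIs, and the population figure itself, are invented placeholders rather than ICANE's actual data model.

```python
# Illustrative sketch of publishing one statistical figure with the RDF
# Data Cube vocabulary, in the spirit of the ICANE report (OCS 133).
# The dataset, dimension and measure URIs and the figure are invented.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, DCTERMS, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/statistics/")

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

dataset = EX["dataset/population"]
g.add((dataset, RDF.type, QB.DataSet))
g.add((dataset, DCTERMS.title, Literal("Resident population", lang="en")))

obs = EX["dataset/population/obs1"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, dataset))
g.add((obs, EX.refPeriod, Literal("2012", datatype=XSD.gYear)))  # dimension
g.add((obs, EX.refArea, EX.Cantabria))                           # dimension
g.add((obs, EX.populationCount, Literal(593121)))                # measure (made-up figure)

print(g.serialize(format="turtle"))
```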
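Foulonneau et al. (OCS 190) map text readability metrics to educational audience metadata. The abstract does not name a particular metric; as one common example, the sketch below computes the Flesch Reading Ease score with a deliberately naive syllable counter and maps score bands to coarse audience labels. The band thresholds and labels are illustrative assumptions, not the iCase project's mapping.

```python
# One example of a readability metric of the kind OCS 190 maps to audience
# metadata: the Flesch Reading Ease score. The syllable counter is naive,
# and the score-to-audience mapping is an illustrative assumption.
import re

def count_syllables(word: str) -> int:
    # Count groups of consecutive vowels as a rough syllable estimate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

def audience_hint(score: float) -> str:
    # Hypothetical mapping from score bands to an audience-style label.
    if score >= 80:
        return "primary education"
    if score >= 60:
        return "secondary education"
    return "higher education"

text = "Metadata is structured data about data. It supports discovery and reuse."
score = flesch_reading_ease(text)
print(round(score, 1), audience_hint(score))
```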
Poster Abstracts
Poster Author | Poster Title & Abstract |
Michael D. Crandall, Joseph Tennis, Stuart Sutton, Thomas Baker & David Talley | OCS: 155 TITLE: Planning a Platform for Learning Linked Data ABSTRACT: This poster describes a project under development to create an online environment in support of students and professionals in libraries, museums, and archives for learning the principles and practices of Linked Data. The environment envisioned includes instructional resources for personal study and use as supporting learning resources in formal and informal teaching and training. The project will work at the intersection of a number of current threads in support for anytime, anywhere teaching and learning: (1) the rapid development of instructional components in the form of microtutorials, as seen in the Khan Academy; and (2) the developing focus in education on organizing learning resources based on the competencies and learning outcomes those resources enable. The project will build on the outcomes of a one-year planning grant from IMLS to engage educators, trainers, technologists and application developers in envisioning such an environment. |
Joachim Neubert | OCS: 169 TITLE: ZBW Labs: Publish Projects as Linked Data ABSTRACT: ZBW Labs (http://zbw.eu/labs) gives insight into software developments of the German National Library for Economics (ZBW). It presents applications and (web) services in an experimental or beta state. In order to make information about these projects available on the web of data, the underlying content management system Drupal is used to add semantic (RDFa) markup, combining the DOAP (description of a project), schema.org and DC vocabularies (a sketch of this vocabulary mix follows these posters). Though Drupal 7 has RDF support built in natively, some customizations were required to make Linked Data URIs and nested property structures available. |
Pedro Príncipe, Eloy Rodrigues, Najla Rettberg, Jochen Schirrwagen, Mathias Loesch, Mikael Karstensen Elbæk & Lars Holm Nielsen | OCS: 172 TITLE: OpenAIRE Guidelines for Data Archive, Literature Repository and CRIS Managers ABSTRACT: Exposure and visibility of content from a range of European repositories will be significantly increased when a common and interoperable approach is taken and care is given to adhering to existing guidelines. This compatibility will lead to future interoperability between research infrastructures, and structured metadata is of benefit to individual data repositories and the knowledge community at large. OpenAIRE is starting to move from a publication infrastructure to a more comprehensive infrastructure that covers all types of scientific output. To put this into practice, an integrated suite of guidelines was developed with specific requirements to support the goal of OpenAIRE and the European Commission. The poster will briefly outline the OpenAIRE Guidelines: Guidelines for Data Archive Managers, for Literature Repository Managers and for CRIS Managers. By implementing all three sets of the OpenAIRE Guidelines, repository managers will be able to enable authors who deposit publications in their repository to fulfill the EC Open Access requirements, as well as the requirements of other (national or international) funders with whom OpenAIRE cooperates. In addition, it will allow the OpenAIRE infrastructure to add value-added services such as discoverability and linking, and the creation of enhanced publications. In short, these guidelines build the stepping-stones for a linked data infrastructure for research. |
Helder Monteiro Firmino | OCS: 182 TITLE: Upgrading a Flat List of Terms into a Linked Open Data Structure. Case study: Portuguese National Authority of Communications (ANACOM) ABSTRACT: This poster reports on the results of a project that aims to analyse, organize and publish the terms being used to describe resources on ANACOM's website. As expected results, we plan to relate the resulting controlled vocabularies to other existing vocabularies, to encode them in SKOS in at least two languages under a five-star LOD perspective, and to develop a SPARQL endpoint (an illustrative SKOS sketch follows these posters). To achieve this purpose, we drew inspiration from two methodologies for building ontologies and/or controlled vocabularies: Ontology Development 101 and the Methodology for Core Vocabularies. |
Marina Morgan, M J Suhonos & Fangmin Wang | OCS: 186 TITLE: Digital Humanities and Metadata: Linking the Past to the Digital Future ABSTRACT: The purpose of this poster is to highlight cross-domain metadata uses, metadata mapping, and success measures at the Ryerson University Library and Archives. The Library is highly involved in Ryerson-based proposals for interdisciplinary projects, especially in the Digital Humanities. Designing an online environment for the preservation and analysis of illustrated texts for children and Canadiana is a collaborative effort that involves cataloguing, metadata mapping, digitization, and website design. |
Miaoling Chai & Jiang Zhu | OCS: 134 TITLE: The Research of Open Conference Resources Organization based on RDA Description ABSTRACT: As academic exchange evolves, the internet is flooded with a great quantity of conference information, proceedings and conference literature. Due to the distributed nature and uneven quality of these Open Conference Resources (OCR), it is hard to use this information sufficiently. Therefore, the Chengdu Branch of the National Science Library of the Chinese Academy of Sciences (CBNSLCAS) implemented the Acquisition and Service System of Open Conference Resources (ASOCR)[1] to gather these resources efficiently. However, in order to relate the OCR to each other and present resource content efficiently, proper resource description models are required. Under these circumstances, the paper describes the characteristics of OCR, resource organizing modes and the contents to be described. Furthermore, the paper proposes an OCR description model based on Resource Description and Access (RDA). Compared with an OCR description model based on Dublin Core (DC), the RDA description model may be better suited to describing the complex relations of OCR in the ASOCR. The program determined the entities and entity relationships. At the end of this paper, future work is discussed. |
Nalini Umashankar | OCS: 192 TITLE: Using Metadata Standards to Improve National and IMF Data ABSTRACT: This poster illustrates how metadata standardization in the International Monetary Fund (IMF) leads to an improvement in the quality of statistical information and a better understanding of data and metadata by users. It enables a more efficient and faster exchange of information at lower costs, which is made possible as a result of collaboration with member countries and other international organizations. Metadata standardization leads to greater efficiencies and lower costs in global exchange and internal production of data. Use of metadata standards enhances the accountability of countries for providing quality information about their economy and improves the understanding of data by users. The IMF experience, as outlined in this abstract, demonstrates how metadata standards have resulted in faster, cheaper and more consistent production and dissemination of data. |
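The ZBW Labs poster (OCS 169) combines the DOAP, schema.org and DC vocabularies in RDFa markup generated by Drupal. The sketch below expresses the same kind of mixed-vocabulary project description as a plain RDF graph in rdflib; the project URI, property selection and literal values are assumptions about what such markup typically carries, not ZBW's actual markup.

```python
# Sketch of a project description mixing the DOAP, schema.org and Dublin
# Core vocabularies, along the lines of OCS 169. ZBW Labs emits such
# statements as RDFa from Drupal; here the same mixture is shown as a
# plain RDF graph. All URIs and values are placeholders.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DCTERMS

DOAP = Namespace("http://usefulinc.com/ns/doap#")
SCHEMA = Namespace("http://schema.org/")

g = Graph()
g.bind("doap", DOAP)
g.bind("schema", SCHEMA)
g.bind("dcterms", DCTERMS)

project = URIRef("http://example.org/labs/some-project")
g.add((project, RDF.type, DOAP.Project))
g.add((project, DOAP.name, Literal("Example Labs Project")))
g.add((project, DOAP.homepage, URIRef("http://example.org/labs/some-project")))
g.add((project, SCHEMA.description, Literal("An experimental web service.", lang="en")))
g.add((project, DCTERMS.subject, Literal("Linked Data")))

print(g.serialize(format="turtle"))
```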
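The ANACOM poster (OCS 182) plans to encode a flat list of terms as SKOS concepts in at least two languages. A minimal sketch of one such bilingual concept, with invented URIs, labels and an exactMatch link to another (hypothetical) vocabulary, is shown below.

```python
# Minimal sketch of encoding one term from a flat list as a bilingual SKOS
# concept, in the spirit of OCS 182. The concept scheme, URIs, labels and
# the mapping target are invented placeholders.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

VOC = Namespace("http://example.org/anacom/vocab/")
OTHER = Namespace("http://example.org/other-vocab/")

g = Graph()
g.bind("skos", SKOS)

scheme = VOC.scheme
g.add((scheme, RDF.type, SKOS.ConceptScheme))

concept = VOC.spectrum
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.inScheme, scheme))
g.add((concept, SKOS.prefLabel, Literal("radio spectrum", lang="en")))
g.add((concept, SKOS.prefLabel, Literal("espectro radioelétrico", lang="pt")))
g.add((concept, SKOS.exactMatch, OTHER.spectrum))  # link to another vocabulary

print(g.serialize(format="turtle"))
```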