Full Papers (Peer Reviewed)
Project Reports (Peer Reviewed)
Paper Author | Paper Title & Abstract |
Andrew Weidner, Annie Wu & Santi Thompson | OCS: 218 TITLE: Automated Enhancement of Controlled Vocabularies: Upgrading Legacy Metadata in CONTENTdm ABSTRACT: To ensure robust, reliable, retrievable and sharable metadata, the University of Houston (UH) Libraries initiated a Metadata Upgrade Project in 2013 to systematically audit and refine the quality of the metadata in the University of Houston Digital Library (UHDL). Still in progress, the Metadata Upgrade Project has already produced significant improvements in the UHDL's legacy metadata. The final phase of the Metadata Upgrade Project includes aligning controlled vocabulary terms with appropriate authorities and adding and revising descriptive content in the digital library. This is a time-intensive process that requires careful evaluation and entry of name and subject authority terms. To improve efficiency and accuracy during the data entry process, the metadata librarian at UH Libraries developed name and subject authority applications that automatically transform legacy controlled vocabulary terms into authorized forms. This project report will provide an overview of the University of Houston's Metadata Upgrade Project, a discussion of how the UHDL's upgraded metadata improves discoverability of our collections, and an in-depth look at the custom tools that automate the authority alignment process in the CONTENTdm Project Client. |
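A minimal sketch of the kind of lookup-based authority alignment this report describes, assuming a locally maintained mapping table; the table and terms below are invented for illustration, not the UH Libraries' actual data or tool:

```python
# Illustrative sketch: align legacy controlled vocabulary terms with
# authorized forms via a normalized lookup table (invented sample data).

AUTHORIZED = {
    # normalized legacy variant -> authorized form
    "houston ship channel (tex)": "Houston Ship Channel (Tex.)",
    "world war, 1939-45": "World War, 1939-1945",
}

def normalize(term: str) -> str:
    """Collapse case and whitespace and strip trailing periods for matching."""
    return " ".join(term.lower().split()).rstrip(".")

def align(term: str) -> str:
    """Return the authorized form if a match is found, else the original term."""
    return AUTHORIZED.get(normalize(term), term)

print(align("World War, 1939-45"))  # -> "World War, 1939-1945"
```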
Stefanie Rühle, Francesca Schulze & Michael Büchner | OCS: 231 TITLE: Applying a Linked Data Compliant Model: The Usage of the Europeana Data Model by the Deutsche Digitale Bibliothek ABSTRACT: In 2013/14 the Deutsche Digitale Bibliothek (DDB) switched its data model from the CIDOC Conceptual Reference Model to the Europeana Data Model (EDM). This decision was taken against the background of two major mandates the DDB has to fulfill: on the one hand, the DDB is a portal and a platform providing access to digital objects from German cultural heritage and research institutions; on the other hand, the DDB aims to become the German aggregator for Europeana. Using EDM as the internal DDB data model was deemed the most reasonable solution to meet these challenges. The DDB uses the model for all portal functions that require semantic links between metadata (search facets, hierarchies, links between authority files and digital objects). The application of EDM for the DDB portal raised some difficulties, since not all necessary classes and properties were entirely implemented in Europeana-EDM at that time. Therefore, a DDB-EDM application profile was developed. The DDB publishes metadata under the CC0 Public Domain Dedication license in EDM-RDF/XML via an OAI-PMH interface to serve Europeana, and also via an Application Programming Interface (API) that allows external users to develop new applications on the basis of metadata harmonized by the DDB. |
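A minimal harvesting sketch for an OAI-PMH interface like the one mentioned above, using the third-party `sickle` client; the endpoint URL and the `edm` metadata prefix are placeholders, not the DDB's documented values:

```python
# Hedged sketch: harvest records from an OAI-PMH endpoint with sickle.
from sickle import Sickle

client = Sickle("https://example.org/ddb/oai")  # hypothetical endpoint URL
for record in client.ListRecords(metadataPrefix="edm"):  # prefix is assumed
    print(record.header.identifier)  # OAI identifier of the record
    xml = record.raw                 # the EDM-RDF/XML payload as a string
```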
Sharon Farnel & Ali Shiri | OCS: 236 TITLE: Metadata for Research Data: Current Practices and Trends ABSTRACT: Currently, there are a number of research data service providers that allow deposit of research data or gather metadata for research data housed elsewhere. Examples include DataCite (http://www.datacite.org/), Dataverse Network (http://thedata.org/), Dryad (http://datadryad.org/), and FigShare (http://figshare.com/). These services make use of a broad range of metadata practices and elements. The objective of this study is to examine the metadata standards and formats used by a select number of research data services to address the following specific research questions: 1) What is the number and nature of metadata elements available? 2) Do any of the services provide research data specific metadata elements in addition to common metadata elements? 3) Do the research data management services adhere to widely recognized metadata, interoperability and preservation standards? 4) What research data repositories benefit from and promote controlled vocabularies for subject description and access? 5) Is there support for unique identifiers (e.g., DOIs)? 6) What kind of metadata assistance (documentation, etc.) is provided? 7) What metadata elements are common and different across these services? The results of this study will contribute to a better understanding of the development and application of metadata in research data services as well as to the development of an interoperable research data environment. |
Jing Wan, Yubin Zhou, Gang Chen & Junkai Yi | OCS: 247 TITLE: Designing a Multi-level Metadata Standard based on Dublin Core for Museum Data ABSTRACT: Metadata is a critical aspect of describing, managing and sharing museum data. It is challenging to develop a general standard that will meet the requirements of different museums due to the large range of data types. The capability of concise description and the simplicity of use both need to be considered. In this paper, we report on a completed project to design metadata for museums in China. An extensible metadata standard based on Dublin Core is presented, which includes core metadata, extension rules and specific metadata. For the core metadata, we introduce the terms, definitions, registration rules and detailed examples of description. The principle of choosing the terms and refinements is discussed. Specific metadata for porcelain is discussed as an extension example. |
Deborah Maron, Cliff Missen & Jane Greenberg | OCS: 259 TITLE: "Lo-Fi to Hi-Fi": A new metadata approach in the Third World with the eGranary Digital Library ABSTRACT: Digital information can bridge age-old gaps in access to information in traditionally underserved areas of the world. However, for those unfamiliar with abundant e-resources, their early exposure to the digital world can be like "drinking from a fire hose." For these audiences, abundant metadata and findability, along with easy-to-use interfaces, are key to early success and adoption. To hasten the creation of metadata and user interfaces, the authors are experimenting with "crowd cataloging." This report documents their work and Maron's Lo-Fi to Hi-Fi metadata pyramid model, which guides a developing metadata initiative being pursued with the eGranary Digital Library, the technology used by WiderNet in a global effort to ameliorate information poverty. The Lo-Fi to Hi-Fi model, with principles adapted from technical design processes, aligns with research showing that community-based librarians are better poised to identify culturally congruent resources, but that many require significant training in metadata concepts and skills. The model has students crowdsource "lo-fi" terms, which domain experts and information professionals can curate and cull in "hi-fi" to enhance findability of resources within the eGranary while simultaneously honing their own computer, information and metadata literacies. Though the focus here is on Africa, the findings and practices, if successful, can be generalized to eGranaries around the globe. |
Jeff Keith Mixter, Patrick OBrien & Kenning Arlitsch | OCS: 269 TITLE: Describing Theses and Dissertations Using Schema.org ABSTRACT: This report discusses the development of an extension vocabulary for describing theses and dissertations, using Schema.org as a foundation. Instance data from the Montana State University ScholarWorks institutional repository was used to help drive and test the creation of the extension vocabulary. Once the vocabulary was developed, we used it to convert the entire ScholarWorks data sample into RDF. We then serialized a set of three RDF descriptions as RDFa and posted them online to gather statistics from Google Webmaster Tools. The study successfully demonstrated how a data model consisting primarily of Schema.org terms, supplemented with a list of granular, domain-specific terms, can be used to describe theses and dissertations in detail. |
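A hedged sketch of describing a thesis with Schema.org terms in RDF, in the spirit of this report; the `degreeGrantor` property and its namespace are hypothetical stand-ins for the paper's actual extension vocabulary, which is not reproduced here:

```python
# Illustrative sketch: a thesis description mixing schema.org terms with a
# hypothetical extension property, built with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SDO = Namespace("http://schema.org/")
EXT = Namespace("http://example.org/etd#")  # hypothetical extension namespace

g = Graph()
thesis = URIRef("http://example.org/etd/1234")  # invented identifier
g.add((thesis, RDF.type, SDO.CreativeWork))
g.add((thesis, SDO.name, Literal("Snowpack Modeling in the Northern Rockies")))
g.add((thesis, SDO.author, Literal("Jane Doe")))
g.add((thesis, EXT.degreeGrantor, Literal("Montana State University")))

print(g.serialize(format="turtle"))
```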
Posters (Peer Reviewed)
Best Practice Posters
Poster Author | Poster Title & Abstract |
Jason Thomale & William Hicks | OCS: 272 TITLE: A Library Catalog REST API Framework ABSTRACT: Many library catalogs and systems remain isolated in 2014. Although we have made significant strides over the past decade to open our metadata, many individual libraries rely heavily on the ILS vendors to implement open protocols, standards, and APIs. At the University of North Texas Libraries, we have been developing a REST API framework for exposing our catalog and ILS metadata, taking our first steps toward breaking away from this limited paternalistic model. Catalog resources that we've modeled so far include bibliographic records (modified from MARC), item-level records, branch location records, item type records, and item status records. We are also working on resources that support a shelf-list browser application, which mix user-supplied data with item and bibliographic metadata and demonstrate a real-world use for the API. But our framework is not merely an API for our particular ILS. Rather, we are developing a toolset to allow us to extract and re-model our ILS data—to use data derived from our ILS but not necessarily to adhere to ILS data models—and expose the data as RESTful, linked resources. Although our initial efforts have focused on modeling resources that do closely align with ILS entities, future development will include extended models for work- and identity-related resources and possibly extending our APIs to expose linked data (using, e.g., JSON-LD). Best practices in this area, exposing ILS metadata as RESTful resources, are hard to come by. Given the mixture of metadata practitioners and systems- and web-oriented individuals that the DCMI conferences attract, we hope that presenting a poster about the project in the Best Practices track might allow us to connect with new dialog partners. Ultimately, we believe an exchange of information about our project so far would be valuable to us and to others in the DCMI community. |
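A minimal sketch of one RESTful catalog resource of the kind this poster describes, not the UNT Libraries' actual code; Flask, the route, and the sample record are all stand-ins:

```python
# Illustrative sketch: expose a bibliographic record as a RESTful JSON
# resource. In a real system BIBS would be backed by data extracted and
# re-modeled from the ILS.
from flask import Flask, abort, jsonify

app = Flask(__name__)

BIBS = {
    "b1000001": {"id": "b1000001", "title": "Metadata Basics",
                 "items": ["/items/i2000001"]},  # link to an item resource
}

@app.route("/bibs/<bib_id>")
def get_bib(bib_id):
    record = BIBS.get(bib_id)
    if record is None:
        abort(404)          # unknown resource
    return jsonify(record)  # JSON representation of the bib resource

if __name__ == "__main__":
    app.run()
```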
Susan Matveyeva & Lizzy Anne Walker | OCS: 275 TITLE: Building the Bridge: Collaboration between Technical Services and Special Collections ABSTRACT: The poster describes the process and results of the work of a group of librarians from Technical Services and Special Collections on the development of metadata standards and practices for digitization. At Wichita State University Ablah Library, members of Technical Services and Special Collections were assigned a mass digitization project of Special Collections holdings. The departments collaborated to increase the visibility and accessibility of Special Collections and to digitally preserve their brittle rare materials. Both departments scanned collections, added metadata to the scanned images, and uploaded them to CONTENTdm. The departments faced challenges in regard to the mass digitization, such as a lack of common standards, inconsistent metadata, and limited CONTENTdm expertise. Additionally, there had not been a dedicated metadata cataloger on staff in Special Collections. Staff from both departments created a metadata group responsible for decision making in regard to metadata fields used for manuscripts and printed books. Investigation of standards and best practices, creation of data dictionaries, and mapping templates were only a few of the topics this group focused on. The group developed two metadata templates (minimal and core) for published and unpublished materials. The templates focused on access to collections, future migration, and preservation. Both departments agreed on common standards for rare books and manuscripts cataloging and used the same best practices for sharable metadata. This has been a positive learning experience for both departments. Bringing together the expertise of catalogers and the unique holdings of Special Collections has helped both departments become less isolated. The implementation of metadata and cataloging standards creates a layer of interoperability and increases the potential of users finding what they need. Additionally, the departments have a new working relationship that will hopefully continue in the future. |
Jason W. Dean & Deborah E. Kulczak | OCS: 279 TITLE: Best Practices for Complex Diacritics Handling in CONTENTdm ABSTRACT: This poster is based upon a recently completed project at the University of Arkansas Libraries that dealt with metadata and items in a plethora of languages, from English and French to Quapaw, many of which required the use of unusual diacritical marks. Such diacritics and special characters are ubiquitous not only in cultural resources associated with the humanities, but also in scientific and technical materials, and their correct rendering is often necessary for meaning. The poster will describe best practices for generating, converting, and ingesting diacritics into CONTENTdm Digital Collection Management Software (used by more than 2,000 institutions worldwide), whether for metadata in a tab-delimited file, an accompanying text or translation document, or a controlled vocabulary list. Best practices for encoding and diacritics are confusing at best as described in the CONTENTdm support documentation, and this poster aims to fill this knowledge gap. Specific software to be discussed includes Excel, OpenOffice Calc, and Notepad++. |
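One concrete diacritics-safe step, sketched under the assumption that the metadata arrives as a UTF-8 tab-delimited file: normalize it to composed (NFC) form before ingest so that base letters and combining marks render consistently downstream. The file names are illustrative:

```python
# Illustrative sketch: rewrite a tab-delimited metadata file in NFC-normalized
# UTF-8. NFC composes a base letter with its combining mark (e.g. "e" plus a
# combining acute accent becomes a single "é" code point), which avoids
# display problems in many tools.
import unicodedata

with open("metadata.txt", encoding="utf-8") as src, \
     open("metadata_nfc.txt", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(unicodedata.normalize("NFC", line))
```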
Carolyn Hansen & Sean Crowe | OCS: 281 TITLE: Making Vendor-Generated Metadata Work for Archival Collections Using VRA and Python ABSTRACT: The purpose of this poster is to illustrate a successful workflow for improving vendor-generated metadata for a large digital collection of archival materials by converting the metadata from the Dublin Core standard to the VRA standard using the scripting language Python. |
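The poster above names Python as its scripting language; a simplified sketch of the kind of Dublin Core-to-VRA crosswalk it describes follows. The element mapping and sample record are illustrative assumptions, not the project's actual mapping:

```python
# Illustrative sketch: map Dublin Core fields to VRA elements and emit XML.
import xml.etree.ElementTree as ET

# Invented, partial crosswalk for illustration only.
DC_TO_VRA = {"title": "title", "creator": "agent", "date": "date"}

def dc_record_to_vra(dc: dict) -> ET.Element:
    work = ET.Element("work")
    for dc_field, vra_field in DC_TO_VRA.items():
        if dc_field in dc:
            ET.SubElement(work, vra_field).text = dc[dc_field]
    return work

record = {"title": "Campus protest, 1970", "creator": "Unknown photographer"}
print(ET.tostring(dc_record_to_vra(record), encoding="unicode"))
```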
Julie Fukuyama & Akiko Hashizume | OCS: 282 TITLE: The NDL Great East Japan Earthquake Archive: Features of Metadata Schema ABSTRACT: The National Diet Library (NDL), Japan, in conjunction with numerous other organizations, has developed the Great East Japan Earthquake Archive Project for the collection, preservation, and provision of information related to the earthquake that struck Japan on March 11, 2011. A portal site for this project was developed by the NDL and opened to the public in March 2013. The portal site enables integrated searches of many resources on the earthquake and subsequent disasters, including images, video, websites, reports, and books produced by institutions such as mass media companies, universities, and academic societies. The poster presents the Great East Japan Earthquake Archive Metadata Schema (NDLKN) developed for this portal. This schema is based on the National Diet Library Dublin Core Metadata Description (DC-NDL), which is our own metadata schema, based on the DCMES and DCMI Metadata Terms. There were two major issues to solve in the development of NDLKN. The first was coordination of metadata in various systems over multiple domains. The second was to satisfy requirements for archiving disaster records, which need to have geographic and temporal information. As a solution to the latter issue, for example, some terms were adopted from the Basic Geo (WGS84 lat/long) Vocabulary ([geo:lat] for the latitude, [geo:long] for the longitude, etc.) and the Ontology for vCard ([v:region] for the prefecture, [v:locality] for the city, etc.) for geospatial information, and [dcterms:created] for the date the image or video was recorded. Furthermore, we described the name and URI of disasters in [dcterms:coverage]. |
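A hedged sketch of the geospatial description pattern outlined above, combining the Basic Geo, vCard, and DCMI vocabularies; the resource URI and values are invented for illustration:

```python
# Illustrative sketch: describe an archived photograph with geographic and
# temporal terms like those NDLKN adopts.
from rdflib import Graph, Literal, Namespace, URIRef

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
V = Namespace("http://www.w3.org/2006/vcard/ns#")
DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
photo = URIRef("http://example.org/archive/photo/42")  # invented resource
g.add((photo, GEO.lat, Literal("38.2970")))       # latitude
g.add((photo, GEO.long, Literal("141.0211")))     # longitude
g.add((photo, V.region, Literal("Miyagi")))       # prefecture
g.add((photo, DCTERMS.created, Literal("2011-03-12")))  # date recorded

print(g.serialize(format="turtle"))
```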
Ashleigh N. Faith, Eugene Tseytlin & Tanja Bekhuis | OCS: 285 TITLE: Development of the EDDA Study Design Terminology to Enhance Retrieval of Clinical and Bibliographic Records in Dispersed Repositories ABSTRACT: Medical terminology varies across disciplines and reflects linguistic differences in communities of clinicians, researchers, and indexers. Inconsistency of terms for the same concepts impedes interoperable metadata and retrieval of information artifacts, such as records of clinical reports and scientific articles that reside in various repositories. To facilitate information retrieval and, more recently, data sharing, the medical community maintains an assortment of terminologies, thesauri, and ontologies. Valuable resources include the US National Library of Medicine Medical Subject Headings (MeSH), the Elsevier Life Science thesaurus (Emtree), and the National Cancer Institute Thesaurus (NCIt). It is increasingly important to identify medical investigations by their design features, as these have implications for evidence regarding research questions. Recently, Bekhuis et al. (2013) found that coverage of study designs was poor in MeSH and Emtree. Based on this work, the EDDA Group at the University of Pittsburgh is developing a terminology of designs. In addition to randomized controlled trials, it covers observational or uncontrolled designs. Among the resources analyzed thus far, inconsistent entry points, semantic labels, synonyms, and definitions are common. The EDDA Study Design Terminology is freely available in the NCBO BioPortal (http://purl.bioontology.org/ontology/EDDA). The current version has 169 classes. Some of the preferred terms have several variants, definitions (sometimes competing) labeled for source (MeSH, Emtree, NCIt) and year, as well as IDs such as concept identifiers useful for other researchers. The beta version was developed using the Protégé ontology editor v.4.3 (http://protege.stanford.edu) and distributed as an OWL file. DCMI protocols are in place for recording term metadata and OWL annotations. Further development entails adding definitions from other sources, mapping relationships among terms, and integrating terms from existing vocabularies, particularly the Information Artifact Ontology. A primary goal is to improve identification and retrieval of electronic records describing studies in dispersed data warehouses or electronic repositories. |
Emily Porter | OCS: 288 TITLE: Normalizing Decentralized Metadata Practices Using Business Process Improvement Methodology: a Data-Informed Approach to Identifying Institutional Core Metadata ABSTRACT: The Emory University Libraries and Emory Center for Digital Scholarship have developed numerous digital collections over the past decade. Accompanying metadata originates via multiple business units, authoring tools and schemas, and is delivered to varied destination platforms. Seeking a more uniform metadata strategy, the Libraries' Metadata Working Group initiated a project in 2014 to define a set of core, discovery-focused, schema-agnostic metadata elements supporting local content types. Quantitative and qualitative techniques commonly used in the field of Business Process Improvement were utilized to mitigate complex organizational factors. A key research deliverable emerged from benchmarking: a structured comparison of over 30 element sets, recording for each standard its descriptive element names, their requiredness, and general semantic concepts. Additional structured data collection methodologies included a diagnostic task activity, in which participants with varying metadata expertise created (simple) Dublin Core records for selected digital content. A survey of stakeholders provided greater context for local practices. Multiple public-facing discovery system interfaces were inventoried to log search, browse, filter, and sort options, and available web analytics were reviewed for user activity patterns correlating to these options. Thematic analysis was performed on all benchmarking, system profile, and web analytics data to map the results to a common set of conceptual themes, facilitating quantification and analysis. A weighted scoring model enabled the ranking of element themes: the highest-scoring concepts were then explicated as an initial set of core elements, mapped to relevant standards and schemas. |
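A toy illustration of a weighted scoring model of the sort described above; the themes, evidence sources, weights, and counts are all invented for illustration:

```python
# Illustrative sketch: rank conceptual themes by a weighted score over
# several evidence sources (invented weights and counts).
WEIGHTS = {"benchmarking": 0.5, "system_profiles": 0.3, "web_analytics": 0.2}

# How often each theme appeared in each evidence source.
THEME_COUNTS = {
    "title":   {"benchmarking": 30, "system_profiles": 8, "web_analytics": 120},
    "creator": {"benchmarking": 28, "system_profiles": 6, "web_analytics": 40},
    "rights":  {"benchmarking": 12, "system_profiles": 2, "web_analytics": 5},
}

def score(counts: dict) -> float:
    return sum(WEIGHTS[source] * n for source, n in counts.items())

for theme, counts in sorted(THEME_COUNTS.items(),
                            key=lambda kv: score(kv[1]), reverse=True):
    print(f"{theme}: {score(counts):.1f}")
```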
Sean Petiya | OCS: 290 TITLE: Converting Personal Comic Book Collections to Linked Data ABSTRACT: The comic book domain has received a great deal of attention in recent years as superhero movies dominate popular culture, and both graphic novels and manga continue to find their way to library shelves and special collections. This poster describes progress on the Comic Book Ontology (CBO), a metadata vocabulary in development for the description of comic books and comic book collections. It presents a diagram of the model and outlines the methodology and rationale for producing a core application profile composed of a subset of elements from the vocabulary, which represent the minimal commitment necessary to make a unique statement about a resource. Additionally, it illustrates how that core application profile is used to generate RDF/XML records from user data contained in spreadsheets, a popular method of cataloging personal comic book collections. |
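A hedged sketch of generating RDF from spreadsheet rows with a CBO-like vocabulary; the namespace URI, property names, and column headings are placeholders, not the actual Comic Book Ontology terms:

```python
# Illustrative sketch: turn rows of a personal collection spreadsheet into
# RDF/XML using placeholder CBO-style terms.
import csv
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

CBO = Namespace("http://example.org/cbo#")  # placeholder namespace

g = Graph()
with open("collection.csv", newline="", encoding="utf-8") as f:
    for i, row in enumerate(csv.DictReader(f)):
        issue = URIRef(f"http://example.org/mycomics/{i}")
        g.add((issue, RDF.type, CBO.Issue))                    # placeholder class
        g.add((issue, CBO.seriesTitle, Literal(row["series"])))
        g.add((issue, CBO.issueNumber, Literal(row["issue"])))

g.serialize("collection.rdf", format="xml")  # RDF/XML output
```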
Robert H. Estep | OCS: 292 TITLE: How To Build A Local Thesaurus ABSTRACT: A step-by-step approach to building a thesaurus of subject terms, both LC and local, for a specific digitization project. The thesaurus was the responsibility of the Cataloging group, which provided enhanced metadata for a large and ongoing collection of images in the form of individual subject terms and detailed descriptions. |
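The poster does not specify a serialization, but one common way to record local thesaurus terms so they remain shareable is as SKOS concepts; a minimal sketch with invented terms:

```python
# Illustrative sketch: a local thesaurus entry expressed as a SKOS concept.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

LOCAL = Namespace("http://example.org/thesaurus/")  # hypothetical base URI

g = Graph()
concept = LOCAL["campus-buildings"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Campus buildings", lang="en")))
g.add((concept, SKOS.altLabel, Literal("University buildings", lang="en")))

print(g.serialize(format="turtle"))
```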
Constanze Curdt & Dirk Hoffmeister | OCS: 294 TITLE: The TR32DB Metadata Schema: A multi-level Metadata Schema for an Interdisciplinary Project Database ABSTRACT: This poster presents the self-developed, multi-level TR32DB Metadata Schema. It was designed and implemented to describe, with accurate, interoperable metadata, all of the heterogeneous data created by participants in an interdisciplinary research project. The schema maintains interoperability with current metadata standards and schemas. It is applied in the CRC/TR32 project database (TR32DB, www.tr32db.de), a research data management system, to improve the documentation, searchability and re-use of the data. The TR32DB is established for a multidisciplinary, long-term research project, the Collaborative Research Centre/Transregio 32 ‘Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation’ (CRC/TR32, www.tr32.de), funded by the German Research Foundation. |
Michael Dulock | OCS: 295 TITLE: Reusing Legacy Metadata for Digital Projects: the Colorado Coal Project Collection ABSTRACT: Libraries and other cultural institutions are increasingly focused on efforts to unearth hidden and unique collections. Yet the metadata describing these collections, where it exists, may not be in an immediately usable format. In some cases the metadata records may be as exceptional as the materials themselves. In this poster I will discuss my research into how libraries can repurpose metadata in archaic formats, using the Colorado Coal Project Collection slides as a case study. The Colorado Coal Project Collection documents the history of coal mining in the western United States, primarily focusing on the early 20th century. The collection comprises ninety video and audio files of interviews with coal miners, community members, and historians, transcripts for most of the interviews, and over four thousand slides depicting life around and in the mine. The collection touches on themes ranging from mine camp life to immigration to labor conditions and strikes. The slides are accompanied by over four thousand McBee edge-notched cards, a manual computing format that saw occasional use for medical, legal, and library records in the mid-20th century. These cards contain written notes as well as punches around the edge which indicate various features of the slides such as buildings, locations, dates, subject matter, and technical details. Transferring this rich metadata from thousands of cards into a format with which the digital initiatives team could work, and eventually import into a digital library collection, was a challenge. The poster will examine the process of transferring the robust metadata recorded on these arcane cards to a 21st-century digital library collection, utilizing a combination of student labor, Metadata Services staff, MS Excel, and careful quality control. |
Virginia A. Dressler | OCS: 296 TITLE: Applying Concepts of Linked Data to Local Digital Collections to Enhance Access and Searchability ABSTRACT: Kent State University Library is currently preparing to migrate its online exhibits and digital collections to a different content management system. The plan entails migrating existing digital collections to another platform and, in doing so, providing a more inclusive search mechanism to enhance access. In order to prepare for this migration, we are mapping the existing digital collections into a new metadata schema for the proposed solution, moving from a locally created and hosted framework into a more sustainable platform with a consolidated, searchable base for all digital objects and corresponding metadata. This work includes transferring the current tailored, in-house method of operation and transposing the MySQL data to an RDF structure for the new solution. Principles of Linked Data will also be applied to the accompanying metadata files to further increase connections within the digital collections. The biggest change resulting from this shift from a homegrown solution to an extensible, open-access platform is the capability to search across multiple collections. Cross-collection searching is not possible in the current interface, and there are related materials among several existing digital collections that would benefit from this change. The poster will address the shift in approach to this new framework and highlight the benefits of the switch. |
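A hedged sketch of the MySQL-to-RDF transposition described above; the table, columns, and connection details are invented, and only the mapping pattern is the point:

```python
# Illustrative sketch: read relational rows and emit RDF triples.
import mysql.connector  # third-party driver; any DB-API driver would do
from rdflib import Graph, Literal, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")

# Connection parameters are placeholders.
conn = mysql.connector.connect(user="reader", database="digital_collections")
cur = conn.cursor()
cur.execute("SELECT id, title, created FROM objects")  # invented schema

g = Graph()
for obj_id, title, created in cur:
    subject = URIRef(f"http://example.org/objects/{obj_id}")
    g.add((subject, DCTERMS.title, Literal(title)))
    g.add((subject, DCTERMS.created, Literal(created)))

print(g.serialize(format="turtle"))
```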
Michael Lauruhn & Elshaimaa Ali | OCS: 298 TITLE: Wikipedia-based extraction of lightweight ontologies for concept level annotation ABSTRACT: This poster describes a project under development in which we propose a framework for automating the development of lightweight ontologies for semantic annotations. When building ontologies for annotations in any domain, we follow the process of ontology learning in Stelios 2006, but since we are aiming for lightweight ontologies, we consider only a subset of these tasks: the acquisition of domain terminologies, generating concept hierarchies, learning relations and properties, and ontology evaluation. In developing the framework modules, we rely for most of our knowledge base on the structure of Wikipedia, namely the category and link structure of Wikipedia pages, in addition to specific sections of the content. To ensure machine understandability and interoperability, ontologies have to be explicit to make an annotation publicly accessible, formal to make an annotation publicly agreeable, and unambiguous to make an annotation publicly identifiable. An important aspect of building the domain ontology is to define an annotation schema that allows the developed ontologies to be reused and to become part of linked data. We designed our schema based on annotation elements already defined in the Dublin Core standards, and we also used the DBpedia schema to define annotation elements for named entities. We developed additional annotation elements that define domain concepts and context, as well as relations between concepts. These annotation elements are based on the link structure of Wikipedia, the concepts' definitions in their Wikipedia pages, and the category structure of the concepts. The framework modules include: domain concept extraction, semantic relatedness measures, concept clustering, and Wikipedia-based relation extraction. |
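A hedged sketch of one such framework step, retrieving a page's categories from the public MediaWiki API as raw material for a lightweight ontology; the page title is an example, and this is not the project's actual pipeline:

```python
# Illustrative sketch: fetch the categories of a Wikipedia page via the
# MediaWiki query API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def categories(title: str) -> list[str]:
    params = {"action": "query", "prop": "categories", "titles": title,
              "cllimit": "max", "format": "json"}
    data = requests.get(API, params=params).json()
    # The response keys pages by page ID; take the first (only) page.
    page = next(iter(data["query"]["pages"].values()))
    return [c["title"] for c in page.get("categories", [])]

print(categories("Dublin Core"))
```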
Timothy Cole, Michael Norman, Patricia Lampron, William Weathers, Ayla Stein, M. Janina Sarol & Myung-Ja Han | OCS: 299 TITLE: MARC to schema.org: Providing Better Access to UIUC Library Holdings Data ABSTRACT: The University of Illinois at Urbana-Champaign (UIUC) Library has shared 5.5 million bibliographic catalog records. As released, these include detailed information about physical holdings at Illinois, allowing consumers to know exactly which volumes or parts of the creative work described are available at UIUC. UIUC catalog records are (or soon will be) available as MARCXML, as MODS enriched with links to name and subject authorities, and as RDF (using schema.org semantics). This poster reports on the development of workflows for this project, on the multiple views of the catalog being made available, and on the lessons learned to date. |
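A hedged sketch of the MARC-to-schema.org direction of such a workflow, using the third-party `pymarc` library; the field choices are a drastic simplification of any real mapping, and the file name is a placeholder:

```python
# Illustrative sketch: read MARC records and emit minimal schema.org JSON-LD
# style dictionaries (title from 245$a, author from 100$a).
from pymarc import MARCReader

with open("records.mrc", "rb") as f:
    for record in MARCReader(f):
        doc = {"@context": "http://schema.org", "@type": "Book"}
        if record["245"] is not None:
            doc["name"] = record["245"]["a"]
        if record["100"] is not None:
            doc["author"] = record["100"]["a"]
        print(doc)
```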
Ann Ellis | OCS: 280 TITLE: Designing an Archaeology Database: Mapping Field Notes to Archival Metadata ABSTRACT: The Stephen F. Austin State University Center for Digital Scholarship and Center for Regional Heritage Research engaged in a collaborative project to design and implement a database collection in a digital archive that would accommodate images, data and text related to archaeological artifacts located in East Texas. There were challenges in creating metadata profiles that could effectively manage, retrieve and display the disparate data in multiple discovery platforms. The poster illustrates the steps that were taken to map field notes into useful archival metadata. Using original notes and field record information, a preliminary data dictionary was created. After collaborative edits and revisions were made, a comprehensive data dictionary was designed to represent the materials in the collection. From this, a profile was configured in the digital archive platform to allow for upload of the metadata and images, and for discovery and display of the archaeological artifacts and related works. |
Lisa Federer | OCS: 304 TITLE: Utilizing Drupal for the Implementation of a Dublin Core-Based Data Catalog ABSTRACT: As funders and publishers increasingly require data sharing, researchers will need simple, intuitive methods for describing their data. Open-source systems like Drupal and extensible metadata schema like Dublin Core will likely play a large role in data description, thus making data more discoverable and facilitating data re-use. The objective of this project is to create a data catalog suitable for use in the context of biomedical and health sciences research within the National Institutes of Health (NIH) Library. The NIH Library serves the community of NIH intramural researchers, which includes over 1,200 principal investigators and 4,000 postdoctoral fellows conducting basic, translational, and clinical research on its primary campus in Bethesda, MD, and several satellite campuses. The ideal catalog would allow researchers to easily describe their data using Dublin Core Metadata Terms and subject-appropriate controlled vocabularies, as well as provide search and browse capabilities for end users to enable data discovery and facilitate re-use. A pilot system is currently undergoing testing with researchers within the NIH intramural community. Drupal, a free and open-source content management system, was used as a framework for a data catalog built on the Dublin Core Metadata Terms. Using the Structure function within Drupal, the research data informationist at the NIH Library constructed a pilot system that utilized the Dublin Core Metadata schema and relevant biomedical taxonomies. Results will be available by the time of the DCMI 2014 conference. A data catalog that utilizes an extensible metadata schema like Dublin Core and an open-source framework like Drupal provides users a powerful yet uncomplicated method for describing their data. This pilot system can be adapted to the needs of a variety of basic, translational, and clinical research applications. |
Joelen Pastva & Valerie Harris | OCS: 308 TITLE: PunkCore: Developing an Application Profile for the Culture of Punk ABSTRACT: PunkCore is a Dublin Core Application Profile (DCAP) for the description of the culture of Punk, including its music, its places, its fashions, its artistic expression through film and art, and its artifacts such as fliers, patches, buttons, and other ephemera. The structure of PunkCore is designed to be simple enough for non-experts yet specific enough to meet the needs of information professionals and to capture the unique qualities of materials classified as Punk. In the interest of interoperability and adoptability, PunkCore is drawn from existing metadata schema, and the development of PunkCore is intended to be open and collaborative to appeal to the entire Punk community. Our poster illustrates the initial development of the PunkCore standard and outlines future plans to bring PunkCore to the community. The PunkCore DCAP is in its first phase of development, which follows Singapore Framework stages 1 and 2, including the creation of a functional requirements document and domain model. In order to capture the specificity of Punk culture, a preliminary genre vocabulary has also been developed. The functional requirements document, domain model, and genre vocabulary will be published on a wiki for community discussion and feedback. The remaining phases of development, including the creation of a description set profile and usage guidelines, will be initiated following our review of community interest and comments. The ultimate goal of this DCAP is to reach the Punk community and achieve broad adoption. The outcome of our work would aid in the effective acquisition and dissemination of Punk materials, or their metadata, in a variety of settings. Our project will also be useful to other niche communities documenting their cultural contributions because it provides a model that incorporates community outreach with traditional metadata development to lend more credibility and visibility to the end result. |
Serhiy Polyakov & Oksana L. Zavalina | OCS: 309 TITLE: Approaches to Teaching Metadata Course at the University of North Texas ABSTRACT: This best practices poster discusses approaches to teaching the Metadata and Networked Information Organization and Retrieval course in the Department of Library and Information Sciences, University of North Texas. The poster describes how this course was developed and has evolved, the teaching methods, topics covered, student activities, and technology used. We share our experiences of using real-life projects, which facilitate the development of students' practical skills. The approach to teaching the Metadata course at UNT combines theoretical preparation, teamwork, and extensive practical experience, all of which are important assets on the job market. |
Best Practice Demonstrations