Biological and Chemical Oceanography Data Management Office (BCO-DMO)
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) staff members work with investigators to serve data online from research projects funded by the Biological and Chemical Oceanography Sections and the Division of Polar Programs Arctic Sciences and Antarctic Organisms & Ecosystems programs at the U.S. National Science Foundation.
BCO-DMO is a combination of the formerly independent Data Management Offices formed in support of the US JGOFS and US GLOBEC programs. The BCO-DMO staff members are the curators of the data collections created by those respective programs, as well as data from more recent NSF Geosciences Directorate (GEO) Division of Ocean Sciences (OCE) Biological and Chemical Oceanography Sections, Division of Polar Programs (PLR) Antarctic Sciences (ANT) Organisms & Ecosystems, and Arctic Sciences (ARC) awards. The BCO-DMO project is funded by the NSF OCE and ANT programs under NSF award number OCE-1435578.
Data sets managed by BCO-DMO and hosted in WHOAS can be found here.
Browsing Biological and Chemical Oceanography Data Management Office (BCO-DMO) by Issue Date
-
Moving Image
Collaborative research: EarthCube building blocks, leveraging semantics and linked data for geoscience data sharing and discovery, OceanLink (2013-10-28)
Wiebe, Peter H.; Chandler, Cynthia L.; Raymond, Lisa; Shepherd, Adam; Finin, Tim; Narock, Tom; Arko, Robert A.; Carbotte, Suzanne M.; Hitzler, Pascal; Cheatham, Michelle; Krisnadhi, Adila
The OceanLink EarthCube project will apply state-of-the-art Semantic Web technologies to support data representation, discovery, analysis, sharing, and integration of datasets from the global oceans, and related resources including meeting abstracts and library holdings. Ships are a principal platform from which a wide spectrum of oceanographic data are collected. At the University of Maryland, Baltimore County, semantic relationships will be extracted from text for use in developing methods that efficiently identify relationships across distributed oceanographic datasets. At Wright State University, integration of disparate data will occur by refining and applying leading-edge technology from the Semantic Web, ontologies, and linked data. From the MBLWHOI Library, DSpace content will be published as Linked Open Data, providing relationships between oceanographic datasets, publications, conference presentations, and funded National Science Foundation projects. Teams of researchers at the Lamont-Doherty Earth Observatory and the Woods Hole Oceanographic Institution will develop use cases that represent the needs of the oceanographic research community and will publish oceanographic dataset catalogs as Linked Open Data. A key contribution will be semantically enabled cyberinfrastructure components capable of automated data integration across distributed repositories. These efforts will ultimately lead to generalized computational techniques applicable to all of EarthCube.
-
Other
End-User Workshop Report: Articulating the Cyberinfrastructure Needs of the Ocean Ecosystem Dynamics Community (2013-12-10)
Kinkade, Danie; Chandler, Cynthia L.; Glover, David M.; Groman, Robert C.; Kline, David; Nahorniak, Jasmine; O'Brien, Todd D.; Perry, Mary J.; Pierson, James J.; Wiebe, Peter
An EarthCube Water Column Domain End-User Workshop hosted by the Biological and Chemical Oceanography Data Management Office (BCO-DMO) was held October 7-8, 2013, at Woods Hole Oceanographic Institution. The goal of the workshop was to articulate the cyberinfrastructure needs of the ocean ecosystem dynamics community, with particular focus on the challenges presented by multi-disciplinary marine ecosystem research that requires investigations in four dimensions. The workshop brought together 50 participants in the domain of oceanic ecosystem dynamics (established and early-career researchers, teaching faculty, graduate students, postdocs, data and information managers, and cyber-related researchers) to explore and document the community's cyberinfrastructure needs from the users' viewpoint.
-
Article
Bringing dark data into the light: a case study of the recovery of Northwestern Atlantic zooplankton data collected in the 1970s and 1980s (Elsevier, 2015-04-06)
Wiebe, Peter H.; Allison, M. Dickson
Data generated as a result of publicly funded research in the USA and other countries are now required to be available in public data repositories. However, many scientific data over the past 50+ years were collected at a time when the technology for curation, storage, and dissemination was primitive or non-existent, and consequently many of these datasets are not publicly available. These so-called "dark data" sets are essential to understanding how the ocean has changed chemically and biologically in response to the documented shifts in temperature and salinity (i.e., climate change). An effort is underway to bring to light dark data about zooplankton collected in the 1970s and 1980s as part of the cold-core and warm-core rings multidisciplinary programs and other related projects. Zooplankton biomass and euphausiid species abundance from 306 tows, and related environmental data, including many depth-specific tows taken on 34 research cruises in the Northwest Atlantic, are online and accessible from the Biological and Chemical Oceanography Data Management Office (BCO-DMO).
-
Article
Data management strategy to improve global use of ocean acidification data and information (The Oceanography Society, 2015-06)
Garcia, Hernan E.; Cosca, Catherine E.; Kozyr, Alex; Mayorga, Emilio; Chandler, Cynthia L.; Thomas, Robert W.; O’Brien, Kevin; Appeltans, Ward; Hankin, Steve; Newton, Jan A.; Gutierrez, Angelica; Gattuso, Jean-Pierre; Hansson, Lina; Zweng, Melissa; Pfeil, Benjamin
Ocean acidification (OA) refers to the general decrease in pH of the global ocean as a result of absorbing anthropogenic CO2 emitted into the atmosphere since preindustrial times (Sabine et al., 2004). There is, however, considerable variability in ocean acidification, and many careful measurements need to be made and compared in order to obtain scientifically valid information for the assessment of patterns, trends, and impacts over a range of spatial and temporal scales, and to understand the processes involved. A single country or institution cannot undertake measurements of worldwide coastal and open ocean OA changes; therefore, international cooperation is needed to achieve that goal. The OA data that have been, and are being, collected represent a significant public investment. To this end, it is critically important that researchers (and others) around the world are easily able to find and use reliable OA information that ranges from observing data (from time-series moorings, process studies, and research cruises) to biological response experiments (e.g., mesocosms), data products, and model output.
-
Preprint
Experiences of a “semantics smackdown” (2016-02)
Leadbetter, Adam; Shepherd, Adam; Arko, Robert A.; Chandler, Cynthia L.; Chen, Yanning; Dockery, Nkemdirim; Ferreira, Renata; Fu, Linyun; Thomas, Robert; West, Patrick; Zednik, Stephan
Within the field of ocean science there is a long history of using controlled vocabularies and other Semantic Web techniques to provide a common and easily exchanged description of datasets. As an activity within the European Union-, United States-, and Australia-funded project “Ocean Data Interoperability Platform”, a workshop took place in June 2014 at Rensselaer Polytechnic Institute to further the use of these Semantic Web techniques, with the aim of producing a set of Linked Data publication patterns that describe many parts of a marine science dataset. During the workshop, a Semantic Web development methodology was followed that promoted the use of a team with mixed skills (computer, data, and marine science experts) to rapidly prototype a Linked Data publication pattern that could be iterated on in the future. In this paper we outline the methodology employed in the workshop and examine both the technical and sociological outcomes of a workshop of this kind.
-
Article
Toward a new data standard for combined marine biological and environmental datasets: expanding OBIS beyond species occurrences (Pensoft, 2017-01-09)
De Pooter, Daphnis; Appeltans, Ward; Bailly, Nicolas; Bristol, Sky; Deneudt, Klaas; Eliezer, Menashè; Fujioka, Ei; Giorgetti, Alessandra; Goldstein, Philip; Lewis, Mirtha; Lipizer, Marina; Mackay, Kevin; Marin, Maria; Moncoiffe, Gwenaelle; Nikolopoulou, Stamatina; Provoost, Pieter; Rauch, Shannon; Roubicek, Andres; Torres, Carlos; van de Putte, Anton; Vandepitte, Leen; Vanhoorne, Bart; Vinci, Matteo; Wambiji, Nina; Watts, David; Salas, Eduardo Klein; Hernandez, Francisco
The Ocean Biogeographic Information System (OBIS) is the world's most comprehensive online, open-access database of marine species distributions. OBIS grows by millions of new species observations every year. Contributions come from a network of hundreds of institutions, projects, and individuals with common goals: to build a scientific knowledge base that is open to the public for scientific discovery and exploration, and to detect trends and changes that inform society as essential elements in conservation management and sustainable development. Until now, OBIS has focused solely on the collection of biogeographic data (the presence of marine species in space and time) and operated with optimized data flows, quality control procedures, and data standards specifically targeted to these data. Based on requirements from the growing OBIS community to manage datasets that combine biological, physical, and chemical measurements, the OBIS-ENV-DATA pilot project was launched to develop a proposed standard and guidelines to ensure that these combined datasets stay together and are not, as is often the case, split and sent to different repositories.
The proposal in this paper allows for the management of sampling methodology, animal tracking and telemetry data, biological measurements (e.g., body length, percent live cover, ...), and environmental measurements such as nutrient concentrations, sediment characteristics, or other abiotic parameters measured during sampling to characterize the environment from which biogeographic data were collected. The recommended practice builds on the Darwin Core Archive (DwC-A) standard and on practices adopted by the Global Biodiversity Information Facility (GBIF). It consists of a DwC Event Core in combination with a DwC Occurrence Extension and a proposed enhancement to the DwC MeasurementOrFact Extension. This new structure enables the linkage of measurements or facts (quantitative and qualitative properties) to both sampling events and species occurrences, and includes additional fields for property standardization. We also embrace the use of the new parentEventID DwC term, which enables the creation of a sampling-event hierarchy. We believe that the adoption of this recommended practice as a new data standard for managing and sharing biological and associated environmental datasets by IODE and the wider international scientific community would be key to improving the effectiveness of the knowledge base, and would enhance the integration and management of critical data needed to understand ecological and biological processes in the ocean, and on land.
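The core-and-extensions structure described in the abstract above can be illustrated with a small sketch. All identifiers, species, and values below are invented for illustration, and the real standard defines many more Darwin Core terms:

```python
# Minimal sketch of the OBIS-ENV-DATA shape: a DwC Event Core, an
# Occurrence Extension, and MeasurementOrFact-style rows that can attach
# to either a sampling event or a species occurrence.

# Event Core: parentEventID builds the sampling-event hierarchy
# (here, a cruise containing one net tow).
events = [
    {"eventID": "cruise_1", "parentEventID": None, "eventDate": "1999-07-01"},
    {"eventID": "tow_1", "parentEventID": "cruise_1", "samplingProtocol": "bongo net"},
]

# Occurrence Extension: species observations linked to an event.
occurrences = [
    {"occurrenceID": "occ_1", "eventID": "tow_1",
     "scientificName": "Calanus finmarchicus"},
]

# Extended MeasurementOrFact: a row references an event (environmental
# measurement) or additionally an occurrence (biological measurement).
measurements = [
    {"measurementID": "m_1", "eventID": "tow_1", "occurrenceID": None,
     "measurementType": "temperature", "measurementValue": 12.4,
     "measurementUnit": "degrees C"},
    {"measurementID": "m_2", "eventID": "tow_1", "occurrenceID": "occ_1",
     "measurementType": "body length", "measurementValue": 2.1,
     "measurementUnit": "mm"},
]

def measurements_for_occurrence(occ_id):
    """Collect the measurements attached to one species occurrence."""
    return [m for m in measurements if m["occurrenceID"] == occ_id]

print([m["measurementType"] for m in measurements_for_occurrence("occ_1")])
# → ['body length']
```

The point of the linkage is that the environmental context (temperature at the tow) and the biological measurement (body length of the occurrence) travel in one archive instead of being split across repositories.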
-
Presentation
The advantages of machine-aided co-reference resolution for research cruise metadata (2017-05-31)
Shepherd, Adam; Chandler, Cynthia L.; Arko, Robert A.; Fils, Douglas; Kinkade, Danie
One of the central incentives for deploying linked open data is the opportunity to leverage the linkages between source datasets to retrieve related information. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) reaps these benefits by linking its cruise-level metadata to the Rolling Deck to Repository (R2R), the trusted, authoritative source for cruises undertaken by the U.S. academic research fleet. Even though the process of identifying a link between these two repositories is easy for a human, this talk explores the advantages of using a machine-aided process to suggest links to R2R cruises to a BCO-DMO data manager.
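The machine-aided link suggestion described above can be sketched with simple string similarity. The cruise names and threshold below are invented, and this is not BCO-DMO's actual method; a production system would compare richer metadata (dates, ports, vessels) rather than names alone:

```python
from difflib import SequenceMatcher

# Hypothetical cruise names as they might appear in the two repositories.
bcodmo_cruise = "R/V Atlantis cruise AT26-13"
r2r_cruises = ["AT26-13", "AT26-14", "KN210-04"]

def suggest_links(local_name, candidates, threshold=0.3):
    """Rank candidate R2R cruise IDs by fuzzy similarity to a local cruise
    name, leaving the final accept/reject decision to a data manager."""
    scored = [(SequenceMatcher(None, local_name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    return [c for score, c in sorted(scored, reverse=True) if score >= threshold]

print(suggest_links(bcodmo_cruise, r2r_cruises))
```

The machine does the tedious candidate ranking; the human curator keeps the final say, which is the division of labor the talk advocates.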
-
Article
Using peer review to support development of community resources for research data management (University of Massachusetts Medical School, 2017-09-08)
Soyka, Heather; Budden, Amber; Hutchison, Viv; Bloom, David; Duckles, Jonah; Hodge, Amy; Mayernik, Matthew; Poisot, Timothée; Rauch, Shannon; Steinhart, Gail; Wasser, Leah; Whitmire, Amanda; Wright, Stephanie
To ensure that resources designed to teach skills and best practices for scientific research data sharing and management are useful, the maintainers of those materials need to evaluate and update them to ensure their accuracy, currency, and quality. This paper advances the use and process of outside peer review for community resources in addressing ongoing accuracy, quality, and currency issues. It further describes the next step of moving the updated materials to an online collaborative community platform for future iterative review, in order to build upon mechanisms for open science, ongoing iteration, participation, and transparent community engagement.
-
Presentation
In search of Frictionless Data (Biological and Chemical Oceanography Data Management Office, 2017-09-21)
Shepherd, Adam
-
Presentation
The Frictionless Data Package: data containerization for automated scientific workflows [poster] (2017-12-13)
Shepherd, Adam; Fils, Douglas; Kinkade, Danie; Saito, Mak A.
As cross-disciplinary geoscience research increasingly relies on machines to discover and access data, one of the critical questions facing data repositories is how data and supporting materials should be packaged for consumption. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. In attempts to shorten the time to science and to access data resources across many disciplines, the expectation that machines will mediate the process of discovery and access is challenging data repository infrastructure. The challenge is to deliver data and information in ways that enable machines to make better decisions, by enabling them to understand the data and metadata of many data types. Additionally, once machines have recommended a data resource as relevant to an investigator's needs, the data resource should be easy to integrate into that investigator's toolkits for analysis and visualization. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) supports NSF-funded OCE and PLR investigators with their projects' data management needs. These needs involve a number of varying data types, some of which require multiple files with differing formats. Presently, BCO-DMO has described these data types, and the important relationships between each type's data files, through human-readable documentation on web pages. For machines directly accessing data files from BCO-DMO, this documentation could be overlooked and lead to misinterpreting the data.
Instead, BCO-DMO is exploring the idea of data containerization: packaging data and related information for easier transport, interpretation, and use. In researching the landscape of data containerization, the Frictionless Data project's Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and on how the Data Package addresses a number of real-world use cases for data discovery, access, analysis, and visualization.
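The containerization idea described above centers on a machine-readable descriptor that travels with the data files. As a hedged sketch: the top-level keys below (`name`, `resources`, `path`, `schema`) follow the published Data Package specification, but the dataset, file names, and the `unit` annotation are invented for illustration:

```python
import json

# Hypothetical descriptor (datapackage.json) for a one-file dataset.
descriptor = {
    "name": "example-ctd-profiles",
    "title": "Example CTD profiles (illustrative only)",
    "resources": [
        {
            "name": "ctd",
            "path": "data/ctd.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "depth", "type": "number", "unit": "m"},
                    {"name": "temperature", "type": "number", "unit": "degrees C"},
                ]
            },
        }
    ],
}

# Writing this descriptor alongside the data files yields a package that
# a machine can open and interpret without reading a project web page.
print(json.dumps(descriptor, indent=2)[:60])
```

Because the descriptor is plain JSON, any toolkit can read the column names, types, and units before downloading a byte of data, which is exactly the fitness-for-purpose assessment the abstract says machines must take over from humans.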
-
Article
SeaView: bringing together an ocean of data (The Oceanography Society, 2018-02-09)
Stocks, Karen; Diggs, Stephen; Olson, Christopher; Pham, Anh; Arko, Robert A.; Shepherd, Adam; Kinkade, Danie
The Ocean Observatories Initiative (OOI) supports a comprehensive information management system for data collected by OOI assets, providing access to a wealth of new information for scientists. But what of those wishing to study the region of an OOI research array using data that are not from OOI assets, perhaps to look at longer-term trends from before the launch of OOI, or to build a larger regional context? Despite the excellent work of ocean data repositories, finding, accessing, understanding, and reformatting data for use in a desired visualization or analysis tool remains challenging, especially when data are held in multiple repositories.
-
Presentation
Biological & Chemical Oceanography Data Management Office: a domain-specific repository for oceanographic data from around the world [poster] (2018-02-14)
Ake, Hannah; Biddle, Matt; Copley, Nancy; Kinkade, Danie; Rauch, Shannon; Saito, Mak A.; Shepherd, Adam; Switzer, Megan; Wiebe, Peter; York, Amber
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a domain-specific digital data repository that works with investigators funded under the National Science Foundation's Division of Ocean Sciences and Office of Polar Programs to manage their data free of charge. Data managers work closely with investigators to satisfy their data sharing requirements, to develop comprehensive Data Management Plans, and to ensure that their data are well described with extensive metadata. Additionally, BCO-DMO offers tools to find and reuse these high-quality data and metadata packages, and services such as DOI generation for publication and attribution. These resources are free for all to discover, access, and utilize. As a repository embedded in our research community, BCO-DMO is well positioned to offer knowledge and expertise from both domain-trained data managers and the scientific community at large. BCO-DMO is currently home to more than 9000 datasets and 900 projects, all of which are or will be submitted for archive at the National Centers for Environmental Information (NCEI). Our data holdings continue to grow and encompass a wide range of oceanographic research areas: biological, chemical, physical, and ecological. These data represent cruises and experiments from around the world, and are managed using community best practices, standards, and technologies to ensure accuracy and promote reuse. BCO-DMO is a repository and tool for investigators, offering both ocean science data and resources for data dissemination and publication.
-
Presentation
The Frictionless Data Package: data containerization for addressing big data challenges [poster] (2018-02-15)
Shepherd, Adam; Fils, Douglas; Kinkade, Danie; Saito, Mak A.
At the Biological and Chemical Oceanography Data Management Office (BCO-DMO), Big Data challenges have been steadily increasing. The sizes of data submissions have grown as instrumentation improves, and complex data types can sometimes be stored across different repositories. This signals a paradigm shift: data and information that are meant to be tightly coupled, and that have traditionally been stored under the same roof, are now distributed across repositories and data stores. For domain-specific repositories like BCO-DMO, a new mechanism for assembling data, metadata, and supporting documentation is needed. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation, and distributed storage could be communicated in text that a human could read and understand. However, as machines play larger roles in the process of discovering and accessing data, distributed resources must be described and packaged in ways that fit into machine-automated discovery and access workflows, so that the end-user can assess fitness for purpose. Once machines have recommended a data resource as relevant to an investigator's needs, the data should be easy to integrate into that investigator's toolkits for analysis and visualization. BCO-DMO is exploring the idea of data containerization: packaging data and related information for easier transport, interpretation, and use. Data containerization reduces friction not only for data repositories trying to describe complex data resources, but also for end-users trying to access data with their own toolkits.
In researching the landscape of data containerization, the Frictionless Data project's Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and on how the Data Package addresses a number of real-world use cases for data discovery, access, analysis, and visualization in the age of Big Data.
-
Other
BCO-DMO Quick Guide (2018-09-19)
Kinkade, Danie; Shepherd, Adam; Ake, Hannah; Biddle, Matt; Copley, Nancy; Rauch, Shannon; York, Amber
Curating and providing open access to research data is a collaborative process. This process may be thought of as a life cycle with data passing through various phases. Each phase has its own associated actors, roles, and critical activities. Good data management practices are necessary for all phases, from proposal to preservation.
-
Presentation
Towards capturing data curation provenance using Frictionless Data Package Pipelines [poster] (2018-10-10)
Shepherd, Adam; Schloer, Conrad; York, Amber; Kinkade, Danie
At domain-specific data repositories, curation that strives for FAIR principles often entails transforming data submissions to improve understanding and reuse. The Biological and Chemical Oceanography Data Management Office (BCO-DMO, https://www.bco-dmo.org) has been adopting the data containerization specification of the Frictionless Data project (https://frictionlessdata.io) in an effort to improve the efficiency of its data curation process. In doing so, BCO-DMO has been using the Frictionless Data Package Pipelines library (https://github.com/frictionlessdata/datapackage-pipelines) to define the processing steps that transform original submissions into final data products. Because these pipelines are defined using a declarative language, they can be serialized into formal provenance data structures using the Provenance Ontology (PROV-O, https://www.w3.org/TR/prov-o/). While there may still be some curation steps that cannot be easily automated, this method is a step towards reproducible transforms that bridge the original data submission to its published state in machine-actionable ways that benefit the research community through transparency in the data curation process.
-
Presentation
The Data Management Process and Lessons Learned From U.S. GEOTRACES (Woods Hole Oceanographic Institution, 2018-11-09)
Rauch, Shannon; Kinkade, Danie; Shepherd, Adam; Copley, Nancy; Biddle, Matt; York, Amber
In an effort to explore and develop international community interest in a potential future "Biogeotraces-like" program, a working group of 28 scientists from 9 nations met in Woods Hole in November 2018. The result of this workshop is a new research effort termed "Biogeoscapes". This presentation highlighted data management lessons and recommendations based on past experience handling data from a similarly scaled global research project, GEOTRACES.
-
Presentation
What role should a domain-specific repository play in treating code as a first-class research product? [poster] (2018-12-13)
Biddle, Matt; Ake, Hannah; Copley, Nancy; Kinkade, Danie; Rauch, Shannon; Saito, Mak A.; Shepherd, Adam; Wiebe, Peter; York, Amber
The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a publicly accessible earth science data repository created to curate, publicly serve (publish), and archive digital data and information from biological, chemical, and biogeochemical research conducted in coastal, marine, Great Lakes, and laboratory environments. The BCO-DMO repository works closely with investigators funded through the NSF OCE Division's Biological and Chemical Sections and Antarctic Organisms & Ecosystems program. The office provides services that span the full data life cycle, from data management planning support and DOI creation to archiving with appropriate national facilities. Recently, more and more of the projects submitted to BCO-DMO represent modeling efforts, which further increase our knowledge of the chemical and biological properties of the ocean ecosystem. But as a repository traditionally focused on observational data as a primary research output, what role should domain-specific data repositories play in this field? Recognizing code as a first-class research product, how should repositories support the discovery, access, and reuse of code and software used in hypothesis-driven research? We feel the time is at hand for the community to begin a concerted and holistic approach to the curation of code and software. Such strategy development should begin by asking: What is the appropriate output to curate? What is the minimum metadata required for reuse? How should code be stored and accessed? Should repositories support or facilitate peer review of code?
The answers to these questions will inform domain-specific repositories on how to better manage code as a first-class research asset in order to support the scientific community. This presentation will explore these topics, inviting discussion from the audience to advance a collective strategy.
-
Presentation
Towards Capturing Provenance of the Data Curation Process at Domain-specific Repositories (2018-12-14)
Shepherd, Adam; Rauch, Shannon; Schloer, Conrad; Kinkade, Danie; Biddle, Matt; Copley, Nancy; Saito, Mak A.; Wiebe, Peter; York, Amber
Data repositories often transform submissions to improve the understanding and reuse of data by researchers other than the original submitter. However, scientific workflows built by data submitters often depend on the original data format, and in some cases this makes the repository's final data product less useful to the submitter. As a result, these two workable but different versions of the data provide value to two disparate, non-interoperable research communities around what should be a single dataset. Data repositories could bridge these two communities by exposing provenance explaining the transformation from original submission to final product. A subsequent benefit of this provenance would be the transparent value-add of domain repository data curation. To improve the efficiency of its data management process, the Biological and Chemical Oceanography Data Management Office (BCO-DMO, https://www.bco-dmo.org) has been adopting the data containerization specification defined by the Frictionless Data project (https://frictionlessdata.io). Recently, BCO-DMO has been using the Frictionless Data Package Pipelines Python library (https://github.com/frictionlessdata/datapackage-pipelines) to capture the data curation processing steps that transform original submissions into final data products. Because these processing steps are stored using a declarative language, they can be converted to a structured provenance record using the Provenance Ontology (PROV-O, https://www.w3.org/TR/prov-o/). PROV-O abstracts the Frictionless Data elements of BCO-DMO's workflow for capturing the necessary curation provenance, and enables interoperability with other external provenance sources and tools.
Users who are familiar with PROV-O or the Frictionless Data Pipelines can use either record to reproduce the final data product in a machine-actionable way. While there may still be some curation steps that cannot be easily automated, this process is a step towards end-to-end reproducible transforms throughout the data curation process. In this presentation, BCO-DMO will demonstrate how Frictionless Data Package Pipelines can be used to capture data curation provenance from original submission to final data product, exposing the concrete value-add of domain-specific repositories.
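The conversion from declarative pipeline steps to PROV-O described above can be sketched with plain data structures. This is an illustration, not BCO-DMO's actual implementation: the processor names and dataset versions are invented, while `prov:Activity`, `prov:used`, and `prov:wasGeneratedBy` are terms from the PROV-O vocabulary:

```python
# Illustrative declarative pipeline: each step becomes a prov:Activity
# that prov:used one dataset version and generated the next.
pipeline_steps = [
    {"processor": "load_csv", "params": {"path": "submission.csv"}},
    {"processor": "rename_column", "params": {"from": "temp", "to": "temperature"}},
]

def steps_to_prov(steps, dataset="dataset"):
    """Convert pipeline steps into simple PROV-O-style triples
    (subject, predicate, object), chaining dataset versions."""
    statements = []
    for i, step in enumerate(steps):
        activity = f"activity_{i}_{step['processor']}"
        statements.append((activity, "rdf:type", "prov:Activity"))
        statements.append((activity, "prov:used", f"{dataset}_v{i}"))
        statements.append((f"{dataset}_v{i + 1}", "prov:wasGeneratedBy", activity))
    return statements

for s in steps_to_prov(pipeline_steps):
    print(s)  # three statements per pipeline step
```

Because each intermediate dataset version appears as both the output of one activity and the input of the next, the chain of triples records the full path from original submission to final product, which is the transparency the presentation argues for.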
-
Presentation
Frictionless Data Processing in the Wild (2019-05-08)
York, Amber; Schloer, Conrad; Copley, Nancy; Biddle, Matt; Rauch, Shannon; Haskins, Christina; Soenen, Karen; Shepherd, Adam; Kinkade, Danie
Frictionless Data (FD) initiatives out of the Open Knowledge Foundation provide attractive informatics and processing capabilities. The BCO-DMO data repository has used FD tools on real-world datasets, and we have some lessons learned to share. By building upon existing FD tools, we found ways to reduce the amount of time data managers spend generating metadata and writing custom scripts. We are also developing ways for data managers with varying levels of scripting ability to make use of Frictionless Data tools.
-
Article
Challenges and future directions for data management in the geosciences (American Meteorological Society, 2019-06-04)
Schuster, Douglas C.; Mayernik, Matthew; Hou, Chung-Yi; Stossmeister, Greg; Downs, Robert R.; Kinkade, Danie; Nguyen, Tran B.; Ramamurthy, Mohan; Zhang, Fuqing
The open availability and wide accessibility of digital scientific resources, such as articles and datasets, is becoming the norm for twenty-first-century science. Geoscience researchers are now being asked by funding agencies and scientific publishers to archive and cite data to support open access, but often struggle to understand, interpret, and fulfill these requirements. To fulfill the promise of new open data initiatives, 1) scientific resources (e.g., data and software) must be collected and documented properly; 2) repository services, including preservation and storage capabilities, must be maintained, supported, and improved over time; and 3) governance institutions must be established. These issues were discussed at the Geoscience Digital Data Resource and Repository Service (GeoDaRRS) workshop, held in August 2018 at NCAR. The workshop brought together more than 60 geoscience researchers, technology experts, scientific publishers, funders, and data repository personnel to discuss data management challenges and opportunities within the geosciences. This included exploring whether new services are needed to complement existing data facilities, particularly in the areas of 1) data management planning support resources and 2) repository services for geoscience researchers who have data that do not fit in any existing repository. More details on the workshop agenda and recommendations are available in the final workshop report (Mayernik et al. 2018).