Janekovic Ivica


Search Results

  • Article
    A standardisation framework for bio-logging data to advance ecological research and conservation
    (Wiley, 2021-03-15) Sequeira, Ana M. M. ; O'Toole, Malcolm ; Keates, Theresa R. ; McDonnell, Laura H. ; Braun, Camrin D. ; Hoenner, Xavier ; Jaine, Fabrice R. A. ; Jonsen, Ian ; Newman, Peggy ; Pye, Jonathan ; Bograd, Steven ; Hays, Graeme ; Hazen, Elliott L. ; Holland, Melinda ; Tsontos, Vardis ; Blight, Clint ; Cagnacci, Francesca ; Davidson, Sarah C. ; Dettki, Holger ; Duarte, Carlos M. ; Dunn, Daniel C. ; Eguíluz, Víctor M. ; Fedak, Michael ; Gleiss, Adrian C. ; Hammerschlag, Neil ; Hindell, Mark ; Holland, Kim ; Janekovic, Ivica ; McKinzie, Megan K. ; Muelbert, Monica M. C. ; Pattiaratchi, Charitha ; Rutz, Christian ; Sims, David W. ; Simmons, Samantha E. ; Townsend, Brendal ; Whoriskey, Frederick G. ; Woodward, Bill ; Costa, Daniel P. ; Heupel, Michelle R. ; McMahon, Clive R. ; Harcourt, Robert ; Weise, Michael
    1. Bio-logging data obtained by tagging animals are key to addressing global conservation challenges. However, the many thousands of existing bio-logging datasets are not easily discoverable, universally comparable, nor readily accessible through existing repositories and across platforms, slowing down ecological research and effective management. A set of universal standards is needed to ensure discoverability, interoperability and effective translation of bio-logging data into research and management recommendations.
    2. We propose a standardisation framework adhering to existing data principles (FAIR: Findable, Accessible, Interoperable and Reusable; and TRUST: Transparency, Responsibility, User focus, Sustainability and Technology) and involving the use of simple templates to create a data flow from manufacturers and researchers to compliant repositories, where automated procedures should be in place to prepare data availability into four standardised levels: (a) decoded raw data, (b) curated data, (c) interpolated data and (d) gridded data. Our framework allows for integration of simple tabular arrays (e.g. csv files) and creation of sharable and interoperable network Common Data Form (netCDF) files containing all the needed information for accuracy-of-use, rightful attribution (ensuring data providers keep ownership through the entire process) and data preservation security.
    3. We show the standardisation benefits for all stakeholders involved, and illustrate the application of our framework by focusing on marine animals and by providing examples of the workflow across all data levels, including filled templates and code to process data between levels, as well as templates to prepare netCDF files ready for sharing.
    4. Adoption of our framework will facilitate collection of Essential Ocean Variables (EOVs) in support of the Global Ocean Observing System (GOOS) and inter-governmental assessments (e.g. the World Ocean Assessment), and will provide a starting point for broader efforts to establish interoperable bio-logging data formats across all fields in animal ecology.
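    The progression between the first three data levels can be illustrated with a minimal sketch. The tag fixes, coordinate bounds, and time step below are hypothetical, and plain Python stands in for the paper's actual templates and processing code; it shows how decoded raw positions (level a) might be curated (level b) and then interpolated onto a regular time grid (level c):

    ```python
    from datetime import datetime, timedelta

    # Hypothetical decoded raw fixes (level a): (timestamp, lon, lat),
    # including one implausible position that curation should drop.
    raw = [
        (datetime(2021, 3, 1, 0, 0), 115.0, -32.0),
        (datetime(2021, 3, 1, 6, 0), 115.2, -32.1),
        (datetime(2021, 3, 1, 9, 0), 999.0, -32.1),   # bad fix: longitude out of range
        (datetime(2021, 3, 1, 12, 0), 115.4, -32.3),
    ]

    # Level b (curated): keep only positions inside valid coordinate ranges.
    curated = [(t, lon, lat) for t, lon, lat in raw
               if -180 <= lon <= 180 and -90 <= lat <= 90]

    def interpolate(track, step_hours=3):
        """Level c (interpolated): linear interpolation onto a regular time grid."""
        out = []
        t = track[0][0]
        end = track[-1][0]
        i = 0
        while t <= end:
            # advance the bracketing segment so track[i][0] <= t <= track[i+1][0]
            while i + 1 < len(track) - 1 and track[i + 1][0] < t:
                i += 1
            t0, lon0, lat0 = track[i]
            t1, lon1, lat1 = track[i + 1]
            f = (t - t0).total_seconds() / (t1 - t0).total_seconds() if t1 != t0 else 0.0
            out.append((t, lon0 + f * (lon1 - lon0), lat0 + f * (lat1 - lat0)))
            t += timedelta(hours=step_hours)
        return out

    track = interpolate(curated)  # 5 positions at a regular 3-hourly spacing
    ```

    Level d (gridded) would then aggregate such interpolated tracks onto a fixed spatial grid; in practice each level would be written to a netCDF file carrying the attribution metadata the framework requires.
    
    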
  • Preprint
    Collaboration tools and techniques for large model datasets
    (2006-08-08) Signell, Richard P. ; Carniel, Sandro ; Chiggiato, Jacopo ; Janekovic, Ivica ; Pullen, Julie ; Sherwood, Christopher R.
    In MREA and many other marine applications, it is common to have multiple models running on different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures then enable scientists to explore data on the original model grids using software they are already familiar with. The approach is also low-bandwidth: users extract only the data they require, an important feature for access from ships or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters; no advanced programming support is necessary.
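    The low-bandwidth subsetting that OPeNDAP provides rests on DAP constraint expressions appended to the dataset URL: the client names one variable and an index hyperslab, and only that slice crosses the network. The helper below is a minimal sketch of building such a request URL; the THREDDS endpoint and variable name are hypothetical, not taken from the paper:

    ```python
    def dap_subset_url(base_url, var, slices):
        """Build a DAP2 request URL asking for a hyperslab of one variable,
        e.g. temp[0:1:0][0:1:0][100:1:150][200:1:250].
        `slices` is a list of (start, stride, stop) index triples."""
        ce = var + "".join(f"[{a}:{b}:{c}]" for a, b, c in slices)
        return f"{base_url}.dods?{ce}"

    # Hypothetical model-output endpoint; only the requested hyperslab is
    # transferred, which is what keeps access low-bandwidth.
    url = dap_subset_url(
        "http://example.org/thredds/dodsC/adriatic/roms_his.nc",
        "temp",
        [(0, 1, 0), (0, 1, 0), (100, 1, 150), (200, 1, 250)],
    )
    ```

    In everyday use a client library such as the netCDF4 or PyDAP bindings constructs these expressions behind the scenes when a user indexes a remote variable, so scientists work with ordinary array syntax rather than raw URLs.
    
    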