intern: Integrated Toolkit for Extensible and Reproducible Neuroscience

2020
As neuroscience datasets continue to grow in size, the complexity of data analyses can require a detailed understanding and implementation of systems computer science for storage, access, processing, and sharing. Currently, several general data standards (e.g., Zarr, HDF5, precompute, tensorstore) and purpose-built ecosystems (e.g., BossDB, CloudVolume, DVID, and Knossos) exist. Each of these systems has advantages and limitations and is most appropriate for different use cases. Using datasets that don't fit into RAM in this heterogeneous environment is challenging, and significant barriers exist to leveraging underlying research investments. In this manuscript, we outline our perspective on how to approach this challenge through the use of community-provided, standardized interfaces that unify various computational backends and abstract computer science challenges away from the scientist. We introduce desirable design patterns and our reference implementation called intern.
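The core design pattern described above, one standardized interface fronting interchangeable storage backends, can be sketched as follows. This is a minimal illustration, not intern's actual API; the class and function names (`VolumeBackend`, `InMemoryBackend`, `mean_intensity`) are hypothetical, and a real backend would wrap a store such as BossDB, Zarr, or HDF5 rather than an in-memory array.

```python
from abc import ABC, abstractmethod
import numpy as np

class VolumeBackend(ABC):
    """Hypothetical unified interface: every backend exposes the same
    cutout call, so analysis code never touches backend-specific details."""

    @abstractmethod
    def get_cutout(self, x, y, z):
        """Return the subvolume data[z, y, x] for slice objects x, y, z."""

class InMemoryBackend(VolumeBackend):
    """Toy backend backed by a NumPy array, standing in for a real
    store such as a cloud archive or a Zarr/HDF5 file."""

    def __init__(self, data):
        self.data = data

    def get_cutout(self, x, y, z):
        return self.data[z, y, x]

def mean_intensity(backend, x, y, z):
    """Analysis code written once against the interface runs unchanged
    on any backend that implements it."""
    return float(backend.get_cutout(x, y, z).mean())

volume = InMemoryBackend(np.arange(8).reshape(2, 2, 2))
print(mean_intensity(volume, slice(0, 2), slice(0, 2), slice(0, 1)))  # → 1.5
```

Because the scientist codes only against `get_cutout`, swapping a local file for a remote archive requires no change to the analysis itself, which is the kind of abstraction the manuscript advocates.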