Light Speed IO (LSIO)

Warning

I've paused development on LSIO for now and shifted my focus to hypergrib. In its current state, LSIO is a very minimal proof-of-concept showing that io_uring is faster than object_store when reading many small chunks of files from local PCIe 5 SSDs on Linux. There is no Python API yet.

The ultimate ambition is to enable folks to efficiently load and process large, multi-dimensional datasets as fast as modern CPUs & I/O subsystems will allow.

For now, this repo is just a place for me to tinker with ideas.

Under the hood, light-speed-io uses io_uring on Linux for local files, and will use object_store for all other data I/O.
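
As a rough illustration of that split, here's a minimal sketch (not LSIO's actual API) of how a reader might dispatch on the location string; the Backend enum and choose_backend function are placeholder names invented for this example.

```rust
// Minimal sketch (not LSIO's actual API): pick an IO backend from the
// location string. `Backend` and `choose_backend` are placeholder names.
enum Backend {
    IoUring,     // local files on Linux, read via io_uring
    ObjectStore, // everything else (s3://, gs://, http://, ...), via object_store
}

fn choose_backend(location: &str) -> Backend {
    // Treat bare paths and file:// URLs as local files.
    let is_local_file = location.starts_with("file://") || !location.contains("://");
    if cfg!(target_os = "linux") && is_local_file {
        Backend::IoUring
    } else {
        Backend::ObjectStore
    }
}

fn main() {
    assert!(matches!(
        choose_backend("s3://bucket/data.zarr"),
        Backend::ObjectStore
    ));
}
```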

My first use-case for light-speed-io is to help speed up reading Zarr. After that, I'm interested in helping to create fast readers for "native" geospatial file formats like GRIB2 and EUMETSAT native files. Further out, I'm interested in efficient, fast computation on out-of-core, chunked, labelled, multi-dimensional data.

See planned_design.md for more info on the planned design, and see this blog post for my motivation for wanting to help speed up Zarr.

Roadmap

(This will almost certainly change!)

The list below is in (rough) chronological order. This roadmap is also represented in the GitHub milestones for this project, when sorted alphabetically.

Throw-away prototype

  • Initial prototype where a single crate does the IO and compute
  • io_uring prototype
  • io_uring prototype using Rayon to loop through io_uring completion queue
  • io_uring async/await implementation with object_store-like API
  • Try mixing Tokio with Rayon
  • Don't initialise buffers
  • Use aligned buffers and O_DIRECT
  • Benchmark against object_store using criterion
  • Chain open, read, close ops in io_uring (see the io_uring sketch after this list)
  • Build new workstation (with PCIe5 SSD)
  • Try using trait objects vs enum vs Box::into_raw for tracking in-flight operations
  • Try using fixed (registered) file descriptors
  • Try using Rayon for the IO threadpool
  • Investigate Rust's Stream.
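
As a concrete illustration of the "chain open, read, close ops" item above, here's a minimal sketch using the io-uring crate directly; this is not LSIO code. For simplicity it links only a Read and a Close on an already-open file descriptor via IO_LINK (a full open -> read -> close chain would also need io_uring's fixed/registered file descriptors). The file path and user_data values are arbitrary.

```rust
// Minimal sketch (not LSIO code): link a Read and a Close so the Close only
// runs after the Read completes. Path and user_data values are arbitrary.
use io_uring::{opcode, squeue::Flags, types, IoUring};
use std::fs::File;
use std::os::unix::io::IntoRawFd;

fn main() -> std::io::Result<()> {
    let mut ring = IoUring::new(8)?;

    // Open with an ordinary syscall for brevity; take ownership of the raw fd
    // so the io_uring Close is the only close.
    let raw_fd = File::open("/tmp/example.bin")?.into_raw_fd();
    let fd = types::Fd(raw_fd);

    let mut buf = vec![0u8; 4096];

    // IO_LINK chains this SQE to the next one in the submission queue.
    let read_sqe = opcode::Read::new(fd, buf.as_mut_ptr(), buf.len() as u32)
        .build()
        .flags(Flags::IO_LINK)
        .user_data(1);
    let close_sqe = opcode::Close::new(fd).build().user_data(2);

    {
        let mut sq = ring.submission();
        unsafe {
            sq.push(&read_sqe).expect("submission queue full");
            sq.push(&close_sqe).expect("submission queue full");
        }
    }
    ring.submit_and_wait(2)?;

    for cqe in ring.completion() {
        println!("op {} returned {}", cqe.user_data(), cqe.result());
    }
    Ok(())
}
```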

Fresh start. Laying the foundations. New crates:

  • lsio_aligned_bytes: Shareable buffer which can be aligned to arbitrary boundaries at runtime (see the sketch after this list)
  • lsio_threadpool: Work-stealing threadpool (based on crossbeam-deque)
  • lsio_io: Traits for all LSIO IO backends
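
To make the lsio_aligned_bytes idea a bit more concrete, here's a hypothetical sketch of an aligned, heap-allocated buffer; this is not the real crate's API. It zero-initialises the memory for simplicity, even though one roadmap goal above is to avoid initialising buffers.

```rust
// Hypothetical sketch (not the real lsio_aligned_bytes API): a buffer whose
// start address and length are both multiples of `align`, as O_DIRECT needs.
use std::alloc::{alloc_zeroed, dealloc, Layout};

struct AlignedBuf {
    ptr: *mut u8,
    layout: Layout,
}

impl AlignedBuf {
    /// `align` must be a power of two (e.g. 512 or 4096 for O_DIRECT).
    fn new(len: usize, align: usize) -> Self {
        // Round `len` up to the next multiple of `align`.
        let len = (len + align - 1) / align * align;
        let layout = Layout::from_size_align(len, align).expect("bad layout");
        // Zeroed for simplicity; a real implementation would avoid this cost.
        let ptr = unsafe { alloc_zeroed(layout) };
        assert!(!ptr.is_null(), "allocation failed");
        Self { ptr, layout }
    }

    fn as_mut_slice(&mut self) -> &mut [u8] {
        unsafe { std::slice::from_raw_parts_mut(self.ptr, self.layout.size()) }
    }
}

impl Drop for AlignedBuf {
    fn drop(&mut self) {
        unsafe { dealloc(self.ptr, self.layout) }
    }
}

fn main() {
    let mut buf = AlignedBuf::new(1000, 512); // rounds up to 1024 bytes
    assert_eq!(buf.as_mut_slice().len(), 1024);
    assert_eq!(buf.as_mut_slice().as_ptr() as usize % 512, 0);
}
```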

MVP IO layer

MVP Compute layer

  • Build a general-purpose work-stealing framework for applying arbitrary functions to chunks of data in parallel, while respecting groups (see the sketch after this list)
  • Wrap a few decompression algorithms
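
For a flavour of the work-stealing idea behind lsio_threadpool and this compute layer, here's a toy sketch using crossbeam-deque directly; it is not the real framework and it ignores grouping. The "decompression" is just a stand-in sum.

```rust
// Toy sketch (not lsio_threadpool's API): one producer queue, one thief
// thread, using crossbeam-deque. The "decompression" is a stand-in sum.
use crossbeam_deque::{Steal, Stealer, Worker};
use std::thread;

fn main() {
    let local: Worker<Vec<u8>> = Worker::new_fifo();
    let stealer: Stealer<Vec<u8>> = local.stealer();

    // Push all the "compressed chunks" onto the local queue up front,
    // so an Empty steal below really does mean there's no work left.
    for i in 0..8u8 {
        local.push(vec![i; 4]);
    }

    // A thief thread steals chunks and applies a function to each one.
    let thief = thread::spawn(move || {
        let mut n_processed = 0;
        loop {
            match stealer.steal() {
                Steal::Success(chunk) => {
                    let _sum: usize = chunk.iter().map(|&b| b as usize).sum();
                    n_processed += 1;
                }
                Steal::Empty => break,
                Steal::Retry => continue,
            }
        }
        n_processed
    });

    println!("thief processed {} chunks", thief.join().unwrap());
}
```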

MVP File format layer: Read from Zarr

  • MVP Zarr library (just for reading data; see the chunk-key sketch after this list)
  • Python API for lsio_zarr
  • Benchmark lsio_zarr vs zarr-python v3 (from Python)
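
As a small taste of what an MVP Zarr reader has to do, here's a hypothetical sketch of mapping chunk indices to a Zarr v3 chunk key, assuming the default "c/" prefix and "/" separator. The function name is illustrative and is not lsio_zarr's API.

```rust
// Hypothetical sketch (not lsio_zarr's API): build the storage key for a
// chunk of a Zarr v3 array, assuming the default "c/" prefix and "/" separator.
fn chunk_key(array_path: &str, chunk_indices: &[u64]) -> String {
    let idx: Vec<String> = chunk_indices.iter().map(|i| i.to_string()).collect();
    format!("{}/c/{}", array_path.trim_end_matches('/'), idx.join("/"))
}

fn main() {
    assert_eq!(chunk_key("group/temperature", &[3, 0]), "group/temperature/c/3/0");
}
```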

Improve the IO layer:

  • Optimise (merge and split) IO operations
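
The "merge" half of that optimisation could look something like the sketch below; max_gap is an assumed tuning knob (how many wasted bytes we're willing to read to save a request), not a parameter from the LSIO codebase.

```rust
// Hypothetical sketch: coalesce nearby byte ranges into fewer, larger reads.
// `max_gap` is an assumed tuning parameter, not from the LSIO codebase.
use std::ops::Range;

fn merge_ranges(mut ranges: Vec<Range<u64>>, max_gap: u64) -> Vec<Range<u64>> {
    ranges.sort_by_key(|r| r.start);
    let mut merged: Vec<Range<u64>> = Vec::new();
    for r in ranges {
        match merged.last_mut() {
            // Extend the previous range if the gap to this one is small enough.
            Some(prev) if r.start <= prev.end + max_gap => {
                prev.end = prev.end.max(r.end);
            }
            _ => merged.push(r),
        }
    }
    merged
}

fn main() {
    let reads = vec![0..100, 120..200, 10_000..10_050];
    // With a 64-byte tolerance the first two reads coalesce into one.
    assert_eq!(merge_ranges(reads, 64), vec![0..200, 10_000..10_050]);
}
```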

Improve the compute layer

  • Investigate how xarray can "push down" chunkwise computation to LSIO

MVP End-user applications!

  • Compute simple stats of a large dataset (to see if we hit our target of processing 1 TB per 5 minutes, roughly 3.3 GB/s sustained, on a laptop!)
  • Load Zarr into a PyTorch training pipeline
  • Implement merging multiple datasets on-the-fly (e.g. NWP and satellite).

First release!

  • Docs; GitHub Actions for Python releases; more rigorous automated testing; etc.
  • Release!
  • Enable Zarr-Python to use LSIO as a storage and codec pipeline?

Implement writing

  • Implement writing using lsio_uring
  • Implement writing using lsio_object_store_bridge
  • Implement writing in lsio_zarr

Improve IO:

Improve the file formats layer: Add GRIB support???

(Although maybe this won't be necessary because dynamical.org are converting datasets to Zarr)

  • Implement simple GRIB reader?
  • Convert GRIB to Zarr?
  • Load GRIB into a PyTorch training pipeline?

Grow the team? (Only if the preceding work has shown promise)

  • Try to raise grant funding?
  • Hire???

Future work (in no particular order, and no promise any of these will be done!)

  • Multi-dataset abstraction layer (under the hood, the same data would be chunked differently for different use-cases, but that complexity would be hidden from users, who would just interact with a single "logical dataset")
  • Allow xarray to "push down" all its operations to LSIO
  • xarray-like data structures implemented in Rust? (notes)
  • Fast indexing operations for xarray (notes)
  • Support for kerchunk / VirtualiZarr / Zarr Manifest Storage Transformer
  • Compute using SIMD / NPUs / GPUs, perhaps using Bend / Mojo
  • Support many compression algorithms
  • Automatically tune performance
  • "Smart" scheduling of compute and IO (see notes)
  • Tile-based algorithms for numpy
  • EUMETSAT Native file format
  • NetCDF
  • Warping / spatial reprojection
  • Rechunking Zarr
  • Converting between formats (e.g. convert EUMETSAT .nat files to 10-bit-per-channel bit-packed Zarr). If there's no computation to be done on the data during conversion, then do all the copying with io_uring: open the source file -> read chunks from the source -> write to the destination -> etc.
  • Write a wiki (or a book) on high-performance multi-dimensional data IO and compute
  • Integrate with Dask to run tasks across many machines
  • Use LSIO as the storage and compute backend for other software packages

Project structure

Light Speed IO is organised as a Cargo workspace containing multiple small crates, arranged in a flat crate structure. This flat structure is also used by projects such as Ruff, Polars, and rust-analyzer.

LSIO crate names use snake_case, following in the footsteps of the Rust Book and Ruff. (The choice of snake_case versus hyphens is, as far as I can tell, entirely arbitrary: Polars and rust-analyzer both use hyphens. I just prefer the look of underscores!)
