mirai ( 未来 )
Minimalist Async Evaluation Framework for R
High-performance parallel code execution and distributed computing. Designed for simplicity, a ‘mirai’ evaluates an R expression asynchronously, on local or network resources, resolving automatically upon completion. Modern networking and concurrency, built on nanonext and NNG (Nanomsg Next Gen), ensure reliable and efficient scheduling over fast inter-process communications or TCP/IP secured by TLS.
“I tried out the mirai package and was surprised by how fast it is.”
Use mirai() to evaluate an expression asynchronously in a separate, clean R process. A ‘mirai’ object is returned immediately.
library(mirai)

input <- list(x = 2, y = 5, z = double(1e8))

m <- mirai(
  {
    res <- rnorm(1e6, mean = mean, sd = sd)
    max(res) - min(res)
  },
  mean = input$x,
  sd = input$y
)
Above, all name = value pairs are passed through to the mirai via the ... argument.
Whilst the async operation is ongoing, attempting to access the data yields an ‘unresolved’ logical NA.
m
#> < mirai [] >
m$data
#> 'unresolved' logi NA
To check whether a mirai has resolved:
unresolved(m)
#> [1] TRUE
To wait for and collect the evaluated result, use the mirai’s [] method:
m[]
#> [1] 47.4314
It is not necessary to wait, as the mirai resolves automatically whenever the async operation completes; the evaluated result is then available at $data.
m
#> < mirai [$data] >
m$data
#> [1] 47.4314
Daemons are persistent background processes created to receive ‘mirai’ requests.
They may be deployed for:
Local parallel processing; or
Remote network distributed computing.
Launchers allow daemons to be started both on the local machine and across the network via SSH etc. Secure TLS connections can be automatically configured on the fly for remote daemon connections.
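As a minimal sketch of local daemons (assuming only that mirai is installed), persistent background processes are set up and torn down with daemons():

```r
library(mirai)

# create 4 persistent local daemons to receive 'mirai' requests
daemons(4)

# evaluation is now distributed across the daemon processes;
# the daemon's process ID differs from that of the host session
m <- mirai(Sys.getpid())
pid <- m[]

# reset: shut down all daemons when finished
daemons(0)
```

Without daemons set, each mirai instead launches an ephemeral process, so setting daemons avoids repeated startup costs for multiple evaluations.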
The mirai vignette may be accessed within R by:
vignette("mirai", package = "mirai")
The following core integrations are documented, with usage examples in the linked vignettes:
Provides an alternative communications backend for R, implementing a low-level feature request by R-Core at R Project Sprint 2023. ‘miraiCluster’ may also be used with foreach, which is supported via doParallel.
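For illustration, a brief sketch of a ‘miraiCluster’ used with the base parallel apply functions, via mirai’s make_cluster() and stop_cluster() (assumes mirai is installed):

```r
library(mirai)
library(parallel)

# make_cluster() returns a 'miraiCluster', accepted anywhere
# a 'parallel' cluster object is expected
cl <- make_cluster(2)

# dispatch work across the cluster nodes
squares <- parLapply(cl, 1:4, function(x) x^2)

stop_cluster(cl)
```

The same cluster object may be registered with doParallel to drive foreach loops.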
Implements the next generation of completely event-driven, non-polling promises. ‘mirai’ may be used interchangeably with ‘promises’, including with the promise pipe %...>%.
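As a small sketch (assuming the promises package is installed), a mirai can be piped directly with the promise pipe, yielding a promise that fulfills when the async operation completes:

```r
library(mirai)
library(promises)

# a mirai is directly usable as a promise; the result is
# piped onward once the evaluation resolves
p <- mirai({
  Sys.sleep(0.5)
  sum(1:10)
}) %...>% print()
```

The promise resolves on the event loop, so in interactive or Shiny contexts the result is handled without blocking the session.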
Asynchronous parallel / distributed backend, supporting the next level
of responsiveness and scalability for Shiny. Launches ExtendedTasks, or
plugs directly into the reactive framework for advanced uses.
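A minimal sketch of the ExtendedTask pattern backed by mirai (a hypothetical app for illustration; assumes shiny >= 1.8.1 and mirai are installed):

```r
library(shiny)
library(mirai)

ui <- fluidPage(
  actionButton("btn", "Compute"),
  textOutput("result")
)

server <- function(input, output, session) {
  # ExtendedTask accepts any function returning a promise,
  # and a mirai is directly usable as one
  task <- ExtendedTask$new(function() mirai({
    Sys.sleep(1)
    runif(1)
  }))
  observeEvent(input$btn, task$invoke())
  output$result <- renderText(task$result())
}

# shinyApp(ui, server)  # run interactively
```

The long computation runs in a daemon process, leaving the Shiny session free to serve other reactive updates.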
Asynchronous parallel / distributed backend, capable of scaling
Plumber applications in production usage.
Allows queries using the Apache Arrow format to be handled seamlessly
over ADBC database connections hosted in daemon processes.
Allows Torch tensors and complex objects such as models and optimizers
to be used seamlessly across parallel processes.
Targets, a Make-like pipeline tool for statistics and data science, has integrated and adopted crew as its default high-performance computing backend. Crew is a distributed worker-launcher extending mirai to different distributed computing platforms, from traditional clusters to cloud services.
crew.cluster enables mirai-based workflows on traditional high-performance computing clusters using LSF, PBS/TORQUE, SGE and SLURM.
crew.aws.batch extends mirai to cloud computing using AWS Batch.
We would like to thank in particular:
Will Landau for being instrumental in shaping development of the package, from initiating the original request for persistent daemons, through to orchestrating robustness testing for the high performance computing requirements of crew and targets.
Joe Cheng for optimising the promises method to make mirai work seamlessly within Shiny, and prototyping non-polling promises, which is implemented across nanonext and mirai.
Luke Tierney of R Core, for discussion on L’Ecuyer-CMRG streams to ensure statistical independence in parallel processing, and making it possible for mirai to be the first ‘alternative communications backend for R’.
Henrik Bengtsson for valuable insights leading to the interface accepting broader usage patterns.
Daniel Falbel for discussion around an efficient solution to serialization and transmission of torch tensors.
Kirill Müller for discussion on using ‘daemons’ to host Arrow database connections.
The R Consortium for funding work on the TLS implementation in nanonext, used to provide secure connections in mirai.
Install the latest release from CRAN:
install.packages("mirai")
Or the development version from R-universe:
install.packages("mirai", repos = "https://shikokuchuo.r-universe.dev")
◈ mirai R package: https://shikokuchuo.net/mirai/
◈ nanonext R package: https://shikokuchuo.net/nanonext/
mirai is listed in CRAN High Performance Computing Task View:
https://cran.r-project.org/view=HighPerformanceComputing
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.