Friday, November 5, 2021

Ray

Ray Core provides simple primitives for building distributed Python applications. It’s great for parallelizing single-machine Python applications with minimal code changes. Built on top of Ray Core is a rich ecosystem of high-level libraries and frameworks for scaling specific workloads like reinforcement learning and model serving. 
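To make the "minimal code changes" point concrete, here is a small sketch of the Ray Core task API. The function and the toy workload are just illustrations; ray.init(), @ray.remote, .remote(), and ray.get() are the actual core primitives.

    import ray

    ray.init()  # start a local Ray runtime (or connect to an existing cluster)

    @ray.remote
    def square(x):
        # An ordinary Python function becomes a distributed task via the decorator.
        return x * x

    # Launch four tasks in parallel; .remote() returns object refs (futures) immediately.
    futures = [square.remote(i) for i in range(4)]

    # ray.get() blocks until all results are ready.
    print(ray.get(futures))  # [0, 1, 4, 9]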

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning
  • RLlib: Scalable Reinforcement Learning
  • Train: Distributed Deep Learning (alpha)
  • Datasets: Distributed Data Loading and Compute (beta)
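As a taste of that first group, here is a hedged sketch of a Tune hyperparameter sweep. The trainable, the "score" metric, and the learning-rate grid are made up for illustration; tune.run, tune.grid_search, and tune.report are the Ray 1.x-era entry points.

    from ray import tune

    def trainable(config):
        # Stand-in objective: pretend the best learning rate is 0.05.
        score = 1.0 - abs(config["lr"] - 0.05)
        tune.report(score=score)  # report a metric back to Tune

    analysis = tune.run(
        trainable,
        config={"lr": tune.grid_search([0.001, 0.01, 0.05, 0.1])},
    )
    print(analysis.get_best_config(metric="score", mode="max"))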

Ray also ships with libraries for taking ML and distributed apps to production:

  • Serve: Scalable and Programmable Serving
  • Workflows: Fast, Durable Application Flows (alpha)
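On the production side, here is a rough sketch of a Ray Serve deployment as it looked in the 1.x API. The deployment name and response text are illustrative, and later Ray releases changed the recommended entry points.

    import ray
    from ray import serve

    ray.init()
    serve.start()

    @serve.deployment
    def hello(request):
        # Each HTTP request to the deployment's route lands here.
        return "Hello from Ray Serve!"

    hello.deploy()  # served over HTTP, by default at /hello

    # Query it like any web service:
    #   import requests
    #   print(requests.get("http://127.0.0.1:8000/hello").text)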

There are also many community integrations with Ray, including Dask, MARS, Modin, Horovod, Hugging Face, Scikit-learn, and others. The full list of Ray distributed libraries is available on the Ray website.

https://www.ray.io/ 

https://github.com/ray-project/ray

https://towardsdatascience.com/modern-parallel-and-distributed-python-a-quick-tutorial-on-ray-99f8d70369b8 

https://www.anyscale.com/blog/writing-your-first-distributed-python-application-with-ray
