---
title: RAPIDS + HPO
description: Learn How to Use RAPIDS with HPO in the Cloud
tagline: Use RAPIDS with Hyper Parameter Optimization
button_text: Get Started
button_link:
layout: default
---

![RAPIDS CSP HPO]({{ site.baseurl }}{% link /assets/images/csp+hpo.png %}){: .projects-logo}

# Accelerate Hyperparameter Optimization in the Cloud
{: .section-title-full}

{% capture intro_content %}

Machine learning models can have dozens of options, or “hyperparameters,” that make the difference between a great model and an inaccurate one. Accelerated machine learning models in RAPIDS give you the flexibility to use hyperparameter optimization (HPO) experiments to explore all of these options to find the most accurate possible model for your problem. The acceleration of GPUs lets data scientists iterate through hundreds or thousands of variants over a lunch break, even for complex models and large datasets.
{: .subtitle}

# RAPIDS Integration into Cloud / Distributed Frameworks
{: .section-title-full}

![RAPIDS CSP HPO]({{ site.baseurl }}{% link /assets/images/HPO-space-2.png %}){: .center}

{% endcapture %}

{% include section-single.html background="background-white" padding-top="3em" padding-bottom="1em" content-single=intro_content %}

{% capture yd_header %}

# Benefits With RAPIDS
{: .section-title-full}

{% endcapture %}

{% capture yd_left %}

## Smooth Integration

RAPIDS matches popular PyData APIs, making it an easy drop-in for existing workloads built on Pandas and scikit-learn.
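For instance, here is a minimal sketch of the drop-in pattern; the file name and column names are hypothetical:

```python
# cuDF mirrors the pandas API; cuML mirrors scikit-learn's.
import cudf
from cuml.linear_model import LinearRegression

df = cudf.read_csv("data.csv")  # pandas-style I/O, on the GPU (hypothetical file)
model = LinearRegression()      # scikit-learn-style estimator
model.fit(df[["x1", "x2"]], df["y"])
print(model.predict(df[["x1", "x2"]])[:5])
```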

{% endcapture %}

{% capture yd_mid %}

## High Performance

With GPU acceleration, RAPIDS models can train 40x faster than CPU equivalents, enabling more experimentation in less time.

{% endcapture %}

{% capture yd_right %}

## Deploy on Any Platform

The RAPIDS team works closely with major cloud providers and open source hyperparameter optimization solutions to provide code samples so you can get started with HPO in minutes on the cloud of your choice.

{% endcapture %}

{% include section-single.html background="background-white" padding-top="2em" padding-bottom="0em" content-single=yd_header %}
{% include section-thirds.html background="background-white" padding-top="0em" padding-bottom="10em" content-left-third=yd_left content-middle-third=yd_mid content-right-third=yd_right %}

{% capture start_left %}

# Getting Started
{: .section-title-halfs}

RAPIDS supports hyperparameter optimization and AutoML solutions built on AWS SageMaker, Azure ML, Google Cloud AI, Dask-ML, Optuna, Ray Tune, and TPOT, so you can easily integrate with whichever framework you use today. RAPIDS also integrates with MLflow to track and orchestrate experiments from any of these frameworks.
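As a taste of how these pieces fit together, below is a minimal sketch pairing Optuna with a GPU-accelerated cuML estimator; the synthetic dataset, search ranges, and trial budget are illustrative assumptions, not values from the sample notebooks.

```python
# Minimal sketch: Optuna driving a GPU-accelerated cuML model.
# The dataset, search ranges, and trial count are illustrative only.
import optuna
from cuml.datasets import make_classification
from cuml.ensemble import RandomForestClassifier
from cuml.metrics import accuracy_score
from cuml.model_selection import train_test_split

# Synthetic data stands in for your own GPU-resident dataset.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

def objective(trial):
    # Each trial trains one model variant on the GPU.
    model = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 100, 500),
        max_depth=trial.suggest_int("max_depth", 5, 25),
    )
    model.fit(X_train, y_train)
    return float(accuracy_score(y_valid, model.predict(X_valid)))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

To track each trial in MLflow, the same objective can be wrapped with Optuna's `optuna.integration.MLflowCallback` or with explicit `mlflow.log_params` and `mlflow.log_metric` calls.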

## Get the HPO example code

Our GitHub repo contains helper code, sample notebooks, and step-by-step instructions to get you up and running on each HPO platform.

[See our README](https://github.com/rapidsai/cloud-ml-examples){: target="_blank"}

## Clone the Repo

Start by cloning the open-source cloud-ml-examples repository from the rapidsai GitHub organization.

[See our Repo](https://github.com/rapidsai/cloud-ml-examples){: target="_blank"}

{% endcapture %}

{% capture start_right %}

## Notebook examples
{: .section-subtitle-top-1}

The repo walks you through step-by-step instructions for a sample hyperparameter optimization job. To start running your own experiments, navigate to the directory for your framework or CSP and check out the README.md file there.

[Walk Through The Notebooks](https://github.com/rapidsai/cloud-ml-examples){: target="_blank"}

## Video Tutorials

Watch tutorials of accelerated HPO examples on Amazon SageMaker and Azure ML from the RAPIDS AI YouTube channel, and Optuna + MLflow from JupyterCon 2020.

## Blog Posts

RAPIDS and Amazon SageMaker: Scale up and scale out to tackle ML challenges
{: .no-tb-margins }

An End to End Guide to Hyperparameter Optimization using RAPIDS and MLflow on Google’s Kubernetes Engine (GKE)
{: .no-tb-margins }

Hyperparameter Optimization with Optuna and RAPIDS
{: .no-tb-margins }

Faster AutoML with TPOT and RAPIDS
{: .no-tb-margins }

Optimizing Machine Learning Models with Hyperopt and RAPIDS on Databricks Cloud
{: .no-tb-margins }

Managing and Deploying High-Performance Machine Learning Models on GPUs with RAPIDS and MLFlow
{: .no-tb-margins }

30x Faster Hyperparameter Search with Ray Tune and RAPIDS

{% endcapture %}

{% capture chart_single %}

# Minimize Cost, Accelerate Turnaround
{: .section-title-full}

![100 job cost]({{ site.baseurl }}{% link /assets/images/100-Job HPO.png %}){: .full-image-center}

{% endcapture %}

{% include slopecap.html background="background-purple" position="top" slope="down" %}
{% include section-halfs.html background="background-purple" padding-top="5em" padding-bottom="0em" content-left-half=start_left content-right-half=start_right %}
{% include section-single.html background="background-purple" padding-top="0em" padding-bottom="10em" content-single=chart_single %}

{% capture cl_single %}

# Run your experiments with HPO

It’s easy to work in the cloud of your choice to find the best-quality model.
{: .subtitle}

{% endcapture %}

{% capture cl_left_top %}

## RAPIDS on Cloud Machine Learning Services

Azure ML, AWS SageMaker, and Google Cloud AI hyperparameter optimization services free users from the details of managing their own infrastructure. Launch a job from a RAPIDS sample notebook, and the platform automatically scales up, launching as many instances as needed to complete the experiments quickly. From a centralized interface, you can manage your jobs, view results, and find the best model to deploy. For deployment options and instructions, check out our Deploying RAPIDS in the Cloud page.
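The sketch below shows roughly what that launch looks like with the SageMaker Python SDK; the container image, execution role, metric regex, and S3 paths are placeholders, and the cloud-ml-examples notebooks walk through the real values.

```python
# Hedged sketch of a RAPIDS HPO launch on SageMaker; every <...> value
# is a placeholder you would fill in from your own account and container.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, IntegerParameter

session = sagemaker.Session()

# Bring-your-own RAPIDS training container on a single-GPU instance.
estimator = Estimator(
    image_uri="<your-rapids-training-image>",
    role="<your-sagemaker-execution-role>",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    sagemaker_session=session,
)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="final-score",
    metric_definitions=[
        # SageMaker scrapes this regex from the training logs.
        {"Name": "final-score", "Regex": "final-score: ([0-9\\.]+)"}
    ],
    hyperparameter_ranges={
        "n_estimators": IntegerParameter(100, 1000),
        "max_depth": IntegerParameter(5, 25),
    },
    max_jobs=100,          # total experiments in the sweep
    max_parallel_jobs=10,  # instances SageMaker scales up for you
)
tuner.fit({"train": "s3://<your-bucket>/train/"})
```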

{% endcapture %}

{% capture cl_right_top %}

## Bring Your Own Cloud: On-Prem or Public

Whether you run a cluster on-prem or manage instances in a public cloud, RAPIDS integrates with HPO platforms that can run on your infrastructure. Ray Tune and Dask-ML both provide cloud-neutral platforms for hyperparameter optimization. Ray Tune combines the scalable Ray platform with state-of-the-art HPO algorithms, including PBT, Vizier’s stopping rule, and more. Dask-ML HPO offers GPU-aware caching of intermediate datasets and a familiar, Pythonic API. Both can benefit from the high-performance estimators in RAPIDS.
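To make that concrete, here is a minimal sketch that drives a cuML estimator with Ray Tune's classic `tune.run` API; the search space, trial budget, and dataset are illustrative assumptions.

```python
# Cloud-neutral sketch: Ray Tune driving cuML model variants.
# One GPU is reserved per trial; all values below are illustrative.
from ray import tune
from cuml.datasets import make_classification
from cuml.ensemble import RandomForestClassifier
from cuml.metrics import accuracy_score
from cuml.model_selection import train_test_split

def train_rf(config):
    # Each Ray trial trains one GPU-accelerated model variant.
    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(
        n_estimators=config["n_estimators"],
        max_depth=config["max_depth"],
    )
    model.fit(X_train, y_train)
    tune.report(accuracy=float(accuracy_score(y_valid, model.predict(X_valid))))

analysis = tune.run(
    train_rf,
    config={
        "n_estimators": tune.randint(100, 1000),
        "max_depth": tune.randint(5, 25),
    },
    metric="accuracy",
    mode="max",
    num_samples=50,
    resources_per_trial={"gpu": 1},  # reserve one GPU per trial
)
print(analysis.best_config)
```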

{% endcapture %}

{% include slopecap.html background="background-white" position="top" slope="up" %}
{% include section-single.html background="background-white" padding-top="5em" padding-bottom="0em" content-single=cl_single %}
{% include section-halfs.html background="background-white" padding-top="0em" padding-bottom="10em" content-left-half=cl_left_top content-right-half=cl_right_top %}

{% capture end_bottom %}

# Get Started with Hyperopt
{: .section-title-full .text-white}
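As a starting point, here is a minimal sketch of a Hyperopt search over a cuML objective; the dataset, search space, and evaluation budget are illustrative assumptions rather than the Databricks walkthrough itself.

```python
# Minimal sketch: Hyperopt's TPE search over a GPU-accelerated cuML model.
# Dataset, search space, and max_evals are illustrative only.
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from cuml.datasets import make_classification
from cuml.ensemble import RandomForestClassifier
from cuml.metrics import accuracy_score
from cuml.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

def objective(params):
    model = RandomForestClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
    )
    model.fit(X_train, y_train)
    acc = float(accuracy_score(y_valid, model.predict(X_valid)))
    return {"loss": -acc, "status": STATUS_OK}  # Hyperopt minimizes loss

best = fmin(
    fn=objective,
    space={
        "n_estimators": hp.quniform("n_estimators", 100, 1000, 50),
        "max_depth": hp.quniform("max_depth", 5, 25, 1),
    },
    algo=tpe.suggest,  # Tree-structured Parzen Estimator search
    max_evals=50,
    trials=Trials(),
)
print(best)
```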

{% endcapture %}

{% include slopecap.html background="background-darkpurple" position="top" slope="down" %}
{% include section-single.html background="background-darkpurple" padding-top="0em" padding-bottom="0em" content-single=end_bottom %}
{% include cta-footer-hpo.html background="background-darkpurple" %}