MiniKF 1.4 is here! MiniKF is the fastest way to get the Kubeflow MLOps platform running on either AWS or Google Cloud. Recall that Kubeflow is not a single binary or executable file, but instead a complex platform made up of multiple services, where each component has its own list of dependencies. It’s not uncommon for engineers to spend hours or even days setting up their Kubernetes environments, deploying the Kubeflow components, and then performing the necessary QA to make sure it all works together. With MiniKF, Arrikto has cut out all the complexity, while also leveling up the capabilities, so you can get to building and serving models in the least amount of time.
In a nutshell, MiniKF is a single-user deployment preconfigured with all the Kubeflow components you need to develop and serve your models. MiniKF 1.4:
- Runs on top of Kubernetes on a single VM
- Supports AWS and GCP
- Ships with the latest Kubeflow v1.4 release
- Comes with the popular Kale and Rok components preconfigured
If you are looking to deploy on EKS, AKS or GKE, with multi-user support, plus enhanced security and data management capabilities, then we recommend checking out Arrikto’s Enterprise Kubeflow (EKF) distribution.
What’s inside MiniKF 1.4?
To make your Kubeflow experience as fast and easy as possible, MiniKF 1.4 supports or ships with the following preconfigured components, all verified to work together:
- Kubeflow v1.4
- Kubernetes v1.19.15
- Minikube v1.23.2
- Istio v1.9.6
- Notebooks v1.4
- Training Operators v1.3
- Katib v0.12
- Kubeflow Pipelines v1.7
- KFServing v0.6.1
- Kale – JupyterLab extension
- Rok v1.4 – data management
Ok, let’s dig into the new capabilities in this release!
Support for PyTorch distributed jobs inside Notebooks
With MiniKF 1.4 you can now easily deploy a PyTorch distributed training job from inside your Notebook using the Kale JupyterLab extension.
You can learn more about how to set up distributed PyTorch training jobs in MiniKF by checking out the “Distributed training on Kubernetes made easy with Kubeflow, Kale, and PyTorch” tutorial.
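Kale takes care of creating and monitoring the underlying distributed job for you, so the code you hand it is ordinary PyTorch DistributedDataParallel code. The sketch below is a generic, self-contained example of such a training function, not the Kale API; the toy model and dataset are placeholders for your own.
```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def train():
    # The training operator injects RANK, WORLD_SIZE, MASTER_ADDR and
    # MASTER_PORT into every replica, so the default env:// init works.
    dist.init_process_group(backend="gloo")

    # Toy dataset and model; replace with your own.
    x, y = torch.randn(1024, 10), torch.randn(1024, 1)
    dataset = TensorDataset(x, y)
    sampler = DistributedSampler(dataset)  # shards the data across replicas
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = DDP(nn.Linear(10, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle consistently across replicas
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()  # gradients are all-reduced across replicas
            optimizer.step()
        if dist.get_rank() == 0:
            print(f"epoch {epoch}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    train()
```
Every worker replica runs the same script; Kale and the training operator handle the rendezvous details and replica counts for you.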
Create AutoML workflows with the click of a button
With the latest release of MiniKF, you can now create AutoML workflows with the click of a button. The process is simple (a conceptual sketch follows the list):
- Start with a dataset
- Define a task
- Discover, train, and optimize a model from inside your notebook
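Under the hood this workflow is orchestrated by Kale and Katib. The snippet below is only a conceptual stand-in using scikit-learn rather than the Kale or Katib APIs: it illustrates the same idea of searching over candidate models and hyperparameters against a single objective, which is what the generated Katib experiment automates at scale.
```python
# Conceptual stand-in for the AutoML flow above, using scikit-learn only.
# Kale + Katib automate this kind of model/hyperparameter search on Kubernetes.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# 1. Start with a dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Define a task: candidate models and their hyperparameter search spaces.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "forest": (
        RandomForestClassifier(random_state=0),
        {"n_estimators": [50, 100], "max_depth": [3, None]},
    ),
}

# 3. Discover, train, and optimize: keep the best configuration found.
best_name, best_search = None, None
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=3).fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(best_name, best_search.best_params_, best_search.score(X_test, y_test))
```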
Enhanced dashboard and user interface
In this release you’ll find an enhanced version of Kubeflow’s central dashboard along with the ability to view notebook servers across all namespaces in the Notebooks view.
New Notebook and Pipeline features
Here’s the list of new Notebook- and Pipeline-specific capabilities available in MiniKF 1.4 that should appeal to data scientists and MLOps engineers:
- Ability to expose Kubernetes metadata, resources, and specifications in the Kale SDK
- Set limits, requests, labels, and annotations, or use a nodeSelector, via the Kale SDK (see the sketch after this list)
- Environment variables can now be set in Kale steps via the Kale SDK
- The size of the Kale marshal volume can now be configured
- Use Kale with existing Docker images to build pipeline steps
- Kale now supports conditionals based on the outputs of pipeline steps
- You can now make predictions against an existing KFServing InferenceService via the Kale API
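To make the Kale SDK items above concrete, here is a hypothetical pipeline step. The `@step` and `@pipeline` decorators come from the Kale SDK, but the keyword arguments shown (`limits`, `requests`, `labels`, `annotations`, `node_selector`, `env`) are illustrative names for the new Kubernetes-facing options and may not match the exact Kale 1.4 signature, so treat this as a sketch and check the Kale SDK documentation.
```python
# Hypothetical sketch of the Kale SDK options listed above; the keyword
# argument names are illustrative and may differ from the actual signature.
from kale.sdk import pipeline, step


@step(
    name="train",
    limits={"cpu": "2", "memory": "4Gi"},        # resource limits
    requests={"cpu": "1", "memory": "2Gi"},      # resource requests
    labels={"team": "ml"},                       # pod labels
    annotations={"owner": "data-science"},       # pod annotations
    node_selector={"gpu": "true"},               # schedule on matching nodes
    env={"EPOCHS": "10"},                        # env vars inside the step
)
def train(data_path: str):
    # Your training logic goes here.
    ...


@pipeline(name="demo", experiment="kale-sdk-demo")
def ml_pipeline(data_path="/data/train.csv"):
    train(data_path)
```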
New data management capabilities
Here are the new data management, volume management, and snapshotting capabilities in MiniKF 1.4 (an illustrative volume request follows the list):
- You can now mount an existing volume to a notebook server
- MiniKF now supports ReadWriteMany (RWX) volumes
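For reference, here is what requesting a ReadWriteMany volume looks like with the official Kubernetes Python client. This is a generic illustration rather than a MiniKF-specific API; the storage class, namespace, and size below are placeholders.
```python
# Generic illustration: create a ReadWriteMany (RWX) PVC with the official
# Kubernetes Python client. Storage class and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # RWX: many pods can read and write
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        storage_class_name="rok",  # placeholder; use your cluster's class
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="kubeflow-user", body=pvc  # placeholder namespace
)
```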
New monitoring and resource management features
Here’s what’s new with regard to monitoring and resource management (an illustrative monitoring snippet follows the list):
- Ability to monitor the last activity of the Notebook servers
- A configurable way to stop idle Notebook servers automatically
- An automatic log gathering process
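To give a flavor of what last-activity monitoring builds on, the snippet below lists Notebook custom resources with the Kubernetes Python client and prints a last-activity annotation. It is an illustration, not the MiniKF implementation; the annotation key and namespace are assumptions based on upstream Kubeflow’s notebook-culling convention and may differ in your deployment.
```python
# Illustration only: read Notebook custom resources and print the annotation
# that the culling controller maintains. The annotation key and namespace are
# assumptions and may differ in your deployment.
from kubernetes import client, config

config.load_kube_config()

notebooks = client.CustomObjectsApi().list_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="kubeflow-user",  # placeholder namespace
    plural="notebooks",
)

for nb in notebooks["items"]:
    meta = nb["metadata"]
    last_activity = meta.get("annotations", {}).get(
        "notebooks.kubeflow.org/last-activity",  # assumed annotation key
        "unknown",
    )
    print(f"{meta['name']}: last activity {last_activity}")
```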
Getting started with Kubeflow via MiniKF
Now that you’ve seen what’s new in MiniKF 1.4, let’s see just how easy it is to get started. Saying it’s easy to install is one thing; showing it in action is another.
Up and running on AWS
Here are the simple steps you need to perform to get Kubeflow running on AWS via MiniKF.
- Locate MiniKF in the AWS Marketplace
- Spin up an m5.2xlarge instance
- Follow the installation progress by typing “minikf” at the command line
- Log into the Kubeflow central dashboard
Learn more in the short video below or find detailed installation instructions here.
Up and running on Google Cloud
As with AWS, it’s just as simple to get up and running on Google Cloud.
- Locate MiniKF in the GCP Marketplace
- Spin up an n1-standard-8 instance
- Follow the installation progress by typing “minikf” at the command line
- Log into the Kubeflow central dashboard
Watch the short video below to learn more or check out the detailed installation instructions here.
Special offer: deploy Kubeflow via MiniKF and Arrikto will cover the hosting costs!
As of this writing, it costs ~$0.51 per hour to run an instance of MiniKF on AWS and ~$0.57 per hour on GCP.
Through March 31, 2022, Arrikto is offering to cover your costs related to hosting a MiniKF deployment by reimbursing you with a $25 Amazon Gift Card. This should be enough to run a MiniKF instance for ~49 hrs on AWS and ~43 hrs on GCP.
Simply install Kubeflow via MiniKF and click on the Gift Card link in your central dashboard.
FREE Kubeflow and MLOps workshop: book yours today
Arrikto is now offering 60-minute virtual Kubeflow workshops for your team on the topic of your choice, covering everything from introductory and advanced material to use-case-specific topics. Book your FREE workshop today!