You are viewing documentation for Kubeflow 1.3

This is a static snapshot from the time of the Kubeflow 1.3 release.
For up-to-date information, see the latest version.

Tools for Serving

Serving ML models in Kubeflow

Overview

Model serving overview

Seldon Core Serving

Model serving with Seldon Core

BentoML

Model serving with BentoML

MLRun Serving Pipelines

Real-time Serving Pipelines and Model Monitoring with MLRun and Nuclio

NVIDIA Triton Inference Server

Model serving with Triton Inference Server

TensorFlow Serving

Serving TensorFlow models

TensorFlow Batch Prediction

See the Kubeflow v0.6 docs for batch prediction with TensorFlow models