Tools for Serving

Serving ML models in Kubeflow

Overview

Model serving overview

Seldon Core Serving

Model serving using Seldon Core
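
A deployed SeldonDeployment exposes a REST prediction endpoint. As a minimal client sketch (the ingress address, namespace `seldon`, and deployment name `iris-model` are assumptions), a request using Seldon Core's v1 ndarray payload looks like this:

```python
import requests

# Assumed ingress address, namespace, and deployment name; adjust for your cluster.
URL = "http://localhost:8003/seldon/seldon/iris-model/api/v1.0/predictions"

# Seldon Core v1 protocol: feature rows go under data.ndarray.
payload = {"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```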

BentoML

Model serving with BentoML
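
Once a BentoML service is running as an HTTP server, it is called like any REST endpoint. A rough sketch, assuming a local server on port 3000 and an endpoint named `classify` (both depend on your service definition):

```python
import requests

# Assumed local BentoML HTTP server and endpoint name; the route and JSON schema
# are determined by the service definition in your bento.
URL = "http://localhost:3000/classify"

payload = {"input_data": [[5.1, 3.5, 1.4, 0.2]]}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```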

MLRun Serving Pipelines

Real-time serving pipelines and model monitoring with MLRun and Nuclio
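
An MLRun serving graph is deployed as a Nuclio function that serves HTTP requests. A rough sketch of invoking one (the function URL, route, and model name below are all assumptions that depend on how the graph is deployed):

```python
import requests

# Hypothetical Nuclio function URL and model route; both depend on how the
# MLRun serving graph is deployed, so adjust them for your environment.
URL = "http://localhost:8080/v2/models/my-model/infer"

payload = {"inputs": [[5.1, 3.5, 1.4, 0.2]]}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```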

NVIDIA Triton Inference Server

Model serving with Triton Inference Server
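
A minimal inference sketch using Triton's Python HTTP client, assuming a model named `simple_model` with one FP32 input tensor `INPUT0` and one output tensor `OUTPUT0`:

```python
import numpy as np
import tritonclient.http as httpclient

# Assumed Triton endpoint, model name, and tensor names; adjust to match your
# model repository configuration.
client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 4).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="simple_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```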

TensorFlow Serving

Serving TensorFlow models
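
TensorFlow Serving exposes a REST predict API (port 8501 by default). A minimal request sketch, using the model name `half_plus_two` as a placeholder:

```python
import requests

# Assumed host and model name; the REST API path is /v1/models/<name>:predict.
URL = "http://localhost:8501/v1/models/half_plus_two:predict"

# One entry in "instances" per prediction.
payload = {"instances": [1.0, 2.0, 5.0]}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```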

TensorFlow Batch Prediction

See the Kubeflow v0.6 docs for batch prediction with TensorFlow models