KServe with MLflow


Model Preparation

MLServer Local Test

python3 -m pip install mlserver mlserver-mlflow
.
├── model/
│   ├── conda.yaml
│   ├── MLmodel
│   ├── model.pkl
│   ├── python_env.yaml
│   └── requirements.txt
├── model-settings.json
└── ...
model-settings.json
{
  "name": "mlserver-mlflow-test",
  "implementation": "mlserver_mlflow.MLflowRuntime",
  "parameters": {
    "uri": "./model"
  }
}
mlserver start .
  • Open :8080/v2/docs to view the API documentation.
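Once the server is up, it speaks the V2 (Open Inference Protocol) REST API. The sketch below builds a minimal V2 inference request body; the tensor name `input`, the `FP64` datatype, and the two-feature example rows are assumptions — adjust them to match your model's actual signature (visible at :8080/v2/docs).

```python
import json

def build_v2_request(rows):
    """Build a V2 (Open Inference Protocol) inference request body."""
    return {
        "inputs": [
            {
                "name": "input",  # assumed tensor name; check your model signature
                "shape": [len(rows), len(rows[0])],
                "datatype": "FP64",
                "data": rows,
            }
        ]
    }

payload = build_v2_request([[1.0, 2.0], [3.0, 4.0]])
print(json.dumps(payload))

# To send it to the local MLServer instance (assuming it is running on :8080):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v2/models/mlserver-mlflow-test/infer",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The model name in the URL (`mlserver-mlflow-test`) comes from the `name` field of model-settings.json.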

InferenceService

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <name>
  namespace: <namespace>
spec:
  predictor:
    model:
      modelFormat:
        name: mlflow
      protocolVersion: v2
      storageUri: <modelUri>
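A rough deployment sketch, assuming the manifest above is saved as `isvc.yaml` with `<name>`, `<namespace>`, and `<modelUri>` filled in, and that your cluster exposes KServe through an ingress gateway (the `INGRESS_HOST`/`INGRESS_PORT` variables and `request.json` body are assumptions specific to your environment):

```shell
# Apply the InferenceService manifest.
kubectl apply -f isvc.yaml

# Wait until the service reports READY=True.
kubectl get inferenceservice <name> -n <namespace>

# Resolve the service hostname from the reported status URL.
SERVICE_HOSTNAME=$(kubectl get inferenceservice <name> -n <namespace> \
  -o jsonpath='{.status.url}' | cut -d/ -f3)

# Send a V2 inference request through the ingress gateway.
curl -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" \
  "http://${INGRESS_HOST}:${INGRESS_PORT}/v2/models/<name>/infer" \
  -d @request.json
```

Because `protocolVersion: v2` is set, the deployed service accepts the same request format as the local MLServer test.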