Building an End-to-End ML Deployment Pipeline with MLflow, FastAPI, and Docker
Deploying machine learning models involves more than training: you also need to track experiments, version models, serve predictions, and monitor the result. In this post, I'll walk you through how I built a production-ready ML pipeline using:
- MLflow for experiment tracking and model registry
- FastAPI for serving models via REST API
- MinIO for artifact storage (S3-compatible)
- Docker Compose for orchestration
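To make the wiring concrete, here is a minimal sketch of how such a stack might be composed. The service names, image tags, ports, and credentials below are illustrative assumptions, not the repo's actual configuration:

```yaml
version: "3.8"
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minio        # placeholder credentials
      MINIO_ROOT_PASSWORD: minio123
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console

  mlflow:
    image: ghcr.io/mlflow/mlflow
    command: >
      mlflow server --host 0.0.0.0 --port 5000
      --artifacts-destination s3://mlflow
    environment:
      MLFLOW_S3_ENDPOINT_URL: http://minio:9000
      AWS_ACCESS_KEY_ID: minio
      AWS_SECRET_ACCESS_KEY: minio123
    ports:
      - "5000:5000"
    depends_on:
      - minio

  api:
    build: ./api      # hypothetical directory holding the FastAPI service
    environment:
      MLFLOW_TRACKING_URI: http://mlflow:5000
    ports:
      - "8000:8000"
    depends_on:
      - mlflow
```

Pointing MLflow's artifact store at MinIO via `MLFLOW_S3_ENDPOINT_URL` is what makes the S3-compatible storage work without touching AWS.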
👉 Full source code:
🔗 github.com/liviaerxin/mlops-fastapi-mlflow-minio