Discussion about this post

Minaz

You've highlighted a common disconnect that many teams encounter when implementing MLOps, specifically the challenge of coordinating model delivery and deployment. Without a coordinated handoff between training and deployment, even the best models struggle to reach production quickly and generate business value.

I'm curious: What do you think is the most effective "bridge"? Is it a unified CI/CD process? Or perhaps adjustments to the team's organizational structure, such as forming cross-functional teams? I look forward to hearing your thoughts!

Thomas Woudsma

Thanks for the nice tutorial! If you now want to really step up your game, you can start the local MLflow inference server as part of the test's startup. That way, the test doesn't depend on a server already being up, and it could even use a random port. For the final supercharging, you can run the whole test suite in a Docker container to make sure it runs on any system :)
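A minimal sketch of that setup, assuming pytest and the requests library are available and a model is registered under the hypothetical URI models:/my-model/1 (the /ping and /invocations endpoints are part of MLflow's standard scoring server):

```python
# Sketch: launch a local MLflow inference server on a random free port
# for the duration of the test session, then tear it down afterwards.
import socket
import subprocess
import time

import pytest
import requests


def _free_port() -> int:
    """Ask the OS for an unused port so parallel test runs don't collide."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


@pytest.fixture(scope="session")
def inference_server():
    """Start a local MLflow inference server as part of test startup."""
    port = _free_port()
    proc = subprocess.Popen(
        [
            "mlflow", "models", "serve",
            "-m", "models:/my-model/1",  # hypothetical model URI
            "--port", str(port),
            "--env-manager", "local",    # reuse the current environment
        ]
    )
    base_url = f"http://127.0.0.1:{port}"
    # Poll the scoring server's health endpoint until it is ready.
    for _ in range(60):
        try:
            if requests.get(f"{base_url}/ping", timeout=1).status_code == 200:
                break
        except requests.ConnectionError:
            pass
        time.sleep(1)
    else:
        proc.terminate()
        pytest.fail("MLflow inference server did not become ready in time")
    yield base_url
    proc.terminate()
    proc.wait()


def test_prediction(inference_server):
    # MLflow's scoring server accepts JSON with a "dataframe_split" payload;
    # the column name "x" is specific to the hypothetical model above.
    payload = {"dataframe_split": {"columns": ["x"], "data": [[1.0]]}}
    resp = requests.post(f"{inference_server}/invocations", json=payload)
    assert resp.status_code == 200
```

Running the same suite inside a Docker container, as suggested, then also pins the Python and MLflow versions, so the test behaves identically on any machine.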
