Discussion about this post

Thomas Woudsma:

Thanks for the nice tutorial! If you want to really step up your game, you can start the local MLflow inference server as part of the test's setup. That way, the test no longer depends on whether the server has already been started, and it can even use a random port. For the final supercharge, run the whole test inside a Docker container to make sure it runs on any system :)
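A minimal sketch of what the comment suggests, assuming the server is launched with the `mlflow models serve` CLI: a helper that asks the OS for a free port, and a context manager that starts the server for the duration of a test and tears it down afterwards. The `model_uri` value and the exact CLI flags (e.g. how the environment manager is selected) are assumptions that vary by MLflow version and setup.

```python
import socket
import subprocess
import time
import urllib.request
from contextlib import contextmanager


def find_free_port() -> int:
    """Ask the OS for a currently unused TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


@contextmanager
def mlflow_server(model_uri: str, timeout: float = 60.0):
    """Run a local MLflow inference server while the block is active.

    `model_uri` is whatever URI your model was logged under
    (e.g. "models:/my-model/1"); flags here are illustrative and
    may need adjusting for your MLflow version.
    """
    port = find_free_port()
    proc = subprocess.Popen(
        ["mlflow", "models", "serve",
         "-m", model_uri,
         "--host", "127.0.0.1",
         "--port", str(port)],
    )
    try:
        # Poll the health endpoint until the server answers.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                urllib.request.urlopen(
                    f"http://127.0.0.1:{port}/ping", timeout=1)
                break
            except OSError:
                time.sleep(0.5)
        else:
            raise TimeoutError("MLflow server did not start in time")
        yield f"http://127.0.0.1:{port}"
    finally:
        proc.terminate()
        proc.wait()
```

A test would then wrap its requests in `with mlflow_server("models:/my-model/1") as base_url: ...`, so the server lifecycle is owned by the test itself rather than assumed to exist.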
