Thanks for the nice tutorial! If you now want to really step up your game, you can start the local MLflow Inference Server as part of the startup behavior of the test itself. That way, the test no longer depends on whether someone has started the server manually beforehand. It could even pick a random free port! For the final bit of supercharging, you can run the whole test suite in a Docker container to make sure it runs on any system :)
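For reference, here's roughly what that could look like as a session-scoped pytest fixture. This is a minimal sketch, not the tutorial's code: it assumes pytest and requests are installed, that the model lives at `models/my_model` (a hypothetical path), and that your MLflow version supports `--env-manager local` and exposes the scoring server's `/ping` health endpoint:

```python
import socket
import subprocess
import time

import pytest
import requests


def _free_port() -> int:
    # Ask the OS for an unused port by binding to port 0.
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


@pytest.fixture(scope="session")
def inference_server():
    port = _free_port()
    # "models/my_model" is a placeholder; point this at your saved model.
    proc = subprocess.Popen(
        ["mlflow", "models", "serve", "-m", "models/my_model",
         "-p", str(port), "--env-manager", "local"]
    )
    url = f"http://127.0.0.1:{port}"
    # Poll the health endpoint until the server answers (give up after ~30 s).
    for _ in range(30):
        try:
            requests.get(f"{url}/ping", timeout=1)
            break
        except requests.exceptions.RequestException:
            time.sleep(1)
    else:
        proc.terminate()
        pytest.fail("MLflow inference server did not start in time")
    yield url
    # Tear the server down once the test session is over.
    proc.terminate()
    proc.wait()


def test_prediction(inference_server):
    # Hypothetical payload; adjust the shape to your model's signature.
    resp = requests.post(
        f"{inference_server}/invocations",
        json={"inputs": [[1.0, 2.0]]},
        headers={"Content-Type": "application/json"},
    )
    assert resp.status_code == 200
```

Because the fixture is session-scoped, the server starts once for the whole run, and the random port means parallel CI jobs won't collide.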
Good suggestion! It definitely makes the setup more robust. Docker might be a bit of overkill for what we are trying to achieve, though :-)