I also don't think Docker Containers are the right solution here.
Even though they are the most widely used deployment solution, with an estimated 20 million developers using them each month (What Is Docker? | IBM), which proves their usefulness, I do see the port access issue as a major constraint.
Docker is simply designed for pure software, not for robotics.
But I think it is very important to consider why Docker - besides this major obstacle - might be interesting for the reliable deployment of entire automated Protocols:
What do Containers solve?
Mainly:
- ensuring identical environments wherever the code is deployed, leading to truly robust deployment
- scalability
Robustness
On the host side:
Everyone has different host computers, different OSes, different OS versions, different Python distros, different Python path configurations, running via venv, conda, or some funkier virtual environment solution, … All of this affects even our purposefully slim PyLabRobot.
E.g. 1: I recently had an issue with an HID machine because I was using Anaconda on a Mac and the path couldn't find the necessary Python hid library; apparently this is not an issue when using venv or a different OS(?)
E.g. 2: to my knowledge, the order in which we install PLR via a GitHub clone matters: if I install jupyterlab first and then PLR, the PLR installation fails due to a jsonschema conflict with jupyterlab's installed jsonschema version. The other way around works fine. And this seems to be a hard constraint because Opentrons refuses to upgrade their library's jsonschema requirement(?).
Do we now have to know this every time we want to run e.g. an ELISA aP?
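To make that second example concrete, here is a rough sketch of the install-order sensitivity as I understand it (the clone URL is the real PLR repo; the venv names and exact commands are illustrative assumptions, and I haven't pinned the jsonschema versions involved):

```bash
# Hypothetical reconstruction of the install-order issue described above.
# venv names and commands are illustrative; only the install order is the point.
git clone https://github.com/PyLabRobot/pylabrobot.git

# Order A: jupyterlab first, then PLR
# -> the PLR install can fail on a conflicting jsonschema pin
python -m venv venv-a && . venv-a/bin/activate
pip install jupyterlab && pip install -e ./pylabrobot
deactivate

# Order B: PLR first, then jupyterlab -> reportedly installs cleanly
python -m venv venv-b && . venv-b/bin/activate
pip install -e ./pylabrobot && pip install jupyterlab
```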
On the aP/script side:
I've never written or seen an aP that just ran on PyLabRobot alone; there are always extra dependencies, e.g. numpy, pandas, opencv, pytorch, scikit-learn/scikit-image, SQL libraries, …
These extra dependencies need to be correctly installed and their dependency trees have to be carefully managed.
And different aPs have different dependency needs.
Do we expect this dependency management to be redone every time from scratch on every newly installed host PC for every aP?
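One container-free way to at least make each aP's dependency tree explicit would be a per-aP requirements file; this is only a hypothetical sketch, and the file name and package list are invented for illustration:

```bash
# Hypothetical per-aP requirements file; the packages listed are examples only.
cat > elisa_ap_requirements.txt <<'EOF'
pylabrobot
numpy
pandas
opencv-python
EOF

# Every newly installed host PC still has to rebuild this environment from scratch:
python -m venv elisa-ap-env && . elisa-ap-env/bin/activate
pip install -r elisa_ap_requirements.txt
```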
Both of these considerations, varying host PCs + varying dependencies, are big obstacles for deployment.
But of course, these are not new problems, and as a response containers were invented, with Docker simply being the most popular containerisation solution:
Set up your environment and all dependencies once, package them into a Docker image, and any time you need to run the application (even if the application is executed only via a single Python file) you simply spin up a Container from your image.
That container recreates your entire OS + environment + dependencies exactly as you've set them up… A very elegant solution - but the port access limitation, which is an obstacle for robotics applications (which use port-based machine communication), is a deliberately designed isolation feature of Containers.
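As a rough sketch of what that workflow could look like for an aP (the image name, script, base image, and device path are all assumptions for illustration, not an agreed PLR setup):

```bash
# Hypothetical example; image name, script, and device path are illustrative.
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
RUN pip install pylabrobot numpy pandas
COPY elisa_ap.py /app/elisa_ap.py
CMD ["python", "/app/elisa_ap.py"]
EOF

docker build -t elisa-ap .

# On a Linux host, a serial device can be passed through explicitly,
# which is the usual workaround for the isolation described above:
docker run --rm --device=/dev/ttyUSB0 elisa-ap
```

To my knowledge, the --device passthrough only helps on Linux hosts; Docker Desktop on macOS/Windows does not expose USB/serial ports to containers, which is exactly why I see the port access issue as the main blocker.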
Validation files do not address these issues; they are powerful tools after the fact, i.e. they solve the issue of how to validate an aP, but they do not help with the setup/deployment of the aP.
Scalability
One example: suppose you have multiple workcells (identical or almost identical) and you want to run many different aPs.
But you need to constantly adjust your throughput. A simple example: you're a cloud lab and you're hired to quickly scale up your diagnostics aP.
Ideally you'd just have one functional diagnostics aP image ready for this purpose and send it to all available workcells for execution. This "multi-tenancy" deployment would ensure that each workcell performs the same operations in less than an hour, even if the local control PCs for each workcell run completely different OSs, hardware, …
The alternative would be to install the diagnostics aP virtual environment on each control PC separately (?).
But then you might encounter issues with specific setups which interfere with your new installation, and you'd have to spend time debugging why the installation of dependency X worked on workcell 4 but not on workcell 9.
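Just to illustrate what the image-based alternative could look like (hostnames, registry address, tag, and device path are all made up; this assumes Linux control PCs with Docker installed and SSH access):

```bash
# Hypothetical rollout of one prebuilt diagnostics aP image to several workcells.
# Hostnames, registry address, tag, and device path are all illustrative.
for host in workcell-01 workcell-02 workcell-09; do
  ssh "$host" \
    "docker pull registry.example.org/diagnostics-ap:1.0 && \
     docker run --rm --device=/dev/ttyUSB0 registry.example.org/diagnostics-ap:1.0"
done
```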
This matters because we all want to build a PLR Protocol Library in which every added aP can be used robustly by anyone with the necessary machines.
We don't have to find solutions instantly, but I'd recommend keeping this in mind during development.