I was wondering what execution mode people have been using and found useful for execution by end users?
Most (if not all) of PLR is run through Jupyter Notebooks.
However, even opening a notebook and executing individual cells requires some level of coding knowledge.
Many end users might not have these skills, so I was wondering whether anyone has already tried different execution modes and found any particularly useful?
For example, Opentrons supports executing protocols on the OT-2 via Jupyter notebook, .py file drop-in into the GUI, and JSON drop-in into the GUI.
This has turned out to be very versatile, though I would like to avoid a PLR GUI (at least for now).
A direct .py file execution would be very handy though, ideally directly via the command line.
However, in that case it would be useful to establish and document some best practices for converting PLR Jupyter notebooks into .py files without having to modify too much (especially with regard to PLR's I/O design and the asynchronous nature of its functions).
We also use Jupyter notebooks almost exclusively for executing protocols, at least while we are still in the stage where robots often need to be supervised or we want to iterate on the protocol every time it runs. In a Jupyter notebook, the entire environment persists through the session.
When a part is working, I like to copy it over to a Python file/module. These files serve as importable functions that are then used in a Jupyter notebook or, eventually, in another Python file. With the autoreload extension, you still get the high iteration speed.
If you have a library of notebooks, it could be worth seeing if you can restructure your code into modules that are then imported by every notebook. (Some of this code may live in PLR or another shared lib, some will always be internal.) Then it should be easy to call this code from both notebooks and plain .py files.
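As a sketch of that layout (the module, function, and parameter names here are all hypothetical, not real PLR API), a shared module might look like:

```python
# protocols/transfer.py -- hypothetical shared module, importable from
# notebooks and from plain .py scripts alike
import asyncio


async def transfer_plate(volume_ul: float) -> str:
    """Toy stand-in for a protocol step; a real version would take a
    PLR LiquidHandler and await its aspirate/dispense calls."""
    await asyncio.sleep(0)  # placeholder for real async robot I/O
    return f"transferred {volume_ul} uL"


if __name__ == "__main__":
    # Quick manual check when the module is run directly
    print(asyncio.run(transfer_plate(50.0)))
```

In a notebook you would then do `%load_ext autoreload` and `%autoreload 2`, import `transfer_plate`, and call `await transfer_plate(50.0)` in a cell (IPython supports top-level await), so edits to the module are picked up without restarting the kernel.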
In the case where you want to convert a Jupyter notebook to a Python file, you can use something like nbconvert. You would run into the issue that async functions are suddenly called from the top level, which is not supported in a plain .py file. For that, you can make this simple change:
```python
async def main():
    <rest of program>

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
Chat created a simple program to make this conversion automatically: convert.py · GitHub
Hey @CamilloMoschner
This is not perfectly related to your question, but worth saying. My biggest challenge currently with PLR is the lack of documentation.
When I started using Opentrons, I quickly found a PDF explaining all the commands. That PDF is not online anymore, but the information can be found here: Opentrons API — Opentrons API v2.0
With the PDF I was able to write my first programs and get experience from day 1. With PLR there are a few roadblocks that can be challenging for non-programmers or people like me with limited programming experience.
The connection to the robot using Zadig is something that I would not have figured out on my own.
PLR assumes a good knowledge of GitHub. I initially pip-installed PLR but could not get that to work. I think this is everyone's first step, and many give up when they have problems getting it to work. I learned that I needed to clone the GitHub repository, and that helped a lot. But there is still a learning curve in getting updates from PLR into your forked repository. If possible, it would help if the pip package were updated more frequently so a user could start with that.
Having an overview of the commands, such as the Opentrons PDF, would be a great start. Trying to read the code itself has been very challenging for me.
The user guides (Jupyter notebooks) are good and really helped me a lot with getting started. It would be a huge boost if these worked out of the box. Unfortunately, there are small pitfalls that made the scripts hard to run, such as using an EVO and not a STAR.
Simulation: having a good simulator is a great way to make people feel competent with the software. The current PLR simulator is superior to the opentrons_simulate function, but opentrons_simulate is less buggy. It could be good to focus on getting a great simulator working.
I really like PLR and I am not trying to critique the project. I want the project to go well, and I am very passionate about the vision of having one Python interface to rule all robots.
stripe has really good documentation where they adapt the tutorials based on the user’s configuration. I want to have that too: the user selects their robot, and we automatically prefill all the deck and backend parameters. maybe a good christmas project.
are you talking about the chatterbox or the visualizer? (both need improvement)
you can critique the project all you like. this is a pro free speech forum, as long as it’s at least PLR related. hearing critique and criticism is super useful, i love it!
This is a great topic, super interesting as more lab scientists who are not yet coding- or automation-savvy (hello, that’s me!) want to use PLR. @VilhelmKM’s comments hit on the need to lower the barrier to entry. The User Guide is very detailed and will be where I start next week to install and run PLR (once we finish installing the robot and the Raspberry Pi arrives). @VilhelmKM maybe I’ll bug you or others on this forum if I run into early roadblocks.
wearing the grumpy admin hat: after this, let’s keep this thread to Camillo’s original topic of how people run plr notebooks/code. we can have separate threads for docs / simulator / starting / etc. this keeps everything searchable (on google you see “thread name”) thx all