@hazlamshamin, making this type of animation on its own for a showcase is fairly simple, but upgrading the 2D PLR Visualizer to perform it robustly is tricky - that’s why I’m trying to understand exactly what you’d like to see and what benefit it provides for PLR runtime usage.
Technical Challenge
The main problem I see with these “liquid transfer indicators” is that they require explicit knowledge of (1) source containers and (2) destination containers - in memory!
i.e. the visualiser would need to store all aspiration containers, wait until a dispense occurs, and only then, having the memory of both, draw the “liquid transfer indicators” (i.e. arrows).
But after that we’d also have to count how many “timepoints” have passed since each indicator arrow was generated, decreasing its opacity until it reaches 0 and the arrow is removed.
This does not work without the “timepoint stepper” you suggested, because I don’t think the Visualizer keeps a record of timepoints like this yet.
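To make that bookkeeping concrete, here is a minimal sketch of what the Visualizer would have to track; `TransferArrow`, `ArrowTracker` and the tick-based fade-out are hypothetical names, not existing PLR API:

```python
from dataclasses import dataclass, field

@dataclass
class TransferArrow:
    source: str          # e.g. "plate_A1"
    destination: str     # e.g. "plate_B1"
    opacity: float = 1.0

@dataclass
class ArrowTracker:
    # channel index -> container aspirated from, awaiting a dispense
    pending_aspirations: dict = field(default_factory=dict)
    arrows: list = field(default_factory=list)

    def on_aspirate(self, channel: int, source: str) -> None:
        self.pending_aspirations[channel] = source

    def on_dispense(self, channel: int, destination: str) -> None:
        source = self.pending_aspirations.get(channel)
        if source is not None:
            self.arrows.append(TransferArrow(source, destination))

    def on_timepoint(self, fade_per_step: float = 0.2) -> None:
        # decrease opacity each "timepoint" and drop fully faded arrows
        for arrow in self.arrows:
            arrow.opacity -= fade_per_step
        self.arrows = [a for a in self.arrows if a.opacity > 0]
```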
There is additional complexity to figure out: e.g. dynamic multi-dispensing, one of the biggest benefits of using automated liquid handlers.
How would we want the “liquid transfer indicators” to look/feel (UX) to the PLR user and programmer if there is one aspiration container → multiple dispense containers?
And how would the Visualizer know (implementation) this multi-dispense relationship?
The Visualizer would have to “listen” after every aspiration for subsequent dispenses… for all pipettes/channels/heads… managing state for each of them - that is quite a challenge.
If we wanted something much simpler that is still visual we could do something like VENUS:
if my memory of my long-gone VENUS days doesn’t fail me, VENUS just highlights the containers in which something is happening when it is happening, i.e. some colour change of the wells that are part of a command.
This would be quite simple in comparison, and could take many forms: cyan discoloration for aspiration (?), magenta discoloration for dispense (?), or overlaying a ring on top of the wells in action (that way we’re not interfering with the existing PLR LiquidTracker, …).
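As a rough sketch of that simpler option (the `highlight` hook and the colour choices are placeholders, not PLR functions):

```python
# colour per action, as suggested above (values are placeholders)
ACTION_COLOURS = {"aspirate": "cyan", "dispense": "magenta"}

def highlight(wells: list[str], action: str) -> None:
    """Overlay a coloured ring on the wells taking part in the current command."""
    colour = ACTION_COLOURS[action]
    for well in wells:
        print(f"ring overlay on {well} in {colour}")  # stand-in for real rendering

highlight(["A1", "A2"], "aspirate")  # highlight during the command, clear afterwards
```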
User Experience Gain
My question with this is “What benefit do we want for PLR programming, and can we achieve that benefit in a simpler, faster way?”
I love making these visualisations for the various purposes I mentioned (explaining complex actions, sanity checks, DoE, …).
But during runtime I have found they are not useful to me:
For truly complex commands they don’t tell me what to look out for when at the machine (they get so detailed and complex that I cannot quickly verify them from well-plate images covered in arrows), while for simple commands they are not needed.
Instead, I just print out a formatted table of [[source_container → destination_container], … ] at runtime
That way multi-channel/pipette action is nicely formatted: I spot-check a couple of the liquid transfers, displayed in a human-readable format in the table that appears, to confirm the machine performs what I expect, and I don’t get overwhelmed by transfer indicator arrows appearing and disappearing in an image.
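For illustration, a minimal version of such a table could look like this (the transfer pairs are made up; in practice they come from the protocol code itself):

```python
transfers = [
    ("source_plate A1", "dest_plate A1"),
    ("source_plate A2", "dest_plate B1"),
]

print(f"{'source':<20}   {'destination':<20}")
print("-" * 43)
for src, dst in transfers:
    print(f"{src:<20} → {dst:<20}")
```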
Thanks for continuing the discussion! Here are my thoughts and what I see could work.
Let me start with my observations on its significance.
> Instead, I just print out a formatted table of [[source_container → destination_container], … ] at runtime
That’s nice to have too, but arguably we can also use the Visualizer at the end of the run (simulated backend or real hardware backend) to visualize the pipette transfers from which containers/wells to which, instead of the table. Still, this only makes the spatial dimension easier to see (i.e. easier to imagine/visualise the source/destination in terms of space). What I propose, however, is to enable users and developers to better inspect in terms of time.
If “runtime” refers to running on the hardware, I agree it is not as useful. But my main reason for proposing this from the start is simulation all the way, when the protocol is not yet finalised or still in a draft phase. It is helpful at least to me (and hopefully to more people). My argument is that enabling the visualization of state changes through time allows better inspection of what’s biologically important in a protocol than the simpler formatted table, which gives no information about time (e.g. protocols where we really need to know the sequence/order is followed as needed, or that a crucial component A is added before B, or that a mixture C is prepared before aspirating from it to make mixture CA).
Next is the how, including the challenges and what I think could work.
Yes, and right now I think we have no way of doing it without storing it in memory. But I think we can do it efficiently, and this leads to the “timepoint stepper” and how we want to implement it (more on this below).
I would like to propose how we can tackle this challenge as well, by introducing a “StateKeeper” (I was thinking about a TimeKeeper because that sounds cool, but I think what we actually care about is the State).
We have no choice but to store the states, instead of just passing/emitting them from resource callback to server to browser. To enable arguably more efficient in-memory storage of the states during runtime, I suggest we store the initial state of the deck/machines/labware/children AND the StateDeltas (i.e. the state changes as the steps progress). We can also use Frames here, where one can imagine that the initial frame holds the initial state and each next frame is the previous frame + the StateDelta between them. Thus, all states can be rebuilt only when needed (as they are sequential anyway).
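A minimal sketch of this idea, assuming a simple dict-based state (`StateKeeper`, `record` and `state_at` are hypothetical names):

```python
import copy

class StateKeeper:
    def __init__(self, initial_state: dict):
        self.initial_state = copy.deepcopy(initial_state)
        self.deltas: list[dict] = []  # one StateDelta per frame transition

    def record(self, delta: dict) -> None:
        self.deltas.append(delta)

    def state_at(self, frame: int) -> dict:
        # rebuild the state at a given frame by replaying deltas sequentially
        state = copy.deepcopy(self.initial_state)
        for delta in self.deltas[:frame]:
            state.update(delta)
        return state

keeper = StateKeeper({"A1": 25.0})
keeper.record({"A1": 10.0})  # aspirated 15 µL from A1
assert keeper.state_at(0) == {"A1": 25.0}
assert keeper.state_at(1) == {"A1": 10.0}
```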
Let me illustrate how this fits the current Visualizer. The current Visualizer already visualizes state once it has happened (e.g. whether tip spots in a tip rack are available/gone, or the volume in a well). We could use a Frame here, specifically a StateFrame. Imagine that at Frame 1 well A1 has 25 µL, while after aspirating, at Frame 2, it has 10 µL.
We can also expand Frame with MotionFrame, i.e. a frame that captures what will happen between two immediate StateFrames. Using the same case of well A1: after StateFrame 1, MotionFrame 1 can illustrate that liquid will be aspirated from A1 by highlighting the well, then StateFrame 2 shows that the liquid has been aspirated. Dispensing can also use a StateFrame and a MotionFrame. Any atomic command of a liquid handler (and perhaps of other machines, e.g. a thermocycler) can benefit from this two-frame distinction without interfering with what already works in PLR (see more below).
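One way the two frame kinds could be modelled (a hypothetical sketch: a StateFrame is a snapshot, a MotionFrame describes what is about to happen between two StateFrames):

```python
from dataclasses import dataclass

@dataclass
class StateFrame:
    index: int
    volumes: dict[str, float]  # e.g. {"A1": 25.0}

@dataclass
class MotionFrame:
    index: int
    action: str                # "pickup_tip" | "aspirate" | "dispense"
    targets: list[str]         # wells/tip spots to highlight

frames = [
    StateFrame(0, {"A1": 25.0}),
    MotionFrame(0, "aspirate", ["A1"]),  # 15 µL will be aspirated from A1
    StateFrame(1, {"A1": 10.0}),
]
```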
With these, we could solve the challenge above, as MotionFrames handle the transition between a previous state and the next state in a non-blocking way (we define a MotionFrame as what will happen). The visualization of these two frame types can also be kept separate so they won’t interfere with each other. A MotionFrame for the LH can also be used for tip pickup (e.g. just highlight which tip spot in which tip rack is about to be used), and the next StateFrame then shows that the tip spot is gone.
In terms of the complexity here:
That leaves us with the question of how a StateFrame is defined for this. If it’s aspirate A1 → dispense B1 → aspirate A1 → dispense B2 …, we can still apply it just as the hardware run does it, so that the StateFrame honestly reports the volume of each well at each frame, exactly as in reality. The MotionFrame just visualizes what will happen, similar to a normal aspirate-dispense:
will pick up tip: highlight which tip spot in which tip rack
will aspirate: highlight the source (e.g. well A1 of plate A, using standardised colour A)
will dispense: highlight the destination with standardised colour B, with an arrow shaft connecting source and destination (interestingly, when dispensing during mixing, which is aspirate and dispense in place, the arrow can be head-only, without the shaft)
By defining Frames as this honest, distinct and faithful mirror of reality, we can extend the approach to complex cases (e.g. multi-dispensing without multi-aspirating, multichannel, and more to think of); see the sketch below.
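A hedged sketch of how atomic commands could map onto the MotionFrames listed above (the dict format and colour labels are illustrative only):

```python
def motion_frames_for(command: str, sources: list[str], destinations: list[str]) -> list[dict]:
    if command == "pickup_tip":
        return [{"highlight": sources}]  # tip spots about to be used
    if command == "aspirate":
        return [{"highlight": sources, "colour": "A"}]
    if command == "dispense":
        # arrow shaft connecting source and destination
        return [{"highlight": destinations, "colour": "B", "arrow": "full"}]
    if command == "mix":
        # in-place aspirate→dispense: head-only arrow, no shaft
        return [{"highlight": sources, "colour": "A"},
                {"highlight": sources, "colour": "B", "arrow": "head_only"}]
    raise ValueError(f"unknown command: {command}")
```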
Overall, I want this to work, and for it to work I suggest a separate StateKeeper (or another name) that lives outside the Visualizer but can be used by it and for other purposes (like the playback I mentioned in the previous thread). This StateKeeper would handle Frames (StateFrame and MotionFrame) in a (hopefully) efficient and non-blocking manner during runtime (hardware and chatterbox). Importantly, it handles state changes along the time dimension, enabling visualization over time as well as playback to move to the past and future of a run (and perhaps proper pausing of a run at a StateFrame, using playback, then resuming [like ClockWork in Danny Phantom]).
Please comment more on the feasibility of this. I don’t understand PLR as fully as you guys, so I still need pushback on how this might disrupt current PLR, so that we can implement it properly, if we do.
but it’s not my job to tell people what works best for them. i would say it’s within scope, so if we can do it and people want it i think we should. let’s discuss if this is technically feasible
exactly, 100%. and also yes: this is entirely downstream from state tracking.
on that: diffs are currently how the visualizer gets updates, and it would simply be a matter of adding a “backward” option for state updates. that is easy. the existing code should accommodate rendering. as for keeping the diffs instead of discarding them, this would probably best be done in the python layer (Visualizer class) so we aren’t dependent on the browser being open/connected. on opening of the browser, it will pull the entire history of states.
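a sketch of what keeping the diffs and stepping backward could look like, assuming each diff stores both old and new values so it can be inverted (not the visualizer's actual diff format):

```python
history: list[dict] = []  # each diff: {key: (old_value, new_value)}

def apply(state: dict, diff: dict, backward: bool = False) -> None:
    for key, (old, new) in diff.items():
        state[key] = old if backward else new

state = {"A1": 25.0}
diff = {"A1": (25.0, 10.0)}  # aspirate 15 µL from A1
history.append(diff)

apply(state, diff)                  # step forward:  A1 == 10.0
apply(state, diff, backward=True)   # step backward: A1 == 25.0
```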
i think this requires tracking the state of channels on the LH, which we don’t currently do in the visualizer. easy enough to add through state callbacks.
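per-channel tracking via a state callback could look roughly like this (the callback signature is made up, not the visualizer's real hook):

```python
channel_state: dict[int, str | None] = {}  # channel -> container last aspirated from

def on_channel_event(channel: int, event: str, container: str | None = None) -> None:
    if event == "aspirate":
        channel_state[channel] = container
    elif event == "dispense":
        channel_state[channel] = None  # transfer complete, clear the channel

on_channel_event(0, "aspirate", "plate_A1")
on_channel_event(0, "dispense", "plate_B1")
```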
I really like the idea of Frames, just like in animation creation!
Could you please elaborate on what the variables represent here?
This is a really interesting point!
But I don’t yet understand: how is state tracked when actions haven’t happened yet?
e.g.: PLR is imo so powerful during development because of its interactive, fast nature, i.e. in a Jupyter Notebook you execute cell by cell in a sequential manner.
But if an aspiration has taken place and one or multiple dispenses have not, how would the transfer indicator arrow be generated? The information about the dispense is not yet known at that point in time.
This is a very interesting and crucial observation. You’re right. If we truly want this to be available during runtime, instead of only after the run has finished, I think what we can do is tie what will happen to the command about to be executed. That way, the information about the dispense is already known at that point in time: the software is preparing the dispense even though the hardware may not have completed it yet.
To be clear, perhaps we will not be able to generate the transfer indicator during the aspiration step. Rather, as mentioned earlier, we will break the commands down to match reality: a transfer is aspirate then dispense, while mixing is a cycle of aspirate→dispense in place.
In a Jupyter notebook, for example, in a cell that aspirates we know it will aspirate, so we only illustrate that. Meanwhile, for a cell that dispenses, we might know the aspiration history from the previous state; if not, we can fall back to a safe default arrow pointing to the destination only.
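A sketch of that safe default (all names hypothetical): draw the full source→destination arrow only when the aspiration history is known, otherwise just an arrow head at the destination:

```python
def dispense_indicator(destination: str, last_aspiration: str | None) -> dict:
    if last_aspiration is not None:
        return {"kind": "arrow", "source": last_aspiration, "destination": destination}
    # no aspiration recorded in state: point at the destination only
    return {"kind": "arrow_head", "destination": destination}
```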
Examples: pick up tip, aspirate, dispense, mixing (aspirate→dispense in place) (MotionFrames only):