What do you wish PLR did that it cannot currently do?

this seems useful for many scenarios, not least because it implies the need for a log of state changes

having deck.undo() or deck.redo() that steps back/forward along discrete resource location states would be useful even in an interactive notebook. a very common pattern: you are teaching a hotel movement that fails on the first try, you put the plate back in its original spot, then spend a laborious 10-20 seconds working out exactly which resource you need to assign where to get back to the original location and rotation

at the cost of some complexity, that could be instant for the plr user
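To make the idea concrete, here is a minimal sketch of what a state log with undo/redo could look like. None of these names (`DeckStateLog`, `record_move`) exist in PLR today; this is purely illustrative of the stack-based bookkeeping involved:

```python
# Hypothetical sketch only: a move log with undo/redo for deck state changes.
# These class/method names are invented, not part of the PLR API.

class DeckStateLog:
    """Records (resource, old_location, new_location) moves so a user
    can step back/forward through discrete deck states."""

    def __init__(self):
        self._undo_stack = []  # moves that have been applied
        self._redo_stack = []  # moves that were undone

    def record_move(self, resource, old_location, new_location):
        self._undo_stack.append((resource, old_location, new_location))
        self._redo_stack.clear()  # a fresh move invalidates the redo history

    def undo(self):
        """Return the reverse move to apply, or None if nothing to undo."""
        if not self._undo_stack:
            return None
        resource, old, new = self._undo_stack.pop()
        self._redo_stack.append((resource, old, new))
        return (resource, new, old)  # i.e. move the resource back to `old`

    def redo(self):
        """Return the move to re-apply, or None if nothing to redo."""
        if not self._redo_stack:
            return None
        resource, old, new = self._redo_stack.pop()
        self._undo_stack.append((resource, old, new))
        return (resource, old, new)


log = DeckStateLog()
log.record_move("plate_01", "hotel_slot_3", "carrier_site_2")
print(log.undo())  # → ('plate_01', 'carrier_site_2', 'hotel_slot_3')
print(log.redo())  # → ('plate_01', 'hotel_slot_3', 'carrier_site_2')
```

The complexity lives entirely in the library; the user-facing part is just two calls, which is what makes the "instant" recovery in a notebook plausible.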

3 Likes

i kinda get it but not 100%, in terms of what the function does at a high level. i find that the chatterbox, like u said, helps us build protocols without having a real machine. yes, but since the chatterbox logs the atomic commands, it's hard to inspect and see what's going on at the high level, especially when it involves spatial movement like in LH (i agree with what you said that LH and the resource model are the biggest targets for simulation).

so do you mean, to get what i feel is needed (simulation with visualization), i should just do protocols → chatterbox → visualizer instead of protocols → visualizer?

the exact pipette path motion (ie the full journey) is not what i meant here. i simply meant the path of the displacement. sorry for the vague term. the reason i feel this is important is that, in the current visualizer, it's not intuitive and needs careful attention to inspect from where a liquid is aspirated and where it is dispensed to. the liquid volume tracker does change the colour a bit, but that isn't obvious enough for small transferred volumes.

i understand some might argue we should already know the "pipette path" since we write the protocols. but for complex protocols that use lots of logic (eg preparing master mix, serial dilution, automatic channel assignment), the atomic liquid handling commands are not so obvious, and it would be helpful to be able to inspect them in the visualizer.
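One way to recover the "displacement path" from an atomic command log is to pair each dispense with the preceding aspirate on the same channel, producing source→destination edges a visualizer could draw as arrows. A hedged sketch, assuming a made-up command-dict format (not PLR's actual log schema):

```python
# Illustrative sketch: the command dicts below are hypothetical,
# not the real chatterbox/PLR log format.

def pair_transfers(commands):
    """Match each dispense to the last aspirate on the same channel,
    yielding (source_well, dest_well, volume) transfer edges."""
    pending = {}    # channel -> (source_well, volume) from the last aspirate
    transfers = []  # high-level (source, destination, volume) edges
    for cmd in commands:
        if cmd["op"] == "aspirate":
            pending[cmd["channel"]] = (cmd["well"], cmd["volume"])
        elif cmd["op"] == "dispense":
            src, vol = pending.pop(cmd["channel"])
            transfers.append((src, cmd["well"], vol))
    return transfers


log = [
    {"op": "aspirate", "channel": 0, "well": "A1", "volume": 100},
    {"op": "aspirate", "channel": 1, "well": "A2", "volume": 100},
    {"op": "dispense", "channel": 0, "well": "B1", "volume": 100},
    {"op": "dispense", "channel": 1, "well": "B2", "volume": 100},
]
print(pair_transfers(log))
# → [('A1', 'B1', 100), ('A2', 'B2', 100)]
```

This is exactly the case where parallel channel use makes the raw log hard to read by eye: the two transfers are interleaved in the log but separate once paired per channel.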

this is what i meant:

2 Likes

I agree :slight_smile:

@hazlamshamin, are you thinking of something like this?:

Moschner_PLR_transfer_pattern_showcase_0000-0489 (2)

I have been creating these visualisations for a while to explain what “not humanly-executable” means to my stakeholders.

Also very useful for run time estimations, sanity checks, DoE, …

4 Likes

it's beautiful :heart_eyes: what software?

Thank you :smiling_face_with_three_hearts:

what software?

…Python :eyes:

(My answer to 90% of the times I’m asked this question :joy:)

Plus, video editing in Blender

The transfer patterns get a lot more interesting (and complicated), and faster, when utilising the parallelization capabilities of the y-independent channels on a Hamilton STAR, or the independent row actuators of a Tempest or an I.DOT :fire:

3 Likes

We could build a three.js library that takes labware definitions and outputs 3d models for visualisation
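The core of such a library would be flattening a labware definition tree into cuboid specs that a three.js scene can render as `BoxGeometry` meshes. A rough sketch in Python (the field names here are illustrative, not PLR's actual resource schema):

```python
# Hypothetical sketch: walk a labware tree and emit one box per resource.
# "size"/"location"/"children" are assumed fields, not PLR's real schema.

import json

def labware_to_boxes(labware):
    """Flatten a nested labware definition into absolute-positioned cuboids."""
    boxes = []

    def walk(node, origin):
        loc = node.get("location", [0, 0, 0])
        x, y, z = origin[0] + loc[0], origin[1] + loc[1], origin[2] + loc[2]
        boxes.append({
            "name": node["name"],
            "size": node["size"],    # would map to THREE.BoxGeometry(w, h, d)
            "position": [x, y, z],   # would map to mesh.position.set(x, y, z)
        })
        for child in node.get("children", []):
            walk(child, (x, y, z))

    walk(labware, (0.0, 0.0, 0.0))
    return boxes


plate = {
    "name": "plate", "size": [127.8, 85.5, 14.4],
    "children": [
        {"name": "well_A1", "location": [10.0, 7.0, 1.0], "size": [6.9, 6.9, 10.9]},
    ],
}
print(json.dumps(labware_to_boxes(plate), indent=2))
```

The output is deliberately renderer-agnostic JSON, so the same flattening step could feed three.js, Blender, or anything else.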

1 Like

the problem with this is that labware in PLR is modeled using cuboids, so it will look very ugly

Lol i guess I meant which libraries and how haha. Ingesting simulation results, or running in real time? If you're open to sharing, maybe a separate thread?

2 Likes

yea start with a cuboid and then run it through a diffusion model that takes a picture of the labware + hard-coded definitions and spits out a prettier model

YESSS EXACTLYY :fire:

Started a dedicated thread for this:

1 Like