this seems useful for many scenarios, first because it implies the need for a log of state changes
having deck.undo() or deck.redo() that steps back/forward through discrete resource location states would be useful even in an interactive notebook (a very common pattern: you are teaching a hotel movement that fails on the first try, you put the plate back in its original spot, then spend a laborious 10-20 seconds working out exactly which resource you need to assign where to get back to the original location and rotation)
at the cost of some complexity, that could be instant for the plr user
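a minimal sketch of what that deck-level undo/redo could look like, assuming a snapshot log of resource placements — `DeckHistory` and `Placement` are made-up names for illustration, not PyLabRobot API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Placement:
    resource: str
    x: float
    y: float
    rotation: float  # degrees

class DeckHistory:
    """Records each placement change so a notebook user can step back/forward."""

    def __init__(self):
        self._states = [{}]   # list of {resource_name: Placement} snapshots
        self._cursor = 0

    def commit(self, placement: Placement):
        # drop any redo branch, then append a new snapshot
        self._states = self._states[: self._cursor + 1]
        state = dict(self._states[self._cursor])
        state[placement.resource] = placement
        self._states.append(state)
        self._cursor += 1

    def undo(self):
        if self._cursor > 0:
            self._cursor -= 1
        return self._states[self._cursor]

    def redo(self):
        if self._cursor < len(self._states) - 1:
            self._cursor += 1
        return self._states[self._cursor]
```

so after a failed move, one `undo()` call would restore the original location and rotation instantly instead of re-deriving the assignment by hand.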
i kinda get it, but don't 100% get the function at a high level. i find that the chatterbox, like you said, helps us build protocols without having a real machine. yes, but since the chatterbox logs the atomic commands, it's hard to inspect and see what's going on at the high level, especially when it involves spatial movement like the LH (i agree with what you said that LH and the resource model are the most important for simulation).
so do you mean, to get what I feel is needed (the simulation with visualization), i should just do protocols → chatterbox → visualizer instead of protocols → visualizer?
the exact pipette path motion (ie the full journey) is not what i meant here. i simply meant the path of the displacement, sorry for the vague term. the reason i feel this is important is that, in the current visualizer, it's not intuitive: it takes careful attention to see from where a liquid is aspirated and where it is dispensed to. the liquid volume tracker does change colour a bit, but that isn't obvious enough to see for small transferred volumes.
i understand some might argue we should already know the "pipette path" since we write the protocols. but for complex protocols that use lots of logic (eg preparing master mix, serial dilution, auto-channel use), the atomic liquid handling commands are not so obvious, and it would be helpful to be able to inspect the path in the visualizer.
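one way to recover that displacement path from the atomic commands would be to pair up aspirates and dispenses per channel into explicit transfer edges a visualizer could draw as arrows — `TransferLog` and its method names below are illustrative, not part of PyLabRobot:

```python
class TransferLog:
    """Pairs aspirate/dispense events per channel into (source, dest, volume) edges."""

    def __init__(self):
        self._pending = {}   # channel -> (source_well, volume) awaiting a dispense
        self.edges = []      # completed (source_well, dest_well, volume) tuples

    def aspirate(self, channel: int, well: str, volume: float):
        # remember where this channel picked up liquid
        self._pending[channel] = (well, volume)

    def dispense(self, channel: int, well: str, volume: float):
        # close the edge: the channel's last aspirate is the source
        source, _ = self._pending.pop(channel)
        self.edges.append((source, well, volume))
```

after running, say, a serial dilution, `log.edges` gives the high-level displacement path even when auto-channel logic generated the atomic commands.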
(My answer to 90% of the times I'm asked this question)
Plus, video editing in Blender
The transfer patterns get a lot more interesting (and complicated), and a lot faster, when utilising the parallelization capabilities of the y-independent channels on a Hamilton STAR, or the independent row actuators of a Tempest or an I.DOT
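a toy sketch of that kind of parallelization, assuming nothing instrument-specific: group single-well transfers by source column, then chunk each column's rows into passes of up to 8 so y-independent channels could fire together (`batch_for_channels` is a made-up helper, not a library function):

```python
from collections import defaultdict

def batch_for_channels(transfers, n_channels=8):
    """Group (col, row) transfers by source column, then split each column's
    rows into batches of at most n_channels for one parallel pass each."""
    by_col = defaultdict(list)
    for col, row in transfers:
        by_col[col].append((col, row))
    batches = []
    for col in sorted(by_col):
        rows = by_col[col]
        for i in range(0, len(rows), n_channels):
            batches.append(rows[i : i + n_channels])
    return batches
```

so 12 transfers from one column collapse into two passes instead of 12 sequential moves; real channel spacing constraints would of course complicate the grouping.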
Lol i guess I meant which libraries and how haha. Ingesting simulation results, or running in real time? If you're open to sharing, maybe a separate thread?
yea start with a cuboid and then run it through a diffusion model that takes a picture of the labware + hard-coded definitions and spits out a prettier model