What do you wish PLR did that it cannot currently do?

1 Like

Tecan Fluent support :wink:

1 Like

chatting with Tecan tomorrow, hopefully they understand the urgency :crossed_fingers:

after telling people about PLR, I have seen multiple labs buy Hamiltons instead of other liquid handlers specifically because of PLR support. the same is true for other types of machines

4 Likes

Maybe this discussion will persuade them further:

2 Likes

Nice, I think regular user feedback & suggestions will guide our development and make it even faster :slight_smile:

Things on my (ever-growing) wishlist and to-do-list right now:

  • harmonise / standardise STAR backend method argument naming (see PR #692 and Rick’s #696) - it is currently a pain to memorise different names for the same argument across methods
  • documentation, documentation, documentation :books:
  • creation of a robust versioning system with simpler installation process, aiming to include (entry-mid-level) Python scripters into PLR
  • increased stakeholder engagement - many avenues are possible, with a focus on clarifying how to execute PLR-empowered automation
  • Resource management system updates:
    • create a subclass of ResourceHolder that models loading_tray behaviour (?)
    • create a Rack class with subclasses for different containers? - I want to be able to place a small rack for tubes / troughs / tips onto any ResourceHolder and/or PlateSite
    • expand the child resources MFXCarrier accepts: it is currently hard-coded to only accept a ResourceHolder, but ResourceHolder accepts only one (!) child_location, i.e. it currently fails to accommodate a “ResourceHolder” with multiple child_locations. Example:

      → Solution 1: expand the ResourceHolder definition to accept multiple child_locations? (that might be the quickest and simplest way to achieve all needed behaviour without having to create any new Resource subclasses - see the sketch after this list)
      → Solution 2 / hack: model this resource as a Carrier that is assigned to an mfx_carrier’s site?
      (Note: the definition of Rack vs ResourceHolder with multiple child_locations can become a bit messy, and maybe there should not be a Rack at all?)
    • complete overhaul of the Tip + TipRack modelling system
    • create NestedTipRackStack
  • error handling for (at least) the atomic commands (1) tip_pickup, (2) aspirate/dispense, (3) drop_tip - I’d be happy to have separate handlers for these atomically separated commands, rather than one universal handler that is difficult to manage.
  • new machine integrations: big one == arms
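
For the MFXCarrier / ResourceHolder item above, here is a minimal sketch of “Solution 1”: a holder-like Resource that exposes multiple predefined child locations. The class name MultiPositionHolder, the child_locations argument, and the slot helper are hypothetical - they are not part of the current PLR API, just an illustration of the idea (the real Solution 1 would extend ResourceHolder itself; this sketch subclasses the base Resource only to stick to well-known constructor arguments).

```python
# Hypothetical sketch: a ResourceHolder-style resource with multiple child locations.
# Only Resource, Coordinate and assign_child_resource are existing PLR API; everything
# else (class name, arguments) is made up for illustration.
from typing import List

from pylabrobot.resources import Coordinate, Resource


class MultiPositionHolder(Resource):
  """A holder that can carry one child resource per predefined slot."""

  def __init__(self, name: str, size_x: float, size_y: float, size_z: float,
               child_locations: List[Coordinate]):
    super().__init__(name=name, size_x=size_x, size_y=size_y, size_z=size_z,
                     category="multi_position_holder")
    self.child_locations = child_locations  # one Coordinate per slot

  def assign_child_at_slot(self, resource: Resource, slot: int) -> None:
    # place the child at the predefined offset for this slot
    self.assign_child_resource(resource, location=self.child_locations[slot])
```
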
1 Like

there are no firmware docs for the Fluent, and apparently the Fluent firmware operates at the motor level (so not even an aspirate command, just motor instructions). I think getting it into PLR is unrealistic right now.

my suggestion is to use a Hamilton (every model but the Nimbus), an Opentrons, or a Tecan EVO :smiley:

erlich was right!

4 Likes

That’s what I thought. Unfortunately they went with I2C and some crazy way of interacting with the motors; the error logs are 1000s of lines and it’s insane to troubleshoot.

Unfortunately we’re forced to use a Tecan Fluent, as it is the only robot capable of high-throughput RoboColumns; the water FCA tips act as mini ÄKTAs.

2 Likes

A repository of some basic protocols people can riff off would be sweet. This would really help my non-technical colleagues, and I imagine more scientists would adopt PLR if there were more examples.

I may be able to contribute my cherrypicking protocol but I’ll have to double check with my company.

Some ideal protocols to riff off of (a minimal sketch of the first one follows the list):

  • CherryPicking
  • Normalization
  • Serial dilution
  • Plate stamper
  • Visual plate transfer selector
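
As an illustration of what such example protocols could look like, here is a minimal cherrypicking loop. It assumes `lh` is an already set-up LiquidHandler and that `tip_rack`, `source_plate` and `dest_plate` are already assigned to its deck; the picks list, well names and volumes are made up.

```python
# Minimal cherrypicking sketch (run inside an async context, e.g. a Jupyter notebook).
# Assumes `lh` (a set-up LiquidHandler) and `tip_rack`, `source_plate`, `dest_plate`
# already exist on the deck; picks, well names and volumes are illustrative only.
picks = [("A1", "B2", 50.0), ("C3", "B3", 75.0)]  # (source well, dest well, volume in uL)

for i, (src, dst, vol) in enumerate(picks):
  await lh.pick_up_tips(tip_rack[f"A{i + 1}"])   # fresh tip spot per transfer
  await lh.aspirate(source_plate[src], vols=[vol])
  await lh.dispense(dest_plate[dst], vols=[vol])
  await lh.return_tips()                          # put the tips back where they came from
```

Normalization and serial dilution would follow the same pattern, just with per-well computed volumes.
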
2 Likes

Created a separate thread to discuss RoboColumn usage on different liquid handling workstations:

1 Like

Would like to float the idea of a Floi8 liquid handler in pylabrobot if you’re talking to manufacturers:

These things were sick when we demo’d one, but it was locked behind programming from an ultra-basic drag-and-drop UI you’d find on a tablet… They made it ultra low-code but packed in all the hardware features of a Hamilton and more.

If they unlocked this little guy it would be a beast. It’s got cLLD, side touch stress sensors, z-touch sensors, and camera integration. The cLLD tips can detect the height of the liquid in the tip by using the resistance of graphite strips on the sides of the tips.

awesome! thanks for the suggestion. I’ll keep an eye out

these days it’s still a little hard if no one in the community has a particular machine, but if someone were willing to sponsor it I’d be down! :smiley:

4 Likes

Was thinking of some things I would like in the visualizer that are present in the current Hamilton vis:

  1. Wells that have been transferred to/from are indicated by a green circle
  2. Wells that had error handling occur (I know this is not fully implemented yet) are highlighted red in the vis
  3. An option to toggle plate names/tip names/resource names like layers on Google Maps

thanks for the suggestions. I like the well coloring, I will consider how to best implement this

does this not create a big mess when you have 96 tip names in a tip rack? or do you only show for some resources?

the way I currently address this is users can hover their mouse over any resource to show the name

hmmm yeah, ideally it would only show names down to the plate and tip rack level by default. Maybe a checkbox tree, getting more and more granular, so you have the option to visually check every resource down to the well level?

1 Like

ohh visualizing the resource tree on the side would be interesting!

3 Likes

the visualizer currently just emits the live state of the resources to the browser, but I find this less helpful for simulation, as a simulation usually finishes execution in less than a second (by the time you open the browser it already shows the final state of the run). I’m thinking about and trying to integrate new features into the visualizer to:

  1. capture the state of each frame (the state of the resources) on the server side and enable playback, instead of the current live-emit → live-monitoring-only behaviour (if I’m not mistaken about what I have tried)

  2. enable better inspection by implementing manual clicks for next/previous frame, as well as an FPS setting for simulation to slow down frame shifting without needing to embed explicit delays in the protocol

  3. enable pipette path motion for aspirate-dispense, to better visualize liquid handling movements. I’m thinking of a temporary motion path shown in between frames, mocking the pipette moving, as it is currently hard to visualise the path from liquid tracking alone.

however, this would require me to change quite a lot of the visualizer’s middle processing. I love the Konva visualization and the gif implementation, which help for actual hardware runs. but I find that the visualizer could serve even better during simulation and inspection before a hardware run, even better than the chatterbox. what do you guys think?
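
To make the playback idea (points 1 and 2) concrete, here is a rough sketch of a server-side frame buffer. The FrameRecorder class and the idea of calling it after every command are assumptions for illustration; deck.serialize() is PLR’s existing resource-tree serialization (as used for saving deck layouts), though fully capturing liquid/tip state may additionally need the state-serialization methods.

```python
# Sketch of the playback idea: record a serialized snapshot of the deck per frame,
# then let a viewer step forward/backward instead of only watching live updates.
# FrameRecorder is hypothetical; deck.serialize() is assumed to return a
# JSON-serializable dict of the resource tree.
import json
from typing import Any, Dict, List


class FrameRecorder:
  """Stores serialized deck states so the visualizer could play them back later."""

  def __init__(self) -> None:
    self.frames: List[Dict[str, Any]] = []
    self.cursor: int = -1

  def capture(self, deck) -> None:
    # call this after every command to record one frame
    self.frames.append(deck.serialize())
    self.cursor = len(self.frames) - 1

  def step(self, delta: int) -> Dict[str, Any]:
    # move the cursor forward (+1) or backward (-1), clamped to the recorded range
    self.cursor = max(0, min(len(self.frames) - 1, self.cursor + delta))
    return self.frames[self.cursor]

  def save(self, path: str) -> None:
    # dump all frames so the browser can replay them at a chosen FPS
    with open(path, "w") as f:
      json.dump(self.frames, f)
```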

1 Like

concept:

this is exactly the point of the visualizer :grinning_face_with_smiling_eyes:

the goal is visualization, not simulation. simulation means virtually mirroring/predicting the state of the lab

my view on simulation is we should always be simulating because it helps us have smart features (like return_tips) and prevent errors. this should always happen behind the scenes. no separate simulator should be required.

hm this is interesting. being able to click forward and backward through the states like a debugger :thinking:

to clarify, the visualizer and chatterbox are different concepts for different purposes. they are entirely independent (a minimal setup sketch follows the list below):

  • chatterbox: a “fake” backend to debug protocols (for humans and ai agents). it accepts all commands and just prints out what is happening. the machine-agnostic tutorials in the user guide use this
  • visualizer: shows the current model PLR has of the lab. it can be used with the chatterbox or any other backend. with the chatterbox you don’t need a robot, so you can develop in silico; with a real backend you can make sure PLR’s model of the deck matches what you see in front of you. (the architecture is set up so you can visualize any resource subtree, including just plates when you are calibrating them)
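
For reference, a minimal setup sketch combining the two, following the pattern in the PLR user guide; the exact backend class name and import paths may differ between PLR versions (older releases call the chatterbox backend ChatterBoxBackend).

```python
# Minimal sketch: chatterbox backend (no hardware) + visualizer (browser view of the deck).
# Import paths follow the PLR user guide at the time of writing and may differ per version.
import asyncio

from pylabrobot.liquid_handling import LiquidHandler
from pylabrobot.liquid_handling.backends import LiquidHandlerChatterboxBackend
from pylabrobot.resources.hamilton import STARLetDeck
from pylabrobot.visualizer.visualizer import Visualizer


async def main():
  # chatterbox: a "fake" backend that prints every command instead of talking to a robot
  lh = LiquidHandler(backend=LiquidHandlerChatterboxBackend(), deck=STARLetDeck())
  await lh.setup()

  # visualizer: opens a browser view of PLR's current model of the deck
  vis = Visualizer(resource=lh)
  await vis.setup()


asyncio.run(main())  # in a notebook you would just await these calls
```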

LiquidHandler and the resource model are responsible for most of the “simulation”, and neither the chatterbox nor the visualizer simulates anything - they are just passive objects.

I agree the visualizer is very useful for inspection, and the chatterbox is useful for developing protocols when not connected to a machine.

overall: 1) the goal of the visualizer is not simulation, 2) I really like your idea of being able to time travel through the states in the visualizer. I will work on implementing this

1 Like

this is largely done by firmware on all machines, so simulating it (anywhere in PLR - separate from visualization) will be a pain