PLR & OT

This seems equally possible with the Opentrons definitions: once you calibrate a basic labware (a single 384-well plate, a single 96-well plate, etc.), you really only need a Z calibration for depth. To get that basic labware definition and Z calibration, you must do actual calibrations on a robot. In this way, I’ve found that manufacturer specs are kind of useless: the grounding in physical reality comes from the calibrations, not from the specs. This makes the Opentrons labware model pretty attractive to me, because these abstractions come very easily in it.

Hmmm, this doesn’t sit quite well. A coordinate model (correct me if I am wrong) assumes that the coordinate system is consistent across the robot. I.e., if you ask to go to an XYZ position, as defined by the deck, and you have a labware you’ve calibrated and have the exact specifications for, you can find a given well position. But this abstraction is empirically untrue (at least for Opentrons): the XYZ positioning is not consistent across the deck. That leaves two methods: locally calibrate labwares (so the underlying physical calibrations are only valid for one deck position on the robot), or create offsets specific to local positions.

I can already do that, just constrained to the XYZ coordinates of a given labware/location.

Here is another way: You send identifiers with a deck location. For example:

{
    "labware": "corning_384_wellplate_112ul_flat",
    "position": "3",
    "well": "A1"
}
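To make that concrete, here’s a toy sketch of how a server might resolve such a request into deck coordinates. The slot origin, A1 offset, well pitch, and layout convention below are all made-up illustrative numbers, not real Opentrons calibration data:

```python
# Hypothetical server-side resolution of a {labware, position, well} request.
# All numbers here are invented for illustration.

SLOT_ORIGINS = {"3": (265.0, 0.0, 0.0)}  # hypothetical slot origin (mm)

LABWARE_DEFS = {
    "corning_384_wellplate_112ul_flat": {
        "a1_offset": (12.13, 76.28, 0.0),  # A1 relative to labware origin (made up)
        "pitch": 4.5,                      # well-to-well spacing (mm) on a 384 plate
    }
}

def resolve_well(request: dict) -> tuple:
    """Turn a {labware, position, well} request into an (x, y, z) target."""
    defn = LABWARE_DEFS[request["labware"]]
    sx, sy, sz = SLOT_ORIGINS[request["position"]]
    ax, ay, az = defn["a1_offset"]
    row = ord(request["well"][0]) - ord("A")  # "A" -> 0
    col = int(request["well"][1:]) - 1        # "1" -> 0
    # convention assumed here: rows run down in -y, columns run right in +x
    return (sx + ax + col * defn["pitch"],
            sy + ay - row * defn["pitch"],
            sz + az)

print(resolve_well({"labware": "corning_384_wellplate_112ul_flat",
                    "position": "3", "well": "B2"}))
```

The real definitions and offsets would come from PLR or from the robot’s calibration data; the point is only that the server can resolve identifiers into coordinates itself.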

The server knows of the labwares defined within PLR, or of a new labware if you so define it. It simply keeps the same labware until you want to change it at runtime. Here is how I change it at runtime in the plate protocol:

        # Only delete deck slot 1 if we need the second plate
        if needs_second_plate:
            del protocol.deck["1"]  # https://github.com/Opentrons/opentrons/issues/6214
            agar_plates = [
                protocol.load_labware("biorad_96_wellplate_200ul_pcr", x) for x in [3, 1]
            ]
        else:
            agar_plates = [protocol.load_labware("biorad_96_wellplate_200ul_pcr", 3)]

My argument for something closer to Opentrons’ current API, rather than an absolute coordinate system, is based on the (perhaps OT-2-specific) problem that the deck isn’t consistent, so coordinate positions that are not relative to particular deck positions can cause problems on 384-well plates.

Perhaps on Hamiltons you can rely on the coordinate system being accurate, so you can simply define the deck layout and then derive everything from good definitions grounded in physical reality. But it doesn’t seem that way on the Opentrons.

One thing to note: the Opentrons CAN move to a given location on the deck, regardless of relative coordinate space. So if their calibration routine were replaced, it would be feasible to have absolute coordinate positions, like PLR expects (or to find a way to pull their calibration data properly).

:melting_face:

Technically, it would be possible to define the offsets (on the backend) as a vector field of offsets (probably interpolations between calibrated points).
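For illustration, here’s a toy version of that idea, using inverse-distance weighting between a few calibrated points. All numbers are invented, and a real implementation might well prefer proper bilinear interpolation on a grid of calibrated positions:

```python
# Toy "vector field" of deck offsets: interpolate the (dx, dy, dz) offset at an
# arbitrary deck position from a few calibrated reference points, using
# inverse-distance weighting. All values are made up for illustration.

def interpolate_offset(x: float, y: float, calibrated: dict) -> tuple:
    """calibrated maps (x, y) deck points to measured (dx, dy, dz) offsets."""
    weights, total = [], 0.0
    for (cx, cy), off in calibrated.items():
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        if d2 == 0:
            return off  # exactly on a calibrated point
        w = 1.0 / d2
        weights.append((w, off))
        total += w
    return tuple(sum(w * o[i] for w, o in weights) / total for i in range(3))

# Example: offsets measured at three deck positions (invented values)
cal = {
    (0.0, 0.0): (0.1, 0.0, -0.2),
    (300.0, 0.0): (0.3, -0.1, -0.2),
    (0.0, 300.0): (0.1, 0.2, -0.1),
}
print(interpolate_offset(150.0, 0.0, cal))
```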

It’s a bit arbitrary what you calibrate: just the robot, just the labware, or both? I think the best strategy is to minimize the sum of all offsets. The goal is to make it easy to autonomously set up decks, which also accelerates partial automation. When you place any plate on any position on the deck, it should just work without further calibration (which is possible if both the deck slot and the plate are carefully calibrated).

is it possible to say {pick up tip, aspirate, dispense, drop tip} at x,y,z without having a specific labware at that place?

yes, this is what I mentioned we have in the current PLROT api. I wish there were a way to just move in a single dimension, which we will probably have to do by querying the existing location and updating the dimension in which you want to move, then moving to the complete 3d coordinate.
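A minimal sketch of that query-then-move idea, against a stub robot. In reality the query would go to the OT location cache over the HTTP API; everything named here is invented:

```python
# Single-axis movement built from query-then-move, against an in-memory fake.
# The real backend would query the robot's location cache instead.

class FakeRobot:
    def __init__(self):
        self.position = (100.0, 50.0, 30.0)

    def get_position(self):      # stands in for querying the location cache
        return self.position

    def move_to(self, x, y, z):  # stands in for an absolute MoveTo command
        self.position = (x, y, z)

def move_single_axis(robot, axis: str, value: float):
    """Move only one of x/y/z, keeping the other two where they are."""
    pos = dict(zip("xyz", robot.get_position()))
    pos[axis] = value
    robot.move_to(pos["x"], pos["y"], pos["z"])

robot = FakeRobot()
move_single_axis(robot, "z", 5.0)
print(robot.position)  # x and y unchanged, z updated
```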

Code in hand is worth two in the bush. This app integrates opentronsfastapi and adds one endpoint for PLR. It’s very much meant to be a prototype, just so we can experiment and talk about something real. It does work on an actual Opentrons, though, and operates as expected.

You gotta do some bs to get it uploaded

# visit ROBOT_IP:48888 for a jupyter notebook. This runs within it
# Write key using echo and redirection
!echo "ssh-rsa YOUR_KEY HOST" > /root/.ssh/authorized_keys
# Restart dropbear
!systemctl restart dropbear
# you have to use older rsa with dropbear. No fancy ed25519
# ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa -o HostkeyAlgorithms=+ssh-rsa root@ROBOT_IP

Then just scp or whatever.

Aspirate and dispense: yes, you can do those wherever. Pick up and drop, I think, need a location, but those can be constructed from a Point.

Decks stay pretty well calibrated over time, and it doesn’t take THAT much time to calibrate decks. If the goal is to make it as easy as possible to calibrate, I think it makes sense to just keep Opentrons’ calibration routine, UNLESS we want to make it an explicit goal that we are replacing their calibration routine. If we go that route, the vector offsets and everything start making a lot more sense, I think.

I think if we replace Opentrons’ calibration routine, then it becomes a lot more reasonable to do interpolations between calibrated points. Though I personally would still like to be able to be explicit about bottom calibrations on random labware (i.e., tell the machine we want the Z of THIS labware), mostly for comfort, though I could be persuaded otherwise once I actually use it.

    @property
    def location_cache(self) -> Optional[Location]:
        """The cache used by the robot to determine where it last was."""
        return self._core.get_last_location()

I think it is possible if we dig a little into the code


just to note, the nice part of having the opentronsfastapi is that we get guarantees around serializing, plus the ability to query things as they run, or see how previous commands ran. All dependencies are built into Opentrons’ buildroot already, so it’s essentially dependency-free. Makes it easy to make an API around basic OT functions.

Also, the hash function in protocol_version_flag basically guarantees that the expected backend is running on the machine, because get_protocol_hash hashes your particular endpoint.


wow great work!

  • PLR will send commands interactively (one per backend call), so PLRRequest could possibly be simplified to
class PLRRequest(BaseModel):
    command: Command  # single
  • It seems the x, y and z in MoveTo are relative to a deck slot. Would it be possible to use absolute coordinates? At some layer of abstraction, the robot obviously converts deck slots / labware to 3d coordinates.
  • Similar for PickUpTip and co: can we just pass coordinates as opposed to a labware item?
  • Are parameters like flow_rate, blow_out_air_volume, etc. accessible for asp/disp? Could you build those into the API?
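Concretely, the simplified single-command request with the extra asp/disp parameters could look something like this. It’s a dataclass sketch so it runs stand-alone (the real server would presumably keep pydantic’s BaseModel), and any field not named above, like `volume`, is an assumption:

```python
# Rough shape for the simplified request: one command per call, with optional
# aspirate/dispense parameters. Field names beyond those discussed in the
# thread (flow_rate, blow_out_air_volume) are assumptions.

from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Aspirate:
    x: float
    y: float
    z: float
    volume: float
    flow_rate: Optional[float] = None
    blow_out_air_volume: Optional[float] = None

@dataclass
class PLRRequest:
    command: Aspirate  # single command, since PLR sends one per backend call

req = PLRRequest(command=Aspirate(x=10.0, y=20.0, z=3.0, volume=50.0, flow_rate=7.5))
print(asdict(req))
```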

With these changes (“should” be easy), I think we are ready to fully use all of PLR on the OT.

A ‘location’, as i understand it, is an xyz tuple + a labware item, where the labware item can be None. Correct?

I see the calibration with OT software as the robot’s own calibration (which as you mention doesn’t change much). For changes beyond that, we could use the PLR model.

I personally would still like to be able to be explicit about bottom calibrations on random labware

This is obviously possible.

:disappointed_relieved:

if we have lh.move_channel_z(idx, z=Z) for OT, and aspirate/dispense at arbitrary locations, you can freely calibrate to bottom/top and take numbers. This is how we do it on STARs (technically, until Camillo introduced z-probing). This means we could update the Opentrons API version, if that improves your http api or makes the entire thing more stable. :man_shrugging:

If possible I’d like to keep to an array of commands, mainly because it makes testing easier. If you want to use a not-plr backend, sending a whole bunch of commands at once simplifies scripting of protocols. Of course, then PLR can just put a single command in the array for identical performance (other than the over-the-wire overhead)

Yes, it is possible! However, I don’t know how to do that (quite yet), so I need a chunk of hacking time to get that working.

Yes {I think}

Easily! I use those pretty often in my own protocols, so I’ll be looking at adding them in.

Yep

Well the problem with that is I don’t precisely trust the gantry. If the gantry is slightly off, we will have to save labware offsets and then do interpolation in order to get to the “real” spot on the deck.

if we have lh.move_channel_z(idx, z=Z) for OT, and aspirate/dispense at arbitrary locations, you can freely calibrate to bottom/top and take numbers

Yeah, I think this might be the way we do it if we want to use absolute positioning on the deck, which seems like a yes.

I mainly just need to find a chunk of time to write this up as code. But once I get some time I’ll be putting it together


sounds really good!

of course, your project. “could be simplified”, but that makes sense!

Internally, it converts the slots’ x positions to a single motor movement. I don’t think it’s fundamentally more accurate that way. Obviously, things need to be calibrated (which will be easier in PLR).

Noting here:

# Method 1: Using protocol.gantry.position
def get_current_position(protocol):
    position = protocol.gantry.position
    return {
        'X': position[0],
        'Y': position[1],
        'Z': position[2],
        'A': position[3]  # A axis for pipette plunger
    }

# Method 2: Using instrument position
def get_pipette_position(pipette):
    current_position = pipette.current_position
    return {
        'X': current_position['x'],
        'Y': current_position['y'],
        'Z': current_position['z']
    }

And

from opentrons.types import Location, Point

def run(ctx):
    # input x, y, z values here (in mm); each can be a float or integer.
    # The coordinates are absolute, with the bottom left
    # corner of slot 1 as origin.
    loc = Location(Point(x, y, z), None)

    # pipette and labware
    tiprack = ctx.load_labware('opentrons_96_tiprack_20ul', '11')
    pip = ctx.load_instrument('p20_single_gen2', 'right', tip_racks=[tiprack])

    # commands
    pip.pick_up_tip()
    pip.move_to(loc)

And finally

labware_origin = plate.wells()[0].bottom().point

Now, the problem is that the particular position of any given calibrated labware lives on the Opentrons itself. So one option is an API endpoint for gathering the XYZ coordinates of the labwares we want to use. Then we can just use PLR’s absolute coordinate positioning to go places; the “interpolation” is kinda null and void because we know the offsets to apply.

In practice, this looks like having both a real and fake endpoint that returns XYZ coordinates if queried for labware. If you are running on a real robot, plr will attempt to query the robot to get the proper XYZ coordinate offsets. If you are running on a fake robot, we just simulate the fastapi server, then serve from the opentrons simulate function.
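A rough sketch of that real/fake split. The class shape, route, and all numbers are assumptions, not the actual opentronsfastapi API:

```python
# Sketch of the real/fake endpoint split for labware origins. On a real robot,
# HTTPOriginSource would query the FastAPI server for calibration data; the
# fake source would be backed by opentrons' simulation. All values invented.

class FakeOriginSource:
    """Simulated server: returns nominal origins with zero calibration offset."""
    def get_labware_origin(self, slot: str) -> tuple:
        nominal = {"1": (0.0, 0.0, 0.0), "3": (265.0, 0.0, 0.0)}  # made-up numbers
        return nominal[slot]

class HTTPOriginSource:
    """Real robot: would GET a hypothetical /labware_origin/<slot> endpoint."""
    def __init__(self, robot_ip: str):
        self.robot_ip = robot_ip

    def get_labware_origin(self, slot: str) -> tuple:
        raise NotImplementedError("query the robot's calibration endpoint here")

def target_for_well(source, slot: str, well_offset: tuple) -> tuple:
    """Combine a queried labware origin with a PLR-side well offset."""
    ox, oy, oz = source.get_labware_origin(slot)
    wx, wy, wz = well_offset
    return (ox + wx, oy + wy, oz + wz)

print(target_for_well(FakeOriginSource(), "3", (12.1, 76.3, 0.0)))
```

PLR-side code only talks to the `get_labware_origin` interface, so swapping real for fake is a one-line change.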

How does this sound @rickwierenga ? I’m not sure how well this replicates the STAR API, so wanted to check first before diving in.


this is based

we still need to apply the offsets

the interpolation was an idea to resolve the x-parameter x-reality discrepancy you described. it might not be needed.

labware needs to be defined in PLR - not OT. All we need from the robot is to execute the four fundamental & atomic operations at particular coordinates on request.

In that case we probably don’t need this backend at all; PLR does enough simulation in the frontend layer that we can test protocols (at a basic level) without it. For more thorough testing, the io-layer and log-parity testing will be useful.

It could obviously be an option if you want, but I don’t think it is necessary.

How can we find the particular coordinates to request without knowing the origins of the labwares (i.e., getting the offsets)? We can execute the fundamental atomic operations, and we can have the labwares in PLR, but if we do not request the origins and do not implement calibration, we cannot combine the two. The only way we could do this is to reimplement calibration from scratch, without using Opentrons’ calibration routine. Is that what you are proposing?

Well, theoretically yes, but this doesn’t make very much sense to me. You’re gonna have to mock the API in order to do the simulation. But you also just have the server, sitting there, with a prebuilt simulator in it. It will likely catch things that would also fail in the execute function but that our own software might miss. Why wouldn’t we use it?

we know from the PLR labware model. The offsets should be in your protocol code (that is: through PLR). (At first, you probably want to export these coords from the OT software to get started)

I’m a fan of making the server as light as possible and using as much (universal) code as is already written in PLR. It will make everything much easier. I kind of see that as the whole point of having a PLR integration; others will just want to use the official OT software.

Depends on what you want to test. For basic protocol validation, pure PLR is enough. You’d essentially just mock out the backend for the chatterbox.

For testing whether we can generate the API calls correctly for each atomic command, we can do that with unit tests (as we currently do).

Together, these should converge on 100% reliability.

The user might want to additionally test their protocol with the real server that is running in simulation mode, which sounds like little incremental work.

You can’t get them through PLR because they vary from robot to robot. The offsets come not only from the definition of the deck, but also from the calibration of the labwares.

Exportation of those coordinates is what I am proposing, through an API.

an export option would be super convenient

i think the tip pickup/drop & asp/disp api should do what it promises: ask to go to x, go to x (per the calibrated motor definition) → we handle the rest in plr

Makes sense, that’s what I’m tryna figure out how to do

  1. We can move the robot to XYZ coordinates on the deck
  2. We can encode labware in PLR. All we need to know is the position of the labware.
  3. We cannot know the position of a labware without querying the robot (or implementing calibration)
  4. The position of a labware will change upon every calibration

This seems like more than just an exportation; it seems like something that PLR ↔ OT have to negotiate. If you just had an export, where you took offsets and gave them to PLR as data to consume, it would rapidly become outdated upon every calibration. That’s why I’d argue for more than just the 4 fundamental commands (or 5, since I think homing is necessary too; no encoders).

One endpoint for running commands, and one endpoint for getting data from the server about its particular calibrations.

This kinda makes even less sense to me. Why would I go out of my way to mock the backend when the simulator backend already exists, and is actually complete? A pure PLR mock that I handwrite isn’t going to catch the places where a properly generated atomic command will not run on the server; the simulator server, which already exists and works, will catch those. Not using the simulator seems like creating work where a more functional thing already exists.

Again with the API calls (which are just filling in Python objects, in this case): why would I test just generating API calls? Why not actually test them? I know why you currently just generate them - you don’t actually have simulators - but that’s not true here.

i think this is a good idea for STARs as well - there is great utility in querying pipette & plunger location. one can imagine a STAR + OT-2 compatible calibration script that is used to:

  1. define labware in the first place
  2. quickly apply “MachineOffsets” to every carrier site based on feedback by an operator at the start of a fully automated run (not to correct for poor labware definitions, but instead for calibration drift & validation concerns)

eventually, it will be an autonomous operator using a camera to apply custom offsets centering the pipette in a well. before then it remains the human backend


Huh, I guess I forgot to ask: how do you currently calibrate the Hamilton robots? Do you assume good labware definitions + good XYZ on your robot, or do you do something about calibration drift / validation concerns?

we use the regular service macros to align pipettes & plate grippers, which accounts for the majority of miscalibration concerns. these apply y and z offsets internally to the pips, which is enough if our x,y rotation & x translation are aligned identically on all 8+ pips

offsets are almost never required unless labware is poorly defined or carriers are not properly pushed in

the calibration tools are surprisingly robust: today we crashed the iswap in a terrible way, but after physically bending the metal back with a clamp, brute force, and running the iSWAP macro w/ conductive alignment tool, zero external integrations or internal plate locations needed offsets


is this calibration of the robot or your labware? how often does it happen?

yes

not as big a use-case as for stars where no such service exists. Testing/iteration/development on the user side with the real server is obviously preferred where possible, I’m not arguing against that.

Where it might still be useful: unit testing (CI) where you don’t have the server running. For unit testing, it is actually preferred (as a general software-dev practice) to test at every layer of abstraction: plr → request in plr ci, request → correct behavior for your api. The rationale is that it’s quicker to see what breaks, and it also makes sure that if users were to call the api directly, it would work as expected. (In all fairness, I didn’t know this 3 years ago, so some parts of plr violate this principle, but I am trying to use it for all new code from the start.)
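As a toy illustration of testing the two layers separately (all names and the wire format here are invented for the sketch):

```python
# Toy two-layer test: (1) the client-side code builds the wire request we
# expect; (2) the "server" handler does the right thing with that request.
# Names and the request shape are invented for illustration.

import json

def make_aspirate_request(x, y, z, volume):
    """Layer 1: client-side code that builds the wire request."""
    return {"command": {"op": "aspirate", "x": x, "y": y, "z": z, "volume": volume}}

def handle_request(request):
    """Layer 2: server-side behavior for that request (here: just echo a plan)."""
    cmd = request["command"]
    return f"aspirate {cmd['volume']} uL at ({cmd['x']}, {cmd['y']}, {cmd['z']})"

# layer 1: plr -> request
req = make_aspirate_request(1.0, 2.0, 3.0, 50.0)
assert json.loads(json.dumps(req)) == req  # survives the trip over the wire

# layer 2: request -> behavior
assert handle_request(req) == "aspirate 50.0 uL at (1.0, 2.0, 3.0)"
print("both layers pass")
```

Breaking a failure into "wrong request" vs. "wrong handling" is exactly the quicker-to-see-what-breaks benefit described above.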

Given PLR’s architecture, it will be easy to do whichever: you can just switch out backends.