PLR & OT

following PLR Dev Roadmap 2025 - Q1 & Q2 - #2 by koeng?

The current PLR integration for OT is whack. At a high level it works like this:

We use a slightly older version of the Opentrons HTTP API, through a Python wrapper I wrote. When I wrote it, the OT was required to keep an internal model of resources. This meant that every time a resource is defined in PLR, we make a request to the OT saying “there is now a new resource named X that looks like this”. Operations like pick up tip / aspirate etc. send the name/identifier of a well rather than a location. Since the OT model is extremely constrained (resources have to follow a certain grid format) and there is/was no easy way to remove resources, the PLR integration with OT was bad. We could only use a subset of our resource model, and it was not editable at runtime.
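To make the old flow concrete, here is a minimal sketch. The function names and payload shapes are illustrative, not the real ot_api wrapper: the point is that the robot keeps its own grid-shaped copy of every resource, and operations reference well identifiers rather than coordinates.

```python
# Hypothetical sketch of the old flow. Every PLR resource had to be mirrored
# to the robot before any operation could reference it.

def define_labware_payload(name: str, rows: int, cols: int) -> dict:
    """Build a 'here is a new resource' request body. The robot keeps its own
    copy in a fixed grid format; PLR cannot edit or remove it afterwards."""
    return {
        "labwareName": name,
        # column-major well ordering, e.g. [["A1", "B1", ...], ["A2", ...], ...]
        "ordering": [
            [f"{chr(65 + r)}{c + 1}" for r in range(rows)] for c in range(cols)
        ],
    }

def aspirate_payload(labware: str, well: str, volume_ul: float) -> dict:
    """Operations reference a well *identifier*, not a location."""
    return {"command": "aspirate", "labware": labware, "well": well, "volume": volume_ul}

payload = define_labware_payload("my_plate", rows=8, cols=12)
op = aspirate_payload("my_plate", "A1", 50)
```

Everything the robot knows about the deck is a second, less flexible copy of what PLR already knows, which is exactly the duplication the rest of this thread is about removing.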

Compare this to a STAR or EVO, where we can arbitrarily say “aspirate at xyz”. This is powerful, because we can manage the deck state and just tell the robot to execute operations at locations where we know containers/tips of interest are.

Recently I checked and saw that the OT api now supports manual channel movement and “aspirate in place”. PLR should migrate to this new api version. With this new version, we should no longer make requests to the OT onboard computer to tell it about resources and just manage everything in PLR. When it’s time to execute an operation, just move and do the operation in place. We get full flexibility to use the very accurate plr model, custom resources, etc. We can use our Retro $14 3d-printed tilt module.
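A sketch of what the new flow could look like: PLR owns the deck model, computes the target location itself, then sends a move followed by an in-place operation. The command type names (moveToCoordinates, aspirateInPlace) follow what recent Opentrons robot-server releases document for run commands, but verify them against the installed version; the helper functions and coordinates here are illustrative.

```python
# Sketch of the proposed flow against the newer robot-server run-commands
# endpoint. Command names are assumptions based on recent OT HTTP API docs.

def move_to_coordinates_cmd(pipette_id: str, x: float, y: float, z: float) -> dict:
    """Move the pipette to an absolute deck coordinate (PLR computes it)."""
    return {
        "data": {
            "commandType": "moveToCoordinates",
            "params": {"pipetteId": pipette_id, "coordinates": {"x": x, "y": y, "z": z}},
        }
    }

def aspirate_in_place_cmd(pipette_id: str, volume_ul: float, flow_rate: float) -> dict:
    """Aspirate wherever the pipette currently is; no robot-side resource needed."""
    return {
        "data": {
            "commandType": "aspirateInPlace",
            "params": {"pipetteId": pipette_id, "volume": volume_ul, "flowRate": flow_rate},
        }
    }

# PLR manages the deck state, so the well location comes from PLR's own model:
cmds = [
    move_to_coordinates_cmd("pip-1", x=146.88, y=164.74, z=10.0),
    aspirate_in_place_cmd("pip-1", volume_ul=50, flow_rate=7.0),
]
```

No “define this resource on the robot” step anywhere: the robot never needs its own resource model, which is what unlocks custom resources and the tilt module.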

In the past, using PLR with OT was a pain since you couldn’t use the benefits of PLR. Instead, it was actually worse in some ways because the HTTP API didn’t express the full functionality of the machine. With this upgrade, PLR could be a better way to use the OT than the official Python API.

As an alternative to using the new HTTP API, it is also possible to write our own HTTP server that would run on the OT onboard computer, like below. I initially considered doing this for the first implementation, but since we weren’t really using it and it wasn’t needed for a proof of concept I decided against it. The benefit of this would be even more control (do we need it?). Arguably, we are less dependent on OT making changes, but we should still want to call into the OT stack at some point and there’s a risk they might change that. Also, the OT stack is complicated, with many layers of abstraction that cross-reference each other. If the new API is good, my vote would be going that way.

I might test and see if it’s easy to make this change one weekend, but it’s not a priority for me. Happy to discuss more or help someone else who’s willing to lead this project!

1 Like

Hey there! Willing to lead.

The primary thing that I personally need is to keep my opentrons version at the best version, version 4.7.0 (it’s even the version they have in their downgrading doc). This is because it is the last version with my favorite feature, calibrate to bottom, which I’ve been bothering them to add back for years. Calibrate to top sucks, honestly. I can’t believe people use it.

Here is something I’m a little worried about:

class LiquidHandler(Resource, Machine):
  """
  Front end for liquid handlers.

  This class is the front end for liquid handlers; it provides a high-level interface for
  interacting with liquid handlers. In the background, this class uses the low-level backend (
  defined in `pyhamilton.liquid_handling.backends`) to communicate with the liquid handler.
  """

  ALLOWED_CALLBACKS = {
    "aspirate",
    "aspirate96",
    "dispense",
    "dispense96",
    "drop_tips",
    "drop_tips96",
    "move_resource",
    "pick_up_tips",
    "pick_up_tips96",
  }

Here are the allowed callbacks. The one I am worried about being missing is a move_to command, which I commonly use in Opentrons protocols. For example, here is how I do plating with an Opentrons - plating 96 transformants onto two plates with a single serial dilution (works surprisingly well):

    # ======================================================================= #
    #                               Plating                                   #
    # ======================================================================= #
    #                 ._________._________._________.
    #                 |         |         |         |
    #                 | thermo  | thermo  |  trash  |
    #                 |_________|_________|_________|
    #                 | (96pcr) |         | (24tube)|
    #                 | thermo  | thermo  |  temp   |
    #                 |_________|_________|_________|
    #                 | (96pcr) |         |         |
    #                 |  mag    |*p20tip *|*p20tip *| .1
    #                 |_________|_________|_________|
    #                 |         |         |         |
    #              2. |* agar  *| 12 well |* agar  *|
    #                 |_________|_________|_________|
    #
    # 1. Refresh tips on p20 tipbox on 6.
    # 2. Add agar plates to 1 and 3.
    #
    if "plate" in build_steps:
        # Calculate needed columns (8 samples per column)
        num_columns = (num_samples + 7) // 8  # Round up division

        # Determine if we need both plates (>6 columns)
        needs_second_plate = num_columns > 6

        # Only delete deck 5 if we need the second plate
        if needs_second_plate:
            #protocol.pause("Remove p300m tips. Replace p20m single tips. " + f"Add agar plates to deck ({1}{', 3' if needs_second_plate else ''})")
            del protocol.deck["1"] # https://github.com/Opentrons/opentrons/issues/6214
            agar_plates = [
                protocol.load_labware("biorad_96_wellplate_200ul_pcr", x) for x in [3, 1]
            ]
        else:
            agar_plates = [protocol.load_labware("biorad_96_wellplate_200ul_pcr", 3)]

        plating_tips = [tips[0], single_tip_box]
        # Process each plate
        for plate_idx, agar_plate in enumerate(agar_plates):
            # Calculate columns for this plate
            start_col = plate_idx * 6
            cols_this_plate = min(6, num_columns - start_col)
            if cols_this_plate <= 0:
                break

            # Process each column
            for col_idx in range(cols_this_plate):
                for dilution in range(2):  # Two dilutions per column
                    current_lane = (col_idx * 2) + dilution
                    target_column = pcr_plate.columns()[start_col + col_idx][0].bottom(0.3)
                    tip_col = plating_tips[plate_idx].rows()[0][current_lane]

                    p20m.pick_up_tip(tip_col)

                    # Only dilute for second plating
                    if dilution > 0:
                        p20m.transfer(
                            6,
                            resuspend_water,
                            target_column,
                            new_tip="never",
                        )

                    p20m.mix(2, 5, target_column)
                    p20m.aspirate(6, target_column)

                    # Plate
                    p20m.move_to(agar_plate.rows()[0][current_lane].top(4))
                    p20m.dispense(5)
                    p20m.move_to(agar_plate.rows()[0][current_lane].bottom())
                    p20m.move_to(agar_plate.rows()[0][current_lane].top())

                    p20m.mix(3, 15, cleaning_water)
                    p20m.drop_tip(tip_col)

The movements at the end are really important. I don’t actually want to dispense or aspirate at particular locations because the method is dependent upon droplet formation at the end of the tip, and then a stabbing motion to create consistent sized, separate micro-plates on one big agar plate (sbs format).

On the particular implementation, I was thinking of just using opentronsfastapi, which I developed with Tim Dobbs to run bioartbot and some random parameterized stuff in my own lab. It’ll handle making sure that multiple commands don’t run at once, and give a simple interface for checking if things have finished running.

Arguably, we are less dependent on OT making changes, but we should still want to call into the OT stack at some point and there’s a risk they might change that

I think it is better to develop something that uses mid-level opentrons commands. The kind of commands that are simple enough (dispense, aspirate, move_to) that Opentrons can’t really change without breaking every single protocol ever. The kind of commands that have remained consistent since the OT1, so are nearly guaranteed to be the same in the future. On the other hand, the APIs that Opentrons uses have not been consistent (here are 8 different protocol schemas).

The only opentrons stack thing that opentronsfastapi hits is the opentrons_execute, which is another stable method for calling opentrons protocols (mainly because it separates simulation and execution).

2 Likes

amazing!

makes sense! that is something I had not considered above, but it’s a solid argument for writing a custom server. the new API version will almost certainly be compatible only with newer OT software versions. you mentioned you already have your own FastAPI server, which would be great to use in this stack.

using venv, it might be possible to do a simultaneous installation. I imagine we might have an install_plr.sh script that users run once on their OT, which will install and run your server. If not possible, I guess the new version will be so good that people won’t ever need to go back to the ot:main (“edge”) api (4.7.0 should suffice for what’s not in plr).

These are just methods for which callbacks are available. We do have methods lh.move_channel_{x,y,z}(channel, pos) that we use all the time on STAR.

There are backend methods to facilitate this, LiquidHandlerBackend.move_channel_{x,y,z}, which just aren’t implemented. However, there is already a method OpentronsBackend.move_pipette_head that works. The reason the three individual commands are not implemented is that they are supposed to move along only one axis, while the API currently requires x, y and z coordinates, and I was not able to find a way to query the location. This means that it is impossible to send the robot to a coordinate in one dimension while keeping the positions in the other dimensions. With your new API, this should be trivial to fix.
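One possible workaround until the position can be queried: cache the last commanded position client-side and express single-axis moves as full-coordinate moves. This is a hypothetical sketch (the mixin and its guarantees are not part of PLR); it only holds if every head movement goes through it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class _HeadPosition:
    x: float
    y: float
    z: float

class PositionTrackingMixin:
    """Hypothetical workaround: remember the last commanded head position so
    move_channel_x/y/z can be expressed as full-coordinate moves. Only valid
    if ALL movement goes through this class (otherwise the cache goes stale)."""

    def __init__(self):
        self._pos: Optional[_HeadPosition] = None

    def move_pipette_head(self, x: float, y: float, z: float):
        # A real backend would also send the HTTP request here.
        self._pos = _HeadPosition(x, y, z)

    def move_channel_x(self, channel: int, x: float):
        if self._pos is None:
            raise RuntimeError("position unknown: do a full move first")
        # Keep the cached y and z, change only x.
        self.move_pipette_head(x, self._pos.y, self._pos.z)

m = PositionTrackingMixin()
m.move_pipette_head(100, 50, 120)
m.move_channel_x(0, 110)  # y and z stay at 50 and 120
```

With an API that can report the current position, this bookkeeping disappears entirely.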

Having one unified way of moving channels on the LH level is super convenient. One example use case is our custom tilter, which uses channel movement to tip it between tilted/non-tilted positions. The positions are dynamically computed based on the tilter location + plate definition. If PLR-OT supported the individual channel movement, the tilter would automatically work.

this is great, exactly what we need. It would be great if the OpentronsBackend were a client interface to this API as opposed to the official opentrons API.


I think with PLR we have the velocity and agency to solve all OT software problems fast (no “it’s on our internal radar for 3 years”). This could fundamentally change how people use their Opentrons.

1 Like

Having one unified way of moving channels on the LH level is super convenient. One example use case is our custom tilter, which uses channel movement to tip it between tilted/non-tilted positions. The positions are dynamically computed based on the tilter location + plate definition. If PLR-OT supported the individual channel movement

Do you have documentation of the interface that should be implemented? One of my favorite things about Go is the idea of interfaces, which become especially powerful when combined with generic tools that operate on files. For example, here is an interface for reading files with my Go package dnadesign. All seven parsers in that project support the same interface. It becomes really easy to build new parsers - I just support the two simple functions, and the rest of the generic typed tools surrounding the interface do the rest.

I imagine there is something like that in plr - but the standard.py doesn’t have a sufficient number of functions for everything described, like move_channel. If no documentation, where would the code be that I’m looking for?

The interface is pretty important because, while I can code the app pretty easily, I need to actually run it on an opentrons for testing, because simulate and execute do things quite differently when it comes to threading.

Yep. The hard part I remember was figuring out the systemd for restarting the service upon restarts (something about the underlying linux system they were using) but I think Tim figured that one out.

in python these are ABCs (abstract base classes). every device category has one. you can find the one for liquid handlers here: pylabrobot/pylabrobot/liquid_handling/backends/backend.py at c598918946806107d0058aed62afa5d37e0a3935 · PyLabRobot/pylabrobot · GitHub
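For the Go analogy above, this is roughly the Python equivalent. A minimal illustration of the pattern (the real PLR backend ABC has many more methods, and the method names here are simplified): the frontend is written against the abstract class, and each machine provides an implementation.

```python
from abc import ABC, abstractmethod

class LiquidHandlerBackend(ABC):
    """Simplified stand-in for PLR's backend ABC: the frontend calls these
    methods, each machine subclass decides how to execute them."""

    @abstractmethod
    async def aspirate(self, ops, use_channels): ...

    @abstractmethod
    async def dispense(self, ops, use_channels): ...

class MyOTBackend(LiquidHandlerBackend):
    """A toy implementation; a real one would talk to the robot."""

    async def aspirate(self, ops, use_channels):
        return f"aspirating with channels {use_channels}"

    async def dispense(self, ops, use_channels):
        return f"dispensing with channels {use_channels}"

# Instantiating a subclass that is missing an abstract method raises
# TypeError, so incomplete backends fail at construction time, not mid-run.
backend = MyOTBackend()
```

This is close in spirit to Go interfaces, except satisfaction is declared (by subclassing) rather than structural, and missing methods are caught at instantiation rather than compile time.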

that looks correct. systemd is very nice to use

in python these are ABCs (abstract base classes). every device category has one. you can find the one for liquid handlers here

Perfect! I haven’t personally used them in python before, but it is good to know they are here and available.

I see that the move operations aren’t implemented. Is it normal to just have those as functions of the actual class implementation?

Another question:

I see that most lab modules, like thermocyclers or temperature decks, are controlled by PLR itself. Meanwhile, with Opentrons these are typically handled by the Opentrons itself. It’s a bit inconvenient to have a separate Pi for each module. My thought would be to have them controlled by the same backend Opentrons server, but at different endpoints so as not to interfere with the Opentrons liquid handler interface. Is that the right way to do it?

The move operations aren’t implemented on the OpentronsBackend for the reason mentioned above (there is no way to move channels along one single axis)

the way this works right now in plr is a bit hacky: you initialize the OT backend, which initializes the ot_api module with the robot IP and port. Then, backends like OpentronsTemperatureModuleBackend simply call into this module. Currently, the OT server already houses different endpoints. One server process is probably the simplest, and perhaps even required at the opentrons_execute layer. On the client side, one can either 1) have two backends taking the IP of the server or 2) have one super-backend that implements both LiquidHandlerBackend and TemperatureControllerBackend. I think 1 is cleaner. Perhaps there is a third option.
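Option 1 could look something like this. All class names, the port, and the endpoint paths are hypothetical (the default robot-server port is shown as 31950, but check your setup): the point is just that both backends take the same host and carve out their own endpoints on the one server process.

```python
# Sketch of option 1: two independent backends pointed at the same on-robot
# server, each owning its own endpoints. Names and paths are illustrative.

class RobotClient:
    """Shared connection info; each backend builds its own endpoint paths."""
    def __init__(self, host: str, port: int):
        self.base_url = f"http://{host}:{port}"

class OTLiquidHandlerBackend:
    def __init__(self, host: str, port: int = 31950):
        self.client = RobotClient(host, port)

    def commands_url(self) -> str:
        return f"{self.client.base_url}/commands"

class OTTemperatureModuleBackend:
    def __init__(self, host: str, port: int = 31950):
        self.client = RobotClient(host, port)

    def set_temperature_url(self) -> str:
        return f"{self.client.base_url}/modules/temperature"

# Same robot, same server process, two backends that don't interfere:
lh = OTLiquidHandlerBackend("10.0.0.5")
temp = OTTemperatureModuleBackend("10.0.0.5")
```

Compared to a super-backend, this keeps each device category behind its own ABC, so a protocol using only the temperature module doesn’t drag in liquid handling at all.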

there is no single “right” way to do it I think. it’s up for discussion

Hello! This is a bit off topic. I wanted to share some thoughts, as a hardware developer, about the first post.

Given its aim, I feel like PLR is overly judgy of the hardware. It is too opinionated for an agnostic front-end.

I might argue that it is not the OT2 API that was constrained. If I am guessing correctly, it may instead have been that the APIs on the STAR and EVO did not offer a good abstraction for operations in the first place, and PLR (PyHamilton at the time) compensated.

After all, it would have been easier to just command “aspirate from tube1 and pour in well3”, instead of spatially modelling every aspect of the workspace.

To me PLR is about backends and decks - thin adapters for what the hardware offers - and exposing a uniform front-end to all users. This front-end is still not fully uniform (e.g. backend_kwargs and some idiosyncrasies around resources) and ironing that out would be more interesting from my perspective.

In this sense, updating PLR to the latest OT API sounds PLR-ly, and I’m glad to see this happening, while installing third-party software on the robot (to make it do what others do) sounds strange. Unless PLR’s aim is to take over the robots, the message is that others should be like Hamilton.

I think that PLR should focus on more abstract, non-spatial commands instead, and expect less from the machines. That is, to move away from the hardware instead of towards it. This is what I expect from an agnostic front-end.

Sorry for the intrusion. I hope these opinions can be helpful :slight_smile:

1 Like

thanks for the comments. these discussions are very welcome

yes it would have been easier, but much more constrained to only allow deck > plate > well. PLR is about hardware agnostic interfaces, but the goal is also to give people more control over their machines.

If we only focused on what is possible on every machine, that would constrain us to the union of limits of every machine (intersection of functionality). It would mean we can’t use the nice features of individual machines, like channels that move independently along the y-axis on Hamilton machines. Another example, how would we be able to use a tilt module on the OT if their software is limited and doesn’t support rotations?

The way I have phrased this in the past is that on the front end layer we make a guarantee that if someone writes a protocol, it will also work on other robots. At the same time, it should be possible to use unique hardware features as long as it is obvious that that happens (e.g. by seeing lh.backend). Users should choose whether they focus on maximum universality or choose lower-level, more powerful control.

STAR and EVO do have good abstractions: near-atomic commands that ignore the user’s resource model. They expose commands like “aspirate at x,y,z”. The parameters all describe physical reality, and that makes them universal and easy to use. With Opentrons, you are forced to adopt their design decisions.

For a while at the start we had some Hamilton specific parts in the frontend, but I’m pretty sure that’s all gone now. What part is still left?

What is an alternative to backend_kwargs? Which idiosyncrasies around resources? (Tecan has some weird parameters, but it’s not actively used or developed rn)

I suppose “expecting less” is exactly what we are doing: we expect to be able to perform the atomic hardware operations (tip pickup/drop, aspirate, dispense) within the Cartesian space. We don’t expect (or want) a limited resource model in between, yet that is what the OT at version 4.7.0 provides.

Hamilton did a great job with the firmware. It is very easy to use and do custom things because every single parameter in the firmware refers to a basic physical dimension. Since it’s near-optimal from a firmware api perspective, I think it’s unfair to critique improving the OT interface by saying it becomes more like someone who did a good job. In this case, OT should do exactly what Hamilton/Tecan did (and that is what they did with their new http api. Unfortunately, using the new http api version would require calibrating to top which is dumb.)

In bigger terms: the “aim of PLR” is to fix the software of lab automation. OT is doing a decent job, but we see some parts we can improve and we will.

In this sense, updating PLR to the latest OT API sounds PLR-ly, and I’m glad to see this happening, while installing third-party software on the robot (to make it do what others do) sounds strange. Unless PLR’s aim is to take over the robots, the message is that others should be like Hamilton.

While I would agree with this in theory (update against the newest API, don’t change software on the machine), I think I disagree in practice, in particular for Opentrons. They’ve been mostly unresponsive to problems, with very long windows for fixing bugs, plus they have often changed the API. It would have been especially easy to just use the JSON schema provided by Opentrons - just have PLR generate a “protocol” for a series of steps - but you do not have control over how it works (simulation steps, which schema the opentrons expects, etc).

By taking over the communication layer, we can ensure long term stability independent of what opentrons the company provides. Which I’m a little suspicious of, at this point, honestly.

I suppose “expecting less” is exactly what we are doing: we expect to be able to perform the atomic hardware operations (tip pickup/drop, aspirate, dispense) within the Cartesian space. We don’t expect (or want) a limited resource model in between, yet that is what the OT at version 4.7.0 provides.

What do you think calibrations in combination with the Opentrons resource model would look like? I.e., if I set up calibrations for my particular plate (which I do not have dimensions for - I use a Chinese supplier that makes ultra cheap plates), how would the API look for doing an aspirate at Cartesian-space coordinates, but against that limited resource model? I ask this especially because there are some nice things about the resource model - for example, I’ve noticed the OT2 isn’t completely stable, so plates calibrated at slot 3 actually DO have ever so slightly different placement than, say, plates at slot 7 (this mainly matters for 384 well plates).

In 4.7.0 they only let you calibrate at a single spot, so my protocols actually take this into account (384 well plates ALWAYS go in slot 3 or 6), whereas with the newer Opentrons APIs I think you can calibrate per slot (but can’t calibrate to bottom, frustrating).

1 Like

Thanks for the reply Rick!

I feel that this is paradoxical. If a user can write a hardware-specific protocol, you can’t guarantee that it will run elsewhere. This is confusing to me.

Instead if you write high-level functions for those “unique” features, the backends can clearly choose to support it or not. This is what you have done with 96-well aspirate commands, so why not do it for the rest?

I don’t have access to these robots, but this sounds just like a G1 GCODE command. It is the least abstraction you can get.

We actually discussed one in the recent PR I made about single-channel aspirations. I always come across more of them eventually.

Yes, just as any proprietary hardware does. That’s not a bad thing, you get what you buy. If this won’t do for you, then that’s what open source hardware is specifically meant to address.

Furthermore, they are not forcing me to buy their hardware, and they are not forcing PLR to support them. They have their own API after all, and PLR is competing with it.

This might be relevant to @koeng 's comment too.

You could also install my software stack on an OT2 and drive it however you want. With some work of course, but it might actually be less work than using theirs.

Creating a high-level command for each use case for backend_kwargs instead.

I have only discussed that “tips are not resources” so far. There should be more.

I think you misunderstand my point. Let the frontend explicitly map each feature to a high-level function, and let backends implement it if they can, however they want.

What is calibrating to top?

Even if you are right, I don’t think that judging the robot you are supporting is the right strategy in general.

It’s like PLR wants to be in the middle, but does not want to find middle ground.

I have spent too many hours recently, trying to bridge PLR’s deck to my deck. I thought it would make sense to populate my decks with PLR, but it has been almost impossible.

And, from my side, this is because PLR imposes constraints on what can be what, on how you define labware dimensionally, and many other decisions that were made for EVOs and TECANs, not for agnosticity.

I really feel this bias when writing things for PLR, and it is discouraging.

In PLR, there are a few ways to calibrate, depending on the exact aspect of the process that is wrong.

  • If the offset issue is with the machine, you would prefer to use an internal register on the machine so that it works across protocols. On both OT and STAR this is possible.
  • If a particular deck site is off, you can change the location of this deck site.
  • If a particular resource is off, you can change the location of this resource wrt the deck site.
  • If a channel is off, you can apply offsets to that channel when aspirating/dispensing (we should really have an internal dictionary in LiquidHandler to manage channel_idx->Coordinate offsets)

The entire resource model in plr is flexible and editable at runtime*. Positions are calculated just-in-time for an operation.

*with the exception of the current OT, actually, because resources have to be ‘mirrored’ to the server and can’t be edited, or at least I wasn’t able to.
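The channel_idx -> Coordinate dictionary floated in the last bullet could look roughly like this. This is a sketch only: the Coordinate class here is a minimal stand-in for PLR’s, and the offset store and function are hypothetical, not an existing LiquidHandler API.

```python
from dataclasses import dataclass

@dataclass
class Coordinate:
    """Minimal stand-in for PLR's Coordinate."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

    def __add__(self, other: "Coordinate") -> "Coordinate":
        return Coordinate(self.x + other.x, self.y + other.y, self.z + other.z)

# Hypothetical per-channel calibration store, keyed by channel index:
channel_offsets = {
    0: Coordinate(z=-0.2),  # channel 0 sits slightly low
    3: Coordinate(x=0.1),   # channel 3 drifts in x
}

def calibrated_location(channel: int, target: Coordinate) -> Coordinate:
    """Apply the channel's stored offset just-in-time, mirroring how PLR
    computes positions just-in-time for an operation. Channels without a
    stored offset fall back to a zero offset."""
    return target + channel_offsets.get(channel, Coordinate())

loc = calibrated_location(0, Coordinate(100.0, 50.0, 5.0))
```

Because the offset is applied at operation time rather than baked into the resource, the same resource definition stays grounded in physical reality while per-channel quirks are corrected separately.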

1 Like

It is almost an axiom that users should be able to express hardware-specific behavior in this framework, because the functionality that is shared across ALL liquid handling robots would be very small, essentially single-channel aspiration. Some robots don’t even have an 8 channel head.

What we try to do is make it obvious to the user when they are using the general layer and when they are using robot-specific methods.

Aiming for hardware agnosticity leads to good software design imo (e.g. LiquidHandler doing most tasks, so implementing new backends is easy). It also allows us to reuse components like labware, tilt modules, or simple sequences like serial dilution between robots. For simple protocols, it will be relatively easy to take them somewhere else. For less complex machines, switching hardware is even easier. With PLR, for the first time Hamilton users can share codebases between STARs and Vantages. Or soon the OT and MLPrep. These are the practical benefits that we aim to support. But not all hardware supports all operations, and in biology the specific hardware features are extremely important. I feel like this objective is subtly different from ‘hardware agnostic’ software like Python or JavaScript that is exactly the same on every computer.

And we should keep fixing them as you come across them. You’re right, tips are still in progress.

With PLR, I made the decision to start implementing and let standardization follow. This model has proven successful for building the internet (BBN beat AT&T), and it will be successful in lab automation as well. It is extremely difficult, arguably impossible, to spend months thinking about abstractions and universal design. By starting with the implementation, we learn what is actually useful and necessary, and we refine as we go.

Please make new threads if you find abstractions that don’t work for your robot. This has been the case for every new machine that we have added.

Isn’t that what Keoni did with his custom server? I don’t understand the obsession with ‘having to use’ the software provided by the manufacturer (PLR is actually one big project entirely built on replacing legacy software).

could you elaborate on what you mean by judging? I don’t follow.

Please share the specific things (in new threads) that the PLR deck model doesn’t support. If we missed things, we should fix that.

I feel like there are some underlying issues in this thread about PLR design and mismatches with your robot. I would find it more productive to discuss those cases specifically. Also I’m happy to just elaborate on why I made certain choices with PLR. To keep things organized, let’s keep this specific thread about the new OT integration, and create a new thread for each topic that we should discuss.

1 Like

I think I both disagree and agree to this. To me, the most important part of PLR is having an API that doesn’t suck for writing against any given machine, and that has a standard way of interacting with a bunch of machines. Hence, http is important to me. Personally, I am quite happy with just using opentrons-specific abstractions, and having to completely rewrite a protocol for a different liquid handler with fundamentally different abstractions if needed (LLMs should be quite good at this, if we give em examples).

In this way, I’m happy to abandon all higher-level functionality to just ensure I can use reasonable software tools to make that higher level functionality. On the flip side, I realize that people are less ok with that, hence wanting to support the main PLR interface.

I think you misunderstand my point. Let the frontend explicitly map each feature to a high-level function, and let backends implement it if they can, however they want.

My preference here would be have the frontend ONLY guarantee robotic integration. We know you’ll have to change a lot of smaller aspects when translating across hardware or liquid handlers - but I don’t think that PLR is built to be a compiler, which this would literally be (compile a protocol against a certain set of machines). We are too early for that, so lower-level commands work just fine.

I have spent too many hours recently, trying to bridge PLR’s deck to my deck. I thought it would make sense to populate my decks with PLR, but it has been almost impossible.

In the case that PLR just supports specific abstractions for your particular robot (like what I was describing), would that help solve this?

Interesting, so I should figure out how to access the internal calibrations. I think that is doable.

Coming from Opentrons - most of my labware or modules are simple (a 96 well plate, where the only thing that matters is the bottom) or Opentrons-specific (i.e., all the modules). Is it common for other robots to have different module support? Coming from where I am, I would challenge the usefulness of reusing modules, but honestly I might be completely wrong (not sure about your tilt module)

The irony here, of course, is that I DID buy open source hardware. The OT2 is (technically) open source. I went all in because of this. I then learned that open source hardware is just a feel good term without a manufacturer behind you. Hence, rewriting the software.

I’ll be more specific here - how important is it to you that the Cartesian aspect of PLR liquid handling is maintained in the Opentrons API? On one side, it makes more sense coming from a PLR abstraction if you already have that. On the other side, keeping the Opentrons-style constraints, it becomes a lot easier to write protocols coming from outside PLR (because they’re going to match the abstractions of Opentrons)

From a “getting new users” perspective, this would let more people who are checking it out quickly translate an Opentrons protocol to use plr. Hell, we could even just implement the Opentrons interface itself from a few simple commands, if that is useful. From a more pure software standpoint, it does kind of muddy the hardware agnosticism you’re going for (which hasn’t been useful to me yet, but maybe that is just because I don’t have it)

2 Likes

On the contrary. I’d prefer replacing the software and firmware on the OT2 with my own, and such things. On the other hand, I am a manufacturer, so I prefer my software of course. :slight_smile:

It has nice data schemas, shares state through a standalone database, it constantly interacts with a web UI, and I like its definition of state a bit more than PLR’s. In a way, this software already covers what PLR does and more, but it is missing a python front-end that others will be familiar with.

I think that you judge OT negatively (e.g. extremely constrained, or “dumb” in a certain aspect) because it doesn’t fit 100% in PLR, and not because it’s objectively worse. But perhaps I’ve misread you.

OT’s software is open, but the hardware is not, and has never really been. The open parts are the onboard RPi and Smoothieware board, but those are not OT’s designs. You get the STEP file for a sheet metal case and a deck, that’s it.

Not even the OT1 was open source hardware, it was “available STL models” in a git repo. That’s as open source as a compiled executable is.

Folks at OSHWA and GOSH take the open hardware term seriously, just as several truly open hardware manufacturers do. I don’t think its a feel good term, its just that some companies use it for openwashing.

I’ll now withdraw and let you work. Thanks for the chat!

I dunno, kinda seems like they’ve posted their PCB files and such, which is more than just a Pi and Smoothieboard. It’s a little unfair to say that the OT1 wasn’t open source considering nobody asked for more than the STLs and it could have just been an honest mistake (and I suspect it was just a mistake). I know several OT1s that were actually created from their open source files (regardless of whether these are STLs or not), which is more than I can say about, well, literally any other “open source” lab hardware system.

Personally, I don’t trust gatekeepers to open source like OSHWA and GOSH. Opentrons used to not be a faceless corporation. You coulda just asked for things like that back in the day in the true spirit of open source, and they would have provided them! It was great!

Now, not so much.

I respect your opinion, but I don’t believe that there are any gatekeepers at GOSH. OSHWA merely offers a definition of what open and source mean in open source hardware. You can self-certify as open source; you don’t need their approval as long as you fit a rather simple definition.

Actually they did, in these two they specifically ask for CAD sources of the OT1:

And yes, OT2 has schematics for PCBs, and some like the endstop boards are original. But hardware is also the mechanical parts, especially the pipettes, and all of the integrated labware equipment.

Which ones do you know?

likely using the GUI: this is just the mapping of motor values to their coordinate model. It should be possible to write your own tool if you wanted.

there are a few parameters that fully define a plate, and we found the only way to create a reliable library of labware was through looking at manufacturer specs + custom calibration. Many of the manufacturers’ libraries are calibrated somehow to their specific software (not even the robot), not to physical reality. As Camillo put it: “this is a house of cards”. As we talked about in person, users expect to apply offsets here and there, which makes truly autonomous research impossible. How will Chat know what offset is needed? We just need good definitions grounded in physical reality and to go from there. Being able to share this labware library is obviously doable and needed.

very important to me, and I think it will be important to you as well. If only for the calibration mentioned above: with a resource model that you manage, you can send the pipettes to the exact locations that you want.

I don’t see another way. Not sending coordinates means you’re sending identifiers. Sending identifiers means you must have defined what those identifiers are. Is that resource model as flexible as plr’s? Can we easily edit it at runtime? Fundamentally, it’s duplicating work.

on the user-level (LiquidHandler), it will be nice and will obviously use identifiers instead of Coordinates (unless the user needs them). Look at the existing liquid handling api. Users won’t care how the backend works under the hood. I’m sure Chat will be able to translate protocols.

adding onto this, with Hamilton/VENUS we found it’s a mistake to make plr more like the legacy software people are used to. It leads to bad api decisions. We now only look at physical reality and first principles. Projects that provide python interfaces to venus already exist (like pyhamilton). Perhaps opentrons is slightly different because it already has a Python API, but I still think we shouldn’t try to make PLR similar - rather, just look at what makes sense.

1 Like