Updating PLR API for machine interfaces discussion

why does this need a Device class? I like having the Machine class to manage a machine’s lifecycle

The specific classes for individual machines like STAR and Cytation10 will combine the Machine (and capabilities) and Resource

We can probably even lose Machine since all work will be done by actual implementations like STAR

I don’t see why we need to have a Device class or what it would do

In the absence of Device, where do higher-level tasks like volume and tip tracking live?

Will they have to be reimplemented for every single … Device? (Not having a name for that highest level class is very confusing and makes even this communication hard)

Regarding organisation: it is all about usage (even when there are edge cases). Most people buy devices based on the features they most associate them with: for plate reading of any sort they want to find plate readers, for microscopes microscopes, and for liquid handlers the same.

It would be incredibly tedious to have to look through all folders just to find all the different plate readers that PLR supports and compare their codebases.

Resources are actually the same but we’ve been discussing packaging them up into a big database for a while to solve this, and have a simple interface on the docs pages for easily searching through them.

same as right now: volume tracking for containers lives in the resource model, for channels inside LiquidHandler

we can still have this on the website and similar to resources have a system to find them

I don’t understand why people keep saying this when there are specific examples of that model failing

The model fails on an accuracy level but works on a human level. And ultimately humans are using automation. If we do want to have a device-centric, accurate-to-features model then we need a higher level search system for devices which enables searching devices based on features … that could work I think?

So tips are only tracked per device?

That’s not useful to me anymore now that we have off-lh storage of tips to resupply the lh. We need workcell-wide state tracking.

it does not:

cytation 1 is a microscope, not a plate reader

cytation 5 is a plate reader to humans, plus a microscope

yes filter by capability on the website, very easy

when they are mounted, yes, but the current system for this works quite well and is not a part of this refactor, so not a part of this discussion

it would introduce a huge web of dependencies

now suddenly plate reading would depend on microscopy because the Cytations are in the plate reader module

what if we had another machine that you consider a microscope that depends on plate reading?

it leads to circular dependencies and clearly does not work

are there better ideas than by manufacturer?

I think I wasn’t expressing myself clearly:

The fact that microscopes are currently in the plate reading directory was always a mistake. Microscopes by definition take images (i.e. via cameras as sensors, e.g. CMOS), while plate readers take point measurements (usually via a PMT) without directly creating a larger data structure over at least 2 physical dimensions (usually x-y for microscopes, but can be z-, t-, and lambda-dimensions etc). I.e. there is a fundamental difference between these two categories (in addition to their detection mode differences [abs, fluorescence, luminescence]) which they should have been divided by from the start.

What I meant was that when someone is searching for a machine to buy/use they look for devices that are sold for that main feature (e.g. absorbance plate reading or imaging).

Correctly executed, that is what PLR provides in its current codebase, and that works well both for software developers searching for/importing backends and for interested parties seeing what PLR supports (like the supported devices page on the docs)

That being said, if we have a database of supported devices with search functions, then I actually agree with the true-to-feature separation by manufacturer.

The most important part is to find an organisation that is usable by PLR users.

If separation by manufacturer is not intuitive then we need a helper structure that makes it usable.

the current cytation 5 is a plate reader + imager, cytation 1 is just an imager. you were saying cytation should be in plate reader, but it sounds like you changed your mind and are now in favor of by manufacturer

Maybe not.

Just to summarize then where I think this could land:

The capability folders (plate_reading, imaging, spectrophotometry, liquid_handling, etc.) stay as they are and continue to hold the universal frontends.

The concrete device classes move to a separate devices/ (or /machines) folder, which can be organized internally by vendor. The vendor name wouldn’t need to be part of the public import: devices/__init__.py can re-export everything so users simply write from pylabrobot.devices import Cytation5. That also solves the BioTek → Agilent issue: a folder rename and a small change in devices/__init__.py, without breaking user-facing imports.
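To illustrate the re-export idea, here is a runnable toy that simulates the package layout with types.ModuleType; the module names and class are hypothetical stand-ins, not actual PLR paths:

```python
import sys
import types

# Simulated "devices/biotek/cytation.py": the concrete class lives in a vendor module.
class Cytation5:
    pass

vendor_mod = types.ModuleType("devices.biotek.cytation")
vendor_mod.Cytation5 = Cytation5
sys.modules["devices.biotek.cytation"] = vendor_mod

# Simulated "devices/__init__.py": the re-export hides the vendor path.
devices_pkg = types.ModuleType("devices")
devices_pkg.Cytation5 = sys.modules["devices.biotek.cytation"].Cytation5
sys.modules["devices"] = devices_pkg

# User code never mentions the vendor; a BioTek -> Agilent rename would only
# change the re-export line above, not this import.
from devices import Cytation5 as UserFacing
print(UserFacing is Cytation5)  # → True
```

The point of the toy is only that the binding happens in one place: user imports go through the package `__init__`, so vendor folders stay an internal detail.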

One personal note: PLR’s hardware-agnostic architecture feels like more than just a software design choice. Being able to swap a CLARIOstar for a SpectraMax without rewriting protocols is the point. So while vendor folders make sense as internal organization, keeping them invisible to users feels important. That’s probably a personal preference more than a blocking concern, and the __init__.py pattern addresses it well enough.

My concern was mainly that moving to vendor organization might replace the capability folders as the primary entry point, but I don’t think that’s actually possible given that universal frontends need a vendor-neutral home by definition.


I like that

100% agreed and it remains a core principle of PLR

however as we have learned through implementation: it is not the devices themselves that are interchangeable, but their capabilities

yes exactly

yes. the detection mode / sensor type already hints at the data structure: a PMT gives you a point measurement (a scalar), a CMOS produces a spatial data structure over physical dimensions (x–y, potentially z/t/λ), a photodiode array yields a spectrum, an ion trap a mass spectrum.

That seems aligned with the distinction you’re making between point measurements and data structures over physical dimensions. At the same time, the schema really sits at the method level rather than the frontend level: a plate reader can return a point measurement, a spectrum, or even a time series depending on the read, and some instruments (e.g. liquid handlers, peelers) don’t return measurement data at all but just execute an action and might return an event status.

So it might be worth thinking about being a bit more explicit about the detection mode and resulting data structure each method returns, to help keep capability boundaries and serialization aligned as we add new capabilities.

yes, a separate point @CamilloMoschner and @ben talked about during the meeting :slight_smile:

we can (should) align timing with beta 2 release, but the schema is a topic for a separate thread


I used AI to clean up the presentation but I’ve been mulling this over for a while. I’d be interested in your thoughts on this method. I may be way off base but if you’re not embarrassing yourself occasionally you’re probably not doing enough!

————————————

Proposal: Capability-Based Machine Interfaces

I’ve been thinking about the fundamental tension in the current proposals: we’re trying to model what machines ARE, but what we actually need in protocols is what machines CAN DO.

The Core Insight

Lab automation has a limited action space:

  • Move liquid (aspirate/dispense)

  • Move plates (pick up, transport, place)

  • Control temperature

  • Shake

  • Read plates

  • etc.

The number of possible actions is small and finite. The number of machine configurations is infinite.

Current Problem

When we write class STAR(LiquidHandler, Arm), we’re saying “A STAR IS both these things.” But:

  • Most liquid handlers don’t have arms

  • Some machines have 2 arms

  • Some incubators have temperature control, others don’t

  • Where does “plate reading” fit if a Cytation also does liquid handling?

Proposed Alternative: Capabilities

Instead of inheritance, machines expose capabilities:

class STAR(Resource, Machine):
    def __init__(self, backend: STARBackend):
        super().__init__(name="STAR", backend=backend)

        # What this machine CAN DO
        self.liquid_handling = LiquidHandlingCapability(backend)
        self.plate_moving = ArmCapability(backend, arm_type="iswap")
        # Could have multiple: self.core_gripper = ArmCapability(backend, arm_type="core")

Benefits

  1. Optional features are natural:

    class Cytomat(Resource, Machine):
        def __init__(self, backend, with_environment=True):
            self.storage = PlateStorageCapability(backend)
            self.temperature = TemperatureCapability(backend) if with_environment else None
            # Same class works for both configurations
    
  2. Multiple subsystems are clean:

    # EVO with two RoMa arms
    class EVO(Resource, Machine):
        def __init__(self, backend):
            self.arms = [
                ArmCapability(backend, arm_id=0),
                ArmCapability(backend, arm_id=1),
            ]

    # Use them explicitly
    await evo.arms[0].move_plate(plate, destination)
    read_plate_byonoy(arm=evo.arms[1], ...)

  3. Type-safe protocols:

    # Functions declare what they NEED, not what they ARE
    async def serial_dilution(aspirator: AspirateCapability, ...):
        await aspirator.aspirate(...)

    # Can pass any machine with that capability
    await serial_dilution(star.liquid_handling, ...)
    await serial_dilution(tecan.liquid_handling, ...)

Preserving PLR’s Core Principles

This maintains:

  • Digital twin: STAR is still a Resource + Machine

  • Backend swapping: Same capability, different backend

  • Clarity: The machine IS the thing; capabilities are its interface

Migration Path

This could be introduced gradually:

  1. Keep existing frontends as they are

  2. Internally refactor to use capabilities

  3. Expose machine.liquid_handling as a property that returns the capability

  4. Eventually lh.aspirate() delegates to lh._capability.aspirate()
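Steps 3 and 4 of the migration can be sketched as follows; the class and method names here are illustrative, not the actual PLR implementation:

```python
import asyncio

class LiquidHandlingCapability:
    """Holds the real logic; the old frontend becomes a thin shell around it."""
    def __init__(self):
        self.log = []

    async def aspirate(self, volume: float):
        self.log.append(("aspirate", volume))

class LiquidHandler:
    def __init__(self):
        self._capability = LiquidHandlingCapability()

    @property
    def liquid_handling(self):
        # Step 3: expose the capability as a property for new-style code.
        return self._capability

    async def aspirate(self, volume: float):
        # Step 4: the legacy method delegates to the capability.
        await self._capability.aspirate(volume)

lh = LiquidHandler()
asyncio.run(lh.aspirate(50.0))                  # old API
asyncio.run(lh.liquid_handling.aspirate(10.0))  # new API, same state underneath
print(lh._capability.log)  # → [('aspirate', 50.0), ('aspirate', 10.0)]
```

Because both paths hit the same capability object, existing protocols keep working while new code can address the capability directly.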


late on the discussion. it is a very heavy yet very much needed one i feel. i have seen a similar problem before actually: a device that can actually have more than one purpose. for example, the microplate reader, which can also be a shaking incubator as well as a heater shaker, since most of them support kinetics.

i do not understand the architecture as deeply as Rick or Camillo, but generally I feel like Proposal 2 is better and has a defined architecture. Proposal 1 seems like patching, while 2 seems scalable (but indeed, it is radical). 2 makes a cleaner mental model as it turns the machine frontends explicitly into capabilities attached to a resource. those images from Camillo and Rick help a lot, and i think developers really need this to understand the beta version.

the api sure will change, and one could argue that it might be more verbose (e.g. star.lh.aspirate instead of star.aspirate), but i believe this would also help developers intuitively see what other attributes a machine has: machine.frontend_attribute.capability.

also just an observation: i feel like this is moving from a single-frontend machine to a multi-frontend machine (which is arguably a workstation), and this is closer to a workcell (which is usually multiple machines, each with their own frontends). i believe this will make it easier for developers to define custom workcells with the new architecture.


thanks!

spot on, our workcell classes are very similar to this pattern :slight_smile:

wc.lh., wc.pr etc.


I came around the new convention and also agree it will be very useful.
A transition path does have to be created and communicated clearly, and a lot of details have to be worked out first - ideally PLR would have more PLR definitions at that point to disambiguate terminology and avoid confusion.

I do think though that we cannot call device features (liquid handling, shaking, temperature control, absorbance plate reading, …) “frontend” anymore because they are no longer the user front-facing interface (that would be the Device), but are instead just features of the device.

This does also bring PyLabRobot a lot more in alignment with the SiLA nomenclature standard (which is an interesting evolution [and different to the SiLA server/communication system]).

Building on Option 2 and the great discussion so far, I want to add four connected points. I put this together by studying how other instrument control frameworks handle similar problems (Bluesky, QMI, EPICS, QCoDeS, MicroManager, ScopeFoundry, ACQ4, the labscript suite, and others), and with my LLM to synthesize things quicker.

Apologies if this is too long or if some of it might be out of scope, but happy to discuss any of it. The four points:

  • a concurrency concern that the new architecture introduces (maybe already raised here)
  • a proposal for how drivers declare capabilities via duck typing (similar to what Keoni mentioned in his post here)
  • a proposal for how drivers declare observable state via signals (related to the duck typing)
  • a folder structure that follows from the capabilities architecture.

1. A concurrency note for shared-wire capabilities

Option 2 gives STAR two capabilities on one driver:

class STAR(ResourceHolder, Machine):
    def __init__(self):
        driver = STARDriver()
        self.liquid_handling = LiquidHandlingCapability(driver=driver, device=self)
        self.arm = ArmCapability(driver=driver, device=self)

Both capabilities share one USB wire. If two async tasks call star.liquid_handling.aspirate() and star.arm.move_plate() simultaneously, commands collide on the wire. I think we need a lock at the driver level, one transaction at a time:

class STARConnection:
    def __init__(self):
        self._usb = USB(id_vendor=0x08AF, id_product=0x8000)
        self.lock = asyncio.Lock()

    async def transact(self, cmd: bytes) -> bytes:
        async with self.lock:
            await self._usb.write(cmd)
            return await self._usb.read()

Both capabilities route through STARConnection.transact(). The lock is invisible to capabilities — they just call driver methods. This is the same pattern QMI uses for shared instrument connections.
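To make the serialization concrete, here is a self-contained toy (FakeWire and Connection are illustrative names, not PLR code) showing that routing everything through a locked transact() keeps at most one transaction in flight on the shared wire:

```python
import asyncio

class FakeWire:
    """Stands in for the USB transport; records how many writes overlap."""
    def __init__(self):
        self.in_flight = 0
        self.max_in_flight = 0

    async def write_read(self, cmd: bytes) -> bytes:
        self.in_flight += 1
        self.max_in_flight = max(self.max_in_flight, self.in_flight)
        await asyncio.sleep(0.01)  # simulate wire latency
        self.in_flight -= 1
        return b"OK:" + cmd

class Connection:
    def __init__(self):
        self._wire = FakeWire()
        self.lock = asyncio.Lock()

    async def transact(self, cmd: bytes) -> bytes:
        async with self.lock:  # one transaction at a time on the shared wire
            return await self._wire.write_read(cmd)

async def main():
    conn = Connection()
    # Two capabilities firing concurrently, both routed through transact()
    await asyncio.gather(conn.transact(b"aspirate"), conn.transact(b"move_plate"))
    return conn._wire.max_in_flight

print(asyncio.run(main()))  # → 1  (the lock serialized the transactions)
```

Without the `async with self.lock`, the same run would report an overlap of 2, i.e. interleaved commands on the wire.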

For the Cytation the situation is different: one driver but two independent physical connections (FTDI for plate reading, Spinnaker for imaging). No lock needed there, the connections don’t share a wire, and sequencing is enforced by await in protocol code. Also, the capabilities are not yet separated anyway, but in the future that separation might be needed: a plate_reading (or motion) capability and an imaging capability.

2. Duck typing for drivers — Protocols at the driver boundary

Keoni raised something interesting in post #5:

Rick’s response was that LiquidHandler is more than a spec — it provides shared logic like validation and volume tracking that can’t live in a pure Protocol. That’s correct. But I think duck typing belongs one level lower: at the driver, not at the capability.

The problem with ABC at the driver level:

class ByonoyAbsorbance96Driver(PlateReaderBackend):  # forced to inherit
    async def read_fluorescence(self, ...):
        raise NotImplementedError   # silent lie — hardware can't do this

The type checker sees no problem. The user hits a runtime error deep inside a protocol run.

The proposal: drivers declare what they can do, nothing more.

Protocols replace ABC at the driver boundary:

# capabilities/photometry/protocols.py
@runtime_checkable
class CanReadAbsorbance(Protocol):
    async def read_absorbance(self, plate, wavelength: int) -> list[list[float]]: ...

@runtime_checkable
class CanReadFluorescence(Protocol):
    async def read_fluorescence(self, plate, excitation_wavelength: int,
                                emission_wavelength: int, focal_height: float) -> list[list[float]]: ...

Drivers implement only what the hardware supports, no inheritance, no NotImplementedError:

# devices/byonoy/absorbance96/driver.py
class ByonoyDriver:                          # no PlateReaderBackend parent
    absorbance = SignalR(unit="OD")          # honest declaration

    async def read_absorbance(self, plate, wavelength): ...
    # read_fluorescence simply does not exist

The capability checks at call time:

await byonoy.plate_reading.read_fluorescence(...)
# → AttributeError: 'ByonoyDriver' does not support 'fluorescence'
#   Available signals: ['absorbance']

Instead of a silent NotImplementedError, a clear message before it ever reaches the driver.
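A minimal runnable sketch of the call-time check (names mirror the draft above; how a real capability layer would surface the error is an open detail):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class CanReadAbsorbance(Protocol):
    async def read_absorbance(self, plate, wavelength: int): ...

@runtime_checkable
class CanReadFluorescence(Protocol):
    async def read_fluorescence(self, plate, excitation_wavelength: int,
                                emission_wavelength: int, focal_height: float): ...

class ByonoyDriver:  # no inheritance; implements only what the hardware can do
    async def read_absorbance(self, plate, wavelength: int):
        return [[0.1]]

driver = ByonoyDriver()
print(isinstance(driver, CanReadAbsorbance))    # → True
print(isinstance(driver, CanReadFluorescence))  # → False

# The capability layer can refuse early, before anything touches the driver:
if not isinstance(driver, CanReadFluorescence):
    print("'ByonoyDriver' does not support 'fluorescence'")
```

One caveat worth keeping in mind: `runtime_checkable` isinstance checks only verify that the method names exist, not their signatures, so static type checking remains the stronger guarantee.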

We are currently working on a driver for the NanoDrop — and this is exactly the device that prompted thinking about this architecture. The NanoDrop measures absorbance on single samples, not plates. It is not a plate reader, but it shares the photometry capability with CLARIOstar and Byonoy. Forcing it into PlateReaderBackend would be the wrong model. With duck typing it simply implements CanReadAbsorbance and gets the photometry capability directly — no forced inheritance, no wrong category.

The capability layer (with its shared logic, validation, volume tracking) stays exactly as Rick designed it. Duck typing only replaces the ABC at the driver boundary.

3. Signals — alongside Protocols

Bluesky introduced the idea that instruments don’t just execute commands, they also continuously expose readable state: a temperature, a pressure, a busy flag, a last measurement value. Bluesky calls these “readings” from a Readable device. The same concept appears in TANGO (Attributes), EPICS (Process Variables), and ophyd-async (SignalR/SignalRW/SignalX). This is useful beyond just driving hardware, it is the foundation for telemetry, monitoring, live UI, and background data acquisition.

I’d like to propose we adopt the same pattern in PLR. Next to Protocols (for commands), drivers also declare Signals, observable device state:

class SignalR: ...   # read-only measurand: absorbance, temperature
class SignalRW: ...  # read-write parameter: temperature setpoint
class SignalX: ...   # trigger: open_drawer, start_shaking
class CLARIOstarDriver:
    absorbance   = SignalR(unit="OD",  description="Absorbance")
    fluorescence = SignalR(unit="RFU", description="Fluorescence intensity")
    luminescence = SignalR(unit="RLU", description="Luminescence intensity")
    temperature  = SignalR(unit="°C",  description="Plate temperature")
class ByonoyDriver:
    absorbance = SignalR(unit="OD")
    # fluorescence not declared — not a lie, just absence

Signals and Protocols are complementary. Both are declaration-based, not inheritance-based. Together they replace the ABC at the driver boundary, the capability layer itself can keep whatever structure makes sense.
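A minimal sketch of what the declaration side could look like; SignalR here is a bare metadata holder and declared_signals a hypothetical discovery helper (a real implementation would add reading and subscription):

```python
class SignalR:
    """Read-only signal declaration: carries metadata, holds no value itself."""
    def __init__(self, unit: str, description: str = ""):
        self.unit = unit
        self.description = description

def declared_signals(driver) -> dict:
    """Discover which signals a driver class declares via a class-dict scan."""
    return {name: attr for name, attr in vars(type(driver)).items()
            if isinstance(attr, SignalR)}

class ByonoyDriver:
    absorbance = SignalR(unit="OD")
    # fluorescence not declared: absence, not a NotImplementedError

print(sorted(declared_signals(ByonoyDriver())))  # → ['absorbance']
```

Because the declarations are plain class attributes, tooling (docs, a device search UI, telemetry) can enumerate them without instantiating hardware.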

4. Folder structure

the big change will be that, for machines, the frontend no longer lives in the same folder as the backend. Instead the device lives there, and capabilities/ is a new folder holding what was previously the frontend.

pylabrobot/
│
├── io/                           ← generic transports (unchanged)
│     usb.py
│     ftdi.py
│     serial.py
│     hid.py
│     socket.py
│
├── resources/                    ← digital twin (unchanged)
│
├── machines/                     ← Machine class (unchanged)
│     machine.py
│
├── capabilities/
│     ├── photometry/             ← ATOMIC
│     │     protocols.py          ← CanReadAbsorbance, CanReadFluorescence
│     │     signals.py            ← SignalR definitions
│     │     capability.py         ← PhotometryCapability
│     ├── imaging/                ← ATOMIC
│     │     protocols.py
│     │     signals.py
│     │     capability.py
│     ├── motion/                 ← ATOMIC
│     │     protocols.py
│     │     signals.py
│     │     capability.py
│     ├── liquid_handling/        ← ATOMIC (existing)
│     │     protocols.py
│     │     signals.py
│     │     capability.py
│     └── plate_reading/          ← COMPOSITE (existing, unchanged)
│           protocols.py
│           capability.py
│
└── devices/
      ├── __init__.py             ← re-exports all; users never see vendor paths
      ├── hamilton/
      │     └── star/
      │           device.py       ← STAR(ResourceHolder, Machine)
      │           driver.py
      │           connection.py   ← STARConnection (USB + asyncio.Lock)
      ├── bmg/
      │     └── clariostar/
      │           device.py       ← CLARIOstar(ResourceHolder, Machine)
      │           driver.py
      ├── biotek/
      │     └── cytation/
      │           device.py       ← Cytation5(ResourceHolder, Machine)
      │           driver.py       ← owns FTDI + Spinnaker connections internally
      └── thermo/
            └── nanodrop/
                  device.py       ← NanoDrop(ResourceHolder, Machine)
                  driver.py       ← uses photometry/ directly, not plate_reading/

plate_reading/ stays exactly as it is, for now. no breaking change, but it could later become a combination of photometry/ and motion/ (the composite). photometry/ is new and is where a new device like a NanoDrop could land.


(before responding to @vcjdeboer’s new post …)

here is a concrete sketch of how I envision the “legacy” module people can use for backwards compatibility when moving to the new API (I called it “beta 2” above; in reality it will be 0.2.0b2)

# new: pylabrobot/hamilton/star/driver.py

class STARDriver:
  def request_configuration(self):
    # actual implementation
    ...
  
  head96: Head96 # new API, actual implementation

  # etc.

# new: pylabrobot/hamilton/star/head96.py

class Head96:
  def aspirate96(self, ...):
    # actual implementation
    ...

  # etc.

# old: pylabrobot/legacy/liquid_handling/backends/hamilton/star.py


class STARBackend:
  """
  Exactly the same API as current plr:main ("beta 1", 0.2.0b1)

  Uses new API as implementation so we can share the code,
  but keep API for users the same as before.
  """

  def __init__(self) -> None:
    self.driver = STARDriver()  # uses new API

  def request_configuration(self):
    # calls into new API
    return self.driver.request_configuration()

  def aspirate96(self, ...):
    self.driver.head96.aspirate96(...)  # calls into new API


# users change import from pylabrobot.X.Y.Z to pylabrobot.legacy.X.Y.Z, rest of API remains the same
# or they use the new API directly

yes strongly agreed.

maybe we should even have this on the io objects? This way backends do not have to think about it in the base case (although we might still want backend-level locks if there are relevant protocol-level constraints, such as write + read needing to be matched.)

io.Socket and InhecoSiLAInterface actually already have this, as well as the InhecoIncubatorShakerStackBackend.

Locks are actually already needed right now, strictly speaking, since people could theoretically call two backend methods together even if they are in one object right now.
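A sketch of what an io-level lock could look like; Socket here is a toy stand-in rather than the real io.Socket. The write and its matching read happen under one lock acquisition, so a reply can never be consumed by the wrong caller:

```python
import asyncio

class Socket:
    """Toy io object owning its own lock; backends need not manage it."""
    def __init__(self):
        self._lock = asyncio.Lock()

    async def write_read(self, cmd: bytes) -> bytes:
        async with self._lock:
            # write and the matching read are paired under the same lock hold
            await asyncio.sleep(0.001)  # simulate the device round trip
            return b"ACK:" + cmd

async def main():
    io = Socket()
    # Two backend methods called concurrently; replies stay matched to commands.
    return await asyncio.gather(io.write_read(b"a"), io.write_read(b"b"))

print(asyncio.run(main()))  # → [b'ACK:a', b'ACK:b']
```

Putting the lock at this layer covers the base case automatically; protocol-level locks would only be needed on top when a backend has multi-transaction invariants.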

No lock needed there,

actually since FTDI is a serial protocol, and the cytation has a “serial” protocol on top of that (write + read always match), we should actually have locks there…

agree, and this is actually also the case for byonoy plate readers. They either do only luminescence or absorbance. PlateReader as it exists right now definitely can’t be in this next version if we wanna do it right.

We have to break up plate reader into separate classes such as AbsorbanceReader/AbsorbanceReaderBackend etc, exactly as you propose.
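A sketch of that split; the class names follow the suggestion above, but the method signatures and the fake backend are hypothetical:

```python
import asyncio
from abc import ABC, abstractmethod

class AbsorbanceReaderBackend(ABC):
    @abstractmethod
    async def read_absorbance(self, wavelength: int) -> list: ...

class AbsorbanceReader:
    """Frontend exposing only absorbance; a luminescence-only device never
    has to stub out methods its hardware lacks."""
    def __init__(self, backend: AbsorbanceReaderBackend):
        self.backend = backend

    async def read_absorbance(self, wavelength: int) -> list:
        return await self.backend.read_absorbance(wavelength)

class ByonoyAbsorbance96Backend(AbsorbanceReaderBackend):
    async def read_absorbance(self, wavelength: int) -> list:
        return [[0.1] * 12 for _ in range(8)]  # fake 96-well reading

reader = AbsorbanceReader(ByonoyAbsorbance96Backend())
data = asyncio.run(reader.read_absorbance(450))
print(len(data), len(data[0]))  # → 8 12
```

A combined instrument like a CLARIOstar would then implement several of these narrow backends instead of one monolithic PlateReaderBackend.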

What is a Protocol in your draft is simply an ABC in the current code (the current abstract backends also declare capabilities - the problem is that these are not nicely defined, like lh containing arm stuff, and that they are not composable)

as for

class ByonoyDriver:                          # no PlateReaderBackend parent
    absorbance = SignalR(unit="OD")          # honest declaration

    async def read_absorbance(self, plate, wavelength): ...
    # read_fluorescence simply does not exist

Why not:

class ByonoyDriver(CanReadAbsorbance):

or

class ByonoyDriver(AbsorbanceReaderBackend):

temperature = SignalR

This is very interesting. I hadn’t considered what we would do for thermometers; I had only thought about temperature controllers.

I think something like device.temperature_sensor.get() makes a lot of sense…

one thing I am not 100% decided on is whether we should have a devices folder and then also a resources folder separately. It is easy to imagine resources also being a universal folder like capabilities, with the actual definitions just going into the vendor module:

pylabrobot/
  capabilities/
    ...
  resources/
    ...
  hamilton/
    star/
      ...
    labware/
      ...

plate_reading/ ← COMPOSITE (existing, unchanged)

why keep this? why wouldn’t composition be done at the Device layer at the end?
