In the absence of Device, where do higher-level tasks like volume and tip tracking live?
Will they have to be reimplemented for every single … Device? (Not having a name for that highest level class is very confusing and makes even this communication hard)
Regarding organisation: it is all about usage (even when there are edge cases). Most people buy devices based on the features they most associate them with: for plate reading of any sort they want to find plate readers, for microscopy microscopes, and for liquid handling the same.
It would be incredibly tedious to have to look through all folders just to find all the different plate readers that PLR supports and compare their codebases.
Resources are actually the same, but we’ve been discussing packaging them up into a big database for a while to solve this, with a simple interface on the docs pages for easily searching through them.
The model fails on an accuracy level but works on a human level. And ultimately humans are using automation. If we do want to have a device-centric, accurate-to-features model then we need a higher level search system for devices which enables searching devices based on features … that could work I think?
So tips are only tracked per device?
That’s not useful to me anymore now that we have off-lh storage of tips to resupply the lh. We need workcell-wide state tracking.
The fact that microscopes are currently in the plate reading directory was always a mistake. Microscopes by definition take images (i.e. via cameras as sensors, e.g. CMOS), while plate readers take point measurements (usually via a PMT) without directly creating a larger data structure over at least two physical dimensions (usually x-y for microscopes, but it can also be z-, t-, and λ-dimensions, etc.). I.e. there is a fundamental difference between these two categories (in addition to their detection-mode differences [abs, fluorescence, luminescence]) by which they should have been divided from the start.
What I meant was that when someone is searching for a machine to buy/use they look for devices that are sold for that main feature (e.g. absorbance plate reading or imaging).
Correctly executed, that is what PLR provides in its current codebase and that works well for both software developers searching/importing backends, and interested parties to see what PLR provides (like the supported devices page on the docs)
That being said, if we have a database of supported devices with search functions, then I actually agree with the true-to-feature separation by manufacturer.
The most important part is to find an organisation that is usable by PLR users.
If separation by manufacturer is not intuitive then we need a helper structure that makes it usable.
the current cytation 5 is a plate reader + imager, cytation 1 is just an imager. you were saying cytation should be in plate reader, but it sounds like you changed your mind and are now in favor of by manufacturer
Just to summarize then where I think this could land:
The capability folders (plate_reading, imaging, spectrophotometry, liquid_handling, etc.) stay as they are and continue to hold the universal frontends.
The concrete device classes move to a separate devices/ (or machines/) folder, which can be organized internally by vendor. The vendor name wouldn’t need to be part of the public import: devices/__init__.py can re-export everything so users simply write from pylabrobot.devices import Cytation5. That also solves the BioTek → Agilent issue: a folder rename and a small change in devices/__init__.py, without breaking user-facing imports.
One personal note: PLR’s hardware-agnostic architecture feels like more than just a software design choice. Being able to swap a CLARIOstar for a SpectraMax without rewriting protocols is the point. So while vendor folders make sense as internal organization, keeping them invisible to users feels important. That’s probably a personal preference more than a blocking concern, and the __init__.py pattern addresses it well enough.
My concern was mainly that moving to vendor organization might replace the capability folders as the primary entry point, but I don’t think that’s actually possible given that universal frontends need a vendor-neutral home by definition.
yes. the detection mode /sensor type already hints at the data structure: a PMT gives you a point measurement (a scalar), a CMOS produces a spatial data structure over physical dimensions (x–y, potentially z/t/λ), a photodiode array yields a spectrum, an ion trap a mass spectrum.
That seems aligned with the distinction you’re making between point measurements and data structures over physical dimensions. At the same time, the schema really sits at the method level rather than the frontend level: a plate reader can return a point measurement, a spectrum, or even a time series depending on the read, and some instruments (e.g. liquid handlers, peelers) don’t return measurement data at all but just execute an action and might return an event status.
So it might be worth thinking about being a bit more explicit about the detection mode and resulting data structure each method returns, to help keep capability boundaries and serialization aligned as we add new capabilities.
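To make that method-level idea concrete, here is a hedged sketch of what such a per-method schema could look like. Every name here (DetectionMode, DataShape, MeasurementSchema) is hypothetical, invented for this sketch, not existing PLR API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DetectionMode(Enum):
    ABSORBANCE = "absorbance"
    FLUORESCENCE = "fluorescence"
    LUMINESCENCE = "luminescence"

class DataShape(Enum):
    POINT = "point"              # scalar per well (e.g. PMT)
    SPECTRUM = "spectrum"        # 1D over wavelength (e.g. photodiode array)
    IMAGE = "image"              # 2D+ over physical dimensions (e.g. CMOS)
    TIME_SERIES = "time_series"  # kinetic reads
    EVENT = "event"              # action-only methods: status, no measurement

@dataclass(frozen=True)
class MeasurementSchema:
    mode: Optional[DetectionMode]  # None for action-only methods
    shape: DataShape

# a plate reader's kinetic absorbance read would then be annotated as:
kinetic_abs = MeasurementSchema(mode=DetectionMode.ABSORBANCE,
                                shape=DataShape.TIME_SERIES)
# while a peeler's peel action carries no detection mode at all:
peel = MeasurementSchema(mode=None, shape=DataShape.EVENT)
```

Annotating capability methods with something like this would keep the serialization format and the capability boundaries visibly in sync as new capabilities are added.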
I used AI to clean up the presentation but I’ve been mulling this over for a while. I’d be interested in your thoughts on this method. I may be way off base but if you’re not embarrassing yourself occasionally you’re probably not doing enough!
————————————
Proposal: Capability-Based Machine Interfaces
I’ve been thinking about the fundamental tension in the current proposals: we’re trying to model what machines ARE, but what we actually need in protocols is what machines CAN DO.
The Core Insight
Lab automation has a limited action space:
Move liquid (aspirate/dispense)
Move plates (pick up, transport, place)
Control temperature
Shake
Read plates
etc.
The number of possible actions is small and finite. The number of machine configurations is infinite.
Current Problem
When we write class STAR(LiquidHandler, Arm), we’re saying “A STAR IS both these things.” But:
Most liquid handlers don’t have arms
Some machines have 2 arms
Some incubators have temperature control, others don’t
Where does “plate reading” fit if a Cytation also does liquid handling?
Proposed Alternative: Capabilities
Instead of inheritance, machines expose capabilities:
class STAR(Resource, Machine):
    def __init__(self, backend: STARBackend):
        super().__init__(name="STAR", backend=backend)
        # What this machine CAN DO
        self.liquid_handling = LiquidHandlingCapability(backend)
        self.plate_moving = ArmCapability(backend, arm_type="iswap")
        # Could have multiple: self.core_gripper = ArmCapability(backend, arm_type="core")
Benefits
Optional features are natural:
class Cytomat(Resource, Machine):
    def __init__(self, backend, with_environment=True):
        self.storage = PlateStorageCapability(backend)
        self.temperature = TemperatureCapability(backend) if with_environment else None
        # Same class works for both configurations
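To show how protocol code would read against such capability attributes, here is a runnable sketch with illustrative stubs. None of these class or method bodies are real PLR API; they only demonstrate the access pattern:

```python
import asyncio

class LiquidHandlingCapability:  # illustrative stub
    def __init__(self, backend):
        self._backend = backend

    async def aspirate(self, well, vol):
        return f"aspirate {vol} uL from {well}"

class ArmCapability:  # illustrative stub
    def __init__(self, backend, arm_type):
        self._arm_type = arm_type

    async def move_plate(self, src, dst):
        return f"move plate {src} -> {dst} via {self._arm_type}"

class STAR:
    def __init__(self, backend=None):
        self.liquid_handling = LiquidHandlingCapability(backend)
        self.plate_moving = ArmCapability(backend, arm_type="iswap")
        self.temperature = None  # this machine has no temperature control

async def protocol(machine):
    steps = [await machine.liquid_handling.aspirate("A1", 100),
             await machine.plate_moving.move_plate("deck", "reader")]
    if machine.temperature is not None:  # optional feature, no isinstance checks
        steps.append(await machine.temperature.set_temperature(37))
    return steps

steps = asyncio.run(protocol(STAR()))
```

Optional features become a plain attribute check rather than an isinstance test against a class hierarchy.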
late to the discussion. it is a very heavy yet very much needed one i feel. i have seen a similar problem before actually, for example a device that actually has more than one purpose: the microplate reader, which can also be a shaking incubator as well as a heater shaker, since most of them support kinetics.
i do not understand the architecture as deeply as Rick or Camillo, but generally i feel like Proposal 2 is better and has a defined architecture. Proposal 1 seems like patching, while 2 seems scalable (but indeed, it is radical). 2 makes a cleaner mental model as it turns the machine frontends explicitly into capabilities attached to a resource. those images from Camillo and Rick help a lot, and i think developers really need this to understand the beta version.
the api sure will change and one could argue that the api might be more verbose (e.g. star.lh.aspirate instead of star.aspirate), but i believe this would also help developers intuitively see what other attributes a machine has: machine.frontend_attribute.capability.
also just an observation, i feel like this is moving from single-frontend machines to multi-frontend machines (which is arguably a workstation), and this is closer to a workcell (which is usually different machines with multiple frontends). i believe this will help developers define custom workcells more easily with the new architecture.
I came around the new convention and also agree it will be very useful.
A transition path does have to be created and communicated clearly, and a lot of details have to be worked out first - ideally PLR would have more PLR definitions at that point to disambiguate terminology and avoid confusion.
I do think though that we cannot call device features (liquid handling, shaking, temperature control, absorbance plate reading, …) “frontend” anymore because they are no longer the user front-facing interface (that would be the Device), but are instead just features of the device.
Building on Option 2 and the great discussion so far, I want to add four connected points. I put this together by studying how other instrument control frameworks handle similar problems (Bluesky, QMI, EPICS, QCoDeS, MicroManager, ScopeFoundry, ACQ4, the labscript suite, and others), and with my LLM to synthesize things quicker.
Apologies if this is too long or if some of it might be out of scope, but happy to discuss any of it. The four points:
a concurrency concern that the new architecture introduces (maybe already raised here)
a proposal for how drivers declare capabilities via duck typing (similar to what Keoni mentioned in his post here)
a proposal for how drivers declare observable state via signals (related to the duck typing)
a folder structure that follows from the capabilities architecture.
1. A concurrency note for shared-wire capabilities
Option 2 gives STAR two capabilities on one driver:
Both capabilities share one USB wire. If two async tasks call star.liquid_handling.aspirate() and star.arm.move_plate() simultaneously, commands collide on the wire. I think we need a lock at the driver level, one transaction at a time:
Both capabilities route through STARConnection.transact(). The lock is invisible to capabilities — they just call driver methods. This is the same pattern QMI uses for shared instrument connections.
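A minimal sketch of that lock, assuming an asyncio stack. STARConnection and transact are the names used above, but the bodies here are simulated stand-ins, not real USB I/O:

```python
import asyncio

class STARConnection:
    def __init__(self):
        self._lock = asyncio.Lock()
        self._last: bytes = b""

    async def transact(self, command: bytes) -> bytes:
        # one write+read transaction at a time on the shared USB wire;
        # capabilities just call driver methods and never see the lock
        async with self._lock:
            await self._write(command)
            return await self._read()

    async def _write(self, command: bytes) -> None:
        self._last = command  # stand-in for the real USB write

    async def _read(self) -> bytes:
        return b"ack:" + self._last  # stand-in for the real USB read

async def demo():
    conn = STARConnection()
    # two capabilities firing concurrently still serialize on the wire,
    # so each response matches its own command
    return await asyncio.gather(conn.transact(b"aspirate"),
                                conn.transact(b"move_plate"))

responses = asyncio.run(demo())
```

Without the lock, a second task could slip its write between another task's write and read, pairing commands with the wrong responses.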
For the Cytation the situation is different: one driver but two independent physical connections (FTDI for plate reading, Spinnaker for imaging). No lock is needed there; the connections don’t share a wire, and sequencing is enforced by await in protocol code. Also, the capabilities are not yet separated anyway, but in the future we might need separate plate_reading (or motion) and imaging capabilities.
2. Duck typing for drivers — Protocols at the driver boundary
Keoni raised something interesting in post #5:
Rick’s response was that LiquidHandler is more than a spec — it provides shared logic like validation and volume tracking that can’t live in a pure Protocol. That’s correct. But I think duck typing belongs one level lower: at the driver, not at the capability.
The problem with ABC at the driver level:
class ByonoyAbsorbance96Driver(PlateReaderBackend):  # forced to inherit
    async def read_fluorescence(self, ...):
        raise NotImplementedError  # silent lie — hardware can't do this
The type checker sees no problem. The user hits a runtime error deep inside a protocol run.
The proposal: drivers declare what they can do, nothing more.
Drivers implement only what the hardware supports, no inheritance, no NotImplementedError:
# devices/byonoy/absorbance96/driver.py
class ByonoyDriver:  # no PlateReaderBackend parent
    absorbance = SignalR(unit="OD")  # honest declaration

    async def read_absorbance(self, plate, wavelength): ...
    # read_fluorescence simply does not exist
The capability checks at call time:
await byonoy.plate_reading.read_fluorescence(...)
# → AttributeError: 'ByonoyDriver' does not support 'fluorescence'
# Available signals: ['absorbance']
Instead of a silent NotImplementedError, a clear message before it ever reaches the driver.
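One way such a check could work is a capability that proxies attribute access to the driver. All names here are hypothetical sketches of the idea, not proposed final API:

```python
import asyncio

class PlateReadingCapability:
    def __init__(self, driver):
        self._driver = driver

    def __getattr__(self, name):
        # only called for attributes the capability itself doesn't define
        method = getattr(self._driver, name, None)
        if method is None:
            available = [a for a in vars(type(self._driver))
                         if not a.startswith("_")]
            raise AttributeError(
                f"{type(self._driver).__name__!r} does not support {name!r}. "
                f"Available: {available}")
        return method

class ByonoyDriver:  # implements only what the hardware supports
    async def read_absorbance(self, plate, wavelength):
        return 0.42  # dummy value for the sketch

cap = PlateReadingCapability(ByonoyDriver())
```

Here cap.read_absorbance resolves to the driver method, while accessing cap.read_fluorescence raises the descriptive AttributeError immediately, before any command reaches the driver.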
We are currently working on a driver for the NanoDrop — and this is exactly the device that prompted thinking about this architecture. The NanoDrop measures absorbance on single samples, not plates. It is not a plate reader, but it shares the photometry capability with CLARIOstar and Byonoy. Forcing it into PlateReaderBackend would be the wrong model. With duck typing it simply implements CanReadAbsorbance and gets the photometry capability directly — no forced inheritance, no wrong category.
The capability layer (with its shared logic, validation, volume tracking) stays exactly as Rick designed it. Duck typing only replaces the ABC at the driver boundary.
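The NanoDrop case can be sketched with a typing.Protocol. CanReadAbsorbance is the name used above, but its definition here is assumed, not an existing PLR class:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class CanReadAbsorbance(Protocol):
    async def read_absorbance(self, target, wavelength): ...

class ByonoyDriver:  # no forced inheritance
    async def read_absorbance(self, plate, wavelength):
        return 0.42  # dummy value for the sketch

class NanoDropDriver:  # single samples, not plates; same structural capability
    async def read_absorbance(self, sample, wavelength):
        return 0.10  # dummy value for the sketch

# the capability layer can check structure, not ancestry:
byonoy_ok = isinstance(ByonoyDriver(), CanReadAbsorbance)
nanodrop_ok = isinstance(NanoDropDriver(), CanReadAbsorbance)
```

Both drivers satisfy the same protocol purely by shape, so the NanoDrop gets the photometry capability without being mislabeled as a plate reader. (Note that runtime_checkable isinstance only checks for the presence of the method, not its signature; static type checkers verify the rest.)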
3. Signals — alongside Protocols
Bluesky introduced the idea that instruments don’t just execute commands, they also continuously expose readable state: a temperature, a pressure, a busy flag, a last measurement value. Bluesky calls these “readings” from a Readable device. The same concept appears in TANGO (Attributes), EPICS (Process Variables), and ophyd-async (SignalR/SignalRW/SignalX). This is useful beyond just driving hardware, it is the foundation for telemetry, monitoring, live UI, and background data acquisition.
I’d like to propose we adopt the same pattern in PLR. Next to Protocols (for commands), drivers also declare Signals, observable device state:
class SignalR: ...   # read-only measurand: absorbance, temperature
class SignalRW: ...  # read-write parameter: temperature setpoint
class SignalX: ...   # trigger: open_drawer, start_shaking

class ByonoyDriver:
    absorbance = SignalR(unit="OD")
    # fluorescence not declared — not a lie, just absence
Signals and Protocols are complementary. Both are declaration-based, not inheritance-based. Together they replace the ABC at the driver boundary, the capability layer itself can keep whatever structure makes sense.
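A minimal sketch of what a SignalR declaration could carry (semantics assumed; declared_signals is a hypothetical helper): the descriptor records a unit, learns its own attribute name via __set_name__, and makes observable state discoverable by introspection:

```python
class SignalR:
    def __init__(self, unit: str):
        self.unit = unit
        self.name: str = ""

    def __set_name__(self, owner, name):
        # called automatically at class creation time,
        # e.g. name == "absorbance" on ByonoyDriver
        self.name = name

class ByonoyDriver:
    absorbance = SignalR(unit="OD")
    # fluorescence is simply not declared

def declared_signals(driver_cls):
    """List the signals a driver honestly declares."""
    return {n: s for n, s in vars(driver_cls).items()
            if isinstance(s, SignalR)}

signals = declared_signals(ByonoyDriver)
```

This is what telemetry, monitoring, or a live UI could iterate over, without any inheritance relationship between drivers.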
4. Folder structure
the big change will be that the frontend no longer lives in the same folder as the backend for the machines. instead, the device lives there, and capabilities is a new folder, which was previously the frontend.
plate_reading/ stays exactly as it is, for now. no breaking change, but it could later become a combination of photometry/ and motion/ (the composite). photometry/ is new and is where a new device like a NanoDrop could land.
here is a concrete sketch of how I envision the “legacy” module people can use for backwards compatibility when moving to the new API (I called it “beta 2” above; in reality it will be 0.2.0b2)
# new: pylabrobot/hamilton/star/driver.py
class STARDriver:
    def request_configuration(self):
        # actual implementation
        ...

    head96: Head96  # new API, actual implementation
    # etc.

# new: pylabrobot/hamilton/star/head96.py
class Head96:
    def aspirate96(self, ...):
        # actual implementation
        ...
    # etc.

# old: pylabrobot/legacy/liquid_handling/backends/hamilton/star.py
class STARBackend:
    """
    Exactly the same API as current plr:main ("beta 1", 0.2.0b1).
    Uses new API as implementation so we can share the code,
    but keep API for users the same as before.
    """

    def __init__(self) -> None:
        self.driver = STARDriver()  # uses new API

    def request_configuration(self):
        # calls into new API
        return self.driver.request_configuration()

    def aspirate96(self, ...):
        self.driver.head96.aspirate96(...)  # calls into new API

# users change import from pylabrobot.X.Y.Z to pylabrobot.legacy.X.Y.Z; rest of API remains the same
# or they use the new API directly
maybe we should even have this on the io objects? This way backends do not have to think about it in the base case (although we might still want that if there are relevant protocol level locks such as write + read being matched.)
io.Socket and InhecoSiLAInterface actually already have this, as well as the InhecoIncubatorShakerStackBackend.
Locks are actually already needed, strictly speaking, since people could theoretically call two backend methods together even if they are in one object right now.
No lock needed there,
actually, since FTDI is a serial protocol, and the cytation has a “serial” protocol on top of that (write + read always match), we should have locks there…
agree, and this is actually also the case for byonoy plate readers. They either do only luminescence or absorbance. PlateReader as it exists right now definitely can’t be in this next version if we wanna do it right.
We have to break up plate reader into separate classes such as AbsorbanceReader/AbsorbanceReaderBackend etc, exactly as you propose.
What is Protocol in your draft is simply ABC in the current code (the current abstract backends also declare capabilities - the problem is that these are not nicely defined, like lh containing arm stuff, and them not being composable)
as for
class ByonoyDriver:  # no PlateReaderBackend parent
    absorbance = SignalR(unit="OD")  # honest declaration

    async def read_absorbance(self, plate, wavelength): ...
    # read_fluorescence simply does not exist
Why not:
class ByonoyDriver(CanReadAbsorbance):
or
class ByonoyDriver(AbsorbanceReaderBackend):
    temperature = SignalR
This is very interesting. I hadn’t considered what we would do for thermometers, I only thought about temperature controllers.
I think something like device.temperature_sensor.get() makes a lot of sense…
one thing I am not 100% decided on is whether we should have a devices folder and then also a resources folder separately? It is easy to imagine resources also being a universal folder like capabilities, with the actual definitions just going into the vendor module: