mt.scale.measure_weight would be the most consistent. Also, many machines later turn out to have more than one functionality. For example, kx2.arm.move might sound redundant, but these machines optionally have barcode readers and the existence of kx2.barcode_reader makes kx2.arm make more sense.
An alternative idea is to have one “primary” interface for each, like kx2: Arm and mt: Scale and clario_star: PlateReader, with the “secondary” interfaces being attributes like kx2.barcode_reader and clario_star.shaker. This would allow shortcuts for the primary methods kx2.move, scale.measure_weight, etc. Where I see this getting into problems is with devices like the Cytation that have dual-primary use of PlateReader and Microscope.
the proposal is actually the other way around: to make Resources have front ends. The model is what the user instantiates first, and then the “front ends” are accessed through attributes like star.lh and star.arm.
So resources have a “has-a” relation to these “front ends”/machine capabilities.
I was thinking that LiquidHandler and PlateReader and such will still be called front ends. Now each machine just has a set of front ends depending on its capabilities.
I want to figure out what makes sense first. The API design is not complete yet. Once we find something that makes sense it will almost explain itself. Right now things are difficult to explain because they are not defined.
Breaking existing code is unfortunately unavoidable because LiquidHandler right now is the interface to everything, and we are moving Arm out of that.
Yes, we will probably want to do a versioned beta release (beta 1) right before making this change. After that, beta 1 will not be maintained anymore and we will work on beta 2 (this new api design)
My idea for the transition is to provide the existing classes in pylabrobot.legacy while moving all implementations elsewhere. This way people can change their imports from pylabrobot.X to pylabrobot.legacy.X until beta 3.
There are two paths, and I imagine we will be using a mix of both:
Backends like STARBackend can simply inherit from ArmBackend and LiquidHandlerBackend
We can have “adapter” objects like EVOArmAdapter(ArmBackend):
which is actually how the evo backend already works!
easy to imagine something similar for the STAR and other machines.
these thin objects keep a reference to the backend. They can use the backend as it currently works with the backend defining all firmware commands, OR we can have the adapter define the commands and just use backend.send_command for communication.
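A sketch of both variants side by side (all class names and the `ABS` command string are made up for illustration; `send_command` stands in for whatever transport the real backend exposes):

```python
class MachineBackend:
  """Hypothetical shared backend. send_command stands in for the real
  serial/USB transport; read_absorbance for a full firmware method."""

  async def send_command(self, cmd: str) -> str:
    return f"ok:{cmd}"  # stand-in for real device I/O

  async def read_absorbance(self, wavelength: int) -> str:
    return await self.send_command(f"ABS {wavelength}")


class ForwardingAdapter:
  """Variant 1: the backend defines all firmware commands; the thin
  adapter keeps a reference and only forwards the call."""

  def __init__(self, backend: MachineBackend):
    self._backend = backend

  async def read_absorbance(self, wavelength: int) -> str:
    return await self._backend.read_absorbance(wavelength)


class DefiningAdapter:
  """Variant 2: the adapter defines the firmware command itself and
  only uses backend.send_command for communication."""

  def __init__(self, backend: MachineBackend):
    self._backend = backend

  async def read_absorbance(self, wavelength: int) -> str:
    return await self._backend.send_command(f"ABS {wavelength}")
```

In variant 2 the shared backend stays tiny (just transport), and each adapter owns the firmware vocabulary for its capability.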
I agree - this “down the line” extendability is very neat!
I believe a primary interface will always be necessary:
many commands don’t really fall into clear “machine feature” categories (mem-read, measurements in particular)
these should be callable from the primary interface
I also just found:
There are commands that are used for multiple machine features: CLARIOstar.read_absorbance() is an “AbsorbancePlateReader” feature but it is also a “Shaker” feature … how would the new proposal deal with this?
I like this - as long as there is a clear transition path to help PLR users consciously upgrade - i.e. without any nasty surprises - this sounds good to me
The Cytation is worth using as a concrete stress test, because the Cytation 10, not yet supported in PLR, makes this even sharper. It adds confocal imaging on top of widefield, plus optional dispensing, shaking, and gas control (just like the 5 and 1 I think) depending on how the instrument was purchased. Neither PlateReader nor Microscope is clearly primary, and capabilities vary per hardware configuration.
What we have today:
pr = ImageReader(name="PR", size_x=0, size_y=0, size_z=0,
                 backend=CytationBackend())
await pr.setup(use_cam=True)
# imaging
res = await pr.capture(well=(1,2), mode=ImagingMode.BRIGHTFIELD, ...)
# plate reading - also on the same object
data = await pr.read_absorbance(wavelength=434)
This works because someone wrote all of it and knew what belonged there. But `pr` is named after the instrument, while `ImageReader` is named after just one of its capabilities; and yet `pr.read_absorbance` works too. The naming already reveals the tension: one object, two capabilities, and no clean way to express that in the current architecture.
what option 1 would look like:
class Cytation10(
  Resource,
  PlateReader,
  WideFieldImager,
  ConfocalImager,   # new in Cytation 10
  LiquidDispenser,  # optional module
  Shaker,           # optional module
  GasController,    # optional module
):
  ...
This inheritance line gets unwieldy quickly. And there is no clean way to express “this lab’s Cytation 10 has confocal and shaking but not gas control”: you’d need separate subclasses for every hardware configuration.
With the adapter pattern as mentioned:
A thin object wired to the shared backend that exposes one capability. The same pattern applied to the Cytation:
The Cytation 5 and 10 share `CytationBackend` but declare different capability sets:
# Cytation 5
backend = CytationBackend()
cytation5 = Instrument(
  backend=backend,
  capabilities={
    "photometry": CytationPhotometryAdapter(backend),
    "widefield_imaging": CytationWideFieldAdapter(backend),
  },
)

# Cytation 10 - configured for this specific lab
backend = CytationBackend()
cytation10 = Instrument(
  backend=backend,
  capabilities={
    "photometry": CytationPhotometryAdapter(backend),
    "widefield_imaging": CytationWideFieldAdapter(backend),
    "confocal_imaging": CytationConfocalAdapter(backend),
    "shaking": CytationShakingAdapter(backend),
    # gas control not purchased - simply not declared
  },
)
A note on shaking: on the Cytation the shaker is part of the plate reading workflow; you shake during incubation steps, not independently. Whether shaking deserves its own top-level capability entry or lives inside photometry as a sub-capability is an open question. The capability dict can nest, so both are expressible. But it is a good example of why capability boundaries need to be defined carefully: they should reflect how the instrument is actually used, not just what firmware commands exist.
Single capabilities are promoted to the top level automatically, so existing code keeps working.
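A sketch of how that promotion could work (the `Instrument` class and the `__getattr__` delegation are assumptions for illustration, not the actual implementation):

```python
class WeighingCapability:
  """Toy capability exposing one method."""

  def measure_weight(self) -> float:
    return 12.5  # placeholder reading


class Instrument:
  def __init__(self, backend, capabilities: dict):
    self.backend = backend
    self.capabilities = capabilities
    # a single capability is promoted to the top level
    self._primary = (
      next(iter(capabilities.values())) if len(capabilities) == 1 else None
    )

  def __getattr__(self, name):
    # only called for attributes not found normally; delegate to the
    # promoted capability if there is exactly one
    if self._primary is not None:
      return getattr(self._primary, name)
    raise AttributeError(name)


scale = Instrument(backend=None, capabilities={"weighing": WeighingCapability()})
scale.measure_weight()  # promoted: no scale.capabilities["weighing"] needed
```

With two or more capabilities there is no promotion, so ambiguous shortcuts like `cytation.read_absorbance` never exist on multi-capability machines.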
This matters beyond PLR internals. There is active work on FAIR metadata for research instruments (McCafferty et al.: https://doi.org/10.5281/zenodo.7759201). PIDinst identifies instruments but explicitly does not classify them (what they measure, what they can do). A consistent PLR capability namespace could fill that gap naturally, emerging from real implementations rather than top-down standardization. That is a contribution the broader research infrastructure community is actively looking for.
I will comment on the high level api design first, and then respond to the “shaking is a part of reading” (for cytation + clariostar)
the adapter would be a way to split up backends, but proposal 2 is first and foremost about splitting what are currently the frontends:
this snippet is close to proposal 2, but here the capabilities are machine specific. We actually want to make them universal objects like Photometer/PlateReader and Imager (so we can share code and have a universal interface). In the code snippet above, CytationPhotometryAdapter is specific to one machine, which makes writing universal code difficult. It also does not allow sharing logic between machines.
There are two ways of implementing proposal 2. I will clarify/summarize them below because the thread above is a little vague.
option 2a: with adapters
class Cytation10:
  def __init__(self, photometry=True, widefield=True, confocal=True, shaking=True, gas=False) -> None:
    self.backend = CytationBackend()
    if photometry:
      self.pr = PlateReader(backend=CytationPhotometryAdapter(cytation=self.backend))
    ...
where self.pr is a universal PlateReader, keeping its backend small and atomic, and providing universal logic shared across all plate readers. (For a plate reader that logic is pretty thin, but for arms for example we have to do resource model → coordinate conversions and for liquid handlers there are many state updates, which not every backend should have to reimplement.)
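As a sketch of what that universal, shared logic could look like (the state tracking here is illustrative, not the real PlateReader; for a plate reader this layer is thin, as noted):

```python
class PlateReaderBackend:
  """Abstract spec: any conforming backend implements read_absorbance."""

  async def read_absorbance(self, wavelength: int):
    raise NotImplementedError


class PlateReader:
  """Universal frontend: machine-agnostic logic shared by all plate
  readers, delegating hardware work to any PlateReaderBackend."""

  def __init__(self, backend: PlateReaderBackend):
    self.backend = backend
    self.last_read = None  # illustrative shared bookkeeping

  async def read_absorbance(self, wavelength: int):
    data = await self.backend.read_absorbance(wavelength)
    self.last_read = ("absorbance", wavelength)  # shared across all backends
    return data


class FakeBackend(PlateReaderBackend):
  """Stand-in backend returning canned data for the sketch."""

  async def read_absorbance(self, wavelength: int):
    return [[0.1, 0.2]]
```

Every backend gets this bookkeeping for free; for arms the shared layer would hold the resource model to coordinate conversions, for liquid handlers the state updates.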
With adapters, CytationPhotometryAdapter implements PlateReaderBackend and uses the Cytation as the interface/“back-backend”.
Going into more detail: the CytationPhotometryAdapter class pseudocode you gave includes
Here, CytationBackend.read_absorbance is still a method and the adapter only forwards the call. This is not strictly necessary. It would make sense for the CytationPhotometryAdapter to implement the method directly, i.e. the adapter defines the firmware command and only calls self._cytation.send_command (this nicely splits up backends).
(see the similarity between proposal 2 splitting ‘current front ends’ and adapters being the analogous way of splitting ‘current back ends’ :))
option 2b: just with backends
there is a simpler case, when the backend can inherit from all classes at once:
class CytationBackend(PlateReaderBackend, ...):
  ...

class Cytation10:
  def __init__(self, photometry=True, widefield=True, confocal=True, shaking=True, gas=False) -> None:
    self.backend = CytationBackend()
    if photometry:
      self.pr = PlateReader(backend=self.backend)
    ...
that would make sense when CytationBackend has direct methods for reading absorbance and such.
In both 2a and 2b:
Cytation10 is the class the user instantiates to work with that machine.
Cytation10.pr is a universal PlateReader. PlateReader takes any backend conforming to PlateReaderBackend
The Cytation10 class is responsible for instantiating the Cytation10Backend as well as the “front ends” PlateReader, Shaker, etc.
Reading would be like cytation10.pr.read_absorbance(...), shaking would be cytation10.shaker.shake (note: no cytation10.read_absorbance)
To the user using the Cytation10 interface this would look identical regardless of how the backends work. But using and writing backend interfaces would be slightly different, since under 2a Cytation10Backend might only have send_command and a few other methods with CytationPhotometryAdapter implementing the methods for actually reading plates. With 2b, it would be a big class implementing everything and potentially raising NotImplementedError for specific features.
In the new PLR, I think we can have a mix of a) adapters implementing abstract backends and b) machine backends directly implementing abstract backends, although there is something to be said for making the architecture uniform even for simple machines. The uniform architecture would mean (a): every machine has a dedicated adapter helper class to conform to a particular backend spec (like PlateReaderBackend). This makes the code more consistent, but also more complex for simple machines.
Where adapters are truly needed is when a backend has multiple instances of something, like multiple arms. When I introduced the adapter pattern in my post above, it was so that the adapter can have information about which arm on the backend to actually use:
with that snippet, EVOArmAdapter conforms to ArmBackend (having move_plate), but EVOBackend does not conform to ArmBackend because its EVOBackend.move_plate requires the arm_id parameter.
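A sketch of that binding (class names follow the thread; the method body and return values are made up):

```python
class EVOBackend:
  """Controls several arms; every motion method needs an arm_id, so the
  backend itself cannot conform to a single-arm ArmBackend spec."""

  async def move_plate(self, arm_id: int, position: str) -> str:
    return f"arm{arm_id}->{position}"  # stand-in for real motion


class EVOArmAdapter:
  """Binds one specific arm and thereby conforms to an ArmBackend-style
  interface: move_plate without an arm_id parameter."""

  def __init__(self, backend: EVOBackend, arm_id: int):
    self._backend = backend
    self._arm_id = arm_id

  async def move_plate(self, position: str) -> str:
    return await self._backend.move_plate(self._arm_id, position)


backend = EVOBackend()
left_arm = EVOArmAdapter(backend, arm_id=0)   # one adapter per physical arm
right_arm = EVOArmAdapter(backend, arm_id=1)
```

Each adapter instance is cheap; the expensive connection lives once in the shared backend.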
Reiterating other benefits: adapters also help break up code more nicely for complex machines, such as STARs (the backend is like 10k lines right now…) And yes in the case of a cytation, it doesn’t really make sense for the shared CytationBackend to talk about confocal things and we would rather put that in a separate class and have the Cytation10 class orchestrate things.
TLDR; adapters are necessary sometimes. We can make every current-backend work through adapters for consistency, at the cost of added complexity for simple machines.
shaking while reading: if shaking is truly not a standalone function, then yes it will have to be a backend kwarg for the read method.
But it is a good example of why capability boundaries need to be defined carefully, they should reflect how the instrument is actually used, not just what firmware commands exist.
I disagree: we should model the firmware at the most granular level possible as you can never be sure what users will want to do. What the machine supports, PLR should support. This does not mean we can’t provide utilities and convenience to make it nicer for “normal use cases” (in fact, that’s the role of front ends), but I do think we should split up backends by machine capability rather than “use case”. The goal of front ends IS to abstract use cases.
Thanks for the clarification. The backends don’t have to be split of course, but the frontends might be, especially when there are multiple of the same type. The Resource with frontend and backend as attributes (Proposal 2) makes sense.
From the user perspective, cytation10.pr.read_absorbance(…) is completely clear. Much cleaner with proposal 2. If I am correct, this might be the difference:
# ─── OLD ───────────────────────────────────────────────────────────────────
from pylabrobot.plate_reading import ImageReader
from pylabrobot.plate_reading import CytationBackend
pr = ImageReader(
  name="PR",
  size_x=0,
  size_y=0,
  size_z=0,
  backend=CytationBackend(),
)
await pr.setup(use_cam=True)
await pr.read_absorbance(wavelength=434)
await pr.capture(well=(1,2), mode=ImagingMode.BRIGHTFIELD, ...)
# ─── NEW ───────────────────────────────────────────────────────────────────
from pylabrobot.plate_reading import Cytation5
cytation = Cytation5(PlateReader=True, Imager=True)
await cytation.setup()
await cytation.pr.read_absorbance(wavelength=434)
await cytation.imager.capture(well=(1,2), mode=ImagingMode.BRIGHTFIELD, ...)
Here the use_cam=True is also dealt with nicely. The backend declaration can also move inside the machine class, so the user no longer needs to know about backends. The PlateReader=True, Imager=True flags also cleanly express which capabilities this specific instrument has installed.
From the developer perspective, the Resource and Machine classes are harder to grasp. The Resource class lives in the resources folder, which suggests labware, modules, and 3D-printed scaffolds rather than instruments. I did see your earlier post on Machine being a Resource subclass, but for new contributors writing a new machine, should we explicitly write class STAR(Machine) rather than class STAR(Resource)? Since Machine is the ABC designed exactly for this purpose (backend ownership and lifecycle), it seems like the more honest and explicit base class for any instrument.
If the frontend is now an attribute of a Machine, does it still need its own setup()/stop() lifecycle, or is (or was) that handled entirely by the Machine?
On naming, the word “frontend” becomes less intuitive when it is an attribute of a Machine rather than the top-level object. In the current architecture “frontend” makes sense as a pair with “backend”, the frontend faces the user, the backend faces the hardware. That is clean. But in Proposal 2 the Machine is now the thing facing the user, and the frontend sits one level deeper as an attribute. The word starts to feel slightly misplaced there.
Just thinking out loud, “capability” might map more naturally to how a scientist reads the code. A Cytation5 has plate reading and imaging capabilities. A STAR has liquid handling and arm capabilities. When you instantiate a machine you declare which capabilities it has. That language feels close to how you would describe an instrument in a methods section or a lab inventory.
It also handles the edge cases reasonably well. A standalone CLARIOstar is a machine whose single capability is plate reading. A scale is a machine whose single capability is weighing. The word does not imply sub-components the way “module” does, and it does not carry web development connotations the way “frontend” does.
That said, “frontend” is established in PLR, and changing it might have a real cost. It might also be that “frontend” and “capability” are just two valid framings of the same thing, frontend from the developer perspective, capability from the user/scientist perspective. Not sure this is worth resolving now, just wanted to raise it before the new architecture gets locked in.
The “machine” would still need to control the life cycle of the backend, since there will be multiple “front ends” in this new case. So Cytation5 for example would call CytationBackend.setup. It would also call PlateReader.setup and Imager.setup, but those two methods will need to assume the backend itself is already initialized and just do their respective setup procedures. For example, Imager needs to load the configured objectives. The “front ends” cannot call backend.setup again because there might be other “front ends” using the same backend, and backend.setup should only be called once. So backend life cycle is managed by the object that created it, the Machine.
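A sketch of that ownership (all setup bodies are placeholders; the counter only exists to show that backend.setup runs exactly once):

```python
class CytationBackend:
  """Stand-in shared backend; counts setup calls for the sketch."""

  def __init__(self):
    self.setup_calls = 0

  async def setup(self):
    self.setup_calls += 1  # real version opens the connection etc.


class PlateReader:
  def __init__(self, backend):
    self.backend = backend
    self.ready = False

  async def setup(self):
    # assumes the backend is already initialized; only does
    # reader-specific setup
    self.ready = True


class Imager:
  def __init__(self, backend):
    self.backend = backend
    self.objectives_loaded = False

  async def setup(self):
    self.objectives_loaded = True  # e.g. load the configured objectives


class Cytation5:
  """The Machine owns the backend lifecycle: backend.setup runs exactly
  once, then each frontend does its own feature-specific setup."""

  def __init__(self):
    self.backend = CytationBackend()
    self.pr = PlateReader(self.backend)
    self.imager = Imager(self.backend)

  async def setup(self):
    await self.backend.setup()  # once, owned by the machine
    await self.pr.setup()
    await self.imager.setup()
```

The frontends never call backend.setup themselves; the object that created the backend (the Machine) is the only one allowed to manage its life cycle.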
This is probably the biggest unsolved aspect of this api redesign.
Resources are actually every individual modeled object, including objects smaller than labware like wells. They all exist in a hierarchy tree. Yes there are many labware definitions, but a big part is just modeling reality.
Currently, front ends inherit from both Machine and Resource.
Inheriting from both might still be possible in the future.
The conflict is essentially between:
1. Separating them in code: having Machine be the first class, with an instance of Resource as an attribute, gives a clearer separation between the machine control system and the resource management system.
2. Machine and Resource have a 1:1 relationship and are fundamentally the same physical object, so it might not make sense to split them up. Machines ARE physical objects and therefore resources; they just happen to also have a life cycle and capabilities attached. (Machine-to-capability is a one-to-many relation, so that does have to be split up.)
Personally, I strongly prefer option 2 for its simplicity. It makes a lot of sense to me because I think in terms of physical objects, and because I see plr.resources as a low-level library that the rest of PLR is built on, rather than resources and machines existing in parallel:
However many users, in this thread and outside, find that model to be confusing, which is the main counterargument that I would have to Machines being Resources. I do not have a reasonable justification for it.
Going on a bit of a tangent, I am also thinking about where classes like Cytation and STAR should live after we split them into multiple frontends/capabilities. STAR can be in liquid_handling because you might say that’s its main power, but a cytation has two main powers: plate reading and microscopy so where should it go? To me it makes sense to introduce something like pylabrobot.biotek and pylabrobot.hamilton. In that case, we can have pylabrobot.hamilton.plates and pylabrobot.hamilton.star or pylabrobot.hamilton.machines.star or something like that. This is a nice split of the library: the lab-agnostic standard (capabilities, labware MODELS, io, etc.) and then specific machines and labware definitions organized per manufacturer. (this also makes it clearer in the future for manufacturers to maintain their own PLR compatible libraries outside of the PLR repo). I hope that change can also clarify to people why machines are resources.
Yes I like naming them capabilities.
Front ends are nice because they were literally the “front”-most part of the API in the past. In that sense the new front end would be “Cytation5” and “STAR”. It also describes the CS pattern of frontend/backend classes fairly well, although not as nicely as it did in the past, now that we have these capability classes in between. It becomes slightly more difficult to explain things when both the “frontends” and “backends” are machine specific, but we have already discussed that this is necessary (see first post) (the capabilities are still machine agnostic), and the explanation just follows reality at the end of the day.
A clario star also does co2 control, temperature control, shaking (maybe?), etc. … Scales I think are actually simple, but, as I wrote above, the model is extensible in case we ever find a weird scale.
PLR is a library for developers first and foremost, and I don’t want to introduce different names for the same thing.
No it’s a very good point, and definitely something we want to nail down before releasing it. (we can of course start implementation sooner if we need to.) I think we are making good progress.
I’m hesitant about organizing by manufacturer (e.g. pylabrobot.biotek, pylabrobot.hamilton). Vendor names are fragile and can change over time, BioTek → Agilent is already an example. Folder names may age poorly because of that.
In practice, scientists don’t think in terms of vendors or low-level capabilities. They think in terms of instrument identities and categories: STAR, Cytation, Orbitrap, Confocal, etc. In the lab, we like dedicated names.
Even if an instrument technically exposes multiple capabilities (imaging, spectrometry, motion), we still conceptualize it as a plate reader or a mass spectrometer. For example, the Cytation is generally thought of as a plate reader, even though it has imaging. Plate readers may also have spectrometry or motion, but that doesn’t mean we need to exclude the “plate reader” category and instead start using flags like spectrometry=True, motion=True as the primary organizing principle.
There is already a frontend category system (plate_reading, shaking, peeling, etc.), so I’m not sure reorganizing around vendor names adds much clarity. Ultimately, where a class lives in the folder structure may not be that important, taxonomies are partly a matter of taste, as long as the conceptual model remains clear and intuitive.
Why Machine? It is confusing to me because it already has a precise meaning in the current PLR v0-v1. But in PLR v2 that meaning changes.
Starting with naming that is well understood by the general public and new to PLR makes the transition a lot easier (especially for newcomers, whose entrance to PLR we want to make easy)
→ a new start for PLR terminology with definitions written down on the website from the start.
Most people I talk with also use Device for “programmable lab equipment” already.
(Machine seems like a more abstract, “pure” computer science term rather than an applied robotics term, the field we are working in)
the OSI vs TCP/IP story is strikingly similar to what’s happening in lab automation:
Few people remember that the telecommunications companies and the computing industry fought for years to kill the Internet and its TCP/IP family of protocols. They recognized the potential market and the need for a shared set of protocols, but they wanted to control everything.
Rather than adopt the Internet protocols, the industry set out on an ambitious program known as the Open Systems Interconnection (OSI). OSI was supported by all the major companies.
For years, [users] wrote that their networking plan was to use TCP/IP “until OSI becomes available.” Eventually they dropped the final clause.
One reason that OSI failed was that it attempted to standardize everything. With little operational experience to guide them, the standardization groups added more and more features. The early Internet was much more pragmatic. Its motto was “rough consensus and running code.” TCP/IP specified the network and transport protocols, but made no attempt to define the underlying network technology, while protocols for applications such as email were not standardized until there was practical experience with running code.
Ha, yes. I was thinking of the motion capability of the plate reader, but a plate reader always has motion internally (it moves the plate to read or image), and motion isn’t exposed in the plate_reading frontend; it’s handled in the backend. The frontend only exposes the photometric read methods.
So for Cytation1, something like:
Cytation1(imaging=True)
…is consistent with what’s exposed at the frontend level. That said: for Cytation1, imaging isn’t really an optional capability, it’s the core identity of the instrument, so we probably shouldn’t require imaging=True at instantiation.
The optional parts are the lab-specific modules (e.g. gas/CO₂ control, injectors, etc.). So something like:
Cytation1(gas=False, injection=False)
(or ideally a small config=/YAML that declares which modules are present) feels more accurate: the Machine has one backend, attaches multiple frontends, and the configuration determines which frontends are actually exposed.
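One possible shape for that configuration (a dataclass instead of YAML for the sketch; the module names and frontend classes are assumptions):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Cytation1Config:
  """Per-lab declaration of which optional modules are installed."""
  gas: bool = False
  injection: bool = False


class Imager:  # stand-in frontends for the sketch
  pass


class GasController:
  pass


class Injector:
  pass


class Cytation1:
  def __init__(self, config: Optional[Cytation1Config] = None):
    config = config or Cytation1Config()
    self.backend = object()  # stand-in for the shared CytationBackend
    self.imager = Imager()   # core identity: always present, never a flag
    # optional modules are only attached when the config declares them
    self.gas = GasController() if config.gas else None
    self.injector = Injector() if config.injection else None
```

A YAML file would just deserialize into this config object, so the per-lab hardware description lives in data rather than code.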
what is Device in that case? I don’t think Device needs to be a class in that diagram since STAR and Cytation5 can just inherit from Resource + Machine directly.
is there a better option than organizing it by manufacturer?
(or ideally a small config=/YAML that declares which modules are present) feels more accurate: the Machine has one backend, attaches multiple frontends, and the configuration determines which frontends are actually exposed.
I’d prefer not to organize by vendor, since those can change and don’t really reflect how scientists think about instruments. In practice, Cytation is historically and experimentally treated as a plate-based system, even if some variants only expose imaging.
A microscope can image a plate, and a Cytation can image a slide, but those edge cases don’t really change the primary experimental paradigm/category. Given that the current structure is already organized by capability/feature (e.g. plate_reading), that feels like a stable and vendor-neutral place for it to live. Folder structure is more about discoverability/maintenance than ontology.
Device would take over some important roles that Frontend has now:
the interface which users are interacting with to control & use a programmable lab equipment
the connection between the resource model/digital twin and the Driver - in the current PLR version the Frontend manages updates to the resource model based on actions taken by the Backend (future Driver)
major difference to Frontend is that Device gives up Driver access to its Features
if I read the above correctly, the need to enable and manage digital_twin<>driver updates has not been discussed in this new PLR architecture - I believe it requires highest level management, just as the previous highest level, Frontend, did
→ therefore requiring its replacement, the Device class, which merges the two worlds (digital twin & driver) into one simple interface