Updating PLR API for machine interfaces discussion

in PLR we have always had front-end classes as the main interface to certain machines: the reference to a Hamilton STAR is a LiquidHandler, to a Cytation a PlateReader, to a CPAC a TemperatureController, etc. Whether you control a STAR or an OT, the reference is always an instance of LiquidHandler.

As we implement more in PLR, two problems are becoming obvious:

problem 1: some machines do not clearly fit into just one of these categories - they act like several machines in one.

One example is that many liquid handlers have arms. I am working on adding a SCARA(Arm) class for the KX2 and PF400. I want to reuse the code I wrote in LiquidHandler.{pick_up,drop}_resource for these arms exactly as I do for the iSWAP (the integrated arm on STARs).

A second example of this first problem is the huge number of devices that do temperature control. We want to share the TemperatureController.wait_for_temperature method across all of them: actual heating plates like the CPAC, heater shakers, incubators, certain plate readers, etc.
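
The kind of logic we want to share is exactly this sort of polling loop. A minimal sketch of what a shared wait_for_temperature could look like (the backend contract and the tolerance/timeout parameters here are illustrative assumptions, not the actual PLR signatures):

```python
import asyncio

class TemperatureControllerBackend:
  """Illustrative minimal backend contract: only machine-specific calls."""
  async def set_temperature(self, temperature: float) -> None: ...
  async def get_temperature(self) -> float: ...

class TemperatureController:
  """Front end holding the shared logic every temperature-controlling machine reuses."""
  def __init__(self, backend: TemperatureControllerBackend):
    self.backend = backend

  async def wait_for_temperature(
    self, target: float, tolerance: float = 0.5,
    timeout: float = 600.0, poll_interval: float = 1.0,
  ) -> None:
    """Poll the backend until the reading is within `tolerance` of `target`."""
    elapsed = 0.0
    while elapsed < timeout:
      current = await self.backend.get_temperature()
      if abs(current - target) <= tolerance:
        return
      await asyncio.sleep(poll_interval)
      elapsed += poll_interval
    raise TimeoutError(f"temperature did not reach {target} within {timeout}s")
```

Every device that can report a temperature would get this polling/tolerance/timeout behaviour for free; only the two backend calls are machine-specific.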

We could think about LiquidHandler inheriting from Arm, just adding the liquid-handling functions, and Incubator inheriting from TemperatureController. However, in general liquid handlers do not have dedicated arms (the OT-2, Prep, Nimbus, etc.). This is ugly. LiquidHandler and Arm should really exist on the same level and not depend on each other.

problem 2: Machine front ends are Resources, but not every machine uses the same resource model, even if it is otherwise a very similar machine. The most obvious example is incubators: the SCILA has 4 drawers and positions, while Cytomats have tens of internal plate locations but only one transfer station.

(On the topic of “incubators”, they are actually just “plate movers” and oftentimes “temperature controllers”.)

→ it is becoming clear to me that the “simplest possible implementation” of having LiquidHandler be the frontmost interface (“front end”) for, for example, a STAR is not working anymore. It is simple, but not “possible”: it is too opinionated about certain machines. We need a dedicated STAR front end that somehow combines LiquidHandler and Arm. While one unified class per machine was the ideal, we learn through implementations what the standard needs to be.

Before explaining the two proposals, I want to give an example of how PLR being “universal” is currently used. We have many functions taking machines as arguments, like async def serial_dilution(lh: LiquidHandler, ...) and, for Byonoy reading:

def read_plate_byonoy(b: Byonoy, arm: Arm, plate: Plate, ...):
  # the Byonoy requires the plate sandwiched between the illumination and reader units
  arm.move(illumination_unit, parking_unit)  # clear the reader
  arm.move(plate, reader_unit)
  arm.move(illumination_unit, reader_unit)  # place illumination unit on top of the plate
  b.measure()  # measure absorbance

These are functions (not methods) so they can be shared across protocols and workcells (workcells are often classes).


For the two options of sharing machine front ends across actual machine interfaces:

Option 1: Machine resources inheriting from multiple “front ends”

This was developed together with @CamilloMoschner

In PLR, we have “composite front ends” like HeaterShaker(TemperatureController, Shaker). We could follow that pattern and have STAR(LiquidHandler, Arm).

Pseudocode example:

class STAR(Resource, LiquidHandler, Arm):
  def __init__(self, ...):
    self.backend = STARBackend(...)
    Arm.__init__(self, backend=self.backend, ...)
    LiquidHandler.__init__(self, backend=self.backend, ...)
    ...

# simple operations
star = STAR(...)
star.aspirate(...)
star.move_plate(...)

# backend specific methods
star.backend.special_method()

# functions
read_plate_byonoy(arm=star, ...)  # STAR inherits from Arm

class SCILA(
  Resource,
  TemperatureController,
  CO2Controller,
  AutomatedStorage
):
  ...

class Cytomat(
  Resource,
  TemperatureController,
  CO2Controller,
  AutomatedStorage
):
  ...
Where I see this running into problems is when a machine has multiple “machines” with one backend. For example, a Tecan EVO can have two independent gripper arms. The proposal for this is to use backend_kwargs:

# Evo inherits from Arm but has two arms
# use_arm is a backend_kwarg for EVOBackend.move_plate
read_plate_byonoy(evo, ..., use_arm=1)

This is extremely awkward, since we don’t even know where to send the backend_kwarg in read_plate_byonoy. You’d have to have read_plate_byonoy(..., arm_kwargs: dict, reader_kwargs: dict).

You can make the argument that other methods in the function will also need backend_kwargs, so we are already forced into this pattern. I think that is not necessarily the case if functions are properly written and if our abstractions for the actual machines are good. The Byonoy reading abstraction using a robotic arm is a good counterexample to backend kwargs being needed. It should be easy to express these easy things with PLR.

Additionally, it makes the following more difficult: some machines have optional features. For example, some Cytomats are just plate storage; they use the same API for that but lack environmental control. Would we have CytomatWithEnvironment(..., TemperatureController) and CytomatWithoutEnvironment(...)?

Option 2: Machine resources having “front ends” as attributes

This one is a more radical change.

Pseudocode example:

class STAR(Resource):
  def __init__(self):
    super().__init__(name="STAR", size_x=0, size_y=0, size_z=0)
    self.backend = STARBackend()
    self.lh = LiquidHandler(backend=self.backend)
    self.arm = Arm(backend=self.backend)

# simple operations
star = STAR()
star.setup()
star.lh.aspirate(...)
star.arm.move_plate(...)

assert star.lh._backend is star.backend  # True

# backend specific methods
star.backend.special_method()

# functions
read_plate_byonoy(arm=star.arm, ...)

This is nice when machines have multiple instances of a machine front end:

read_plate_byonoy(arm=evo.arms[1], ...)

For machines with optional controls such as cytomats, this is nice:

class Cytomat(Resource):
  def __init__(self, with_environment: bool = True, ...):
    super().__init__(name="Cytomat", size_x=0, size_y=0, size_z=0)
    self.backend = CytomatBackend()
    self.automated_storage = AutomatedStorage(backend=self.backend)
    if with_environment:
      self.temperature_controller = TemperatureController(backend=self.backend)
    else:
      self.temperature_controller = None
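
Client code can then feature-test with a plain None check instead of calling a method and catching NotImplementedError. A self-contained sketch of the pattern (the stub classes stand in for the real front ends):

```python
from typing import Optional

class CytomatBackend:
  """Stub; the real backend would speak the device protocol."""

class AutomatedStorage:
  def __init__(self, backend):
    self.backend = backend

class TemperatureController:
  def __init__(self, backend):
    self.backend = backend

class Cytomat:
  def __init__(self, with_environment: bool = True):
    self.backend = CytomatBackend()
    self.automated_storage = AutomatedStorage(backend=self.backend)
    # optional capability: the attribute is simply None when the hardware lacks it
    self.temperature_controller: Optional[TemperatureController] = (
      TemperatureController(backend=self.backend) if with_environment else None
    )

warm = Cytomat(with_environment=True)
cold = Cytomat(with_environment=False)
assert warm.temperature_controller is not None
assert cold.temperature_controller is None  # the feature test is a None check
```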

The goal of being “universal” in PLR is to make it easy to reuse code across machines: plate definitions, utility functions, potentially even entire protocols. However, a truly “universal” / identical code architecture is not feasible given how different these machines turn out to be. Again, what we should build are tools that are amenable to being shared across all machines.

This would be quite a radical change with either option. I think this is the right time to get it in before we do a versioned beta.

Curious to hear what you think.


I don’t know if this helps anything, but I’d count the X-gantry where the pipette channels live on a liquid handler as an “arm”. It is just a Cartesian-coordinate arm with pipettes attached that can also move plates using the CO-RE grippers. You can also have a STAR with multiple “arms”: a 96 head, a card gripper, or independent-channel arms. I’ve worked on a STAR with two X-gantry arms, somewhat more common on STARplus machines. (The firmware actually works to prevent them from crashing into each other, which is crazy.)

I think this backs up option #2?

yes, gantry arm.

in my post I was mostly speaking about “SCARA”-style arms, i.e. gripper arms that move plates.

so yes similar to what I described for evo above

core grippers

yes, this is a good point. not immediately sure how that would be modeled in option 2. maybe an optional .core_arm that is sometimes None? idk. Cam also brought this up

for core grippers we could have

with star.core_grippers(front_channel=6) as arm:
  read_plate_byonoy(arm=arm, ...)

and/or

star.get_core_grippers()
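
A sketch of what the context-manager variant could look like (CoreGripperArm and the tool pickup/return steps are hypothetical placeholders); the nice property is that the gripper tools get returned even if the protocol inside the block raises:

```python
from contextlib import contextmanager

class CoreGripperArm:
  """Hypothetical Arm-like wrapper around the CO-RE grippers."""
  def __init__(self, star, front_channel: int):
    self.star = star
    self.front_channel = front_channel
    self.active = False

class STAR:
  @contextmanager
  def core_grippers(self, front_channel: int):
    arm = CoreGripperArm(self, front_channel)
    arm.active = True  # stand-in for actually picking up the gripper tools
    try:
      yield arm
    finally:
      arm.active = False  # tools are returned even if the block raises

star = STAR()
with star.core_grippers(front_channel=6) as arm:
  assert arm.active and arm.front_channel == 6
assert not arm.active  # grippers were returned on exit
```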

I think this is a core problem I have some opinions on. I think Go does really well here with interfaces. The idea would be this:

  1. Every machine resource implements precisely what is required. Shared basic code can be imported. But for example, STAR simply implements functions of both Arm and LiquidHandler
  2. There are interfaces which each machine can satisfy.

The main advantage is that you get the kind of type checking required, but you don’t have to inherit from multiple “front ends” (you just implement whatever can be implemented). In read_plate_byonoy() you could just use read_plate_byonoy(byonoy, arm=evo.arms[0]) or read_plate_byonoy(byonoy, arm=star). Implementations are simplified in that you don’t use star.lh.aspirate(); you just do star.aspirate().

Here is Claude generating some ideas for what it could look like, which are quite nice:

from typing import Protocol, runtime_checkable

# --- Define the "interfaces" ---

@runtime_checkable  # allows isinstance() checks at runtime
class LiquidHandler(Protocol):
    def aspirate(self, resource, volume: float, **kwargs) -> None: ...
    def dispense(self, resource, volume: float, **kwargs) -> None: ...

@runtime_checkable
class Arm(Protocol):
    def move_plate(self, source, target, **kwargs) -> None: ...
    def pick_up_resource(self, resource, **kwargs) -> None: ...
    def drop_resource(self, resource, **kwargs) -> None: ...

@runtime_checkable
class TemperatureController(Protocol):
    def set_temperature(self, temp: float) -> None: ...
    def wait_for_temperature(self, temp: float, timeout: float = 60) -> None: ...

@runtime_checkable
class AutomatedStorage(Protocol):
    def store_plate(self, plate, **kwargs) -> None: ...
    def retrieve_plate(self, plate, **kwargs) -> None: ...


# --- Concrete machine: just implement the methods you support ---

class STAR:
    """STAR satisfies both LiquidHandler and Arm implicitly."""
    def __init__(self):
        self.backend = STARBackend()

    def aspirate(self, resource, volume: float, **kwargs) -> None:
        self.backend.aspirate(resource, volume, **kwargs)

    def dispense(self, resource, volume: float, **kwargs) -> None:
        self.backend.dispense(resource, volume, **kwargs)

    def move_plate(self, source, target, **kwargs) -> None:
        self.backend.iswap_move(source, target, **kwargs)

    def pick_up_resource(self, resource, **kwargs) -> None:
        self.backend.iswap_pick(resource, **kwargs)

    def drop_resource(self, resource, **kwargs) -> None:
        self.backend.iswap_drop(resource, **kwargs)


class OT2:
    """OT2 satisfies LiquidHandler only — no arm."""
    def __init__(self):
        self.backend = OT2Backend()

    def aspirate(self, resource, volume: float, **kwargs) -> None:
        self.backend.aspirate(resource, volume, **kwargs)

    def dispense(self, resource, volume: float, **kwargs) -> None:
        self.backend.dispense(resource, volume, **kwargs)


# --- The EVO problem: multiple arms ---

class EVO:
    """EVO satisfies LiquidHandler. Arms are separate objects."""
    def __init__(self):
        self.backend = EVOBackend()
        # Each arm is its own object satisfying the Arm protocol
        self.arms: list[Arm] = [
            EVOArm(self.backend, arm_id=0),
            EVOArm(self.backend, arm_id=1),
        ]

    def aspirate(self, resource, volume: float, **kwargs) -> None:
        self.backend.aspirate(resource, volume, **kwargs)

    def dispense(self, resource, volume: float, **kwargs) -> None:
        self.backend.dispense(resource, volume, **kwargs)


class EVOArm:
    """A single EVO arm — satisfies Arm protocol."""
    def __init__(self, backend, arm_id: int):
        self.backend = backend
        self.arm_id = arm_id

    def move_plate(self, source, target, **kwargs) -> None:
        self.backend.move_plate(source, target, arm=self.arm_id, **kwargs)

    def pick_up_resource(self, resource, **kwargs) -> None:
        self.backend.pick_up(resource, arm=self.arm_id, **kwargs)

    def drop_resource(self, resource, **kwargs) -> None:
        self.backend.drop(resource, arm=self.arm_id, **kwargs)


# --- Optional capabilities (the Cytomat problem) ---

class Cytomat:
    """Always satisfies AutomatedStorage. Optionally TemperatureController."""
    def __init__(self, with_environment: bool = True):
        self.backend = CytomatBackend()
        self._has_env = with_environment

    def store_plate(self, plate, **kwargs) -> None:
        self.backend.store(plate, **kwargs)

    def retrieve_plate(self, plate, **kwargs) -> None:
        self.backend.retrieve(plate, **kwargs)

    # Only meaningful if with_environment=True
    def set_temperature(self, temp: float) -> None:
        if not self._has_env:
            raise NotImplementedError("This Cytomat has no environmental control")
        self.backend.set_temp(temp)

    def wait_for_temperature(self, temp: float, timeout: float = 60) -> None:
        if not self._has_env:
            raise NotImplementedError("This Cytomat has no environmental control")
        self.backend.wait_temp(temp, timeout)


# --- Functions using protocols (like Go interface parameters) ---

def read_plate_byonoy(b, arm: Arm, plate) -> None:
    """Works with ANY object satisfying Arm."""
    arm.move_plate(plate, "illumination_parking")
    arm.move_plate(plate, "reader_unit")
    b.measure()

async def serial_dilution(lh: LiquidHandler, plate, volumes: list[float]) -> None:
    """Works with ANY object satisfying LiquidHandler."""
    for vol in volumes:
        lh.aspirate(plate, vol)
        lh.dispense(plate, vol)

def warm_up(tc: TemperatureController, target: float) -> None:
    """Works with ANY object satisfying TemperatureController."""
    tc.set_temperature(target)
    tc.wait_for_temperature(target)


# --- Usage ---

star = STAR()
evo = EVO()
cytomat = Cytomat(with_environment=True)
cytomat_cold = Cytomat(with_environment=False)

# STAR satisfies both LiquidHandler and Arm
serial_dilution(star, plate="plate1", volumes=[10, 20])
read_plate_byonoy(byonoy, arm=star, plate="plate1")

# EVO: liquid handling on the machine, arm is a separate object
serial_dilution(evo, plate="plate1", volumes=[10, 20])
read_plate_byonoy(byonoy, arm=evo.arms[1], plate="plate1")

# Cytomat with env satisfies TemperatureController
warm_up(cytomat, target=37.0)

# isinstance checks work at runtime thanks to @runtime_checkable
assert isinstance(star, LiquidHandler)  # True
assert isinstance(star, Arm)            # True
assert isinstance(evo, Arm)             # False — EVO itself isn't an Arm
assert isinstance(evo.arms[0], Arm)     # True

# Structural typing cannot see runtime flags: both Cytomats have the
# methods, so both satisfy the Protocol. You need a runtime guard (or a
# separate capability object) to tell them apart.
assert isinstance(cytomat, TemperatureController)       # True (has the methods)
assert isinstance(cytomat_cold, TemperatureController)  # also True!

I feel like this solves the problem of multiple instances of a machine front end. Rather than composite front ends, you have implementations of different interfaces. To me, the interfaces are much cleaner. One can also imagine how you would implement general functions with interfaces: you just define an interface and a function, and then you can type-check whether it can be used with a certain backend implementation.

thanks for comments!

I like protocols/interfaces, but I’m not sure they are the right choice here, because LiquidHandler is more than just a spec: it actually provides shared code like validation and volume-tracker updates. This code executes both before and after the actual operation (validation (universal), then the actual operation (specific), then state updates (universal)), so there is no nice opportunity for super() calls. That is why I chose backends rather than protocols or subclasses. (The backends do follow a protocol/spec, but I implemented that through an ABC.)
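
To make the “sandwich” concrete, here is a sketch of the shape being described (the tracker attributes and method names are illustrative, not the real PLR implementation): universal validation, then the backend-specific call, then a universal state update, with no single point where a super() call could inject the machine-specific part:

```python
import asyncio

class VolumeTracker:
  """Illustrative stand-in for PLR's volume tracking."""
  def __init__(self, volume: float):
    self.volume = volume

class Well:
  def __init__(self, volume: float):
    self.tracker = VolumeTracker(volume)

class LiquidHandler:
  def __init__(self, backend):
    self.backend = backend

  async def aspirate(self, well: Well, volume: float) -> None:
    # 1. universal validation, shared across all machines
    if volume <= 0:
      raise ValueError("volume must be positive")
    if volume > well.tracker.volume:
      raise ValueError("not enough liquid in well")
    # 2. machine-specific operation, delegated to the backend
    await self.backend.aspirate(well, volume)
    # 3. universal state update, shared across all machines
    well.tracker.volume -= volume

class StubBackend:
  async def aspirate(self, well, volume):
    pass  # real backend sends a firmware command here

well = Well(100.0)
asyncio.run(LiquidHandler(StubBackend()).aspirate(well, 30.0))
assert well.tracker.volume == 70.0
```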


star.lh.aspirate

yes, ugly for aspirate.

however, I am thinking:

  1. you can say lh = star.lh in scripts. most lh function calls are already inside functions that take lh as a parameter, and those won’t change.
  2. reminds me of the “should I async or not?” question. yes, 6 extra await characters make the API longer, but they simultaneously make the code a lot nicer overall. I chose to do what’s good for developers, not what’s shortest to type, and I do think that was the right call for asyncio. (this is becoming more important now that we have LLMs, which didn’t even exist when I wrote the asyncio update)

for other things, the “namespace” it gives is actually nice, like resource.temperature_controller.start() rather than the current .start_temperature_control(). just lh is ugly because aspirate already implies LH in this particular case (there are of course many similar cases though …)

I think I get it. ABCs were chosen because you could get that shared logic and implementation. I think I am mainly thinking from a server/client perspective rather than just a PLR implementation perspective. Basically this:

from pylabrobot.liquid_handling import LiquidHandler
from pylabrobot.liquid_handling.backends import STARBackend
from pylabrobot.resources import Deck

deck = Deck.load_from_json_file("hamilton-layout.json")
lh = LiquidHandler(backend=STARBackend(), deck=deck)

In this basic setup code, you load the deck from a JSON file and then create an instance of LiquidHandler with a STARBackend. This is how you get the convenient super() you were talking about above. The problem is that the LiquidHandler now directly accesses and changes the state of the deck in a non-contractual way: i.e., it directly accesses aspects of the deck.

Now if Deck were a Protocol, you could satisfy that interface from just a memory store by uploading a JSON, from a protobuf interface elsewhere, or from a JSON-append memory store. But because Deck is not an interface, it becomes very difficult to figure out how to plug another backend in for it.

This is the same with STARBackend(), actually: I would like to be able to access it over a network, but there is no interface for STARBackend, so I basically have to rewrite everything as “STARBackendAPI”, which is certainly not as nice as doing STARBackend('192.168.1.56').

So I think what I’m mainly feeling a pain point on is almost orthogonal: if there were nice interfaces for each component of PLR, like with a Protocol, I could easily figure out exactly what I need to implement on each side, and newer workcells that combine many pieces of hardware could easily slot in by defining certain operations that they do. With ABCs I have to implement all the functions and get runtime type checks rather than static type checks.

So basically my pain points might not be very relevant, because I am thinking about a different layer of interfacing with PLR, replacing the front ends themselves

I guess this discussion is going slightly off-topic, but since we do want to have the networked architecture soon, I’ll work through it here because it might be unexpectedly related. We also have a separate thread on networking: Networked interface for PLR.

overall I don’t think networking will make a big difference in terms of choosing option 1 vs 2. Options 1 and 2 are mostly about the user-facing python API and the default implementation we provide. The implementation and API design of course have to be flexible so that people can change out their own parts / make networking easy. I don’t see how option 1 or option 2 limits or enables that uniquely.

This is not entirely correct.

Front ends like LiquidHandler are one implementation and are not subclassed. These provide state tracking and other shared implementations. Backends like LiquidHandlerBackend are the ABC (or Protocol) that concrete backends implement.

I chose ABCs for backends because it’s the standard for “protocols” in Python, or at least it was at the time. I suppose a Protocol would work fine instead of the ABC (LiquidHandlerBackend etc.).

I chose the frontend/backend split so we get a universal implementation plus separate atomic backend commands. The backends follow an ABC (or Protocol) so they are interchangeable; the front ends (like LiquidHandler, which we are working on in this thread) are just one implementation (for now).

In this thread I want to discuss how to rework front ends for the reasons written in my first post.

This is not just the case for Deck, which is a special subclass of Resource, but really for any Resource, I think. What we would need is a Protocol to go along with every class in PLR, with the PLR implementation being the one we provide. That way you can swap out any part for your own.

This is going to be a long discussion, I feel :sweat_smile:

I completely agree - I think this discussion should really just focus on option 1 or 2 or a yet-not-known alternative.

Just to recap, because I think it is absolutely crucial to make PLR easy to understand for anyone:

so far the PyLabRobot architecture has been really nice and easy to oversee:

A Resource is a digital model of a physical item (data and behaviour) - it is completely independent of any form of communication/control of a programmable device.

→ core PLR principle == “digital twin”; giving rise to PLR pillar #1 the “Resource Management System”

A MachineBackend is the communication and control layer that enables us to actually make a device do anything.

→ core PLR principle == “driver”; giving rise to PLR pillar #2 the “Machine Control System”

The Frontend now combines the two classes into one separate class that enables a higher level abstraction of common features shared across different instances of MachineBackends and Resources.

One of the core principles behind PLR has been that the Frontend’s backend and resource can (in theory) be exchanged/plug-and-played (creating a trade-off between accessing machine-specific powers vs. generalization).

Q1: Is proposal 2 erasing this core PLR principle, i.e. Frontends’ backends and resources cannot be plug and played anymore?

Q2: Is proposal 2 redefining what a Resource is?
i.e. destroying the clear divide between the RMS and the MCS by making the Resource have Frontends, thereby fundamentally changing the established order of the PLR architecture?

Is this a correct infographic for the current version of proposal 2?

I completely agree with the problems mentioned in the first message, and agree that we must find a way to make repeated features available with the same code structure - as much as is possible based on the constraints of real physical machines.

But I don’t think I understand proposal 2 well enough to really assess it; it seems very confusing to me.

e.g.

is not just ugly and impractical to write (all things we can get used to) but also confusing: “what is the liquid handler, what is the abstraction, why is the liquid handler calling a specific machine and then the liquid handler again, …?”



Building on Proposal 1 (multiple inheritance) - I think this is really just applying a well-established inheritance pattern to PLR’s architecture.

The key issue I see:
PLR currently treats every machine as a flat, single-purpose class, but many machines are really composites of multiple features, as mentioned above.

i.e. PLR has to be more precise: a STAR is not just a liquid handler - it is, as defined on the OEM website, a “Liquid Handling Workstation”, directly implying its multi-feature nature.

Single-feature vs multi-feature machines

Some machines genuinely are single-feature - a standalone temperature controller, a standalone shaker, a standalone pump. These are complete classes: own backend contract, own shared frontend logic.

Others are multi-feature composites - a heater-shaker is a temperature controller + a shaker.
A STAR is a liquid handler + arm + temperature controller. A Cytomat is an incubator + plate transporter.
Right now these extra features are either reimplemented from scratch inside each machine (temperature control exists in 7+ incompatible forms across the codebase), or just missing entirely.

PLR already solved this once

HeaterShaker proves the pattern works:

class HeaterShakerBackend(ShakerBackend, TemperatureControllerBackend):
    pass  # the backend

class HeaterShaker(Shaker, TemperatureController):
    pass  # the corresponding front end
One backend instance satisfies all inherited contracts through Python’s method resolution order. Any improvement to `TemperatureController.wait_for_temperature()` - with its polling, tolerance, timeout logic - instantly applies to every composite that includes temperature control. No duplication, no drift.

In my opinion, this should be the general pattern, applied consistently at both the frontend and backend level.
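
A minimal illustration of that claim (stub contracts, not the real PLR classes): a single HeaterShakerBackend instance is an instance of both parent contracts, so code typed against either one accepts it:

```python
class ShakerBackend:
  async def shake(self, speed: float) -> None:
    raise NotImplementedError

class TemperatureControllerBackend:
  async def set_temperature(self, temperature: float) -> None:
    raise NotImplementedError

class HeaterShakerBackend(ShakerBackend, TemperatureControllerBackend):
  """One concrete backend satisfies both inherited contracts."""
  async def shake(self, speed: float) -> None:
    pass  # device-specific firmware command would go here

  async def set_temperature(self, temperature: float) -> None:
    pass  # device-specific firmware command would go here

hs = HeaterShakerBackend()
assert isinstance(hs, ShakerBackend)
assert isinstance(hs, TemperatureControllerBackend)
```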

Edge case

Multi-instance features: a STAR has both an iSWAP and CO-RE grippers. Rather than inheriting `Arm` twice, the machine inherits `Arm` once and uses identifiers to select which arm, the same as PLR already does today. I think this will be needed anyway, because users require human-readable identifiers regardless (e.g. “LiHA-left/right”, “RoMA/LiHA”, …)

What this gives us

  • Write .wait_for_temperature() once, every machine containing the single-feature backend/frontend TemperatureController gets it
  • Add shaking to a plate reader? Inherit Shaker, implement 2 backend methods, done
  • Backend contracts stay minimal - each feature defines only what it needs
  • Existing single-feature machines don’t change at all

this diagram is confusing because front ends are resources but have backends as an attribute.

what you are thinking about is the deck attribute, but LiquidHandler is already a resource, and deck is just a part of it that makes the LiquidHandler resource universal (by putting all machine-specific stuff in Deck). if you look at other machines like PlateReader, they are ResourceHolders.

this is not the case, as I described in my first post

this is not a valid question since front ends ARE resources right now

in option 2, we would make the first class a user deals with a Resource which then contains attributes like lh and arm and stuff

yes, some resources will have functionalities

also “fundamentally changing the established order of the PLR architecture” in one way or another is necessary as I explained in the proposal

yes

lh would be the universal interface for a liquid handling robot, meaning the .lh attribute of one machine/resource can be used in the same functions/scripts as that of another. same with arm: you can use kx2.arm or star.arm and pass it to the read_plate_byonoy example.

the actual implementation of commands will still depend on the backend

not really, it only works in this narrow case and it’s easy to see how it does not extend, particularly when the resource models of those front ends do not match.

llm slop. this applies to both options 1 and 2. it is the whole reason I am starting this discussion.

yes I already wrote this approach in my first post, with a reason for why I think it does not work

the unsolved problems I am hearing with approach 1:

  • machines with multiple instances of one functionality like two arms become very awkward
  • when a machine has an optional feature like an optional temperature controller in the case of a cytomat, we would need a whole new class or we would have dead methods

unsolved problems with approach 2:

  • more awkward to type when used directly
  • bigger change than option 1 compared to current API

to me it seems approach 1 has unsolved/unsolvable problems that actually make it impossible to scale; option 2 is just awkward compared to the current code.

to me the clear winner seems to be option 2. if there is no third unconsidered option, I will go ahead and move to option 2. I aim to release this as a first versioned beta release.

I see, I wasn’t aware of this. Very confusing

I don’t understand: the backend would just state that this is not implemented, just like a STAR not having an iSWAP or the Byonoy A96A not being able to read fluorescence?

Wouldn’t this require human readable identifiers anyway?
(exactly like move_resource in star takes “iswap” or “core_grippers”)

that is the current model: methods always exist because the frontend/backend says they must, but then they raise NotImplementedError.

with option 2, star.iswap: Optional[Arm] would be None when the iSWAP is not installed.

yes, of course. but they would be their own objects rather than a method with backend kwargs

I’ll chime in here and say I’ve found this pretty confusing as well…

Perhaps I’m a little confused about the resource model? What exactly is a resource? I was thinking resources are just like the data of the deck (as separate from the robot, which has actions or functions), but maybe I’m thinking about this incorrectly?


right now, a resource is any physical object, whether a liquid handler or a well. some are just resources, like Well; some have functions, like LiquidHandler. background: the forum post “big PLR update: visualizer + LH is a Resource” in PyLabRobot Development.

You have to abstract the part that is not shared across all children, and in the case of a liquid handler that was the deck.

with option 2, this is what I am thinking: front ends like LiquidHandler would stop being physical objects themselves and become purely “the actions and functions” (as an attribute of a resource). STAR: Resource would be the main thing users instantiate when using a STAR, with star.lh: LiquidHandler(Machine) (and star.iswap: Optional[Arm], as discussed). so some resources would have a Machine associated with them through an attribute. we would still have an object for the entire STAR/OT-2, etc., and also their decks, but the machine control would move into an attribute of those resources.

also rather than star.lh a better name might be star.pipetting_head or something like that.

for convenience we could have:

class STAR:
  async def aspirate(self, *args, **kwargs):
    return await self.lh.aspirate(*args, **kwargs)

  async def dispense(self, *args, **kwargs):
    return await self.lh.dispense(*args, **kwargs)

  ...

  async def pick_up_resource(self, *args, **kwargs):
    return await self.arm.pick_up_resource(*args, **kwargs)

  ...

but that gets ugly fast
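
One possible way to cut that boilerplate (a sketch, not something settled in this thread) is attribute delegation via __getattr__, at the cost of a less explicit API and weaker static typing:

```python
import asyncio

class LiquidHandler:
  async def aspirate(self, volume: float) -> str:
    return f"aspirate {volume}"

class Arm:
  async def pick_up_resource(self, name: str) -> str:
    return f"pick up {name}"

class STAR:
  """Sketch: forward unknown attributes to the machine's front ends."""
  def __init__(self):
    self.lh = LiquidHandler()
    self.arm = Arm()

  def __getattr__(self, name):
    # only called when normal attribute lookup fails; search the front ends
    for frontend in (self.lh, self.arm):
      if hasattr(frontend, name):
        return getattr(frontend, name)
    raise AttributeError(name)

star = STAR()
assert asyncio.run(star.aspirate(10.0)) == "aspirate 10.0"
assert asyncio.run(star.pick_up_resource("plate")) == "pick up plate"
```

This keeps star.aspirate(...) working without writing every forwarder by hand, though it hides which front end actually handles each call.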

Looking more into proposal 2, I am starting to see the benefits it would bring.

But I think there are many questions we have to solve before making this change (keeping Bent Flyvbjerg’s “Think Slow, Act Fast” principle in mind here):

e.g.:

  1. What happens to single-feature devices?
    Will scale.measure_weight() stay as it is, and if so will it then not have a “new frontend” / a device-specific frontend?
    Or will all machines have to adopt the new frontend, making it mt_scale.scale.measure_weight()?
  2. Can we use this opportunity to make the new Frontend have a Resource model as an attribute, rather than being a resource itself, thereby clearly distinguishing what is the model/digital twin and what is the driver, with its new segmentation into Frontend attributes (“Machine Features”)?
  3. We need a clear and unambiguous PLR terminology that people can learn and reference, otherwise multiple people talk about the same idea with different terms, or multiple ideas with the same term.
  4. How are we teaching these new concepts?
    Without clear teaching, old and new PLR users cannot adopt it - users need to be given a transition path.
  5. We must not break existing code; companies and developers across the world depend on PLR now. That means we need a version before the change, so the change is unambiguous and intentional to users, or we find a way to maintain current PLR behaviour during a transition period and give users at least half a year to update, with clear deprecation warnings.
  6. How do we have to modify the backends to enable their features being split across different “Machine Features” (?), i.e. LiquidHandler, Arm, Storage, TemperatureController, … ?

In general, I am proposing that we develop a detailed implementation plan and heavily test ideas in a development branch first.
…

I’m happy to make more infographics to help with answering these and the many more questions.

1 Like