Per-unit instrument configuration

Rick’s Backend kwargs proposal makes it explicit which operation parameters are backend-specific. On another level, there is also per-unit configuration that could be made explicit: which optional modules and optics are installed on THIS specific instrument in the lab. Some earlier thoughts on this in Updating PLR API:

Some devices have optional capabilities that vary per unit. An iSWAP on a STAR is optional, gas control on a Cytation is rare, and objectives/filter cubes differ between units. PLR already handles this ad hoc: the STAR reads DriveConfiguration from firmware, the Cytation takes a CytationImagingConfig dataclass with objectives and filters from the user. Same problem, no shared pattern.

On the capability branch, devices create capabilities in __init__() before setup() connects to hardware, so optional capabilities can’t be discovered from firmware at construction time. Making a separate class per configuration doesn’t scale (iSWAP × 96-head × tube gripper × channels = too many classes).

What if we had a config object, a “device card”, that tells __init__ what this specific unit has? A model base (what every unit always has) plus instance overrides (what THIS unit has):

# STAR: optional modules
STAR_BASE = DeviceCard(capabilities={"liquid_handling": {"channels": 8}})
my_star = STAR_BASE.merge(DeviceCard.instance(capabilities={
    "liquid_handling": {"channels": 16},
    "iswap": {"rotation": True},
}))
my_star.has("iswap")  # True

and for a cytation:

# Cytation 5: always has plate reading + microscopy, but optics vary per unit
CYTATION5_BASE = DeviceCard(capabilities={
    "absorbance": {}, "fluorescence": {}, "luminescence": {},
    "microscopy": {},  # always present, but objectives/filters vary
})

# Lab A: 4x + 20x, DAPI + GFP
lab_a = CYTATION5_BASE.merge(DeviceCard.instance(capabilities={
    "microscopy": {"objectives": [O_4X_PL_FL, O_20X_PL_FL],
                   "filters": [DAPI, GFP]},
}))

# Lab B: 4x + 40x, GFP + Texas Red + Cy5, plus gas control
lab_b = CYTATION5_BASE.merge(DeviceCard.instance(capabilities={
    "microscopy": {"objectives": [O_4X_PL_FL, O_40X_PL_APO],
                   "filters": [GFP, TEXAS_RED, CY5]},
    "gas_control": {"co2": True},
}))

lab_a.has("gas_control")              # False
lab_b.has("gas_control")              # True
lab_a.get("microscopy", "objectives") # [O_4X_PL_FL, O_20X_PL_FL]
lab_b.get("microscopy", "objectives") # [O_4X_PL_FL, O_40X_PL_APO]
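A minimal sketch of what such a DeviceCard could look like (this class does not exist in PLR; the merge/has/get semantics here are my assumptions: instance values override the model base per capability key):

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any


@dataclass(frozen=True)
class DeviceCard:
    """Per-unit configuration: a model base merged with instance overrides."""
    capabilities: dict[str, dict[str, Any]] = field(default_factory=dict)

    @classmethod
    def instance(cls, capabilities: dict[str, dict[str, Any]]) -> "DeviceCard":
        return cls(capabilities=capabilities)

    def merge(self, other: "DeviceCard") -> "DeviceCard":
        # copy the base, then let instance parameters override per capability
        merged = {name: dict(params) for name, params in self.capabilities.items()}
        for name, params in other.capabilities.items():
            merged.setdefault(name, {}).update(params)
        return DeviceCard(capabilities=merged)

    def has(self, capability: str) -> bool:
        return capability in self.capabilities

    def get(self, capability: str, key: str) -> Any:
        return self.capabilities[capability][key]


STAR_BASE = DeviceCard(capabilities={"liquid_handling": {"channels": 8}})
my_star = STAR_BASE.merge(DeviceCard.instance(capabilities={
    "liquid_handling": {"channels": 16},
    "iswap": {"rotation": True},
}))
print(my_star.has("iswap"))                        # True
print(my_star.get("liquid_handling", "channels"))  # 16
```

Frozen dataclass so a card describes one unit at one point in time; merging returns a new card instead of mutating the model base.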

The card can come from firmware (the STAR already reads DriveConfiguration, the Cytation already queries its turret and filter slots) or from a config file (a service engineer or user programs the instrument when hardware changes). This is what manufacturers already require; PLR just doesn’t capture it as a shared concept yet.

How this all connects to backend_params: the card could validate that STARMoveParams(grip_force=80) is only sent to a STAR that actually has an iSWAP.
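As a rough sketch of that check (STARMoveParams is borrowed from the backend kwargs proposal; the validation helper and the plain capability set standing in for the card are my assumptions):

```python
from dataclasses import dataclass


@dataclass
class STARMoveParams:
    grip_force: int


def validate_backend_params(unit_capabilities: set, params: object) -> None:
    """Reject backend params that target hardware this specific unit lacks."""
    # iSWAP-only params require the iSWAP capability on THIS unit
    if isinstance(params, STARMoveParams) and "iswap" not in unit_capabilities:
        raise ValueError("STARMoveParams requires an iSWAP, but this STAR has none")


# a unit with an iSWAP accepts the params; one without would raise
validate_backend_params({"liquid_handling", "iswap"}, STARMoveParams(grip_force=80))
```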

Any thoughts?

(this to me seems more related to the Updating PLR API for machine interfaces discussion than to the backend kwargs proposal)

there are two ways to get a specific instrument’s configuration:

  • from the user at init
  • from the machine at setup

so optional capabilities can’t be discovered from firmware at [construction] (setup?) time

in the original proposal, i was thinking we could actually create them at setup time, like

class STAR(Device, Resource):
  def __init__(self, ...):
    self.iswap: Optional[OrientableArm] = None

  def setup(self, ...):
    self._driver.setup()
    if self._driver.config.has_iswap:
      self.iswap = OrientableArm(backend=iSWAP(driver=self._driver))

in fact, you can have capabilities that are even shorter-lived. The core gripper example from that thread:

async with star.core_grippers(front_channel=6) as arm:
  read_plate_byonoy(arm=arm, ...)

(the core grippers are mounted on channels, so this capability does not even exist physically sometimes)
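A sketch of how such a short-lived capability could be exposed as an async context manager (FakeSTAR, CoreGripperArm, and the mount/unmount steps are assumptions for illustration, not PLR code):

```python
import asyncio
import contextlib


class CoreGripperArm:
    """Transient capability: CoRe gripper paddles mounted on a channel."""

    def __init__(self, front_channel: int):
        self.front_channel = front_channel


class FakeSTAR:
    @contextlib.asynccontextmanager
    async def core_grippers(self, front_channel: int):
        # mount the paddles: the capability now physically exists
        arm = CoreGripperArm(front_channel=front_channel)
        try:
            yield arm
        finally:
            # unmount the paddles: the capability stops existing again
            pass


async def demo() -> int:
    star = FakeSTAR()
    async with star.core_grippers(front_channel=6) as arm:
        return arm.front_channel
```

The capability object only exists inside the `async with` block, mirroring the hardware: outside it, the paddles are parked and there is nothing to call.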

a separate class per configuration doesn’t scale (iSWAP × 96-head × tube gripper × channels = too many classes)

which is why i want to move to a composition architecture :slight_smile:

What if we had a config object, a “device card”, that tells init what this specific unit has? A model base (what every unit always has) plus instance overrides (what THIS unit has):

I think this could be useful as a description of a device at a point in time, if people have a need for it. As a prescription, it might be more difficult, since some capabilities are loaded from hardware. We should ideally avoid asking users for stuff we can just load from the machine (in the base case).

How this all connects to backend_params: the card could validate that STARMoveParams(grip_force=80) is only sent to a STAR that actually has an iSWAP.

well rather than the card validating params, with the above proposal star.iswap would not even exist, it would be None!


True, the backend_params discussion was the trigger, but you’re right that it’s bigger than that. Per-unit configuration shows up at every level: which kwargs a backend supports, which capabilities a device has, which internal features (like locking) are present, which optics are installed.

Good point on Optional[OrientableArm] = None, that handles optional capabilities cleanly. On another level, supports_locking on ShakerBackend is the same per-unit configuration problem: every non-locking shaker still has to implement dead lock_plate() / unlock_plate() methods because the flag lives on the ABC, not in a shared config object. A device card could unify both optional capabilities (star.iswap is None) and optional internal features (card.get("shaking", "has_lock")) in one place, instead of scattering per-unit knowledge across Optional attributes, supports_X flags, and firmware queries.

agreed for self-describing instruments. But the optional hardware on a device, like objectives and filter cubes, needs to be manually added to the firmware via the vendor’s software anyway. So why not standardize that in PLR as well? Non-self-describing devices would also benefit from a shared config object.

it’s not as centralized as that, but the current ShakerBackend ABC/spec has a required supports_locking property that backends must implement. Same with supports_active_cooling on TemperatureController, which actually drives context-specific behavior (can you call set_temperature lower than the current temp?)

I think doing that on a universal level might be difficult? Because some machines might take configuration that does not universally exist. The “prescription space” is bigger than the “description space”.

Ideally the capability code acts kind of like the card (capabilities existing, having flags, ranges, etc.), except typed and in python objects. We can serialize them if needed

yes, but lock_plate and unlock_plate are still abstract methods, so shakers without locking also need to implement them in the backend.

I suppose they could raise on the default implementation so it’s optional for backends to implement it? with front ends doing the check and not even calling the backend method?

the composition PR splits a lot of these dual-responsibility front ends, but I think splitting shaker into shaker+locking might be too much?

is there a better pattern?

don’t know if it is really better, and it also does not rely on a device card. but locking is an internal behavior: it auto-triggers inside start_shaking(), not something the user calls directly. So splitting it into two separate capabilities would indeed overexpose it.

A lighter pattern: private protocol + init-time check:

  from typing import Protocol, runtime_checkable

  @runtime_checkable  # needed so isinstance() works against the Protocol
  class _CanLockInternally(Protocol):
      async def _lock_plate(self) -> None: ...
      async def _unlock_plate(self) -> None: ...

  class Shaker:
      def __init__(self, backend):
          self.backend = backend
          self._can_lock = isinstance(backend, _CanLockInternally)

      async def start_shaking(self, speed):
          if self._can_lock:
              await self.backend._lock_plate()
          await self.backend.start_shaking(speed)

Non-locking backends just don’t implement the protocol, no dead methods, no supports_X flag. Locking shakers opt in by implementing _lock_plate() / _unlock_plate().

Same pattern works for supports_active_cooling on TemperatureController, anything that’s an internal behavior rather than a user-facing capability.

this would be description instead of prescription as well, which is nice of course, if it is sufficient.

so the _CanLockInternally would just exist on the backend level but not front end?

The methods (_lock_plate, _unlock_plate) only exist on backends that implement them, non-locking backends don’t have them at all. The frontend still knows about locking though; it does an isinstance(backend, _CanLockInternally) check at init to decide whether to call them. So it spans both layers, but the user never sees it.

so yes

I like that, let’s try it
