PLR protobuf

I prototyped a remote LiquidHandler / STARBackend / Resource, such that you can access PyLabRobot remotely. It works, but can definitely be improved.

```python
lh = RemoteLiquidHandler.connect(f"http://127.0.0.1:{_LH_PORT}")

await lh.pick_up_tips(["tip_rack_01_tipspot_A1"])
await lh.aspirate(["plate_01_well_A1"], vols=[100])
await lh.dispense(["plate_01_well_A2"], vols=[100])
await lh.return_tips()
```

This is useful for me because you can implement the LiquidHandler interface and then use the exact same PLR functions. It also allows intercepting calls to do things like liquid tracking and serializing the full state.
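The interception idea can be sketched like this. None of these class names exist in PLR; `InterceptingHandler` and `CallLog` are hypothetical stand-ins that only illustrate how a proxy implementing the same method names can capture every call as serializable data before forwarding it.

```python
import json
from dataclasses import dataclass, field
from typing import Any


@dataclass
class CallLog:
    """Records every backend call as a serializable JSON string."""
    entries: list = field(default_factory=list)


class InterceptingHandler:
    """Hypothetical stand-in for a RemoteLiquidHandler-style proxy: every
    method call is captured as (method, kwargs) and serialized; a real
    proxy would then send the payload over the wire."""

    def __init__(self, log: CallLog):
        self._log = log

    def _dispatch(self, method: str, **kwargs: Any) -> str:
        payload = json.dumps({"method": method, "kwargs": kwargs})
        self._log.entries.append(payload)  # or: POST payload to the server
        return payload

    def pick_up_tips(self, tip_names: list) -> str:
        return self._dispatch("pick_up_tips", tip_names=tip_names)

    def aspirate(self, well_names: list, vols: list) -> str:
        return self._dispatch("aspirate", well_names=well_names, vols=vols)


log = CallLog()
lh = InterceptingHandler(log)
lh.pick_up_tips(["tip_rack_01_tipspot_A1"])
lh.aspirate(["plate_01_well_A1"], vols=[100])
```

Because the log entries are plain JSON, the same mechanism gives you liquid tracking and full-state serialization for free.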


based

hope to merge something like that into PLR after the "Updating PLR API for machine interfaces" discussion

I’ve learned a few things while building this, which I’ll briefly share:

  1. The protobuf definitions are the most important part. With a good interface description, code generation works great.
  2. You can test autonomously by creating fake serial backends and then comparing RemoteClient against the local client. This is more useful than the other tests it produces and usually guarantees, to some degree, that the interfaces are functionally compatible.
  3. The hardest part is creating a nice interface for humans. While the client/server can be generated from protobuf, you still need to implement the server (which can be done autonomously by making serial comparisons), and you need to make sure the user-facing interface is correct. This takes a shit ton of boilerplate on top of the autogenerated client, but is very easy to do: plr-interface → client-interface → server → serial.
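Point 2 above — comparing the remote path against the local path on fake serial backends — can be sketched as follows. Everything here (`FakeSerial`, `LocalBackend`, `RemoteClient`, the firmware strings) is made up for illustration; the real comparison would cross a gRPC boundary instead of a JSON round-trip, but the assertion is the same: both paths must produce identical serial traffic.

```python
import json


class FakeSerial:
    """Fake serial port that just records the firmware strings it is sent."""
    def __init__(self):
        self.sent = []

    def write(self, cmd: str):
        self.sent.append(cmd)


class LocalBackend:
    """Toy backend: turns high-level calls into firmware-ish strings."""
    def __init__(self, serial: FakeSerial):
        self.serial = serial

    def aspirate(self, well: str, vol: int):
        self.serial.write(f"ASP {well} {vol}")


class RemoteClient:
    """Toy remote path: serialize the call, 'send' it, and replay it on a
    server-side LocalBackend."""
    def __init__(self, server_backend: LocalBackend):
        self._server = server_backend

    def aspirate(self, well: str, vol: int):
        wire = json.dumps({"method": "aspirate", "well": well, "vol": vol})
        msg = json.loads(wire)  # "receive" on the server side
        getattr(self._server, msg["method"])(msg["well"], msg["vol"])


local_serial, remote_serial = FakeSerial(), FakeSerial()
LocalBackend(local_serial).aspirate("plate_01_well_A1", 100)
RemoteClient(LocalBackend(remote_serial)).aspirate("plate_01_well_A1", 100)
assert local_serial.sent == remote_serial.sent  # identical firmware traffic
```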

responding to @vcjdeboer

If read_positions receives dataclasses (name + coordinate) rather than live PLR objects, the backend boundary becomes serializable. If we applied that principle across all capabilities — liquid handling, arms, shaking — the entire driver interface would be serializable by design. That would make @koeng’s protobuf/networking work significantly easier (no custom converters), and it also makes drivers easier to test (just pass in data, no need to construct a full resource tree) and easier to write for new contributors (the driver contract is explicit about what data it needs).
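A minimal sketch of that boundary, assuming a hypothetical `WellRef` dataclass (name + coordinate) in place of a live PLR `Well` — the names and signature are illustrative, not PLR's:

```python
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class WellRef:
    """Hypothetical serializable stand-in for a live PLR Well: just a name
    and an absolute coordinate, with no links into the resource tree."""
    name: str
    x: float
    y: float
    z: float


def read_positions(wells: list) -> str:
    """Toy driver method that only sees plain data, so the call (and its
    wire format) is trivially serializable."""
    return json.dumps([asdict(w) for w in wells])


payload = read_positions([WellRef("plate_01_well_A1", 10.0, 20.0, 5.0)])
restored = [WellRef(**d) for d in json.loads(payload)]
```

Because `WellRef` carries no tree references, the round-trip is lossless and the driver can be tested without constructing any resources.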

from Modeling plate reading capabilities - #3 by vcjdeboer

Serializing backend calls is indeed easier when we make this networked.

Resources are technically serializable, but it’s a bit of a mess since they link into the rest of the resource tree (both up and down). So in the rest of this post I will use “serializable” to mean non-resource data types.

The major challenge with this is that some backend methods, like aspirate, require a LOT of information, especially about container/well geometry, for things like the “start LLD search height”. This is not a backend kwarg (which we could also define to be serializable) but is actually inferred from the container geometry passed to the function, so it is not created on the front end or passed by the user. This kind of behavior makes requiring serialization difficult. However, we could imagine the front end computing ALL of this information and passing it to all backends, with each backend deciding whether to use it. Other backends might have similar concepts, and some might ignore it.
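The "front end computes everything" idea could look roughly like this. `AspirateCommand`, its fields, and the geometry formula are all assumptions for the sketch — the point is only that every derived value crosses the boundary as plain data, and a backend is free to ignore fields it doesn't use.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AspirateCommand:
    """Hypothetical fully-precomputed, serializable aspirate command. The
    front end derives values like lld_search_height from the container
    geometry BEFORE the backend boundary."""
    well_name: str
    volume_ul: float
    well_bottom_z: float
    lld_search_height: float  # derived from geometry, not passed by the user


def build_aspirate_command(well_name, volume_ul, bottom_z, container_height,
                           clearance=2.0) -> AspirateCommand:
    # the front end does ALL the geometry math; nothing below this line
    # needs a live resource object
    return AspirateCommand(
        well_name=well_name,
        volume_ul=volume_ul,
        well_bottom_z=bottom_z,
        lld_search_height=bottom_z + container_height + clearance,
    )


class SimpleBackend:
    """A backend without LLD: it accepts the same command and just ignores
    the lld_search_height field."""
    def aspirate(self, cmd: AspirateCommand) -> str:
        return f"ASP {cmd.well_name} {cmd.volume_ul}"


cmd = build_aspirate_command("plate_01_well_A1", 100, bottom_z=5.0,
                             container_height=10.0)
```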

A second challenge with requiring serializable backend commands is that making everything serializable is an even harder problem. Some STAR methods, like probe_liquid_heights, are backend-specific methods that require resource models to be passed. If we want a “serializable” layer, we would need to split such functions somewhere.

This point is more about middleware/quality of life, but if backends only received “aspirate at XYZ” rather than “aspirate at container C”, tracking would become more difficult for them. At that point, you might as well put the networking layer at the level of sending firmware commands.

Third, we would need to pass resources at some point anyway, since the backends will need to know what the resources look like — or at the very least the server needs to know this for it to be useful. I always imagined I would have a single server running for my incubators, with a web interface showing all plates, which users can edit, and then protocols (clients) just syncing with the incubator server and loading plates from it. Since we will already have server-side knowledge of resources, you could also imagine serializing resources by name and having the server load them from its own memory.
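Serialize-by-name could be as simple as the following sketch. `IncubatorServer`, `register`, and `handle_load_plate` are invented names; the point is that the client only ever sends a string, and the server resolves it against the resource tree it already owns.

```python
class Resource:
    """Minimal stand-in for a PLR resource (name only)."""
    def __init__(self, name: str):
        self.name = name


class IncubatorServer:
    """Hypothetical server that owns the resource tree. Clients send only
    resource names; the server resolves them from its own memory."""
    def __init__(self):
        self._resources = {}

    def register(self, resource: Resource):
        self._resources[resource.name] = resource

    def handle_load_plate(self, resource_name: str) -> Resource:
        # deserialization-by-name: look the plate up server-side
        return self._resources[resource_name]


server = IncubatorServer()
server.register(Resource("plate_01"))
plate = server.handle_load_plate("plate_01")  # the client only sent a string
```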


I got carried away a little bit and made a simple “federated” resource mirroring system. Check it out on this branch: pylabrobot/grpc_demo at grpc-resource-tree · PyLabRobot/pylabrobot · GitHub. demo_federated.py has a demo. Don’t take it too seriously; it’s just a PoC.

You can have multiple servers running (imagine one per machine), each hosting their own resource tree. In the demo there are two servers, one for the ODTC and one for a Hamilton. The client is the master “lab”, which creates remote_hamilton = RemoteResource("hamilton", target="localhost:50061") and then assigns it to its own tree: root.assign_child_resource(remote_hamilton, location=Coordinate(0, 0, 0)). Both the client and the server can modify the resources. When you call RemoteResource.get_resource, it will also return a RemoteResource.

You can imagine the “client” here — the lab that assigns the hamilton and odtc — also exposing a server, appearing as a RemoteResource to an even higher-level client.
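The composing-proxies idea can be sketched without any networking. This is not the branch's actual `RemoteResource` — the transport here is a plain dict standing in for a gRPC channel, and the target string just mirrors the demo — but it shows why trees federate: child lookups return another `RemoteResource`, so a remote tree can itself contain remote subtrees.

```python
class RemoteResource:
    """Toy proxy for a resource that lives on another server. Child lookups
    return RemoteResource again, so trees compose across machines."""

    def __init__(self, name: str, target: str, registry: dict):
        self.name = name
        self.target = target       # e.g. "localhost:50061", as in the demo
        self._registry = registry  # fake "server" we would query children from
        self.children = []

    def assign_child_resource(self, child, location=None):
        self.children.append(child)

    def get_resource(self, name: str) -> "RemoteResource":
        # querying a remote tree yields another proxy into the same target
        return RemoteResource(name, self.target, self._registry)


hamilton_server = {"deck": {}}  # stand-in for the hamilton server's tree
remote_hamilton = RemoteResource("hamilton", "localhost:50061", hamilton_server)
deck = remote_hamilton.get_resource("deck")  # still a RemoteResource
```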


based