Currently, SingleChannelAspiration and SingleChannelDispense take a Container (a Resource). This means every backend has to compute the exact location of the aspiration/dispense itself (container.get_absolute_location('c', 'c', 'cavity_bottom')).
Since this computation is shared, we might as well do it on the front end. That means: SingleChannelAspiration and SingleChannelDispense take a Coordinate instead of a Container.
This has the added benefit of making it easy to specify aspirations/dispenses at non-container locations. Sometimes we want to aspirate air, which is possible but awkward when it requires passing a container plus a big positive z offset. With coordinates, it also becomes easier to specify exactly where in the container to aspirate.
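A minimal sketch of what the proposed change could look like. All names here (Coordinate, SingleChannelAspiration, aspiration_at) are illustrative stand-ins, not the real PLR classes: the point is that the frontend resolves the container to an absolute coordinate once, so backends never need the resource model, and an air aspiration is just an aspiration with a higher z.

```python
from dataclasses import dataclass

@dataclass
class Coordinate:
    x: float
    y: float
    z: float

    def __add__(self, other: "Coordinate") -> "Coordinate":
        return Coordinate(self.x + other.x, self.y + other.y, self.z + other.z)

@dataclass
class SingleChannelAspiration:
    # Proposed shape: the command carries a resolved coordinate,
    # not a Container/Resource.
    location: Coordinate
    volume: float

def aspiration_at(cavity_bottom: Coordinate, volume: float,
                  z_offset: float = 0.0) -> SingleChannelAspiration:
    """Build an aspiration at an offset above a container's cavity bottom."""
    return SingleChannelAspiration(
        location=cavity_bottom + Coordinate(0, 0, z_offset),
        volume=volume,
    )

# Aspirating air simply becomes an aspiration well above the liquid:
air_gap = aspiration_at(Coordinate(100.0, 50.0, 2.0), volume=10.0, z_offset=30.0)
print(air_gap.location)  # Coordinate(x=100.0, y=50.0, z=32.0)
```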
In line with a comment I just posted, I'd suggest instead implementing an "aspirate_air" or "make_air_gap" sounding command, and letting backends implement it if they want / are able to.
Abstract is better for a frontend. I actually don't use PLR's coordinates at all in my backend. They are calculated from container names by the existing hardware-side controller.
Why would the hardware-side controller need to implement a resource/container model? That seems like a lot of (duplicate!) work, and it really constrains what is possible on the machine.
The STAR backend aspirates transport air in a superior way (faster, at the end of the step). As long as the aspirate_air command can take advantage of the STAR's speed through a proper backend interface, different from the Opentrons one (which will be a move-to-coordinate plus aspirate, taking slightly longer), removing the resource model from the backend is ideal.
I like to send plain numbers to my backend, because knowing these numbers makes bugfixing easier.
I'm not a huge fan of introducing many different methods like that: it becomes incredibly confusing for the end user which one they should use. An aspiration is an aspiration, whether you aspirate water or air.
If it already exists, integrating such a machine with PLR would require a workaround (similar to the current OT implementation), and it would likely limit functionality for the end user.
Why so? Look at dplyr as an example: it has a lot of methods, but everyone I know who uses R is very happy with it and not confused at all. The same applies to ggplot2.
The confusion is not about the number of methods; it depends on the choice of methods and wording. Indeed, "aspirate_air" would not be the best name, as you've said, but the action is a clear case of something with a distinct purpose.
I guess most people would enjoy lh.add_cavity(...) (?) more than lh.aspirate([container.get_absolute_location("c", "c", "t"), ...]).
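One way to reconcile the two positions, sketched here with hypothetical names (LiquidHandler, add_cavity, aspirate, and get_absolute_location below are illustrative stand-ins, not the real PLR API): keep the general coordinate-based primitive for backends and power users, and offer the friendlier method as a thin wrapper that resolves the coordinate and delegates.

```python
from dataclasses import dataclass

@dataclass
class Coordinate:
    x: float
    y: float
    z: float

class Container:
    def __init__(self, anchor: Coordinate):
        self._anchor = anchor

    def get_absolute_location(self, x: str, y: str, z: str) -> Coordinate:
        # Illustrative stand-in: a real implementation would resolve the
        # anchor strings ("c" = center, "t" = top, ...) against the resource.
        return self._anchor

class LiquidHandler:
    def __init__(self):
        self.commands = []

    def aspirate(self, locations, volume: float):
        # The general primitive: takes raw coordinates.
        self.commands.append(("aspirate", locations, volume))

    def add_cavity(self, container: Container, volume: float):
        # Convenience wrapper: resolves the coordinate, then delegates.
        loc = container.get_absolute_location("c", "c", "t")
        self.aspirate([loc], volume)

lh = LiquidHandler()
lh.add_cavity(Container(Coordinate(10, 20, 3)), volume=50)
print(lh.commands)  # [('aspirate', [Coordinate(x=10, y=20, z=3)], 50)]
```

This keeps a single code path for the actual aspiration, so the end user sees a simple method while backends still only ever receive coordinates.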
Perhaps a better "official" way of getting the numbers would work too. I struggled to understand what the coordinates and dimensional parameters meant, or where to find them.
If the coordinates change moves forward, I'd recommend writing reference documentation for PLR's "spatial model" of the deck: origin, directions, dimensional parameters of resources, etc.
I'm simply not comfortable sending my robot to some XYZ without fully understanding how exactly that is calculated.
I'll second this, actually. It was hard to figure out the "why" behind it all. It made sense in the end, but it did feel a little more complicated than the OT methods, though perhaps more powerful.
Briefly: the origin of the deck is as defined by the manufacturer. The origin is actually relatively unimportant, since the most important things happen at the "local"/relative level; the deck origin is simply added in the last step before sending commands to a machine.
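That composition can be sketched in a few lines. This is a simplified model with illustrative names (Resource, get_absolute_location here are stand-ins for the idea, not the exact PLR implementation): each resource stores its location relative to its parent, and the absolute location is just the sum up the parent chain, with the deck's own origin added last.

```python
from dataclasses import dataclass

@dataclass
class Coordinate:
    x: float
    y: float
    z: float

    def __add__(self, other: "Coordinate") -> "Coordinate":
        return Coordinate(self.x + other.x, self.y + other.y, self.z + other.z)

class Resource:
    def __init__(self, name: str, location: Coordinate, parent=None):
        self.name = name
        self.location = location  # relative to the parent resource
        self.parent = parent

    def get_absolute_location(self) -> Coordinate:
        # Walk up the parent chain, summing relative locations; the deck's
        # location (the manufacturer-defined origin) is added in the last step.
        loc = self.location
        p = self.parent
        while p is not None:
            loc = loc + p.location
            p = p.parent
        return loc

deck = Resource("deck", Coordinate(0, 0, 0))
carrier = Resource("carrier", Coordinate(100, 50, 0), parent=deck)
plate = Resource("plate", Coordinate(10, 5, 20), parent=carrier)
print(plate.get_absolute_location())  # Coordinate(x=110, y=55, z=20)
```

Because everything above the deck is relative, moving the carrier moves the plate with it for free, which is why the absolute origin matters so little in day-to-day use.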
Having at least the option to send the machine to a particular coordinate is very powerful in terms of what it allows you to do with your robot. In addition, it makes the implementation a lot simpler, because we can fully reuse the resource model code across all robots without having to "mirror" resources into an internal deck model.