Cytation 1 objective not moving with Y P0e01 instruction

The instruction for moving the objective on the Cytation 1 is probably different from the one for the Cytation 5.

I tried “Y” with “P0e01” and with other objective codes, but the objective did not move to its reading position.

That said, the camera operation in the capture and acquire functions seems to work properly. Also, moving to a well and moving within a well work properly. Really really nice stuff there!

Do you have gen5.exe? With Wireshark & USBPcap it’s easy to see what commands it is sending. (Or I can check it, probably on Saturday.)

I tried, but I cannot do much on the Gen5 computer; I have to jump through a lot of hoops to get that going.


I made a branch with what I could quickly see from the Wireshark capture. Let me know if this works for you.

Yes, this works, but I also had to change the set_objective function. See the pull request.


Works beautifully now! Many thanks! :smiley: :microscope: :robot:


Nice! Do you also notice some overlap between images? When I send the exact same commands as gen5.exe, I notice there is overlap between the images. I suspect this is because gen5.exe actually crops the images to get rid of the vignette effect. I thought that in PLR it’s better to give the user the full images as taken from the sensor, and let the user figure out stitching if they want to. In the future, we might provide a stitching algorithm in PLR.
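If anyone wants Gen5-like output from the full PLR frames, a centered crop is a minimal sketch. Note that `crop_center` and the 0.9 fraction are my own guesses for illustration, not Gen5’s actual crop behavior:

```python
import numpy as np

def crop_center(img, frac=0.9):
    """Crop a centered window covering `frac` of each dimension.

    `frac` is a hypothetical value; the fraction Gen5 actually
    crops (if it crops at all) is unknown.
    """
    h, w = img.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    y0 = (h - ch) // 2
    x0 = (w - cw) // 2
    return img[y0:y0 + ch, x0:x0 + cw]
```

For example, a 964 × 1288 frame cropped with the default `frac=0.9` comes out as 867 × 1159.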


I will look into that


The image output is 1288 × 964 pixels for the Cytation 1:

Camera: Blackfly BFLY-U3-13S2M
Sensor: Sony ICX445
Resolution: 1288 × 964
Aspect ratio: ~4:3 (also rectangular)

I don’t get why the output is defined as square:

The Cytation 5 camera should be:

Camera: Blackfly BFLY-U3-23S6M-C
Sensor: Sony IMX249
Native resolution: 1920 × 1200 pixels
Aspect ratio: ~16:10 (rectangular)
Mono or color
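Just doing the arithmetic on the resolutions above, neither camera is square:

```python
# Aspect ratios implied by the listed native resolutions
cytation1_ratio = 1288 / 964   # ~1.336, close to 4:3 (~1.333)
cytation5_ratio = 1920 / 1200  # 1.6, exactly 16:10
print(cytation1_ratio, cytation5_ratio)
```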

I updated my Cytation 1 image sizes in biotek_backend.py. I also used this stitching script from ChatGPT:

import numpy as np

def stitch_grid_with_blend(ims, rows, cols, overlap_fraction=0.1):
    """Stitch a row-major grid of equally sized grayscale images,
    averaging pixel values where neighboring tiles overlap."""
    assert len(ims) == rows * cols, "Mismatch between number of images and grid shape"

    img_h, img_w = ims[0].shape  # assumes single-channel (grayscale) images
    step_x = int(img_w * (1 - overlap_fraction))
    step_y = int(img_h * (1 - overlap_fraction))

    # Final canvas size: (n - 1) steps plus one full image per axis
    canvas_w = step_x * (cols - 1) + img_w
    canvas_h = step_y * (rows - 1) + img_h
    stitched = np.zeros((canvas_h, canvas_w), dtype=np.float32)
    weight = np.zeros((canvas_h, canvas_w), dtype=np.float32)

    for idx, img in enumerate(ims):
        r = idx // cols  # row index (row-major order)
        c = idx % cols   # column index
        x = c * step_x
        y = r * step_y

        # Accumulate pixel sums and a per-pixel hit count
        stitched[y:y+img_h, x:x+img_w] += img
        weight[y:y+img_h, x:x+img_w] += 1

    # Avoid division by zero, then average overlapping contributions
    weight[weight == 0] = 1
    blended = stitched / weight

    return blended.astype(np.uint8)

and then:

stitched = stitch_grid_with_blend(ims, rows=5, cols=4, overlap_fraction=0.1)

result:

The stitching is not perfect; the alignment of the images might be a bit off.
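For reference, with a 5 × 4 grid of 1288 × 964 frames and the assumed 10% overlap, the canvas size works out as follows (same formulas as inside the stitching function):

```python
img_h, img_w = 964, 1288             # Cytation 1 frame size
rows, cols, overlap = 5, 4, 0.1      # grid shape and assumed overlap
step_x = int(img_w * (1 - overlap))  # horizontal stride between tiles
step_y = int(img_h * (1 - overlap))  # vertical stride between tiles
canvas_w = step_x * (cols - 1) + img_w
canvas_h = step_y * (rows - 1) + img_h
print(canvas_w, canvas_h)  # 4765 4432
```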

I got these numbers straight from gen5.exe. There is a “wide FOV” option that, strangely, does not change the locations it goes to while taking multiple images of the well. We know that images taken with PLR / just the raw sensor are slightly bigger than what you see in gen5.exe. This implies gen5.exe crops images. I suspect “wide FOV” is a different cropping size than “not wide FOV”.

I can check gen5.exe for our Cytation 1 and see what sizes it suggests there. I’m not sure these are actually pixels :upside_down_face:

Is this always 0.1?

No, it can be any value. Maybe the blending algorithm requires a minimum overlap percentage, though.

Same for the Cytation 1: images are captured at 1288 × 964 by PLR, but Gen5 gives 1128 × 832. I don’t know if it is a crop or a resize, though.
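One quick sanity check (my own reasoning, not from Gen5 docs): a uniform resize preserves aspect ratio, so if the two ratios differ noticeably, a crop (or non-uniform scaling) is more likely:

```python
sensor_ratio = 1288 / 964  # what PLR captures, ~1.3361
gen5_ratio = 1128 / 832    # what Gen5 outputs, ~1.3558
# The ratios differ, which argues against a plain uniform resize.
print(abs(sensor_ratio - gen5_ratio) > 0.01)  # True
```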
