cellmil.models.segmentation.cellpose

Classes

CellposeSAM([pretrained_model, device])

PyTorch nn.Module wrapper around Cellpose model for cell instance segmentation.

class cellmil.models.segmentation.cellpose.CellposeSAM(pretrained_model: str = 'cpsam', device: Optional[device] = None, **kwargs: Any)[source]

Bases: Module

PyTorch nn.Module wrapper around Cellpose model for cell instance segmentation. This wrapper allows Cellpose to be used like other PyTorch models in the segmentation pipeline.

__init__(pretrained_model: str = 'cpsam', device: Optional[device] = None, **kwargs: Any)[source]

Initialize CellposeSAM wrapper.

Parameters:
  • pretrained_model – Path to a pretrained Cellpose model file, or a built-in model name (default 'cpsam')

  • device – Device to run the model on

  • **kwargs – Additional arguments passed to CellposeModel

forward(x: Tensor) → Dict[str, Tensor][source]

Forward pass through Cellpose model.

Parameters:

x – Input tensor of shape (B, C, H, W) where C=3 (RGB channels)

Returns:

Dictionary containing:

  • masks: Instance segmentation masks

  • flows: Flow fields

  • styles: Style vectors

  • cellprob: Cell probability maps

Return type:

Dict[str, Tensor]
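forward() expects batched RGB input of shape (B, C, H, W) with C = 3. As an illustration only (this helper is not part of the API), the shape contract can be sketched as:

```python
import numpy as np

def validate_forward_input(x):
    """Illustrative check of the (B, C, H, W), C == 3 contract that
    forward() documents; returns the batch size B."""
    if x.ndim != 4 or x.shape[1] != 3:
        raise ValueError(f"expected (B, 3, H, W), got {tuple(x.shape)}")
    return x.shape[0]
```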

_convert_outputs_to_tensors(masks: Any, flows: Any, styles: Any, image_shape: tuple[int, int]) → Dict[str, Tensor][source]

Convert Cellpose outputs to PyTorch tensors.

Parameters:
  • masks – Instance masks from Cellpose

  • flows – Flow outputs from Cellpose

  • styles – Style vectors from Cellpose

  • image_shape – Original image shape (H, W)

Returns:

Dictionary of converted tensors
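A rough numpy sketch of what this conversion might look like (illustrative only: the real method produces torch tensors rather than numpy arrays, and the flows layout assumed here follows Cellpose's usual per-image output list, where element 1 is the flow field and element 2 the cell probability):

```python
import numpy as np

def convert_outputs(masks, flows, styles, image_shape):
    """Hypothetical sketch: normalize Cellpose per-image outputs to
    consistently typed and shaped arrays keyed like the forward() output."""
    h, w = image_shape
    masks = np.asarray(masks, dtype=np.int64)          # (H, W) instance labels
    assert masks.shape == (h, w)
    flow_field = np.asarray(flows[1], dtype=np.float32)  # assumed flow field slot
    cellprob = np.asarray(flows[2], dtype=np.float32)    # assumed cellprob slot
    styles = np.asarray(styles, dtype=np.float32)        # style vector
    return {"masks": masks, "flows": flow_field,
            "styles": styles, "cellprob": cellprob}
```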

_stack_batch_results(results: List[Dict[str, Tensor]]) → Dict[str, Tensor][source]

Stack batch results into single tensors.

Parameters:

results – List of result dictionaries from individual images

Returns:

Dictionary with stacked tensors
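The stacking step amounts to a per-key stack over the batch. An illustrative numpy sketch (the real method would use torch.stack on tensors):

```python
import numpy as np

def stack_batch_results(results):
    """Sketch: combine a list of per-image result dicts into one dict of
    batched arrays, adding a leading batch dimension per key."""
    keys = results[0].keys()
    return {k: np.stack([r[k] for r in results], axis=0) for k in keys}
```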

eval()[source]

Set model to evaluation mode.

train(mode: bool = True)[source]

Set model to training mode (not implemented for Cellpose).

to(device: Union[device, str])[source]

Move model to specified device.

cuda(device: Optional[Union[int, device]] = None)[source]

Move model to CUDA device.

cpu()[source]

Move model to CPU.

parameters()[source]

Return model parameters (for compatibility with PyTorch optimizers).

state_dict()[source]

Return state dictionary (for compatibility with PyTorch save/load).

load_state_dict(state_dict: Dict[str, Any], strict: bool = True)[source]

Load state dictionary (for compatibility with PyTorch save/load).

calculate_instance_map(predictions: Dict[str, Tensor], magnification: float = 40.0) → tuple[torch.Tensor, list[dict[numpy.int32, dict[str, Any]]]][source]

Calculate instance map and extract cell information from Cellpose predictions.

Parameters:
  • predictions – Dictionary containing model outputs (masks, flows, cellprob, styles)

  • magnification – Magnification level of the image

Returns:

Tuple containing:

  • instance_map: Tensor with instance segmentation labels

  • instance_types: List of dictionaries with cell information for each image in batch

Return type:

tuple[torch.Tensor, list[dict[numpy.int32, dict[str, Any]]]]
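A hedged sketch of the kind of per-cell information such a method can derive from a labeled instance map (the property names here are illustrative, not the actual keys used by cellmil):

```python
import numpy as np

def extract_instance_info(instance_map):
    """Sketch: derive per-cell properties (centroid, bounding box, area)
    from a labeled instance map; label 0 is background by Cellpose
    convention and is skipped."""
    info = {}
    for label in np.unique(instance_map):
        if label == 0:
            continue
        ys, xs = np.nonzero(instance_map == label)
        info[label] = {
            "centroid": (float(ys.mean()), float(xs.mean())),
            "bbox": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())),
            "area": int(ys.size),
        }
    return info
```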

_extract_contour(mask: ndarray) → ndarray[source]

Extract contour from binary mask.

Parameters:

mask – Binary mask of the cell

Returns:

Contour points as numpy array
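An illustrative pure-numpy sketch of contour extraction (the actual implementation may instead use OpenCV's findContours, which returns ordered contour points; this version simply collects the boundary pixels as the mask minus its 4-neighbourhood erosion):

```python
import numpy as np

def extract_contour(mask):
    """Sketch: return the boundary pixels of a binary mask as an (N, 2)
    array of (row, col) coordinates."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # A pixel survives erosion only if it and all 4 neighbours are set
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    ys, xs = np.nonzero(m & ~eroded)
    return np.stack([ys, xs], axis=1)
```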