# Classes

## ImageContainer

class squidpy.im.ImageContainer(img=None, layer='image', scale=1.0, **kwargs)[source]

Container for in-memory numpy.ndarray/xarray.DataArray or on-disk TIFF/JPEG images.

Wraps xarray.Dataset to store several image layers with the same x and y dimensions in one object. Dimensions of stored images are (y, x, channels). The channel dimension may vary between image layers.

Allows for lazy and chunked reading via rasterio and dask, if the input is a TIFF image. This class is given to all image processing functions, along with an anndata.AnnData instance, if necessary.

Parameters
Raises
• ValueError – If loading from a file/store with an unknown format.

• NotImplementedError – If loading a specific data type has not been implemented.
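The (y, x, channels) dimension convention described above can be sketched with plain numpy; this is only an illustration of the layout (squidpy itself is not required, and the variable names are hypothetical):

```python
import numpy as np

# An RGB image layer of height 100 and width 200 is stored as (y, x, channels):
rgb = np.zeros((100, 200, 3), dtype=np.uint8)

# A second layer (e.g. a segmentation mask) may have a different number of
# channels, but must share the same y and x dimensions:
mask = np.zeros((100, 200, 1), dtype=np.uint8)

# Both layers agree on (y, x), only the channel dimension differs.
assert rgb.shape[:2] == mask.shape[:2]
```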

add_img(img, layer=None, channel_dim='channels', lazy=True, chunks=None, **kwargs)[source]

Add a new image to the container.

Parameters
Return type

None

Returns

Nothing, just adds a new layer to data.

Raises
• ValueError – If loading from a file/store with an unknown format.

• NotImplementedError – If loading a specific data type has not been implemented.

Notes

Lazy loading via dask is not supported for on-disk JPEG files; they will be loaded into memory. Multi-page TIFFs will be loaded into one xarray.DataArray, with concatenated channel dimensions.

apply(func, layer=None, channel=None, copy=True, **kwargs)[source]

Apply a function to a layer within this container.

Parameters
Return type

Optional[ImageContainer]

Returns

If copy = True, returns a new container with the processed layer. Otherwise, overwrites the layer in this container.
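The copy-versus-overwrite semantics described above can be sketched with a toy dict-backed "container" (a hypothetical stand-in for ImageContainer, not squidpy's implementation):

```python
import numpy as np

def apply_sketch(layers, func, layer, copy=True):
    """Apply ``func`` to one layer of a dict of arrays (a toy stand-in
    for ImageContainer.apply, illustrating the copy semantics only)."""
    result = func(layers[layer])
    if copy:
        new_layers = dict(layers)
        new_layers[layer] = result   # new container with the processed layer
        return new_layers
    layers[layer] = result           # overwrite the layer in place
    return None

layers = {"image": np.ones((4, 4, 3))}
out = apply_sketch(layers, lambda a: a * 2, "image", copy=True)
# The original layer is untouched; the returned "container" holds the result.
assert layers["image"].max() == 1 and out["image"].max() == 2
```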

copy(deep=False)[source]

Return a copy of self.

Parameters

deep (bool) – Whether to make a deep copy or not.

Return type

ImageContainer

Returns

Copy of self.

crop_center(y, x, radius, **kwargs)[source]

Extract a circular crop.

The extracted crop will have shape (radius[0] * 2 + 1, radius[1] * 2 + 1).

Parameters
Return type

ImageContainer

Returns

The cropped image of shape (radius[0] * 2 + 1, radius[1] * 2 + 1).

crop_corner(y, x, size=None, scale=1.0, cval=0, mask_circle=False, preserve_dtypes=True)[source]

Extract a crop from the upper-left corner.

Parameters
Return type

ImageContainer

Returns

The cropped image of size size * scale.

Raises

ValueError – If the crop would completely lie outside of the image or if mask_circle = True and size does not define a square.

Notes

If preserve_dtypes = True but cval cannot be safely cast, cval will be set to 0.
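The padding and cval fallback behaviour described above can be sketched in plain numpy (function name and details are hypothetical; this is not squidpy's implementation):

```python
import numpy as np

def crop_corner_sketch(img, y, x, size, cval=0, preserve_dtypes=True):
    """Toy upper-left-corner crop: pads with ``cval``, and falls back to
    cval = 0 when cval cannot be represented in the preserved dtype."""
    if preserve_dtypes and np.issubdtype(img.dtype, np.integer):
        info = np.iinfo(img.dtype)
        if not (info.min <= cval <= info.max):
            cval = 0  # cval cannot be safely cast -> set to 0
    out = np.full((size, size) + img.shape[2:], cval, dtype=img.dtype)
    crop = img[y:y + size, x:x + size]
    out[:crop.shape[0], :crop.shape[1]] = crop
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4, 1)
c = crop_corner_sketch(img, 2, 2, 4, cval=300)  # 300 does not fit in uint8
assert c.shape == (4, 4, 1)
assert c[0, 0, 0] == 10   # copied pixel img[2, 2, 0]
assert c[3, 3, 0] == 0    # out-of-bounds area padded with fallback cval = 0
```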

features_custom(func, layer, channels=None, feature_name=None, **kwargs)

Calculate features using a custom function.

The feature extractor func can be any callable(), as long as it has the following signature: numpy.ndarray (height, width, channels) -> float/Sequence.

Parameters
Return type
Returns

Returns features with the following keys:

• '{feature_name}_{i}' - i-th feature value.

Examples

A simple example would be to calculate the mean of a specified channel, as is already done in squidpy.im.ImageContainer.features_summary():

img = squidpy.im.ImageContainer(...)
img.features_custom(func=numpy.mean, layer=..., channels=0)

features_histogram(layer, feature_name='histogram', channels=None, bins=10, v_range=None)

Compute histogram counts of color channel values.

Returns one feature per bin and channel.

Parameters
Return type
Returns

Returns features with the following keys for each channel c in channels:

• '{feature_name}_ch-{c}_bin-{i}' - the histogram counts for each bin i in bins.
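The per-channel bin counts and the key naming above can be sketched in plain numpy (an illustration only, not squidpy's implementation):

```python
import numpy as np

def histogram_features(img, feature_name="histogram", bins=4):
    """Compute histogram counts per channel, keyed as
    '{feature_name}_ch-{c}_bin-{i}' (toy sketch of the scheme above)."""
    feats = {}
    for c in range(img.shape[-1]):
        counts, _ = np.histogram(img[..., c], bins=bins)
        for i, n in enumerate(counts):
            feats[f"{feature_name}_ch-{c}_bin-{i}"] = int(n)
    return feats

img = np.random.default_rng(0).integers(0, 256, (8, 8, 2))
feats = histogram_features(img)
assert len(feats) == 2 * 4  # one feature per bin and channel
# Counts over all bins of one channel sum to the number of pixels (64).
assert sum(v for k, v in feats.items() if "ch-0" in k) == 64
```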

features_segmentation(label_layer, intensity_layer=None, feature_name='segmentation', channels=None, props=('label', 'area', 'mean_intensity'))

Calculate segmentation features using skimage.measure.regionprops().

Features are calculated using label_layer, a cell segmentation of intensity_layer, resulting from calling e.g. squidpy.im.segment().

Depending on the specified parameters, mean and std of the requested props are returned. For the ‘label’ feature, the number of labels is returned, i.e. the number of cells in this image.

Parameters
• label_layer (str) – Name of the image layer used to calculate the non-intensity properties.

• intensity_layer (Optional[str]) – Name of the image layer used to calculate the intensity properties.

• feature_name (str) – Base name of feature in resulting feature values dict.

• channels (Union[int, Sequence[int], None]) – Channels for which this feature is computed. If None, use all channels. Only relevant for features that use the intensity_layer.

• props (Sequence[str]) –

Segmentation features that are calculated. See properties in skimage.measure.regionprops_table(). Each feature is calculated for each segment (e.g., nucleus) and mean and std values are returned, except for 'centroid' and 'label'. Valid options are:

• 'area' - number of pixels of segment.

• 'bbox_area' - number of pixels of bounding box area of segment.

• 'centroid' - centroid coordinates of segment.

• 'convex_area' - number of pixels in convex hull of segment.

• 'eccentricity' - eccentricity of ellipse with same second moments as segment.

• 'equivalent_diameter' - diameter of circle with same area as segment.

• 'euler_number' - Euler characteristic of segment.

• 'extent' - ratio of pixels in segment to its bounding box.

• 'feret_diameter_max' - longest distance between points around convex hull of segment.

• 'filled_area' - number of pixels of segment with all holes filled in.

• 'label' - number of segments.

• 'major_axis_length' - length of major axis of ellipse with same second moments as segment.

• 'max_intensity' - maximum intensity of intensity_layer in segment.

• 'mean_intensity' - mean intensity of intensity_layer in segment.

• 'min_intensity' - minimum intensity of intensity_layer in segment.

• 'minor_axis_length' - length of minor axis of ellipse with same second moments as segment.

• 'orientation' - angle of major axis of ellipse with same second moments as segment.

• 'perimeter' - perimeter of segment using 4-connectivity.

• 'perimeter_crofton' - perimeter of segment approximated by the Crofton formula.

• 'solidity' - ratio of pixels in the segment to the convex hull of the segment.

Return type
Returns

Returns features with the following keys:

• '{feature_name}_label' - if 'label' is in props.

• '{feature_name}_centroid' - if 'centroid' is in props.

• '{feature_name}_{p}_mean' - mean for each non-intensity property p in props.

• '{feature_name}_{p}_std' - standard deviation for each non-intensity property p in props.

• '{feature_name}_ch-{c}_{p}_mean' - mean for each intensity property p in props and channel c in channels.

• '{feature_name}_ch-{c}_{p}_std' - standard deviation for each intensity property p in props and channel c in channels.
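The non-intensity part of this scheme (the 'label' count plus mean/std aggregation over segments) can be sketched in plain numpy, without skimage; names and details are illustrative, not squidpy's implementation:

```python
import numpy as np

def segmentation_features(label_img, feature_name="segmentation"):
    """Count segments and aggregate per-segment 'area' into mean/std,
    mimicking the key naming described above (toy sketch only)."""
    labels = np.unique(label_img)
    labels = labels[labels != 0]  # 0 is background, not a segment
    areas = np.array([(label_img == lab).sum() for lab in labels])
    return {
        f"{feature_name}_label": len(labels),       # number of segments
        f"{feature_name}_area_mean": areas.mean(),  # mean over segments
        f"{feature_name}_area_std": areas.std(),    # std over segments
    }

seg = np.zeros((6, 6), dtype=int)
seg[:2, :2] = 1   # segment 1, area 4
seg[4:, 4:] = 2   # segment 2, area 4
f = segmentation_features(seg)
assert f["segmentation_label"] == 2
assert f["segmentation_area_mean"] == 4.0
```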

features_summary(layer, feature_name='summary', channels=None, quantiles=(0.9, 0.5, 0.1))

Calculate summary statistics of image channels.

Parameters
Return type
Returns

Returns features with the following keys for each channel c in channels:

• '{feature_name}_ch-{c}_quantile-{q}' - the quantile features for each quantile q in quantiles.

• '{feature_name}_ch-{c}_mean' - the mean.

• '{feature_name}_ch-{c}_std' - the standard deviation.
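These per-channel summary statistics and their key naming can be sketched in plain numpy (an illustration, not squidpy's implementation):

```python
import numpy as np

def summary_features(img, feature_name="summary", quantiles=(0.9, 0.5, 0.1)):
    """Per-channel quantiles, mean, and std, keyed as described above
    (toy sketch of the naming scheme only)."""
    feats = {}
    for c in range(img.shape[-1]):
        vals = img[..., c].ravel()
        for q in quantiles:
            feats[f"{feature_name}_ch-{c}_quantile-{q}"] = np.quantile(vals, q)
        feats[f"{feature_name}_ch-{c}_mean"] = vals.mean()
        feats[f"{feature_name}_ch-{c}_std"] = vals.std()
    return feats

img = np.arange(100, dtype=float).reshape(10, 10, 1)
f = summary_features(img)
assert f["summary_ch-0_mean"] == 49.5
assert f["summary_ch-0_quantile-0.5"] == 49.5
```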

features_texture(layer, feature_name='texture', channels=None, props=('contrast', 'dissimilarity', 'homogeneity', 'correlation', 'ASM'), distances=(1,), angles=(0, 0.7853981633974483, 1.5707963267948966, 2.356194490192345))

Calculate texture features.

A grey level co-occurrence matrix (GLCM) is computed for different combinations of distance and angle.

The distance defines the pixel difference of co-occurrence. The angles define the directions along which we check for co-occurrence. The GLCM records the number of times that grey level j occurs at a distance d and at an angle θ from grey level i.

Parameters
Return type
Returns

Returns features with the following keys for each channel c in channels:

• '{feature_name}_ch-{c}_{p}_dist-{dist}_angle-{a}' - the GLCM properties, for each p in props, d in distances and a in angles.

Notes

If the image is not of type numpy.uint8, it will be converted.
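The GLCM construction described above, for the simplest case (distance 1, angle 0, i.e. horizontal neighbours), can be sketched in plain numpy; skimage.feature.graycomatrix is the real implementation, this is only an illustration:

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Count horizontal co-occurrences: P[i, j] is how often grey level j
    sits one pixel to the right of grey level i (distance 1, angle 0)."""
    P = np.zeros((levels, levels), dtype=int)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
    return P

img = np.array([[0, 0, 1],
                [0, 1, 1]])
P = glcm_horizontal(img, levels=2)
# Horizontal pairs are (0,0), (0,1) in row 0 and (0,1), (1,1) in row 1.
assert P[0, 0] == 1 and P[0, 1] == 2 and P[1, 1] == 1
```

Texture properties such as contrast or homogeneity are then scalar summaries of this matrix, one per (distance, angle) combination.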

generate_equal_crops(size=None, as_array=False, **kwargs)[source]

Decompose image into equally sized crops.

Parameters
Yields

The crops, whose type depends on as_array.

Notes

Crops extending beyond the image boundary are padded with cval.

Return type

Union[Iterator[ImageContainer], Iterator[Dict[str, ndarray]]]
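The tiling-with-padding behaviour described above can be sketched in plain numpy (a toy generator, not squidpy's implementation):

```python
import numpy as np

def equal_crops(img, size, cval=0):
    """Yield equally sized crops covering the image; crops extending
    beyond the boundary are padded with ``cval`` (toy sketch only)."""
    for y in range(0, img.shape[0], size):
        for x in range(0, img.shape[1], size):
            crop = np.full((size, size) + img.shape[2:], cval, dtype=img.dtype)
            part = img[y:y + size, x:x + size]
            crop[:part.shape[0], :part.shape[1]] = part
            yield crop

img = np.ones((5, 5, 1), dtype=np.uint8)
crops = list(equal_crops(img, 3))
# A 5x5 image tiled into 3x3 crops gives 2x2 = 4 crops, all the same size.
assert len(crops) == 4
assert all(c.shape == (3, 3, 1) for c in crops)
```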

generate_spot_crops(adata, library_id=None, spatial_key='spatial', spot_scale=1.0, obs_names=None, as_array=False, return_obs=False, **kwargs)[source]

Iterate over adata.obs_names and extract crops.

Implemented for 10X spatial datasets.

Parameters
Yields
• If return_obs = True, yields a tuple (crop, obs_name). Otherwise, yields just the crops.

• The type of the crops depends on as_array.

Return type

Union[Iterator[ImageContainer], Iterator[ndarray], Iterator[Tuple[ndarray, …]], Iterator[Dict[str, ndarray]]]

interactive(adata, spatial_key='spatial', library_id=None, cmap='viridis', palette=None, blending='opaque', symbol='disc', key_added='shapes')[source]

Launch the napari viewer.

Parameters
Return type

Interactive

Returns

Interactive view of this container. Screenshot of the canvas can be taken by squidpy.pl.Interactive.screenshot().

classmethod load(path, lazy=True, chunks=None)[source]

Load data from a Zarr store.

Parameters
Return type

ImageContainer

Returns

The loaded container.

save(path, **kwargs)[source]

Save the container into a Zarr store.

Parameters

path (Union[str, Path]) – Path to a Zarr store.

Return type

None

Returns

Nothing, just saves the container.

show(layer=None, channel=None, as_mask=False, ax=None, figsize=None, dpi=None, save=None, **kwargs)[source]

Show an image within this container.

Parameters
Return type

None

Returns

Nothing, just plots and optionally saves the plot.

Raises

ValueError – If as_mask = True and the image layer has more than 1 channel.

classmethod uncrop(crops, shape=None)[source]

Re-assemble image from crops and their positions.

Fills remaining positions with zeros. Positions are given as upper-right corners.

Parameters
Return type

ImageContainer

Returns

Re-assembled image from crops.

Raises

ValueError – If crop metadata was not found or if the requested shape is smaller than required by crops.
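A crop/re-assembly round trip can be sketched in plain numpy (here the positions are taken as the (y, x) of each crop's top-left pixel for simplicity; this is an illustration, not squidpy's implementation):

```python
import numpy as np

def uncrop_sketch(crops, corners, shape):
    """Re-assemble an image from crops and their (y, x) corner positions,
    filling remaining positions with zeros (toy sketch only)."""
    out = np.zeros(shape, dtype=crops[0].dtype)
    for crop, (y, x) in zip(crops, corners):
        out[y:y + crop.shape[0], x:x + crop.shape[1]] = crop
    return out

a = np.ones((2, 2), dtype=int)
b = np.full((2, 2), 2, dtype=int)
img = uncrop_sketch([a, b], [(0, 0), (0, 2)], shape=(2, 4))
assert img[0, 0] == 1 and img[0, 3] == 2  # each crop lands at its corner
```

A shape smaller than the area covered by the crops would fail here with an indexing error, mirroring the ValueError described above.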

property data

Underlying xarray.Dataset.

Return type

Dataset

property shape

Image shape (y, x).

Return type

Tuple[int, int]

## Interactive

class squidpy.pl.Interactive(img, adata, **kwargs)[source]

Interactive viewer for spatial data.

Parameters
close()[source]

Close the viewer.

Return type

None

screenshot(return_result=False, dpi=180, save=None, **kwargs)[source]

Plot a screenshot of the viewer’s canvas.

Parameters
Return type
Returns

Nothing, if return_result = False, otherwise the image array.

show(restore=False)[source]

Launch the napari.Viewer.

Parameters

restore (bool) – Whether to reinitialize the GUI after it has been destroyed.

Return type

Interactive

Returns

Nothing, just launches the viewer.

property adata

Annotated data object.

Return type

AnnData

## SegmentationWatershed

class squidpy.im.SegmentationWatershed[source]

Segmentation model based on skimage.segmentation.watershed().

segment(img, **kwargs)

Segment an image.

Parameters
Return type

Union[ndarray, ImageContainer]

Returns

Segmentation mask for the high-resolution image of shape (height, width, 1).

Raises
• ValueError – If the number of dimensions is neither 2 nor 3, or if there is more than 1 channel.

• NotImplementedError – If trying to segment a type for which the segmentation has not been registered.

## SegmentationBlob

class squidpy.im.SegmentationBlob(model)[source]

Segmentation model based on skimage blob detection.

Parameters

model (SegmentationBackend) –

Segmentation method to use. Valid options are:

• 'log' - skimage.feature.blob_log(). Blobs are assumed to be light on dark.

• 'dog' - skimage.feature.blob_dog(). Blobs are assumed to be light on dark.

• 'doh' - skimage.feature.blob_doh(). Blobs can be light on dark or vice versa.

segment(img, **kwargs)

Segment an image.

Parameters
Return type

Union[ndarray, ImageContainer]

Returns

Segmentation mask for the high-resolution image of shape (height, width, 1).

Raises
• ValueError – If the number of dimensions is neither 2 nor 3, or if there is more than 1 channel.

• NotImplementedError – If trying to segment a type for which the segmentation has not been registered.

## SegmentationCustom

class squidpy.im.SegmentationCustom(func)[source]

Segmentation model based on a user-defined function.

Parameters

func (Callable[…, ndarray]) – Segmentation function to use. Can be any callable(), as long as it has the following signature: numpy.ndarray (height, width, channels) -> numpy.ndarray (height, width[, channels]).
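A minimal function matching this signature might look as follows (the thresholding logic is purely illustrative; any (height, width, channels) -> (height, width[, channels]) callable works):

```python
import numpy as np

def threshold_segment(arr: np.ndarray) -> np.ndarray:
    """Toy segmentation function: threshold the first channel of a
    (height, width, channels) array into a binary (height, width) mask."""
    return (arr[..., 0] > 0.5).astype(np.uint8)

img = np.zeros((4, 4, 1))
img[1:3, 1:3, 0] = 1.0          # a bright 2x2 "cell"
mask = threshold_segment(img)
assert mask.shape == (4, 4)     # channels dimension dropped
assert mask.sum() == 4          # the four bright pixels
```

Such a function would then be passed as squidpy.im.SegmentationCustom(func=threshold_segment).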

segment(img, **kwargs)

Segment an image.

Parameters
Return type

Union[ndarray, ImageContainer]

Returns

Segmentation mask for the high-resolution image of shape (height, width, 1).

Raises
• ValueError – If the number of dimensions is neither 2 nor 3, or if there is more than 1 channel.

• NotImplementedError – If trying to segment a type for which the segmentation has not been registered.