cuCIM API Reference#

Clara Submodules#

class cucim.clara.CuImage#
Attributes
associated_images

Returns a set of associated image names.

channel_names

A channel name list.

coord_sys

Coordinate frame in which the direction cosines are measured.

device

A device type.

dims

A string containing a list of dimensions being requested.

direction

Direction cosines (size is always 3x3).

dtype

The data type of the image.

is_loaded

True if image data is loaded & available.

metadata

A metadata object as dict.

ndim

The number of dimensions.

origin

Physical location of (0, 0, 0) (size is always 3).

path

Underlying file path for this object.

raw_metadata

A raw metadata string.

resolutions

Returns a dict that includes resolution information.

shape

A tuple of dimension sizes (in the order of dims).

typestr

The data type of the image in string format.

Methods

associated_image(self[, name, device])

Returns an associated image for the given name, as a CuImage object.

cache([type])

Get cache object.

close(self)

Closes the file handle.

profiler(**kwargs)

Get profiler object.

read_region(self[, location, size, level, ...])

Returns a subresolution image.

save(self, arg0)

Saves image data to the file path.

size(self[, dim_order])

Returns size as a tuple for the given dimension order.

spacing(self[, dim_order])

Returns the physical size as a tuple.

spacing_units(self[, dim_order])

Units for each spacing element (the length equals ndim).

associated_image(self: cucim.clara._cucim.CuImage, name: str = '', device: cucim.clara._cucim.io.Device = cpu) object#

Returns an associated image for the given name, as a CuImage object.

property associated_images#

Returns a set of associated image names.

A digital pathology image usually has a label/thumbnail or a macro image (a low-power snapshot of the entire glass slide). The names of those images (such as ‘macro’ and ‘label’) are available in associated_images.
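
A minimal usage sketch (the file name slide.tif is a placeholder for a local whole-slide image):

>>> from cucim import CuImage
>>> img = CuImage("slide.tif")
>>> img.associated_images        # e.g., {'label', 'macro', 'thumbnail'}
>>> label_img = img.associated_image("label")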

static cache(type: object = None, **kwargs) cucim.clara._cucim.cache.ImageCache#

Get cache object.
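
A minimal sketch, assuming the cache type may be passed as the string “per_process” (matching CacheType.PerProcess below); the capacity is in MiB:

>>> from cucim import CuImage
>>> cache = CuImage.cache("per_process", memory_capacity=2048)
>>> cache.type
<CacheType.PerProcess: 1>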

property channel_names#

A channel name list.

close(self: cucim.clara._cucim.CuImage) None#

Closes the file handle.

Once the file handle is closed, the image object (if loaded before) still exists but cannot read additional images from the file.

property coord_sys#

Coordinate frame in which the direction cosines are measured.

The available coordinate frame names are not yet finalized.

property device#

A device type.

By default, it is cpu (this default is subject to change starting with v0.19.0).

property dims#

A string containing a list of dimensions being requested.

The default is to return the six standard dims (‘STCZYX’) unless it is a digital pathology (DP) multi-resolution image.

The dims correspond to [sites, time, channel (or wavelength), z, y, x], where S means sites or multi-position locations.

NOTE: in OME-TIFF metadata, the dimension order would be specified as ‘XYZCTS’ (the first dimension is the fastest-iterating one).
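
For example (the file path is a placeholder; a 2D RGB whole-slide image typically reports a subset of the standard dims):

>>> from cucim import CuImage
>>> img = CuImage("slide.tif")
>>> img.dims
'YXC'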

property direction#

Direction cosines (size is always 3x3).

property dtype#

The data type of the image.

property is_loaded#

True if image data is loaded & available.

is_trace_enabled = False#

property metadata#

A metadata object as dict.

It is generally a dictionary (key-value pairs) but can be a complex object (e.g., OME-TIFF metadata).

property ndim#

The number of dimensions.

property origin#

Physical location of (0, 0, 0) (size is always 3).

property path#

Underlying file path for this object.

static profiler(**kwargs) cucim.clara._cucim.profiler.Profiler#

Get profiler object.

property raw_metadata#

A raw metadata string.

read_region(self: cucim.clara._cucim.CuImage, location: Iterable = (), size: List[int] = (), level: int = 0, num_workers: int = 0, batch_size: int = 1, drop_last: bool = False, prefetch_factor: int = 2, shuffle: bool = False, seed: int = 0, device: cucim.clara._cucim.io.Device = cpu, buf: object = None, shm_name: str = '', **kwargs) object#

Returns a subresolution image.

  • The dimension order of location and size is the reverse of the image’s dimension order.

  • That is, specify (X, Y) and (Width, Height) instead of (Y, X) and (Height, Width).

  • If location is not specified, it defaults to (0, 0), or to (0, 0, 0) if the image has a Z dimension.

  • Like OpenSlide, location uses level-0 based coordinates (the level-0 reference frame).

  • If size is not specified, it defaults to the (width, height) of the image at the specified level.

  • <not supported yet> Additional parameters (S, T, C, Z) are similar to <https://allencellmodeling.github.io/aicsimageio/aicsimageio.html#aicsimageio.aics_image.AICSImage.get_image_data>

  • Indices/ranges for (S, T, C, Z) are not supported yet.

  • The default values for level, S, T, and Z are zero.

  • The default value for C is -1 (all channels).

  • <not supported yet> device can be one of the following strings or a Device object: e.g., ‘cpu’, ‘cuda’, ‘cuda:0’ (use device index 0), or cucim.clara.io.Device(cucim.clara.io.CUDA, 0).

  • <not supported yet> If buf is specified (its type can be either a numpy-compatible object that implements __array_interface__ or a cupy-compatible object that implements __cuda_array_interface__), the read image is stored in buf without creating new CPU/GPU memory.

  • <not supported yet> If shm_name is specified, shared memory is created and the data is read into that shared memory.
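
A minimal usage sketch (the file path and coordinates are placeholders; converting with np.asarray assumes the region resides in CPU memory):

>>> import numpy as np
>>> from cucim import CuImage
>>> img = CuImage("slide.tif")
>>> region = img.read_region(location=(10000, 10000), size=(512, 512), level=0)
>>> arr = np.asarray(region)     # (Height, Width, C)
>>> arr.shape
(512, 512, 3)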

property resolutions#

Returns a dict that includes resolution information.

  • level_count: The number of levels

  • level_dimensions: A tuple of dimension tuples (width, height)

  • level_downsamples: A tuple of down-sample factors

  • level_tile_sizes: A tuple of tile size tuples (tile_width, tile_height)
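
For example (assuming img is an opened multi-resolution CuImage):

>>> res = img.resolutions
>>> res["level_count"]           # e.g., 3
>>> res["level_dimensions"]      # e.g., ((100000, 80000), (25000, 20000), (6250, 5000))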

save(self: cucim.clara._cucim.CuImage, arg0: str) None#

Saves image data to the file path.

Currently, only the .ppm file format is supported; the saved file can be viewed with the eog command on Ubuntu.

property shape#

A tuple of dimension sizes (in the order of dims).

size(self: cucim.clara._cucim.CuImage, dim_order: str = '') List[int]#

Returns size as a tuple for the given dimension order.

spacing(self: cucim.clara._cucim.CuImage, dim_order: str = '') List[float]#

Returns the physical size as a tuple.

If dim_order is specified, it returns the physical size for those dimensions. If a dimension given by dim_order doesn’t exist, 1.0 is returned by default for the missing dimension.

Args:

dim_order: A dimension string (e.g., ‘XYZ’)

Returns:

A tuple with physical size for each dimension

spacing_units(self: cucim.clara._cucim.CuImage, dim_order: str = '') List[str]#

Units for each spacing element (the length equals ndim).

property typestr#

The data type of the image in string format.

The value can be converted to NumPy’s dtype using numpy.dtype() (e.g., numpy.dtype(img.typestr)).
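
A minimal sketch (assuming img is an opened CuImage holding 8-bit unsigned pixels):

>>> import numpy as np
>>> np.dtype(img.typestr)
dtype('uint8')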

class cucim.clara.DLDataType#
Attributes
bits

Number of bits, common choices are 8, 16, 32.

code

Type code of base types.

lanes

Number of lanes in the type, used for vector types.

property bits#

Number of bits, common choices are 8, 16, 32.

property code#

Type code of base types.

property lanes#

Number of lanes in the type, used for vector types.

class cucim.clara.DLDataTypeCode#

Members:

DLInt

DLUInt

DLFloat

DLBfloat

Attributes
name

name(self: handle) -> str

value
DLBfloat = <DLDataTypeCode.DLBfloat: 4>#
DLFloat = <DLDataTypeCode.DLFloat: 2>#
DLInt = <DLDataTypeCode.DLInt: 0>#
DLUInt = <DLDataTypeCode.DLUInt: 1>#
property name#
property value#

cache#

class cucim.clara.cache.CacheType#

Members:

NoCache

PerProcess

SharedMemory

Attributes
name

name(self: handle) -> str

value
NoCache = <CacheType.NoCache: 0>#
PerProcess = <CacheType.PerProcess: 1>#
SharedMemory = <CacheType.SharedMemory: 2>#
property name#
property value#
class cucim.clara.cache.ImageCache#
Attributes
capacity

The capacity of the internal list/hashmap.

config

Returns the dictionary of configuration.

free_memory

The amount of cache memory still available.

hit_count

A cache hit count.

memory_capacity

The total capacity of the cache memory.

memory_size

The amount of cache memory currently used.

miss_count

A cache miss count.

size

The current size of the internal list/hashmap.

type

The cache type.

Methods

record(self[, value])

Records the cache statistics.

reserve(self, memory_capacity, **kwargs)

Reserves more memory if possible.

property capacity#

The capacity of the internal list/hashmap.

property config#

Returns the dictionary of configuration.

property free_memory#

The amount of cache memory still available.

property hit_count#

A cache hit count.

property memory_capacity#

The total capacity of the cache memory.

property memory_size#

The amount of cache memory currently used.

property miss_count#

A cache miss count.

record(self: cucim.clara._cucim.cache.ImageCache, value: object = None) bool#

Records the cache statistics.

reserve(self: cucim.clara._cucim.cache.ImageCache, memory_capacity: int, **kwargs) None#

Reserves more memory if possible.

property size#

The current size of the internal list/hashmap.

property type#

The cache type.

cucim.clara.cache.preferred_memory_capacity(img: object = None, image_size: Optional[List[int]] = None, tile_size: Optional[List[int]] = None, patch_size: Optional[List[int]] = None, bytes_per_pixel: int = 3) int#

Returns a good cache memory capacity value in MiB for the given conditions.

Please see how the value is calculated: https://godbolt.org/z/8vxnPfKM5

Args:

img: A CuImage object that can provide image_size, tile_size, and bytes_per_pixel information. If this argument is provided, only patch_size from the remaining arguments is used for the calculation.

image_size: A list of values that represents the image size (width, height).

tile_size: A list of values that represents the tile size (width, height). The default value is (256, 256).

patch_size: A list of values that represents the patch size (width, height). The default value is (256, 256).

bytes_per_pixel: The number of bytes that each pixel in the 2D image occupies. The default value is 3.

Returns:

int: The suggested memory capacity in MiB.
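
A minimal sketch combining this helper with CuImage.cache (the file path is a placeholder):

>>> from cucim import CuImage
>>> from cucim.clara.cache import preferred_memory_capacity
>>> img = CuImage("image.tif")
>>> capacity = preferred_memory_capacity(img, patch_size=(256, 256))
>>> cache = CuImage.cache("per_process", memory_capacity=capacity)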

filesystem#

class cucim.clara.filesystem.CuFileDriver#

Methods

close(self)

Closes the opened file if it is not already closed.

pread(self, buf, count, file_offset[, ...])

Reads up to count bytes from the file driver at offset file_offset (from the start of the file) into the buffer buf starting at offset buf_offset.

pwrite(self, buf, count, file_offset[, ...])

Writes up to count bytes from the buffer buf starting at offset buf_offset to the file driver at offset file_offset (from the start of the file).

close(self: cucim.clara._cucim.filesystem.CuFileDriver) bool#

Closes the opened file if it is not already closed.

pread(self: cucim.clara._cucim.filesystem.CuFileDriver, buf: object, count: int, file_offset: int, buf_offset: int = 0) int#

Reads up to count bytes from the file driver at offset file_offset (from the start of the file) into the buffer buf starting at offset buf_offset. The file offset is not changed.

Args:

buf: A buffer where the read bytes are stored. The buffer can be in either CPU memory or (CUDA) GPU memory.

count: The number of bytes to read.

file_offset: An offset from the start of the file.

buf_offset: An offset from the start of the buffer. The default value is 0.

Returns:

The number of bytes read on success, -1 otherwise.

pwrite(self: cucim.clara._cucim.filesystem.CuFileDriver, buf: object, count: int, file_offset: int, buf_offset: int = 0) int#

Writes up to count bytes from the buffer buf starting at offset buf_offset to the file driver at offset file_offset (from the start of the file). The file offset is not changed.

Args:

buf: A buffer from which the bytes to write are taken. The buffer can be in either CPU memory or (CUDA) GPU memory.

count: The number of bytes to write.

file_offset: An offset from the start of the file.

buf_offset: An offset from the start of the buffer. The default value is 0.

Returns:

The number of bytes written on success, -1 otherwise.

cucim.clara.filesystem.close(arg0: cucim.clara._cucim.filesystem.CuFileDriver) bool#

Closes the given file driver.

Args:

fd: A CuFileDriver object.

Returns:

True on success, False otherwise.

cucim.clara.filesystem.discard_page_cache(file_path: str) bool#

Discards a system (page) cache for the given file path.

Args:

file_path: The file path whose system cache should be discarded.

Returns:

True on success, False otherwise.

cucim.clara.filesystem.open(file_path: str, flags: str, mode: int = 420) cucim.clara._cucim.filesystem.CuFileDriver#

Opens a file with the specified flags and mode.

‘flags’ can be one of the following flag strings:

  • “r”: os.O_RDONLY

  • “r+”: os.O_RDWR

  • “w”: os.O_RDWR | os.O_CREAT | os.O_TRUNC

  • “a”: os.O_RDWR | os.O_CREAT

In addition to the above flags, the method appends os.O_CLOEXEC and os.O_DIRECT by default.

The following optional flags can be added to the above strings:

  • ‘p’: Use POSIX APIs only (first try to open with O_DIRECT). It does not use GDS.

  • ‘n’: Do not add O_DIRECT flag.

  • ‘m’: Use memory-mapped file. This flag is supported only for the read-only file descriptor.

When ‘m’ is used, PROT_READ and MAP_SHARED are used as parameters of the mmap() function.

Args:

file_path: A file path to open.

flags: File flags as a string. The default value is “r”.

mode: A file mode. The default value is 0o644.

Returns:

An object of CuFileDriver.
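
A minimal sketch (output.raw is a placeholder; “w” creates or truncates the file, with os.O_CLOEXEC and os.O_DIRECT appended by default):

>>> from cucim.clara import filesystem as fs
>>> fd = fs.open("output.raw", "w")
>>> fs.close(fd)
True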

cucim.clara.filesystem.pread(fd: cucim.clara._cucim.filesystem.CuFileDriver, buf: object, count: int, file_offset: int, buf_offset: int = 0) int#

Reads up to count bytes from the file driver fd at offset file_offset (from the start of the file) into the buffer buf starting at offset buf_offset. The file offset is not changed.

Args:

fd: A CuFileDriver object.

buf: A buffer where the read bytes are stored. The buffer can be in either CPU memory or (CUDA) GPU memory.

count: The number of bytes to read.

file_offset: An offset from the start of the file.

buf_offset: An offset from the start of the buffer. The default value is 0.

Returns:

The number of bytes read on success, -1 otherwise.

cucim.clara.filesystem.pwrite(fd: cucim.clara._cucim.filesystem.CuFileDriver, buf: object, count: int, file_offset: int, buf_offset: int = 0) int#

Writes up to count bytes from the buffer buf starting at offset buf_offset to the file driver fd at offset file_offset (from the start of the file). The file offset is not changed.

Args:

fd: A CuFileDriver object.

buf: A buffer from which the bytes to write are taken. The buffer can be in either CPU memory or (CUDA) GPU memory.

count: The number of bytes to write.

file_offset: An offset from the start of the file.

buf_offset: An offset from the start of the buffer. The default value is 0.

Returns:

The number of bytes written on success, -1 otherwise.
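
A round-trip sketch with a NumPy buffer (the file name is a placeholder; a CuPy array could be used the same way for GPU memory):

>>> import numpy as np
>>> from cucim.clara import filesystem as fs
>>> data = np.arange(16, dtype=np.uint8)
>>> fd = fs.open("scratch.raw", "w")
>>> fs.pwrite(fd, data, 16, 0)    # write 16 bytes at file offset 0
16
>>> fs.close(fd)
True
>>> out = np.zeros(16, dtype=np.uint8)
>>> fd = fs.open("scratch.raw", "r")
>>> fs.pread(fd, out, 16, 0)      # read them back
16
>>> fs.close(fd)
True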

io#

class cucim.clara.io.Device#
Attributes
index

Device index.

type

Device type.

Methods

parse_type(arg0)

Creates a DeviceType object from a device name string.

property index#

Device index.

static parse_type(arg0: str) cucim.clara._cucim.io.DeviceType#

Creates a DeviceType object from a device name string.
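
A minimal sketch (the repr shown follows the DeviceType members listed below):

>>> from cucim.clara.io import Device
>>> Device.parse_type("cuda")
<DeviceType.CUDA: 2>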

property type#

Device type.

class cucim.clara.io.DeviceType#

Members:

CPU

CUDA

CUDAHost

CUDAManaged

CPUShared

CUDAShared

Attributes
name

name(self: handle) -> str

value
CPU = <DeviceType.CPU: 1>#
CPUShared = <DeviceType.CPUShared: 101>#
CUDA = <DeviceType.CUDA: 2>#
CUDAHost = <DeviceType.CUDAHost: 3>#
CUDAManaged = <DeviceType.CUDAManaged: 13>#
CUDAShared = <DeviceType.CUDAShared: 102>#
property name#
property value#

core Submodules#

color#

cucim.core.operations.color.color_jitter(img: Any, brightness=0, contrast=0, saturation=0, hue=0)#

Applies color jitter by random sequential application of 4 operations (brightness, contrast, saturation, hue).

Parameters
img : channel first, cupy.ndarray or numpy.ndarray

Input data of shape (C, H, W). Can also batch process input of shape (N, C, H, W). Can be a numpy.ndarray or cupy.ndarray.

brightness : float or 2-tuple of float, optional

Non-negative factor to jitter the brightness by. When brightness is a scalar, scaling will be by a random value in range [max(0, 1 - brightness), (1 + brightness)]. brightness can also be a 2-tuple specifying the range for the random scaling factor. A value of 0 or (1, 1) will result in no change.

contrast : float or 2-tuple of float, optional

Non-negative factor to jitter the contrast by. When contrast is a scalar, scaling will be by a random value between [max(0, 1 - contrast), (1 + contrast)]. contrast can also be a 2-tuple specifying the range for the random scaling factor. A value of 0 or (1, 1) will result in no change.

saturation : float or 2-tuple of float, optional

Non-negative factor to jitter the saturation by. When saturation is a scalar, scaling will be by a random value between [max(0, 1 - saturation), (1 + saturation)]. saturation can also be a 2-tuple specifying the range for the random scaling factor. A value of 0 or (1, 1) will result in no change.

hue : float or 2-tuple of float, optional

Factor between [-0.5, 0.5] to jitter hue by. When hue is a scalar, scaling will be by a random value between in the range [-hue, hue]. hue can also be a 2-tuple specifying the range. A value of 0 or (0, 0) will result in no change.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
ValueError

If ‘brightness’, ‘contrast’, ‘saturation’, or ‘hue’ is outside the allowed range

TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.color as ccl
>>> # input is channel first 3d array
>>> output_array = ccl.color_jitter(input_arr,.25,.75,.25,.04)
cucim.core.operations.color.image_to_absorbance(image, source_intensity=255.0, dtype=<class 'numpy.float32'>)#

Convert an image to units of absorbance (optical density).

Parameters
image : ndarray

The image to convert to absorbance. Can be single or multichannel.

source_intensity : float, optional

Reference intensity for image.

dtype : numpy.dtype, optional

The floating point precision at which to compute the absorbance.

Returns
absorbance : ndarray

The absorbance computed from image.

Notes

If image has an integer dtype it will be clipped to range [1, source_intensity], while float image inputs are clipped to range [source_intensity/255, source_intensity]. The minimum is to avoid log(0). Absorbance is then given by absorbance = log(image / source_intensity).
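
Examples

A minimal sketch on synthetic data:

>>> import cupy as cp
>>> from cucim.core.operations.color import image_to_absorbance
>>> img = cp.full((16, 16, 3), 128, dtype=cp.uint8)   # uniform mid-gray RGB tile
>>> absorbance = image_to_absorbance(img)             # float32, same shape as img
>>> absorbance.shape
(16, 16, 3)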

cucim.core.operations.color.normalize_colors_pca(image, source_intensity: float = 240.0, alpha: float = 1.0, beta: float = 0.345, ref_stain_coeff: tuple | cupy.ndarray = ((0.5626, 0.2159), (0.7201, 0.8012), (0.4062, 0.5581)), ref_max_conc: tuple | cupy.ndarray = (1.9705, 1.0308), image_type: str = 'intensity', channel_axis: int = 0)#

Extract the matrix of stain coefficients from the image.

Parameters
image : np.ndarray

RGB image to determine concentrations for. Intensities should typically be within unsigned 8-bit integer intensity range ([0, 255]) when image_type == "intensity".

source_intensity : float, optional

Transmitted light intensity. The algorithm will clip image intensities above the specified source_intensity and then normalize by source_intensity so that image intensities are <= 1.0. Only used when image_type==”intensity”.

alpha : float, optional

Algorithm parameter controlling the [alpha, 100 - alpha] percentile range used as a robust [min, max] estimate.

beta : float, optional

Absorbance (optical density) threshold below which to consider pixels as transparent. Transparent pixels are excluded from the estimation.

ref_stain_coeff : array-like

Reference stain coefficients as determined by the output of stain_extraction_pca for a reference image.

ref_max_conc : tuple or cp.ndarray

The reference maximum concentrations.

image_type : {“intensity”, “absorbance”}, optional

With the default image_type of “intensity”, the image will be transformed to an absorbance using image_to_absorbance. If the input image is already an absorbance image, then image_type should be set to “absorbance” instead.

channel_axis : int, optional

The axis corresponding to color channels (the default is axis 0).

Returns
stain_coeff : np.ndarray

Stain attenuation coefficient matrix derived from the image, where the first column corresponds to H, the second column is E and the rows are RGB values.

Notes

The default beta of 0.345 is equivalent to the use of 0.15 in [1]. The difference is due to our use of the natural log instead of a decadic log (log10) when computing the absorbance.

References

1

M. Macenko et al., “A method for normalizing histology slides for quantitative analysis,” 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 1107-1110, doi: 10.1109/ISBI.2009.5193250. http://wwwx.cs.unc.edu/~mn/sites/default/files/macenko2009.pdf
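
Examples

A minimal sketch on synthetic channel-first data (random values stand in for a real H&E-stained image; the default reference coefficients are used):

>>> import cupy as cp
>>> from cucim.core.operations.color import normalize_colors_pca
>>> img = cp.random.randint(50, 256, (3, 64, 64), dtype=cp.uint8)
>>> out = normalize_colors_pca(img, channel_axis=0)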

cucim.core.operations.color.stain_extraction_pca(image, source_intensity=240, alpha=1, beta=0.345, *, channel_axis=0, image_type='intensity')#

Extract the matrix of H & E stain coefficients from an image.

Uses a method that selects stain vectors based on the angle distribution within a best-fit plane determined by principal component analysis (PCA) [1].

Parameters
image : cp.ndarray

RGB image to perform stain extraction on. Intensities should typically be within unsigned 8-bit integer intensity range ([0, 255]) when image_type == "intensity".

source_intensity : float, optional

Transmitted light intensity. The algorithm will clip image intensities above the specified source_intensity and then normalize by source_intensity so that image intensities are <= 1.0. Only used when image_type==”intensity”.

alpha : float, optional

Algorithm parameter controlling the [alpha, 100 - alpha] percentile range used as a robust [min, max] estimate.

beta : float, optional

Absorbance (optical density) threshold below which to consider pixels as transparent. Transparent pixels are excluded from the estimation.

Returns
stain_coeff : cp.ndarray

Stain attenuation coefficient matrix derived from the image, where the first column corresponds to H, the second column is E and the rows are RGB values.

Notes

The default beta of 0.345 is equivalent to the use of 0.15 in [1]. The difference is due to our use of the natural log instead of a decadic log (log10) when computing the absorbance.

References

1

M. Macenko et al., “A method for normalizing histology slides for quantitative analysis,” 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 1107-1110, doi: 10.1109/ISBI.2009.5193250. http://wwwx.cs.unc.edu/~mn/sites/default/files/macenko2009.pdf
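
Examples

A minimal sketch on synthetic channel-first data (random values stand in for a real H&E-stained image):

>>> import cupy as cp
>>> from cucim.core.operations.color import stain_extraction_pca
>>> img = cp.random.randint(50, 256, (3, 64, 64), dtype=cp.uint8)
>>> stain_coeff = stain_extraction_pca(img, channel_axis=0)
>>> stain_coeff.shape             # rows are RGB, columns are (H, E)
(3, 2)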

expose#

intensity#

cucim.core.operations.intensity.normalize_data(img: Any, norm_constant: float, min_value: float, max_value: float, type: str = 'range') Any#

Apply intensity normalization to the input array. Normalize intensities to the range of [0, norm_constant].

Parameters
img : channel first, cupy.ndarray or numpy.ndarray

Input data of shape (C, H, W). Can also batch process input of shape (N, C, H, W). Can be a numpy.ndarray or cupy.ndarray.

norm_constant: float

The normalization range for the output data: [0, norm_constant].

min_value : float

Minimum intensity value in input data.

max_value : float

Maximum intensity value in input data.

type : {‘range’, ‘atan’}

Type of normalization.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

ValueError

If the input’s original intensity min and max are the same

ValueError

If an unsupported normalization type is specified

Examples

>>> import cucim.core.operations.intensity as its
>>> # input is channel first 3d array
>>> output_array = its.normalize_data(input_arr,
                                      10, 0, 255)
cucim.core.operations.intensity.rand_zoom(img: Any, min_zoom: collections.abc.Sequence[float] | float = 0.9, max_zoom: collections.abc.Sequence[float] | float = 1.1, prob: float = 0.1, whole_batch: bool = False)#

Randomly calls zoom with a random zoom factor.

Parameters
img : channel first, cupy.ndarray or numpy.ndarray

Input data of shape (C, H, W). Can also batch process input of shape (N, C, H, W). Can be a numpy.ndarray or cupy.ndarray.

min_zoom: Minimum zoom factor. Can be a float or a sequence whose length matches the number of spatial axes.

If a float, a random factor is selected from [min_zoom, max_zoom] and applied to all spatial dims to keep the original spatial shape ratio. If a sequence, min_zoom should contain one value for each spatial axis. If 2 values are provided for 3D data, the first value is used for both H & W dims to keep the same zoom ratio.

max_zoom: Maximum zoom factor. Can be a float or a sequence whose length matches the number of spatial axes.

If a float, a random factor is selected from [min_zoom, max_zoom] and applied to all spatial dims to keep the original spatial shape ratio. If a sequence, max_zoom should contain one value for each spatial axis. If 2 values are provided for 3D data, the first value is used for both H & W dims to keep the same zoom ratio.

prob: Probability of zooming.
whole_batch: Flag to apply the transform to the whole batch.

If False, each image in the batch is transformed with its own random factor. If True, the entire batch is transformed with the same random factor.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.intensity as its
>>> # input is channel first 3d array
>>> output_array = its.rand_zoom(input_arr)
cucim.core.operations.intensity.scale_intensity_range(img: Any, b_max: float, b_min: float, a_max: float, a_min: float, clip: bool = False) Any#

Apply intensity scaling to the input array. Scaling from [a_min, a_max] to [b_min, b_max] with clip option.

Parameters
img : channel first, cupy.ndarray or numpy.ndarray

Input data of shape (C, H, W). Can also batch process input of shape (N, C, H, W). Can be a numpy.ndarray or cupy.ndarray.

b_min : float

intensity target range min.

b_max : float

intensity target range max.

a_min : float

intensity original range min.

a_max : float

intensity original range max.

clip : bool

Whether to clip the output values after scaling.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

ValueError

If the input’s original intensity min and max are the same

Examples

>>> import cucim.core.operations.intensity as its
>>> # input is channel first 3d array
>>> output_array = its.scale_intensity_range(input_arr,
                                             0.0, 255.0,
                                             -1.0, 1.0, False)
cucim.core.operations.intensity.zoom(img: Any, zoom_factor: Sequence[float])#

Zooms an ND image.

Parameters
img : channel first, cupy.ndarray or numpy.ndarray

Input data of shape (C, H, W). Can also batch process input of shape (N, C, H, W). Can be a numpy.ndarray or cupy.ndarray.

zoom_factor: Sequence[float]

The zoom factor along the spatial axes. Zoom factor should contain one value for each spatial axis.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.intensity as its
>>> # input is channel first 3d array
>>> output_array = its.zoom(input_arr,[1.1,1.1])

morphology#

cucim.core.operations.morphology.distance_transform_edt(image, sampling=None, return_distances=True, return_indices=False, distances=None, indices=None, *, block_params=None, float64_distances=False)#

Exact Euclidean distance transform.

This function calculates the distance transform of the input by replacing each foreground (non-zero) element with its shortest distance to the background (any zero-valued element).

In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element to each foreground element is returned in a separate array.

Parameters
image : array_like

Input data to transform. Can be any type but will be converted into binary: 1 wherever image equates to True, 0 elsewhere.

sampling : float, or sequence of float, optional

Spacing of elements along each dimension. If a sequence, must be of length equal to the image rank; if a single number, this is used for all axes. If not specified, a grid spacing of unity is implied.

return_distances : bool, optional

Whether to calculate the distance transform.

return_indices : bool, optional

Whether to calculate the feature transform.

distances : cupy.ndarray, optional

An output array to store the calculated distance transform, instead of returning it. return_distances must be True. It must be the same shape as image. Should have dtype cp.float32 if float64_distances is False, otherwise it should be cp.float64.

indices : cupy.ndarray, optional

An output array to store the calculated feature transform, instead of returning it. return_indices must be True. Its shape must be (image.ndim,) + image.shape. Its dtype must be a signed or unsigned integer type of at least 16 bits in 2D or 32 bits in 3D.

Returns
distances : cupy.ndarray, optional

The calculated distance transform. Returned only when return_distances is True and distances is not supplied. It will have the same shape as image. Will have dtype cp.float64 if float64_distances is True, otherwise it will have dtype cp.float32.

indices : ndarray, optional

The calculated feature transform. It has an image-shaped array for each dimension of the image. See example below. Returned only when return_indices is True and indices is not supplied.

Other Parameters
block_params : 3-tuple of int

The m1, m2, m3 algorithm parameters as described in [2]. If None, suitable defaults will be chosen. Note: This parameter is specific to cuCIM and does not exist in SciPy.

float64_distances : bool, optional

If True, use double precision in the distance computation (to match SciPy behavior). Otherwise, single precision will be used for efficiency. Note: This parameter is specific to cuCIM and does not exist in SciPy.

Notes

The Euclidean distance transform gives values of the Euclidean distance.

\[y_i = \sqrt{\sum_{i}^{n} (x[i] - b[i])^2}\]

where \(b[i]\) is the background point (value 0) with the smallest Euclidean distance to input points \(x[i]\), and \(n\) is the number of dimensions.

Note that the indices output may differ from the one given by scipy.ndimage.distance_transform_edt() in the case of input pixels that are equidistant from multiple background points.

The parallel banding algorithm implemented here was originally described in [1]. The kernels used here correspond to the revised PBA+ implementation that is described on the author’s website [2]. The source code of the author’s PBA+ implementation is available at [3].

References

1

Thanh-Tung Cao, Ke Tang, Anis Mohamed, and Tiow-Seng Tan. 2010. Parallel Banding Algorithm to compute exact distance transform with the GPU. In Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games (I3D ’10). Association for Computing Machinery, New York, NY, USA, 83–90. DOI:https://doi.org/10.1145/1730804.1730818

2

https://www.comp.nus.edu.sg/~tants/pba.html

3

orzzzjq/Parallel-Banding-Algorithm-plus (https://github.com/orzzzjq/Parallel-Banding-Algorithm-plus)

Examples

>>> import cupy as cp
>>> from cucim.core.operations import morphology
>>> a = cp.array(([0,1,1,1,1],
...               [0,0,1,1,1],
...               [0,1,1,1,1],
...               [0,1,1,1,0],
...               [0,1,1,0,0]))
>>> morphology.distance_transform_edt(a)
array([[ 0.    ,  1.    ,  1.4142,  2.2361,  3.    ],
       [ 0.    ,  0.    ,  1.    ,  2.    ,  2.    ],
       [ 0.    ,  1.    ,  1.4142,  1.4142,  1.    ],
       [ 0.    ,  1.    ,  1.4142,  1.    ,  0.    ],
       [ 0.    ,  1.    ,  1.    ,  0.    ,  0.    ]])

With a sampling of 2 units along x, 1 along y:

>>> morphology.distance_transform_edt(a, sampling=[2,1])
array([[ 0.    ,  1.    ,  2.    ,  2.8284,  3.6056],
       [ 0.    ,  0.    ,  1.    ,  2.    ,  3.    ],
       [ 0.    ,  1.    ,  2.    ,  2.2361,  2.    ],
       [ 0.    ,  1.    ,  2.    ,  1.    ,  0.    ],
       [ 0.    ,  1.    ,  1.    ,  0.    ,  0.    ]])

Asking for indices as well:

>>> edt, inds = morphology.distance_transform_edt(a, return_indices=True)
>>> inds
array([[[0, 0, 1, 1, 3],
        [1, 1, 1, 1, 3],
        [2, 2, 1, 3, 3],
        [3, 3, 4, 4, 3],
        [4, 4, 4, 4, 4]],
       [[0, 0, 1, 1, 4],
        [0, 1, 1, 1, 4],
        [0, 0, 1, 4, 4],
        [0, 0, 3, 3, 4],
        [0, 0, 3, 3, 4]]])

spatial#

cucim.core.operations.spatial.image_flip(img: Any, spatial_axis: ()) Any#

Shape-preserving order reversal of elements in the input array along the given spatial axes.

Parameters
img : cupy.ndarray or numpy.ndarray

Input data. Can be numpy.ndarray or cupy.ndarray

spatial_axis : tuple

Spatial axes along which to flip the input array.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.spatial as spt
>>> # input is channel first 3d array
>>> output_array = spt.image_flip(input_arr, (1, 2))
cucim.core.operations.spatial.image_rotate_90(img: Any, k: int, spatial_axis: ()) Any#

Rotate the input array by 90 degrees in the plane specified by the given spatial axes.

Parameters
img : cupy.ndarray or numpy.ndarray

Input data. Can be numpy.ndarray or cupy.ndarray

k : int

Number of times to rotate by 90 degrees.

spatial_axis : tuple

Spatial axes along which to rotate the input array by 90 degrees.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.spatial as spt
>>> # input is channel first 3d array
>>> output_array = spt.image_rotate_90(input_arr,1,(1,2))
cucim.core.operations.spatial.rand_image_flip(img: Any, spatial_axis: (), prob: float = 0.1, whole_batch: bool = False) Any#

Randomly flips the image along the given spatial axes.

Parameters
img : cupy.ndarray or numpy.ndarray

Input data. Can be numpy.ndarray or cupy.ndarray

prob: Probability of flipping.
spatial_axis : tuple

Spatial axes along which to flip the input array.

whole_batch: Flag to apply the transform to the whole batch.

If False, each image in the batch is transformed with its own random outcome. If True, the entire batch is transformed with the same random outcome.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.spatial as spt
>>> # input is channel first 3d array
>>> output_array = spt.rand_image_flip(input_arr,spatial_axis=(1,2))
cucim.core.operations.spatial.rand_image_rotate_90(img: Any, spatial_axis: (), prob: float = 0.1, max_k: int = 3, whole_batch: bool = False) Any#

With probability prob, input arrays are rotated by 90 degrees in the plane specified by spatial_axis.

Parameters
img : cupy.ndarray or numpy.ndarray

Input data. Can be numpy.ndarray or cupy.ndarray

prob: Probability of rotating.

(Default 0.1, i.e., with 10% probability a rotated array is returned.)

max_k: Maximum number of rotations.

The number of rotations is sampled from np.random.randint(max_k) + 1 (default 3).

spatial_axis : tuple

Spatial axes along which to rotate the input array by 90 degrees.

whole_batch: Flag to apply the transform to the whole batch.

If False, each image in the batch is transformed with its own random outcome. If True, the entire batch is transformed with the same random outcome.

Returns
out : cupy.ndarray or numpy.ndarray

Output data. Same dimensions and type as input.

Raises
TypeError

If input ‘img’ is not cupy.ndarray or numpy.ndarray

Examples

>>> import cucim.core.operations.spatial as spt
>>> # input is channel first 3d array
>>> output_array = spt.rand_image_rotate_90(input_arr, spatial_axis=(1, 2))

skimage Submodules#

color#

cucim.skimage.color.combine_stains(stains, conv_matrix, *, channel_axis=-1)#

Stain to RGB color space conversion.

Parameters
stains : (…, C=3, …) array_like

The image in stain color space. By default, the final dimension denotes channels.

conv_matrix : ndarray

The stain separation matrix as described by G. Landini [1].

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If stains is not at least 2-D with shape (…, C=3, …).

Notes

Stain combination matrices available in the color module and their respective colorspace:

  • rgb_from_hed: Hematoxylin + Eosin + DAB

  • rgb_from_hdx: Hematoxylin + DAB

  • rgb_from_fgx: Feulgen + Light Green

  • rgb_from_bex: Giemsa stain : Methyl Blue + Eosin

  • rgb_from_rbd: FastRed + FastBlue + DAB

  • rgb_from_gdx: Methyl Green + DAB

  • rgb_from_hax: Hematoxylin + AEC

  • rgb_from_bro: Blue matrix Anilline Blue + Red matrix Azocarmine + Orange matrix Orange-G

  • rgb_from_bpx: Methyl Blue + Ponceau Fuchsin

  • rgb_from_ahx: Alcian Blue + Hematoxylin

  • rgb_from_hpx: Hematoxylin + PAS

References

1

https://web.archive.org/web/20160624145052/http://www.mecourse.com/landinig/software/cdeconv/cdeconv.html

2

A. C. Ruifrok and D. A. Johnston, “Quantification of histochemical staining by color deconvolution,” Anal. Quant. Cytol. Histol., vol. 23, no. 4, pp. 291–299, Aug. 2001.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import (separate_stains, combine_stains,
...                                    hdx_from_rgb, rgb_from_hdx)
>>> ihc = cp.array(data.immunohistochemistry())
>>> ihc_hdx = separate_stains(ihc, hdx_from_rgb)
>>> ihc_rgb = combine_stains(ihc_hdx, rgb_from_hdx)
cucim.skimage.color.convert_colorspace(arr, fromspace, tospace, *, channel_axis=-1)#

Convert an image array to a new color space.

Valid color spaces are:

‘RGB’, ‘HSV’, ‘RGB CIE’, ‘XYZ’, ‘YUV’, ‘YIQ’, ‘YPbPr’, ‘YCbCr’, ‘YDbDr’

Parameters
arr : (…, C=3, …) array_like

The image to convert. By default, the final dimension denotes channels.

fromspace : str

The color space to convert from. Can be specified in lower case.

tospace : str

The color space to convert to. Can be specified in lower case.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The converted image. Same dimensions as input.

Raises
ValueError

If fromspace is not a valid color space

ValueError

If tospace is not a valid color space

Notes

Conversion is performed through the “central” RGB color space, i.e. conversion from XYZ to HSV is implemented as XYZ -> RGB -> HSV instead of directly.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import convert_colorspace
>>> img = cp.array(data.astronaut())
>>> img_hsv = convert_colorspace(img, 'RGB', 'HSV')
cucim.skimage.color.deltaE_cie76(lab1, lab2, channel_axis=-1)#

Euclidean distance between two points in Lab color space

Parameters
lab1 : array_like

reference color (Lab colorspace)

lab2 : array_like

comparison color (Lab colorspace)

channel_axis : int, optional

This parameter indicates which axis of the arrays corresponds to channels.

Returns
dE : array_like

distance between colors lab1 and lab2

References

1

https://en.wikipedia.org/wiki/Color_difference

2

A. R. Robertson, “The CIE 1976 color-difference formulae,” Color Res. Appl. 2, 7-11 (1977).
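
Examples

A minimal sketch comparing two colors (the input values are chosen for illustration):

>>> import cupy as cp
>>> from cucim.skimage.color import rgb2lab, deltaE_cie76
>>> red = rgb2lab(cp.asarray([[[1.0, 0.0, 0.0]]]))
>>> blue = rgb2lab(cp.asarray([[[0.0, 0.0, 1.0]]]))
>>> deltaE_cie76(red, blue).shape
(1, 1)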

cucim.skimage.color.deltaE_ciede2000(lab1, lab2, kL=1, kC=1, kH=1, *, channel_axis=-1)#

Color difference as given by the CIEDE 2000 standard.

CIEDE 2000 is a major revision of CIEDE94. The perceptual calibration is largely based on experience with automotive paint on smooth surfaces.

Parameters
lab1 : array_like

reference color (Lab colorspace)

lab2 : array_like

comparison color (Lab colorspace)

kL : float (range), optional

lightness scale factor, 1 for “acceptably close”; 2 for “imperceptible”; see deltaE_cmc

kC : float (range), optional

chroma scale factor, usually 1

kH : float (range), optional

hue scale factor, usually 1

channel_axis : int, optional

This parameter indicates which axis of the arrays corresponds to channels.

Returns
deltaE : array_like

The distance between lab1 and lab2

Notes

CIEDE 2000 assumes parametric weighting factors for the lightness, chroma, and hue (kL, kC, kH respectively). These default to 1.

References

1

https://en.wikipedia.org/wiki/Color_difference

2

http://www.ece.rochester.edu/~gsharma/ciede2000/ciede2000noteCRNA.pdf DOI:10.1364/AO.33.008069

3

M. Melgosa, J. Quesada, and E. Hita, “Uniformity of some recent color metrics tested with an accurate color-difference tolerance dataset,” Appl. Opt. 33, 8069-8077 (1994).

cucim.skimage.color.deltaE_ciede94(lab1, lab2, kH=1, kC=1, kL=1, k1=0.045, k2=0.015, *, channel_axis=-1)#

Color difference according to CIEDE 94 standard

Accommodates perceptual non-uniformities through the use of application specific scale factors (kH, kC, kL, k1, and k2).

Parameters
lab1 : array_like

reference color (Lab colorspace)

lab2 : array_like

comparison color (Lab colorspace)

kH : float, optional

Hue scale

kC : float, optional

Chroma scale

kL : float, optional

Lightness scale

k1 : float, optional

first scale parameter

k2 : float, optional

second scale parameter

channel_axis : int, optional

This parameter indicates which axis of the arrays corresponds to channels.

Returns
dE : array_like

color difference between lab1 and lab2

Notes

deltaE_ciede94 is not symmetric with respect to lab1 and lab2. CIEDE94 defines the scales for the lightness, hue, and chroma in terms of the first color. Consequently, the first color should be regarded as the “reference” color.

kL, k1, k2 depend on the application and default to the values suggested for graphic arts:

Parameter   Graphic Arts   Textiles
kL          1.000          2.000
k1          0.045          0.048
k2          0.015          0.014

References

1

https://en.wikipedia.org/wiki/Color_difference

2

http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CIE94.html

cucim.skimage.color.deltaE_cmc(lab1, lab2, kL=1, kC=1, *, channel_axis=-1)#

Color difference from the CMC l:c standard.

This color difference was developed by the Colour Measurement Committee (CMC) of the Society of Dyers and Colourists (United Kingdom). It is intended for use in the textile industry.

The scale factors kL, kC set the weight given to differences in lightness and chroma relative to differences in hue. The usual values are kL=2, kC=1 for “acceptability” and kL=1, kC=1 for “imperceptibility”. Colors with dE > 1 are “different” for the given scale factors.

Parameters
lab1 : array_like

reference color (Lab colorspace)

lab2 : array_like

comparison color (Lab colorspace)

channel_axis : int, optional

This parameter indicates which axis of the arrays corresponds to channels.

Returns
dE : array_like

distance between colors lab1 and lab2

Notes

deltaE_cmc defines the scales for the lightness, hue, and chroma in terms of the first color. Consequently, deltaE_cmc(lab1, lab2) != deltaE_cmc(lab2, lab1)

References

1

https://en.wikipedia.org/wiki/Color_difference

2

http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CIE94.html

3

F. J. J. Clarke, R. McDonald, and B. Rigg, “Modification to the JPC79 colour-difference formula,” J. Soc. Dyers Colour. 100, 128-132 (1984).

cucim.skimage.color.gray2rgb(image, *, channel_axis=-1)#

Create an RGB representation of a gray-level image.

Parameters
image : array_like

Input image.

channel_axis : int, optional

This parameter indicates which axis of the output array will correspond to channels.

Returns
rgb : (…, C=3, …) ndarray

RGB image. A new dimension of length 3 is added to input image.

Notes

If the input is a 1-dimensional image of shape (M, ), the output will be shape (M, 3).
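
Examples

A minimal sketch (synthetic grayscale input):

>>> import cupy as cp
>>> from cucim.skimage.color import gray2rgb
>>> img = cp.zeros((8, 8))
>>> gray2rgb(img).shape
(8, 8, 3)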

cucim.skimage.color.gray2rgba(image, alpha=None, *, channel_axis=-1, check_alpha=True)#

Create a RGBA representation of a gray-level image.

Parameters
image : array_like

Input image.

alpha : array_like, optional

Alpha channel of the output image. It may be a scalar or an array that can be broadcast to image. If not specified it is set to the maximum limit corresponding to the image dtype.

channel_axis : int, optional

This parameter indicates which axis of the output array will correspond to channels.

check_alpha : bool, optional

Checking for unsafe casting of alpha adds overhead, so can be disabled on request. Note: This kwarg is not present in scikit-image (it always checks the alpha array).

Returns
rgba : ndarray

RGBA image. A new dimension of length 4 is added to input image shape.

cucim.skimage.color.hed2rgb(hed, *, channel_axis=-1)#

Haematoxylin-Eosin-DAB (HED) to RGB color space conversion.

Parameters
hed : (…, C=3, …) array_like

The image in the HED color space. By default, the final dimension denotes channels.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in RGB. Same dimensions as input.

Raises
ValueError

If hed is not at least 2-D with shape (…, C=3, …).

References

1

A. C. Ruifrok and D. A. Johnston, “Quantification of histochemical staining by color deconvolution.,” Analytical and quantitative cytology and histology / the International Academy of Cytology [and] American Society of Cytology, vol. 23, no. 4, pp. 291-9, Aug. 2001.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2hed, hed2rgb
>>> ihc = cp.array(data.immunohistochemistry())
>>> ihc_hed = rgb2hed(ihc)
>>> ihc_rgb = hed2rgb(ihc_hed)
cucim.skimage.color.hsv2rgb(hsv, *, channel_axis=-1)#

HSV to RGB color space conversion.

Parameters
hsv : (…, 3, …) array_like

The image in HSV format. By default, the final dimension denotes channels.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, 3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If hsv is not at least 2-D with shape (…, 3, …).

Notes

Conversion between RGB and HSV color spaces results in some loss of precision, due to integer arithmetic and rounding [1].

References

1

https://en.wikipedia.org/wiki/HSL_and_HSV

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2hsv, hsv2rgb
>>> img = cp.array(data.astronaut())
>>> img_hsv = rgb2hsv(img)
>>> img_rgb = hsv2rgb(img_hsv)
cucim.skimage.color.lab2lch(lab, *, channel_axis=-1)#

CIE-LAB to CIE-LCH color space conversion.

LCH is the cylindrical representation of the LAB (Cartesian) colorspace

Parameters
lab : (…, C=3, …) array_like

The N-D image in CIE-LAB format. The last (N+1-th) dimension must have at least 3 elements, corresponding to the L, a, and b color channels. Subsequent elements are copied.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in LCH format, in a N-D array with same shape as input lab.

Raises
ValueError

If lab does not have at least 3 color channels (i.e. l, a, b).

Notes

The hue is expressed as an angle in the range (0, 2*pi).

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2lab, lab2lch
>>> img = cp.array(data.astronaut())
>>> img_lab = rgb2lab(img)
>>> img_lch = lab2lch(img_lab)
cucim.skimage.color.lab2rgb(lab, illuminant='D65', observer='2', *, channel_axis=-1)#

Convert image in CIE-LAB to sRGB color space.

Parameters
lab : (…, C=3, …) array_like

The input image in CIE-LAB color space. Unless channel_axis is set, the final dimension denotes the CIE-LAB channels. The L* values range from 0 to 100; the a* and b* values range from -128 to 127.

illuminant : {“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}, optional

The name of the illuminant (the function is NOT case sensitive).

observer : {“2”, “10”, “R”}, optional

The aperture angle of the observer.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in sRGB color space, of same shape as input.

Raises
ValueError

If lab is not at least 2-D with shape (…, C=3, …).

See also

rgb2lab

Notes

This function uses lab2xyz() and xyz2rgb(). The CIE XYZ tristimulus values are x_ref = 95.047, y_ref = 100., and z_ref = 108.883. See function xyz_tristimulus_values() for a list of supported illuminants.

References

1

https://en.wikipedia.org/wiki/Standard_illuminant

2

https://en.wikipedia.org/wiki/CIELAB_color_space

cucim.skimage.color.lab2xyz(lab, illuminant='D65', observer='2', *, channel_axis=-1)#

Convert image in CIE-LAB to XYZ color space.

Parameters
lab : (…, C=3, …) array_like

The input image in CIE-LAB color space. Unless channel_axis is set, the final dimension denotes the CIE-LAB channels. The L* values range from 0 to 100; the a* and b* values range from -128 to 127.

illuminant : {“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}, optional

The name of the illuminant (the function is NOT case sensitive).

observer : {“2”, “10”, “R”}, optional

The aperture angle of the observer.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in XYZ color space, of same shape as input.

Raises
ValueError

If lab is not at least 2-D with shape (…, C=3, …).

ValueError

If either the illuminant or the observer angle are not supported or unknown.

UserWarning

If any of the pixels are invalid (Z < 0).

See also

xyz2lab

Notes

The CIE XYZ tristimulus values are x_ref = 95.047, y_ref = 100., and z_ref = 108.883. See function xyz_tristimulus_values() for a list of supported illuminants.

References

1

http://www.easyrgb.com/en/math.php

2

https://en.wikipedia.org/wiki/CIELAB_color_space

cucim.skimage.color.label2rgb(label, image=None, colors=None, alpha=0.3, bg_label=0, bg_color=(0, 0, 0), image_alpha=1, kind='overlay', *, saturation=0, channel_axis=-1)#

Return an RGB image where color-coded labels are painted over the image.

Parameters
label : ndarray

Integer array of labels with the same shape as image.

image : ndarray, optional

Image used as underlay for labels. It should have the same shape as labels, optionally with an additional RGB (channels) axis. If image is an RGB image, it is converted to grayscale before coloring.

colors : list, optional

List of colors. If the number of labels exceeds the number of colors, then the colors are cycled.

alpha : float [0, 1], optional

Opacity of colorized labels. Ignored if image is None.

bg_label : int, optional

Label that’s treated as the background. If bg_label is specified, bg_color is None, and kind is overlay, background is not painted by any colors.

bg_color : str or array, optional

Background color. Must be a name in cucim.skimage.color.color_dict or RGB float values between [0, 1].

image_alpha : float [0, 1], optional

Opacity of the image.

kind : string, one of {‘overlay’, ‘avg’}

The kind of color image desired. ‘overlay’ cycles over defined colors and overlays the colored labels over the original image. ‘avg’ replaces each labeled segment with its average color, for a stained-class or pastel painting appearance.

saturation : float [0, 1], optional

Parameter to control the saturation applied to the original image between fully saturated (original RGB, saturation=1) and fully unsaturated (grayscale, saturation=0). Only applies when kind=’overlay’.

channel_axis : int, optional

This parameter indicates which axis of the output array will correspond to channels. If image is provided, this must also match the axis of image that corresponds to channels.

Returns
result : array of float, shape (M, N, 3)

The result of blending a cycling colormap (colors) for each distinct value in label with the image, at a certain alpha value.
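
Examples

A minimal sketch on a synthetic label image (no underlay image is given, so bg_color fills the background):

>>> import cupy as cp
>>> from cucim.skimage.color import label2rgb
>>> label = cp.zeros((4, 4), dtype=cp.int32)
>>> label[:2, :] = 1
>>> rgb = label2rgb(label, bg_label=0)
>>> rgb.shape
(4, 4, 3)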

cucim.skimage.color.lch2lab(lch, *, channel_axis=-1)#

CIE-LCH to CIE-LAB color space conversion.

LCH is the cylindrical representation of the LAB (Cartesian) colorspace

Parameters
lch : (…, C=3, …) array_like

The N-D image in CIE-LCH format. The last (N+1-th) dimension must have at least 3 elements, corresponding to the L, a, and b color channels. Subsequent elements are copied.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in LAB format, with same shape as input lch.

Raises
ValueError

If lch does not have at least 3 color channels (i.e. l, c, h).

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2lab, lch2lab
>>> img = cp.array(data.astronaut())
>>> img_lab = rgb2lab(img)
>>> img_lch = lab2lch(img_lab)
>>> img_lab2 = lch2lab(img_lch)
cucim.skimage.color.luv2rgb(luv, *, channel_axis=-1)#

Luv to RGB color space conversion.

Parameters
luv : (…, C=3, …) array_like

The image in CIE Luv format. By default, the final dimension denotes channels.

Returns
out : (…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If luv is not at least 2-D with shape (…, C=3, …).

Notes

This function uses luv2xyz and xyz2rgb.

cucim.skimage.color.luv2xyz(luv, illuminant='D65', observer='2', *, channel_axis=-1)#

CIE-Luv to XYZ color space conversion.

Parameters
luv : (…, C=3, …) array_like

The image in CIE-Luv format. By default, the final dimension denotes channels.

illuminant : {“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}, optional

The name of the illuminant (the function is NOT case sensitive).

observer : {“2”, “10”, “R”}, optional

The aperture angle of the observer.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in XYZ format. Same dimensions as input.

Raises
ValueError

If luv is not at least 2-D with shape (…, C=3, …).

ValueError

If either the illuminant or the observer angle are not supported or unknown.

Notes

XYZ conversion weights use observer=2A. Reference whitepoint for D65 Illuminant, with XYZ tristimulus values of (95.047, 100., 108.883). See function xyz_tristimulus_values() for a list of supported illuminants.

References

1

http://www.easyrgb.com/en/math.php

2

https://en.wikipedia.org/wiki/CIELUV

cucim.skimage.color.rgb2gray(rgb, *, channel_axis=-1)#

Compute luminance of an RGB image.

Parameters
rgb : (…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

Returns
out : ndarray

The luminance image - an array which is the same size as the input array, but with the channel dimension removed.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

The weights used in this conversion are calibrated for contemporary CRT phosphors:

Y = 0.2125 R + 0.7154 G + 0.0721 B

If there is an alpha channel present, it is ignored.

References

1

http://poynton.ca/PDFs/ColorFAQ.pdf

Examples

>>> import cupy as cp
>>> from cucim.skimage.color import rgb2gray
>>> from skimage import data
>>> img = cp.array(data.astronaut())
>>> img_gray = rgb2gray(img)
cucim.skimage.color.rgb2hed(rgb, *, channel_axis=-1)#

RGB to Haematoxylin-Eosin-DAB (HED) color space conversion.

Parameters
rgb : (…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in HED format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

References

1

A. C. Ruifrok and D. A. Johnston, “Quantification of histochemical staining by color deconvolution.,” Analytical and quantitative cytology and histology / the International Academy of Cytology [and] American Society of Cytology, vol. 23, no. 4, pp. 291-9, Aug. 2001.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2hed
>>> ihc = cp.array(data.immunohistochemistry())
>>> ihc_hed = rgb2hed(ihc)
cucim.skimage.color.rgb2hsv(rgb, *, channel_axis=-1)#

RGB to HSV color space conversion.

Parameters
rgb : (…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axis : int, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out : (…, C=3, …) ndarray

The image in HSV format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

Conversion between RGB and HSV color spaces results in some loss of precision, due to integer arithmetic and rounding [1].

References

1

https://en.wikipedia.org/wiki/HSL_and_HSV

Examples

>>> import cupy as cp
>>> from cucim.skimage import color
>>> from skimage import data
>>> img = cp.array(data.astronaut())
>>> img_hsv = color.rgb2hsv(img)
cucim.skimage.color.rgb2lab(rgb, illuminant='D65', observer='2', *, channel_axis=-1)#

Conversion from the sRGB color space (IEC 61966-2-1:1999) to the CIE Lab colorspace under the given illuminant and observer.

Parameters
rgb : (…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

illuminant : {“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}, optional

The name of the illuminant (the function is NOT case sensitive).

observer{“2”, “10”, “R”}, optional

The aperture angle of the observer.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in Lab format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

RGB is a device-dependent color space so, if you use this function, be sure that the image you are analyzing has been mapped to the sRGB color space.

This function uses rgb2xyz and xyz2lab. By default Observer=”2”, Illuminant=”D65”. CIE XYZ tristimulus values x_ref=95.047, y_ref=100., z_ref=108.883. See function xyz_tristimulus_values() for a list of supported illuminants.

References

1

https://en.wikipedia.org/wiki/Standard_illuminant
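
Examples

A minimal usage sketch, following the pattern of the other conversion examples in this module (astronaut is just a convenient sample image):

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2lab
>>> img = cp.array(data.astronaut())
>>> img_lab = rgb2lab(img)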

cucim.skimage.color.rgb2luv(rgb, *, channel_axis=-1)#

RGB to CIE-Luv color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in CIE Luv format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

This function uses rgb2xyz and xyz2luv.

References

1

http://www.easyrgb.com/en/math.php

2

https://en.wikipedia.org/wiki/CIELUV
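
Examples

A minimal usage sketch mirroring the other conversion examples here:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2luv
>>> img = cp.array(data.astronaut())
>>> img_luv = rgb2luv(img)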

cucim.skimage.color.rgb2rgbcie(rgb, *, channel_axis=-1)#

RGB to RGB CIE color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB CIE format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

References

1

https://en.wikipedia.org/wiki/CIE_1931_color_space

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2rgbcie
>>> img = cp.array(data.astronaut())
>>> img_rgbcie = rgb2rgbcie(img)
cucim.skimage.color.rgb2xyz(rgb, *, channel_axis=-1)#

RGB to XYZ color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in XYZ format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

The CIE XYZ color space is derived from the CIE RGB color space. Note however that this function converts from sRGB.

References

1

https://en.wikipedia.org/wiki/CIE_1931_color_space

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2xyz
>>> img = cp.array(data.astronaut())
>>> img_xyz = rgb2xyz(img)
cucim.skimage.color.rgb2ycbcr(rgb, *, channel_axis=-1)#

RGB to YCbCr color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in YCbCr format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

Y is between 16 and 235. This is the color space commonly used by video codecs; it is sometimes incorrectly called “YUV”.

References

1

https://en.wikipedia.org/wiki/YCbCr
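
Examples

A minimal usage sketch in the style of the other conversion examples in this module:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2ycbcr
>>> img = cp.array(data.astronaut())
>>> img_ycbcr = rgb2ycbcr(img)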

cucim.skimage.color.rgb2ydbdr(rgb, *, channel_axis=-1)#

RGB to YDbDr color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in YDbDr format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

This is the color space commonly used by video codecs. It is also the reversible color transform in JPEG2000.

References

1

https://en.wikipedia.org/wiki/YDbDr
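
Examples

A minimal usage sketch, analogous to the other conversion examples here:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2ydbdr
>>> img = cp.array(data.astronaut())
>>> img_ydbdr = rgb2ydbdr(img)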

cucim.skimage.color.rgb2yiq(rgb, *, channel_axis=-1)#

RGB to YIQ color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in YIQ format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).
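
Examples

A minimal usage sketch, following the same pattern as the other conversions in this module:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2yiq
>>> img = cp.array(data.astronaut())
>>> img_yiq = rgb2yiq(img)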

cucim.skimage.color.rgb2ypbpr(rgb, *, channel_axis=-1)#

RGB to YPbPr color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in YPbPr format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

References

1

https://en.wikipedia.org/wiki/YPbPr
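
Examples

A minimal usage sketch in the style of the other conversion examples here:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2ypbpr
>>> img = cp.array(data.astronaut())
>>> img_ypbpr = rgb2ypbpr(img)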

cucim.skimage.color.rgb2yuv(rgb, *, channel_axis=-1)#

RGB to YUV color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in YUV format. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

Y is between 0 and 1. Use YCbCr instead of YUV for the color space commonly used by video codecs, where Y ranges from 16 to 235.

References

1

https://en.wikipedia.org/wiki/YUV
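
Examples

A minimal usage sketch mirroring the other conversion examples in this module:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2yuv
>>> img = cp.array(data.astronaut())
>>> img_yuv = rgb2yuv(img)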

cucim.skimage.color.rgba2rgb(rgba, background=(1, 1, 1), *, channel_axis=-1)#

RGBA to RGB conversion using alpha blending [1].

Parameters
rgba(…, C=4, …) array_like

The image in RGBA format. By default, the final dimension denotes channels.

backgroundarray_like

The color of the background to blend the image with (3 floats between 0 and 1 - the RGB value of the background).

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If rgba is not at least 2-D with shape (…, C=4, …).

References

1

https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending

Examples

>>> import cupy as cp
>>> from cucim.skimage import color
>>> from skimage import data
>>> img_rgba = cp.array(data.logo())
>>> img_rgb = color.rgba2rgb(img_rgba)
cucim.skimage.color.rgbcie2rgb(rgbcie, *, channel_axis=-1)#

RGB CIE to RGB color space conversion.

Parameters
rgbcie(…, C=3, …) array_like

The image in RGB CIE format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If rgbcie is not at least 2-D with shape (…, C=3, …).

References

1

https://en.wikipedia.org/wiki/CIE_1931_color_space

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2rgbcie, rgbcie2rgb
>>> img = cp.array(data.astronaut())
>>> img_rgbcie = rgb2rgbcie(img)
>>> img_rgb = rgbcie2rgb(img_rgbcie)
cucim.skimage.color.separate_stains(rgb, conv_matrix, *, channel_axis=-1)#

RGB to stain color space conversion.

Parameters
rgb(…, C=3, …) array_like

The image in RGB format. By default, the final dimension denotes channels.

conv_matrix: ndarray

The stain separation matrix as described by G. Landini [1].

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in stain color space. Same dimensions as input.

Raises
ValueError

If rgb is not at least 2-D with shape (…, C=3, …).

Notes

Stain separation matrices available in the color module and their respective colorspace:

  • hed_from_rgb: Hematoxylin + Eosin + DAB

  • hdx_from_rgb: Hematoxylin + DAB

  • fgx_from_rgb: Feulgen + Light Green

  • bex_from_rgb: Giemsa stain : Methyl Blue + Eosin

  • rbd_from_rgb: FastRed + FastBlue + DAB

  • gdx_from_rgb: Methyl Green + DAB

  • hax_from_rgb: Hematoxylin + AEC

  • bro_from_rgb: Blue matrix Aniline Blue + Red matrix Azocarmine + Orange matrix Orange-G

  • bpx_from_rgb: Methyl Blue + Ponceau Fuchsin

  • ahx_from_rgb: Alcian Blue + Hematoxylin

  • hpx_from_rgb: Hematoxylin + PAS

This implementation borrows some ideas from DIPlib [2], e.g. the compensation using a small value to avoid log artifacts when calculating the Beer-Lambert law.

References

1

https://web.archive.org/web/20160624145052/http://www.mecourse.com/landinig/software/cdeconv/cdeconv.html

2

DIPlib: https://github.com/DIPlib/diplib

3

A. C. Ruifrok and D. A. Johnston, “Quantification of histochemical staining by color deconvolution,” Anal. Quant. Cytol. Histol., vol. 23, no. 4, pp. 291–299, Aug. 2001.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import separate_stains, hdx_from_rgb
>>> ihc = cp.array(data.immunohistochemistry())
>>> ihc_hdx = separate_stains(ihc, hdx_from_rgb)
cucim.skimage.color.xyz2lab(xyz, illuminant='D65', observer='2', *, channel_axis=-1)#

XYZ to CIE-LAB color space conversion.

Parameters
xyz(…, C=3, …) array_like

The image in XYZ format. By default, the final dimension denotes channels.

illuminant{“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}, optional

The name of the illuminant (the function is NOT case sensitive).

observer{“2”, “10”, “R”}, optional

One of: 2-degree observer, 10-degree observer, or ‘R’ observer as in R function grDevices::convertColor.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in CIE-LAB format. Same dimensions as input.

Raises
ValueError

If xyz is not at least 2-D with shape (…, C=3, …).

ValueError

If either the illuminant or the observer angle is unsupported or unknown.

Notes

By default Observer=”2”, Illuminant=”D65”. CIE XYZ tristimulus values x_ref=95.047, y_ref=100., z_ref=108.883. See function xyz_tristimulus_values() for a list of supported illuminants.

References

1

http://www.easyrgb.com/en/math.php

2

https://en.wikipedia.org/wiki/CIELAB_color_space

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2xyz, xyz2lab
>>> img = cp.array(data.astronaut())
>>> img_xyz = rgb2xyz(img)
>>> img_lab = xyz2lab(img_xyz)
cucim.skimage.color.xyz2luv(xyz, illuminant='D65', observer='2', *, channel_axis=-1)#

XYZ to CIE-Luv color space conversion.

Parameters
xyz(…, C=3, …) array_like

The image in XYZ format. By default, the final dimension denotes channels.

illuminant{“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}, optional

The name of the illuminant (the function is NOT case sensitive).

observer{“2”, “10”, “R”}, optional

The aperture angle of the observer.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in CIE-Luv format. Same dimensions as input.

Raises
ValueError

If xyz is not at least 2-D with shape (…, C=3, …).

ValueError

If either the illuminant or the observer angle is unsupported or unknown.

Notes

By default XYZ conversion weights use observer=2A. Reference whitepoint for D65 Illuminant, with XYZ tristimulus values of (95.047, 100., 108.883). See function xyz_tristimulus_values() for a list of supported illuminants.

References

1

http://www.easyrgb.com/en/math.php

2

https://en.wikipedia.org/wiki/CIELUV

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2xyz, xyz2luv
>>> img = cp.array(data.astronaut())
>>> img_xyz = rgb2xyz(img)
>>> img_luv = xyz2luv(img_xyz)
cucim.skimage.color.xyz2rgb(xyz, *, channel_axis=-1)#

XYZ to RGB color space conversion.

Parameters
xyz(…, C=3, …) array_like

The image in XYZ format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If xyz is not at least 2-D with shape (…, C=3, …).

Notes

The CIE XYZ color space is derived from the CIE RGB color space. Note however that this function converts to sRGB.

References

1

https://en.wikipedia.org/wiki/CIE_1931_color_space

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2xyz, xyz2rgb
>>> img = cp.array(data.astronaut())
>>> img_xyz = rgb2xyz(img)
>>> img_rgb = xyz2rgb(img_xyz)
cucim.skimage.color.xyz_tristimulus_values(*, illuminant, observer, dtype=None)#

Get the CIE XYZ tristimulus values.

Given an illuminant and observer, this function returns the CIE XYZ tristimulus values [2] scaled such that \(Y = 1\).

Parameters
illuminant{“A”, “B”, “C”, “D50”, “D55”, “D65”, “D75”, “E”}

The name of the illuminant (the function is NOT case sensitive).

observer{“2”, “10”, “R”}

One of: 2-degree observer, 10-degree observer, or ‘R’ observer as in R function grDevices::convertColor [3].

dtypenp.dtype, optional

This argument is ignored in the cuCIM implementation of xyz_tristimulus_values since an array is not returned. The output is always a 3-tuple of float.

Returns
values3-tuple of float

Three elements \(X, Y, Z\) containing the CIE XYZ tristimulus values of the given illuminant.

Raises
ValueError

If either the illuminant or the observer angle is unsupported or unknown.

Notes

The return type of this function differs from the one in scikit-image as it always returns a 3-tuple of float rather than an array with a user-specified dtype.

The CIE XYZ tristimulus values are calculated from \(x, y\) [1], using the formula

\[X = x / y\]
\[Y = 1\]
\[Z = (1 - x - y) / y\]

The only exception is the illuminant “D65” with aperture angle 2° for backward-compatibility reasons.

References

1

https://en.wikipedia.org/wiki/Standard_illuminant#White_points_of_standard_illuminants

2

https://en.wikipedia.org/wiki/CIE_1931_color_space#Meaning_of_X,_Y_and_Z

3

https://www.rdocumentation.org/packages/grDevices/versions/3.6.2/topics/convertColor

Examples

Get the CIE XYZ tristimulus values for a “D65” illuminant for a 10 degree field of view

>>> from cucim.skimage.color import xyz_tristimulus_values
>>> xyz_tristimulus_values(illuminant="D65", observer="10")
array([0.94809668, 1.        , 1.07305136])
cucim.skimage.color.ycbcr2rgb(ycbcr, *, channel_axis=-1)#

YCbCr to RGB color space conversion.

Parameters
ycbcr(…, C=3, …) array_like

The image in YCbCr format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If ycbcr is not at least 2-D with shape (…, C=3, …).

Notes

Y is between 16 and 235. This is the color space commonly used by video codecs; it is sometimes incorrectly called “YUV”.

References

1

https://en.wikipedia.org/wiki/YCbCr
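
Examples

A minimal round-trip sketch, analogous to the rgbcie2rgb example above:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2ycbcr, ycbcr2rgb
>>> img = cp.array(data.astronaut())
>>> img_ycbcr = rgb2ycbcr(img)
>>> img_rgb = ycbcr2rgb(img_ycbcr)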

cucim.skimage.color.ydbdr2rgb(ydbdr, *, channel_axis=-1)#

YDbDr to RGB color space conversion.

Parameters
ydbdr(…, C=3, …) array_like

The image in YDbDr format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If ydbdr is not at least 2-D with shape (…, C=3, …).

Notes

This is the color space commonly used by video codecs, also called the reversible color transform in JPEG2000.

References

1

https://en.wikipedia.org/wiki/YDbDr
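
Examples

A minimal round-trip sketch in the style of the other examples in this module:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2ydbdr, ydbdr2rgb
>>> img = cp.array(data.astronaut())
>>> img_ydbdr = rgb2ydbdr(img)
>>> img_rgb = ydbdr2rgb(img_ydbdr)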

cucim.skimage.color.yiq2rgb(yiq, *, channel_axis=-1)#

YIQ to RGB color space conversion.

Parameters
yiq(…, C=3, …) array_like

The image in YIQ format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If yiq is not at least 2-D with shape (…, C=3, …).
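
Examples

A minimal round-trip sketch, following the pattern of the other conversion examples here:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2yiq, yiq2rgb
>>> img = cp.array(data.astronaut())
>>> img_yiq = rgb2yiq(img)
>>> img_rgb = yiq2rgb(img_yiq)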

cucim.skimage.color.ypbpr2rgb(ypbpr, *, channel_axis=-1)#

YPbPr to RGB color space conversion.

Parameters
ypbpr(…, C=3, …) array_like

The image in YPbPr format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If ypbpr is not at least 2-D with shape (…, C=3, …).

References

1

https://en.wikipedia.org/wiki/YPbPr
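
Examples

A minimal round-trip sketch, analogous to the other examples in this module:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2ypbpr, ypbpr2rgb
>>> img = cp.array(data.astronaut())
>>> img_ypbpr = rgb2ypbpr(img)
>>> img_rgb = ypbpr2rgb(img_ypbpr)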

cucim.skimage.color.yuv2rgb(yuv, *, channel_axis=-1)#

YUV to RGB color space conversion.

Parameters
yuv(…, C=3, …) array_like

The image in YUV format. By default, the final dimension denotes channels.

channel_axisint, optional

This parameter indicates which axis of the array corresponds to channels.

Returns
out(…, C=3, …) ndarray

The image in RGB format. Same dimensions as input.

Raises
ValueError

If yuv is not at least 2-D with shape (…, C=3, …).

References

1

https://en.wikipedia.org/wiki/YUV
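
Examples

A minimal round-trip sketch in the style of the other conversion examples here:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.color import rgb2yuv, yuv2rgb
>>> img = cp.array(data.astronaut())
>>> img_yuv = rgb2yuv(img)
>>> img_rgb = yuv2rgb(img_yuv)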

data#

cucim.skimage.data.binary_blobs(length=512, blob_size_fraction=0.1, n_dim=2, volume_fraction=0.5, rng=None, *, seed=<DEPRECATED>)#

Generate synthetic binary image with several rounded blob-like objects.

Parameters
lengthint, optional

Linear size of output image.

blob_size_fractionfloat, optional

Typical linear size of blob, as a fraction of length, should be smaller than 1.

n_dimint, optional

Number of dimensions of output image.

volume_fractionfloat, default 0.5

Fraction of image pixels covered by the blobs (where the output is 1). Should be in [0, 1].

rng{cupy.random.Generator, int}, optional

Pseudo-random number generator. By default, a PCG64 generator is used (see cupy.random.default_rng()). If rng is an int, it is used to seed the generator.

Returns
blobsndarray of bools

Output binary image.

Other Parameters
seedDEPRECATED

Deprecated in favor of rng.

Deprecated since version 23.12.00.

Notes

Warning: CuPy does not generate the same sequence of random numbers as NumPy, so using a specific rng here will not reproduce the pattern produced by the scikit-image implementation.

The behavior for a given random seed may also change across CuPy major versions. See: https://docs.cupy.dev/en/stable/reference/random.html

Examples

>>> from cucim.skimage import data
>>> # tiny size (5, 5)
>>> blobs = data.binary_blobs(length=5, blob_size_fraction=0.2)
>>> # larger size
>>> blobs = data.binary_blobs(length=256, blob_size_fraction=0.1)
>>> # Finer structures
>>> blobs = data.binary_blobs(length=256, blob_size_fraction=0.05)
>>> # Blobs cover a smaller volume fraction of the image
>>> blobs = data.binary_blobs(length=256, volume_fraction=0.3)

exposure#

cucim.skimage.exposure.adjust_gamma(image, gamma=1, gain=1)#

Performs Gamma Correction on the input image.

Also known as Power Law Transform. This function transforms the input image pixelwise according to the equation O = I**gamma after scaling each pixel to the range 0 to 1.

Parameters
imagendarray

Input image.

gammafloat, optional

Non negative real number. Default value is 1.

gainfloat, optional

The constant multiplier. Default value is 1.

Returns
outndarray

Gamma corrected output image.

See also

adjust_log

Notes

For gamma greater than 1, the histogram shifts toward the left and the output image is darker than the input image.

For gamma less than 1, the histogram shifts toward the right and the output image is brighter than the input image.

References

1

https://en.wikipedia.org/wiki/Gamma_correction

Examples

>>> from skimage import data
>>> from cucim.skimage import exposure, img_as_float
>>> image = img_as_float(cp.array(data.moon()))
>>> gamma_corrected = exposure.adjust_gamma(image, 2)
>>> # Output is darker for gamma > 1
>>> image.mean() > gamma_corrected.mean()
array(True)
cucim.skimage.exposure.adjust_log(image, gain=1, inv=False)#

Performs Logarithmic correction on the input image.

This function transforms the input image pixelwise according to the equation O = gain*log(1 + I) after scaling each pixel to the range 0 to 1.

For inverse logarithmic correction, the equation is O = gain*(2**I - 1).

Parameters
imagendarray

Input image.

gainfloat, optional

The constant multiplier. Default value is 1.

invbool, optional

If True, performs inverse logarithmic correction; otherwise, the correction is logarithmic. Defaults to False.

Returns
outndarray

Logarithm corrected output image.

See also

adjust_gamma

References

1

http://www.ece.ucsb.edu/Faculty/Manjunath/courses/ece178W03/EnhancePart1.pdf
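
Examples

A minimal usage sketch in the style of the adjust_gamma example above:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import exposure, img_as_float
>>> image = img_as_float(cp.array(data.moon()))
>>> log_corrected = exposure.adjust_log(image, 1)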

cucim.skimage.exposure.adjust_sigmoid(image, cutoff=0.5, gain=10, inv=False)#

Performs Sigmoid Correction on the input image.

Also known as Contrast Adjustment. This function transforms the input image pixelwise according to the equation O = 1/(1 + exp(gain*(cutoff - I))) after scaling each pixel to the range 0 to 1.

Parameters
imagendarray

Input image.

cutofffloat, optional

Cutoff of the sigmoid function that shifts the characteristic curve in horizontal direction. Default value is 0.5.

gainfloat, optional

The constant multiplier in exponential’s power of sigmoid function. Default value is 10.

invbool, optional

If True, returns the negative sigmoid correction. Defaults to False.

Returns
outndarray

Sigmoid corrected output image.

See also

adjust_gamma

References

1

Gustav J. Braun, “Image Lightness Rescaling Using Sigmoidal Contrast Enhancement Functions”, http://markfairchild.org/PDFs/PAP07.pdf
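
Examples

A minimal usage sketch, following the pattern of the other exposure examples (the cutoff and gain values shown are simply the defaults):

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import exposure, img_as_float
>>> image = img_as_float(cp.array(data.moon()))
>>> sigmoid_corrected = exposure.adjust_sigmoid(image, cutoff=0.5, gain=10)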

cucim.skimage.exposure.cumulative_distribution(image, nbins=256)#

Return cumulative distribution function (cdf) for the given image.

Parameters
imagearray

Image array.

nbinsint, optional

Number of bins for image histogram.

Returns
img_cdfarray

Values of cumulative distribution function.

bin_centersarray

Centers of bins.

See also

histogram

References

1

https://en.wikipedia.org/wiki/Cumulative_distribution_function

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import exposure, img_as_float
>>> image = img_as_float(cp.array(data.camera()))
>>> hi = exposure.histogram(image)
>>> cdf = exposure.cumulative_distribution(image)
>>> cp.all(cdf[0] == cp.cumsum(hi[0])/float(image.size))
array(True)
cucim.skimage.exposure.equalize_adapthist(image, kernel_size=None, clip_limit=0.01, nbins=256)#

Contrast Limited Adaptive Histogram Equalization (CLAHE).

An algorithm for local contrast enhancement, that uses histograms computed over different tile regions of the image. Local details can therefore be enhanced even in regions that are darker or lighter than most of the image.

Parameters
image(M[, …][, C]) ndarray

Input image.

kernel_sizeint or array_like, optional

Defines the shape of contextual regions used in the algorithm. If iterable is passed, it must have the same number of elements as image.ndim (without color channel). If integer, it is broadcasted to each image dimension. By default, kernel_size is 1/8 of image height by 1/8 of its width.

clip_limitfloat, optional

Clipping limit, normalized between 0 and 1 (higher values give more contrast).

nbinsint, optional

Number of gray bins for histogram (“data range”).

Returns
out(M[, …][, C]) ndarray

Equalized image with float64 dtype.

Notes

  • For color images, the following steps are performed:
    • The image is converted to HSV color space

    • The CLAHE algorithm is run on the V (Value) channel

    • The image is converted back to RGB space and returned

  • For RGBA images, the original alpha channel is removed.

Changed in version 0.17: The values returned by this function are slightly shifted upwards because of an internal change in rounding behavior.

References

1

http://tog.acm.org/resources/GraphicsGems/

2

https://en.wikipedia.org/wiki/CLAHE#CLAHE
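
Examples

A minimal usage sketch (clip_limit=0.03 is an arbitrary illustrative choice; moon is just a convenient sample image):

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import exposure
>>> img = cp.array(data.moon())
>>> img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)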

cucim.skimage.exposure.equalize_hist(image, nbins=256, mask=None)#

Return image after histogram equalization.

Parameters
imagearray

Image array.

nbinsint, optional

Number of bins for image histogram. Note: this argument is ignored for integer images, for which each integer is its own bin.

mask: ndarray of bools or 0s and 1s, optional

Array of same shape as image. Only points at which mask == True are used for the equalization, which is applied to the whole image.

Returns
outfloat array

Image array after histogram equalization.

Notes

This function is adapted from [1] with the author’s permission.

References

1

http://www.janeriksolem.net/histogram-equalization-with-python-and.html

2

https://en.wikipedia.org/wiki/Histogram_equalization

cucim.skimage.exposure.histogram(image, nbins=256, source_range='image', normalize=False, *, channel_axis=None)#

Return histogram of image.

Unlike numpy.histogram, this function returns the centers of bins and does not rebin integer arrays. For integer arrays, each integer value has its own bin, which improves speed and intensity-resolution.

If channel_axis is not set, the histogram is computed on the flattened image. For color or multichannel images, set channel_axis to use a common binning for all channels. Alternatively, one may apply the function separately on each channel to obtain a histogram for each color channel with separate binning.

Parameters
imagearray

Input image.

nbinsint, optional

Number of bins used to calculate histogram. This value is ignored for integer arrays.

source_rangestring, optional

‘image’ (default) determines the range from the input image. ‘dtype’ determines the range from the expected range of the images of that data type.

normalizebool, optional

If True, normalize the histogram by the sum of its values.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
histarray

The values of the histogram. When channel_axis is not None, hist will be a 2D array where the first axis corresponds to channels.

bin_centersarray

The values at the center of the bins.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import exposure, img_as_float
>>> image = img_as_float(cp.array(data.camera()))
>>> cp.histogram(image, bins=2)
(array([ 93585, 168559]), array([0. , 0.5, 1. ]))
>>> exposure.histogram(image, nbins=2)
(array([ 93585, 168559]), array([0.25, 0.75]))
cucim.skimage.exposure.is_low_contrast(image, fraction_threshold=0.05, lower_percentile=1, upper_percentile=99, method='linear')#

Determine if an image is low contrast.

Parameters
imagearray-like

The image under test.

fraction_thresholdfloat, optional

The low contrast fraction threshold. An image is considered low-contrast when its range of brightness spans less than this fraction of its data type’s full range. [1]

lower_percentilefloat, optional

Disregard values below this percentile when computing image contrast.

upper_percentilefloat, optional

Disregard values above this percentile when computing image contrast.

methodstr, optional

The contrast determination method. Right now the only available option is “linear”.

Returns
outbool

True when the image is determined to be low contrast.

Notes

For boolean images, this function returns True only if all values are the same; an image containing both True and False values is not low contrast (the method, threshold, and percentile arguments are ignored).

References

1

https://scikit-image.org/docs/dev/user_guide/data_types.html

Examples

>>> import cupy as cp
>>> image = cp.linspace(0, 0.04, 100)
>>> is_low_contrast(image)
array(True)
>>> image[-1] = 1
>>> is_low_contrast(image)
array(True)
>>> is_low_contrast(image, upper_percentile=100)
array(False)
cucim.skimage.exposure.match_histograms(image, reference, *, channel_axis=None)#

Adjust an image so that its cumulative histogram matches that of another.

The adjustment is applied separately for each channel.

Parameters
imagendarray

Input image. Can be gray-scale or in color.

referencendarray

Image to match histogram of. Must have the same number of channels as image.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
matchedndarray

Transformed input image.

Raises
ValueError

Thrown when the number of channels in the input image and the reference differ.

References

1

http://paulbourke.net/miscellaneous/equalisation/
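
Examples

A minimal usage sketch matching a grayscale image against a reference (camera and moon are arbitrary sample images used only for illustration):

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.exposure import match_histograms
>>> image = cp.array(data.camera())
>>> reference = cp.array(data.moon())
>>> matched = match_histograms(image, reference)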

cucim.skimage.exposure.rescale_intensity(image, in_range='image', out_range='dtype')#

Return image after stretching or shrinking its intensity levels.

The desired intensity range of the input and output, in_range and out_range respectively, are used to stretch or shrink the intensity range of the input image. See examples below.

Parameters
imagearray

Image array.

in_range, out_rangestr or 2-tuple, optional

Min and max intensity values of input and output image. The possible values for this parameter are enumerated below.

‘image’

Use image min/max as the intensity range.

‘dtype’

Use min/max of the image’s dtype as the intensity range.

dtype-name

Use intensity range based on desired dtype. Must be valid key in DTYPE_RANGE.

2-tuple

Use range_values as explicit min/max intensities.

Returns
outarray

Image array after rescaling its intensity. This image is the same dtype as the input image.

See also

equalize_hist

Notes

Changed in version 0.17: The dtype of the output array has changed to match the output dtype, or float if the output range is specified by a pair of values.

Examples

By default, the min/max intensities of the input image are stretched to the limits allowed by the image’s dtype, since in_range defaults to ‘image’ and out_range defaults to ‘dtype’:

>>> import cupy as cp
>>> import numpy as np
>>> from cucim.skimage.exposure import rescale_intensity
>>> image = cp.array([51, 102, 153], dtype=np.uint8)
>>> rescale_intensity(image)
array([  0, 127, 255], dtype=uint8)

It’s easy to accidentally convert an image dtype from uint8 to float:

>>> 1.0 * image
array([ 51., 102., 153.])

Use rescale_intensity to rescale to the proper range for float dtypes:

>>> image_float = 1.0 * image
>>> rescale_intensity(image_float)
array([0. , 0.5, 1. ])

To maintain the low contrast of the original, use the in_range parameter:

>>> rescale_intensity(image_float, in_range=(0, 255))
array([0.2, 0.4, 0.6])

If the min/max value of in_range is more/less than the min/max image intensity, then the intensity levels are clipped:

>>> rescale_intensity(image_float, in_range=(0, 102))
array([0.5, 1. , 1. ])

If you have an image with signed integers but want to rescale the image to just the positive range, use the out_range parameter. In that case, the output dtype will be float:

>>> image = cp.asarray([-10, 0, 10], dtype=np.int8)
>>> rescale_intensity(image, out_range=(0, 127))
array([  0. ,  63.5, 127. ])

To get the desired range with a specific dtype, use .astype():

>>> rescale_intensity(image, out_range=(0, 127)).astype(np.int8)
array([  0,  63, 127], dtype=int8)

If the input image is constant, the output will be clipped directly to the output range:

>>> image = cp.asarray([130, 130, 130], dtype=np.int32)
>>> rescale_intensity(image, out_range=(0, 127)).astype(np.int32)
array([127, 127, 127], dtype=int32)

feature#

cucim.skimage.feature.blob_dog(image, min_sigma=1, max_sigma=50, sigma_ratio=1.6, threshold=0.5, overlap=0.5, *, threshold_rel=None, exclude_border=False)#

Finds blobs in the given grayscale image.

Blobs are found using the Difference of Gaussian (DoG) method [1], [2]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel that detected the blob.

Parameters
imagendarray

Input grayscale image, blobs are assumed to be light on dark background (white on black).

min_sigmascalar or sequence of scalars, optional

The minimum standard deviation for Gaussian kernel. Keep this low to detect smaller blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

max_sigmascalar or sequence of scalars, optional

The maximum standard deviation for Gaussian kernel. Keep this high to detect larger blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

sigma_ratiofloat, optional

The ratio between the standard deviation of Gaussian Kernels used for computing the Difference of Gaussians

thresholdfloat or None, optional

The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect blobs with lower intensities. If threshold_rel is also specified, whichever threshold is larger will be used. If None, threshold_rel is used instead.

overlapfloat, optional

A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than overlap, the smaller blob is eliminated.

threshold_relfloat or None, optional

Minimum intensity of peaks, calculated as max(dog_space) * threshold_rel, where dog_space refers to the stack of Difference-of-Gaussian (DoG) images computed internally. This should have a value between 0 and 1. If None, threshold is used instead.

exclude_bordertuple of ints, int, or False, optional

If tuple of ints, the length of the tuple must match the input array’s dimensionality. Each element of the tuple will exclude peaks from within exclude_border-pixels of the border of the image along that dimension. If nonzero int, exclude_border excludes peaks from within exclude_border-pixels of the border of the image. If zero or False, peaks are identified regardless of their distance from the border.

Returns
A(n, image.ndim + sigma) ndarray

A 2d array with each row representing 2 coordinate values for a 2D image, or 3 coordinate values for a 3D image, plus the sigma(s) used. When a single sigma is passed, outputs are: (r, c, sigma) or (p, r, c, sigma) where (r, c) or (p, r, c) are coordinates of the blob and sigma is the standard deviation of the Gaussian kernel which detected the blob. When an anisotropic gaussian is used (sigmas per dimension), the detected sigma is returned for each dimension.

Notes

The radius of each blob is approximately \(\sqrt{2}\sigma\) for a 2-D image and \(\sqrt{3}\sigma\) for a 3-D image.

References

1

https://en.wikipedia.org/wiki/Blob_detection#The_difference_of_Gaussians_approach

2

Lowe, D. G. “Distinctive Image Features from Scale-Invariant Keypoints.” International Journal of Computer Vision 60, 91–110 (2004). https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf DOI:10.1023/B:VISI.0000029664.99615.94

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import feature
>>> coins = cp.array(data.coins())
>>> feature.blob_dog(coins, threshold=.05, min_sigma=10, max_sigma=40)
array([[128., 155.,  10.],
       [198., 155.,  10.],
       [124., 338.,  10.],
       [127., 102.,  10.],
       [193., 281.,  10.],
       [126., 208.,  10.],
       [267., 115.,  10.],
       [197., 102.,  10.],
       [198., 215.,  10.],
       [123., 279.,  10.],
       [126.,  46.,  10.],
       [259., 247.,  10.],
       [196.,  43.,  10.],
       [ 54., 276.,  10.],
       [267., 358.,  10.],
       [ 58., 100.,  10.],
       [259., 305.,  10.],
       [185., 347.,  16.],
       [261., 174.,  16.],
       [ 46., 336.,  16.],
       [ 54., 217.,  10.],
       [ 55., 157.,  10.],
       [ 57.,  41.,  10.],
       [260.,  47.,  16.]])
cucim.skimage.feature.blob_doh(image, min_sigma=1, max_sigma=30, num_sigma=10, threshold=0.01, overlap=0.5, log_scale=False, *, threshold_rel=None)#

Finds blobs in the given grayscale image.

Blobs are found using the Determinant of Hessian method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian Kernel used for the Hessian matrix whose determinant detected the blob. Determinant of Hessians is approximated using [2].

Parameters
image2D ndarray

Input grayscale image. Blobs can either be light on dark or vice versa.

min_sigmafloat, optional

The minimum standard deviation for Gaussian Kernel used to compute Hessian matrix. Keep this low to detect smaller blobs.

max_sigmafloat, optional

The maximum standard deviation for Gaussian Kernel used to compute Hessian matrix. Keep this high to detect larger blobs.

num_sigmaint, optional

The number of intermediate values of standard deviations to consider between min_sigma and max_sigma.

thresholdfloat or None, optional

The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect blobs with lower intensities. If threshold_rel is also specified, whichever threshold is larger will be used. If None, threshold_rel is used instead.

overlapfloat, optional

A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than overlap, the smaller blob is eliminated.

log_scalebool, optional

If set intermediate values of standard deviations are interpolated using a logarithmic scale to the base 10. If not, linear interpolation is used.

threshold_relfloat or None, optional

Minimum intensity of peaks, calculated as max(doh_space) * threshold_rel, where doh_space refers to the stack of Determinant-of-Hessian (DoH) images computed internally. This should have a value between 0 and 1. If None, threshold is used instead.

Returns
A(n, 3) ndarray

A 2d array with each row representing 3 values, (y,x,sigma) where (y,x) are coordinates of the blob and sigma is the standard deviation of the Gaussian kernel of the Hessian Matrix whose determinant detected the blob.

Notes

The radius of each blob is approximately sigma. Computation of Determinant of Hessians is independent of the standard deviation. Therefore detecting larger blobs won’t take more time. In methods like blob_dog() and blob_log() the computation of Gaussians for larger sigma takes more time. The downside is that this method can’t be used for detecting blobs of radius less than 3px due to the box filters used in the approximation of Hessian Determinant.

References

1

https://en.wikipedia.org/wiki/Blob_detection#The_determinant_of_the_Hessian

2

Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import feature
>>> img = cp.array(data.coins())
>>> feature.blob_doh(img)
array([[197.      , 153.      ,  20.333334],
       [124.      , 336.      ,  20.333334],
       [126.      , 153.      ,  20.333334],
       [195.      , 100.      ,  23.555555],
       [192.      , 212.      ,  23.555555],
       [121.      , 271.      ,  30.      ],
       [126.      , 101.      ,  20.333334],
       [193.      , 275.      ,  23.555555],
       [123.      , 205.      ,  20.333334],
       [270.      , 363.      ,  30.      ],
       [265.      , 113.      ,  23.555555],
       [262.      , 243.      ,  23.555555],
       [185.      , 348.      ,  30.      ],
       [156.      , 302.      ,  30.      ],
       [123.      ,  44.      ,  23.555555],
       [260.      , 173.      ,  30.      ],
       [197.      ,  44.      ,  20.333334]], dtype=float32)
cucim.skimage.feature.blob_log(image, min_sigma=1, max_sigma=50, num_sigma=10, threshold=0.2, overlap=0.5, log_scale=False, *, threshold_rel=None, exclude_border=False)#

Finds blobs in the given grayscale image.

Blobs are found using the Laplacian of Gaussian (LoG) method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel that detected the blob.

Parameters
imagendarray

Input grayscale image, blobs are assumed to be light on dark background (white on black).

min_sigmascalar or sequence of scalars, optional

The minimum standard deviation for Gaussian kernel. Keep this low to detect smaller blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

max_sigmascalar or sequence of scalars, optional

The maximum standard deviation for Gaussian kernel. Keep this high to detect larger blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

num_sigmaint, optional

The number of intermediate values of standard deviations to consider between min_sigma and max_sigma.

thresholdfloat or None, optional

The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect blobs with lower intensities. If threshold_rel is also specified, whichever threshold is larger will be used. If None, threshold_rel is used instead.

overlapfloat, optional

A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than overlap, the smaller blob is eliminated.

log_scalebool, optional

If set intermediate values of standard deviations are interpolated using a logarithmic scale to the base 10. If not, linear interpolation is used.

threshold_relfloat or None, optional

Minimum intensity of peaks, calculated as max(log_space) * threshold_rel, where log_space refers to the stack of Laplacian-of-Gaussian (LoG) images computed internally. This should have a value between 0 and 1. If None, threshold is used instead.

exclude_bordertuple of ints, int, or False, optional

If tuple of ints, the length of the tuple must match the input array’s dimensionality. Each element of the tuple will exclude peaks from within exclude_border-pixels of the border of the image along that dimension. If nonzero int, exclude_border excludes peaks from within exclude_border-pixels of the border of the image. If zero or False, peaks are identified regardless of their distance from the border.

Returns
A(n, image.ndim + sigma) ndarray

A 2d array with each row representing 2 coordinate values for a 2D image, or 3 coordinate values for a 3D image, plus the sigma(s) used. When a single sigma is passed, outputs are: (r, c, sigma) or (p, r, c, sigma) where (r, c) or (p, r, c) are coordinates of the blob and sigma is the standard deviation of the Gaussian kernel which detected the blob. When an anisotropic gaussian is used (sigmas per dimension), the detected sigma is returned for each dimension.

References

1

https://en.wikipedia.org/wiki/Blob_detection#The_Laplacian_of_Gaussian

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import feature, exposure
>>> img = cp.array(data.coins())
>>> img = exposure.equalize_hist(img)  # improves detection
>>> feature.blob_log(img, threshold = .3)
array([[124.        , 336.        ,  11.88888889],
       [198.        , 155.        ,  11.88888889],
       [194.        , 213.        ,  17.33333333],
       [121.        , 272.        ,  17.33333333],
       [263.        , 244.        ,  17.33333333],
       [194.        , 276.        ,  17.33333333],
       [266.        , 115.        ,  11.88888889],
       [128.        , 154.        ,  11.88888889],
       [260.        , 174.        ,  17.33333333],
       [198.        , 103.        ,  11.88888889],
       [126.        , 208.        ,  11.88888889],
       [127.        , 102.        ,  11.88888889],
       [263.        , 302.        ,  17.33333333],
       [197.        ,  44.        ,  11.88888889],
       [185.        , 344.        ,  17.33333333],
       [126.        ,  46.        ,  11.88888889],
       [113.        , 323.        ,   1.        ]])
Notes

The radius of each blob is approximately \(\sqrt{2}\sigma\) for a 2-D image and \(\sqrt{3}\sigma\) for a 3-D image.
cucim.skimage.feature.canny(image, sigma=1.0, low_threshold=None, high_threshold=None, mask=None, use_quantiles=False, *, mode='constant', cval=0.0)#

Edge filter an image using the Canny algorithm.

Parameters
image2D array

Grayscale input image to detect edges on; can be of any dtype.

sigmafloat, optional

Standard deviation of the Gaussian filter.

low_thresholdfloat, optional

Lower bound for hysteresis thresholding (linking edges). If None, low_threshold is set to 10% of dtype’s max.

high_thresholdfloat, optional

Upper bound for hysteresis thresholding (linking edges). If None, high_threshold is set to 20% of dtype’s max.

maskarray, dtype=bool, optional

Mask to limit the application of Canny to a certain area.

use_quantilesbool, optional

If True then treat low_threshold and high_threshold as quantiles of the edge magnitude image, rather than absolute edge magnitude values. If True then the thresholds must be in the range [0, 1].

modestr, {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}

The mode parameter determines how the array borders are handled during Gaussian filtering, where cval is the value when mode is equal to ‘constant’.

cvalfloat, optional

Value to fill past edges of input if mode is ‘constant’.

Returns
output2D array (image)

The binary edge map.

Notes

The steps of the algorithm are as follows:

  • Smooth the image using a Gaussian with sigma width.

  • Apply the horizontal and vertical Sobel operators to get the gradients within the image. The edge strength is the norm of the gradient.

  • Thin potential edges to 1-pixel wide curves. First, find the normal to the edge at each point. This is done by looking at the signs and the relative magnitude of the X-Sobel and Y-Sobel to sort the points into 4 categories: horizontal, vertical, diagonal and antidiagonal. Then look in the normal and reverse directions to see if the values in either of those directions are greater than the point in question. Use interpolation to get a mix of points instead of picking the one that’s the closest to the normal.

  • Perform a hysteresis thresholding: first label all points above the high threshold as edges. Then recursively label any point above the low threshold that is 8-connected to a labeled point as an edge.

References

1

Canny, J., A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8:679-714, 1986 DOI:10.1109/TPAMI.1986.4767851

2

William Green’s Canny tutorial https://en.wikipedia.org/wiki/Canny_edge_detector

Examples

>>> import cupy as cp
>>> from cucim.skimage import feature
>>> # Generate noisy image of a square
>>> im = cp.zeros((256, 256))
>>> im[64:-64, 64:-64] = 1
>>> im += 0.2 * cp.random.rand(*im.shape)
>>> # First trial with the Canny filter, with the default smoothing
>>> edges1 = feature.canny(im)
>>> # Increase the smoothing for better results
>>> edges2 = feature.canny(im, sigma=3)
cucim.skimage.feature.corner_foerstner(image, sigma=1)#

Compute Foerstner corner measure response image.

This corner detector uses information from the auto-correlation matrix A:

A = [(imx**2)   (imx*imy)] = [Axx Axy]
    [(imx*imy)   (imy**2)]   [Axy Ayy]

Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as:

w = det(A) / trace(A)           (size of error ellipse)
q = 4 * det(A) / trace(A)**2    (roundness of error ellipse)
Parameters
image(M, N) ndarray

Input image.

sigmafloat, optional

Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.

Returns
wndarray

Error ellipse sizes.

qndarray

Roundness of error ellipse.

References

1

Förstner, W., & Gülch, E. (1987, June). A fast operator for detection and precise location of distinct points, corners and centres of circular features. In Proc. ISPRS intercommission conference on fast processing of photogrammetric data (pp. 281-305).

2

https://en.wikipedia.org/wiki/Corner_detection

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import corner_foerstner, corner_peaks
>>> square = cp.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> w, q = corner_foerstner(square)
>>> accuracy_thresh = 0.5
>>> roundness_thresh = 0.3
>>> foerstner = (q > roundness_thresh) * (w > accuracy_thresh) * w
>>> corner_peaks(foerstner, min_distance=1)  
array([[2, 2],
       [2, 7],
       [7, 2],
       [7, 7]])
cucim.skimage.feature.corner_harris(image, method='k', k=0.05, eps=1e-06, sigma=1)#

Compute Harris corner measure response image.

This corner detector uses information from the auto-correlation matrix A:

A = [(imx**2)   (imx*imy)] = [Axx Axy]
    [(imx*imy)   (imy**2)]   [Axy Ayy]

Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as:

det(A) - k * trace(A)**2

or:

2 * det(A) / (trace(A) + eps)
Parameters
image(M, N) ndarray

Input image.

method{‘k’, ‘eps’}, optional

Method to compute the response image from the auto-correlation matrix.

kfloat, optional

Sensitivity factor to separate corners from edges, typically in range [0, 0.2]. Small values of k result in detection of sharp corners.

epsfloat, optional

Normalisation factor (Noble’s corner measure).

sigmafloat, optional

Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.

Returns
responsendarray

Harris response image.

References

1

https://en.wikipedia.org/wiki/Corner_detection

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import corner_harris, corner_peaks
>>> square = cp.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_harris(square), min_distance=1)  
array([[2, 2],
       [2, 7],
       [7, 2],
       [7, 7]])
cucim.skimage.feature.corner_kitchen_rosenfeld(image, mode='constant', cval=0)#

Compute Kitchen and Rosenfeld corner measure response image.

The corner measure is calculated as follows:

(imxx * imy**2 + imyy * imx**2 - 2 * imxy * imx * imy)
    / (imx**2 + imy**2)

Where imx and imy are the first and imxx, imxy, imyy the second derivatives.

Parameters
image(M, N) ndarray

Input image.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

Returns
responsendarray

Kitchen and Rosenfeld response image.

References

1

Kitchen, L., & Rosenfeld, A. (1982). Gray-level corner detection. Pattern recognition letters, 1(2), 95-102. DOI:10.1016/0167-8655(82)90020-4
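
Examples

A minimal usage sketch in the style of the other corner-detector examples above:

>>> import cupy as cp
>>> from cucim.skimage.feature import corner_kitchen_rosenfeld, corner_peaks
>>> square = cp.zeros((10, 10))
>>> square[2:8, 2:8] = 1
>>> response = corner_kitchen_rosenfeld(square)
>>> corners = corner_peaks(response, min_distance=1)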

cucim.skimage.feature.corner_peaks(image, min_distance=1, threshold_abs=None, threshold_rel=None, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None, *, num_peaks_per_label=inf, p_norm=inf)#

Find peaks in corner measure response image.

This differs from skimage.feature.peak_local_max in that it suppresses multiple connected peaks with the same accumulator value.

Parameters
image(M, N) ndarray

Input image.

min_distanceint, optional

The minimal allowed distance separating peaks.

*

See skimage.feature.peak_local_max() for the remaining parameters.

p_normfloat

Which Minkowski p-norm to use. Should be in the range [1, inf]. A finite large p may cause a ValueError if overflow can occur. inf corresponds to the Chebyshev distance and 2 to the Euclidean distance.

Returns
outputndarray or ndarray of bools
  • If indices = True : (row, column, …) coordinates of peaks.

  • If indices = False : Boolean array shaped like image, with peaks represented by True values.

Notes

Changed in version 0.18: The default value of threshold_rel has changed to None, which corresponds to letting skimage.feature.peak_local_max decide on the default. This is equivalent to threshold_rel=0.

The num_peaks limit is applied before suppression of connected peaks. To limit the number of peaks after suppression, set num_peaks=np.inf and post-process the output of this function.

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import corner_peaks, peak_local_max
>>> response = cp.zeros((5, 5))
>>> response[2:4, 2:4] = 1
>>> response
array([[0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0.],
       [0., 0., 1., 1., 0.],
       [0., 0., 1., 1., 0.],
       [0., 0., 0., 0., 0.]])
>>> peak_local_max(response)
array([[2, 2],
       [2, 3],
       [3, 2],
       [3, 3]])
>>> corner_peaks(response)
array([[2, 2]])
cucim.skimage.feature.corner_shi_tomasi(image, sigma=1)#

Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image.

This corner detector uses information from the auto-correlation matrix A:

A = [(imx**2)   (imx*imy)] = [Axx Axy]
    [(imx*imy)   (imy**2)]   [Axy Ayy]

Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as the smaller eigenvalue of A:

((Axx + Ayy) - sqrt((Axx - Ayy)**2 + 4 * Axy**2)) / 2
Parameters
image(M, N) ndarray

Input image.

sigmafloat, optional

Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.

Returns
responsendarray

Shi-Tomasi response image.

References

1

https://en.wikipedia.org/wiki/Corner_detection

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import corner_shi_tomasi, corner_peaks
>>> square = cp.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_shi_tomasi(square),
...              min_distance=1)  
array([[2, 2],
       [2, 7],
       [7, 2],
       [7, 7]])
cucim.skimage.feature.daisy(image, step=4, radius=15, rings=3, histograms=8, orientations=8, normalization='l1', sigmas=None, ring_radii=None, visualize=False)#

Extract DAISY feature descriptors densely for the given image.

DAISY is a feature descriptor similar to SIFT formulated in a way that allows for fast dense extraction. Typically, this is practical for bag-of-features image representations.

The implementation follows Tola et al. [1] but deviates on the following points:

  • Histogram bin contributions are smoothed with a circular Gaussian window over the tonal range (the angular range).

  • The sigma values of the spatial Gaussian smoothing in this code do not match the sigma values in the original code by Tola et al. [2]. In their code, spatial smoothing is applied to both the input image and the center histogram. However, this smoothing is not documented in [1] and, therefore, it is omitted.

Parameters
image(M, N) array

Input image (grayscale).

stepint, optional

Distance between descriptor sampling points.

radiusint, optional

Radius (in pixels) of the outermost ring.

ringsint, optional

Number of rings.

histogramsint, optional

Number of histograms sampled per ring.

orientationsint, optional

Number of orientations (bins) per histogram.

normalization[ ‘l1’ | ‘l2’ | ‘daisy’ | ‘off’ ], optional

How to normalize the descriptors

  • ‘l1’: L1-normalization of each descriptor.

  • ‘l2’: L2-normalization of each descriptor.

  • ‘daisy’: L2-normalization of individual histograms.

  • ‘off’: Disable normalization.

sigmas1D array of float, optional

Standard deviation of spatial Gaussian smoothing for the center histogram and for each ring of histograms. The array of sigmas should be sorted from the center and out. I.e. the first sigma value defines the spatial smoothing of the center histogram and the last sigma value defines the spatial smoothing of the outermost ring. Specifying sigmas overrides the following parameter.

rings = len(sigmas) - 1

ring_radii1D array of int, optional

Radius (in pixels) for each ring. Specifying ring_radii overrides the following two parameters.

rings = len(ring_radii)
radius = ring_radii[-1]

If both sigmas and ring_radii are given, they must satisfy the following predicate since no radius is needed for the center histogram.

len(ring_radii) == len(sigmas) - 1

visualizebool, optional

Generate a visualization of the DAISY descriptors

Returns
descsarray

Grid of DAISY descriptors for the given image as an array of dimensionality (P, Q, R) where

P = ceil((M - radius*2) / step)
Q = ceil((N - radius*2) / step)
R = (rings * histograms + 1) * orientations

descs_img(M, N, 3) array (only if visualize==True)

Visualization of the DAISY descriptors.

References

1(1,2)

Tola et al. “Daisy: An efficient dense descriptor applied to wide- baseline stereo.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 32.5 (2010): 815-830.

2

http://cvlab.epfl.ch/software/daisy
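
Examples

A minimal usage sketch, added here for illustration rather than taken from the upstream docstring; the input is a hypothetical random grayscale image, and the descriptor grid shape follows the formulas given above:

>>> import cupy as cp
>>> from cucim.skimage.feature import daisy
>>> image = cp.random.random((100, 100))  # hypothetical grayscale input
>>> descs = daisy(image, step=10, radius=15)
>>> descs.shape  # P = Q = ceil((100 - 2*15) / 10), R = (3*8 + 1) * 8
(7, 7, 200)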

cucim.skimage.feature.hessian_matrix(image, sigma=1, mode='constant', cval=0, order='rc', use_gaussian_derivatives=None)#

Compute the Hessian matrix.

In 2D, the Hessian matrix is defined as:

H = [Hrr Hrc]
    [Hrc Hcc]

which is computed by convolving the image with the second derivatives of the Gaussian kernel in the respective r- and c-directions.

The implementation here also supports n-dimensional data.

Parameters
imagendarray

Input image.

sigmafloat

Standard deviation used for the Gaussian kernel, which sets the scale of the Gaussian smoothing applied when computing the second derivatives.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

order{‘rc’, ‘xy’}, optional

NOTE: ‘xy’ is only an option for 2D images, higher dimensions must always use ‘rc’ order. This parameter allows for the use of reverse or forward order of the image axes in gradient computation. ‘rc’ indicates the use of the first axis initially (Hrr, Hrc, Hcc), whilst ‘xy’ indicates the usage of the last axis initially (Hxx, Hxy, Hyy).

use_gaussian_derivativesboolean, optional

Indicates whether the Hessian is computed by convolving with Gaussian derivatives, or by a simple finite-difference operation.

Returns
H_elemslist of ndarray

Upper-diagonal elements of the hessian matrix for each pixel in the input image. In 2D, this will be a three element list containing [Hrr, Hrc, Hcc]. In nD, the list will contain (n**2 + n) / 2 arrays.

Notes

The distributive property of derivatives and convolutions allows us to restate the derivative of an image, I, smoothed with a Gaussian kernel, G, as the convolution of the image with the derivative of G.

\[\frac{\partial }{\partial x_i}(I * G) = I * \left( \frac{\partial }{\partial x_i} G \right)\]

When use_gaussian_derivatives is True, this property is used to compute the second order derivatives that make up the Hessian matrix.

When use_gaussian_derivatives is False, simple finite differences on a Gaussian-smoothed image are used instead.

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import hessian_matrix
>>> square = cp.zeros((5, 5))
>>> square[2, 2] = 4
>>> Hrr, Hrc, Hcc = hessian_matrix(square, sigma=0.1, order='rc',
...                                use_gaussian_derivatives=False)
>>> Hrc
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  0., -1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0., -1.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
cucim.skimage.feature.hessian_matrix_det(image, sigma=1, approximate=True)#

Compute the approximate Hessian Determinant over an image.

The 2D approximate method uses box filters over integral images to compute the approximate Hessian Determinant.

Parameters
imagendarray

The image over which to compute the Hessian Determinant.

sigmafloat, optional

Standard deviation of the Gaussian kernel used for the Hessian matrix.

approximatebool, optional

If True and the image is 2D, use a much faster approximate computation. This argument has no effect on 3D and higher images.

Returns
outarray

The array of the Determinant of Hessians.

Notes

For 2D images when approximate=True, the running time of this method only depends on size of the image. It is independent of sigma as one would expect. The downside is that the result for sigma less than 3 is not accurate, i.e., not similar to the result obtained if someone computed the Hessian and took its determinant.

References

1

Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf
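
Examples

A minimal sketch, added for illustration; only the output shape is shown, since the exact determinant values depend on sigma and on the approximation used:

>>> import cupy as cp
>>> from cucim.skimage.feature import hessian_matrix_det
>>> square = cp.zeros((7, 7))
>>> square[3, 3] = 4
>>> hessian_matrix_det(square, sigma=1).shape
(7, 7)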

cucim.skimage.feature.hessian_matrix_eigvals(H_elems)#

Compute eigenvalues of Hessian matrix.

Parameters
H_elemslist of ndarray

The upper-diagonal elements of the Hessian matrix, as returned by hessian_matrix.

Returns
eigsndarray

The eigenvalues of the Hessian matrix, in decreasing order. The eigenvalues are the leading dimension. That is, eigs[i, j, k] contains the ith-largest eigenvalue at position (j, k).

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import (hessian_matrix,
...                                    hessian_matrix_eigvals)
>>> square = cp.zeros((5, 5))
>>> square[2, 2] = 4
>>> H_elems = hessian_matrix(square, sigma=0.1, order='rc',
...                          use_gaussian_derivatives=False)
>>> hessian_matrix_eigvals(H_elems)[0]
array([[ 0.,  0.,  2.,  0.,  0.],
       [ 0.,  1.,  0.,  1.,  0.],
       [ 2.,  0., -2.,  0.,  2.],
       [ 0.,  1.,  0.,  1.,  0.],
       [ 0.,  0.,  2.,  0.,  0.]])
cucim.skimage.feature.match_descriptors(descriptors1, descriptors2, metric=None, p=2, max_distance=inf, cross_check=True, max_ratio=1.0)#

Brute-force matching of descriptors.

For each descriptor in the first set this matcher finds the closest descriptor in the second set (and vice-versa in the case of enabled cross-checking).

Parameters
descriptors1(M, P) array

Descriptors of size P about M keypoints in the first image.

descriptors2(N, P) array

Descriptors of size P about N keypoints in the second image.

metric{‘euclidean’, ‘cityblock’, ‘minkowski’, ‘hamming’, …} , optional

The metric to compute the distance between two descriptors. See scipy.spatial.distance.cdist for all possible types. The hamming distance should be used for binary descriptors. By default the L2-norm is used for all descriptors of dtype float or double and the Hamming distance is used for binary descriptors automatically.

pint, optional

The p-norm to apply for metric='minkowski'.

max_distancefloat, optional

Maximum allowed distance between descriptors of two keypoints in separate images to be regarded as a match.

cross_checkbool, optional

If True, the matched keypoints are returned after cross checking i.e. a matched pair (keypoint1, keypoint2) is returned if keypoint2 is the best match for keypoint1 in second image and keypoint1 is the best match for keypoint2 in first image.

max_ratiofloat, optional

Maximum ratio of distances between first and second closest descriptor in the second set of descriptors. This threshold is useful to filter ambiguous matches between the two descriptor sets. The choice of this value depends on the statistics of the chosen descriptor, e.g., for SIFT descriptors a value of 0.8 is usually chosen, see D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, 2004.

Returns
matches(Q, 2) array

Indices of corresponding matches in first and second set of descriptors, where matches[:, 0] denote the indices in the first and matches[:, 1] the indices in the second set of descriptors.
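
Examples

A minimal sketch, added for illustration, using two tiny hand-made descriptor sets; with cross-checking enabled, each descriptor is matched to its identical counterpart:

>>> import cupy as cp
>>> from cucim.skimage.feature import match_descriptors
>>> descriptors1 = cp.asarray([[0.0, 0.0], [1.0, 1.0]])
>>> descriptors2 = cp.asarray([[1.0, 1.0], [0.0, 0.0]])
>>> match_descriptors(descriptors1, descriptors2)
array([[0, 1],
       [1, 0]])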

cucim.skimage.feature.match_template(image, template, pad_input=False, mode='constant', constant_values=0)#

Match a template to a 2-D or 3-D image using normalized correlation.

The output is an array with values between -1.0 and 1.0. The value at a given position corresponds to the correlation coefficient between the image and the template.

For pad_input=True matches correspond to the center and otherwise to the top-left corner of the template. To find the best match you must search for peaks in the response (output) image.

Parameters
image(M, N[, D]) array

2-D or 3-D input image.

template(m, n[, d]) array

Template to locate. It must be (m <= M, n <= N[, d <= D]).

pad_inputbool

If True, pad image so that output is the same size as the image, and output values correspond to the template center. Otherwise, the output is an array with shape (M - m + 1, N - n + 1) for an (M, N) image and an (m, n) template, and matches correspond to origin (top-left corner) of the template.

modesee numpy.pad, optional

Padding mode.

constant_valuessee numpy.pad, optional

Constant values used in conjunction with mode='constant'.

Returns
outputarray

Response image with correlation coefficients.

Notes

Details on the cross-correlation are presented in [1]. This implementation uses FFT convolutions of the image and the template. Reference [2] presents similar derivations but the approximation presented in this reference is not used in our implementation.

This CuPy implementation does not force the image to float64 internally; float32 is used for single-precision inputs.

References

1

J. P. Lewis, “Fast Normalized Cross-Correlation”, Industrial Light and Magic.

2

Briechle and Hanebeck, “Template Matching using Fast Normalized Cross Correlation”, Proceedings of the SPIE (2001). DOI:10.1117/12.421129

Examples

>>> import cupy as cp
>>> template = cp.zeros((3, 3))
>>> template[1, 1] = 1
>>> template
array([[0., 0., 0.],
       [0., 1., 0.],
       [0., 0., 0.]])
>>> image = cp.zeros((6, 6))
>>> image[1, 1] = 1
>>> image[4, 4] = -1
>>> image
array([[ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0., -1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.]])
>>> result = match_template(image, template)
>>> cp.around(result, 3)
array([[ 1.   , -0.125,  0.   ,  0.   ],
       [-0.125, -0.125,  0.   ,  0.   ],
       [ 0.   ,  0.   ,  0.125,  0.125],
       [ 0.   ,  0.   ,  0.125, -1.   ]])
>>> result = match_template(image, template, pad_input=True)
>>> cp.around(result, 3)
array([[-0.125, -0.125, -0.125,  0.   ,  0.   ,  0.   ],
       [-0.125,  1.   , -0.125,  0.   ,  0.   ,  0.   ],
       [-0.125, -0.125, -0.125,  0.   ,  0.   ,  0.   ],
       [ 0.   ,  0.   ,  0.   ,  0.125,  0.125,  0.125],
       [ 0.   ,  0.   ,  0.   ,  0.125, -1.   ,  0.125],
       [ 0.   ,  0.   ,  0.   ,  0.125,  0.125,  0.125]])
cucim.skimage.feature.multiscale_basic_features(image, intensity=True, edges=True, texture=True, sigma_min=0.5, sigma_max=16, num_sigma=None, num_workers=None, *, channel_axis=None)#

Local features for a single- or multi-channel nd image.

Intensity, gradient intensity and local structure are computed at different scales thanks to Gaussian blurring.

Parameters
imagendarray

Input image, which can be grayscale or multichannel.

intensitybool, default True

If True, pixel intensities averaged over the different scales are added to the feature set.

edgesbool, default True

If True, intensities of local gradients averaged over the different scales are added to the feature set.

texturebool, default True

If True, eigenvalues of the Hessian matrix after Gaussian blurring at different scales are added to the feature set.

sigma_minfloat, optional

Smallest value of the Gaussian kernel used to average local neighborhoods before extracting features.

sigma_maxfloat, optional

Largest value of the Gaussian kernel used to average local neighborhoods before extracting features.

num_sigmaint, optional

Number of values of the Gaussian kernel between sigma_min and sigma_max. If None, sigma_min multiplied by powers of 2 are used.

num_workersint or None, optional

The number of parallel threads to use. If set to None, the full set of available cores are used.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
featuresnp.ndarray

Array of shape image.shape + (n_features,). When channel_axis is not None, all channels are concatenated along the features dimension. (i.e. n_features == n_features_singlechannel * n_channels)
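
Examples

A minimal sketch, added for illustration; the size of the last axis depends on which feature types are enabled and on the number of scales, so only the leading axes are shown:

>>> import cupy as cp
>>> from cucim.skimage.feature import multiscale_basic_features
>>> img = cp.zeros((32, 32))
>>> img[8:24, 8:24] = 1
>>> features = multiscale_basic_features(img, sigma_min=1, sigma_max=8)
>>> features.shape[:2]  # leading axes match the image shape
(32, 32)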

cucim.skimage.feature.peak_local_max(image, min_distance=1, threshold_abs=None, threshold_rel=None, exclude_border=True, num_peaks=inf, footprint=None, labels=None, num_peaks_per_label=inf, p_norm=inf)#

Find peaks in an image as coordinate list.

Peaks are the local maxima in a region of 2 * min_distance + 1 (i.e. peaks are separated by at least min_distance).

If both threshold_abs and threshold_rel are provided, the maximum of the two is chosen as the minimum intensity threshold of peaks.

Changed in version 0.18: Prior to version 0.18, peaks of the same height within a radius of min_distance were all returned, but this could cause unexpected behaviour. From 0.18 onwards, an arbitrary peak within the region is returned. See issue gh-2592.

Parameters
imagendarray

Input image.

min_distanceint, optional

The minimal allowed distance separating peaks. To find the maximum number of peaks, use min_distance=1.

threshold_absfloat or None, optional

Minimum intensity of peaks. By default, the absolute threshold is the minimum intensity of the image.

threshold_relfloat or None, optional

Minimum intensity of peaks, calculated as max(image) * threshold_rel.

exclude_borderint, tuple of ints, or bool, optional

If positive integer, exclude_border excludes peaks from within exclude_border-pixels of the border of the image. If tuple of non-negative ints, the length of the tuple must match the input array’s dimensionality. Each element of the tuple will exclude peaks from within exclude_border-pixels of the border of the image along that dimension. If True, takes the min_distance parameter as value. If zero or False, peaks are identified regardless of their distance from the border.

num_peaksint, optional

Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks peaks based on highest peak intensity.

footprintndarray of bools, optional

If provided, footprint == 1 represents the local region within which to search for peaks at every point in image.

labelsndarray of ints, optional

If provided, each unique region labels == value represents a unique region to search for peaks. Zero is reserved for background.

num_peaks_per_labelint, optional

Maximum number of peaks for each label.

p_normfloat

Which Minkowski p-norm to use. Should be in the range [1, inf]. A finite large p may cause a ValueError if overflow can occur. inf corresponds to the Chebyshev distance and 2 to the Euclidean distance.

Returns
outputndarray

The coordinates of the peaks.

Notes

The peak local maximum function returns the coordinates of local peaks (maxima) in an image. Internally, a maximum filter is used for finding local maxima. This operation dilates the original image. After comparison of the dilated and original images, this function returns the coordinates of the peaks where the dilated image equals the original image.

Examples

>>> import cupy as cp
>>> img1 = cp.zeros((7, 7))
>>> img1[3, 4] = 1
>>> img1[3, 2] = 1.5
>>> img1
array([[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
       [0. , 0. , 0. , 0. , 0. , 0. , 0. ],
       [0. , 0. , 0. , 0. , 0. , 0. , 0. ],
       [0. , 0. , 1.5, 0. , 1. , 0. , 0. ],
       [0. , 0. , 0. , 0. , 0. , 0. , 0. ],
       [0. , 0. , 0. , 0. , 0. , 0. , 0. ],
       [0. , 0. , 0. , 0. , 0. , 0. , 0. ]])
>>> peak_local_max(img1, min_distance=1)
array([[3, 2],
       [3, 4]])
>>> peak_local_max(img1, min_distance=2)
array([[3, 2]])
>>> img2 = cp.zeros((20, 20, 20))
>>> img2[10, 10, 10] = 1
>>> img2[15, 15, 15] = 1
>>> peak_idx = peak_local_max(img2, exclude_border=0)
>>> peak_idx
array([[10, 10, 10],
       [15, 15, 15]])
>>> peak_mask = cp.zeros_like(img2, dtype=bool)
>>> peak_mask[tuple(peak_idx.T)] = True
>>> cp.argwhere(peak_mask)
array([[10, 10, 10],
       [15, 15, 15]])
cucim.skimage.feature.shape_index(image, sigma=1, mode='constant', cval=0)#

Compute the shape index.

The shape index, as defined by Koenderink & van Doorn [1], is a single valued measure of local curvature, assuming the image as a 3D plane with intensities representing heights.

It is derived from the eigenvalues of the Hessian, and its value ranges from -1 to 1 (and is undefined, i.e. NaN, in flat regions), with the following ranges representing the following shapes:

Ranges of the shape index and corresponding shapes.#

Interval (s in …)    Shape
[ -1, -7/8)          Spherical cup
[-7/8, -5/8)         Trough
[-5/8, -3/8)         Rut
[-3/8, -1/8)         Saddle rut
[-1/8, +1/8)         Saddle
[+1/8, +3/8)         Saddle ridge
[+3/8, +5/8)         Ridge
[+5/8, +7/8)         Dome
[+7/8, +1]           Spherical cap

Parameters
image(M, N) ndarray

Input image.

sigmafloat, optional

Standard deviation used for the Gaussian kernel, which is used for smoothing the input data before Hessian eigenvalue calculation.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

Returns
sndarray

Shape index

References

1

Koenderink, J. J. & van Doorn, A. J., “Surface shape and curvature scales”, Image and Vision Computing, 1992, 10, 557-564. DOI:10.1016/0262-8856(92)90076-F

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import shape_index
>>> square = cp.zeros((5, 5))
>>> square[2, 2] = 4
>>> s = shape_index(square, sigma=0.1)
>>> s
array([[ nan,  nan, -0.5,  nan,  nan],
       [ nan, -0. ,  nan, -0. ,  nan],
       [-0.5,  nan, -1. ,  nan, -0.5],
       [ nan, -0. ,  nan, -0. ,  nan],
       [ nan,  nan, -0.5,  nan,  nan]])
cucim.skimage.feature.structure_tensor(image, sigma=1, mode='constant', cval=0, order='rc')#

Compute structure tensor using sum of squared differences.

The (2-dimensional) structure tensor A is defined as:

A = [Arr Arc]
    [Arc Acc]

which is approximated by the weighted sum of squared differences in a local window around each pixel in the image. This formula can be extended to a larger number of dimensions (see [1]).

Parameters
imagendarray

Input image.

sigmafloat or array-like of float, optional

Standard deviation used for the Gaussian kernel, which is used as a weighting function for the local summation of squared differences. If sigma is an iterable, its length must be equal to image.ndim and each element is used for the Gaussian kernel applied along its respective axis.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

order{‘rc’, ‘xy’}, optional

NOTE: ‘xy’ is only an option for 2D images, higher dimensions must always use ‘rc’ order. This parameter allows for the use of reverse or forward order of the image axes in gradient computation. ‘rc’ indicates the use of the first axis initially (Arr, Arc, Acc), whilst ‘xy’ indicates the usage of the last axis initially (Axx, Axy, Ayy).

Returns
A_elemslist of ndarray

Upper-diagonal elements of the structure tensor for each pixel in the input image.

References

1

https://en.wikipedia.org/wiki/Structure_tensor

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import structure_tensor
>>> square = cp.zeros((5, 5))
>>> square[2, 2] = 1
>>> Arr, Arc, Acc = structure_tensor(square, sigma=0.1, order="rc")
>>> Acc
array([[0., 0., 0., 0., 0.],
       [0., 1., 0., 1., 0.],
       [0., 4., 0., 4., 0.],
       [0., 1., 0., 1., 0.],
       [0., 0., 0., 0., 0.]])
cucim.skimage.feature.structure_tensor_eigenvalues(A_elems)#

Compute eigenvalues of structure tensor.

Parameters
A_elemslist of ndarray

The upper-diagonal elements of the structure tensor, as returned by structure_tensor.

Returns
ndarray

The eigenvalues of the structure tensor, in decreasing order. The eigenvalues are the leading dimension. That is, the coordinate [i, j, k] corresponds to the ith-largest eigenvalue at position (j, k).

See also

structure_tensor

Examples

>>> import cupy as cp
>>> from cucim.skimage.feature import structure_tensor
>>> from cucim.skimage.feature import structure_tensor_eigenvalues
>>> square = cp.zeros((5, 5))
>>> square[2, 2] = 1
>>> A_elems = structure_tensor(square, sigma=0.1, order='rc')
>>> structure_tensor_eigenvalues(A_elems)[0]
array([[0., 0., 0., 0., 0.],
       [0., 2., 4., 2., 0.],
       [0., 4., 0., 4., 0.],
       [0., 2., 4., 2., 0.],
       [0., 0., 0., 0., 0.]])

filters#

class cucim.skimage.filters.LPIFilter2D(impulse_response, **filter_params)#

Linear Position-Invariant Filter (2-dimensional)

Methods

__call__(data)

Apply the filter to the given data.
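
Examples

A minimal usage sketch, added for illustration; gaussian_response is a hypothetical impulse response, not part of the API:

>>> import cupy as cp
>>> from cucim.skimage.filters import LPIFilter2D
>>> def gaussian_response(r, c, sigma=1):
...     return cp.exp(-(r**2 + c**2) / (2 * sigma**2))
>>> f = LPIFilter2D(gaussian_response, sigma=2)
>>> image = cp.random.random((16, 16))  # hypothetical input
>>> smoothed = f(image)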

cucim.skimage.filters.apply_hysteresis_threshold(image, low, high)#

Apply hysteresis thresholding to image.

This algorithm finds regions where image is greater than high OR image is greater than low and that region is connected to a region greater than high.

Parameters
imagearray, shape (M,[ N, …, P])

Grayscale input image.

lowfloat, or array of same shape as image

Lower threshold.

highfloat, or array of same shape as image

Higher threshold.

Returns
thresholdedarray of bool, same shape as image

Array in which True indicates the locations where image was above the hysteresis threshold.

References

1

J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986; vol. 8, pp.679-698. DOI:10.1109/TPAMI.1986.4767851

Examples

>>> import cupy as cp
>>> from cucim.skimage.filters import apply_hysteresis_threshold
>>> image = cp.asarray([1, 2, 3, 2, 1, 2, 1, 3, 2])
>>> apply_hysteresis_threshold(image, 1.5, 2.5).astype(int)
array([0, 1, 1, 1, 0, 0, 0, 1, 1])
cucim.skimage.filters.butterworth(image, cutoff_frequency_ratio=0.005, high_pass=True, order=2.0, channel_axis=None, *, squared_butterworth=True, npad=0)#

Apply a Butterworth filter to enhance high or low frequency features.

This filter is defined in the Fourier domain.

Parameters
image(M[, N[, …, P]][, C]) ndarray

Input image.

cutoff_frequency_ratiofloat, optional

Determines the position of the cut-off relative to the shape of the FFT. Its value should be in the range [0, 0.5].

high_passbool, optional

Whether to perform a high pass filter. If False, a low pass filter is performed.

orderfloat, optional

Order of the filter which affects the slope near the cut-off. Higher order means steeper slope in frequency space.

channel_axisint, optional

If there is a channel dimension, provide the index here. If None (default) then all axes are assumed to be spatial dimensions.

squared_butterworthbool, optional

When True, the square of a Butterworth filter is used. See notes below for more details.

npadint, optional

Pad each edge of the image by npad pixels using numpy.pad’s mode='edge' extension.

Returns
resultndarray

The Butterworth-filtered image.

Notes

A band-pass filter can be achieved by combining a high-pass and low-pass filter. The user can increase npad if boundary artifacts are apparent.

The “Butterworth filter” used in image processing textbooks (e.g. [1], [2]) is often the square of the traditional Butterworth filter as described by [3], [4]. The squared version is used here if squared_butterworth is set to True. The squared Butterworth filter is given by the following expression for the lowpass case:

\[H_{low}(f) = \frac{1}{1 + \left(\frac{f}{c f_s}\right)^{2n}}\]

with the highpass case given by

\[H_{hi}(f) = 1 - H_{low}(f)\]

where \(f=\sqrt{\sum_{d=0}^{\mathrm{ndim}} f_{d}^{2}}\) is the absolute value of the spatial frequency, \(f_s\) is the sampling frequency, \(c\) the cutoff_frequency_ratio, and \(n\) is the filter order [1]. When squared_butterworth=False, the square root of the above expressions is used instead.

Note that cutoff_frequency_ratio is defined in terms of the sampling frequency, \(f_s\). The FFT spectrum covers the Nyquist range (\([-f_s/2, f_s/2]\)) so cutoff_frequency_ratio should have a value between 0 and 0.5. The frequency response (gain) at the cutoff is 0.5 when squared_butterworth is true and \(1/\sqrt{2}\) when it is false.

References

1(1,2)

Russ, John C., et al. The Image Processing Handbook, 3rd. Ed. 1999, CRC Press, LLC.

2

Birchfield, Stan. Image Processing and Analysis. 2018. Cengage Learning.

3

Butterworth, Stephen. “On the theory of filter amplifiers.” Wireless Engineer 7.6 (1930): 536-541.

4

https://en.wikipedia.org/wiki/Butterworth_filter

Examples

Apply a high pass and low-pass Butterworth filter to a grayscale and color image respectively:

>>> import cupy as cp
>>> from skimage.data import camera, astronaut
>>> from cucim.skimage.filters import butterworth
>>> cam = cp.asarray(camera())
>>> astro = cp.asarray(astronaut())
>>> high_pass = butterworth(cam, 0.07, True, 8)
>>> low_pass = butterworth(astro, 0.01, False, 4, channel_axis=-1)
cucim.skimage.filters.correlate_sparse(image, kernel, mode='reflect')#

Compute the valid cross-correlation of the padded input image and kernel.

This function is fast when kernel is large with many zeros.

See scipy.ndimage.correlate for a description of cross-correlation.

Parameters
imagendarray, dtype float, shape (M, N[, …], P)

The input array. If mode is ‘valid’, this array should already be padded, as a margin of the same shape as kernel will be stripped off.

kernelndarray, dtype float shape (Q, R[, …], S)

The kernel to be correlated. Must have the same number of dimensions as padded_array. For high performance, it should be sparse (few nonzero entries).

modestring, optional

See scipy.ndimage.correlate for valid modes. Additionally, mode ‘valid’ is accepted, in which case no padding is applied and the result is the result for the smaller image for which the kernel is entirely inside the original data.

Returns
resultarray of float, shape (M, N[, …], P)

The result of cross-correlating image with kernel. If mode ‘valid’ is used, the resulting shape is (M-Q+1, N-R+1[, …], P-S+1).
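
Examples

A minimal sketch, added for illustration, using a kernel with only two nonzero entries, the sparse case this function is optimized for:

>>> import cupy as cp
>>> from cucim.skimage.filters import correlate_sparse
>>> image = cp.random.random((8, 8))  # hypothetical input
>>> kernel = cp.zeros((3, 3))
>>> kernel[0, 0] = 1
>>> kernel[2, 2] = 1
>>> correlate_sparse(image, kernel, mode='reflect').shape
(8, 8)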

cucim.skimage.filters.difference_of_gaussians(image, low_sigma, high_sigma=None, *, mode='nearest', cval=0, channel_axis=None, truncate=4.0)#

Find features between low_sigma and high_sigma in size.

This function uses the Difference of Gaussians method for applying band-pass filters to multi-dimensional arrays. The input array is blurred with two Gaussian kernels of differing sigmas to produce two intermediate, filtered images. The more-blurred image is then subtracted from the less-blurred image. The final output image will therefore have had high-frequency components attenuated by the smaller-sigma Gaussian, and low frequency components will have been removed due to their presence in the more-blurred intermediate.

Parameters
imagendarray

Input array to filter.

low_sigmascalar or sequence of scalars

Standard deviation(s) for the Gaussian kernel with the smaller sigmas across all axes. The standard deviations are given for each axis as a sequence, or as a single number, in which case the single number is used as the standard deviation value for all axes.

high_sigmascalar or sequence of scalars, optional (default is None)

Standard deviation(s) for the Gaussian kernel with the larger sigmas across all axes. The standard deviations are given for each axis as a sequence, or as a single number, in which case the single number is used as the standard deviation value for all axes. If None is given (default), sigmas for all axes are calculated as 1.6 * low_sigma.

mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.

cvalscalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

truncatefloat, optional (default is 4.0)

Truncate the filter at this many standard deviations.

Returns
filtered_imagendarray

the filtered array.

See also

skimage.feature.blob_dog

Notes

This function will subtract an array filtered with a Gaussian kernel with sigmas given by high_sigma from an array filtered with a Gaussian kernel with sigmas provided by low_sigma. The values for high_sigma must always be greater than or equal to the corresponding values in low_sigma, or a ValueError will be raised.

When high_sigma is None, the values for high_sigma will be calculated as 1.6x the corresponding values in low_sigma. This ratio was originally proposed by Marr and Hildreth (1980) [1] and is commonly used when approximating the inverted Laplacian of Gaussian, which is used in edge and blob detection.

Input image is converted according to the conventions of img_as_float.

Except for sigma values, all parameters are used for both filters.

References

1

Marr, D. and Hildreth, E. Theory of Edge Detection. Proc. R. Soc. Lond. Series B 207, 187-217 (1980). https://doi.org/10.1098/rspb.1980.0020

Examples

Apply a simple Difference of Gaussians filter to a color image:

>>> import cupy as cp
>>> from skimage.data import astronaut
>>> from cucim.skimage.filters import difference_of_gaussians
>>> astro = cp.asarray(astronaut())
>>> filtered_image = difference_of_gaussians(astro, 2, 10,
...                                          channel_axis=-1)

Apply a Laplacian of Gaussian filter as approximated by the Difference of Gaussians filter:

>>> filtered_image = difference_of_gaussians(astro, 2,
...                                          channel_axis=-1)

Apply a Difference of Gaussians filter to a grayscale image using different sigma values for each axis:

>>> from skimage.data import camera
>>> cam = cp.array(camera())
>>> filtered_image = difference_of_gaussians(cam, (2, 5), (3, 20))
cucim.skimage.filters.farid(image, mask=None, *, axis=None, mode='reflect', cval=0.0)#

Find the edge magnitude using the Farid transform.

Parameters
imagecp.ndarray

The input image.

maskcp.ndarray of bool, optional

Clip the output image to this mask. (Values where mask=0 will be set to 0.)

axisint or sequence of int, optional

Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as:

farid_mag = cp.sqrt(sum([farid(image, axis=i)**2
                         for i in range(image.ndim)]) / image.ndim)

The magnitude is also computed if axis is a sequence.

modestr or sequence of str, optional

The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.

cvalfloat, optional

When mode is 'constant', this is the constant used in values outside the boundary of the image data.

Returns
output2-D array

The Farid edge map.

See also

farid_h, farid_v

horizontal and vertical edge detection.

scharr, sobel, prewitt, skimage.feature.canny

Notes

Take the square root of the sum of the squares of the horizontal and vertical derivatives to get a magnitude that is somewhat insensitive to direction. Similar to the Scharr operator, this operator is designed with a rotation invariance constraint.

References

1

Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819

2

Wikipedia, “Farid and Simoncelli Derivatives.” Available at: <https://en.wikipedia.org/wiki/Image_derivatives#Farid_and_Simoncelli_Derivatives>

Examples

>>> import cupy as cp
>>> from skimage import data
>>> camera = cp.array(data.camera())
>>> from cucim.skimage import filters
>>> edges = filters.farid(camera)
cucim.skimage.filters.farid_h(image, *, mask=None)#

Find the horizontal edges of an image using the Farid transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Farid edge map.

Notes

The kernel was constructed using the 5-tap weights from [1].

References

1

Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819

2

Farid, H. and Simoncelli, E. P. “Optimally rotation-equivariant directional derivative kernels”, In: 7th International Conference on Computer Analysis of Images and Patterns, Kiel, Germany. Sep, 1997.
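
Examples

A minimal sketch, added for illustration; farid_v is used the same way for vertical edges:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import farid_h
>>> camera = cp.array(data.camera())
>>> edges_h = farid_h(camera)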

cucim.skimage.filters.farid_v(image, *, mask=None)#

Find the vertical edges of an image using the Farid transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Farid edge map.

Notes

The kernel was constructed using the 5-tap weights from [1].

References

1

Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819

cucim.skimage.filters.filter_forward(data, impulse_response=None, filter_params=None, predefined_filter=None)#

Apply the given filter to data.

Parameters
data(M, N) ndarray

Input data.

impulse_responsecallable f(r, c, **filter_params)

Impulse response of the filter. See LPIFilter2D.__init__.

filter_paramsdict, optional

Additional keyword parameters to the impulse_response function.

Other Parameters
predefined_filterLPIFilter2D

If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.

Examples

Gaussian filter without normalization:

>>> import cupy as cp
>>> from cucim.skimage.filters import filter_forward
>>> def filt_func(r, c, sigma=1):
...     return cp.exp(-(r**2 + c**2)/(2 * sigma**2))
>>>
>>> from skimage import data
>>> filtered = filter_forward(cp.array(data.coins()), filt_func)
cucim.skimage.filters.filter_inverse(data, impulse_response=None, filter_params=None, max_gain=2, predefined_filter=None)#

Apply the filter in reverse to the given data.

Parameters
data(M, N) ndarray

Input data.

impulse_responsecallable f(r, c, **filter_params)

Impulse response of the filter. See LPIFilter2D. This is a required argument unless a predefined_filter is provided.

filter_paramsdict, optional

Additional keyword parameters to the impulse_response function.

max_gainfloat, optional

Limit the filter gain. Often, the filter contains zeros, which would cause the inverse filter to have infinite gain. High gain causes amplification of artefacts, so a conservative limit is recommended.

Other Parameters
predefined_filterLPIFilter2D, optional

If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.
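
Examples

A minimal sketch, added for illustration; filt_func is a hypothetical Gaussian impulse response. A forward-filtered image is approximately recovered by the inverse filter, subject to the max_gain limit:

>>> import cupy as cp
>>> from cucim.skimage.filters import filter_forward, filter_inverse
>>> def filt_func(r, c, sigma=1):
...     return cp.exp(-(r**2 + c**2) / (2 * sigma**2))
>>> image = cp.random.random((32, 32))  # hypothetical input
>>> blurred = filter_forward(image, filt_func)
>>> restored = filter_inverse(blurred, filt_func, max_gain=2)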

cucim.skimage.filters.frangi(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, alpha=0.5, beta=0.5, gamma=None, black_ridges=True, mode='reflect', cval=0)#

Filter an image with the Frangi vesselness filter.

This filter can be used to detect continuous ridges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.

Defined only for 2-D and 3-D images. Calculates the eigenvalues of the Hessian to compute the similarity of an image region to vessels, according to the method described in [1].

Parameters
image(M, N[, P]) ndarray

Array with input image data.

sigmasiterable of floats, optional

Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)

scale_range2-tuple of floats, optional

The range of sigmas used.

scale_stepfloat, optional

Step size between sigmas.

alphafloat, optional

Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.

betafloat, optional

Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.

gammafloat, optional

Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure. The default, None, uses half of the maximum Hessian norm.

black_ridgesboolean, optional

When True (the default), the filter detects black ridges; when False, it detects white ridges.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

Returns
out(M, N[, P]) ndarray

Filtered image (maximum of pixels across all scales).

Notes

Earlier versions of this filter were implemented by Marc Schrijver, (November 2001), D. J. Kroon, University of Twente (May 2009) [2], and D. G. Ellis (January 2017) [3].

References

1

Frangi, A. F., Niessen, W. J., Vincken, K. L., & Viergever, M. A. (1998). Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 130-137). Springer Berlin Heidelberg. DOI:10.1007/BFb0056195

2

Kroon, D. J.: Hessian based Frangi vesselness filter.

3

Ellis, D. G.: ellisdg/frangi3d
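
Examples

A minimal sketch, added for illustration, applying the vesselness filter to a standard grayscale test image over a small set of scales:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import frangi
>>> image = cp.array(data.camera())
>>> vessels = frangi(image, sigmas=range(1, 5))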

cucim.skimage.filters.gabor(image, frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0, mode='reflect', cval=0)#

Return real and imaginary responses to Gabor filter.

The real and imaginary parts of the Gabor filter kernel are applied to the image and the response is returned as a pair of arrays.

Gabor filter is a linear filter with a Gaussian kernel which is modulated by a sinusoidal plane wave. Frequency and orientation representations of the Gabor filter are similar to those of the human visual system. Gabor filter banks are commonly used in computer vision and image processing. They are especially suitable for edge detection and texture classification.

Parameters
image2-D array

Input image.

frequencyfloat

Spatial frequency of the harmonic function. Specified in pixels.

thetafloat, optional

Orientation in radians. If 0, the harmonic is in the x-direction.

bandwidthfloat, optional

The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.

sigma_x, sigma_yfloat, optional

Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.

n_stdsscalar, optional

The linear size of the kernel is n_stds (3 by default) standard deviations.

offsetfloat, optional

Phase offset of harmonic function in radians.

mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional

Mode used to convolve image with a kernel, passed to ndi.convolve

cvalscalar, optional

Value to fill past edges of input if mode of convolution is ‘constant’. The parameter is passed to ndi.convolve.

Returns
real, imagarrays

Filtered images using the real and imaginary parts of the Gabor filter kernel. Images are of the same dimensions as the input one.

References

1

https://en.wikipedia.org/wiki/Gabor_filter

2

https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf

Examples

>>> import cupy as cp
>>> from cucim.skimage.filters import gabor
>>> from skimage import data, io
>>> from matplotlib import pyplot as plt  
>>> image = cp.array(data.coins())
>>> # detecting edges in a coin image
>>> filt_real, filt_imag = gabor(image, frequency=0.6)
>>> plt.figure()                        
>>> io.imshow(cp.asnumpy(filt_real))    
>>> io.show()                           
>>> # less sensitivity to finer details with the lower frequency kernel
>>> filt_real, filt_imag = gabor(image, frequency=0.1)
>>> plt.figure()                       
>>> io.imshow(cp.asnumpy(filt_real))    
>>> io.show()                          
cucim.skimage.filters.gabor_kernel(frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0, dtype=None, *, float_dtype=None)#

Return complex 2D Gabor filter kernel.

Gabor kernel is a Gaussian kernel modulated by a complex harmonic function. Harmonic function consists of an imaginary sine function and a real cosine function. Spatial frequency is inversely proportional to the wavelength of the harmonic and to the standard deviation of a Gaussian kernel. The bandwidth is also inversely proportional to the standard deviation.

Parameters
frequencyfloat

Spatial frequency of the harmonic function. Specified in pixels.

thetafloat, optional

Orientation in radians. If 0, the harmonic is in the x-direction.

bandwidthfloat, optional

The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.

sigma_x, sigma_yfloat, optional

Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.

n_stdsscalar, optional

The linear size of the kernel is n_stds (3 by default) standard deviations

offsetfloat, optional

Phase offset of harmonic function in radians.

dtype{np.complex64, np.complex128}

Specifies if the filter is single or double precision complex.

Returns
gcomplex array

Complex filter kernel.

References

1

https://en.wikipedia.org/wiki/Gabor_filter

2

https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf

Examples

>>> import cupy as cp
>>> from cucim.skimage.filters import gabor_kernel
>>> from skimage import io
>>> from matplotlib import pyplot as plt  
>>> gk = gabor_kernel(frequency=0.2)
>>> plt.figure()                    
>>> io.imshow(cp.asnumpy(gk.real))  
>>> io.show()                       
>>> # more ripples (equivalent to increasing the size of the
>>> # Gaussian spread)
>>> gk = gabor_kernel(frequency=0.2, bandwidth=0.1)
>>> plt.figure()                    
>>> io.imshow(cp.asnumpy(gk.real))  
>>> io.show()                       
cucim.skimage.filters.gaussian(image, sigma=1, output=<DEPRECATED>, mode='nearest', cval=0, preserve_range=False, truncate=4.0, *, channel_axis=None, out=None)#

Multi-dimensional Gaussian filter.

Parameters
imagendarray

Input image (grayscale or color) to filter.

sigmascalar or sequence of scalars, optional

Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.

cvalscalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

truncatefloat, optional

Truncate the filter at this many standard deviations.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

New in version 24.02: channel_axis was added in 24.02

outndarray, optional

If given, the filtered image will be stored in this array. It must have a floating point data type.

New in version 24.06: out was added in 24.06

Returns
filtered_imagendarray

the filtered array

Other Parameters
outputDEPRECATED

Deprecated in favor of out.

Deprecated since version 24.06.

Notes

This function is a wrapper around scipy.ndimage.gaussian_filter().

Integer arrays are converted to float.

out should be of floating point data type since gaussian converts the input image to float. If out is not provided, another array will be allocated and returned as the result.

The multi-dimensional filter is implemented as a sequence of one-dimensional convolution filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.

Examples

>>> import cupy as cp
>>> from cucim import skimage as ski
>>> a = cp.zeros((3, 3))
>>> a[1, 1] = 1
>>> a
array([[0., 0., 0.],
       [0., 1., 0.],
       [0., 0., 0.]])
>>> ski.filters.gaussian(a, sigma=0.4)  # mild smoothing
array([[0.00163116, 0.03712502, 0.00163116],
       [0.03712502, 0.84496158, 0.03712502],
       [0.00163116, 0.03712502, 0.00163116]])
>>> ski.filters.gaussian(a, sigma=1)  # more smoothing
array([[0.05855018, 0.09653293, 0.05855018],
       [0.09653293, 0.15915589, 0.09653293],
       [0.05855018, 0.09653293, 0.05855018]])
>>> # Several modes are possible for handling boundaries
>>> ski.filters.gaussian(a, sigma=1, mode='reflect')
array([[0.08767308, 0.12075024, 0.08767308],
       [0.12075024, 0.16630671, 0.12075024],
       [0.08767308, 0.12075024, 0.08767308]])
>>> # For RGB images, each is filtered separately
>>> from skimage.data import astronaut
>>> image = cp.array(astronaut())
>>> filtered_img = ski.filters.gaussian(image, sigma=1, channel_axis=-1)
cucim.skimage.filters.hessian(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True, mode='reflect', cval=0)#

Filter an image with the Hybrid Hessian filter.

This filter can be used to detect continuous edges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.

Defined only for 2-D and 3-D images. Almost equal to Frangi filter, but uses alternative method of smoothing. Refer to [1] to find the differences between Frangi and Hessian filters.

Parameters
image(M, N[, P]) ndarray

Array with input image data.

sigmasiterable of floats, optional

Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)

scale_range2-tuple of floats, optional

The range of sigmas used.

scale_stepfloat, optional

Step size between sigmas.

alphafloat, optional

Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.

betafloat, optional

Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.

gammafloat, optional

Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.

black_ridgesboolean, optional

When True (the default), the filter detects black ridges; when False, it detects white ridges.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

Returns
out(M, N[, P]) ndarray

Filtered image (maximum of pixels across all scales).

See also

meijering
sato
frangi

Notes

Written by Marc Schrijver (November 2001). Re-written by D. J. Kroon, University of Twente (May 2009) [2].

References

1

Ng, C. C., Yap, M. H., Costen, N., & Li, B. (2014,). Automatic wrinkle detection using hybrid Hessian filter. In Asian Conference on Computer Vision (pp. 609-622). Springer International Publishing. DOI:10.1007/978-3-319-16811-1_40

2

Kroon, D. J.: Hessian based Frangi vesselness filter.
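
Examples

A minimal sketch, added for illustration, analogous to the Frangi example but using the hybrid Hessian smoothing:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import hessian
>>> image = cp.array(data.camera())
>>> out = hessian(image, sigmas=range(1, 5))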

cucim.skimage.filters.laplace(image, ksize=3, mask=None)#

Find the edges of an image using the Laplace operator.

Parameters
imagendarray

Image to process.

ksizeint, optional

Define the size of the discrete Laplacian operator such that it will have a size of (ksize,) * image.ndim.

maskndarray, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
outputndarray

The Laplace edge map.

Notes

The Laplacian operator is generated using the function skimage.restoration.uft.laplacian().
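
Examples

A minimal sketch, added for illustration, using the default 3x3 discrete Laplacian:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import laplace
>>> image = cp.array(data.camera())
>>> edges = laplace(image, ksize=3)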

cucim.skimage.filters.median(image, footprint=None, out=None, mode='nearest', cval=0.0, behavior='ndimage', *, algorithm='auto', algorithm_kwargs={})#

Return local median of an image.

Parameters
imagearray-like

Input image.

footprintndarray, tuple of int, or None

If None, footprint will be a N-D array with 3 elements for each dimension (e.g., vector, square, cube, etc.). If footprint is a tuple of integers, it will be an array of ones with the given shape. Otherwise, if behavior=='rank', footprint is a 2-D array of 1’s and 0’s. If behavior=='ndimage', footprint is a N-D array of 1’s and 0’s with the same number of dimension as image. Note that upstream scikit-image currently does not support supplying a tuple for footprint. It is added here to avoid overhead of generating a small weights array in cases where it is not needed.

outndarray, (same dtype as image), optional

If None, a new array is allocated.

mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.

New in version 0.15: mode is used when behavior='ndimage'.

cvalscalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0

New in version 0.15: cval was added in 0.15 and is used when behavior='ndimage'.

behavior{‘ndimage’, ‘rank’}, optional

Either to use the old behavior (i.e., < 0.15) or the new behavior. The old behavior will call the skimage.filters.rank.median(). The new behavior will call the scipy.ndimage.median_filter(). Default is ‘ndimage’.

New in version 0.15: behavior was introduced in 0.15

Changed in version 0.16: Default behavior has been changed from ‘rank’ to ‘ndimage’

Returns
out2-D array (same dtype as input image)

Output image.

Other Parameters
algorithm{‘auto’, ‘histogram’, ‘sorting’}

Determines which algorithm is used to compute the median. The default of ‘auto’ will attempt to use a histogram-based algorithm for 2D images with 8 or 16-bit integer data types. Otherwise a sorting-based algorithm will be used. Note: this parameter is cuCIM-specific and does not exist in upstream scikit-image.

algorithm_kwargsdict

Any additional algorithm-specific keywords. Currently can only be used to set the number of parallel partitions for the ‘histogram’ algorithm. (e.g. algorithm_kwargs={'partitions': 256}). Note: this parameter is cuCIM-specific and does not exist in upstream scikit-image.

See also

skimage.filters.rank.median

Rank-based implementation of the median filtering offering more flexibility with additional parameters but dedicated to unsigned integer images.

Notes

An efficient, histogram-based median filter as described in [1] is faster than the sorting based approach for larger kernel sizes (e.g. greater than 13x13 or so in 2D). It has near-constant run time regardless of the kernel size. The algorithm presented in [1] has been adapted to additional bit depths here. When algorithm=’auto’, the histogram-based algorithm will be chosen for integer-valued images with sufficiently large footprint size. Otherwise, the sorting-based approach is used.

References

1(1,2)

O. Green, “Efficient Scalable Median Filtering Using Histogram-Based Operations,” in IEEE Transactions on Image Processing, vol. 27, no. 5, pp. 2217-2228, May 2018, https://doi.org/10.1109/TIP.2017.2781375.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.morphology import disk
>>> from cucim.skimage.filters import median
>>> img = cp.array(data.camera())
>>> med = median(img, disk(5))
cucim.skimage.filters.meijering(image, sigmas=range(1, 10, 2), alpha=None, black_ridges=True, mode='reflect', cval=0)#

Filter an image with the Meijering neuriteness filter.

This filter can be used to detect continuous ridges, e.g. neurites, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.

Calculates the eigenvalues of the Hessian to compute the similarity of an image region to neurites, according to the method described in [1].

Parameters
image(N, M[, …]) ndarray

Array with input image data.

sigmasiterable of floats, optional

Sigmas used as scales of filter

alphafloat, optional

Shaping filter constant that selects maximally flat elongated features. The default, None, selects the optimal value -1/(ndim+1).

black_ridgesboolean, optional

When True (the default), the filter detects black ridges; when False, it detects white ridges.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

Returns
out(N, M[, …, P]) ndarray

Filtered image (maximum of pixels across all scales).

See also

sato
frangi
hessian

References

1

Meijering, E., Jacob, M., Sarria, J. C., Steiner, P., Hirling, H., Unser, M. (2004). Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A, 58(2), 167-176. DOI:10.1002/cyto.a.20022
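
Examples

A minimal sketch, added for illustration; passing black_ridges=False switches the filter to detecting bright (white) ridges:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import meijering
>>> image = cp.array(data.camera())
>>> neurites = meijering(image, sigmas=range(1, 5), black_ridges=False)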

cucim.skimage.filters.prewitt(image, mask=None, *, axis=None, mode='reflect', cval=0.0)#

Find the edge magnitude using the Prewitt transform.

Parameters
imagearray

The input image.

maskarray of bool, optional

Clip the output image to this mask. (Values where mask=0 will be set to 0.)

axisint or sequence of int, optional

Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as:

prw_mag = np.sqrt(sum([prewitt(image, axis=i)**2
                       for i in range(image.ndim)]) / image.ndim)

The magnitude is also computed if axis is a sequence.

modestr or sequence of str, optional

The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.

cvalfloat, optional

When mode is 'constant', this is the constant used in values outside the boundary of the image data.

Returns
outputarray of float

The Prewitt edge map.

See also

prewitt_h, prewitt_v

horizontal and vertical edge detection.

sobel, scharr, farid, cucim.skimage.feature.canny

Notes

The edge magnitude depends slightly on edge directions, since the approximation of the gradient operator by the Prewitt operator is not completely rotation invariant. For a better rotation invariance, the Scharr operator should be used. The Sobel operator has a better rotation invariance than the Prewitt operator, but a worse rotation invariance than the Scharr operator.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import filters
>>> camera = cp.array(data.camera())
>>> edges = filters.prewitt(camera)
cucim.skimage.filters.prewitt_h(image, mask=None)#

Find the horizontal edges of an image using the Prewitt transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Prewitt edge map.

Notes

We use the following kernel:

 1/3   1/3   1/3
  0     0     0
-1/3  -1/3  -1/3
cucim.skimage.filters.prewitt_v(image, mask=None)#

Find the vertical edges of an image using the Prewitt transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Prewitt edge map.

Notes

We use the following kernel:

1/3   0  -1/3
1/3   0  -1/3
1/3   0  -1/3
cucim.skimage.filters.rank_order(image)#

Return an image of the same shape where each pixel is the index of the pixel value in the ascending order of the unique values of image, aka the rank-order value.

Parameters
imagecp.ndarray
Returns
labelscp.ndarray of unsigned integers, of shape image.shape

New array where each pixel has the rank-order value of the corresponding pixel in image. Pixel values are between 0 and n - 1, where n is the number of distinct unique values in image. The dtype of this array will be determined by cp.min_scalar_type(image.size).

original_values1-D cp.ndarray

Unique original values of image. This will have the same dtype as image.

Examples

>>> import cupy as cp
>>> from cucim.skimage.filters import rank_order
>>> a = cp.asarray([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
>>> a
array([[1, 4, 5],
       [4, 4, 1],
       [5, 1, 1]])
>>> rank_order(a)
(array([[0, 1, 2],
       [1, 1, 0],
       [2, 0, 0]], dtype=uint8), array([1, 4, 5]))
>>> b = cp.asarray([-1., 2.5, 3.1, 2.5])
>>> rank_order(b)
(array([0, 1, 2, 1], dtype=uint8), array([-1. ,  2.5,  3.1]))
cucim.skimage.filters.roberts(image, mask=None)#

Find the edge magnitude using Roberts’ cross operator.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Roberts’ Cross edge map.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> camera = cp.array(data.camera())
>>> from cucim.skimage import filters
>>> edges = filters.roberts(camera)
cucim.skimage.filters.roberts_neg_diag(image, mask=None)#

Find the cross edges of an image using the Roberts’ Cross operator.

The kernel is applied to the input image to produce separate measurements of the gradient component in one orientation.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Roberts’ edge map.

Notes

We use the following kernel:

 0   1
-1   0
cucim.skimage.filters.roberts_pos_diag(image, mask=None)#

Find the cross edges of an image using Roberts’ cross operator.

The kernel is applied to the input image to produce separate measurements of the gradient component in one orientation.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Roberts’ edge map.

Notes

We use the following kernel:

1   0
0  -1
cucim.skimage.filters.sato(image, sigmas=range(1, 10, 2), black_ridges=True, mode='reflect', cval=0)#

Filter an image with the Sato tubeness filter.

This filter can be used to detect continuous ridges, e.g. tubes, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects.

Defined only for 2-D and 3-D images. Calculates the eigenvalues of the Hessian to compute the similarity of an image region to tubes, according to the method described in [1].

Parameters
image(M, N[, P]) ndarray

Array with input image data.

sigmasiterable of floats, optional

Sigmas used as scales of filter.

black_ridgesboolean, optional

When True (the default), the filter detects black ridges; when False, it detects white ridges.

mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional

How to handle values outside the image borders.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

Returns
out(M, N[, P]) ndarray

Filtered image (maximum of pixels across all scales).

References

1

Sato, Y., Nakajima, S., Shiraga, N., Atsumi, H., Yoshida, S., Koller, T., …, Kikinis, R. (1998). Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical image analysis, 2(2), 143-168. DOI:10.1016/S1361-8415(98)80009-1
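
Examples

A minimal usage sketch (illustrative, not from the upstream docstring; the sigmas and black_ridges values are arbitrary choices):

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import sato
>>> image = cp.array(data.camera())
>>> tubeness = sato(image, sigmas=range(1, 5), black_ridges=False)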

cucim.skimage.filters.scharr(image, mask=None, *, axis=None, mode='reflect', cval=0.0)#

Find the edge magnitude using the Scharr transform.

Parameters
imagearray

The input image.

maskarray of bool, optional

Clip the output image to this mask. (Values where mask=0 will be set to 0.)

axisint or sequence of int, optional

Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as:

sch_mag = np.sqrt(sum([scharr(image, axis=i)**2
                       for i in range(image.ndim)]) / image.ndim)

The magnitude is also computed if axis is a sequence.

modestr or sequence of str, optional

The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.

cvalfloat, optional

When mode is 'constant', this is the constant used in values outside the boundary of the image data.

Returns
outputarray of float

The Scharr edge map.

See also

scharr_h, scharr_v

horizontal and vertical edge detection.

sobel, prewitt, farid, cucim.skimage.feature.canny

Notes

The Scharr operator has a better rotation invariance than other edge filters such as the Sobel or the Prewitt operators.

References

1

D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.

2

https://en.wikipedia.org/wiki/Sobel_operator#Alternative_operators

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import filters
>>> camera = cp.array(data.camera())
>>> edges = filters.scharr(camera)
cucim.skimage.filters.scharr_h(image, mask=None)#

Find the horizontal edges of an image using the Scharr transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Scharr edge map.

Notes

We use the following kernel:

 3   10   3
 0    0   0
-3  -10  -3

References

1

D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.

cucim.skimage.filters.scharr_v(image, mask=None)#

Find the vertical edges of an image using the Scharr transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Scharr edge map.

Notes

We use the following kernel:

 3   0   -3
10   0  -10
 3   0   -3

References

1

D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.

cucim.skimage.filters.sobel(image, mask=None, *, axis=None, mode='reflect', cval=0.0)#

Find edges in an image using the Sobel filter.

Parameters
imagearray

The input image.

maskarray of bool, optional

Clip the output image to this mask. (Values where mask=0 will be set to 0.)

axisint or sequence of int, optional

Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as:

sobel_mag = np.sqrt(sum([sobel(image, axis=i)**2
                         for i in range(image.ndim)]) / image.ndim)

The magnitude is also computed if axis is a sequence.

modestr or sequence of str, optional

The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.

cvalfloat, optional

When mode is 'constant', this is the constant used in values outside the boundary of the image data.

Returns
outputarray of float

The Sobel edge map.

See also

sobel_h, sobel_v

horizontal and vertical edge detection.

scharr, prewitt, farid, cucim.skimage.feature.canny

References

1

D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.

2

https://en.wikipedia.org/wiki/Sobel_operator

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import filters
>>> camera = cp.array(data.camera())
>>> edges = filters.sobel(camera)
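
A further illustrative sketch (not part of the upstream docstring) of the axis parameter described above, computing the derivative along a single axis instead of the magnitude:

>>> edges_axis0 = filters.sobel(camera, axis=0)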
cucim.skimage.filters.sobel_h(image, mask=None)#

Find the horizontal edges of an image using the Sobel transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Sobel edge map.

Notes

We use the following kernel:

 1   2   1
 0   0   0
-1  -2  -1
cucim.skimage.filters.sobel_v(image, mask=None)#

Find the vertical edges of an image using the Sobel transform.

Parameters
image2-D array

Image to process.

mask2-D array, optional

An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result.

Returns
output2-D array

The Sobel edge map.

Notes

We use the following kernel:

1   0  -1
2   0  -2
1   0  -1
cucim.skimage.filters.threshold_isodata(image=None, nbins=256, return_all=False, *, hist=None)#

Return threshold value(s) based on ISODATA method.

Histogram-based threshold, known as Ridler-Calvard method or inter-means. Threshold values returned satisfy the following equality:

threshold = (image[image <= threshold].mean() +
             image[image > threshold].mean()) / 2.0

That is, returned thresholds are intensities that separate the image into two groups of pixels, where the threshold intensity is midway between the mean intensities of these groups.

For integer images, the above equality holds to within one; for floating-point images, the equality holds to within the histogram bin-width.

Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

nbinsint, optional

Number of bins used to calculate histogram. This value is ignored for integer arrays.

return_allbool, optional

If False (default), return only the lowest threshold that satisfies the above equality. If True, return all valid thresholds.

histarray, or 2-tuple of arrays, optional

Histogram to determine the threshold from and a corresponding array of bin center intensities. Alternatively, only the histogram can be passed.

Returns
thresholdfloat or int or array

Threshold value(s).

References

1

Ridler, TW & Calvard, S (1978), “Picture thresholding using an iterative selection method” IEEE Transactions on Systems, Man and Cybernetics 8: 630-632, DOI:10.1109/TSMC.1978.4310039

2

Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf DOI:10.1117/1.1631315

3

ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold

Examples

>>> import cupy as cp
>>> from skimage.data import coins
>>> from cucim.skimage.filters import threshold_isodata
>>> image = cp.array(coins())
>>> thresh = threshold_isodata(image)
>>> binary = image > thresh
cucim.skimage.filters.threshold_li(image, *, tolerance=None, initial_guess=None, iter_callback=None)#

Compute threshold value by Li’s iterative Minimum Cross Entropy method.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

tolerancefloat, optional

Finish the computation when the change in the threshold in an iteration is less than this value. By default, this is half the smallest difference between intensity values in image.

initial_guessfloat or Callable[[array[float]], float], optional

Li’s iterative method uses gradient descent to find the optimal threshold. If the image intensity histogram contains more than two modes (peaks), the gradient descent could get stuck in a local optimum. An initial guess for the iteration can help the algorithm find the globally-optimal threshold. A float value defines a specific start point, while a callable should take in an array of image intensities and return a float value. Example valid callables include numpy.mean (default), lambda arr: numpy.quantile(arr, 0.95), or even skimage.filters.threshold_otsu().

iter_callbackCallable[[float], Any], optional

A function that will be called on the threshold at every iteration of the algorithm.

Returns
thresholdfloat

Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.

References

1

Li C.H. and Lee C.K. (1993) “Minimum Cross Entropy Thresholding” Pattern Recognition, 26(4): 617-625 DOI:10.1016/0031-3203(93)90115-D

2

Li C.H. and Tam P.K.S. (1998) “An Iterative Algorithm for Minimum Cross Entropy Thresholding” Pattern Recognition Letters, 18(8): 771-776 DOI:10.1016/S0167-8655(98)00057-9

3

Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165 DOI:10.1117/1.1631315

4

ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold

Examples

>>> import cupy as cp
>>> from skimage.data import camera
>>> from cucim.skimage.filters import threshold_li
>>> image = cp.array(camera())
>>> thresh = threshold_li(image)
>>> binary = image > thresh
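
A sketch (not part of the upstream docstring) of the initial_guess parameter, mirroring the quantile callable suggested in its description; the 0.95 quantile is an arbitrary choice:

>>> thresh2 = threshold_li(image, initial_guess=lambda arr: float(cp.quantile(arr, 0.95)))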
cucim.skimage.filters.threshold_local(image, block_size=3, method='gaussian', offset=0, mode='reflect', param=None, cval=0)#

Compute a threshold mask image based on local pixel neighborhood.

Also known as adaptive or dynamic thresholding. The threshold value is the weighted mean for the local neighborhood of a pixel subtracted by a constant. Alternatively the threshold can be determined dynamically by a given function, using the ‘generic’ method.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

block_sizeint or sequence of int

Odd size of pixel neighborhood which is used to calculate the threshold value (e.g. 3, 5, 7, …, 21, …).

method{‘generic’, ‘gaussian’, ‘mean’, ‘median’}, optional

Method used to determine adaptive threshold for local neighborhood in weighted mean image.

  • ‘generic’: use custom function (see param parameter)

  • ‘gaussian’: apply gaussian filter (see param parameter for custom sigma value)

  • ‘mean’: apply arithmetic mean filter

  • ‘median’: apply median rank filter

By default, the ‘gaussian’ method is used.

offsetfloat, optional

Constant subtracted from weighted mean of neighborhood to calculate the local threshold value. Default offset is 0.

mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘reflect’.

param{int, function}, optional

Either specify sigma for the ‘gaussian’ method or a function object for the ‘generic’ method. This function takes the flat array of the local neighborhood as a single argument and returns the calculated threshold for the center pixel.

cvalfloat, optional

Value to fill past edges of input if mode is ‘constant’.

Returns
threshold(M, N[, …]) ndarray

Threshold image. All pixels in the input image higher than the corresponding pixel in the threshold image are considered foreground.

References

1

Gonzalez, R. C. and Woods, R. E. “Digital Image Processing (2nd Edition).” Prentice-Hall Inc., 2002: 600–612. ISBN: 0-201-18075-8

Examples

>>> import cupy as cp
>>> from skimage.data import camera
>>> from cucim.skimage.filters import threshold_local
>>> image = cp.array(camera()[:50, :50])
>>> binary_image1 = image > threshold_local(image, 15, 'mean')
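
A further sketch (not from the upstream docstring) of the param argument, here assumed to set the Gaussian sigma as described in the parameter list above:

>>> binary_image2 = image > threshold_local(image, 15, 'gaussian', param=3)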
cucim.skimage.filters.threshold_mean(image)#

Return threshold value based on the mean of grayscale values.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

Returns
thresholdfloat

Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.

References

1

C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993. DOI:10.1006/cgip.1993.1040

Examples

>>> import cupy as cp
>>> from skimage.data import camera
>>> from cucim.skimage.filters import threshold_mean
>>> image = cp.array(camera())
>>> thresh = threshold_mean(image)
>>> binary = image > thresh
cucim.skimage.filters.threshold_minimum(image=None, nbins=256, max_num_iter=10000, *, hist=None)#

Return threshold value based on minimum method.

The histogram of the input image is computed if not provided and smoothed until there are only two maxima. Then the minimum in between is the threshold value.

Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored.

Parameters
image(M, N[, …]) ndarray, optional

Grayscale input image.

nbinsint, optional

Number of bins used to calculate histogram. This value is ignored for integer arrays.

max_num_iterint, optional

Maximum number of iterations to smooth the histogram.

histarray, or 2-tuple of arrays, optional

Histogram to determine the threshold from and a corresponding array of bin center intensities. Alternatively, only the histogram can be passed.

Returns
thresholdfloat

Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.

Raises
RuntimeError

If unable to find two local maxima in the histogram or if the smoothing takes more than 1e4 iterations.

References

1

C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993.

2

Prewitt, JMS & Mendelsohn, ML (1966), “The analysis of cell images”, Annals of the New York Academy of Sciences 128: 1035-1053 DOI:10.1111/j.1749-6632.1965.tb11715.x

Examples

>>> import cupy as cp
>>> from skimage.data import camera
>>> from cucim.skimage.filters import threshold_minimum
>>> image = cp.array(camera())
>>> thresh = threshold_minimum(image)
>>> binary = image > thresh
cucim.skimage.filters.threshold_multiotsu(image=None, classes=3, nbins=256, *, hist=None)#

Generate classes-1 threshold values to divide gray levels in image, following Otsu’s method for multiple classes.

The threshold values are chosen to maximize the total sum of pairwise variances between the thresholded graylevel classes. See Notes and [1] for more details.

Either image or hist must be provided. If hist is provided, the actual histogram of the image is ignored.

Parameters
image(M, N[, …]) ndarray, optional

Grayscale input image.

classesint, optional

Number of classes to be thresholded, i.e. the number of resulting regions.

nbinsint, optional

Number of bins used to calculate the histogram. This value is ignored for integer arrays.

histarray, or 2-tuple of arrays, optional

Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. If no hist is provided, this function will compute it from the image (see Notes).

Returns
thresharray

Array containing the threshold values for the desired classes.

Raises
ValueError

If image contains fewer grayscale values than the desired number of classes.

Notes

This implementation relies on a Cython function whose complexity is \(O\left(\frac{Ch^{C-1}}{(C-1)!}\right)\), where \(h\) is the number of histogram bins and \(C\) is the number of classes desired.

If no hist is given, this function will make use of skimage.exposure.histogram, which behaves differently than np.histogram. While both are allowed, use the former for consistent behaviour.

The input image must be grayscale.

References

1

Liao, P-S., Chen, T-S. and Chung, P-C., “A fast algorithm for multilevel thresholding”, Journal of Information Science and Engineering 17 (5): 713-727, 2001. Available at: <https://ftp.iis.sinica.edu.tw/JISE/2001/200109_01.pdf> DOI:10.6688/JISE.2001.17.5.1

2

Tosa, Y., “Multi-Otsu Threshold”, a java plugin for ImageJ. Available at: <http://imagej.net/plugins/download/Multi_OtsuThreshold.java>

Examples

>>> import cupy as cp
>>> from cucim.skimage.color import label2rgb
>>> from cucim.skimage.filters import threshold_multiotsu
>>> from skimage import data
>>> image = cp.asarray(data.camera())
>>> thresholds = threshold_multiotsu(image)
>>> regions = cp.digitize(image, bins=thresholds)
>>> regions_colorized = label2rgb(regions)
cucim.skimage.filters.threshold_niblack(image, window_size=15, k=0.2)#

Applies Niblack local threshold to an array.

A threshold T is calculated for every pixel in the image using the following formula:

T = m(x,y) - k * s(x,y)

where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

window_sizeint, or iterable of int, optional

Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).

kfloat, optional

Value of parameter k in threshold formula.

Returns
threshold(M, N) ndarray

Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground.

Notes

This algorithm is originally designed for text recognition.

The Bradley threshold is a particular case of the Niblack one, being equivalent to

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import threshold_niblack
>>> image = cp.array(data.page())
>>> q = 1
>>> threshold_image = threshold_niblack(image, k=0) * q

for some value q. By default, Bradley and Roth use q=1.

References

1

W. Niblack, An introduction to Digital Image Processing, Prentice-Hall, 1986.

2

D. Bradley and G. Roth, “Adaptive thresholding using Integral Image”, Journal of Graphics Tools 12(2), pp. 13-21, 2007. DOI:10.1080/2151237X.2007.10129236

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import threshold_niblack
>>> image = cp.array(data.page())
>>> threshold_image = threshold_niblack(image, window_size=7, k=0.1)
cucim.skimage.filters.threshold_otsu(image=None, nbins=256, *, hist=None)#

Return threshold value based on Otsu’s method.

Either image or hist must be provided. If hist is provided, the actual histogram of the image is ignored.

Parameters
image(M, N[, …]) ndarray, optional

Grayscale input image.

nbinsint, optional

Number of bins used to calculate histogram. This value is ignored for integer arrays.

histarray, or 2-tuple of arrays, optional

Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. If no hist is provided, this function will compute it from the image.

Returns
thresholdfloat

Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.

Notes

The input image must be grayscale.

References

1

Wikipedia, https://en.wikipedia.org/wiki/Otsu%27s_method

Examples

>>> import cupy as cp
>>> from skimage.data import camera
>>> from cucim.skimage.filters import threshold_otsu
>>> image = cp.array(camera())
>>> thresh = threshold_otsu(image)
>>> binary = image <= thresh
cucim.skimage.filters.threshold_sauvola(image, window_size=15, k=0.2, r=None)#

Applies Sauvola local threshold to an array. Sauvola is a modification of the Niblack technique.

In the original method a threshold T is calculated for every pixel in the image using the following formula:

T = m(x,y) * (1 + k * ((s(x,y) / R) - 1))

where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. R is the maximum standard deviation of a grayscale image.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

window_sizeint, or iterable of int, optional

Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).

kfloat, optional

Value of the positive parameter k.

rfloat, optional

Value of R, the dynamic range of standard deviation. If None, set to half of the image dtype range.

Returns
threshold(M, N) ndarray

Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground.

Notes

This algorithm is originally designed for text recognition.

References

1

J. Sauvola and M. Pietikainen, “Adaptive document image binarization,” Pattern Recognition 33(2), pp. 225-236, 2000. DOI:10.1016/S0031-3203(99)00055-2

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import threshold_sauvola
>>> image = cp.array(data.page())
>>> t_sauvola = threshold_sauvola(image, window_size=15, k=0.2)
>>> binary_image = image > t_sauvola
cucim.skimage.filters.threshold_triangle(image, nbins=256)#

Return threshold value based on the triangle algorithm.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

nbinsint, optional

Number of bins used to calculate histogram. This value is ignored for integer arrays.

Returns
thresholdfloat

Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.

References

1

Zack, G. W., Rogers, W. E. and Latt, S. A., 1977, Automatic Measurement of Sister Chromatid Exchange Frequency, Journal of Histochemistry and Cytochemistry 25 (7), pp. 741-753 DOI:10.1177/25.7.70454

2

ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold

Examples

>>> import cupy as cp
>>> from cucim.skimage.filters import threshold_triangle
>>> from skimage.data import camera
>>> image = cp.array(camera())
>>> thresh = threshold_triangle(image)
>>> binary = image > thresh
cucim.skimage.filters.threshold_yen(image=None, nbins=256, *, hist=None)#

Return threshold value based on Yen’s method. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored.

Parameters
image(M, N[, …]) ndarray

Grayscale input image.

nbinsint, optional

Number of bins used to calculate histogram. This value is ignored for integer arrays.

histarray, or 2-tuple of arrays, optional

Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. Alternatively, only the histogram can be passed.

Returns
thresholdfloat

Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground.

References

1

Yen J.C., Chang F.J., and Chang S. (1995) “A New Criterion for Automatic Multilevel Thresholding” IEEE Trans. on Image Processing, 4(3): 370-378. DOI:10.1109/83.366472

2

Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, DOI:10.1117/1.1631315 http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf

3

ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold

Examples

>>> import cupy as cp
>>> from skimage.data import camera
>>> from cucim.skimage.filters import threshold_yen
>>> image = cp.array(camera())
>>> thresh = threshold_yen(image)
>>> binary = image <= thresh
cucim.skimage.filters.try_all_threshold(image, figsize=(8, 5), verbose=True)#

Returns a figure comparing the outputs of different thresholding methods.

Parameters
image(M, N) ndarray

Input image.

figsizetuple, optional

Figure size (in inches).

verbosebool, optional

Print function name for each method.

Returns
fig, axtuple

Matplotlib figure and axes.

Notes

The following algorithms are used:

  • isodata

  • li

  • mean

  • minimum

  • otsu

  • triangle

  • yen

Examples

>>> import cupy as cp
>>> from skimage.data import text
>>> from cucim.skimage.filters import try_all_threshold
>>> text_img = cp.array(text())
>>> fig, ax = try_all_threshold(text_img, figsize=(10, 6), verbose=False)
cucim.skimage.filters.unsharp_mask(image, radius=1.0, amount=1.0, preserve_range=False, *, channel_axis=None)#

Unsharp masking filter.

The sharp details are identified as the difference between the original image and its blurred version. These details are then scaled, and added back to the original image.

Parameters
image(M[, …][, C]) ndarray

Input image.

radiusscalar or sequence of scalars, optional

If a scalar is given, then its value is used for all dimensions. If sequence is given, then there must be exactly one radius for each dimension except the last dimension for multichannel images. Note that 0 radius means no blurring, and negative values are not allowed.

amountscalar, optional

The details will be amplified with this factor. The factor could be 0 or negative. Typically, it is a small positive number, e.g. 1.0.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
output(M[, …][, C]) ndarray of float

Image with unsharp mask applied.

Notes

Unsharp masking is an image sharpening technique. It is a linear image operation, and numerically stable, unlike deconvolution which is an ill-posed problem. Because of this stability, it is often preferred over deconvolution.

The main idea is as follows: sharp details are identified as the difference between the original image and its blurred version. These details are added back to the original image after a scaling step:

enhanced image = original + amount * (original - blurred)

When applying this filter to several color layers independently, color bleeding may occur. A more visually pleasing result can be achieved by processing only the brightness/lightness/intensity channel in a suitable color space such as HSV, HSL, YUV, or YCbCr.

Unsharp masking is described in most introductory digital image processing books. This implementation is based on [1].

References

1

Maria Petrou, Costas Petrou “Image Processing: The Fundamentals”, (2010), ed ii., page 357, ISBN 13: 9781119994398 DOI:10.1002/9781119994398

2

Wikipedia. Unsharp masking https://en.wikipedia.org/wiki/Unsharp_masking

Examples

>>> import cupy as cp
>>> from cucim.skimage.filters import unsharp_mask
>>> array = cp.ones(shape=(5,5), dtype=cp.uint8)*100
>>> array[2,2] = 120
>>> array
array([[100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100],
       [100, 100, 120, 100, 100],
       [100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100]], dtype=uint8)
>>> cp.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[0.39, 0.39, 0.39, 0.39, 0.39],
       [0.39, 0.39, 0.38, 0.39, 0.39],
       [0.39, 0.38, 0.53, 0.38, 0.39],
       [0.39, 0.39, 0.38, 0.39, 0.39],
       [0.39, 0.39, 0.39, 0.39, 0.39]])
>>> array = cp.ones(shape=(5,5), dtype=cp.int8)*100
>>> array[2,2] = 127
>>> cp.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[0.79, 0.79, 0.79, 0.79, 0.79],
       [0.79, 0.78, 0.75, 0.78, 0.79],
       [0.79, 0.75, 1.  , 0.75, 0.79],
       [0.79, 0.78, 0.75, 0.78, 0.79],
       [0.79, 0.79, 0.79, 0.79, 0.79]])
>>> cp.around(unsharp_mask(array, radius=0.5, amount=2,
...                        preserve_range=True),
...           2)
array([[100.  , 100.  ,  99.99, 100.  , 100.  ],
       [100.  ,  99.39,  95.48,  99.39, 100.  ],
       [ 99.99,  95.48, 147.59,  95.48,  99.99],
       [100.  ,  99.39,  95.48,  99.39, 100.  ],
       [100.  , 100.  ,  99.99, 100.  , 100.  ]])
cucim.skimage.filters.wiener(data, impulse_response=None, filter_params=None, K=0.25, predefined_filter=None)#

Minimum Mean Square Error (Wiener) inverse filter.

Parameters
data(M, N) ndarray

Input data.

Kfloat or (M, N) ndarray

Ratio between power spectrum of noise and undegraded image.

impulse_responsecallable f(r, c, **filter_params)

Impulse response of the filter. See LPIFilter2D.__init__.

filter_paramsdict, optional

Additional keyword parameters to the impulse_response function.

Other Parameters
predefined_filterLPIFilter2D

If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.
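
Examples

A minimal sketch (not from the upstream docstring): the Gaussian impulse response below is hypothetical, and its row/column arguments are assumed to be coordinate grids as described for LPIFilter2D.

>>> import cupy as cp
>>> from cucim.skimage.filters import wiener
>>> def gaussian_impulse(r, c):  # hypothetical impulse response f(r, c)
...     return cp.exp(-cp.hypot(r, c) / 10)
>>> image = cp.random.random((64, 64))
>>> restored = wiener(image, impulse_response=gaussian_impulse, K=0.25)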

cucim.skimage.filters.window(window_type, shape, warp_kwargs=None)#

Return an n-dimensional window of a given size and dimensionality.

Parameters
window_typestring, float, or tuple

The type of window to be created. Any window type supported by scipy.signal.get_window is allowed here. See notes below for a current list, or the SciPy documentation for the version of SciPy on your machine.

shapetuple of int or int

The shape of the window along each axis. If an integer is provided, a 1D window is generated.

warp_kwargsdict

Keyword arguments passed to skimage.transform.warp (e.g., warp_kwargs={'order':3} to change interpolation method).

Returns
nd_windowndarray

A window of the specified shape. dtype is np.float64.

Notes

This function is based on scipy.signal.get_window and thus can access all of the window types available to that function (e.g., "hann", "boxcar"). Note that certain window types require parameters that have to be supplied with the window name as a tuple (e.g., ("tukey", 0.8)). If only a float is supplied, it is interpreted as the beta parameter of the Kaiser window.

See https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.get_window.html for more details.

Note that this function generates a double precision array of the specified shape and can thus generate very large arrays that consume a large amount of available memory.

The approach taken here to create nD windows is to first calculate the Euclidean distance from the center of the intended nD window to each position in the array. That distance is used to sample, with interpolation, from a 1D window returned from scipy.signal.get_window. The method of interpolation can be changed with the order keyword argument passed to skimage.transform.warp.

Some coordinates in the output window will be outside of the original signal; these will be filled in with zeros.

Window types:

  • boxcar

  • triang

  • blackman

  • hamming

  • hann

  • bartlett

  • flattop

  • parzen

  • bohman

  • blackmanharris

  • nuttall

  • barthann

  • kaiser (needs beta)

  • gaussian (needs standard deviation)

  • general_gaussian (needs power, width)

  • slepian (needs width)

  • dpss (needs normalized half-bandwidth)

  • chebwin (needs attenuation)

  • exponential (needs decay scale)

  • tukey (needs taper fraction)

References

1

Two-dimensional window design, Wikipedia, https://en.wikipedia.org/wiki/Two_dimensional_window_design

Examples

Return a Hann window with shape (512, 512):

>>> from cucim.skimage.filters import window
>>> w = window('hann', (512, 512))

Return a Kaiser window with beta parameter of 16 and shape (256, 256, 35):

>>> w = window(16, (256, 256, 35))

Return a Tukey window with an alpha parameter of 0.8 and shape (100, 300):

>>> w = window(('tukey', 0.8), (100, 300))

measure#

cucim.skimage.measure.approximate_polygon(coords, tolerance)#

Approximate a polygonal chain with the specified tolerance.

It is based on the Douglas-Peucker algorithm.

Note that the approximated polygon is always within the convex hull of the original polygon.

Parameters
coords(K, 2) array

Coordinate array.

tolerancefloat

Maximum distance from original points of polygon to approximated polygonal chain. If tolerance is 0, the original coordinate array is returned.

Returns
coords(L, 2) array

Approximated polygonal chain, where L <= K.

References

1

https://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
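
Examples

A minimal sketch (not from the upstream docstring; coordinates and tolerance are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.measure import approximate_polygon
>>> coords = cp.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.0], [1.0, 1.0]])
>>> simplified = approximate_polygon(coords, tolerance=0.2)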

cucim.skimage.measure.block_reduce(image, block_size=2, func=<function sum>, cval=0, func_kwargs=None)#

Downsample image by applying function func to local blocks.

This function is useful for max and mean pooling, for example.

Parameters
image(M[, …]) ndarray

N-dimensional input image.

block_sizearray_like or int

Array containing down-sampling integer factor along each axis. Default block_size is 2.

funccallable

Function object which is used to calculate the return value for each local block. This function must implement an axis parameter. Primary functions are numpy.sum, numpy.min, numpy.max, numpy.mean and numpy.median. See also func_kwargs.

cvalfloat

Constant padding value if image is not perfectly divisible by the block size.

func_kwargsdict

Keyword arguments passed to func. Notably useful for passing dtype argument to np.mean. Takes a dictionary of inputs, e.g. func_kwargs={'dtype': np.float16}.

Returns
imagendarray

Down-sampled image with same number of dimensions as input image.

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import block_reduce
>>> image = cp.arange(3*3*4).reshape(3, 3, 4)
>>> image 
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]],
       [[24, 25, 26, 27],
        [28, 29, 30, 31],
        [32, 33, 34, 35]]])
>>> block_reduce(image, block_size=(3, 3, 1), func=cp.mean)
array([[[16., 17., 18., 19.]]])
>>> image_max1 = block_reduce(image, block_size=(1, 3, 4), func=cp.max)
>>> image_max1 
array([[[11]],
       [[23]],
       [[35]]])
>>> image_max2 = block_reduce(image, block_size=(3, 1, 4), func=cp.max)
>>> image_max2 
array([[[27],
        [31],
        [35]]])
cucim.skimage.measure.blur_effect(image, h_size=11, channel_axis=None, reduce_func=<built-in function max>)#

Compute a metric that indicates the strength of blur in an image (0 for no blur, 1 for maximal blur).

Parameters
imagendarray

RGB or grayscale nD image. The input image is converted to grayscale before computing the blur metric.

h_sizeint, optional

Size of the re-blurring filter.

channel_axisint or None, optional

If None, the image is assumed to be grayscale (single-channel). Otherwise, this parameter indicates which axis of the array corresponds to color channels.

reduce_funccallable, optional

Function used to calculate the aggregation of blur metrics along all axes. If set to None, the entire list is returned, where the i-th element is the blur metric along the i-th axis. This function should be a host function that operates on standard Python floats.

Returns
blurfloat (0 to 1) or list of floats

Blur metric: by default, the maximum of blur metrics along all axes.

Notes

h_size must keep the same value in order to compare results between images. Most of the time, the default size (11) is enough. This means that the metric can clearly discriminate blur up to an average 11x11 filter; if blur is higher, the metric still gives good results but its values tend towards an asymptote.

References

1

Frederique Crete, Thierry Dolmiere, Patricia Ladret, and Marina Nicolas “The blur effect: perception and estimation with a new no-reference perceptual blur metric” Proc. SPIE 6492, Human Vision and Electronic Imaging XII, 64920I (2007) https://hal.archives-ouvertes.fr/hal-00232709 DOI:10.1117/12.702790
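
Examples

A minimal sketch (not from the upstream docstring; assumes the scikit-image data module and cucim.skimage.filters.gaussian, with a re-blurred copy expected to score higher):

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.filters import gaussian
>>> from cucim.skimage.measure import blur_effect
>>> image = cp.array(data.camera())
>>> b_sharp = blur_effect(image)
>>> b_blurred = blur_effect(gaussian(image, sigma=3))  # expected: b_blurred > b_sharp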

cucim.skimage.measure.centroid(image, *, spacing=None)#

Return the (weighted) centroid of an image.

Parameters
imagearray

The input image.

spacing: tuple of float, shape (ndim,)

The pixel spacing along each axis of the image.

Returns
centertuple of float, length image.ndim

The centroid of the (nonzero) pixels in image.

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import centroid
>>> image = cp.zeros((20, 20), dtype=cp.float64)
>>> image[13:17, 13:17] = 0.5
>>> image[10:12, 10:12] = 1
>>> centroid(image)
array([13.16666667, 13.16666667])
cucim.skimage.measure.euler_number(image, connectivity=None)#

Calculate the Euler characteristic in a binary image.

For 2D objects, the Euler number is the number of objects minus the number of holes. For 3D objects, the Euler number is obtained as the number of objects plus the number of holes minus the number of tunnels (loops).

Parameters
image: (M, N[, P]) ndarray

Input image. If image is not binary, all values greater than zero are considered as the object.

connectivityint, optional

Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values range from 1 to input.ndim. If None, a full connectivity of input.ndim is used. 4 or 8 neighborhoods are defined for 2D images (connectivity 1 and 2, respectively). 6 or 26 neighborhoods are defined for 3D images (connectivity 1 and 3, respectively); connectivity 2 is not defined for 3D images.

Returns
euler_numberint

Euler characteristic of the set of all objects in the image.

Notes

The Euler characteristic is an integer number that describes the topology of the set of all objects in the input image. If object is 4-connected, then background is 8-connected, and conversely.

The computation of the Euler characteristic is based on an integral geometry formula in discretized space. In practice, a neighborhood configuration is constructed, and a LUT is applied for each configuration. The coefficients used are the ones of Ohser et al.

It can be useful to compute the Euler characteristic for several connectivities. A large relative difference between results for different connectivities suggests that the image resolution (with respect to the size of objects and holes) is too low.

References

1

S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838

2

Ohser J., Nagel W., Schladitz K. (2002) The Euler Number of Discretized Sets - On the Choice of Adjacency in Homogeneous Lattices. In: Mecke K., Stoyan D. (eds) Morphology of Condensed Matter. Lecture Notes in Physics, vol 600. Springer, Berlin, Heidelberg.

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import euler_number
>>> SAMPLE = cp.zeros((100,100,100))
>>> SAMPLE[40:60, 40:60, 40:60] = 1
>>> euler_number(SAMPLE)  # doctest: +ELLIPSIS
1...
>>> SAMPLE[45:55, 45:55, 45:55] = 0
>>> euler_number(SAMPLE)  # doctest: +ELLIPSIS
2...
>>> SAMPLE = cp.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
...                    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
...                    [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0],
...                    [0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1],
...                    [0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])
>>> euler_number(SAMPLE)
array(0)
>>> euler_number(SAMPLE, connectivity=1)
array(2)
cucim.skimage.measure.inertia_tensor(image, mu=None, *, spacing=None)#

Compute the inertia tensor of the input image.

Parameters
imagearray

The input image.

muarray, optional

The pre-computed central moments of image. The inertia tensor computation requires the central moments of the image. If an application requires both the central moments and the inertia tensor (for example, skimage.measure.regionprops), then it is more efficient to pre-compute them and pass them to the inertia tensor call.

spacingtuple of float, shape (ndim,), optional

The pixel spacing along each axis of the image.

Returns
Tarray, shape (image.ndim, image.ndim)

The inertia tensor of the input image. \(T_{i, j}\) contains the covariance of image intensity along axes \(i\) and \(j\).

References

1

https://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_tensor

2

Bernd Jähne. Spatio-Temporal Image Processing: Theory and Scientific Applications. (Chapter 8: Tensor Methods) Springer, 1993.
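
Examples

A minimal sketch (not from the upstream docstring) with an elongated bright bar; shapes and values are illustrative:

>>> import cupy as cp
>>> from cucim.skimage.measure import inertia_tensor
>>> image = cp.zeros((20, 20))
>>> image[8:12, 2:18] = 1  # bar elongated along the column axis
>>> T = inertia_tensor(image)  # (2, 2) tensor describing the intensity distribution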

cucim.skimage.measure.inertia_tensor_eigvals(image, mu=None, T=None, *, spacing=None)#

Compute the eigenvalues of the inertia tensor of the image.

The inertia tensor measures covariance of the image intensity along the image axes. (See inertia_tensor.) The relative magnitude of the eigenvalues of the tensor is thus a measure of the elongation of a (bright) object in the image.

Parameters
imagearray

The input image.

muarray, optional

The pre-computed central moments of image.

Tarray, shape (image.ndim, image.ndim)

The pre-computed inertia tensor. If T is given, mu and image are ignored.

spacingtuple of float, shape (ndim,), optional

The pixel spacing along each axis of the image.

Returns
eigvalslist of float, length image.ndim

The eigenvalues of the inertia tensor of image, in descending order.

Notes

Computing the eigenvalues requires the inertia tensor of the input image. This is much faster if the central moments (mu) are provided, or, alternatively, one can provide the inertia tensor (T) directly.
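
Examples

A sketch (not from the upstream docstring) of the reuse pattern described above, precomputing the tensor once:

>>> import cupy as cp
>>> from cucim.skimage.measure import (inertia_tensor,
...                                    inertia_tensor_eigvals)
>>> image = cp.zeros((20, 20))
>>> image[8:12, 2:18] = 1
>>> T = inertia_tensor(image)
>>> evals = inertia_tensor_eigvals(image, T=T)  # image and mu are ignored when T is given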

cucim.skimage.measure.intersection_coeff(image0_mask, image1_mask, mask=None)#

Fraction of a channel’s segmented binary mask that overlaps with a second channel’s segmented binary mask.

Parameters
image0_mask(M, N) ndarray of dtype bool

Image mask of channel A.

image1_mask(M, N) ndarray of dtype bool

Image mask of channel B. Must have same dimensions as image0_mask.

mask(M, N) ndarray of dtype bool, optional

Only image0_mask and image1_mask pixels within this region of interest mask are included in the calculation. Must have same dimensions as image0_mask.

Returns
Intersection coefficient, float

Fraction of image0_mask that overlaps with image1_mask.
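
Examples

A minimal sketch (not from the upstream docstring) with two overlapping square masks; shapes and values are illustrative:

>>> import cupy as cp
>>> from cucim.skimage.measure import intersection_coeff
>>> mask_a = cp.zeros((10, 10), dtype=bool)
>>> mask_a[2:8, 2:8] = True
>>> mask_b = cp.zeros((10, 10), dtype=bool)
>>> mask_b[5:10, 5:10] = True
>>> frac_a_in_b = intersection_coeff(mask_a, mask_b)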

cucim.skimage.measure.label(label_image, background=None, return_num=False, connectivity=None)#

Label connected regions of an integer array.

Two pixels are connected when they are neighbors and have the same value. In 2D, they can be neighbors either in a 1- or 2-connected sense. The value refers to the maximum number of orthogonal hops to consider a pixel/voxel a neighbor:

1-connectivity     2-connectivity     diagonal connection close-up

     [ ]           [ ]  [ ]  [ ]             [ ]
      |               \  |  /                 |  <- hop 2
[ ]--[x]--[ ]      [ ]--[x]--[ ]        [x]--[ ]
      |               /  |  \             hop 1
     [ ]           [ ]  [ ]  [ ]
Parameters
label_imagendarray of dtype int

Image to label.

backgroundint, optional

Consider all pixels with this value as background pixels, and label them as 0. By default, 0-valued pixels are considered as background pixels.

return_numbool, optional

Whether to return the number of assigned labels.

connectivityint, optional

Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values range from 1 to input.ndim. If None, a full connectivity of input.ndim is used.

Returns
labelsndarray of dtype int

Labeled array, where all connected regions are assigned the same integer value.

numint, optional

Number of labels, which equals the maximum label index and is only returned if return_num is True.

Notes

Currently the cucim implementation of this function always uses 32-bit integers for the label array. This is done for performance. In the future 64-bit integer support may also be added for better skimage compatibility.

References

1

Christophe Fiorio and Jens Gustedt, “Two linear time Union-Find strategies for image processing”, Theoretical Computer Science 154 (1996), pp. 165-181.

2

Kensheng Wu, Ekow Otoo and Arie Shoshani, “Optimizing connected component labeling algorithms”, Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import label
>>> x = cp.eye(3).astype(int)
>>> print(x)
[[1 0 0]
 [0 1 0]
 [0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
 [0 2 0]
 [0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
 [0 1 0]
 [0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
 [2 1 2]
 [2 2 1]]
>>> x = cp.asarray([[1, 0, 0],
...                 [1, 1, 5],
...                 [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
 [1 1 2]
 [0 0 0]]
cucim.skimage.measure.manders_coloc_coeff(image0, image1_mask, mask=None)#

Manders’ colocalization coefficient between two channels.

Parameters
image0(M, N) ndarray

Image of channel A. All pixel values should be non-negative.

image1_mask(M, N) ndarray of dtype bool

Binary mask with segmented regions of interest in channel B. Must have same dimensions as image0.

mask(M, N) ndarray of dtype bool, optional

Only image0 pixel values within this region of interest mask are included in the calculation. Must have same dimensions as image0.

Returns
mccfloat

Manders’ colocalization coefficient.

Notes

Manders’ Colocalization Coefficient (MCC) is the fraction of total intensity of a certain channel (channel A) that is within the segmented region of a second channel (channel B) [1]. It ranges from 0 for no colocalization to 1 for complete colocalization. It is also referred to as M1 and M2.

MCC is commonly used to measure the colocalization of a particular protein in a subcellular compartment. Typically, a segmentation mask for channel B is generated by setting a threshold that the pixel values must be above to be included in the MCC calculation. In this implementation, the channel B mask is provided as the argument image1_mask, allowing the exact segmentation method to be decided by the user beforehand.

The implemented equation is:

\[r = \frac{\sum A_{i,coloc}}{\sum A_i}\]
where \(A_i\) is the value of the \(i^{th}\) pixel in image0, \(A_{i,coloc} = A_i\) if \(Bmask_i > 0\) (0 otherwise), and \(Bmask_i\) is the value of the \(i^{th}\) pixel in image1_mask.

MCC is sensitive to noise, with diffuse signal in the first channel inflating its value. Images should be processed to remove out of focus and background light before the MCC is calculated [2].

References

1

Manders, E.M.M., Verbeek, F.J. and Aten, J.A. (1993), Measurement of co-localization of objects in dual-colour confocal images. Journal of Microscopy, 169: 375-382. https://doi.org/10.1111/j.1365-2818.1993.tb03313.x https://imagej.net/media/manders.pdf

2

Dunn, K. W., Kamocka, M. M., & McDonald, J. H. (2011). A practical guide to evaluating colocalization in biological microscopy. American journal of physiology. Cell physiology, 300(4), C723–C742. https://doi.org/10.1152/ajpcell.00462.2010
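
Examples

A minimal sketch (not from the upstream docstring; random channel A intensities and a hypothetical channel B mask):

>>> import cupy as cp
>>> from cucim.skimage.measure import manders_coloc_coeff
>>> image0 = cp.random.random((64, 64))  # channel A; non-negative values
>>> image1_mask = cp.zeros((64, 64), dtype=bool)
>>> image1_mask[16:48, 16:48] = True  # segmented channel B region
>>> mcc = manders_coloc_coeff(image0, image1_mask)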

cucim.skimage.measure.manders_overlap_coeff(image0, image1, mask=None)#

Manders’ overlap coefficient.

Parameters
image0(M, N) ndarray

Image of channel A. All pixel values should be non-negative.

image1(M, N) ndarray

Image of channel B. All pixel values should be non-negative. Must have same dimensions as image0.

mask(M, N) ndarray of dtype bool, optional

Only image0 and image1 pixel values within this region of interest mask are included in the calculation. Must have same dimensions as image0.

Returns
moc: float

Manders’ Overlap Coefficient of pixel intensities between the two images.

Notes

Manders’ Overlap Coefficient (MOC) is given by the equation [1]:

\[r = \frac{\sum A_i B_i}{\sqrt{\sum A_i^2 \sum B_i^2}}\]
where \(A_i\) is the value of the \(i^{th}\) pixel in image0 and \(B_i\) is the value of the \(i^{th}\) pixel in image1.

It ranges between 0 for no colocalization and 1 for complete colocalization of all pixels.

MOC does not take into account pixel intensities, just the fraction of pixels that have positive values for both channels [2] [3]. Its usefulness has been criticized as it changes in response to differences in both co-occurrence and correlation, and so a particular MOC value could indicate a wide range of colocalization patterns [4] [5].

References

1

Manders, E.M.M., Verbeek, F.J. and Aten, J.A. (1993), Measurement of co-localization of objects in dual-colour confocal images. Journal of Microscopy, 169: 375-382. https://doi.org/10.1111/j.1365-2818.1993.tb03313.x https://imagej.net/media/manders.pdf

2

Dunn, K. W., Kamocka, M. M., & McDonald, J. H. (2011). A practical guide to evaluating colocalization in biological microscopy. American journal of physiology. Cell physiology, 300(4), C723–C742. https://doi.org/10.1152/ajpcell.00462.2010

3

Bolte, S. and Cordelières, F.P. (2006), A guided tour into subcellular colocalization analysis in light microscopy. Journal of Microscopy, 224: 213-232. https://doi.org/10.1111/j.1365-2818.2006.01

4

Adler J, Parmryd I. (2010), Quantifying colocalization by correlation: the Pearson correlation coefficient is superior to the Mander’s overlap coefficient. Cytometry A. Aug;77(8):733-42. https://doi.org/10.1002/cyto.a.20896

5

Adler, J, Parmryd, I. Quantifying colocalization: The case for discarding the Manders overlap coefficient. Cytometry. 2021; 99: 910– 920. https://doi.org/10.1002/cyto.a.24336
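
Examples

A minimal sketch (not from the upstream docstring; random non-negative intensities, values illustrative):

>>> import cupy as cp
>>> from cucim.skimage.measure import manders_overlap_coeff
>>> image0 = cp.random.random((64, 64))
>>> image1 = cp.random.random((64, 64))
>>> moc = manders_overlap_coeff(image0, image1)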

cucim.skimage.measure.moments(image, order=3, *, spacing=None)#

Calculate all raw image moments up to a certain order.

The following properties can be calculated from raw image moments:
  • Area as: M[0, 0].

  • Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}.

Note that raw moments are neither translation, scale nor rotation invariant.

Parameters
image(N[, …]) double or uint8 array

Rasterized shape as image.

orderint, optional

Maximum order of moments. Default is 3.

spacing: tuple of float, shape (ndim,)

The pixel spacing along each axis of the image.

Returns
m(order + 1, order + 1) array

Raw image moments.

References

1

Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.

2

B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.

3

T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.

4

https://en.wikipedia.org/wiki/Image_moment

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import moments
>>> image = cp.zeros((20, 20), dtype=cp.float64)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(array(14.5), array(14.5))
cucim.skimage.measure.moments_central(image, center=None, order=3, *, spacing=None, **kwargs)#

Calculate all central image moments up to a certain order.

The center coordinates (cr, cc) can be calculated from the raw moments as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}.

Note that central moments are translation invariant but not scale and rotation invariant.

Parameters
image(N[, …]) double or uint8 array

Rasterized shape as image.

centertuple of float, optional

Coordinates of the image centroid. This will be computed if it is not provided.

orderint, optional

The maximum order of moments computed.

spacing: tuple of float, shape (ndim,)

The pixel spacing along each axis of the image.

Returns
mu(order + 1, order + 1) array

Central image moments.

References

1

Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.

2

B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.

3

T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.

4

https://en.wikipedia.org/wiki/Image_moment

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import moments, moments_central
>>> image = cp.zeros((20, 20), dtype=cp.float64)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> moments_central(image, centroid)
array([[16.,  0., 20.,  0.],
       [ 0.,  0.,  0.,  0.],
       [20.,  0., 25.,  0.],
       [ 0.,  0.,  0.,  0.]])
cucim.skimage.measure.moments_coords(coords, order=3)#

Calculate all raw image moments up to a certain order.

The following properties can be calculated from raw image moments:
  • Area as: M[0, 0].

  • Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}.

Note that raw moments are neither translation, scale, nor rotation invariant.

Parameters
coords(N, D) double or uint8 array

Array of N points that describe an image of D dimensionality in Cartesian space.

orderint, optional

Maximum order of moments. Default is 3.

Returns
M(order + 1, order + 1, …) array

Raw image moments. (D dimensions)

References

1

Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001.

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import moments_coords
>>> coords = cp.array([[row, col]
...                    for row in range(13, 17)
...                    for col in range(14, 18)], dtype=cp.float64)
>>> M = moments_coords(coords)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(array(14.5), array(15.5))
cucim.skimage.measure.moments_coords_central(coords, center=None, order=3)#

Calculate all central image moments up to a certain order.

The following properties can be calculated from raw image moments:
  • Area as: M[0, 0].

  • Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}.

Note that raw moments are neither translation, scale nor rotation invariant.

Parameters
coords(N, D) double or uint8 array

Array of N points that describe an image of D dimensionality in Cartesian space. A tuple of coordinates as returned by cp.nonzero is also accepted as input.

centertuple of float, optional

Coordinates of the image centroid. This will be computed if it is not provided.

orderint, optional

Maximum order of moments. Default is 3.

Returns
Mc(order + 1, order + 1, …) array

Central image moments. (D dimensions)

References

1

Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001.

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import moments_coords_central
>>> coords = cp.array([[row, col]
...                    for row in range(13, 17)
...                    for col in range(14, 18)])
>>> moments_coords_central(coords)
array([[16.,  0., 20.,  0.],
       [ 0.,  0.,  0.,  0.],
       [20.,  0., 25.,  0.],
       [ 0.,  0.,  0.,  0.]])

As seen above, for symmetric objects, odd-order moments (columns 1 and 3, rows 1 and 3) are zero when centered on the centroid, or center of mass, of the object (the default). If we break the symmetry by adding a new point, this no longer holds:

>>> coords2 = cp.concatenate((coords, cp.array([[17, 17]])), axis=0)
>>> cp.around(moments_coords_central(coords2),
...           decimals=2)  
array([[17.  ,  0.  , 22.12, -2.49],
       [ 0.  ,  3.53,  1.73,  7.4 ],
       [25.88,  6.02, 36.63,  8.83],
       [ 4.15, 19.17, 14.8 , 39.6 ]])

Image moments and central image moments are equivalent (by definition) when the center is (0, 0):

>>> cp.allclose(moments_coords(coords),
...             moments_coords_central(coords, (0, 0)))
array(True)
cucim.skimage.measure.moments_hu(nu)#

Calculate Hu’s set of image moments (2D-only).

Note that this set of moments is proved to be translation, scale and rotation invariant.

Parameters
nu(M, M) array

Normalized central image moments, where M must be >= 4.

Returns
nu(7,) array

Hu’s set of image moments.

Notes

Due to the small array sizes, this function will be faster on the CPU. Consider transferring nu to the host and running skimage.measure.moments_hu if the moments are not needed on the device.

References

1

M. K. Hu, “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Info. Theory, vol. IT-8, pp. 179-187, 1962

2

Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.

3

B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.

4

T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.

5

https://en.wikipedia.org/wiki/Image_moment

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import (moments_central, moments_hu,
...                                      moments_normalized)
>>> image = cp.zeros((20, 20), dtype=cp.float64)
>>> image[13:17, 13:17] = 0.5
>>> image[10:12, 10:12] = 1
>>> mu = moments_central(image)
>>> nu = moments_normalized(mu)
>>> moments_hu(nu)
array([7.45370370e-01, 3.51165981e-01, 1.04049179e-01, 4.06442107e-02,
       2.64312299e-03, 2.40854582e-02, 6.50521303e-19])
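
As the Notes above suggest, these small arrays can instead be processed on the host; a minimal sketch of that fallback, assuming scikit-image is installed (it reuses nu from the example above):

>>> import skimage.measure
>>> nu_cpu = cp.asnumpy(nu)  # transfer the small (4, 4) moments array to the host
>>> hu_cpu = skimage.measure.moments_hu(nu_cpu)  # computed on the CPU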
cucim.skimage.measure.moments_normalized(mu, order=3, spacing=None)#

Calculate all normalized central image moments up to a certain order.

Note that normalized central moments are translation and scale invariant but not rotation invariant.

Parameters
mu(M[, …], M) array

Central image moments, where M must be greater than or equal to order.

orderint, optional

Maximum order of moments. Default is 3.

spacing: tuple of float, shape (ndim,)

The pixel spacing along each axis of the image.

Returns
nu(order + 1[, …], order + 1) array

Normalized central image moments.

Notes

Differs from the scikit-image implementation in that any moments greater than the requested order will be set to nan.

References

1

Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.

2

B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.

3

T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.

4

https://en.wikipedia.org/wiki/Image_moment

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import (moments, moments_central,
...                                      moments_normalized)
>>> image = cp.zeros((20, 20), dtype=cp.float64)
>>> image[13:17, 13:17] = 1
>>> m = moments(image)
>>> centroid = (m[0, 1] / m[0, 0], m[1, 0] / m[0, 0])
>>> mu = moments_central(image, centroid)
>>> moments_normalized(mu)
array([[       nan,        nan, 0.078125  , 0.        ],
       [       nan, 0.        , 0.        , 0.        ],
       [0.078125  , 0.        , 0.00610352, 0.        ],
       [0.        , 0.        , 0.        , 0.        ]])
cucim.skimage.measure.pearson_corr_coeff(image0, image1, mask=None)#

Calculate Pearson’s Correlation Coefficient between pixel intensities in channels.

Parameters
image0(M, N) ndarray

Image of channel A.

image1(M, N) ndarray

Image of channel B, to be correlated with channel A. Must have same dimensions as image0.

mask(M, N) ndarray of dtype bool, optional

Only image0 and image1 pixels within this region of interest mask are included in the calculation. Must have same dimensions as image0.

Returns
pccfloat

Pearson’s correlation coefficient of the pixel intensities between the two images, within the mask if provided.

p-valuefloat

Two-tailed p-value.

Notes

Pearson’s Correlation Coefficient (PCC) measures the linear correlation between the pixel intensities of the two images. Its value ranges from -1 for perfect linear anti-correlation to +1 for perfect linear correlation. The calculation of the p-value assumes that the intensities of pixels in each input image are normally distributed.

Scipy’s implementation of Pearson’s correlation coefficient is used. Please refer to it for further information and caveats [1].

\[r = \frac{\sum (A_i - m_A) (B_i - m_B)} {\sqrt{\sum (A_i - m_A)^2 \sum (B_i - m_B)^2}}\]
where

\(A_i\) is the value of the \(i^{th}\) pixel in image0, \(B_i\) is the value of the \(i^{th}\) pixel in image1, \(m_A\) is the mean of the pixel values in image0, and \(m_B\) is the mean of the pixel values in image1.

A low PCC value does not necessarily mean that there is no correlation between the two channel intensities, just that there is no linear correlation. You may wish to plot the pixel intensities of each of the two channels in a 2D scatterplot and use Spearman’s rank correlation if a non-linear correlation is visually identified [2]. Also consider if you are interested in correlation or co-occurrence, in which case a method involving segmentation masks (e.g. MCC or intersection coefficient) may be more suitable [3] [4].

Providing the mask of only relevant sections of the image (e.g., cells, or particular cellular compartments) and removing noise is important as the PCC is sensitive to these measures [3] [4].

References

1

https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html

2

https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html

3(1,2)

Dunn, K. W., Kamocka, M. M., & McDonald, J. H. (2011). A practical guide to evaluating colocalization in biological microscopy. American journal of physiology. Cell physiology, 300(4), C723–C742. https://doi.org/10.1152/ajpcell.00462.2010

4(1,2)

Bolte, S. and Cordelières, F.P. (2006), A guided tour into subcellular colocalization analysis in light microscopy. Journal of Microscopy, 224: 213-232. https://doi.org/10.1111/j.1365-2818.2006.01706.x
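
Examples

A minimal sketch on synthetic data (the arrays here are illustrative, not from a real acquisition):

>>> import cupy as cp
>>> from cucim.skimage.measure import pearson_corr_coeff
>>> image0 = cp.random.random((64, 64))
>>> image1 = 0.5 * image0 + 0.1  # a perfectly linear relation, so pcc is ~1.0
>>> pcc, pval = pearson_corr_coeff(image0, image1)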

cucim.skimage.measure.perimeter(image, neighborhood=4)#

Calculate total perimeter of all objects in binary image.

Parameters
image(M, N) ndarray

Binary input image.

neighborhood4 or 8, optional

Neighborhood connectivity for border pixel determination. It is used to compute the contour. A higher neighborhood widens the border on which the perimeter is computed.

Returns
perimeterfloat

Total perimeter of all objects in binary image.

References

1

K. Benkrid, D. Crookes. Design and FPGA Implementation of a Perimeter Estimator. The Queen’s University of Belfast. http://www.cs.qub.ac.uk/~d.crookes/webpubs/papers/perimeter.doc

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage import util
>>> from cucim.skimage.measure import label, perimeter
>>> # coins image (binary)
>>> img_coins = cp.array(data.coins() > 110)
>>> # total perimeter of all objects in the image
>>> perimeter(img_coins, neighborhood=4)  
array(7796.86799644)
>>> perimeter(img_coins, neighborhood=8)  
array(8806.26807333)
cucim.skimage.measure.perimeter_crofton(image, directions=4)#

Calculate total Crofton perimeter of all objects in binary image.

Parameters
image(M, N) ndarray

Input image. If image is not binary, all values greater than zero are considered as the object.

directions2 or 4, optional

Number of directions used to approximate the Crofton perimeter. By default, 4 is used: it should be more accurate than 2. Computation time is the same in both cases.

Returns
perimeterfloat

Total perimeter of all objects in binary image.

Notes

This measure is based on Crofton formula [1], which is a measure from integral geometry. It is defined for general curve length evaluation via a double integral along all directions. In a discrete space, 2 or 4 directions give a quite good approximation, 4 being more accurate than 2 for more complex shapes.

Similar to perimeter(), this function returns an approximation of the perimeter in continuous space.

References

1

https://en.wikipedia.org/wiki/Crofton_formula

2

S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838

Examples

>>> import cupy as cp
>>> from cucim.skimage import util
>>> from skimage import data
>>> from cucim.skimage.measure import label, perimeter_crofton
>>> # coins image (binary)
>>> img_coins = cp.array(data.coins() > 110)
>>> # total perimeter of all objects in the image
>>> perimeter_crofton(img_coins, directions=2)  
array(8144.57895443)
>>> perimeter_crofton(img_coins, directions=4)  
array(7837.07740694)
cucim.skimage.measure.profile_line(image, src, dst, linewidth=1, order=None, mode='reflect', cval=0.0, *, reduce_func=<function mean>)#

Return the intensity profile of an image measured along a scan line.

Parameters
imagendarray, shape (M, N[, C])

The image, either grayscale (2D array) or multichannel (3D array, where the final axis contains the channel information).

srcarray_like, shape (2,)

The coordinates of the start point of the scan line.

dstarray_like, shape (2,)

The coordinates of the end point of the scan line. The destination point is included in the profile, in contrast to standard numpy indexing.

linewidthint, optional

Width of the scan, perpendicular to the line.

orderint in {0, 1, 2, 3, 4, 5}, optional

The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.

mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional

How to compute any values falling outside of the image.

cvalfloat, optional

If mode is ‘constant’, what constant value to use outside the image.

reduce_funccallable, optional

Function used to calculate the aggregation of pixel values perpendicular to the profile_line direction when linewidth > 1. If set to None the unreduced array will be returned.

Returns
return_valuearray

The intensity profile along the scan line. The length of the profile is the ceil of the computed length of the scan line.

Examples

>>> import cupy as cp
>>> from cucim.skimage.measure import profile_line
>>> x = cp.asarray([[1, 1, 1, 2, 2, 2]])
>>> img = cp.vstack([cp.zeros_like(x), x, x, x, cp.zeros_like(x)])
>>> img
array([[0, 0, 0, 0, 0, 0],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [0, 0, 0, 0, 0, 0]])
>>> profile_line(img, (2, 1), (2, 4))
array([1., 1., 2., 2.])
>>> profile_line(img, (1, 0), (1, 6), cval=4)
array([1., 1., 1., 2., 2., 2., 2.])

The destination point is included in the profile, in contrast to standard numpy indexing. For example:

>>> profile_line(img, (1, 0), (1, 6))  # The final point is out of bounds
array([1., 1., 1., 2., 2., 2., 2.])
>>> profile_line(img, (1, 0), (1, 5))  # This accesses the full first row
array([1., 1., 1., 2., 2., 2.])

For different reduce_func inputs:

>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=cp.mean)
array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=cp.max)
array([1, 1, 1, 2])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=cp.sum)
array([2, 2, 2, 4])

The unreduced array will be returned when reduce_func is None or when reduce_func acts on each pixel value individually.

>>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
...     reduce_func=None)
array([[1, 1, 2],
       [1, 1, 2],
       [1, 1, 2],
       [0, 0, 0]])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=cp.sqrt)
array([[1.        , 1.        , 0.        ],
       [1.        , 1.        , 0.        ],
       [1.        , 1.        , 0.        ],
       [1.41421356, 1.41421356, 0.        ]])
cucim.skimage.measure.regionprops(label_image, intensity_image=None, cache=True, *, extra_properties=None, spacing=None)#

Measure properties of labeled image regions.

Parameters
label_image(M, N[, P]) ndarray

Labeled input image. Labels with value 0 are ignored.

Changed in version 0.14.1: Previously, label_image was processed by numpy.squeeze and so any number of singleton dimensions was allowed. This resulted in inconsistent handling of images with singleton dimensions. To recover the old behaviour, use regionprops(np.squeeze(label_image), ...).

intensity_image(M, N[, P][, C]) ndarray, optional

Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Currently, this extra channel dimension, if present, must be the last axis. Default is None.

Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.

cachebool, optional

Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.

extra_propertiesIterable of callables

Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument.

spacing: tuple of float, shape (ndim,)

The pixel spacing along each axis of the image.

Returns
propertieslist of RegionProperties

Each item describes one labeled region, and can be accessed using the attributes listed below.

See also

label

Notes

The following properties can be accessed as attributes or keys:

num_pixelsint

Number of foreground pixels.

areafloat

Area of the region i.e. number of pixels of the region scaled by pixel-area.

area_bboxfloat

Area of the bounding box i.e. number of pixels of bounding box scaled by pixel-area.

area_convexfloat

Area of the convex hull image, which is the smallest convex polygon that encloses the region.

area_filledfloat

Area of the region with all the holes filled in.

axis_major_lengthfloat

The length of the major axis of the ellipse that has the same normalized second central moments as the region.

axis_minor_lengthfloat

The length of the minor axis of the ellipse that has the same normalized second central moments as the region.

bboxtuple

Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).

centroidarray

Centroid coordinate tuple (row, col).

centroid_localarray

Centroid coordinate tuple (row, col), relative to region bounding box.

centroid_weightedarray

Centroid coordinate tuple (row, col) weighted with intensity image.

centroid_weighted_localarray

Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.

coords_scaled(K, 2) ndarray

Coordinate list (row, col) of the region scaled by spacing.

coords(K, 2) ndarray

Coordinate list (row, col) of the region.

eccentricityfloat

Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.

equivalent_diameter_areafloat

The diameter of a circle with the same area as the region.

euler_numberint

Euler characteristic of the set of non-zero pixels. Computed as the number of connected components minus the number of holes (using input.ndim connectivity). In 3D, it is the number of connected components plus the number of holes minus the number of tunnels.

extentfloat

Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (rows * cols)

feret_diameter_maxfloat

Maximum Feret’s diameter computed as the longest distance between points around a region’s convex hull contour as determined by find_contours. [5]

image(H, J) ndarray

Sliced binary region image which has the same size as bounding box.

image_convex(H, J) ndarray

Binary convex hull image which has the same size as bounding box.

image_filled(H, J) ndarray

Binary region image with filled holes which has the same size as bounding box.

image_intensityndarray

Image inside region bounding box.

inertia_tensorndarray

Inertia tensor of the region for rotation around its center of mass.

inertia_tensor_eigvalstuple

The eigenvalues of the inertia tensor in decreasing order.

intensity_maxfloat

Value with the greatest intensity in the region.

intensity_meanfloat

Mean intensity value in the region.

intensity_minfloat

Value with the least intensity in the region.

intensity_stdfloat

Standard deviation of the intensity in the region.

labelint

The label in the labeled input image.

moments(3, 3) ndarray

Spatial moments up to 3rd order:

m_ij = sum{ array(row, col) * row^i * col^j }

where the sum is over the row, col coordinates of the region.

moments_central(3, 3) ndarray

Central moments (translation invariant) up to 3rd order:

mu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }

where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s centroid.

moments_hutuple

Hu moments (translation, scale and rotation invariant).

moments_normalized(3, 3) ndarray

Normalized moments (translation and scale invariant) up to 3rd order:

nu_ij = mu_ij / m_00^[(i+j)/2 + 1]

where m_00 is the zeroth spatial moment.

moments_weighted(3, 3) ndarray

Spatial moments of intensity image up to 3rd order:

wm_ij = sum{ array(row, col) * row^i * col^j }

where the sum is over the row, col coordinates of the region.

moments_weighted_central(3, 3) ndarray

Central moments (translation invariant) of intensity image up to 3rd order:

wmu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }

where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s weighted centroid.

moments_weighted_hutuple

Hu moments (translation, scale and rotation invariant) of intensity image.

moments_weighted_normalized(3, 3) ndarray

Normalized moments (translation and scale invariant) of intensity image up to 3rd order:

wnu_ij = wmu_ij / wm_00^[(i+j)/2 + 1]

where wm_00 is the zeroth spatial moment (intensity-weighted area).

orientationfloat

Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise.

perimeterfloat

Perimeter of object which approximates the contour as a line through the centers of border pixels using a 4-connectivity.

perimeter_croftonfloat

Perimeter of object approximated by the Crofton formula in 4 directions.

slicetuple of slices

A slice to extract the object from the source image.

solidityfloat

Ratio of pixels in the region to pixels of the convex hull image.

Each region also supports iteration, so that you can do:

for prop in region:
    print(prop, region[prop])

References

1

Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.

2

B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.

3

T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.

4

https://en.wikipedia.org/wiki/Image_moment

5

W. Pabst, E. Gregorová. Characterization of particles and particle systems, pp. 27-28. ICT Prague, 2007. https://old.vscht.cz/sil/keramika/Characterization_of_particles/CPPS%20_English%20version_.pdf

Examples

>>> import cupy as cp
>>> from skimage import data, util
>>> from cucim.skimage.measure import label, regionprops
>>> img = cp.asarray(util.img_as_ubyte(data.coins()) > 110)
>>> label_img = label(img, connectivity=img.ndim)
>>> props = regionprops(label_img)
>>> # centroid of first labeled object
>>> props[0].centroid
(22.72987986048314, 81.91228523446583)
>>> # centroid of first labeled object
>>> props[0]['centroid']
(22.72987986048314, 81.91228523446583)

Add custom measurements by passing functions as extra_properties

>>> import cupy as cp
>>> from skimage import data, util
>>> from cucim.skimage.measure import label, regionprops
>>> import numpy as np
>>> img = cp.asarray(util.img_as_ubyte(data.coins()) > 110)
>>> label_img = label(img, connectivity=img.ndim)
>>> def pixelcount(regionmask):
...     return np.sum(regionmask)
>>> props = regionprops(label_img, extra_properties=(pixelcount,))
>>> props[0].pixelcount
array(7741)
>>> props[1]['pixelcount']
array(42)
cucim.skimage.measure.regionprops_table(label_image, intensity_image=None, properties=('label', 'bbox'), *, cache=True, separator='-', extra_properties=None, spacing=None)#

Compute image properties and return them as a pandas-compatible table.

The table is a dictionary mapping column names to value arrays. See Notes section below for details.

New in version 0.16.

Parameters
label_image(M, N[, P]) ndarray

Labeled input image. Labels with value 0 are ignored.

intensity_image(M, N[, P][, C]) ndarray, optional

Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. The channel dimension, if present, must be the last axis. Default is None.

Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.

propertiestuple or list of str, optional

Properties that will be included in the resulting dictionary. For a list of available properties, please see regionprops(). Users should remember to add “label” to keep track of region identities.

cachebool, optional

Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.

separatorstr, optional

For non-scalar properties not listed in OBJECT_COLUMNS, each element will appear in its own column, with the index of that element separated from the property name by this separator. For example, the inertia tensor of a 2D region will appear in four columns: inertia_tensor-0-0, inertia_tensor-0-1, inertia_tensor-1-0, and inertia_tensor-1-1 (where the separator is -).

Object columns are those that cannot be split in this way because the number of columns would change depending on the object. For example, image and coords.

extra_propertiesIterable of callables

Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument.

spacing: tuple of float, shape (ndim,)

The pixel spacing along each axis of the image.

Returns
out_dictdict

Dictionary mapping property names to an array of values of that property, one value per region. This dictionary can be used as input to pandas DataFrame to map property names to columns in the frame and regions to rows. If the image has no regions, the arrays will have length 0, but the correct type.

Notes

Each column contains either a scalar property, an object property, or an element in a multidimensional array.

Properties with scalar values for each region, such as “eccentricity”, will appear as a float or int array with that property name as key.

Multidimensional properties of fixed size for a given image dimension, such as “centroid” (every centroid will have three elements in a 3D image, no matter the region size), will be split into that many columns, with the name {property_name}{separator}{element_num} (for 1D properties), {property_name}{separator}{elem_num0}{separator}{elem_num1} (for 2D properties), and so on.

For multidimensional properties that don’t have a fixed size, such as “image” (the image of a region varies in size depending on the region size), an object array will be used, with the corresponding property name as the key.

Examples

>>> from skimage import data, util, measure
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, image,
...                           properties=['label', 'inertia_tensor',
...                                       'inertia_tensor_eigvals'])
>>> props  
{'label': array([ 1,  2, ...]), ...
 'inertia_tensor-0-0': array([  4.012...e+03,   8.51..., ...]), ...
 ...,
 'inertia_tensor_eigvals-1': array([  2.67...e+02,   2.83..., ...])}

The resulting dictionary can be directly passed to pandas, if installed, to obtain a clean DataFrame:

>>> import pandas as pd  
>>> data = pd.DataFrame(props)  
>>> data.head()  
   label  inertia_tensor-0-0  ...  inertia_tensor_eigvals-1
0      1         4012.909888  ...                267.065503
1      2            8.514739  ...                  2.834806
2      3            0.666667  ...                  0.000000
3      4            0.000000  ...                  0.000000
4      5            0.222222  ...                  0.111111

[5 rows x 7 columns]

If we want to measure a feature that does not come as a built-in property, we can define custom functions and pass them as extra_properties. For example, we can create a custom function that measures the intensity quartiles in a region:

>>> from skimage import data, util, measure
>>> import numpy as np
>>> def quartiles(regionmask, intensity):
...     return np.percentile(intensity[regionmask], q=(25, 50, 75))
>>>
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, intensity_image=image,
...                                   properties=('label',),
...                                   extra_properties=(quartiles,))
>>> import pandas as pd 
>>> pd.DataFrame(props).head() 
       label  quartiles-0  quartiles-1  quartiles-2
0      1       117.00        123.0        130.0
1      2       111.25        112.0        114.0
2      3       111.00        111.0        111.0
3      4       111.00        111.5        112.5
4      5       112.50        113.0        114.0
cucim.skimage.measure.shannon_entropy(image, base=2)#

Calculate the Shannon entropy of an image.

The Shannon entropy is defined as S = -sum(pk * log(pk)), where pk are frequency/probability of pixels of value k.

Parameters
image(M, N) ndarray

Grayscale input image.

basefloat, optional

The logarithmic base to use.

Returns
entropy0-dimensional float cupy.ndarray

Notes

The returned value is measured in bits or shannon (Sh) for base=2, natural unit (nat) for base=np.e and hartley (Hart) for base=10.

References

1

https://en.wikipedia.org/wiki/Entropy_(information_theory)

2

https://en.wiktionary.org/wiki/Shannon_entropy

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.measure import shannon_entropy
>>> shannon_entropy(cp.array(data.camera()))
array(7.23169501)
cucim.skimage.measure.subdivide_polygon(coords, degree=2, preserve_ends=False)#

Subdivision of polygonal curves using B-Splines.

Note that the resulting curve is always within the convex hull of the original polygon. Circular polygons stay closed after subdivision.

Parameters
coords(K, 2) array

Coordinate array.

degree{1, 2, 3, 4, 5, 6, 7}, optional

Degree of B-Spline. Default is 2.

preserve_endsbool, optional

Preserve first and last coordinate of non-circular polygon. Default is False.

Returns
coords(L, 2) array

Subdivided coordinate array.

References

1

http://mrl.nyu.edu/publications/subdiv-course2000/coursenotes00.pdf
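
Examples

A minimal sketch that smooths the corners of a closed square (coordinates are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.measure import subdivide_polygon
>>> square = cp.asarray([[0., 0.], [0., 1.], [1., 1.], [1., 0.], [0., 0.]])
>>> smoothed = subdivide_polygon(square, degree=2)  # stays closed and within the convex hull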

metrics#

cucim.skimage.metrics.adapted_rand_error(image_true=None, image_test=None, *, table=None, ignore_labels=(0,), alpha=0.5)#

Compute Adapted Rand error as defined by the SNEMI3D contest. [1]

Parameters
image_truecp.ndarray of int

Ground-truth label image, same shape as im_test.

image_testcp.ndarray of int

Test image.

tablecupyx.scipy.sparse array in csr format, optional

A contingency table built with cucim.skimage.metrics.contingency_table. If None, it will be computed on the fly.

ignore_labelssequence of int, optional

Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.

alphafloat, optional

Relative weight given to precision and recall in the adapted Rand error calculation.

Returns
arefloat

The adapted Rand error.

precfloat

The adapted Rand precision: this is the number of pairs of pixels that have the same label in the test label image and in the true image, divided by the number in the test image.

recfloat

The adapted Rand recall: this is the number of pairs of pixels that have the same label in the test label image and in the true image, divided by the number in the true image.

Notes

Pixels with label 0 in the true segmentation are ignored in the score.

The adapted Rand error is calculated as follows:

\(1 - \frac{\sum_{ij} p_{ij}^{2}}{\alpha \sum_{k} s_{k}^{2} + (1-\alpha)\sum_{k} t_{k}^{2}}\), where \(p_{ij}\) is the probability that a pixel has the same label in the test image and in the true image, \(t_{k}\) is the probability that a pixel has label \(k\) in the true image, and \(s_{k}\) is the probability that a pixel has label \(k\) in the test image.

Default behavior is to weight precision and recall equally in the adapted Rand error calculation. When alpha = 0, adapted Rand error = recall. When alpha = 1, adapted Rand error = precision.

References

1

Arganda-Carreras I, Turaga SC, Berger DR, et al. (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Front. Neuroanat. 9:142. DOI:10.3389/fnana.2015.00142
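
Examples

A minimal sketch on a toy segmentation pair (labels chosen for illustration only):

>>> import cupy as cp
>>> from cucim.skimage.metrics import adapted_rand_error
>>> image_true = cp.asarray([[1, 1], [2, 2]])
>>> image_test = cp.asarray([[1, 1], [1, 2]])  # one mislabeled pixel
>>> error, prec, rec = adapted_rand_error(image_true, image_test)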

cucim.skimage.metrics.contingency_table(im_true, im_test, *, ignore_labels=None, normalize=False)#

Return the contingency table for all regions in matched segmentations.

Parameters
im_truendarray of int

Ground-truth label image, same shape as im_test.

im_testndarray of int

Test image.

ignore_labelssequence of int, optional

Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.

normalizebool

Determines if the contingency table is normalized by pixel count.

Returns
contcupyx.scipy.sparse.csr_matrix

A contingency table. cont[i, j] will equal the number of voxels labeled i in im_true and j in im_test.
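
Examples

A minimal sketch; cont[i, j] counts the pixels labeled i in im_true and j in im_test:

>>> import cupy as cp
>>> from cucim.skimage.metrics import contingency_table
>>> im_true = cp.asarray([[1, 1], [1, 2]])
>>> im_test = cp.asarray([[1, 1], [2, 2]])
>>> cont = contingency_table(im_true, im_test)
>>> dense = cont.todense()  # e.g., dense[1, 1] == 2: two pixels labeled 1 in both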

cucim.skimage.metrics.mean_squared_error(image0, image1)#

Compute the mean-squared error between two images.

Parameters
image0, image1ndarray

Images. Any dimensionality, must have same shape.

Returns
msefloat

The mean-squared error (MSE) metric.

Notes

Changed in version 0.16: This function was renamed from skimage.measure.compare_mse to skimage.metrics.mean_squared_error.
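
Examples

A minimal sketch; the two arrays differ in a single element:

>>> import cupy as cp
>>> from cucim.skimage.metrics import mean_squared_error
>>> image0 = cp.asarray([0., 0., 1.])
>>> image1 = cp.asarray([0., 1., 1.])
>>> mse = mean_squared_error(image0, image1)  # (0 + 1 + 0) / 3 == 1/3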

cucim.skimage.metrics.normalized_mutual_information(image0, image1, *, bins=100)#

Compute the normalized mutual information (NMI).

The normalized mutual information of \(A\) and \(B\) is given by:

\[Y(A, B) = \frac{H(A) + H(B)}{H(A, B)}\]

where \(H(X) := - \sum_{x \in X}{x \log x}\) is the entropy.

It was proposed to be useful in registering images by Colin Studholme and colleagues [1]. It ranges from 1 (perfectly uncorrelated image values) to 2 (perfectly correlated image values, whether positively or negatively).

Parameters
image0, image1ndarray

Images to be compared. The two input images must have the same number of dimensions.

binsint or sequence of int, optional

The number of bins along each axis of the joint histogram.

Returns
nmifloat

The normalized mutual information between the two arrays, computed at the granularity given by bins. Higher NMI implies more similar input images.

Raises
ValueError

If the images don’t have the same number of dimensions.

Notes

If the two input images are not the same shape, the smaller image is padded with zeros.

References

1

C. Studholme, D.L.G. Hill, & D.J. Hawkes (1999). An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32(1):71-86 DOI:10.1016/S0031-3203(98)00091-0
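
Examples

A minimal sketch; comparing an image with itself gives the upper bound of the measure:

>>> import cupy as cp
>>> from cucim.skimage.metrics import normalized_mutual_information
>>> image = cp.random.random((64, 64))
>>> nmi = normalized_mutual_information(image, image, bins=100)  # ~2.0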

cucim.skimage.metrics.normalized_root_mse(image_true, image_test, *, normalization='euclidean')#

Compute the normalized root mean-squared error (NRMSE) between two images.

Parameters
image_truendarray

Ground-truth image, same shape as im_test.

image_testndarray

Test image.

normalization{‘euclidean’, ‘min-max’, ‘mean’}, optional

Controls the normalization method to use in the denominator of the NRMSE. There is no standard method of normalization across the literature [1]. The methods available here are as follows:

  • ‘euclidean’ : normalize by the averaged Euclidean norm of im_true:

    NRMSE = RMSE * sqrt(N) / || im_true ||
    

    where || . || denotes the Frobenius norm and N = im_true.size. This result is equivalent to:

    NRMSE = || im_true - im_test || / || im_true ||.
    
  • ‘min-max’ : normalize by the intensity range of im_true.

  • ‘mean’ : normalize by the mean of im_true

Returns
nrmsefloat

The NRMSE metric.

Notes

Changed in version 0.16: This function was renamed from skimage.measure.compare_nrmse to skimage.metrics.normalized_root_mse.

References

1

https://en.wikipedia.org/wiki/Root-mean-square_deviation
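
Examples

A minimal sketch using the default ‘euclidean’ normalization:

>>> import cupy as cp
>>> from cucim.skimage.metrics import normalized_root_mse
>>> image_true = cp.asarray([1., 2., 3., 4.])
>>> image_test = cp.asarray([1., 2., 3., 5.])
>>> nrmse = normalized_root_mse(image_true, image_test)  # 1 / sqrt(30), about 0.183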

cucim.skimage.metrics.peak_signal_noise_ratio(image_true, image_test, *, data_range=None)#

Compute the peak signal to noise ratio (PSNR) for an image.

Parameters
image_truendarray

Ground-truth image, same shape as im_test.

image_testndarray

Test image.

data_rangeint, optional

The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type.

Returns
psnrfloat

The PSNR metric.

Notes

Changed in version 0.16: This function was renamed from skimage.measure.compare_psnr to skimage.metrics.peak_signal_noise_ratio.

References

1

https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
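
Examples

A minimal sketch with an explicit data_range, as is recommended for floating-point data:

>>> import cupy as cp
>>> from cucim.skimage.metrics import peak_signal_noise_ratio
>>> image_true = cp.zeros((8, 8), dtype=cp.float64)
>>> image_test = image_true + 0.1  # constant error, so MSE == 0.01
>>> psnr = peak_signal_noise_ratio(image_true, image_test, data_range=1.0)  # 20 dB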

cucim.skimage.metrics.structural_similarity(im1, im2, *, win_size=None, gradient=False, data_range=None, channel_axis=None, gaussian_weights=False, full=False, **kwargs)#

Compute the mean structural similarity index between two images. Please pay attention to the data_range parameter with floating-point images.

Parameters
im1, im2ndarray

Images. Any dimensionality with same shape.

win_sizeint or None, optional

The side-length of the sliding window used in comparison. Must be an odd value. If gaussian_weights is True, this is ignored and the window size will depend on sigma.

gradientbool, optional

If True, also return the gradient with respect to im2.

data_rangefloat, optional

The data range of the input image (difference between maximum and minimum possible values). By default, this is estimated from the image data type. This estimate may be wrong for floating-point image data. Therefore it is recommended to always pass this scalar value explicitly (see note below).

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

gaussian_weightsbool, optional

If True, each patch has its mean and variance spatially weighted by a normalized Gaussian kernel of width sigma=1.5.

fullbool, optional

If True, also return the full structural similarity image.

Returns
mssimfloat

The mean structural similarity index over the image.

gradndarray

The gradient of the structural similarity between im1 and im2 [2]. This is only returned if gradient is set to True.

Sndarray

The full SSIM image. This is only returned if full is set to True.

Other Parameters
use_sample_covariancebool

If True, normalize covariances by N-1 rather than N, where N is the number of pixels within the sliding window.

K1float

Algorithm parameter, K1 (small constant, see [1]).

K2float

Algorithm parameter, K2 (small constant, see [1]).

sigmafloat

Standard deviation for the Gaussian when gaussian_weights is True.

Notes

If data_range is not specified, the range is automatically guessed based on the image data type. However for floating-point image data, this estimate yields a result double the value of the desired range, as the dtype_range in skimage.util.dtype.py has defined intervals from -1 to +1. This yields an estimate of 2, instead of 1, which is most often required when working with image data (as negative light intensities are nonsensical). In case of working with YCbCr-like color data, note that these ranges are different per channel (Cb and Cr have double the range of Y), so one cannot calculate a channel-averaged SSIM with a single call to this function, as identical ranges are assumed for each channel.

To match the implementation of Wang et al. [1], set gaussian_weights to True, sigma to 1.5, use_sample_covariance to False, and specify the data_range argument.

Changed in version 0.16: This function was renamed from skimage.measure.compare_ssim to skimage.metrics.structural_similarity.

References

1(1,2,3)

Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600-612. https://ece.uwaterloo.ca/~z70wang/publications/ssim.pdf, DOI:10.1109/TIP.2003.819861

2

Avanaki, A. N. (2009). Exact global histogram specification optimized for structural similarity. Optical Review, 16, 613-621. arXiv:0901.0065 DOI:10.1007/s10043-009-0119-z
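
Examples

A minimal sketch on synthetic images, passing data_range explicitly as the Notes recommend:

>>> import cupy as cp
>>> from cucim.skimage.metrics import structural_similarity
>>> im1 = cp.random.random((64, 64))
>>> im2 = cp.clip(im1 + 0.05, 0, 1)  # a slightly perturbed copy
>>> mssim = structural_similarity(im1, im2, data_range=1.0)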

cucim.skimage.metrics.variation_of_information(image0=None, image1=None, *, table=None, ignore_labels=())#

Return symmetric conditional entropies associated with the VI. [1]

The variation of information is defined as VI(X,Y) = H(X|Y) + H(Y|X). If X is the ground-truth segmentation, then H(X|Y) can be interpreted as the amount of under-segmentation and H(Y|X) as the amount of over-segmentation. In other words, a perfect over-segmentation will have H(X|Y)=0 and a perfect under-segmentation will have H(Y|X)=0.

Parameters
image0, image1cp.ndarray of int

Label images / segmentations, must have same shape.

tablecupyx.scipy.sparse array in csr format, optional

A contingency table built with cucim.skimage.metrics.contingency_table. If None, it will be computed on the fly. If given, the entropies will be computed from this table and any images will be ignored.

ignore_labelssequence of int, optional

Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.

Returns
vicp.ndarray of float, shape (2,)

The conditional entropies of image1|image0 and image0|image1.

References

1

Marina Meilă (2007), Comparing clusterings—an information based distance, Journal of Multivariate Analysis, Volume 98, Issue 5, Pages 873-895, ISSN 0047-259X, DOI:10.1016/j.jmva.2006.11.013.
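
Examples

A minimal sketch; image1 only splits the single true region, i.e. a pure over-segmentation:

>>> import cupy as cp
>>> from cucim.skimage.metrics import variation_of_information
>>> image0 = cp.asarray([[1, 1], [1, 1]])
>>> image1 = cp.asarray([[1, 1], [2, 2]])
>>> h_over, h_under = variation_of_information(image0, image1)  # h_under == 0 here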

morphology#

Utilities that operate on shapes in images.

These operations are particularly suited for binary images, although some may be useful for images of other types as well.

Basic morphological operations include dilation and erosion.

cucim.skimage.morphology.ball(radius, dtype=<class 'numpy.uint8'>, *, strict_radius=True, decomposition=None)#

Generates a ball-shaped footprint.

This is the 3D equivalent of a disk. A pixel is within the neighborhood if the Euclidean distance between it and the origin is no greater than radius.

Parameters
radiusint

The radius of the ball-shaped footprint.

Returns
footprintcupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise.

Other Parameters
dtypedata-type, optional

The data type of the footprint.

strict_radiusbool, optional

If False, extend the radius by 0.5. This allows the ball to expand further within a cube that remains of size 2 * radius + 1 along each axis. This parameter is ignored if decomposition is not None.

decomposition{None, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give a result equivalent to a single, larger footprint, but with better computational performance. For ball footprints, the sequence decomposition is not exactly equivalent to decomposition=None. See Notes for more details.

Notes

The ball produced by the decomposition=’sequence’ mode is not identical to that with decomposition=None. Here we extend the approach taken in [1] for disks to the 3D case, using 3-dimensional extensions of the “square”, “diamond” and “t-shaped” elements from that publication. All of these elementary elements have size (3,) * ndim. We numerically computed the number of repetitions of each element that gives the closest match to the ball computed with kwargs strict_radius=False, decomposition=None.

Empirically, the equivalent composite footprint to the sequence decomposition approaches a rhombicuboctahedron (26-faces [2]).

References

1

Park, H and Chin R.T. Decomposition of structuring elements for optimal implementation of morphological operations. In Proceedings: 1997 IEEE Workshop on Nonlinear Signal and Image Processing, London, UK. https://www.iwaenc.org/proceedings/1997/nsip97/pdf/scan/ns970226.pdf

2

https://en.wikipedia.org/wiki/Rhombicuboctahedron
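
Examples

A minimal sketch; the footprint fits within a cube of edge 2 * radius + 1:

>>> from cucim.skimage.morphology import ball
>>> footprint = ball(3)
>>> footprint.shape
(7, 7, 7)
>>> sequence = ball(3, decomposition='sequence')  # tuple of (footprint, num_iter) pairs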

cucim.skimage.morphology.binary_closing(image, footprint=None, out=None, *, mode='ignore')#

Return fast binary morphological closing of an image.

This function returns the same result as grayscale closing but performs faster for binary images.

The morphological closing on an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features.

Parameters
imagendarray

Binary input image.

footprintndarray or tuple, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

outndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

modestr, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘max’, ‘min’, ‘ignore’. If ‘ignore’, pixels outside the image domain are assumed to be True for the erosion and False for the dilation, which causes them to not influence the result. Default is ‘ignore’.

New in version 24.06: mode was added in 24.06.

Returns
closingndarray of bool

The result of the morphological closing.

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.
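
Examples

A minimal sketch of the footprint-sequence form described in the Notes (the image is illustrative):

>>> import cupy as cp
>>> from cucim.skimage.morphology import binary_closing
>>> image = cp.zeros((9, 9), dtype=bool)
>>> image[2:7, 2:7] = True
>>> image[4, 4] = False  # a small dark hole that closing removes
>>> fp = [(cp.ones((3, 1), dtype=bool), 1), (cp.ones((1, 3), dtype=bool), 1)]
>>> closed = binary_closing(image, fp)  # same result as footprint=cp.ones((3, 3))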

cucim.skimage.morphology.binary_dilation(image, footprint=None, out=None, *, mode='ignore')#

Return fast binary morphological dilation of an image.

This function returns the same result as grayscale dilation but performs faster for binary images.

Morphological dilation sets a pixel at (i,j) to the maximum over all pixels in the neighborhood centered at (i,j). Dilation enlarges bright regions and shrinks dark regions.

Parameters
imagendarray

Binary input image.

footprintndarray or tuple, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

outndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

modestr, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘max’, ‘min’, ‘ignore’. If ‘min’ or ‘ignore’, pixels outside the image domain are assumed to be False, which causes them to not influence the result. Default is ‘ignore’.

New in version 24.06: mode was added in 24.06.

Returns
dilatedndarray of bool or uint

The result of the morphological dilation with values in [False, True].

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

For non-symmetric footprints, skimage.morphology.binary_dilation() and skimage.morphology.dilation() produce an output that differs: binary_dilation mirrors the footprint, whereas dilation does not.

cucim.skimage.morphology.binary_erosion(image, footprint=None, out=None, *, mode='ignore')#

Return fast binary morphological erosion of an image.

This function returns the same result as grayscale erosion but performs faster for binary images.

Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions.

Parameters
imagendarray

Binary input image.

footprintndarray or tuple, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

outndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

modestr, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘max’, ‘min’, ‘ignore’. If ‘max’ or ‘ignore’, pixels outside the image domain are assumed to be True, which causes them to not influence the result. Default is ‘ignore’.

New in version 24.06: mode was added in 24.06.

Returns
erodedndarray of bool or uint

The result of the morphological erosion taking values in [False, True].

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

For even-sized footprints, skimage.morphology.erosion() and this function produce an output that differs: one is shifted by one pixel compared to the other.

cucim.skimage.morphology.binary_opening(image, footprint=None, out=None, *, mode='ignore')#

Return fast binary morphological opening of an image.

This function returns the same result as grayscale opening but performs faster for binary images.

The morphological opening on an image is defined as an erosion followed by a dilation. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features.

Parameters
imagendarray

Binary input image.

footprintndarray or tuple, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

outndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

modestr, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘max’, ‘min’, ‘ignore’. If ‘ignore’, pixels outside the image domain are assumed to be True for the erosion and False for the dilation, which causes them to not influence the result. Default is ‘ignore’.

New in version 24.06: mode was added in 24.06.

Returns
openingndarray of bool

The result of the morphological opening.

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

cucim.skimage.morphology.black_tophat(image, footprint=None, out=None, *, mode='reflect', cval=0.0)#

Return black top hat of an image.

The black top hat of an image is defined as its morphological closing minus the original image. This operation returns the dark spots of the image that are smaller than the footprint. Note that dark spots in the original image are bright spots after the black top hat.

Parameters
imagecupy.ndarray

Image array.

footprintcupy.ndarray, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

outcupy.ndarray, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

modestr, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’, ‘max’, ‘min’, or ‘ignore’. If ‘max’ or ‘ignore’, pixels outside the image domain are assumed to be the maximum for the image’s dtype, which causes them to not influence the result. Default is ‘reflect’.

cvalscalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

New in version 24.06: mode and cval were added in 24.06.

Returns
outcupy.ndarray, same shape and type as image

The result of the morphological black top hat.

See also

white_tophat

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

References

1

https://en.wikipedia.org/wiki/Top-hat_transform

Examples

>>> # Change dark peak to bright peak and subtract background
>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> dark_on_grey = cp.asarray([[7, 6, 6, 6, 7],
...                            [6, 5, 4, 5, 6],
...                            [6, 4, 0, 4, 6],
...                            [6, 5, 4, 5, 6],
...                            [7, 6, 6, 6, 7]], dtype=cp.uint8)
>>> black_tophat(dark_on_grey, square(3))
array([[0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 1, 5, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
cucim.skimage.morphology.closing(image, footprint=None, out=None, *, mode='reflect', cval=0.0)#

Return grayscale morphological closing of an image.

The morphological closing of an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features.

Parameters
imagecupy.ndarray

Image array.

footprintcupy.ndarray, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

outcupy.ndarray, optional

The array to store the result of the morphology. If None, a new array will be allocated.

modestr, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’, ‘max’, ‘min’, or ‘ignore’. If ‘max’ or ‘ignore’, pixels outside the image domain are assumed to be the maximum for the image’s dtype, which causes them to not influence the result. Default is ‘reflect’.

cvalscalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

New in version 24.06: mode and cval were added in 24.06.

Returns
closingcupy.ndarray, same shape and type as image

The result of the morphological closing.

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

Examples

>>> # Close a gap between two bright lines
>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> broken_line = cp.asarray([[0, 0, 0, 0, 0],
...                           [0, 0, 0, 0, 0],
...                           [1, 1, 0, 1, 1],
...                           [0, 0, 0, 0, 0],
...                           [0, 0, 0, 0, 0]], dtype=cp.uint8)
>>> closing(broken_line, square(3))
array([[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
cucim.skimage.morphology.cube(width, dtype=None, *, decomposition=None)#

Generates a cube-shaped footprint.

This is the 3D equivalent of a square. Every pixel along the perimeter has a chessboard distance no greater than radius (radius=floor(width/2)) pixels.

Parameters
widthint

The width, height and depth of the cube.

Returns
footprintcupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise. When decomposition is None, this is just a cupy.ndarray. Otherwise, this will be a tuple whose length is equal to the number of unique structuring elements to apply (see Notes for more detail)

Other Parameters
dtypedata-type or None, optional

The data type of the footprint. When None, a tuple will be returned in place of the actual footprint array. This can be passed to grayscale and binary morphology functions in place of an explicit array to avoid array allocation overhead.

decomposition{None, ‘separable’, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give an identical result to a single, larger footprint, but often with better computational performance. See Notes for more details.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

For binary morphology, using decomposition='sequence' was observed to give better performance, with the magnitude of the performance increase rapidly increasing with footprint size. For grayscale morphology with square footprints, it is recommended to use decomposition=None since the internal SciPy functions that are called already have a fast implementation based on separable 1D sliding windows.

The ‘sequence’ decomposition mode only supports odd valued width. If width is even, the sequence used will be identical to the ‘separable’ mode.
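
Examples

A minimal usage sketch. An explicit dtype is passed because, per the dtype description above, dtype=None returns a footprint tuple rather than an array:

>>> import cupy as cp
>>> from cucim.skimage.morphology import cube
>>> footprint = cube(3, dtype=cp.uint8)
>>> footprint.shape
(3, 3, 3)
>>> bool(footprint.all())  # every voxel belongs to the neighborhood
True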

cucim.skimage.morphology.diamond(radius, dtype=<class 'numpy.uint8'>, *, decomposition=None)#

Generates a flat, diamond-shaped footprint.

A pixel is part of the neighborhood (i.e. labeled 1) if the city block/Manhattan distance between it and the center of the neighborhood is no greater than radius.

Parameters
radius : int

The radius of the diamond-shaped footprint.

Returns
footprint : cupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise. When decomposition is None, this is just a cupy.ndarray. Otherwise, this will be a tuple whose length is equal to the number of unique structuring elements to apply (see Notes for more detail).

Other Parameters
dtype : data-type, optional

The data type of the footprint.

decomposition : {None, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give an identical result to a single, larger footprint, but with better computational performance. See Notes for more details.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

For either binary or grayscale morphology, using decomposition='sequence' was observed to have a performance benefit, with the magnitude of the benefit increasing with increasing footprint size.
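
Examples

A usage sketch; the output shown assumes this generator matches its scikit-image counterpart:

>>> from cucim.skimage.morphology import diamond
>>> diamond(2)
array([[0, 0, 1, 0, 0],
       [0, 1, 1, 1, 0],
       [1, 1, 1, 1, 1],
       [0, 1, 1, 1, 0],
       [0, 0, 1, 0, 0]], dtype=uint8)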

cucim.skimage.morphology.dilation(image, footprint=None, out=None, shift_x=<DEPRECATED>, shift_y=<DEPRECATED>, *, mode='reflect', cval=0.0)#

Return grayscale morphological dilation of an image.

Morphological dilation sets the value of a pixel to the maximum over all pixel values within a local neighborhood centered about it. The values where the footprint is 1 define this neighborhood. Dilation enlarges bright regions and shrinks dark regions.

Parameters
image : cupy.ndarray

Image array.

footprint : cupy.ndarray, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

out : cupy.ndarray, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

mode : str, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’, ‘max’, ‘min’, or ‘ignore’. If ‘min’ or ‘ignore’, pixels outside the image domain are assumed to be the minimum for the image’s dtype, which causes them to not influence the result. Default is ‘reflect’.

cval : scalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

New in version 24.06: mode and cval were added in 24.06.

Returns
dilated : cupy.ndarray, same shape and type as image

The result of the morphological dilation.

Other Parameters
shift_x, shift_y : DEPRECATED

Deprecated since version 24.06.

Notes

For uint8 (and uint16 up to a certain bit-depth) data, the lower algorithm complexity makes the skimage.filters.rank.maximum() function more efficient for larger images and footprints.

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

For non-symmetric footprints, skimage.morphology.binary_dilation() and skimage.morphology.dilation() produce an output that differs: binary_dilation mirrors the footprint, whereas dilation does not.

Examples

>>> # Dilation enlarges bright regions
>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> bright_pixel = cp.asarray([[0, 0, 0, 0, 0],
...                            [0, 0, 0, 0, 0],
...                            [0, 0, 1, 0, 0],
...                            [0, 0, 0, 0, 0],
...                            [0, 0, 0, 0, 0]], dtype=cp.uint8)
>>> dilation(bright_pixel, square(3))
array([[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
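
The footprint-sequence equivalence described in the notes above can be checked directly; a sketch reusing bright_pixel, with a 3x3 square decomposed into a 3x1 and a 1x3 footprint:

>>> seq = [(cp.ones((3, 1), dtype=cp.uint8), 1), (cp.ones((1, 3), dtype=cp.uint8), 1)]
>>> full = cp.ones((3, 3), dtype=cp.uint8)
>>> bool((dilation(bright_pixel, seq) == dilation(bright_pixel, full)).all())
True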
cucim.skimage.morphology.disk(radius, dtype=<class 'numpy.uint8'>, *, strict_radius=True, decomposition=None)#

Generates a flat, disk-shaped footprint.

A pixel is within the neighborhood if the Euclidean distance between it and the origin is no greater than radius (this is only approximately true when decomposition == ‘sequence’).

Parameters
radius : int

The radius of the disk-shaped footprint.

Returns
footprint : cupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise.

Other Parameters
dtype : data-type, optional

The data type of the footprint.

strict_radius : bool, optional

If False, extend the radius by 0.5. This allows the circle to expand further within a cube that remains of size 2 * radius + 1 along each axis. This parameter is ignored if decomposition is not None.

decomposition : {None, ‘sequence’, ‘crosses’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give a result equivalent to a single, larger footprint, but with better computational performance. For disk footprints, the ‘sequence’ or ‘crosses’ decompositions are not always exactly equivalent to decomposition=None. See Notes for more details.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

The disk produced by the decomposition='sequence' mode may not be identical to that with decomposition=None. A disk footprint can be approximated by applying a series of smaller footprints of extent 3 along each axis. Specific solutions for this are given in [1] for the case of 2D disks with radius 2 through 10. Here, we numerically computed the number of repetitions of each element that gives the closest match to the disk computed with kwargs strict_radius=False, decomposition=None.

Empirically, the series decomposition at large radius approaches a hexadecagon (a 16-sided polygon [2]). In [3], the authors demonstrate that a hexadecagon is the closest approximation to a disk that can be achieved for decomposition with footprints of shape (3, 3).

The disk produced by the decomposition='crosses' is often but not always identical to that with decomposition=None. It tends to give a closer approximation than decomposition='sequence', at a performance that is fairly comparable. The individual cross-shaped elements are not limited to extent (3, 3) in size. Unlike the ‘sequence’ decomposition, the ‘crosses’ decomposition can also accurately approximate the shape of disks with strict_radius=True. The method is based on an adaptation of algorithm 1 given in [4].

References

1

Park, H and Chin R.T. Decomposition of structuring elements for optimal implementation of morphological operations. In Proceedings: 1997 IEEE Workshop on Nonlinear Signal and Image Processing, London, UK. https://www.iwaenc.org/proceedings/1997/nsip97/pdf/scan/ns970226.pdf

2

https://en.wikipedia.org/wiki/Hexadecagon

3

Vanrell, M and Vitrià, J. Optimal 3 × 3 decomposable disks for morphological transformations. Image and Vision Computing, Vol. 15, Issue 11, 1997. DOI:10.1016/S0262-8856(97)00026-7

4

Li, D. and Ritter, G.X. Decomposition of Separable and Symmetric Convex Templates. Proc. SPIE 1350, Image Algebra and Morphological Image Processing, (1 November 1990). DOI:10.1117/12.23608
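
Examples

A usage sketch; the output shown assumes this generator matches its scikit-image counterpart (default strict_radius=True, decomposition=None):

>>> from cucim.skimage.morphology import disk
>>> disk(3)
array([[0, 0, 0, 1, 0, 0, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 1, 0, 0, 0]], dtype=uint8)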

cucim.skimage.morphology.erosion(image, footprint=None, out=None, shift_x=<DEPRECATED>, shift_y=<DEPRECATED>, *, mode='reflect', cval=0.0)#

Return grayscale morphological erosion of an image.

Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions.

Parameters
image : cupy.ndarray

Image array.

footprint : cupy.ndarray, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

out : cupy.ndarray, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

mode : str, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’, ‘max’, ‘min’, or ‘ignore’. If ‘max’ or ‘ignore’, pixels outside the image domain are assumed to be the maximum for the image’s dtype, which causes them to not influence the result. Default is ‘reflect’.

cval : scalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

New in version 24.06: mode and cval were added in 24.06.

Returns
eroded : cupy.ndarray, same shape as image

The result of the morphological erosion.

Other Parameters
shift_x, shift_y : DEPRECATED

Deprecated since version 24.06.

Notes

For uint8 (and uint16 up to a certain bit-depth) data, the lower algorithm complexity makes the skimage.filters.rank.minimum() function more efficient for larger images and footprints.

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

For even-sized footprints, skimage.morphology.binary_erosion() and this function produce an output that differs: one is shifted by one pixel compared to the other.

Examples

>>> # Erosion shrinks bright regions
>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> bright_square = cp.asarray([[0, 0, 0, 0, 0],
...                             [0, 1, 1, 1, 0],
...                             [0, 1, 1, 1, 0],
...                             [0, 1, 1, 1, 0],
...                             [0, 0, 0, 0, 0]], dtype=cp.uint8)
>>> erosion(bright_square, square(3))
array([[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
cucim.skimage.morphology.isotropic_closing(image, radius, out=None, spacing=None)#

Return binary morphological closing of an image.

This function returns the same result as skimage.morphology.binary_closing() but performs faster for large circular structuring elements. This works by thresholding the exact Euclidean distance map [1], [2]. The implementation is based on cucim.core.operations.morphology.distance_transform_edt().

Parameters
image : ndarray

Binary input image.

radius : float

The radius with which the regions should be closed.

out : ndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

spacing : float, or sequence of float, optional

Spacing of elements along each dimension. If a sequence, must be of length equal to the input’s dimension (number of axes). If a single number, this value is used for all axes. If not specified, a grid spacing of unity is implied.

Returns
closed : ndarray of bool

The result of the morphological closing.

Notes

Empirically, on an RTX A6000 GPU, it was observed that isotropic_closing is faster than binary_closing with decomposition=None at radius 12 in 2D and radius 3 in 3D. It becomes faster than binary_closing with decomposition="sequence" at radius 14 in 2D and radius 5 in 3D. In practice, the exact point at which these isotropic functions become faster than their binary counterparts will also be dependent on image shape and content.

References

1

Cuisenaire, O. and Macq, B., “Fast Euclidean morphological operators using local distance transformation by propagation, and applications,” Image Processing And Its Applications, 1999. Seventh International Conference on (Conf. Publ. No. 465), 1999, pp. 856-860 vol.2. DOI:10.1049/cp:19990446

2

Ingemar Ragnemalm, Fast erosion and dilation by contour processing and thresholding of distance maps, Pattern Recognition Letters, Volume 13, Issue 3, 1992, Pages 161-166. DOI:10.1016/0167-8655(92)90055-5

cucim.skimage.morphology.isotropic_dilation(image, radius, out=None, spacing=None)#

Return binary morphological dilation of an image.

This function returns the same result as skimage.morphology.binary_dilation() but performs faster for large circular structuring elements. This works by applying a threshold to the exact Euclidean distance map of the inverted image [1], [2]. The implementation is based on cucim.core.operations.morphology.distance_transform_edt().

Parameters
image : ndarray

Binary input image.

radius : float

The radius by which regions should be dilated.

out : ndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

spacing : float, or sequence of float, optional

Spacing of elements along each dimension. If a sequence, must be of length equal to the input’s dimension (number of axes). If a single number, this value is used for all axes. If not specified, a grid spacing of unity is implied.

Returns
dilated : ndarray of bool

The result of the morphological dilation with values in [False, True].

Notes

Empirically, on an RTX A6000 GPU, it was observed that isotropic_dilation is faster than binary_dilation with decomposition=None at radius 12 in 2D and radius 3 in 3D. It becomes faster than binary_dilation with decomposition="sequence" at radius 14 in 2D and radius 5 in 3D. In practice, the exact point at which these isotropic functions become faster than their binary counterparts will also be dependent on image shape and content.

References

1

Cuisenaire, O. and Macq, B., “Fast Euclidean morphological operators using local distance transformation by propagation, and applications,” Image Processing And Its Applications, 1999. Seventh International Conference on (Conf. Publ. No. 465), 1999, pp. 856-860 vol.2. DOI:10.1049/cp:19990446

2

Ingemar Ragnemalm, Fast erosion and dilation by contour processing and thresholding of distance maps, Pattern Recognition Letters, Volume 13, Issue 3, 1992, Pages 161-166. DOI:10.1016/0167-8655(92)90055-5
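
Examples

A sketch checking the documented equivalence with binary_dilation for a small disk-shaped footprint (integer radius, default strict_radius):

>>> import cupy as cp
>>> from cucim.skimage.morphology import binary_dilation, disk, isotropic_dilation
>>> image = cp.zeros((9, 9), dtype=bool)
>>> image[4, 4] = True
>>> bool((isotropic_dilation(image, 3) == binary_dilation(image, disk(3))).all())
True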

cucim.skimage.morphology.isotropic_erosion(image, radius, out=None, spacing=None)#

Return binary morphological erosion of an image.

This function returns the same result as skimage.morphology.binary_erosion() but performs faster for large circular structuring elements. This works by applying a threshold to the exact Euclidean distance map of the image [1], [2]. The implementation is based on cucim.core.operations.morphology.distance_transform_edt().

Parameters
image : ndarray

Binary input image.

radius : float

The radius by which regions should be eroded.

out : ndarray of bool, optional

The array to store the result of the morphology. If None, a new array will be allocated.

spacing : float, or sequence of float, optional

Spacing of elements along each dimension. If a sequence, must be of length equal to the input’s dimension (number of axes). If a single number, this value is used for all axes. If not specified, a grid spacing of unity is implied.

Returns
eroded : ndarray of bool

The result of the morphological erosion taking values in [False, True].

Notes

Empirically, on an RTX A6000 GPU, it was observed that isotropic_erosion is faster than binary_erosion with decomposition=None at radius 12 in 2D and radius 3 in 3D. It becomes faster than binary_erosion with decomposition="sequence" at radius 14 in 2D and radius 5 in 3D. In practice, the exact point at which these isotropic functions become faster than their binary counterparts will also be dependent on image shape and content.

References

1

Cuisenaire, O. and Macq, B., “Fast Euclidean morphological operators using local distance transformation by propagation, and applications,” Image Processing And Its Applications, 1999. Seventh International Conference on (Conf. Publ. No. 465), 1999, pp. 856-860 vol.2. DOI:10.1049/cp:19990446

2

Ingemar Ragnemalm, Fast erosion and dilation by contour processing and thresholding of distance maps, Pattern Recognition Letters, Volume 13, Issue 3, 1992, Pages 161-166. DOI:10.1016/0167-8655(92)90055-5

cucim.skimage.morphology.isotropic_opening(image, radius, out=None, spacing=None)#

Return binary morphological opening of an image.

This function returns the same result as skimage.morphology.binary_opening() but performs faster for large circular structuring elements. This works by thresholding the exact Euclidean distance map [1], [2]. The implementation is based on cucim.core.operations.morphology.distance_transform_edt().

Parameters
image : ndarray

Binary input image.

radius : float

The radius with which the regions should be opened.

out : ndarray of bool, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

spacing : float, or sequence of float, optional

Spacing of elements along each dimension. If a sequence, must be of length equal to the input’s dimension (number of axes). If a single number, this value is used for all axes. If not specified, a grid spacing of unity is implied.

Returns
opened : ndarray of bool

The result of the morphological opening.

Notes

Empirically, on an RTX A6000 GPU, it was observed that isotropic_opening is faster than binary_opening with decomposition=None at radius 12 in 2D and radius 3 in 3D. It becomes faster than binary_opening with decomposition="sequence" at radius 14 in 2D and radius 5 in 3D. In practice, the exact point at which these isotropic functions become faster than their binary counterparts will also be dependent on image shape and content.

References

1

Cuisenaire, O. and Macq, B., “Fast Euclidean morphological operators using local distance transformation by propagation, and applications,” Image Processing And Its Applications, 1999. Seventh International Conference on (Conf. Publ. No. 465), 1999, pp. 856-860 vol.2. DOI:10.1049/cp:19990446

2

Ingemar Ragnemalm, Fast erosion and dilation by contour processing and thresholding of distance maps, Pattern Recognition Letters, Volume 13, Issue 3, 1992, Pages 161-166. DOI:10.1016/0167-8655(92)90055-5

cucim.skimage.morphology.medial_axis(image, mask=None, return_distance=False, *, seed=<DEPRECATED>, rng=None)#

Compute the medial axis transform of a binary image.

Parameters
image : binary ndarray, shape (M, N)

The image of the shape to skeletonize. If this input isn’t already a binary image, it gets converted into one: In this case, zero values are considered background (False), nonzero values are considered foreground (True).

mask : binary ndarray, shape (M, N), optional

If a mask is given, only those elements in image with a true value in mask are used for computing the medial axis.

return_distance : bool, optional

If true, the distance transform is returned as well as the skeleton.

rng : {numpy.random.Generator, int}, optional

Pseudo-random number generator. By default, a PCG64 generator is used (see numpy.random.default_rng()). If rng is an int, it is used to seed the generator.

The PRNG determines the order in which pixels are processed for tiebreaking.

Note: Due to a missing permute method on CuPy’s random Generator class, only a numpy.random.Generator is currently supported.

Returns
out : ndarray of bools

Medial axis transform of the image.

dist : ndarray of ints, optional

Distance transform of the image (only returned if return_distance is True).

Other Parameters
seed : DEPRECATED

Deprecated in favor of rng.

Deprecated since version 23.12.

See also

skeletonize(), thin()

Notes

This algorithm computes the medial axis transform of an image as the ridges of its distance transform.

The different steps of the algorithm are as follows
  • A lookup table is used that assigns, to each configuration of the 3x3 binary square, a value of 0 or 1 indicating whether the central pixel should be removed or kept. We want a point to be removed if it has more than one neighbor and if removing it does not change the number of connected components.

  • The distance transform to the background is computed, as well as the cornerness of the pixel.

  • The foreground (value of 1) points are ordered by the distance transform, then the cornerness.

  • A Cython function is called to reduce the image to its skeleton. It processes pixels in the order determined at the previous step, and removes or maintains a pixel according to the lookup table. Because of the ordering, it is possible to process all pixels in only one pass.

Examples

>>> import cupy as cp
>>> square = cp.zeros((7, 7), dtype=bool)
>>> square[1:-1, 2:-2] = 1
>>> square.view(cp.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> medial_axis(square).view(cp.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 1, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 1, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
cucim.skimage.morphology.octagon(m, n, dtype=<class 'numpy.uint8'>, *, decomposition=None)#

Generates an octagon-shaped footprint.

For a given size m of the horizontal and vertical sides and a given height/width n of the slanted sides, an octagon is generated. The slanted sides are 45 or 135 degrees to the horizontal axis and hence the widths and heights are equal. The overall size of the footprint along a single axis will be m + 2 * n.

Parameters
m : int

The size of the horizontal and vertical sides.

n : int

The height or width of the slanted sides.

Returns
footprint : cupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise. When decomposition is None, this is just a cupy.ndarray. Otherwise, this will be a tuple whose length is equal to the number of unique structuring elements to apply (see Notes for more detail).

Other Parameters
dtype : data-type, optional

The data type of the footprint.

decomposition : {None, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give an identical result to a single, larger footprint, but with better computational performance. See Notes for more details.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

For either binary or grayscale morphology, using decomposition='sequence' was observed to have a performance benefit, with the magnitude of the benefit increasing with increasing footprint size.
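
Examples

A minimal sketch based on the size relation stated above (overall side length m + 2 * n); the corner value assumes the corners are cut off as in the scikit-image counterpart:

>>> from cucim.skimage.morphology import octagon
>>> footprint = octagon(3, 2)
>>> footprint.shape
(7, 7)
>>> int(footprint[0, 0])  # corners lie outside the octagon
0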

cucim.skimage.morphology.octahedron(radius, dtype=<class 'numpy.uint8'>, *, decomposition=None)#

Generates an octahedron-shaped footprint.

This is the 3D equivalent of a diamond. A pixel is part of the neighborhood (i.e. labeled 1) if the city block/Manhattan distance between it and the center of the neighborhood is no greater than radius.

Parameters
radius : int

The radius of the octahedron-shaped footprint.

Returns
footprint : cupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise. When decomposition is None, this is just a cupy.ndarray. Otherwise, this will be a tuple whose length is equal to the number of unique structuring elements to apply (see Notes for more detail).

Other Parameters
dtype : data-type, optional

The data type of the footprint.

decomposition : {None, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give an identical result to a single, larger footprint, but with better computational performance. See Notes for more details.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

For either binary or grayscale morphology, using decomposition='sequence' was observed to have a performance benefit, with the magnitude of the benefit increasing with increasing footprint size.

cucim.skimage.morphology.opening(image, footprint=None, out=None, *, mode='reflect', cval=0.0)#

Return grayscale morphological opening of an image.

The morphological opening of an image is defined as an erosion followed by a dilation. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features.

Parameters
image : cupy.ndarray

Image array.

footprint : cupy.ndarray, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

out : cupy.ndarray, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

mode : str, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’, ‘max’, ‘min’, or ‘ignore’. If ‘max’ or ‘ignore’, pixels outside the image domain are assumed to be the maximum for the image’s dtype, which causes them to not influence the result. Default is ‘reflect’.

cval : scalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

New in version 24.06: mode and cval were added in 24.06.

Returns
opening : cupy.ndarray, same shape and type as image

The result of the morphological opening.

The result of the morphological opening.

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

Examples

>>> # Open up gap between two bright regions (but also shrink regions)
>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> bad_connection = cp.asarray([[1, 0, 0, 0, 1],
...                              [1, 1, 0, 1, 1],
...                              [1, 1, 1, 1, 1],
...                              [1, 1, 0, 1, 1],
...                              [1, 0, 0, 0, 1]], dtype=cp.uint8)
>>> opening(bad_connection, square(3))
array([[0, 0, 0, 0, 0],
       [1, 1, 0, 1, 1],
       [1, 1, 0, 1, 1],
       [1, 1, 0, 1, 1],
       [0, 0, 0, 0, 0]], dtype=uint8)
cucim.skimage.morphology.reconstruction(seed, mask, method='dilation', footprint=None, offset=None)#

Perform a morphological reconstruction of an image.

Morphological reconstruction by dilation is similar to basic morphological dilation: high-intensity values will replace nearby low-intensity values. The basic dilation operator, however, uses a footprint to determine how far a value in the input image can spread. In contrast, reconstruction uses two images: a “seed” image, which specifies the values that spread, and a “mask” image, which gives the maximum allowed value at each pixel. The mask image, like the footprint, limits the spread of high-intensity values. Reconstruction by erosion is simply the inverse: low-intensity values spread from the seed image and are limited by the mask image, which represents the minimum allowed value.

Alternatively, you can think of reconstruction as a way to isolate the connected regions of an image. For dilation, reconstruction connects regions marked by local maxima in the seed image: neighboring pixels less-than-or-equal-to those seeds are connected to the seeded region. Local maxima with values larger than the seed image will get truncated to the seed value.

Parameters
seed : ndarray

The seed image (a.k.a. marker image), which specifies the values that are dilated or eroded.

mask : ndarray

The maximum (dilation) / minimum (erosion) allowed value at each pixel.

method : {‘dilation’, ‘erosion’}, optional

Perform reconstruction by dilation or erosion. In dilation (or erosion), the seed image is dilated (or eroded) until limited by the mask image. For dilation, each seed value must be less than or equal to the corresponding mask value; for erosion, the reverse is true. Default is ‘dilation’.

footprint : ndarray, optional

The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the n-D square of radius equal to 1 (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.)

offset : ndarray, optional

The coordinates of the center of the footprint. The default is the geometrical center of the footprint; in that case, the footprint dimensions must be odd.

Returns
reconstructed : ndarray

The result of morphological reconstruction.

The result of morphological reconstruction.

Notes

The algorithm is taken from [1]. Applications for grayscale reconstruction are discussed in [2] and [3].

References

1

Robinson, “Efficient morphological reconstruction: a downhill filter”, Pattern Recognition Letters 25 (2004) 1759-1767.

2

Vincent, L., “Morphological Grayscale Reconstruction in Image Analysis: Applications and Efficient Algorithms”, IEEE Transactions on Image Processing (1993)

3

Soille, P., “Morphological Image Analysis: Principles and Applications”, Chapter 6, 2nd edition (2003), ISBN 3540429883.

Examples

>>> import cupy as cp
>>> from cucim.skimage.morphology import reconstruction

First, we create a sinusoidal mask image with peaks at middle and ends.

>>> x = cp.linspace(0, 4 * cp.pi)
>>> y_mask = cp.cos(x)

Then, we create a seed image initialized to the minimum mask value (for reconstruction by dilation, min-intensity values don’t spread) and add “seeds” to the left and right peak, but at a fraction of peak value (1).

>>> y_seed = y_mask.min() * cp.ones_like(x)
>>> y_seed[0] = 0.5
>>> y_seed[-1] = 0
>>> y_rec = reconstruction(y_seed, y_mask)

The reconstructed image (or curve, in this case) is exactly the same as the mask image, except that the peaks are truncated to 0.5 and 0. The middle peak disappears completely: Since there were no seed values in this peak region, its reconstructed value is truncated to the surrounding value (-1).

As a more practical example, we try to extract the bright features of an image by subtracting a background image created by reconstruction.

>>> y, x = cp.mgrid[:20:0.5, :20:0.5]
>>> bumps = cp.sin(x) + cp.sin(y)

To create the background image, set the mask image to the original image, and the seed image to the original image with an intensity offset, h.

>>> h = 0.3
>>> seed = bumps - h
>>> background = reconstruction(seed, bumps)

The resulting reconstructed image looks exactly like the original image, but with the peaks of the bumps cut off. Subtracting this reconstructed image from the original image leaves just the peaks of the bumps:

>>> hdome = bumps - background

This operation is known as the h-dome of the image and leaves features of height h in the subtracted image.

cucim.skimage.morphology.rectangle(nrows, ncols, dtype=None, *, decomposition=None)#

Generates a flat, rectangular-shaped footprint.

Every pixel in the rectangle generated for a given width and given height belongs to the neighborhood.

Parameters
nrows : int

The number of rows of the rectangle.

ncols : int

The number of columns of the rectangle.

Returns
footprint : cupy.ndarray

A footprint consisting only of ones, i.e. every pixel belongs to the neighborhood. When decomposition is None, this is just a cupy.ndarray. Otherwise, this will be a tuple whose length is equal to the number of unique structuring elements to apply (see Notes for more detail).

Other Parameters
dtype : data-type or None, optional

The data type of the footprint. When None, a tuple will be returned in place of the actual footprint array. This can be passed to grayscale and binary morphology functions in place of an explicit array to avoid array allocation overhead.

decomposition : {None, ‘separable’, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give an identical result to a single, larger footprint, but often with better computational performance. See Notes for more details. With ‘separable’, this function uses separable 1D footprints for each axis. Whether ‘sequence’ or ‘separable’ is computationally faster may be architecture-dependent.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

For binary morphology, using decomposition='sequence' was observed to give better performance, with the magnitude of the performance increase rapidly increasing with footprint size. For grayscale morphology with rectangular footprints, it is recommended to use decomposition=None since the internal SciPy functions that are called already have a fast implementation based on separable 1D sliding windows.

The sequence decomposition mode only supports odd valued nrows and ncols. If either nrows or ncols is even, the sequence used will be identical to decomposition='separable'.

  • The use of width and height has been deprecated in version 0.18.0. Use nrows and ncols instead.
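
Examples

A minimal usage sketch. An explicit dtype is passed because, per the dtype description above, dtype=None returns a footprint tuple rather than an array:

>>> import cupy as cp
>>> from cucim.skimage.morphology import rectangle
>>> rectangle(2, 3, dtype=cp.uint8)
array([[1, 1, 1],
       [1, 1, 1]], dtype=uint8)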

cucim.skimage.morphology.remove_small_holes(ar, area_threshold=64, connectivity=1, *, out=None)#

Remove contiguous holes smaller than the specified size.

Parameters
ar : ndarray (arbitrary shape, int or bool type)

The array containing the connected components of interest.

area_threshold : int, optional (default: 64)

The maximum area, in pixels, of a contiguous hole that will be filled. Replaces min_size.

connectivity : int, {1, 2, …, ar.ndim}, optional (default: 1)

The connectivity defining the neighborhood of a pixel.

out : ndarray

Array of the same shape as ar and bool dtype, into which the output is placed. By default, a new array is created.

Returns
out : ndarray, same shape and type as input ar

The input array with small holes within connected components removed.

The input array with small holes within connected components removed.

Raises
TypeError

If the input array is of an invalid type, such as float or string.

ValueError

If the input array contains negative values.

Notes

If the array type is int, it is assumed that it contains already-labeled objects. The labels are not kept in the output image (this function always outputs a bool image). It is suggested that labeling is completed after using this function.

Examples

>>> import cupy as cp
>>> from cucim.skimage import morphology
>>> a = cp.array([[1, 1, 1, 1, 1, 0],
...               [1, 1, 1, 0, 1, 0],
...               [1, 0, 0, 1, 1, 0],
...               [1, 1, 1, 1, 1, 0]], bool)
>>> b = morphology.remove_small_holes(a, 2)
>>> b
array([[ True,  True,  True,  True,  True, False],
       [ True,  True,  True,  True,  True, False],
       [ True, False, False,  True,  True, False],
       [ True,  True,  True,  True,  True, False]])
>>> c = morphology.remove_small_holes(a, 2, connectivity=2)
>>> c
array([[ True,  True,  True,  True,  True, False],
       [ True,  True,  True, False,  True, False],
       [ True, False, False,  True,  True, False],
       [ True,  True,  True,  True,  True, False]])
>>> d = morphology.remove_small_holes(a, 2, out=a)
>>> d is a
True
cucim.skimage.morphology.remove_small_objects(ar, min_size=64, connectivity=1, *, out=None)#

Remove objects smaller than the specified size.

Expects ar to be an array with labeled objects, and removes objects smaller than min_size. If ar is bool, the image is first labeled. This leads to potentially different behavior for bool and 0-and-1 arrays.

Parameters
ar : ndarray (arbitrary shape, int or bool type)

The array containing the objects of interest. If the array type is int, the ints must be non-negative.

min_size : int, optional (default: 64)

The smallest allowable object size.

connectivity : int, {1, 2, …, ar.ndim}, optional (default: 1)

The connectivity defining the neighborhood of a pixel. Used during labelling if ar is bool.

out : ndarray

Array of the same shape as ar, into which the output is placed. By default, a new array is created.

Returns
out : ndarray, same shape and type as input ar

The input array with small connected components removed.

The input array with small connected components removed.

Raises
TypeError

If the input array is of an invalid type, such as float or string.

ValueError

If the input array contains negative values.

Examples

>>> import cupy as cp
>>> from cucim.skimage import morphology
>>> a = cp.array([[0, 0, 0, 1, 0],
...               [1, 1, 1, 0, 0],
...               [1, 1, 1, 0, 1]], bool)
>>> b = morphology.remove_small_objects(a, 6)
>>> b
array([[False, False, False, False, False],
       [ True,  True,  True, False, False],
       [ True,  True,  True, False, False]])
>>> c = morphology.remove_small_objects(a, 7, connectivity=2)
>>> c
array([[False, False, False,  True, False],
       [ True,  True,  True, False, False],
       [ True,  True,  True, False, False]])
>>> d = morphology.remove_small_objects(a, 6, out=a)
>>> d is a
True
cucim.skimage.morphology.square(width, dtype=None, *, decomposition=None)#

Generates a flat, square-shaped footprint.

Every pixel along the perimeter has a chessboard distance no greater than radius (radius=floor(width/2)) pixels.

Parameters
width : int

The width and height of the square.

Returns
footprint : cupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise. When decomposition is None, this is just a cupy.ndarray. Otherwise, this will be a tuple whose length is equal to the number of unique structuring elements to apply (see Notes for more detail).

Other Parameters
dtype : data-type or None, optional

The data type of the footprint. When None, a tuple will be returned in place of the actual footprint array. This can be passed to grayscale and binary morphology functions in place of an explicit array to avoid array allocation overhead.

decomposition : {None, ‘separable’, ‘sequence’}, optional

If None, a single array is returned. For ‘sequence’, a tuple of smaller footprints is returned. Applying this series of smaller footprints will give an identical result to a single, larger footprint, but often with better computational performance. See Notes for more details. With ‘separable’, this function uses separable 1D footprints for each axis. Whether ‘sequence’ or ‘separable’ is computationally faster may be architecture-dependent.

Notes

When decomposition is not None, each element of the footprint tuple is a 2-tuple of the form (ndarray, num_iter) that specifies a footprint array and the number of iterations it is to be applied.

For binary morphology, using decomposition='sequence' or decomposition='separable' were observed to give better performance than decomposition=None, with the magnitude of the performance increase rapidly increasing with footprint size. For grayscale morphology with square footprints, it is recommended to use decomposition=None since the internal SciPy functions that are called already have a fast implementation based on separable 1D sliding windows.

The ‘sequence’ decomposition mode only supports odd valued width. If width is even, the sequence used will be identical to the ‘separable’ mode.
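
Examples

A minimal usage sketch; as with rectangle(), an explicit dtype is passed because dtype=None returns a footprint tuple rather than an array:

>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> square(3, dtype=cp.uint8)
array([[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]], dtype=uint8)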

cucim.skimage.morphology.star(a, dtype=<class 'numpy.uint8'>)#

Generates a star shaped footprint.

The star has 8 vertices and is an overlap of a square of size 2*a + 1 with its 45-degree rotated version. The slanted sides are 45 or 135 degrees to the horizontal axis.

Parameters
a : int

Parameter deciding the size of the star structural element. The side of the square array returned is 2*a + 1 + 2*floor(a / 2).

Returns
footprint : cupy.ndarray

The footprint where elements of the neighborhood are 1 and 0 otherwise.

Other Parameters
dtype : data-type, optional

The data type of the footprint.
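
Examples

A minimal sketch based on the size relation stated above (side length 2*a + 1 + 2*floor(a / 2)):

>>> from cucim.skimage.morphology import star
>>> star(2).shape
(7, 7)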

cucim.skimage.morphology.thin(image, max_num_iter=None)#

Perform morphological thinning of a binary image.

Parameters
image : binary (M, N) ndarray

The image to thin. If this input isn’t already a binary image, it gets converted into one: In this case, zero values are considered background (False), nonzero values are considered foreground (True).

max_num_iter : int, number of iterations, optional

Regardless of the value of this parameter, the thinned image is returned immediately if an iteration produces no change. If this parameter is specified it thus sets an upper bound on the number of iterations performed.

Returns
out : ndarray of bool

Thinned image.

See also

medial_axis

Notes

This algorithm [1] works by making multiple passes over the image, removing pixels matching a set of criteria designed to thin connected regions while preserving eight-connected components and 2 x 2 squares [2]. In each of the two sub-iterations the algorithm correlates the intermediate skeleton image with a neighborhood mask, then looks up each neighborhood in a lookup table indicating whether the central pixel should be deleted in that sub-iteration.

References

1

Z. Guo and R. W. Hall, “Parallel thinning with two-subiteration algorithms,” Comm. ACM, vol. 32, no. 3, pp. 359-373, 1989. DOI:10.1145/62065.62074

2

Lam, L., Seong-Whan Lee, and Ching Y. Suen, “Thinning Methodologies-A Comprehensive Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 14, No. 9, p. 879, 1992. DOI:10.1109/34.161346

Examples

>>> import cupy as cp
>>> square = cp.zeros((7, 7), dtype=bool)
>>> square[1:-1, 2:-2] = 1
>>> square[0, 1] =  1
>>> square.view(cp.uint8)
array([[0, 1, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> skel = thin(square)
>>> skel.view(cp.uint8)
array([[0, 1, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
cucim.skimage.morphology.white_tophat(image, footprint=None, out=None, *, mode='reflect', cval=0.0)#

Return white top hat of an image.

The white top hat of an image is defined as the image minus its morphological opening. This operation returns the bright spots of the image that are smaller than the footprint.

Parameters
image : cupy.ndarray

Image array.

footprint : cupy.ndarray, optional

The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped footprint (connectivity=1). The footprint can also be provided as a sequence of smaller footprints as described in the notes below.

out : cupy.ndarray, optional

The array to store the result of the morphology. If None is passed, a new array will be allocated.

mode : str, optional

The mode parameter determines how the array borders are handled. Valid modes are: ‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’, ‘max’, ‘min’, or ‘ignore’. If ‘max’ or ‘ignore’, pixels outside the image domain are assumed to be the maximum for the image’s dtype, which causes them to not influence the result. Default is ‘reflect’.

cval : scalar, optional

Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

New in version 24.06: mode and cval were added in 24.06.

Returns
out : cupy.ndarray, same shape and type as image

The result of the morphological white top hat.

The result of the morphological white top hat.

See also

black_tophat

Notes

The footprint can also be provided as a sequence of 2-tuples where the first element of each 2-tuple is a footprint ndarray and the second element is an integer describing the number of times it should be iterated. For example footprint=[(cp.ones((9, 1)), 1), (cp.ones((1, 9)), 1)] would apply a 9x1 footprint followed by a 1x9 footprint resulting in a net effect that is the same as footprint=cp.ones((9, 9)), but with lower computational cost. Most of the builtin footprints such as skimage.morphology.disk() provide an option to automatically generate a footprint sequence of this type.

References

1

https://en.wikipedia.org/wiki/Top-hat_transform

Examples

>>> # Subtract grey background from bright peak
>>> import cupy as cp
>>> from cucim.skimage.morphology import square
>>> bright_on_grey = cp.asarray([[2, 3, 3, 3, 2],
...                              [3, 4, 5, 4, 3],
...                              [3, 5, 9, 5, 3],
...                              [3, 4, 5, 4, 3],
...                              [2, 3, 3, 3, 2]], dtype=cp.uint8)
>>> white_tophat(bright_on_grey, square(3))
array([[0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 1, 5, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)

registration#

cucim.skimage.registration.optical_flow_ilk(reference_image, moving_image, *, radius=7, num_warp=10, gaussian=False, prefilter=False, dtype=<class 'numpy.float32'>)#

Coarse to fine optical flow estimator.

The iterative Lucas-Kanade (iLK) solver is applied at each level of the image pyramid. iLK [1] is a fast and robust alternative to the TV-L1 algorithm, although it is less accurate for rendering flat surfaces and object boundaries (see [2]).

Parameters
reference_image : ndarray, shape (M, N[, P[, …]])

The first grayscale image of the sequence.

moving_image : ndarray, shape (M, N[, P[, …]])

The second grayscale image of the sequence.

radius : int, optional

Radius of the window considered around each pixel.

num_warp : int, optional

Number of times moving_image is warped.

gaussian : bool, optional

If True, a Gaussian kernel is used for the local integration. Otherwise, a uniform kernel is used.

prefilter : bool, optional

Whether to prefilter the estimated optical flow before each image warp. When True, a median filter with window size 3 along each axis is applied. This helps to remove potential outliers.

dtype : dtype, optional

Output data type: must be floating point. Single precision provides good results and saves memory usage and computation time compared to double precision.

Returns
flow : ndarray, shape (reference_image.ndim, M, N[, P[, …]])

The estimated optical flow components for each axis.

Notes

  • The implemented algorithm is described in Table 2 of [1].

  • Color images are not supported.

References

1(1,2)

Le Besnerais, G., & Champagnat, F. (2005, September). Dense optical flow by iterative local window registration. In IEEE International Conference on Image Processing 2005 (Vol. 1, pp. I-137). IEEE. DOI:10.1109/ICIP.2005.1529706

2

Plyer, A., Le Besnerais, G., & Champagnat, F. (2016). Massively parallel Lucas Kanade optical flow for real-time video processing applications. Journal of Real-Time Image Processing, 11(4), 713-730. DOI:10.1007/s11554-014-0423-0

Examples

>>> import cupy as cp
>>> from skimage.data import stereo_motorcycle
>>> from cucim.skimage.color import rgb2gray
>>> from cucim.skimage.registration import optical_flow_ilk
>>> reference_image, moving_image, disp = map(cp.array, stereo_motorcycle())
>>> # --- Convert the images to gray level: color is not supported.
>>> reference_image = rgb2gray(reference_image)
>>> moving_image = rgb2gray(moving_image)
>>> flow = optical_flow_ilk(moving_image, reference_image)
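
The estimated flow can be used to warp moving_image toward reference_image; a sketch assuming cucim.skimage.transform.warp behaves like its scikit-image counterpart:

>>> from cucim.skimage.transform import warp
>>> nr, nc = reference_image.shape
>>> row_coords, col_coords = cp.meshgrid(cp.arange(nr), cp.arange(nc),
...                                      indexing='ij')
>>> v, u = flow  # flow along rows (v) and columns (u)
>>> registered = warp(moving_image,
...                   cp.array([row_coords + v, col_coords + u]), mode='edge')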
cucim.skimage.registration.optical_flow_tvl1(reference_image, moving_image, *, attachment=15, tightness=0.3, num_warp=5, num_iter=10, tol=0.0001, prefilter=False, dtype=<class 'numpy.float32'>)#

Coarse to fine optical flow estimator.

The TV-L1 solver is applied at each level of the image pyramid. TV-L1 is a popular algorithm for optical flow estimation introduced by Zach et al. [1], improved in [2] and detailed in [3].

Parameters
reference_image : ndarray, shape (M, N[, P[, …]])

The first grayscale image of the sequence.

moving_image : ndarray, shape (M, N[, P[, …]])

The second grayscale image of the sequence.

attachment : float, optional

Attachment parameter (\(\lambda\) in [1]). The smaller this parameter is, the smoother the returned result will be.

tightness : float, optional

Tightness parameter (\(\theta\) in [1]). It should have a small value in order to maintain attachment and regularization parts in correspondence.

num_warp : int, optional

Number of times moving_image is warped.

num_iter : int, optional

Number of fixed-point iterations.

tol : float, optional

Tolerance used as stopping criterion based on the L² distance between two consecutive values of (u, v).

prefilter : bool, optional

Whether to prefilter the estimated optical flow before each image warp. When True, a median filter with window size 3 along each axis is applied. This helps to remove potential outliers.

dtype : dtype, optional

Output data type: must be floating point. Single precision provides good results and saves memory usage and computation time compared to double precision.

Returns
flow : ndarray, shape (reference_image.ndim, M, N[, P[, …]])

The estimated optical flow components for each axis.

Notes

Color images are not supported.

References

1(1,2,3)

Zach, C., Pock, T., & Bischof, H. (2007, September). A duality based approach for realtime TV-L 1 optical flow. In Joint pattern recognition symposium (pp. 214-223). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-540-74936-3_22

2

Wedel, A., Pock, T., Zach, C., Bischof, H., & Cremers, D. (2009). An improved algorithm for TV-L 1 optical flow. In Statistical and geometrical approaches to visual motion analysis (pp. 23-45). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-642-03061-1_2

3

Pérez, J. S., Meinhardt-Llopis, E., & Facciolo, G. (2013). TV-L1 optical flow estimation. Image Processing On Line, 2013, 137-150. DOI:10.5201/ipol.2013.26

Examples

>>> import cupy as cp
>>> from cucim.skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from cucim.skimage.registration import optical_flow_tvl1
>>> image0, image1, disp = [cp.array(a) for a in stereo_motorcycle()]
>>> # --- Convert the images to gray level: color is not supported.
>>> image0 = rgb2gray(image0)
>>> image1 = rgb2gray(image1)
>>> flow = optical_flow_tvl1(image1, image0)
cucim.skimage.registration.phase_cross_correlation(reference_image, moving_image, *, upsample_factor=1, space='real', disambiguate=False, reference_mask=None, moving_mask=None, overlap_ratio=0.3, normalization='phase')#

Efficient subpixel image translation registration by cross-correlation.

This code gives the same precision as the FFT upsampled cross-correlation in a fraction of the computation time and with reduced memory requirements. It obtains an initial estimate of the cross-correlation peak by an FFT and then refines the shift estimation by upsampling the DFT only in a small neighborhood of that estimate by means of a matrix-multiply DFT [1].

Parameters
reference_image : array

Reference image.

moving_image : array

Image to register. Must be same dimensionality as reference_image.

upsample_factor : int, optional

Upsampling factor. Images will be registered to within 1 / upsample_factor of a pixel. For example upsample_factor == 20 means the images will be registered within 1/20th of a pixel. Default is 1 (no upsampling). Not used if either reference_mask or moving_mask is not None.

space : string, one of “real” or “fourier”, optional

Defines how the algorithm interprets input data. “real” means data will be FFT’d to compute the correlation, while “fourier” data will bypass FFT of input data. Case insensitive. Not used if either reference_mask or moving_mask is not None.

disambiguate : bool

The shift returned by this function is only accurate modulo the image shape, due to the periodic nature of the Fourier transform. If this parameter is set to True, the real space cross-correlation is computed for each possible shift, and the shift with the highest cross-correlation within the overlapping area is returned.

reference_mask : ndarray

Boolean mask for reference_image. The mask should evaluate to True (or 1) on valid pixels. reference_mask should have the same shape as reference_image.

moving_mask : ndarray or None, optional

Boolean mask for moving_image. The mask should evaluate to True (or 1) on valid pixels. moving_mask should have the same shape as moving_image. If None, reference_mask will be used.

overlap_ratio : float, optional

Minimum allowed overlap ratio between images. The correlation for translations corresponding with an overlap ratio lower than this threshold will be ignored. A lower overlap_ratio leads to smaller maximum translation, while a higher overlap_ratio leads to greater robustness against spurious matches due to small overlap between masked images. Used only if one of reference_mask or moving_mask is not None.

normalization : {“phase”, None}

The type of normalization to apply to the cross-correlation. This parameter is unused when masks (reference_mask and moving_mask) are supplied.

Returns
shift : tuple

Shift vector (in pixels) required to register moving_image with reference_image. Axis ordering is consistent with the axis order of the input array.

error : float

Translation invariant normalized RMS error between reference_image and moving_image. For masked cross-correlation this error is not available and NaN is returned.

phasediff : float

Global phase difference between the two images (should be zero if images are non-negative). For masked cross-correlation this phase difference is not available and NaN is returned.

Notes

The use of cross-correlation to estimate image translation has a long history dating back to at least [2]. The “phase correlation” method (selected by normalization="phase") was first proposed in [3]. Publications [1] and [2] use an unnormalized cross-correlation (normalization=None). Which form of normalization is better is application-dependent. For example, the phase correlation method works well in registering images under different illumination, but is not very robust to noise. In a high noise scenario, the unnormalized method may be preferable.

When masks are provided, a masked normalized cross-correlation algorithm is used [5], [6].

References

1(1,2)

Manuel Guizar-Sicairos, Samuel T. Thurman, and James R. Fienup, “Efficient subpixel image registration algorithms,” Optics Letters 33, 156-158 (2008). DOI:10.1364/OL.33.000156

2(1,2)

P. Anuta, Spatial registration of multispectral and multitemporal digital imagery using fast Fourier transform techniques, IEEE Trans. Geosci. Electron., vol. 8, no. 4, pp. 353–368, Oct. 1970. DOI:10.1109/TGE.1970.271435.

3

C. D. Kuglin D. C. Hines. The phase correlation image alignment method, Proceeding of IEEE International Conference on Cybernetics and Society, pp. 163-165, New York, NY, USA, 1975, pp. 163–165.

4

James R. Fienup, “Invariant error metrics for image reconstruction”, Applied Optics 36, 8352-8357 (1997). DOI:10.1364/AO.36.008352

5

Dirk Padfield. Masked Object Registration in the Fourier Domain. IEEE Transactions on Image Processing, vol. 21(5), pp. 2706-2718 (2012). DOI:10.1109/TIP.2011.2181402

6

D. Padfield. “Masked FFT registration”. In Proc. Computer Vision and Pattern Recognition, pp. 2918-2925 (2010). DOI:10.1109/CVPR.2010.5540032
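
Examples

A minimal sketch for a pure integer translation of a periodic image; by the definition of shift above, rolling moving_image by the returned vector should recover the reference (the values here are illustrative, not taken from the library’s test suite):

>>> import cupy as cp
>>> from cucim.skimage.registration import phase_cross_correlation
>>> reference = cp.zeros((32, 32))
>>> reference[10:14, 12:16] = 1.0
>>> moving = cp.roll(reference, shift=(3, -2), axis=(0, 1))
>>> shift, error, phasediff = phase_cross_correlation(reference, moving)
>>> shift = tuple(int(s) for s in cp.asnumpy(shift))
>>> bool((cp.roll(moving, shift, axis=(0, 1)) == reference).all())
True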

restoration#

cucim.skimage.restoration.calibrate_denoiser(image, denoise_function, denoise_parameters, *, stride=4, approximate_loss=True, extra_output=False)#

Calibrate a denoising function and return optimal J-invariant version.

The returned function is partially evaluated with optimal parameter values set for denoising the input image.

Parameters
imagendarray

Input data to be denoised (converted using img_as_float).

denoise_functionfunction

Denoising function to be calibrated.

denoise_parametersdict of list

Ranges of parameters for denoise_function to be calibrated over.

strideint, optional

Stride used in masking procedure that converts denoise_function to J-invariance.

approximate_lossbool, optional

Whether to approximate the self-supervised loss used to evaluate the denoiser by only computing it on one masked version of the image. If False, the runtime will be a factor of stride**image.ndim longer.

extra_outputbool, optional

If True, return parameters and losses in addition to the calibrated denoising function

Returns
best_denoise_functionfunction

The optimal J-invariant version of denoise_function.

If extra_output is True, the following tuple is also returned:
(parameters_tested, losses)tuple (list of dict, list of float)

List of parameters tested for denoise_function, as a dictionary of kwargs, and the self-supervised loss for each set of parameters in parameters_tested.

Notes

The calibration procedure uses a self-supervised mean-square-error loss to evaluate the performance of J-invariant versions of denoise_function. The minimizer of the self-supervised loss is also the minimizer of the ground-truth loss (i.e., the true MSE error) [1]. The returned function can be used on the original noisy image, or other images with similar characteristics.

Increasing the stride increases the performance of best_denoise_function at the expense of increasing its runtime. It has no effect on the runtime of the calibration.

References

1

J. Batson & L. Royer. Noise2Self: Blind Denoising by Self-Supervision, International Conference on Machine Learning, p. 524-533 (2019).

Examples

>>> import cupy as cp
>>> from cucim.skimage import color
>>> from skimage import data
>>> from cucim.skimage.restoration import (denoise_tv_chambolle,
...                                          calibrate_denoiser)
>>> img = color.rgb2gray(cp.array(data.astronaut()[:50, :50]))
>>> noisy = img + 0.5 * img.std() * cp.random.randn(*img.shape)
>>> parameters = {'weight': cp.arange(0.01, 0.3, 0.02)}
>>> denoising_function = calibrate_denoiser(noisy, denoise_tv_chambolle,
...                                         denoise_parameters=parameters)
>>> denoised_img = denoising_function(noisy)
cucim.skimage.restoration.denoise_invariant(image, denoise_function, *, stride=4, masks=None, denoiser_kwargs=None)#

Apply a J-invariant version of denoise_function.

Parameters
imagendarray (M[, N[, …]][, C]) of ints, uints or floats

Input data to be denoised. image can be of any numeric type, but it is cast into a ndarray of floats (using img_as_float) for the computation of the denoised image.

denoise_functionfunction

Original denoising function.

strideint, optional

Stride used in masking procedure that converts denoise_function to J-invariance.

maskslist of ndarray, optional

Set of masks to use for computing J-invariant output. If None, a full set of masks covering the image will be used.

denoiser_kwargsdict, optional

Keyword arguments passed to denoise_function.

Returns
outputndarray

Denoised image, of same shape as image.

Notes

A denoising function is J-invariant if the prediction it makes for each pixel does not depend on the value of that pixel in the original image. The prediction for each pixel may instead use all the relevant information contained in the rest of the image, which is typically quite significant. Any function can be converted into a J-invariant one using a simple masking procedure, as described in [1].

The pixel-wise error of a J-invariant denoiser is uncorrelated with the noise, so long as the noise in each pixel is independent. Consequently, the average difference between the denoised image and the noisy image, the self-supervised loss, is the same as the difference between the denoised image and the original clean image, the ground-truth loss (up to a constant).

This means that the best J-invariant denoiser for a given image can be found using the noisy data alone, by selecting the denoiser minimizing the self-supervised loss.

References

1

J. Batson & L. Royer. Noise2Self: Blind Denoising by Self-Supervision, International Conference on Machine Learning, p. 524-533 (2019).

Examples

>>> import cucim.skimage
>>> import cupy as cp
>>> import skimage
>>> from cucim.skimage.restoration import denoise_invariant, denoise_tv_chambolle
>>> image = cucim.skimage.util.img_as_float(cp.asarray(skimage.data.chelsea()))
>>> noisy = cucim.skimage.util.random_noise(image, var=0.2 ** 2)
>>> denoised = denoise_invariant(noisy, denoise_function=denoise_tv_chambolle)
cucim.skimage.restoration.denoise_tv_chambolle(image, weight=0.1, eps=0.0002, max_num_iter=200, *, channel_axis=None)#

Perform total variation denoising in nD.

Given \(f\), a noisy image (input data), total variation denoising (also known as total variation regularization) aims to find an image \(u\) with less total variation than \(f\), under the constraint that \(u\) remain similar to \(f\). This can be expressed by the Rudin–Osher–Fatemi (ROF) minimization problem:

\[\min_{u} \sum_{i=0}^{N-1} \left( \left| \nabla{u_i} \right| + \frac{\lambda}{2}(f_i - u_i)^2 \right)\]

where \(\lambda\) is a positive parameter. The first term of this cost function is the total variation; the second term represents data fidelity. As \(\lambda \to 0\), the total variation term dominates, forcing the solution to have smaller total variation, at the expense of looking less like the input data.

This code is an implementation of the algorithm proposed by Chambolle in [1] to solve the ROF problem.

Parameters
imagendarray

Input image to be denoised. If its dtype is not float, it gets converted with img_as_float().

weightfloat, optional

Denoising weight. It is equal to \(\frac{1}{\lambda}\). Therefore, the greater the weight, the more denoising (at the expense of fidelity to the input image).

epsfloat, optional

Tolerance \(\varepsilon > 0\) for the stopping criterion, which compares the absolute value of the relative difference of the cost function \(E\): the algorithm stops when \(|E_{n-1} - E_n| < \varepsilon * E_0\).

max_num_iterint, optional

Maximal number of iterations used for the optimization.

channel_axisint or None, optional

If None, the image is assumed to be grayscale (single-channel). Otherwise, this parameter indicates which axis of the array corresponds to channels.

New in version 0.19: channel_axis was added in 0.19.

Returns
undarray

Denoised image.

See also

denoise_tv_bregman

Perform total variation denoising using split-Bregman optimization.

Notes

Make sure to set the channel_axis parameter appropriately for color images.

The principle of total variation denoising is explained in [2]. It is about minimizing the total variation of an image, which can be roughly described as the integral of the norm of the image gradient. Total variation denoising tends to produce cartoon-like images, that is, piecewise-constant images.

References

1

A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, Springer, 2004, 20, 89-97.

2

https://en.wikipedia.org/wiki/Total_variation_denoising

Examples

2D example on astronaut image:

>>> import cupy as cp
>>> from cucim.skimage import color
>>> from skimage import data
>>> img = color.rgb2gray(cp.array(data.astronaut()[:50, :50]))
>>> img += 0.5 * img.std() * cp.random.randn(*img.shape)
>>> denoised_img = denoise_tv_chambolle(img, weight=60)

3D example on synthetic data:

>>> x, y, z = cp.ogrid[0:20, 0:20, 0:20]
>>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2
>>> mask = mask.astype(float)
>>> mask += 0.2*cp.random.randn(*mask.shape)
>>> res = denoise_tv_chambolle(mask, weight=100)
cucim.skimage.restoration.richardson_lucy(image, psf, num_iter=50, clip=True, filter_epsilon=None)#

Richardson-Lucy deconvolution.

Parameters
imagendarray

Input degraded image (can be n-dimensional).

psfndarray

The point spread function.

num_iterint, optional

Number of iterations. This parameter plays the role of regularisation.

clipboolean, optional

True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.

filter_epsilon: float, optional

Value below which intermediate results become 0 to avoid division by small numbers.

Returns
im_deconvndarray

The deconvolved image.

References

1

https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution

Examples

>>> import cupy as cp
>>> from cucim.skimage import img_as_float, restoration
>>> from skimage import data
>>> camera = img_as_float(cp.array(data.camera()))
>>> from cupyx.scipy.signal import convolve2d
>>> psf = cp.ones((5, 5)) / 25
>>> camera = convolve2d(camera, psf, 'same')
>>> camera += 0.1 * camera.std() * cp.random.standard_normal(camera.shape)
>>> deconvolved = restoration.richardson_lucy(camera, psf, 5)
cucim.skimage.restoration.unsupervised_wiener(image, psf, reg=None, user_params=None, is_real=True, clip=True, *, rng=None, random_state=<DEPRECATED>, seed=<DEPRECATED>)#

Unsupervised Wiener-Hunt deconvolution.

Return the deconvolution with a Wiener-Hunt approach, where the hyperparameters are automatically estimated. The algorithm is a stochastic iterative process (Gibbs sampler) described in the reference below. See also wiener function.

Parameters
image(M, N) ndarray

The input degraded image.

psfndarray

The impulse response (input image’s space) or the transfer function (Fourier space). Both are accepted. The transfer function is automatically recognized as being complex (cupy.iscomplexobj(psf)).

regndarray, optional

The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf.

user_paramsdict, optional

Dictionary of parameters for the Gibbs sampler. See below.

clipboolean, optional

True by default. If true, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.

rng{cupy.random.Generator, int}, optional

Pseudo-random number generator. By default, a PCG64 generator is used (see cupy.random.default_rng()). If rng is an int, it is used to seed the generator.

Returns
x_postmean(M, N) ndarray

The deconvolved image (the posterior mean).

chainsdict

The keys noise and prior contain the chain list of noise and prior precision respectively.

Other Parameters
The keys of ``user_params`` are:
thresholdfloat

The stopping criterion: the norm of the difference between two successive approximated solutions (empirical mean of object samples, see Notes section). 1e-4 by default.

burninint

The number of samples to ignore before starting computation of the mean. 15 by default.

min_num_iterint

The minimum number of iterations. 30 by default.

max_num_iterint

The maximum number of iterations if threshold is not satisfied. 200 by default.

callbackcallable (None by default)

A user-provided callable that, if given, is called at each iteration with the current image sample. The user can store the sample or compute moments other than the mean. It has no influence on the algorithm execution and is only for inspection.

seedDEPRECATED

Deprecated in favor of rng.

Deprecated since version 23.08.00.

random_stateDEPRECATED

Deprecated in favor of rng.

Deprecated since version 23.08.00.
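
A brief sketch of supplying these keys, reusing img and psf from the Examples section below (the values shown are illustrative, not recommendations):

>>> # tune the Gibbs sampler via user_params
>>> user_params = {'burnin': 20, 'min_num_iter': 30, 'max_num_iter': 150}
>>> deconvolved, chains = restoration.unsupervised_wiener(
...     img, psf, user_params=user_params)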

Notes

The estimated image is designed as the posterior mean of a probability law (from a Bayesian analysis). The mean is defined as a sum over all the possible images weighted by their respective probability. Given the size of the problem, the exact sum is not tractable. This algorithm uses MCMC to draw images under the posterior law. The practical idea is to only draw highly probable images, since they have the biggest contribution to the mean. Conversely, the less probable images are drawn less often, since their contribution is low. Finally, the empirical mean of these samples gives an estimation of the mean, which would be exact with an infinite sample set.

References

1

François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010)

https://www.osapublishing.org/josaa/abstract.cfm?URI=josaa-27-7-1593

https://hal.archives-ouvertes.fr/hal-00674508

Examples

>>> import cupy as cp
>>> import cupyx.scipy.ndimage as ndi
>>> from cucim.skimage import color, restoration
>>> from skimage import data
>>> img = color.rgb2gray(cp.array(data.astronaut()))
>>> psf = cp.ones((5, 5)) / 25
>>> img = ndi.uniform_filter(img, size=psf.shape)
>>> rng = cp.random.default_rng()
>>> img += 0.1 * img.std() * rng.standard_normal(img.shape)
>>> deconvolved_img = restoration.unsupervised_wiener(img, psf)
cucim.skimage.restoration.wiener(image, psf, balance, reg=None, is_real=True, clip=True)#

Wiener-Hunt deconvolution

Return the deconvolution with a Wiener-Hunt approach (i.e. with Fourier diagonalisation).

Parameters
imagecp.ndarray

Input degraded image (can be n-dimensional).

psfndarray

Point Spread Function. This is assumed to be the impulse response (input image space) if the data-type is real, or the transfer function (Fourier space) if the data-type is complex. There are no constraints on the shape of the impulse response. The transfer function must be of shape (N1, N2, …, ND // 2 + 1) if is_real is True (see cp.fft.rfftn), or (N1, N2, …, ND) otherwise.

balancefloat

The regularisation parameter value that tunes the balance between the data adequacy, which improves frequency restoration, and the prior adequacy, which reduces frequency restoration (to avoid noise artifacts).

regndarray, optional

The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. Shape constraint is the same as for the psf parameter.

is_realboolean, optional

True by default. Specifies whether psf and reg are provided under the hermitian hypothesis, that is, with only half of the frequency plane (due to the redundancy of the Fourier transform of a real signal). It applies only if psf and/or reg are provided as a transfer function. For the hermitian property see the uft module or cupy.fft.rfftn.

clipboolean, optional

True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.

Returns
im_deconv(M, N) ndarray

The deconvolved image.

Notes

This function applies the Wiener filter to a noisy image degraded by an impulse response (or PSF). If the data model is

\[y = Hx + n\]

where \(n\) is noise, \(H\) the PSF and \(x\) the unknown original image, the Wiener filter is

\[\hat x = F^\dagger \left( |\Lambda_H|^2 + \lambda |\Lambda_D|^2 \right)^{-1} \Lambda_H^\dagger F y\]

where \(F\) and \(F^\dagger\) are the Fourier and inverse Fourier transforms respectively, \(\Lambda_H\) the transfer function (or the Fourier transform of the PSF, see [Hunt] below) and \(\Lambda_D\) the filter to penalize the restored image frequencies (Laplacian by default, that is penalization of high frequency). The parameter \(\lambda\) tunes the balance between the data (that tends to increase high frequency, even those coming from noise), and the regularization.

These methods are then specific to a prior model. Consequently, the application or the true image nature must correspond to the prior model. By default, the prior model (Laplacian) introduces image smoothness or pixel correlation. It can also be interpreted as high-frequency penalization to compensate for the instability of the solution with respect to the data (sometimes called noise amplification or “explosive” solution).

Finally, the use of Fourier space implies a circulant property of \(H\), see [2].

References

1

François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010)

https://www.osapublishing.org/josaa/abstract.cfm?URI=josaa-27-7-1593

https://hal.archives-ouvertes.fr/hal-00674508

2

B. R. Hunt “A matrix theory proof of the discrete convolution theorem”, IEEE Trans. on Audio and Electroacoustics, vol. au-19, no. 4, pp. 285-288, dec. 1971

Examples

>>> import cupy as cp
>>> import cupyx.scipy.ndimage as ndi
>>> from cucim.skimage import color, restoration
>>> from skimage import data
>>> img = color.rgb2gray(cp.array(data.astronaut()))
>>> psf = cp.ones((5, 5)) / 25
>>> img = ndi.uniform_filter(img, size=psf.shape)
>>> img += 0.1 * img.std() * cp.random.standard_normal(img.shape)
>>> deconvolved_img = restoration.wiener(img, psf, 0.1)

segmentation#

Algorithms to partition images into meaningful regions or boundaries.

cucim.skimage.segmentation.chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0, tol=0.001, max_num_iter=500, dt=0.5, init_level_set='checkerboard', extended_output=False)#

Chan-Vese segmentation algorithm.

Active contour model by evolving a level set. Can be used to segment objects without clearly defined boundaries.

Parameters
image(M, N) ndarray

Grayscale image to be segmented.

mufloat, optional

‘edge length’ weight parameter. Higher mu values will produce a ‘round’ edge, while values closer to zero will detect smaller objects.

lambda1float, optional

‘difference from average’ weight parameter for the output region with value ‘True’. If it is lower than lambda2, this region will have a larger range of values than the other.

lambda2float, optional

‘difference from average’ weight parameter for the output region with value ‘False’. If it is lower than lambda1, this region will have a larger range of values than the other.

tolfloat, positive, optional

Level set variation tolerance between iterations. If the L2 norm difference between the level sets of successive iterations normalized by the area of the image is below this value, the algorithm will assume that the solution was reached.

max_num_iteruint, optional

Maximum number of iterations allowed before the algorithm interrupts itself.

dtfloat, optional

A multiplication factor applied at calculations for each step, serves to accelerate the algorithm. While higher values may speed up the algorithm, they may also lead to convergence problems.

init_level_setstr or (M, N) ndarray, optional

Defines the starting level set used by the algorithm. If a string is given, a level set that matches the image size will automatically be generated. Alternatively, it is possible to define a custom level set, which should be an array of float values, with the same shape as ‘image’. Accepted string values are as follows.

‘checkerboard’

the starting level set is defined as sin(x/5*pi)*sin(y/5*pi), where x and y are pixel coordinates. This level set has fast convergence, but may fail to detect implicit edges.

‘disk’

the starting level set is defined as the opposite of the distance from the center of the image minus half of the minimum value between image width and image height. This is somewhat slower, but is more likely to properly detect implicit edges.

‘small disk’

the starting level set is defined as the opposite of the distance from the center of the image minus a quarter of the minimum value between image width and image height.

extended_outputbool, optional

If set to True, the return value will be a tuple containing the three return values (see below). If set to False, which is the default, only the ‘segmentation’ array will be returned.

Returns
segmentation(M, N) ndarray, bool

Segmentation produced by the algorithm.

phi(M, N) ndarray of floats

Final level set computed by the algorithm.

energieslist of floats

Shows the evolution of the ‘energy’ for each step of the algorithm. This allows one to check whether the algorithm converged.

Notes

The Chan-Vese Algorithm is designed to segment objects without clearly defined boundaries. This algorithm is based on level sets that are evolved iteratively to minimize an energy, which is defined by weighted values corresponding to the sum of differences in intensity from the average value outside the segmented region, the sum of differences from the average value inside the segmented region, and a term which is dependent on the length of the boundary of the segmented region.

This algorithm was first proposed by Tony Chan and Luminita Vese, in a publication entitled “An Active Contour Model Without Edges” [1].

This implementation of the algorithm is somewhat simplified in the sense that the area factor ‘nu’ described in the original paper is not implemented, and is only suitable for grayscale images.

Typical values for lambda1 and lambda2 are 1. If the ‘background’ is very different from the segmented object in terms of distribution (for example, a uniform black image with figures of varying intensity), then these values should be different from each other.

Typical values for mu are between 0 and 1, though higher values can be used when dealing with shapes with very ill-defined contours.

The ‘energy’ which this algorithm tries to minimize is defined as the sum of the squared differences from the average within the region, weighted by the ‘lambda’ factors, to which is added the length of the contour multiplied by the ‘mu’ factor.

Supports 2D grayscale images only, and does not implement the area term described in the original article.

References

1

An Active Contour Model without Edges, Tony Chan and Luminita Vese, Scale-Space Theories in Computer Vision, 1999, DOI:10.1007/3-540-48236-9_13

2

Chan-Vese Segmentation, Pascal Getreuer Image Processing On Line, 2 (2012), pp. 214-224, DOI:10.5201/ipol.2012.g-cv

3

The Chan-Vese Algorithm - Project Report, Rami Cohen, 2011 arXiv:1107.2782
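
Examples

A minimal usage sketch (the crop size and parameter values are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.segmentation import chan_vese
>>> from skimage import data
>>> image = cp.array(data.camera()[:128, :128]) / 255.0
>>> # request the level set and energies in addition to the segmentation
>>> segmentation, phi, energies = chan_vese(image, mu=0.25, max_num_iter=200,
...                                         extended_output=True)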

cucim.skimage.segmentation.checkerboard_level_set(image_shape, square_size=5)#

Create a checkerboard level set with binary values.

Parameters
image_shapetuple of positive integers

Shape of the image.

square_sizeint, optional

Size of the squares of the checkerboard. It defaults to 5.

Returns
outarray with shape image_shape

Binary level set of the checkerboard.

See also

disk_level_set
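
Examples

A minimal sketch (the shape and square size are illustrative):

>>> from cucim.skimage.segmentation import checkerboard_level_set
>>> # binary checkerboard pattern usable as an initial level set
>>> level_set = checkerboard_level_set((64, 64), square_size=8)
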
cucim.skimage.segmentation.clear_border(labels, buffer_size=0, bgval=0, mask=None, *, out=None)#

Clear objects connected to the label image border.

Parameters
labels(M[, N[, …, P]]) array of int or bool

Imaging data labels.

buffer_sizeint, optional

The width of the border examined. By default, only objects that touch the outside of the image are removed.

bgvalfloat or int, optional

Cleared objects are set to this value.

maskndarray of bool, same shape as image, optional.

Image data mask. Objects in labels image overlapping with False pixels of mask will be removed. If defined, the argument buffer_size will be ignored.

outndarray

Array of the same shape as labels, into which the output is placed. By default, a new array is created.

Returns
out(M[, N[, …, P]]) array

Imaging data labels with cleared borders

Examples

>>> import cupy as cp
>>> from cucim.skimage.segmentation import clear_border
>>> labels = cp.array([[0, 0, 0, 0, 0, 0, 0, 1, 0],
...                    [1, 1, 0, 0, 1, 0, 0, 1, 0],
...                    [1, 1, 0, 1, 0, 1, 0, 0, 0],
...                    [0, 0, 0, 1, 1, 1, 1, 0, 0],
...                    [0, 1, 1, 1, 1, 1, 1, 1, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> clear_border(labels)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> mask = cp.array([[0, 0, 1, 1, 1, 1, 1, 1, 1],
...                  [0, 0, 1, 1, 1, 1, 1, 1, 1],
...                  [1, 1, 1, 1, 1, 1, 1, 1, 1],
...                  [1, 1, 1, 1, 1, 1, 1, 1, 1],
...                  [1, 1, 1, 1, 1, 1, 1, 1, 1],
...                  [1, 1, 1, 1, 1, 1, 1, 1, 1]]).astype(bool)
>>> clear_border(labels, mask=mask)
array([[0, 0, 0, 0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 1, 0, 0, 1, 0],
       [0, 0, 0, 1, 0, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0]])
cucim.skimage.segmentation.disk_level_set(image_shape, *, center=None, radius=None)#

Create a disk level set with binary values.

Parameters
image_shapetuple of positive integers

Shape of the image

centertuple of positive integers, optional

Coordinates of the center of the disk given in (row, column). If not given, it defaults to the center of the image.

radiusfloat, optional

Radius of the disk. If not given, it is set to the 75% of the smallest image dimension.

Returns
outarray with shape image_shape

Binary level set of the disk with the given radius and center.
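
Examples

A minimal sketch (shape, center, and radius values are illustrative):

>>> from cucim.skimage.segmentation import disk_level_set
>>> # binary disk usable as an initial level set
>>> level_set = disk_level_set((64, 64), center=(32, 32), radius=16)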

cucim.skimage.segmentation.expand_labels(label_image, distance=1, spacing=1)#

Expand labels in label image by distance pixels without overlapping.

Given a label image, expand_labels grows label regions (connected components) outwards by up to distance units without overflowing into neighboring regions. More specifically, each background pixel that is within Euclidean distance of <= distance pixels of a connected component is assigned the label of that connected component. For anisotropic images, the spacing parameter specifies the pixel spacing used by the distance transform that computes the Euclidean distance. Where multiple connected components are within distance pixels of a background pixel, the label value of the closest connected component will be assigned (see Notes for the case of multiple labels at equal distance).

Parameters
label_imagendarray of dtype int

label image

distancefloat

Euclidean distance in pixels by which to grow the labels. Default is one.

spacingfloat, or sequence of float, optional

Spacing of elements along each dimension. If a sequence, must be of length equal to the input rank; if a single number, this is used for all axes. If not specified, a grid spacing of unity is implied.

Returns
enlarged_labelsndarray of dtype int

Labeled array, where all connected regions have been enlarged

Notes

Where labels are spaced more than distance pixels apart, this is equivalent to a morphological dilation with a disc or hyperball of radius distance. However, in contrast to a morphological dilation, expand_labels will not expand a label region into a neighboring region.

This implementation of expand_labels is derived from CellProfiler [1], where it is known as module “IdentifySecondaryObjects (Distance-N)” [2].

There is an important edge case when a pixel has the same distance to multiple regions, as it is not defined which region expands into that space. Here, the exact behavior depends on the upstream implementation of scipy.ndimage.distance_transform_edt.

References

1

https://cellprofiler.org

2

https://github.com/CellProfiler/CellProfiler

Examples

>>> import cupy as cp
>>> from cucim.skimage.segmentation import expand_labels
>>> labels = cp.array([0, 1, 0, 0, 0, 0, 2])
>>> expand_labels(labels, distance=1)
array([1, 1, 1, 0, 0, 2, 2])

Labels will not overwrite each other:

>>> expand_labels(labels, distance=3)
array([1, 1, 1, 1, 2, 2, 2])

In case of ties, behavior is undefined, but currently resolves to the label closest to (0,) * ndim in lexicographical order.

>>> labels_tied = cp.array([0, 1, 0, 2, 0])
>>> expand_labels(labels_tied, 1)
array([1, 1, 1, 2, 2])
>>> labels2d = cp.array(
...     [[0, 1, 0, 0],
...      [2, 0, 0, 0],
...      [0, 3, 0, 0]]
... )
>>> expand_labels(labels2d, 1)
array([[2, 1, 1, 0],
       [2, 2, 0, 0],
       [2, 3, 3, 0]])
>>> expand_labels(labels2d, 1, spacing=[1, 0.5])
array([[1, 1, 1, 1],
       [2, 2, 2, 0],
       [3, 3, 3, 3]])
cucim.skimage.segmentation.find_boundaries(label_img, connectivity=1, mode='thick', background=0)#

Return bool array where boundaries between labeled regions are True.

Parameters
label_imgarray of int or bool

An array in which different regions are labeled with either different integers or boolean values.

connectivityint in {1, …, label_img.ndim}, optional

A pixel is considered a boundary pixel if any of its neighbors has a different label. connectivity controls which pixels are considered neighbors. A connectivity of 1 (default) means pixels sharing an edge (in 2D) or a face (in 3D) will be considered neighbors. A connectivity of label_img.ndim means pixels sharing a corner will be considered neighbors.

modestring in {‘thick’, ‘inner’, ‘outer’, ‘subpixel’}

How to mark the boundaries:

  • thick: any pixel not completely surrounded by pixels of the same label (defined by connectivity) is marked as a boundary. This results in boundaries that are 2 pixels thick.

  • inner: outline the pixels just inside of objects, leaving background pixels untouched.

  • outer: outline pixels in the background around object boundaries. When two objects touch, their boundary is also marked.

  • subpixel: return a doubled image, with pixels between the original pixels marked as boundary where appropriate.

backgroundint, optional

For modes ‘inner’ and ‘outer’, a definition of a background label is required. See mode for descriptions of these two.

Returns
boundariesarray of bool, same shape as label_img

A bool image where True represents a boundary pixel. For mode equal to ‘subpixel’, boundaries.shape[i] is equal to 2 * label_img.shape[i] - 1 for all i (a pixel is inserted in between all other pairs of pixels).

Examples

>>> import cupy as cp
>>> from cucim.skimage.segmentation import find_boundaries
>>> labels = cp.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 5, 5, 5, 0, 0],
...                    [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
...                    [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
...                    [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
...                    [0, 0, 0, 0, 0, 5, 5, 5, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
...                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=cp.uint8)
>>> find_boundaries(labels, mode='thick').astype(cp.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
       [0, 1, 1, 0, 1, 1, 0, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels, mode='inner').astype(cp.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels, mode='outer').astype(cp.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
       [0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
       [0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
       [0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
       [0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> labels_small = labels[::2, ::3]
>>> labels_small
array([[0, 0, 0, 0],
       [0, 0, 5, 0],
       [0, 1, 5, 0],
       [0, 0, 5, 0],
       [0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels_small, mode='subpixel').astype(cp.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 1, 1, 1, 0],
       [0, 0, 0, 1, 0, 1, 0],
       [0, 1, 1, 1, 0, 1, 0],
       [0, 1, 0, 1, 0, 1, 0],
       [0, 1, 1, 1, 0, 1, 0],
       [0, 0, 0, 1, 0, 1, 0],
       [0, 0, 0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> bool_image = cp.array([[False, False, False, False, False],
...                        [False, False, False, False, False],
...                        [False, False,  True,  True,  True],
...                        [False, False,  True,  True,  True],
...                        [False, False,  True,  True,  True]],
...                       dtype=bool)
>>> find_boundaries(bool_image)
array([[False, False, False, False, False],
       [False, False,  True,  True,  True],
       [False,  True,  True,  True,  True],
       [False,  True,  True, False, False],
       [False,  True,  True, False, False]])
cucim.skimage.segmentation.inverse_gaussian_gradient(image, alpha=100.0, sigma=5.0)#

Inverse of gradient magnitude.

Compute the magnitude of the gradients in the image and then invert the result in the range [0, 1]. Flat areas are assigned values close to 1, while areas close to borders are assigned values close to 0.

This function or a similar one defined by the user should be applied over the image as a preprocessing step before calling morphological_geodesic_active_contour.

Parameters
image(M, N) or (L, M, N) array

Grayscale image or volume.

alphafloat, optional

Controls the steepness of the inversion. A larger value will make the transition between the flat areas and border areas steeper in the resulting array.

sigmafloat, optional

Standard deviation of the Gaussian filter applied over the image.

Returns
gimage(M, N) or (L, M, N) array

Preprocessed image (or volume) suitable for morphological_geodesic_active_contour.
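
Examples

A minimal preprocessing sketch ahead of morphological_geodesic_active_contour (input and parameter values are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.segmentation import inverse_gaussian_gradient
>>> from skimage import data
>>> image = cp.array(data.camera()) / 255.0
>>> # values near 0 mark borders, values near 1 mark flat areas
>>> gimage = inverse_gaussian_gradient(image, alpha=100.0, sigma=5.0)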

cucim.skimage.segmentation.join_segmentations(s1, s2, return_mapping: bool = False)#

Return the join of the two input segmentations.

The join J of S1 and S2 is defined as the segmentation in which two voxels are in the same segment if and only if they are in the same segment in both S1 and S2.

Parameters
s1, s2numpy arrays

s1 and s2 are label fields of the same shape.

return_mappingbool, optional

If true, return mappings for joined segmentation labels to the original labels.

Returns
jnumpy array

The join segmentation of s1 and s2.

map_j_to_s1ArrayMap, optional

Mapping from labels of the joined segmentation j to labels of s1.

map_j_to_s2ArrayMap, optional

Mapping from labels of the joined segmentation j to labels of s2.

Examples

>>> import cupy as cp
>>> from cucim.skimage.segmentation import join_segmentations
>>> s1 = cp.array([[0, 0, 1, 1],
...                [0, 2, 1, 1],
...                [2, 2, 2, 1]])
>>> s2 = cp.array([[0, 1, 1, 0],
...                [0, 1, 1, 0],
...                [0, 1, 1, 1]])
>>> join_segmentations(s1, s2)
array([[0, 1, 3, 2],
       [0, 5, 3, 2],
       [4, 5, 5, 3]])
cucim.skimage.segmentation.mark_boundaries(image, label_img, color=(1, 1, 0), outline_color=None, mode='outer', background_label=0, *, order=3)#

Return image with boundaries between labeled regions highlighted.

Parameters
image(M, N[, 3]) array

Grayscale or RGB image.

label_img(M, N) array of int

Label array where regions are marked by different integer values.

colorlength-3 sequence, optional

RGB color of boundaries in the output image.

outline_colorlength-3 sequence, optional

RGB color surrounding boundaries in the output image. If None, no outline is drawn.

modestring in {‘thick’, ‘inner’, ‘outer’, ‘subpixel’}, optional

The mode for finding boundaries.

background_labelint, optional

Which label to consider background (this is only useful for modes inner and outer).

Returns
marked(M, N, 3) array of float

An image in which the boundaries between labels are superimposed on the original image.

See also

find_boundaries
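
Examples

A minimal sketch (the synthetic image and label layout are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.segmentation import mark_boundaries
>>> image = cp.zeros((8, 8, 3), dtype=cp.float32)
>>> label_img = cp.zeros((8, 8), dtype=int)
>>> label_img[2:6, 2:6] = 1
>>> # draw the region boundary in yellow on the (black) image
>>> marked = mark_boundaries(image, label_img, color=(1, 1, 0))
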
cucim.skimage.segmentation.morphological_chan_vese(image, num_iter, init_level_set='checkerboard', smoothing=1, lambda1=1, lambda2=1, iter_callback=<function <lambda>>)#

Morphological Active Contours without Edges (MorphACWE)

Active contours without edges implemented with morphological operators. It can be used to segment objects in images and volumes without well defined borders. It is required that the inside of the object looks different on average than the outside (i.e., the inner area of the object should be darker or lighter than the outer area on average).

Parameters
image(M, N) or (L, M, N) array

Grayscale image or volume to be segmented.

num_iteruint

Number of iterations to run.

init_level_setstr, (M, N) array, or (L, M, N) array

Initial level set. If an array is given, it will be binarized and used as the initial level set. If a string is given, it defines the method to generate a reasonable initial level set with the shape of the image. Accepted values are ‘checkerboard’ and ‘disk’. See the documentation of checkerboard_level_set and disk_level_set respectively for details about how these level sets are created.

smoothinguint, optional

Number of times the smoothing operator is applied per iteration. Reasonable values are around 1-4. Larger values lead to smoother segmentations.

lambda1float, optional

Weight parameter for the outer region. If lambda1 is larger than lambda2, the outer region will contain a larger range of values than the inner region.

lambda2float, optional

Weight parameter for the inner region. If lambda2 is larger than lambda1, the inner region will contain a larger range of values than the outer region.

iter_callbackfunction, optional

If given, this function is called once per iteration with the current level set as the only argument. This is useful for debugging or for plotting intermediate results during the evolution.

Returns
out(M, N) or (L, M, N) array

Final segmentation (i.e., the final level set)

Notes

This is a version of the Chan-Vese algorithm that uses morphological operators instead of solving a partial differential equation (PDE) for the evolution of the contour. The set of morphological operators used in this algorithm are proved to be infinitesimally equivalent to the Chan-Vese PDE (see [1]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (it is not necessary to find the right time step for the evolution), and are computationally faster.

The algorithm and its theoretical derivation are described in [1].

References

1(1,2)

A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106
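
Examples

A minimal usage sketch (the iteration count and smoothing value are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.segmentation import morphological_chan_vese
>>> from skimage import data
>>> image = cp.array(data.camera()) / 255.0
>>> # evolve a checkerboard level set for 35 iterations
>>> level_set = morphological_chan_vese(image, num_iter=35, smoothing=3)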

cucim.skimage.segmentation.morphological_geodesic_active_contour(gimage, num_iter, init_level_set='disk', smoothing=1, threshold='auto', balloon=0, iter_callback=<function <lambda>>)#

Morphological Geodesic Active Contours (MorphGAC).

Geodesic active contours implemented with morphological operators. It can be used to segment objects with visible but noisy, cluttered, broken borders.

Parameters
gimage(M, N) or (L, M, N) array

Preprocessed image or volume to be segmented. This is very rarely the original image. Instead, this is usually a preprocessed version of the original image that enhances and highlights the borders (or other structures) of the object to segment. morphological_geodesic_active_contour() will try to stop the contour evolution in areas where gimage is small. See inverse_gaussian_gradient() as an example function to perform this preprocessing. Note that the quality of morphological_geodesic_active_contour() might greatly depend on this preprocessing.

num_iteruint

Number of iterations to run.

init_level_setstr, (M, N) array, or (L, M, N) array

Initial level set. If an array is given, it will be binarized and used as the initial level set. If a string is given, it defines the method to generate a reasonable initial level set with the shape of the image. Accepted values are ‘checkerboard’ and ‘disk’. See the documentation of checkerboard_level_set and disk_level_set respectively for details about how these level sets are created.

smoothinguint, optional

Number of times the smoothing operator is applied per iteration. Reasonable values are around 1-4. Larger values lead to smoother segmentations.

thresholdfloat, optional

Areas of the image with a value smaller than this threshold will be considered borders. The evolution of the contour will stop in these areas.

balloonfloat, optional

Balloon force to guide the contour in non-informative areas of the image, i.e., areas where the gradient of the image is too small to push the contour towards a border. A negative value will shrink the contour, while a positive value will expand the contour in these areas. Setting this to zero will disable the balloon force.

iter_callbackfunction, optional

If given, this function is called once per iteration with the current level set as the only argument. This is useful for debugging or for plotting intermediate results during the evolution.

Returns
out(M, N) or (L, M, N) array

Final segmentation (i.e., the final level set)

Notes

This is a version of the Geodesic Active Contours (GAC) algorithm that uses morphological operators instead of solving partial differential equations (PDEs) for the evolution of the contour. The set of morphological operators used in this algorithm are proved to be infinitesimally equivalent to the GAC PDEs (see [1]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (e.g., it is not necessary to find the right time step for the evolution), and are computationally faster.

The algorithm and its theoretical derivation are described in [1].

References

1(1,2)

A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106
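
Examples

A minimal usage sketch combining the preprocessing step described above (input and parameter values are illustrative):

>>> import cupy as cp
>>> from cucim.skimage.segmentation import (
...     inverse_gaussian_gradient, morphological_geodesic_active_contour)
>>> from skimage import data
>>> image = cp.array(data.coins()) / 255.0
>>> # enhance borders before running the geodesic active contour
>>> gimage = inverse_gaussian_gradient(image)
>>> level_set = morphological_geodesic_active_contour(gimage, num_iter=100,
...                                                   init_level_set='disk',
...                                                   balloon=-1)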

cucim.skimage.segmentation.random_walker(data, labels, beta=130, mode='cg_j', tol=0.001, copy=True, return_full_prob=False, spacing=None, *, prob_tol=0.001, channel_axis=None)#

Random walker algorithm for segmentation from markers.

Random walker algorithm is implemented for gray-level or multichannel images.

Parameters
data(M, N[, P][, C]) ndarray

Image to be segmented in phases. Gray-level data can be two- or three-dimensional; multichannel data can be three- or four- dimensional with channel_axis specifying the dimension containing channels. Data spacing is assumed isotropic unless the spacing keyword argument is used.

labels(M, N[, P]) array of ints

Array of seed markers labeled with different positive integers for different phases. Zero-labeled pixels are unlabeled pixels. Negative labels correspond to inactive pixels that are not taken into account (they are removed from the graph). If labels are not consecutive integers, the labels array will be transformed so that labels are consecutive. In the multichannel case, labels should have the same shape as a single channel of data, i.e. without the final dimension denoting channels.

betafloat, optional

Penalization coefficient for the random walker motion (the greater beta, the more difficult the diffusion).

modestring, available options {‘cg’, ‘cg_j’, ‘cg_mg’, ‘bf’}

Mode for solving the linear system in the random walker algorithm.

  • ‘bf’ (brute force): an LU factorization of the Laplacian is computed. This is fast for small images (<1024x1024), but very slow and memory-intensive for large images (e.g., 3-D volumes).

  • ‘cg’ (conjugate gradient): the linear system is solved iteratively using the Conjugate Gradient method from scipy.sparse.linalg. This is less memory-consuming than the brute force method for large images, but it is quite slow.

  • ‘cg_j’ (conjugate gradient with Jacobi preconditioner): the Jacobi preconditioner is applied during the Conjugate Gradient method iterations. This may accelerate the convergence of the ‘cg’ method.

  • ‘cg_mg’ (conjugate gradient with multigrid preconditioner): a preconditioner is computed using a multigrid solver, then the solution is computed with the Conjugate Gradient method. This mode requires that the pyamg module is installed.

tolfloat, optional

Tolerance to achieve when solving the linear system using the conjugate gradient based modes (‘cg’, ‘cg_j’ and ‘cg_mg’).

copybool, optional

If copy is False, the labels array will be overwritten with the result of the segmentation. Use copy=False if you want to save on memory.

return_full_probbool, optional

If True, the probability that a pixel belongs to each of the labels will be returned, instead of only the most likely label.

spacingiterable of floats, optional

Spacing between voxels in each spatial dimension. If None, then the spacing between pixels/voxels in each dimension is assumed 1.

prob_tolfloat, optional

Tolerance on the resulting probability to be in the interval [0, 1]. If the tolerance is not satisfied, a warning is displayed.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
outputndarray
  • If return_full_prob is False, array of ints of same shape and data type as labels, in which each pixel has been labeled according to the marker that reached the pixel first by anisotropic diffusion.

  • If return_full_prob is True, array of floats of shape (nlabels, labels.shape). output[label_nb, i, j] is the probability that label label_nb reaches the pixel (i, j) first.

Notes

Multichannel inputs are scaled with all channel data combined. Ensure all channels are separately normalized prior to running this algorithm.

The spacing argument is specifically for anisotropic datasets, where data points are spaced differently in one or more spatial dimensions. Anisotropic data is commonly encountered in medical imaging.

The algorithm was first proposed in [1].

The algorithm solves the diffusion equation at infinite times for sources placed on markers of each phase in turn. A pixel is labeled with the phase that has the greatest probability to diffuse first to the pixel.

The diffusion equation is solved by minimizing x.T L x for each phase, where L is the Laplacian of the weighted graph of the image, and x is the probability that a marker of the given phase arrives first at a pixel by diffusion (x=1 on markers of the phase, x=0 on the other markers, and the other coefficients are looked for). Each pixel is attributed the label for which it has a maximal value of x. The Laplacian L of the image is defined as:

  • L_ii = d_i, the number of neighbors of pixel i (the degree of i)

  • L_ij = -w_ij if i and j are adjacent pixels

The weight w_ij is a decreasing function of the norm of the local gradient. This ensures that diffusion is easier between pixels of similar values.

When the Laplacian is decomposed into blocks of marked and unmarked pixels:

L = M B.T
    B A

with first indices corresponding to marked pixels, and then to unmarked pixels, minimizing x.T L x for one phase amounts to solving:

A x = - B x_m

where x_m = 1 on markers of the given phase, and 0 on other markers. This linear system is solved in the algorithm using a direct method for small images, and an iterative method for larger images.

References

1

Leo Grady, Random walks for image segmentation, IEEE Trans Pattern Anal Mach Intell. 2006 Nov;28(11):1768-83. DOI:10.1109/TPAMI.2006.233.

Examples

>>> import cupy as cp
>>> cp.random.seed(0)
>>> a = cp.zeros((10, 10)) + 0.2 * cp.random.rand(10, 10)
>>> a[5:8, 5:8] += 1
>>> b = cp.zeros_like(a, dtype=cp.int32)
>>> b[3, 3] = 1  # Marker for first phase
>>> b[6, 6] = 2  # Marker for second phase
>>> random_walker(a, b)
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
       [1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
       [1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)
cucim.skimage.segmentation.relabel_sequential(label_field, offset=1)#

Relabel arbitrary labels to {offset, …, offset + number_of_labels}.

This function also returns the forward map (mapping the original labels to the reduced labels) and the inverse map (mapping the reduced labels back to the original ones).

Parameters
label_fieldnumpy array of int, arbitrary shape

An array of labels, which must be non-negative integers.

offsetint, optional

The return labels will start at offset, which should be strictly positive.

Returns
relabelednumpy array of int, same shape as label_field

The input label field with labels mapped to {offset, …, number_of_labels + offset - 1}. The data type will be the same as label_field, except when offset + number_of_labels causes overflow of the current data type.

forward_mapArrayMap

The map from the original label space to the returned label space. Can be used to re-apply the same mapping. See examples for usage. The output data type will be the same as relabeled.

inverse_mapArrayMap

The map from the new label space to the original space. This can be used to reconstruct the original label field from the relabeled one. The output data type will be the same as label_field.

Notes

The label 0 is assumed to denote the background and is never remapped.

The forward map can be extremely big for some inputs, since its length is given by the maximum of the label field. However, in most situations, label_field.max() is much smaller than label_field.size, and in these cases the forward map is guaranteed to be smaller than either the input or output images.

Examples

>>> import cupy as cp
>>> from cucim.skimage.segmentation import relabel_sequential
>>> label_field = cp.array([1, 1, 5, 5, 8, 99, 42])
>>> relab, fw, inv = relabel_sequential(label_field)
>>> relab
array([1, 1, 2, 2, 3, 5, 4])
>>> print(fw)
ArrayMap:
  1 → 1
  5 → 2
  8 → 3
  42 → 4
  99 → 5
>>> cp.array(fw)
array([0, 1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5])
>>> cp.array(inv)
array([ 0,  1,  5,  8, 42, 99])
>>> (fw[label_field] == relab).all()
array(True)
>>> (inv[relab] == label_field).all()
array(True)
>>> relab, fw, inv = relabel_sequential(label_field, offset=5)
>>> relab
array([5, 5, 6, 6, 7, 9, 8])

transform#

class cucim.skimage.transform.AffineTransform(matrix=None, scale=None, rotation=None, shear=None, translation=None, *, dimensionality=2, xp=<module 'cupy' from '/opt/conda/envs/docs/lib/python3.12/site-packages/cupy/__init__.py'>)#

Affine transformation.

Has the following form:

X = a0 * x + a1 * y + a2
  =   sx * x * [cos(rotation) + tan(shear_y) * sin(rotation)]
    - sy * y * [tan(shear_x) * cos(rotation) + sin(rotation)]
    + translation_x

Y = b0 * x + b1 * y + b2
  =   sx * x * [sin(rotation) - tan(shear_y) * cos(rotation)]
    - sy * y * [tan(shear_x) * sin(rotation) - cos(rotation)]
    + translation_y

where sx and sy are scale factors in the x and y directions.

This is equivalent to applying the operations in the following order:

  1. Scale

  2. Shear

  3. Rotate

  4. Translate

The homogeneous transformation matrix is:

[[a0  a1  a2]
 [b0  b1  b2]
 [0   0    1]]

In 2D, the transformation parameters can be given as the homogeneous transformation matrix, above, or as the implicit parameters, scale, rotation, shear, and translation in x (a2) and y (b2). For 3D and higher, only the matrix form is allowed.

In narrower transforms, such as the Euclidean (only rotation and translation) or Similarity (rotation, translation, and a global scale factor) transforms, it is possible to specify 3D transforms using implicit parameters also.

Parameters
matrix(D+1, D+1) ndarray, optional

Homogeneous transformation matrix. If this matrix is provided, it is an error to provide any of scale, rotation, shear, or translation.

scale{s as float or (sx, sy) as ndarray, list or tuple}, optional

Scale factor(s). If a single value, it will be assigned to both sx and sy. Only available for 2D.

New in version 0.17: Added support for supplying a single scalar value.

rotationfloat, optional

Rotation angle, clockwise, as radians. Only available for 2D.

shearfloat or 2-tuple of float, optional

The x and y shear angles, clockwise, by which these axes are rotated around the origin [2]. If a single value is given, take that to be the x shear angle, with the y angle remaining 0. Only available in 2D.

translation(tx, ty) as ndarray, list or tuple, optional

Translation parameters. Only available for 2D.

dimensionalityint, optional

The dimensionality of the transform. This is not used if any other parameters are provided.

Raises
ValueError

If both matrix and any of the other parameters are provided.

References

1

Wikipedia, “Affine transformation”, https://en.wikipedia.org/wiki/Affine_transformation#Image_transformation

2

Wikipedia, “Shear mapping”, https://en.wikipedia.org/wiki/Shear_mapping

Examples

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> from skimage import data
>>> img = cp.array(data.astronaut())

Define source and destination points:

>>> src = cp.array([[150, 150],
...                 [250, 100],
...                 [150, 200]])
>>> dst = cp.array([[200, 200],
...                 [300, 150],
...                 [150, 400]])

Estimate the transformation matrix:

>>> tform = transform.AffineTransform()
>>> tform.estimate(src, dst)
True

Apply the transformation:

>>> warped = transform.warp(img, inverse_map=tform.inverse)
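
The transform can also be constructed from the implicit parameters instead of estimated (a minimal sketch; the parameter values are illustrative):

>>> import math
>>> # scale, then shear, then rotate, then translate
>>> tform2 = transform.AffineTransform(scale=(1.5, 1.2),
...                                    rotation=math.pi / 6,
...                                    translation=(10, -5))
>>> warped2 = transform.warp(img, inverse_map=tform2.inverse)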
Attributes
params(D+1, D+1) ndarray

Homogeneous transformation matrix.

property rotation#
property scale#
property shear#
property translation#
class cucim.skimage.transform.EssentialMatrixTransform(rotation=None, translation=None, matrix=None, *, dimensionality=2, xp=<module 'cupy' from '/opt/conda/envs/docs/lib/python3.12/site-packages/cupy/__init__.py'>)#

Essential matrix transformation.

The essential matrix relates corresponding points between a pair of calibrated images. The matrix transforms normalized, homogeneous image points in one image to epipolar lines in the other image.

The essential matrix is only defined for a pair of moving images capturing a non-planar scene. In the case of pure rotation or planar scenes, the homography describes the geometric relation between two images (ProjectiveTransform). If the intrinsic calibration of the images is unknown, the fundamental matrix describes the projective relation between the two images (FundamentalMatrixTransform).

Parameters
rotation(3, 3) ndarray, optional

Rotation matrix of the relative camera motion.

translation(3, 1) ndarray, optional

Translation vector of the relative camera motion. The vector must have unit length.

matrix(3, 3) ndarray, optional

Essential matrix.

References

1

Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.

Examples

>>> import cupy as cp
>>> from cucim.skimage import transform
>>>
>>> tform_matrix = transform.EssentialMatrixTransform(
...     rotation=cp.eye(3), translation=cp.array([0, 0, 1])
... )
>>> tform_matrix.params
array([[ 0., -1.,  0.],
       [ 1.,  0.,  0.],
       [ 0.,  0.,  0.]])
>>> src = cp.array([[ 1.839035, 1.924743],
...                 [ 0.543582, 0.375221],
...                 [ 0.47324 , 0.142522],
...                 [ 0.96491 , 0.598376],
...                 [ 0.102388, 0.140092],
...                 [15.994343, 9.622164],
...                 [ 0.285901, 0.430055],
...                 [ 0.09115 , 0.254594]])
>>> dst = cp.array([[1.002114, 1.129644],
...                 [1.521742, 1.846002],
...                 [1.084332, 0.275134],
...                 [0.293328, 0.588992],
...                 [0.839509, 0.08729 ],
...                 [1.779735, 1.116857],
...                 [0.878616, 0.602447],
...                 [0.642616, 1.028681]])
>>> tform_matrix.estimate(src, dst)
True
>>> tform_matrix.residuals(src, dst)
array([0.42455187, 0.01460448, 0.13847034, 0.12140951, 0.27759346,
       0.32453118, 0.00210776, 0.26512283])
Attributes
params(3, 3) ndarray

Essential matrix.

Methods

estimate(src, dst)

Estimate essential matrix using 8-point algorithm.

estimate(src, dst)#

Estimate essential matrix using 8-point algorithm.

The 8-point algorithm requires at least 8 corresponding point pairs for a well-conditioned solution, otherwise the over-determined solution is estimated.

Parameters
src(N, 2) ndarray

Source coordinates.

dst(N, 2) ndarray

Destination coordinates.

Returns
successbool

True, if model estimation succeeds.

class cucim.skimage.transform.EuclideanTransform(matrix=None, rotation=None, translation=None, *, dimensionality=2, xp=cupy)#

Euclidean transformation, also known as a rigid transform.

Has the following form:

X = a0 * x - b0 * y + a1 =
  = x * cos(rotation) - y * sin(rotation) + a1

Y = b0 * x + a0 * y + b1 =
  = x * sin(rotation) + y * cos(rotation) + b1

where the homogeneous transformation matrix is:

[[a0 -b0  a1]
 [b0  a0  b1]
 [0   0    1]]

The Euclidean transformation is a rigid transformation with rotation and translation parameters. The similarity transformation extends the Euclidean transformation with a single scaling factor.

In 2D and 3D, the transformation parameters may be provided either via matrix, the homogeneous transformation matrix, above, or via the implicit parameters rotation and/or translation (where a1 is the translation along x, b1 along y, etc.). Beyond 3D, if the transformation is only a translation, you may use the implicit parameter translation; otherwise, you must use matrix.

Parameters
matrix(D+1, D+1) ndarray, optional

Homogeneous transformation matrix.

rotationfloat or sequence of float, optional

Rotation angle, clockwise, as radians. If given as a vector, it is interpreted as Euler rotation angles [1]. Only 2D (single rotation) and 3D (Euler rotations) values are supported. For higher dimensions, you must provide or estimate the transformation matrix.

translation(x, y[, z, …]) sequence of float, length D, optional

Translation parameters for each axis.

dimensionalityint, optional

The dimensionality of the transform.

References

1

https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions
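
Examples

For illustration, a minimal sketch of constructing a 2D transform from the implicit parameters and applying it; the expected values follow from the clockwise rotation convention above:

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> tform = transform.EuclideanTransform(rotation=cp.pi / 2,
...                                      translation=(10, 20))
>>> src = cp.array([[1.0, 0.0]])
>>> cp.allclose(tform(src), cp.array([[10.0, 21.0]]))
array(True)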

Attributes
params(D+1, D+1) ndarray

Homogeneous transformation matrix.

Methods

estimate(src, dst)

Estimate the transformation from a set of corresponding points.

estimate(src, dst)#

Estimate the transformation from a set of corresponding points.

You can determine the over-, well- and under-determined parameters with the total least-squares method.

Number of source and destination coordinates must match.

Parameters
src(N, D) ndarray

Source coordinates.

dst(N, D) ndarray

Destination coordinates.

Returns
successbool

True, if model estimation succeeds.

property rotation#
property translation#
class cucim.skimage.transform.FundamentalMatrixTransform(matrix=None, *, dimensionality=2, xp=cupy)#

Fundamental matrix transformation.

The fundamental matrix relates corresponding points between a pair of uncalibrated images. The matrix transforms homogeneous image points in one image to epipolar lines in the other image.

The fundamental matrix is only defined for a pair of moving images. In the case of pure rotation or planar scenes, the homography describes the geometric relation between two images (ProjectiveTransform). If the intrinsic calibration of the images is known, the essential matrix describes the metric relation between the two images (EssentialMatrixTransform).

Parameters
matrix(3, 3) ndarray, optional

Fundamental matrix.

References

1

Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.

Examples

>>> import numpy as np
>>> import cucim.skimage as ski
>>> tform_matrix = ski.transform.FundamentalMatrixTransform()

Define source and destination points:

>>> src = np.array([1.839035, 1.924743,
...                 0.543582, 0.375221,
...                 0.473240, 0.142522,
...                 0.964910, 0.598376,
...                 0.102388, 0.140092,
...                15.994343, 9.622164,
...                 0.285901, 0.430055,
...                 0.091150, 0.254594]).reshape(-1, 2)
>>> dst = np.array([1.002114, 1.129644,
...                 1.521742, 1.846002,
...                 1.084332, 0.275134,
...                 0.293328, 0.588992,
...                 0.839509, 0.087290,
...                 1.779735, 1.116857,
...                 0.878616, 0.602447,
...                 0.642616, 1.028681]).reshape(-1, 2)

Estimate the transformation matrix:

>>> tform_matrix.estimate(src, dst)
True
>>> tform_matrix.params
array([[-0.21785884,  0.41928191, -0.03430748],
       [-0.07179414,  0.04516432,  0.02160726],
       [ 0.24806211, -0.42947814,  0.02210191]])

Compute the Sampson distance:

>>> tform_matrix.residuals(src, dst)
array([0.0053886 , 0.00526101, 0.08689701, 0.01850534, 0.09418259,
       0.00185967, 0.06160489, 0.02655136])

Apply inverse transformation:

>>> tform_matrix.inverse(dst)
array([[-0.0513591 ,  0.04170974,  0.01213043],
       [-0.21599496,  0.29193419,  0.00978184],
       [-0.0079222 ,  0.03758889, -0.00915389],
       [ 0.14187184, -0.27988959,  0.02476507],
       [ 0.05890075, -0.07354481, -0.00481342],
       [-0.21985267,  0.36717464, -0.01482408],
       [ 0.01339569, -0.03388123,  0.00497605],
       [ 0.03420927, -0.1135812 ,  0.02228236]])
Attributes
params(3, 3) ndarray

Fundamental matrix.

Methods

__call__(coords)

Apply forward transformation.

estimate(src, dst)

Estimate fundamental matrix using 8-point algorithm.

inverse(coords)

Apply inverse transformation.

residuals(src, dst)

Compute the Sampson distance.

estimate(src, dst)#

Estimate fundamental matrix using 8-point algorithm.

The 8-point algorithm requires at least 8 corresponding point pairs for a well-conditioned solution, otherwise the over-determined solution is estimated.

Parameters
src(N, 2) ndarray

Source coordinates.

dst(N, 2) ndarray

Destination coordinates.

Returns
successbool

True, if model estimation succeeds.

inverse(coords)#

Apply inverse transformation.

Parameters
coords(N, 2) ndarray

Destination coordinates.

Returns
coords(N, 3) ndarray

Epipolar lines in the source image.

residuals(src, dst)#

Compute the Sampson distance.

The Sampson distance is the first approximation to the geometric error.

Parameters
src(N, 2) ndarray

Source coordinates.

dst(N, 2) ndarray

Destination coordinates.

Returns
residuals(N,) ndarray

Sampson distance.

class cucim.skimage.transform.PiecewiseAffineTransform#

Piecewise affine transformation.

Control points are used to define the mapping. The transform is based on a Delaunay triangulation of the points to form a mesh. Each triangle is used to find a local affine transform.
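
Examples

A minimal sketch, assuming four non-collinear control points (enough to form a Delaunay mesh); the destination here is a pure shift of the source:

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> src = cp.array([[0.0, 0.0], [0.0, 10.0], [10.0, 10.0], [10.0, 0.0]])
>>> dst = src + cp.array([2.0, 3.0])
>>> tform = transform.PiecewiseAffineTransform()
>>> tform.estimate(src, dst)
True

Each triangle of the resulting mesh then carries its own local transform in affines.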

Attributes
affineslist of AffineTransform objects

Affine transformations for each triangle in the mesh.

inverse_affineslist of AffineTransform objects

Inverse affine transformations for each triangle in the mesh.

Methods

__call__(coords)

Apply forward transformation.

estimate(src, dst)

Estimate the transformation from a set of corresponding points.

inverse(coords)

Apply inverse transformation.

estimate(src, dst)#

Estimate the transformation from a set of corresponding points.

Number of source and destination coordinates must match.

Parameters
src(N, D) ndarray

Source coordinates.

dst(N, D) ndarray

Destination coordinates.

Returns
successbool

True, if all pieces of the model are successfully estimated.

inverse(coords)#

Apply inverse transformation.

Coordinates outside of the mesh will be set to -1.

Parameters
coords(N, D) ndarray

Source coordinates.

Returns
coords(N, D) ndarray

Transformed coordinates.

class cucim.skimage.transform.PolynomialTransform(params=None, *, dimensionality=2, xp=cupy)#

2D polynomial transformation.

Has the following form:

X = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i ))
Y = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i ))
Parameters
params(2, N) ndarray, optional

Polynomial coefficients where N * 2 = (order + 1) * (order + 2). So, a_ji is defined in params[0, :] and b_ji in params[1, :].
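
Examples

A minimal sketch: a pure shift is exactly representable by a second-order polynomial, and six point pairs supply the twelve equations needed at order=2:

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> src = cp.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
...                 [1.0, 1.0], [0.5, 0.5], [2.0, 1.0]])
>>> dst = src + 0.5
>>> tform = transform.PolynomialTransform()
>>> tform.estimate(src, dst, order=2)
True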

Attributes
params(2, N) ndarray

Polynomial coefficients where N * 2 = (order + 1) * (order + 2). So, a_ji is defined in params[0, :] and b_ji in params[1, :].

Methods

__call__(coords)

Apply forward transformation.

estimate(src, dst[, order, weights])

Estimate the transformation from a set of corresponding points.

inverse(coords)

Apply inverse transformation.

estimate(src, dst, order=2, weights=None)#

Estimate the transformation from a set of corresponding points.

You can determine the over-, well- and under-determined parameters with the total least-squares method.

Number of source and destination coordinates must match.

The transformation is defined as:

X = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i ))
Y = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i ))

These equations can be transformed to the following form:

0 = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) - X
0 = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) - Y

which exist for each set of corresponding points, so we have a set of N * 2 equations. The coefficients appear linearly so we can write A x = 0, where:

A   = [[1 x y x**2 x*y y**2 ... 0 ...             0 -X]
       [0 ...                 0 1 x y x**2 x*y y**2 -Y]
        ...
        ...
      ]
x.T = [a00 a10 a11 a20 a21 a22 ... ann
       b00 b10 b11 b20 b21 b22 ... bnn c3]

In case of total least-squares the solution of this homogeneous system of equations is the right singular vector of A which corresponds to the smallest singular value normed by the coefficient c3.

Weights can be applied to each pair of corresponding points to indicate, particularly in an overdetermined system, if point pairs have higher or lower confidence or uncertainties associated with them. From the matrix treatment of least squares problems, these weight values are normalised, square-rooted, then built into a diagonal matrix, by which A is multiplied.

Parameters
src(N, 2) ndarray

Source coordinates.

dst(N, 2) ndarray

Destination coordinates.

orderint, optional

Polynomial order (number of coefficients is order + 1).

weights(N,) ndarray, optional

Relative weight values for each pair of points.

Returns
successbool

True, if model estimation succeeds.

inverse(coords)#

Apply inverse transformation.

Parameters
coords(N, 2) ndarray

Destination coordinates.

Returns
coords(N, 2) ndarray

Source coordinates.

class cucim.skimage.transform.ProjectiveTransform(matrix=None, *, dimensionality=2, xp=cupy)#

Projective transformation.

Apply a projective transformation (homography) on coordinates.

For each homogeneous coordinate \(\mathbf{x} = [x, y, 1]^T\), its target position is calculated by multiplying with the given matrix, \(H\), to give \(H \mathbf{x}\):

[[a0 a1 a2]
 [b0 b1 b2]
 [c0 c1 1 ]].

E.g., to rotate by theta degrees clockwise, the matrix should be:

[[cos(theta) -sin(theta) 0]
 [sin(theta)  cos(theta) 0]
 [0            0         1]]

or, to translate x by 10 and y by 20:

[[1 0 10]
 [0 1 20]
 [0 0 1 ]].
Parameters
matrix(D+1, D+1) ndarray, optional

Homogeneous transformation matrix.

dimensionalityint, optional

The number of dimensions of the transform. This is ignored if matrix is not None.
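
Examples

A minimal sketch using the translation matrix shown above:

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> matrix = cp.array([[1.0, 0.0, 10.0],
...                    [0.0, 1.0, 20.0],
...                    [0.0, 0.0, 1.0]])
>>> tform = transform.ProjectiveTransform(matrix=matrix)
>>> tform(cp.array([[0.0, 0.0], [1.0, 1.0]]))
array([[10., 20.],
       [11., 21.]])
>>> cp.allclose(tform.inverse(tform(cp.array([[5.0, 5.0]]))),
...             cp.array([[5.0, 5.0]]))
array(True)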

Attributes
params(D+1, D+1) ndarray

Homogeneous transformation matrix.

Methods

__call__(coords)

Apply forward transformation.

estimate(src, dst[, weights])

Estimate the transformation from a set of corresponding points.

inverse(coords)

Apply inverse transformation.

property dimensionality#

The dimensionality of the transformation.

estimate(src, dst, weights=None)#

Estimate the transformation from a set of corresponding points.

You can determine the over-, well- and under-determined parameters with the total least-squares method.

Number of source and destination coordinates must match.

The transformation is defined as:

X = (a0*x + a1*y + a2) / (c0*x + c1*y + 1)
Y = (b0*x + b1*y + b2) / (c0*x + c1*y + 1)

These equations can be transformed to the following form:

0 = a0*x + a1*y + a2 - c0*x*X - c1*y*X - X
0 = b0*x + b1*y + b2 - c0*x*Y - c1*y*Y - Y

which exist for each set of corresponding points, so we have a set of N * 2 equations. The coefficients appear linearly so we can write A x = 0, where:

A   = [[x y 1 0 0 0 -x*X -y*X -X]
       [0 0 0 x y 1 -x*Y -y*Y -Y]
        ...
        ...
      ]
x.T = [a0 a1 a2 b0 b1 b2 c0 c1 c3]

In case of total least-squares the solution of this homogeneous system of equations is the right singular vector of A which corresponds to the smallest singular value normed by the coefficient c3.

Weights can be applied to each pair of corresponding points to indicate, particularly in an overdetermined system, if point pairs have higher or lower confidence or uncertainties associated with them. From the matrix treatment of least squares problems, these weight values are normalised, square-rooted, then built into a diagonal matrix, by which A is multiplied.

In case of the affine transformation the coefficients c0 and c1 are 0. Thus the system of equations is:

A   = [[x y 1 0 0 0 -X]
       [0 0 0 x y 1 -Y]
        ...
        ...
      ]
x.T = [a0 a1 a2 b0 b1 b2 c3]
Parameters
src(N, 2) ndarray

Source coordinates.

dst(N, 2) ndarray

Destination coordinates.

weights(N,) ndarray, optional

Relative weight values for each pair of points.

Returns
successbool

True, if model estimation succeeds.

inverse(coords)#

Apply inverse transformation.

Parameters
coords(N, D) ndarray

Destination coordinates.

Returns
coords_out(N, D) ndarray

Source coordinates.

class cucim.skimage.transform.SimilarityTransform(matrix=None, scale=None, rotation=None, translation=None, *, dimensionality=2, xp=cupy)#

Similarity transformation.

Has the following form in 2D:

X = a0 * x - b0 * y + a1 =
  = s * x * cos(rotation) - s * y * sin(rotation) + a1

Y = b0 * x + a0 * y + b1 =
  = s * x * sin(rotation) + s * y * cos(rotation) + b1

where s is a scale factor and the homogeneous transformation matrix is:

[[a0  -b0  a1]
 [b0  a0  b1]
 [0   0    1]]

The similarity transformation extends the Euclidean transformation with a single scaling factor in addition to the rotation and translation parameters.

Parameters
matrix(dim+1, dim+1) ndarray, optional

Homogeneous transformation matrix.

scalefloat, optional

Scale factor. Implemented only for 2D and 3D.

rotationfloat, optional

Rotation angle, clockwise, as radians. Implemented only for 2D and 3D. For 3D, this is given in XZX Euler angles.

translation(dim,) ndarray-like, optional

x, y[, z] translation parameters. Implemented only for 2D and 3D.
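
Examples

A minimal sketch; with rotation=0 the expected output is exact (scale, then translate):

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> tform = transform.SimilarityTransform(scale=2.0, rotation=0.0,
...                                       translation=(1.0, 2.0))
>>> tform(cp.array([[1.0, 1.0]]))
array([[3., 4.]])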

Attributes
params(dim+1, dim+1) ndarray

Homogeneous transformation matrix.

Methods

estimate(src, dst)

Estimate the transformation from a set of corresponding points.

estimate(src, dst)#

Estimate the transformation from a set of corresponding points.

You can determine the over-, well- and under-determined parameters with the total least-squares method.

Number of source and destination coordinates must match.

Parameters
src(N, 2) ndarray

Source coordinates.

dst(N, 2) ndarray

Destination coordinates.

Returns
successbool

True, if model estimation succeeds.

property scale#
cucim.skimage.transform.downscale_local_mean(image, factors, cval=0, clip=True)#

Down-sample N-dimensional image by local averaging.

The image is padded with cval if it is not perfectly divisible by the integer factors.

In contrast to interpolation in skimage.transform.resize and skimage.transform.rescale this function calculates the local mean of elements in each block of size factors in the input image.

Parameters
image(M[, …]) ndarray

Input image.

factorsarray_like

Array containing down-sampling integer factor along each axis.

cvalfloat, optional

Constant padding value if image is not perfectly divisible by the integer factors.

clipbool, optional

Unused, but kept here for API consistency with the other transforms in this module. (The local mean will never fall outside the range of values in the input image, assuming the provided cval also falls within that range.)

Returns
imagendarray

Down-sampled image with same number of dimensions as input image. For integer inputs, the output dtype will be float64. See numpy.mean() for details.

Examples

>>> import cupy as cp
>>> from cucim.skimage.transform import downscale_local_mean
>>> a = cp.arange(15).reshape(3, 5)
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])
>>> downscale_local_mean(a, (2, 3))
array([[3.5, 4. ],
       [5.5, 4.5]])
cucim.skimage.transform.estimate_transform(ttype, src, dst, *args, **kwargs)#

Estimate 2D geometric transformation parameters.

You can determine the over-, well- and under-determined parameters with the total least-squares method.

Number of source and destination coordinates must match.

Parameters
ttype{‘euclidean’, ‘similarity’, ‘affine’, ‘piecewise-affine’, ‘projective’, ‘polynomial’}

Type of transform.

kwargsndarray or int

Function parameters (src, dst, n, angle):

NAME / TTYPE        FUNCTION PARAMETERS
'euclidean'         `src`, `dst`
'similarity'        `src`, `dst`
'affine'            `src`, `dst`
'piecewise-affine'  `src`, `dst`
'projective'        `src`, `dst`
'polynomial'        `src`, `dst`, `order` (polynomial order,
                                          default order is 2)

Also see examples below.

Returns
tformGeometricTransform

Transform object containing the transformation parameters and providing access to forward and inverse transformation functions.

Examples

>>> import cupy as cp
>>> from cucim.skimage import transform
>>> # estimate transformation parameters
>>> src = cp.array([0, 0, 10, 10]).reshape((2, 2))
>>> dst = cp.array([12, 14, 1, -20]).reshape((2, 2))
>>> tform = transform.estimate_transform('similarity', src, dst)
>>> cp.allclose(tform.inverse(tform(src)), src)
array(True)
>>> # warp image using the estimated transformation
>>> from skimage import data
>>> image = cp.array(data.camera())
>>> warped = transform.warp(image, inverse_map=tform.inverse)
>>> # create transformation with explicit parameters
>>> tform2 = transform.SimilarityTransform(scale=1.1, rotation=1,
...     translation=(10, 20))
>>> # unite transformations, applied in order from left to right
>>> tform3 = tform + tform2
>>> cp.allclose(tform3(src), tform2(tform(src)))
array(True)
cucim.skimage.transform.integral_image(image, *, dtype=None)#

Integral image / summed area table.

The integral image contains the sum of all elements above and to the left of it, i.e.:

\[S[m, n] = \sum_{i \leq m} \sum_{j \leq n} X[i, j]\]
Parameters
imagendarray

Input image.

Returns
Sndarray

Integral image/summed area table of same shape as input image.

Notes

For better accuracy and to avoid potential overflow, the data type of the output may differ from the input’s when the default dtype of None is used. For inputs with integer dtype, the behavior matches that for numpy.cumsum(). Floating point inputs will be promoted to at least double precision. The user can set dtype to override this behavior.

References

1

F.C. Crow, “Summed-area tables for texture mapping,” ACM SIGGRAPH Computer Graphics, vol. 18, 1984, pp. 207-212.
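
Examples

A minimal sketch; for an all-ones input the table entry at (m, n) is (m + 1) * (n + 1):

>>> import cupy as cp
>>> from cucim.skimage.transform import integral_image
>>> x = cp.ones((3, 3))
>>> integral_image(x)
array([[1., 2., 3.],
       [2., 4., 6.],
       [3., 6., 9.]])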

cucim.skimage.transform.integrate(ii, start, end)#

Use an integral image to integrate over a given window.

Parameters
iindarray

Integral image.

startList of tuples, each tuple of length equal to dimension of ii

Coordinates of top left corner of window(s). Each tuple in the list contains the starting row, col, … index, i.e. [(row_win1, col_win1, …), (row_win2, col_win2, …), …].

endList of tuples, each tuple of length equal to dimension of ii

Coordinates of bottom right corner of window(s). Each tuple in the list contains the end row, col, … index, i.e. [(row_win1, col_win1, …), (row_win2, col_win2, …), …].

Returns
Sscalar or ndarray

Integral (sum) over the given window(s).

See also

integral_image

Create an integral image / summed area table.

Examples

>>> import cupy as cp
>>> from cucim.skimage.transform import integral_image, integrate
>>> arr = cp.ones((5, 6), dtype=float)
>>> ii = integral_image(arr)
>>> integrate(ii, (1, 0), (1, 2))  # sum from (1, 0) to (1, 2)
array([3.])
>>> integrate(ii, [(3, 3)], [(4, 5)])  # sum from (3, 3) to (4, 5)
array([6.])
>>> # sum from (1, 0) to (1, 2) and from (3, 3) to (4, 5)
>>> integrate(ii, [(1, 0), (3, 3)], [(1, 2), (4, 5)])
array([3., 6.])
cucim.skimage.transform.matrix_transform(coords, matrix)#

Apply 2D matrix transform.

Parameters
coords(N, 2) ndarray

x, y coordinates to transform

matrix(3, 3) ndarray

Homogeneous transformation matrix.

Returns
coords(N, 2) ndarray

Transformed coordinates.
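
Examples

A minimal sketch, reusing the translation matrix from ProjectiveTransform above:

>>> import cupy as cp
>>> from cucim.skimage.transform import matrix_transform
>>> matrix = cp.array([[1.0, 0.0, 10.0],
...                    [0.0, 1.0, 20.0],
...                    [0.0, 0.0, 1.0]])
>>> matrix_transform(cp.array([[0.0, 0.0]]), matrix)
array([[10., 20.]])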

cucim.skimage.transform.pyramid_expand(image, upscale=2, sigma=None, order=1, mode='reflect', cval=0, preserve_range=False, *, channel_axis=None)#

Upsample and then smooth image.

Parameters
imagendarray

Input image.

upscalefloat, optional

Upscale factor.

sigmafloat, optional

Sigma for Gaussian filter. Default is 2 * upscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution.

orderint, optional

Order of splines used in interpolation of upsampling. See skimage.transform.warp for detail.

mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’.

cvalfloat, optional

Value to fill past edges of input if mode is ‘constant’.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
outarray

Upsampled and smoothed float image.

References

1

http://persci.mit.edu/pub_pdfs/pyramid83.pdf
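
Examples

A minimal sketch using the 512 x 512 camera image from skimage.data:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import pyramid_expand
>>> image = cp.array(data.camera())
>>> pyramid_expand(image, upscale=2).shape
(1024, 1024)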

cucim.skimage.transform.pyramid_gaussian(image, max_layer=-1, downscale=2, sigma=None, order=1, mode='reflect', cval=0, preserve_range=False, *, channel_axis=None)#

Yield images of the Gaussian pyramid formed by the input image.

Recursively applies the pyramid_reduce function to the image, and yields the downscaled images.

Note that the first image of the pyramid will be the original, unscaled image. The total number of images is max_layer + 1. In case all layers are computed, the last image is either a one-pixel image or the image where the reduction does not change its shape.

Parameters
imagendarray

Input image.

max_layerint, optional

Number of layers for the pyramid. 0th layer is the original image. Default is -1 which builds all possible layers.

downscalefloat, optional

Downscale factor.

sigmafloat, optional

Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution.

orderint, optional

Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail.

mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’.

cvalfloat, optional

Value to fill past edges of input if mode is ‘constant’.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
pyramidgenerator

Generator yielding pyramid layers as float images.

References

1

http://persci.mit.edu/pub_pdfs/pyramid83.pdf
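
Examples

A minimal sketch; max_layer=2 yields three images, the first being the unscaled input:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import pyramid_gaussian
>>> image = cp.array(data.camera())
>>> [layer.shape for layer in pyramid_gaussian(image, max_layer=2)]
[(512, 512), (256, 256), (128, 128)]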

cucim.skimage.transform.pyramid_laplacian(image, max_layer=-1, downscale=2, sigma=None, order=1, mode='reflect', cval=0, preserve_range=False, *, channel_axis=None)#

Yield images of the laplacian pyramid formed by the input image.

Each layer contains the difference between the downsampled and the downsampled, smoothed image:

layer = resize(prev_layer) - smooth(resize(prev_layer))

Note that the first image of the pyramid will be the difference between the original, unscaled image and its smoothed version. The total number of images is max_layer + 1. In case all layers are computed, the last image is either a one-pixel image or the image where the reduction does not change its shape.

Parameters
imagendarray

Input image.

max_layerint, optional

Number of layers for the pyramid. 0th layer is the original image. Default is -1 which builds all possible layers.

downscalefloat, optional

Downscale factor.

sigmafloat, optional

Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution.

orderint, optional

Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail.

mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’.

cvalfloat, optional

Value to fill past edges of input if mode is ‘constant’.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
pyramidgenerator

Generator yielding pyramid layers as float images.

References

1

http://persci.mit.edu/pub_pdfs/pyramid83.pdf

2

http://sepwww.stanford.edu/data/media/public/sep/morgan/texturematch/paper_html/node3.html
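
Examples

A minimal sketch; the layer shapes match the Gaussian pyramid, but each layer holds a difference image:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import pyramid_laplacian
>>> image = cp.array(data.camera())
>>> [layer.shape for layer in pyramid_laplacian(image, max_layer=2)]
[(512, 512), (256, 256), (128, 128)]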

cucim.skimage.transform.pyramid_reduce(image, downscale=2, sigma=None, order=1, mode='reflect', cval=0, preserve_range=False, *, channel_axis=None)#

Smooth and then downsample image.

Parameters
imagendarray

Input image.

downscalefloat, optional

Downscale factor.

sigmafloat, optional

Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution.

orderint, optional

Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail.

mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional

The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’.

cvalfloat, optional

Value to fill past edges of input if mode is ‘constant’.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

Returns
outarray

Smoothed and downsampled float image.

References

1

http://persci.mit.edu/pub_pdfs/pyramid83.pdf
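
Examples

A minimal sketch; one reduction halves each side of the 512 x 512 camera image:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import pyramid_reduce
>>> image = cp.array(data.camera())
>>> pyramid_reduce(image, downscale=2).shape
(256, 256)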

cucim.skimage.transform.rescale(image, scale, order=None, mode='reflect', cval=0, clip=None, preserve_range=False, anti_aliasing=None, anti_aliasing_sigma=None, *, channel_axis=None)#

Scale image by a certain factor.

Performs interpolation to up-scale or down-scale N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For down-sampling with an integer factor also see skimage.transform.downscale_local_mean.

Parameters
image(M, N[, …][, C]) ndarray

Input image.

scale{float, tuple of floats}

Scale factors for spatial dimensions. Separate scale factors can be defined as (m, n[, …]).

Returns
scaledndarray

Scaled version of the input.

Other Parameters
orderint, optional

The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.

mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional

Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

clipbool, optional

Whether to clip the output to the range of values of the input image. If order > 1, this will be enabled by default, since higher order interpolation may produce values outside the given input range.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

anti_aliasingbool, optional

Whether to apply a Gaussian filter to smooth the image prior to down-scaling. It is crucial to filter when down-sampling the image to avoid aliasing artifacts. If input image data type is bool, no anti-aliasing is applied.

anti_aliasing_sigma{float, tuple of floats}, optional

Standard deviation for Gaussian filtering to avoid aliasing artifacts. By default, this value is chosen as (s - 1) / 2 where s is the down-scaling factor.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

New in version 22.02.00: channel_axis was added in 22.02.00.

Notes

Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2].

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import rescale
>>> image = cp.array(data.camera())
>>> rescale(image, 0.1).shape
(51, 51)
>>> rescale(image, 0.5).shape
(256, 256)
cucim.skimage.transform.resize(image, output_shape, order=None, mode='reflect', cval=0, clip=None, preserve_range=False, anti_aliasing=None, anti_aliasing_sigma=None)#

Resize image to match a certain size.

Performs interpolation to up-size or down-size N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For downsampling with an integer factor also see skimage.transform.downscale_local_mean.

Parameters
imagendarray

Input image.

output_shapetuple or ndarray

Size of the generated output image (rows, cols[, …][, dim]). If dim is not provided, the number of channels is preserved. In case the number of input channels does not equal the number of output channels, an n-dimensional interpolation is applied.

Returns
resizedndarray

Resized version of the input.

Other Parameters
orderint, optional

The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.

mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional

Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

clipbool, optional

Whether to clip the output to the range of values of the input image. If order > 1, this will be enabled by default, since higher order interpolation may produce values outside the given input range.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

anti_aliasingbool, optional

Whether to apply a Gaussian filter to smooth the image prior to downsampling. It is crucial to filter when downsampling the image to avoid aliasing artifacts. If not specified, it is set to True when downsampling an image whose data type is not bool. It is also set to False when using nearest neighbor interpolation (order == 0) with integer input data type.

anti_aliasing_sigma{float, tuple of floats}, optional

Standard deviation for Gaussian filtering used when anti-aliasing. By default, this value is chosen as (s - 1) / 2 where s is the downsampling factor, where s > 1. For the up-size case, s < 1, no anti-aliasing is performed prior to rescaling.

Notes

Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2].

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import resize
>>> image = cp.array(data.camera())
>>> resize(image, (100, 100)).shape
(100, 100)
cucim.skimage.transform.resize_local_mean(image, output_shape, grid_mode=True, preserve_range=False, *, channel_axis=None)#

Resize an array with the local mean / bilinear scaling.

Parameters
imagendarray

Input image. If this is a multichannel image, the axis corresponding to channels should be specified using channel_axis.

output_shapetuple or ndarray

Size of the generated output image. When channel_axis is not None, the channel_axis should either be omitted from output_shape or the output_shape[channel_axis] must match image.shape[channel_axis]. If the length of output_shape exceeds image.ndim, additional singleton dimensions will be appended to the input image as needed.

grid_modebool, optional

Defines image pixels position: if True, pixels are assumed to be at grid intersections, otherwise at cell centers. As a consequence, for example, a 1d signal of length 5 is considered to have length 4 when grid_mode is False, but length 5 when grid_mode is True. See the following visual illustration:

| pixel 1 | pixel 2 | pixel 3 | pixel 4 | pixel 5 |
     |<-------------------------------------->|
                        vs.
|<----------------------------------------------->|

The starting point of the arrow in the diagram above corresponds to coordinate location 0 in each mode.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

Returns
resizedndarray

Resized version of the input.

Notes

This method is sometimes referred to as “area-based” interpolation or “pixel mixing” interpolation [1]. When grid_mode is True, it is equivalent to using OpenCV’s resize with INTER_AREA interpolation mode. It is commonly used for image downsizing. If the downsizing factors are integers, then downscale_local_mean should be preferred instead.

References

1

http://entropymine.com/imageworsener/pixelmixing/

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import resize_local_mean
>>> image = cp.array(data.camera())
>>> resize_local_mean(image, (100, 100)).shape
(100, 100)
cucim.skimage.transform.rotate(image, angle, resize=False, center=None, order=None, mode='constant', cval=0, clip=True, preserve_range=False)#

Rotate image by a certain angle around its center.

Parameters
imagendarray

Input image.

anglefloat

Rotation angle in degrees in counter-clockwise direction.

resizebool, optional

Determine whether the shape of the output image will be automatically calculated, so the complete rotated image exactly fits. Default is False.

centeriterable of length 2

The rotation center. If center=None, the image is rotated around its center, i.e. center=(cols / 2 - 0.5, rows / 2 - 0.5). Please note that this parameter is (cols, rows), contrary to normal skimage ordering.

Returns
rotatedndarray

Rotated version of the input.

Other Parameters
orderint, optional

The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.

mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional

Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

clipbool, optional

Whether to clip the output to the range of values of the input image. If order > 1, this will be enabled by default, since higher order interpolation may produce values outside the given input range.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

Notes

Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2].

If image.ndim > 2, the rotation occurs for the first two dimensions of the array. Unlike the scikit-image implementation, more than one additional axis may be present on the array.

Examples

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import rotate
>>> image = cp.array(data.camera())
>>> rotate(image, 2).shape
(512, 512)
>>> rotate(image, 2, resize=True).shape
(530, 530)
>>> rotate(image, 90, resize=True).shape
(512, 512)
cucim.skimage.transform.swirl(image, center=None, strength=1, radius=100, rotation=0, output_shape=None, order=None, mode='reflect', cval=0, clip=None, preserve_range=False)#

Perform a swirl transformation.

Parameters
imagendarray

Input image.

center(column, row) tuple or (2,) ndarray, optional

Center coordinate of transformation.

strengthfloat, optional

The amount of swirling applied.

radiusfloat, optional

The extent of the swirl in pixels. The effect dies out rapidly beyond radius.

rotationfloat, optional

Additional rotation applied to the image.

Returns
swirledndarray

Swirled version of the input.

Other Parameters
output_shapetuple (rows, cols), optional

Shape of the output image generated. By default the shape of the input image is preserved.

orderint, optional

The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.

mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional

Points outside the boundaries of the input are filled according to the given mode, with ‘reflect’ used as the default. Modes match the behaviour of numpy.pad.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

clipbool, optional

Whether to clip the output to the range of values of the input image. If order > 1, this will be enabled by default, since higher order interpolation may produce values outside the given input range.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html
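
Examples

A minimal sketch on the 200 x 200 checkerboard image; the output shape is preserved by default:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import swirl
>>> image = cp.array(data.checkerboard())
>>> swirl(image, strength=10, radius=120).shape
(200, 200)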

cucim.skimage.transform.warp(image, inverse_map, map_args=None, output_shape=None, order=None, mode='constant', cval=0.0, clip=None, preserve_range=False)#

Warp an image according to a given coordinate transformation.

Parameters
imagendarray

Input image.

inverse_maptransformation object, callable cr = f(cr, **kwargs), or ndarray

Inverse coordinate map, which transforms coordinates in the output images into their corresponding coordinates in the input image.

There are a number of different options to define this map, depending on the dimensionality of the input image. A 2-D image can have 2 dimensions for gray-scale images, or 3 dimensions with color information.

  • For 2-D images, you can directly pass a transformation object, e.g. skimage.transform.SimilarityTransform, or its inverse.

  • For 2-D images, you can pass a (3, 3) homogeneous transformation matrix, e.g. skimage.transform.SimilarityTransform.params.

  • For 2-D images, a function that transforms a (M, 2) array of (col, row) coordinates in the output image to their corresponding coordinates in the input image. Extra parameters to the function can be specified through map_args.

  • For N-D images, you can directly pass an array of coordinates. The first dimension specifies the coordinates in the input image, while the subsequent dimensions determine the position in the output image. E.g. in case of 2-D images, you need to pass an array of shape (2, rows, cols), where rows and cols determine the shape of the output image, and the first dimension contains the (row, col) coordinate in the input image. See scipy.ndimage.map_coordinates for further documentation.

Note that a (3, 3) matrix is interpreted as a homogeneous transformation matrix, so you cannot interpolate values from a 3-D input if the output is of shape (3,).

See example section for usage.

map_argsdict, optional

Keyword arguments passed to inverse_map.

output_shapetuple (rows, cols), optional

Shape of the output image generated. By default the shape of the input image is preserved. Note that, even for multi-band images, only rows and columns need to be specified.

orderint, optional
The order of interpolation. The order has to be in the range 0-5:
  • 0: Nearest-neighbor

  • 1: Bi-linear (default)

  • 2: Bi-quadratic

  • 3: Bi-cubic

  • 4: Bi-quartic

  • 5: Bi-quintic

Default is 0 if image.dtype is bool and 1 otherwise.

mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional

Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.

cvalfloat, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

clipbool, optional

Whether to clip the output to the range of values of the input image. If order > 1, this will be enabled by default, since higher order interpolation may produce values outside the given input range.

preserve_rangebool, optional

Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html

Returns
warpeddouble ndarray

The warped input image.

Notes

  • The input image is converted to a double image.

  • In case of a SimilarityTransform, AffineTransform and ProjectiveTransform and order in [0, 3] this function uses the underlying transformation matrix to warp the image with a much faster routine.

Examples

>>> import cupy as cp
>>> from cucim.skimage.transform import warp
>>> from skimage import data
>>> image = cp.array(data.camera())

The following image warps are all equal but differ substantially in execution time. The image is shifted to the bottom.

Use a geometric transform to warp an image (fast):

>>> from cucim.skimage.transform import SimilarityTransform
>>> tform = SimilarityTransform(translation=(0, -10))
>>> warped = warp(image, tform)

Use a callable (slow):

>>> def shift_down(xy):
...     xy[:, 1] -= 10
...     return xy
>>> warped = warp(image, shift_down)

Use a transformation matrix to warp an image (fast):

>>> import cupy as cp
>>> matrix = cp.asarray([[1, 0, 0], [0, 1, -10], [0, 0, 1]])
>>> warped = warp(image, matrix)
>>> from cucim.skimage.transform import ProjectiveTransform, warp
>>> warped = warp(image, ProjectiveTransform(matrix=matrix))

You can also use the inverse of a geometric transformation (fast):

>>> warped = warp(image, tform.inverse)

For N-D images you can pass a coordinate array, that specifies the coordinates in the input image for every element in the output image. E.g. if you want to rescale a 3-D cube, you can do:

>>> cube_shape = (30, 30, 30)
>>> cube = cp.random.rand(*cube_shape)

Setup the coordinate array, that defines the scaling:

>>> scale = 0.1
>>> output_shape = tuple(int(scale * s) for s in cube_shape)
>>> coords0, coords1, coords2 = cp.mgrid[:output_shape[0],
...                    :output_shape[1], :output_shape[2]]
>>> coords = cp.asarray([coords0, coords1, coords2])

Assume that the cube contains spatial data, where the first array element center is at coordinate (0.5, 0.5, 0.5) in real space, i.e. we have to account for this extra offset when scaling the image:

>>> coords = (coords + 0.5) / scale - 0.5
>>> warped = warp(cube, coords)
cucim.skimage.transform.warp_coords(coord_map, shape, dtype=<class 'numpy.float64'>)#

Build the source coordinates for the output of a 2-D image warp.

Parameters
coord_mapcallable like GeometricTransform.inverse

Return input coordinates for given output coordinates. Coordinates are in the shape (P, 2), where P is the number of coordinates and each element is a (row, col) pair.

shapetuple

Shape of output image (rows, cols[, bands]).

dtypenp.dtype or string

dtype for return value (sane choices: float32 or float64).

Returns
coords(ndim, rows, cols[, bands]) array of dtype dtype

Coordinates for scipy.ndimage.map_coordinates, that will yield an image of shape (orows, ocols, bands) by drawing from source points according to the coord_transform_fn.

Notes

This is a lower-level routine that produces the source coordinates for 2-D images used by warp().

It is provided separately from warp to give additional flexibility to users who would like, for example, to reuse a particular coordinate mapping, to use specific dtypes at various points along the image-warping process, or to implement different post-processing logic than warp performs after the call to ndi.map_coordinates.

Examples

Produce a coordinate map that shifts an image up and to the right:

>>> import cupy as cp
>>> from cucim.skimage.transform import warp_coords
>>> from skimage import data
>>> from cupyx.scipy.ndimage import map_coordinates
>>>
>>> def shift_up10_left20(xy):
...     return xy - cp.array([-20, 10])[None, :]
>>>
>>> image = cp.array(data.astronaut().astype(cp.float32))
>>> coords = warp_coords(shift_up10_left20, image.shape)
>>> warped_image = map_coordinates(image, coords)
cucim.skimage.transform.warp_polar(image, center=None, *, radius=None, output_shape=None, scaling='linear', channel_axis=None, **kwargs)#

Remap image to polar or log-polar coordinates space.

Parameters
image(M, N[, C]) ndarray

Input image. For multichannel images channel_axis has to be specified.

center2-tuple, optional

(row, col) coordinates of the point in image that represents the center of the transformation (i.e., the origin in Cartesian space). Values can be of type float. If no value is given, the center is assumed to be the center point of image.

radiusfloat, optional

Radius of the circle that bounds the area to be transformed.

output_shapetuple (row, col), optional
scaling{‘linear’, ‘log’}, optional

Specify whether the image warp is polar or log-polar. Defaults to ‘linear’.

channel_axisint or None, optional

If None, the image is assumed to be a grayscale (single channel) image. Otherwise, this parameter indicates which axis of the array corresponds to channels.

New in version 22.02.00: channel_axis was added in 22.02.00.

**kwargskeyword arguments

Passed to transform.warp.

Returns
warpedndarray

The polar or log-polar warped image.

Examples

Perform a basic polar warp on a grayscale image:

>>> import cupy as cp
>>> from skimage import data
>>> from cucim.skimage.transform import warp_polar
>>> image = cp.array(data.checkerboard())
>>> warped = warp_polar(image)

Perform a log-polar warp on a grayscale image:

>>> warped = warp_polar(image, scaling='log')

Perform a log-polar warp on a grayscale image while specifying center, radius, and output shape:

>>> warped = warp_polar(image, (100,100), radius=100,
...                     output_shape=image.shape, scaling='log')

Perform a log-polar warp on a color image:

>>> image = cp.array(data.astronaut())
>>> warped = warp_polar(image, scaling='log', channel_axis=-1)

util#

cucim.skimage.util.crop(ar, crop_width, copy=False, order='K')#

Crop array ar by crop_width along each dimension.

Parameters
ararray-like of rank N

Input array.

crop_width{sequence, int}

Number of values to remove from the edges of each axis. ((before_1, after_1), …, (before_N, after_N)) specifies unique crop widths at the start and end of each axis. ((before, after),) or (before, after) specifies a fixed start and end crop for every axis. (n,) or n for integer n is a shortcut for before = after = n for all axes.

copybool, optional

If True, ensure the returned array is a contiguous copy. Normally, a crop operation will return a discontiguous view of the underlying input array.

order{‘C’, ‘F’, ‘A’, ‘K’}, optional

If copy==True, control the memory layout of the copy. See np.copy.

Returns
croppedarray

The cropped array. If copy=False (default), this is a sliced view of the input array.
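
Examples

A minimal sketch; ((1, 1), (0, 2)) trims one row from each end of axis 0 and two columns from the end of axis 1:

>>> import cupy as cp
>>> from cucim.skimage.util import crop
>>> a = cp.arange(16).reshape(4, 4)
>>> crop(a, ((1, 1), (0, 2))).shape
(2, 2)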

cucim.skimage.util.dtype_limits(image, clip_negative=False)#

Return intensity limits, i.e. (min, max) tuple, of the image’s dtype.

Parameters
imagendarray

Input image.

clip_negativebool, optional

If True, clip the negative range (i.e. return 0 for min intensity) even if the image dtype allows negative values.

Returns
imin, imaxtuple

Lower and upper intensity limits.
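
Examples

A minimal sketch:

>>> import cupy as cp
>>> from cucim.skimage.util import dtype_limits
>>> dtype_limits(cp.zeros((2, 2), dtype=cp.uint8))
(0, 255)
>>> dtype_limits(cp.zeros((2, 2), dtype=cp.int8), clip_negative=True)
(0, 127)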

cucim.skimage.util.img_as_bool(image, force_copy=False)#

Convert an image to boolean format.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of bool (bool_)

Output image.

Notes

The upper half of the input dtype’s positive range is True, and the lower half is False. All negative values (if present) are False.

cucim.skimage.util.img_as_float(image, force_copy=False)#

Convert an image to floating point format.

This function is similar to img_as_float64, but will not convert lower-precision floating point arrays to float64.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of float

Output image.

Notes

The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0].
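
Examples

A minimal sketch; uint8 values are rescaled onto [0.0, 1.0]:

>>> import cupy as cp
>>> from cucim.skimage.util import img_as_float
>>> img_as_float(cp.array([[0, 128, 255]], dtype=cp.uint8))
array([[0.        , 0.50196078, 1.        ]])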

cucim.skimage.util.img_as_float32(image, force_copy=False)#

Convert an image to single-precision (32-bit) floating point format.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of float32

Output image.

Notes

The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0].

cucim.skimage.util.img_as_float64(image, force_copy=False)#

Convert an image to double-precision (64-bit) floating point format.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of float64

Output image.

Notes

The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0].

cucim.skimage.util.img_as_int(image, force_copy=False)#

Convert an image to 16-bit signed integer format.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of int16

Output image.

Notes

The values are scaled between -32768 and 32767. If the input data-type is positive-only (e.g., uint8), then the output image will still only have positive values.

cucim.skimage.util.img_as_ubyte(image, force_copy=False)#

Convert an image to 8-bit unsigned integer format.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of ubyte (uint8)

Output image.

Notes

Negative input values will be clipped. Positive values are scaled between 0 and 255.
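
Examples

A minimal sketch; float values on [0, 1] are scaled onto [0, 255]:

>>> import cupy as cp
>>> from cucim.skimage.util import img_as_ubyte
>>> img_as_ubyte(cp.array([[0.0, 0.5, 1.0]]))
array([[  0, 128, 255]], dtype=uint8)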

cucim.skimage.util.img_as_uint(image, force_copy=False)#

Convert an image to 16-bit unsigned integer format.

Parameters
imagendarray

Input image.

force_copybool, optional

Force a copy of the data, irrespective of its current dtype.

Returns
outndarray of uint16

Output image.

Notes

Negative input values will be clipped. Positive values are scaled between 0 and 65535.

cucim.skimage.util.invert(image, signed_float=False)#

Invert an image.

Invert the intensity range of the input image, so that the dtype maximum is now the dtype minimum, and vice-versa. This operation is slightly different depending on the input dtype:

  • unsigned integers: subtract the image from the dtype maximum

  • signed integers: subtract the image from -1 (see Notes)

  • floats: subtract the image from 1 (if signed_float is False, so we assume the image is unsigned), or from 0 (if signed_float is True).

See the examples for clarification.

Parameters
imagendarray

Input image.

signed_floatbool, optional

If True and the image is of type float, the range is assumed to be [-1, 1]. If False and the image is of type float, the range is assumed to be [0, 1].

Returns
invertedndarray

Inverted image.

Notes

Ideally, for signed integers we would simply multiply by -1. However, signed integer ranges are asymmetric. For example, for np.int8, the range of possible values is [-128, 127], so that -128 * -1 equals -128! By subtracting from -1, we correctly map the maximum dtype value to the minimum.

Examples

>>> import cupy as cp
>>> from cucim.skimage.util import invert
>>> img = cp.asarray([[100,  0, 200],
...                   [  0, 50,   0],
...                   [ 30,  0, 255]], cp.uint8)
>>> invert(img)
array([[155, 255,  55],
       [255, 205, 255],
       [225, 255,   0]], dtype=uint8)
>>> img2 = cp.asarray([[ -2, 0, -128],
...                    [127, 0,    5]], cp.int8)
>>> invert(img2)
array([[   1,   -1,  127],
       [-128,   -1,   -6]], dtype=int8)
>>> img3 = cp.asarray([[ 0., 1., 0.5, 0.75]])
>>> invert(img3)
array([[1.  , 0.  , 0.5 , 0.25]])
>>> img4 = cp.asarray([[ 0., 1., -1., -0.25]])
>>> invert(img4, signed_float=True)
array([[-0.  , -1.  ,  1.  ,  0.25]])
cucim.skimage.util.map_array(input_arr, input_vals, output_vals, out=None)#

Map values from input array from input_vals to output_vals.

Parameters
input_arrarray of int, shape (M[, …])

The input label image.

input_valsarray of int, shape (K,)

The values to map from.

output_valsarray, shape (K,)

The values to map to.

outarray, same shape as input_arr

The output array. Will be created if not provided. It should have the same dtype as output_vals.

Returns
outarray, same shape as input_arr

The array of mapped values.

Notes

If input_arr contains values that aren’t covered by input_vals, they are set to 0.

Examples

>>> import cupy as cp
>>> import cucim.skimage as ski
>>> ski.util.map_array(
...    input_arr=cp.array([[0, 2, 2, 0], [3, 4, 5, 0]]),
...    input_vals=cp.array([1, 2, 3, 4, 6]),
...    output_vals=cp.array([6, 7, 8, 9, 10]),
... )
array([[0, 7, 7, 0],
       [8, 9, 0, 0]])
cucim.skimage.util.random_noise(image, mode='gaussian', rng=None, clip=True, *, seed=<DEPRECATED>, **kwargs)#

Function to add random noise of various types to a floating-point image.

Parameters
imagendarray

Input image data. Will be converted to float.

modestr, optional

One of the following strings, selecting the type of noise to add:

‘gaussian’ (default)

Gaussian-distributed additive noise.

‘localvar’

Gaussian-distributed additive noise, with specified local variance at each point of image.

‘poisson’

Poisson-distributed noise generated from the data.

‘salt’

Replaces random pixels with 1.

‘pepper’

Replaces random pixels with 0 (for unsigned images) or -1 (for signed images).

‘s&p’

Replaces random pixels with either 1 or low_val, where low_val is 0 for unsigned images or -1 for signed images.

‘speckle’

Multiplicative noise using out = image + n * image, where n is Gaussian noise with specified mean & variance.

rng : {cupy.random.Generator, int}, optional

Pseudo-random number generator. By default, a PCG64 generator is used (see cupy.random.default_rng()). If rng is an int, it is used to seed the generator.

Note: cupy.random.Generator is not yet fully supported. Please use an integer seed instead.

clip : bool, optional

If True (default), the output will be clipped after noise is applied for modes 'speckle', 'poisson', and 'gaussian'. This is needed to maintain the proper image data range. If False, clipping is not applied, and the output may extend beyond the range [-1, 1].

mean : float, optional

Mean of random distribution. Used in ‘gaussian’ and ‘speckle’. Default : 0.

var : float, optional

Variance of random distribution. Used in ‘gaussian’ and ‘speckle’. Note: variance = (standard deviation) ** 2. Default : 0.01

local_vars : ndarray, optional

Array of positive floats, same shape as image, defining the local variance at every image point. Used in ‘localvar’.

amount : float, optional

Proportion of image pixels to replace with noise on range [0, 1]. Used in 'salt', 'pepper', and 's&p'. Default : 0.05

salt_vs_pepper : float, optional

Proportion of salt vs. pepper noise for ‘s&p’ on range [0, 1]. Higher values represent more salt. Default : 0.5 (equal amounts)

Returns
out : ndarray

Output floating-point image data on range [0, 1] or [-1, 1] if the input image was unsigned or signed, respectively.

Other Parameters
seed : DEPRECATED

Deprecated in favor of rng.

Deprecated since version 23.12.

Notes

Speckle, Poisson, Localvar, and Gaussian noise may generate noise outside the valid image range. The default is to clip (not alias) these values, but they may be preserved by setting clip=False. Note that in this case the output may contain values outside the ranges [0, 1] or [-1, 1]. Use this option with care.

Because of the prevalence of exclusively positive floating-point images in intermediate calculations, it is not possible to intuit if an input is signed based on dtype alone. Instead, negative values are explicitly searched for. Only if found does this function assume signed input. Unexpected results only occur in rare, poorly exposed cases (e.g. if all values are above 50 percent gray in a signed image). In this event, manually scaling the input to the positive domain will solve the problem.

The Poisson distribution is only defined for positive integers. To apply this noise type, the number of unique values in the image is found and the next power of two is used to scale up the floating-point result, after which it is scaled back down to the floating-point image range.

To generate Poisson noise against a signed image, the signed image is temporarily converted to an unsigned image in the floating point domain, Poisson noise is generated, then it is returned to the original range.
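
Examples

A minimal usage sketch (added here for illustration, as this entry has no Examples section of its own); the output is random, so only the shape and, given the default clip=True, the value range are checked:

>>> import cupy as cp
>>> from cucim.skimage.util import random_noise
>>> img = cp.full((8, 8), 0.5, dtype=cp.float64)
>>> noisy = random_noise(img, mode='gaussian', rng=42, var=0.01)
>>> noisy.shape
(8, 8)
>>> bool(((noisy >= 0) & (noisy <= 1)).all())  # clip=True keeps values in [0, 1]
True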

cucim.skimage.util.view_as_blocks(arr_in, block_shape)#

Block view of the input n-dimensional array (using re-striding).

Blocks are non-overlapping views of the input array.

Parameters
arr_in : ndarray, shape (M[, …])

Input array.

block_shape : tuple

The shape of the block. Each dimension must divide evenly into the corresponding dimension of arr_in.

Returns
arr_out : ndarray

Block view of the input array. If arr_in is non-contiguous, a copy is made.

Examples

>>> import cupy as cp
>>> from cucim.skimage.util.shape import view_as_blocks
>>> A = cp.arange(4*4).reshape(4,4)
>>> A
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])
>>> B = view_as_blocks(A, block_shape=(2, 2))
>>> B[0, 0]
array([[0, 1],
       [4, 5]])
>>> B[0, 1]
array([[2, 3],
       [6, 7]])
>>> B[1, 0, 1, 1]
array(13)
>>> A = cp.arange(4*4*6).reshape(4,4,6)
>>> A  
array([[[ 0,  1,  2,  3,  4,  5],
        [ 6,  7,  8,  9, 10, 11],
        [12, 13, 14, 15, 16, 17],
        [18, 19, 20, 21, 22, 23]],
       [[24, 25, 26, 27, 28, 29],
        [30, 31, 32, 33, 34, 35],
        [36, 37, 38, 39, 40, 41],
        [42, 43, 44, 45, 46, 47]],
       [[48, 49, 50, 51, 52, 53],
        [54, 55, 56, 57, 58, 59],
        [60, 61, 62, 63, 64, 65],
        [66, 67, 68, 69, 70, 71]],
       [[72, 73, 74, 75, 76, 77],
        [78, 79, 80, 81, 82, 83],
        [84, 85, 86, 87, 88, 89],
        [90, 91, 92, 93, 94, 95]]])
>>> B = view_as_blocks(A, block_shape=(1, 2, 2))
>>> B.shape
(4, 2, 3, 1, 2, 2)
>>> B[2:, 0, 2]  
array([[[[52, 53],
         [58, 59]]],
       [[[76, 77],
         [82, 83]]]])
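
A typical follow-up to the block view is block-wise pooling, reducing over the trailing block axes; a short sketch (not part of the original docstring):

>>> blocks = view_as_blocks(cp.arange(16.).reshape(4, 4), (2, 2))
>>> blocks.mean(axis=(-2, -1))  # one mean per non-overlapping 2x2 block
array([[ 2.5,  4.5],
       [10.5, 12.5]])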
cucim.skimage.util.view_as_windows(arr_in, window_shape, step=1)#

Rolling window view of the input n-dimensional array.

Windows are overlapping views of the input array, with adjacent windows shifted by a single row or column (or an index of a higher dimension).

Parameters
arr_in : ndarray, shape (M[, …])

Input array.

window_shape : integer or tuple of length arr_in.ndim

Defines the shape of the elementary n-dimensional orthotope (better known as a hyperrectangle [1]) of the rolling window view. If an integer is given, the shape will be a hypercube of side length given by its value.

step : integer or tuple of length arr_in.ndim

Indicates the step size at which extraction shall be performed. If an integer is given, the step is uniform in all dimensions.

Returns
arr_out : ndarray

(rolling) window view of the input array.

Notes

One should be very careful with rolling views when it comes to memory usage. Indeed, although a ‘view’ has the same memory footprint as its base array, the actual array that emerges when this ‘view’ is used in a computation is generally a (much) larger array than the original, especially for 2-dimensional arrays and above.

For example, let us consider a 3 dimensional array of size (100, 100, 100) of float64. This array takes about 8*100**3 Bytes for storage which is just 8 MB. If one decides to build a rolling view on this array with a window of (3, 3, 3) the hypothetical size of the rolling view (if one was to reshape the view for example) would be 8*(100-3+1)**3*3**3 which is about 203 MB! The scaling becomes even worse as the dimension of the input array becomes larger.
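
The arithmetic above can be verified directly (a plain-Python sketch, not part of the original docstring):

>>> item_bytes, side, win = 8, 100, 3
>>> item_bytes * side ** 3  # base array: 8 MB
8000000
>>> item_bytes * (side - win + 1) ** 3 * win ** 3  # materialized view: ~203 MB
203297472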

References

[1] https://en.wikipedia.org/wiki/Hyperrectangle

Examples

>>> import cupy as cp
>>> from cucim.skimage.util.shape import view_as_windows
>>> A = cp.arange(4*4).reshape(4,4)
>>> A
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])
>>> window_shape = (2, 2)
>>> B = view_as_windows(A, window_shape)
>>> B[0, 0]
array([[0, 1],
       [4, 5]])
>>> B[0, 1]
array([[1, 2],
       [5, 6]])
>>> A = cp.arange(10)
>>> A
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> window_shape = (3,)
>>> B = view_as_windows(A, window_shape)
>>> B.shape
(8, 3)
>>> B
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6],
       [5, 6, 7],
       [6, 7, 8],
       [7, 8, 9]])
>>> A = cp.arange(5*4).reshape(5, 4)
>>> A
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [16, 17, 18, 19]])
>>> window_shape = (4, 3)
>>> B = view_as_windows(A, window_shape)
>>> B.shape
(2, 2, 4, 3)
>>> B  
array([[[[ 0,  1,  2],
         [ 4,  5,  6],
         [ 8,  9, 10],
         [12, 13, 14]],
        [[ 1,  2,  3],
         [ 5,  6,  7],
         [ 9, 10, 11],
         [13, 14, 15]]],
       [[[ 4,  5,  6],
         [ 8,  9, 10],
         [12, 13, 14],
         [16, 17, 18]],
        [[ 5,  6,  7],
         [ 9, 10, 11],
         [13, 14, 15],
         [17, 18, 19]]]])
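
None of the examples above exercises step; a quick sketch with a non-unit step (not part of the original docstring), continuing with the imports from the first example:

>>> A = cp.arange(10)
>>> view_as_windows(A, (3,), step=2)
array([[0, 1, 2],
       [2, 3, 4],
       [4, 5, 6],
       [6, 7, 8]])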

Submodule Contents#

skimage#

GPU Image Processing for Python

This module is a CuPy based implementation of a subset of scikit-image.

It is a collection of algorithms for image processing and computer vision.

The main package only provides a few utilities for converting between image data types; for most features, you need to import one of the following subpackages (a short import sketch follows the list):

Subpackages#

color

Color space conversion.

data

Test images and example data.

exposure

Image intensity adjustment, e.g., histogram equalization, etc.

feature

Feature detection and extraction, e.g., texture analysis, corners, etc.

filters

Sharpening, edge finding, rank filters, thresholding, etc.

measure

Measurement of image properties, e.g., region properties and contours.

metrics

Metrics corresponding to images, e.g. distance metrics, similarity, etc.

morphology

Morphological operations, e.g., opening or skeletonization.

restoration

Restoration algorithms, e.g., deconvolution algorithms, denoising, etc.

segmentation

Partitioning an image into multiple regions.

transform

Geometric and other transforms, e.g., rotation or the Radon transform.

util

Generic utilities.
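
For instance (an illustrative sketch; any of the subpackages above can be imported the same way):

>>> import cupy as cp
>>> from cucim.skimage import color, filters, morphology
>>> gray = color.rgb2gray(cp.ones((4, 4, 3)))
>>> gray.shape
(4, 4)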

Utility Functions#

img_as_float

Convert an image to floating point format, with values in [0, 1]. Similar to img_as_float64, but it will not convert lower-precision floating-point arrays to float64.

img_as_float32

Convert an image to single-precision (32-bit) floating point format, with values in [0, 1].

img_as_float64

Convert an image to double-precision (64-bit) floating point format, with values in [0, 1].

img_as_uint

Convert an image to unsigned integer format, with values in [0, 65535].

img_as_int

Convert an image to signed integer format, with values in [-32768, 32767].

img_as_ubyte

Convert an image to unsigned byte format, with values in [0, 255].

img_as_bool

Convert an image to boolean format, with values either True or False.

dtype_limits

Return intensity limits, i.e. (min, max) tuple, of the image’s dtype.
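
A short round-trip sketch of these converters (illustrative, not from the original documentation):

>>> import cupy as cp
>>> from cucim.skimage.util import dtype_limits, img_as_float, img_as_ubyte
>>> img = cp.asarray([[0, 128, 255]], dtype=cp.uint8)
>>> img_as_float(img).dtype  # uint8 [0, 255] -> float in [0, 1]
dtype('float64')
>>> img_as_ubyte(cp.asarray([0.0, 0.5, 1.0]))
array([  0, 128, 255], dtype=uint8)
>>> dtype_limits(img)
(0, 255)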