MultiResUNet

class biapy.models.multiresunet.Conv_batchnorm(conv, batchnorm, num_in_filters, num_out_filters, kernel_size, stride=1, activation='relu')[source]

Bases: Module

Convolutional layer followed by batch normalization and an activation function.

Parameters:
  • conv (Torch conv layer) – Convolutional layer to use.

  • batchnorm (Torch batch normalization layer) – Batch normalization layer to use.

  • num_in_filters (int) – Number of input filters.

  • num_out_filters (int) – Number of output filters.

  • kernel_size (Tuple of ints) – Size of the convolving kernel.

  • stride (int or Tuple of ints, optional) – Stride of the convolution.

  • activation (str, optional) – Activation function.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
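The conv → batch-norm → activation pattern that Conv_batchnorm wraps can be sketched as follows. The class and parameter names follow the signature above, but this is an illustrative re-implementation built directly on PyTorch, not BiaPy's source; `padding="same"` is an assumption made so that spatial dimensions are preserved.

```python
import torch
import torch.nn as nn

# Sketch of a conv -> batch-norm -> activation wrapper; `conv` and
# `batchnorm` are the layer *classes* to instantiate (e.g. nn.Conv2d,
# nn.BatchNorm2d), matching the parameter list documented above.
class ConvBatchNormSketch(nn.Module):
    def __init__(self, conv, batchnorm, num_in_filters, num_out_filters,
                 kernel_size, stride=1, activation="relu"):
        super().__init__()
        # padding="same" is an assumption; it keeps spatial size when stride=1
        self.conv = conv(num_in_filters, num_out_filters, kernel_size,
                         stride=stride, padding="same")
        self.batchnorm = batchnorm(num_out_filters)
        self.activation = activation

    def forward(self, x):
        x = self.batchnorm(self.conv(x))
        return torch.relu(x) if self.activation == "relu" else x

layer = ConvBatchNormSketch(nn.Conv2d, nn.BatchNorm2d, 3, 16, (3, 3))
out = layer(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```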

class biapy.models.multiresunet.Multiresblock(conv, batchnorm, num_in_channels, num_filters, alpha=1.67)[source]

Bases: Module

MultiRes Block.

Parameters:
  • conv (Torch conv layer) – Convolutional layer to use.

  • batchnorm (Torch batch normalization layer) – Batch normalization layer to use.

  • num_in_channels (int) – Number of channels coming into the multires block.

  • num_filters (int) – Number of output filters.

  • alpha (float, optional) – Alpha hyperparameter.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
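The MultiRes block from the referenced paper chains three 3×3 convolutions, concatenates their outputs, and adds a 1×1 residual shortcut. The sketch below follows the paper's filter split (roughly 1/6, 1/3, and 1/2 of alpha * num_filters); it is a 2D illustration of the idea, not BiaPy's exact implementation, which additionally accepts the `conv` and `batchnorm` layer classes as arguments.

```python
import torch
import torch.nn as nn

# Illustrative 2D MultiRes block: three chained 3x3 convs whose outputs
# are concatenated and added to a 1x1 shortcut projection, then
# batch-normalized. Filter ratios follow the MultiResUNet paper.
class MultiresblockSketch(nn.Module):
    def __init__(self, num_in_channels, num_filters, alpha=1.67):
        super().__init__()
        w = alpha * num_filters
        f1, f2, f3 = int(w * 0.167), int(w * 0.333), int(w * 0.5)
        self.conv1 = nn.Conv2d(num_in_channels, f1, 3, padding="same")
        self.conv2 = nn.Conv2d(f1, f2, 3, padding="same")
        self.conv3 = nn.Conv2d(f2, f3, 3, padding="same")
        self.shortcut = nn.Conv2d(num_in_channels, f1 + f2 + f3, 1)
        self.bn = nn.BatchNorm2d(f1 + f2 + f3)

    def forward(self, x):
        a = torch.relu(self.conv1(x))
        b = torch.relu(self.conv2(a))
        c = torch.relu(self.conv3(b))
        out = torch.cat([a, b, c], dim=1) + self.shortcut(x)
        return torch.relu(self.bn(out))

block = MultiresblockSketch(num_in_channels=3, num_filters=32)
out = block(torch.randn(1, 3, 64, 64))
print(out.shape)
```

With num_filters=32 and the default alpha, the three branches get 8, 17, and 26 filters, so the block outputs 51 channels rather than exactly 32; this varying channel count is why the decoder needs to track the concatenated widths.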

class biapy.models.multiresunet.Respath(conv, batchnorm, num_in_filters, num_out_filters, respath_length)[source]

Bases: Module

ResPath.

Parameters:
  • conv (Torch conv layer) – Convolutional layer to use.

  • batchnorm (Torch batch normalization layer) – Batch normalization layer to use.

  • num_in_filters (int) – Number of input filters.

  • num_out_filters (int) – Number of filters going out of the ResPath.

  • respath_length (int) – Length of the ResPath.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
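A ResPath passes the skip connection through a chain of `respath_length` residual units (a 3×3 convolution plus a 1×1 shortcut, batch-normalized) so encoder features are further processed before being concatenated in the decoder. The sketch below illustrates that idea in 2D following the referenced paper; it is not BiaPy's exact code, which also takes the `conv` and `batchnorm` layer classes as arguments.

```python
import torch
import torch.nn as nn

# Illustrative ResPath: respath_length residual conv units applied in
# sequence along the skip connection.
class RespathSketch(nn.Module):
    def __init__(self, num_in_filters, num_out_filters, respath_length):
        super().__init__()
        self.convs, self.shortcuts, self.bns = (
            nn.ModuleList(), nn.ModuleList(), nn.ModuleList())
        in_f = num_in_filters
        for _ in range(respath_length):
            self.convs.append(nn.Conv2d(in_f, num_out_filters, 3, padding="same"))
            self.shortcuts.append(nn.Conv2d(in_f, num_out_filters, 1))
            self.bns.append(nn.BatchNorm2d(num_out_filters))
            in_f = num_out_filters  # later units keep the output width

    def forward(self, x):
        for conv, sc, bn in zip(self.convs, self.shortcuts, self.bns):
            x = torch.relu(bn(torch.relu(conv(x)) + sc(x)))
        return x

path = RespathSketch(32, 32, respath_length=4)
out = path(torch.randn(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```

Longer ResPaths are typically used at shallower encoder levels, where the semantic gap between encoder and decoder features is largest.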

class biapy.models.multiresunet.MultiResUnet(ndim, input_channels, alpha=1.67, n_classes=1, z_down=[2, 2, 2, 2], output_channels='BC', upsampling_factor=(), upsampling_position='pre')[source]

Bases: Module

Create 2D/3D MultiResUNet model.

Reference: MultiResUNet : Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation.

Parameters:
  • ndim (int) – Number of dimensions of the input data.

  • input_channels (int) – Number of channels in image.

  • alpha (float, optional) – Alpha hyperparameter (default: 1.67).

  • n_classes (int, optional) – Number of segmentation classes.

  • z_down (List of ints, optional) – Downsampling applied in the z dimension at each level. Set each value to 1 if the dataset is not isotropic.

  • output_channels (str, optional) – Channels to operate with. Possible values: BC, BCD, BP, BCDv2, BDv2, Dv2 and BCM.

  • upsampling_factor (tuple of ints, optional) – Factor of upsampling for super resolution workflow for each dimension.

  • upsampling_position (str, optional) – Whether the upsampling is applied before the model (pre option) or after it (post option).

forward(x: Tensor) → Tensor | List[Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.