Instance Segmentation Workflow

class biapy.engine.instance_seg.Instance_Segmentation_Workflow(cfg, job_identifier, device, args, **kwargs)[source]

Bases: Base_Workflow

Instance segmentation workflow where the goal is to assign a unique id (an integer) to each object of the input image. More details in our documentation.

Parameters:
  • cfg (YACS configuration) – Running configuration.

  • job_identifier (str) – Complete name of the running job.

  • device (Torch device) – Device used.

  • args (argparse Namespace) – Arguments used in BiaPy’s call.
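The core idea of instance segmentation, assigning one integer id per object, can be illustrated with a connected-component labeling step. This is a minimal standalone sketch using SciPy, not BiaPy's internal pipeline:

```python
import numpy as np
from scipy import ndimage

# A toy binary semantic mask containing two separate objects
mask = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
], dtype=np.uint8)

# Connected-component labeling assigns a unique integer id to each object
instances, n_objects = ndimage.label(mask)
print(n_objects)  # 2
```

In the output array, pixels of the first object hold the value 1, pixels of the second hold 2, and the background stays 0.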

define_metrics()[source]

Definition of self.metrics, self.metric_names and self.loss variables.

metric_calculation(output, targets, metric_logger=None)[source]

Execute the metrics defined in the define_metrics() function.

Parameters:
  • output (Torch Tensor) – Prediction of the model.

  • targets (Torch Tensor) – Ground truth to compare the prediction with.

  • metric_logger (MetricLogger, optional) – Class to be updated with the new metric(s) value(s) calculated.

Returns:

value – Value of the metric for the given prediction.

Return type:

float
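As a concrete illustration of the kind of value this method returns, here is a NumPy sketch of a foreground IoU (Jaccard index) between a thresholded prediction and a binary target. The function name and threshold are illustrative; the real implementation operates on Torch tensors and uses the metrics selected in define_metrics():

```python
import numpy as np

def jaccard_index(output, targets, thresh=0.5):
    """Foreground IoU between a thresholded prediction and a binary target."""
    pred = (output >= thresh).astype(bool)
    gt = targets.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union > 0 else 1.0

output = np.array([[0.9, 0.8], [0.2, 0.7]])   # model probabilities
targets = np.array([[1, 1], [0, 0]])          # ground truth
print(jaccard_index(output, targets))         # 2 / 3
```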

instance_seg_process(pred, filenames, out_dir, out_dir_post_proc, resolution)[source]

Instance segmentation workflow engine for test/inference. Processes the model’s prediction to prepare the instance segmentation output and to calculate metrics.

Parameters:
  • pred (4D/5D Torch tensor) – Model predictions. E.g. (y, x, channels) for 2D and (z, y, x, channels) for 3D.

  • filenames (List of str) – Filenames of the predicted images.

  • out_dir (path) – Output directory to save the instances.

  • out_dir_post_proc (path) – Output directory to save the post-processed instances.
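To give an intuition for how per-pixel predictions become labeled instances, the sketch below grows seeds taken from a foreground-probability channel, split by a contour channel, over the foreground mask. This SciPy-only seed-growing step stands in for the marker-based watershed BiaPy applies; the channel names and thresholds are illustrative:

```python
import numpy as np
from scipy import ndimage

# Toy predicted channels: one foreground blob split in two by a contour
fg = np.zeros((6, 6))
fg[1:5, 1:5] = 0.9          # predicted foreground probability
contour = np.zeros((6, 6))
contour[:, 3] = 0.8         # predicted contour between two touching cells

seeds = (fg > 0.5) & (contour < 0.5)   # seed regions away from contours
markers, _ = ndimage.label(seeds)      # one integer id per seed region

# Grow each seed over the nearest foreground pixels (watershed-like step)
_, idx = ndimage.distance_transform_edt(markers == 0, return_indices=True)
instances = markers[tuple(idx)] * (fg > 0.5)
print(len(np.unique(instances)) - 1)   # number of instances found
```

Without the contour channel the blob would collapse into a single instance; splitting touching objects is exactly why contour-style channels are predicted.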

process_sample(norm)[source]

Function to process a sample in the inference phase.

Parameters:

norm (List of dicts) – Normalization used during training. Required to denormalize the predictions of the model.
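The denormalization described above can be sketched as follows. The dict keys ("type", "mean", "std") are an assumed structure for illustration only; BiaPy stores whatever normalization record was actually used during training:

```python
import numpy as np

# Assumed normalization record (illustrative keys, not BiaPy's exact schema)
norm = [{"type": "zero_mean_unit_variance", "mean": 127.0, "std": 32.0}]

pred = np.array([-1.0, 0.0, 1.0])  # normalized model output
spec = norm[0]
if spec["type"] == "zero_mean_unit_variance":
    pred = pred * spec["std"] + spec["mean"]  # invert (x - mean) / std
print(pred.tolist())  # [95.0, 127.0, 159.0]
```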

after_merge_patches(pred)[source]

Steps that need to be done after merging all predicted patches into the original image.

Parameters:

pred (Torch Tensor) – Model prediction.

after_merge_patches_by_chunks_proccess_patch(filename)[source]

Place here any code that needs to run after merging all predicted patches into the original image when that merging is done chunk by chunk. This function operates patch by patch, with the patch shape defined by DATA.PATCH_SIZE.
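The patch-by-patch traversal can be sketched as below, with a plain NumPy array standing in for the H5/Zarr dataset and a tuple playing the role of DATA.PATCH_SIZE:

```python
import numpy as np

data = np.arange(64).reshape(8, 8)  # stand-in for the merged H5/Zarr image
patch_size = (4, 4)                 # stand-in for DATA.PATCH_SIZE

patches = []
for y in range(0, data.shape[0], patch_size[0]):
    for x in range(0, data.shape[1], patch_size[1]):
        patches.append(data[y:y + patch_size[0], x:x + patch_size[1]])

print(len(patches))  # 4 patches of shape (4, 4)
```

Processing one patch at a time keeps memory usage bounded even when the merged volume is far larger than RAM.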

Parameters:

filename (List of str) – Filename of the predicted H5/Zarr image.

after_full_image(pred)[source]

Steps that must be executed after generating the prediction by supplying the entire image to the model.

Parameters:

pred (Torch Tensor) – Model prediction.

after_all_images()[source]

Steps that must be done after predicting all images.

normalize_stats(image_counter)[source]

Normalize statistics.

Parameters:

image_counter (int) – Number of images to average the metrics.
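The averaging performed here amounts to dividing the per-image running totals by the image count. A minimal sketch with made-up variable names, not BiaPy's internals:

```python
# Metric values summed while processing 3 images (made-up totals)
stats = {"jaccard": 1.5, "f1": 2.1}
image_counter = 3

averaged = {name: total / image_counter for name, total in stats.items()}
print(averaged["jaccard"])  # 0.5
```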

print_stats(image_counter)[source]

Print statistics.

Parameters:

image_counter (int) – Number of images, passed to normalize_stats.

prepare_instance_data()[source]

Create the instance segmentation ground truth images used to train the model from the provided ground truth instances. They are saved in a separate folder inside the root path of the ground truth.
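A common way to derive training targets from instance labels is to compute a binary foreground channel plus a contour channel. The sketch below shows that idea with SciPy morphology; the actual channels BiaPy generates depend on its configuration:

```python
import numpy as np
from scipy import ndimage

# Toy instance-labeled ground truth with two objects
instances = np.zeros((7, 7), dtype=np.int32)
instances[1:4, 1:4] = 1  # first instance (3x3)
instances[4:6, 4:6] = 2  # second instance (2x2)

binary_mask = (instances > 0).astype(np.uint8)

# A contour pixel is a foreground pixel whose 3x3 neighborhood contains
# more than one label value (including background)
eroded = ndimage.grey_erosion(instances, size=(3, 3))
dilated = ndimage.grey_dilation(instances, size=(3, 3))
contours = ((eroded != dilated) & (instances > 0)).astype(np.uint8)

print(binary_mask.sum(), contours.sum())  # 13 12
```

Only the single interior pixel of the 3x3 object escapes the contour channel, which is why touching objects remain separable at inference time.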

torchvision_model_call(in_img, is_train=False)[source]

Call a regular PyTorch model.

Parameters:
  • in_img (Tensor) – Input image to pass through the model.

  • is_train (bool, optional) – Whether the call is made during training or inference.

Returns:

prediction – Image prediction.

Return type:

Tensor