Classification Workflow

class biapy.engine.classification.Classification_Workflow(cfg, job_identifier, device, args, **kwargs)[source]

Bases: Base_Workflow

Classification workflow, whose goal is to assign a label to each input image. More details in our documentation. A minimal construction sketch is shown after the parameter list below.

Parameters:
  • cfg (YACS configuration) – Running configuration.

  • job_identifier (str) – Complete name of the running job.

  • device (Torch device) – Device used.

  • args (argparse Namespace) – Arguments used in BiaPy’s call.
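
The following is a minimal construction sketch, not a verbatim BiaPy recipe: the YACS configuration file name, the fields placed in the argparse namespace, and the job identifier are illustrative placeholders around the constructor signature documented above.

    import argparse
    import torch
    from yacs.config import CfgNode

    from biapy.engine.classification import Classification_Workflow

    # Hypothetical running configuration; in practice BiaPy builds this from the
    # user's YAML configuration file.
    cfg = CfgNode(new_allowed=True)
    cfg.merge_from_file("classification.yaml")  # illustrative file name

    # Minimal stand-in for the argparse namespace produced by BiaPy's command line.
    args = argparse.Namespace(gpu="0")  # illustrative field

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    workflow = Classification_Workflow(
        cfg, job_identifier="my_classification_run", device=device, args=args
    )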

define_metrics()[source]

Definition of self.metrics, self.metric_names and self.loss variables.
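
As a sketch only, continuing the construction example above: define_metrics() is assumed to be called internally by the engine, and is invoked here merely to inspect the attributes it fills.

    # Assumed to run internally during setup; called here only for illustration.
    workflow.define_metrics()
    print(workflow.metric_names)  # names of the classification metrics
    print(workflow.loss)          # loss function selected for this workflow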

metric_calculation(output, targets, metric_logger=None)[source]

Execution of the metrics defined in the define_metrics() function.

Parameters:
  • output (Torch Tensor) – Prediction of the model.

  • targets (Torch Tensor) – Ground truth to compare the prediction with.

  • metric_logger (MetricLogger, optional) – Class to be updated with the new metric(s) value(s) calculated.

Returns:

value – Value of the metric for the given prediction.

Return type:

float
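
A usage sketch continuing the construction example above; the batch size, class count and label values are illustrative assumptions.

    import torch

    # Illustrative logits for a batch of 4 images over 5 classes, plus integer labels.
    output = torch.randn(4, 5)
    targets = torch.tensor([0, 3, 1, 4])

    # Returns a float with the metric value for this batch; an optional MetricLogger
    # instance would be updated in place if passed.
    value = workflow.metric_calculation(output, targets)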

prepare_targets(targets, batch)[source]

Perform any necessary data transformations on targets before calculating the loss.

Parameters:
  • targets (Torch Tensor) – Ground truth to compare the prediction with.

  • batch (Torch Tensor) – Prediction of the model. Not used here.

Returns:

targets – Resulting targets.

Return type:

Torch Tensor
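
A sketch of how this might sit inside a training step, assuming self.loss is the callable defined by define_metrics(); output and targets are reused from the metric_calculation sketch above and batch is a placeholder, since it is documented as not used here.

    # Illustrative training-step fragment: transform the ground truth, then compute the loss.
    batch = None  # placeholder; documented as not used by this workflow
    targets = workflow.prepare_targets(targets, batch)
    loss = workflow.loss(output, targets)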

load_train_data()[source]

Load training and validation data.

load_test_data()[source]

Load test data.
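
The ordering below is an assumption based on the method names: data loading happens once, before the corresponding phase starts.

    # Training phase: load training and validation data before the training loop.
    workflow.load_train_data()

    # Test phase: load the test data before running inference.
    workflow.load_test_data()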

process_sample(norm)[source]

Process a sample in the inference phase.

Parameters:

norm (List of dicts) – Normalization used during training. Required to denormalize the predictions of the model.
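
The exact structure of norm is internal to BiaPy's data pipeline; the dictionary key below is purely hypothetical and only illustrates the shape of the call.

    # Hypothetical normalization description; real keys come from BiaPy's normalization module.
    norm = [{"type": "div"}]
    workflow.process_sample(norm)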

torchvision_model_call(in_img, is_train=False)[source]

Call a regular PyTorch model.

Parameters:
  • in_img (Torch Tensor) – Input image to pass through the model.

  • is_train (bool, optional) – Whether the call is made during training or inference.

Returns:

prediction – Image prediction.

Return type:

Tensor
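
A call sketch continuing the construction example above; the input shape is an assumption for a single 2D RGB image.

    import torch

    # Illustrative input: a batch of one 3-channel 224x224 image on the selected device.
    in_img = torch.randn(1, 3, 224, 224, device=device)

    prediction = workflow.torchvision_model_call(in_img, is_train=False)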

after_all_images()[source]

Steps that must be done after predicting all images.

print_stats(image_counter)[source]

Print statistics.

Parameters:

image_counter (int) – Number of predicted images, used when calling normalize_stats().
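
A sketch only; the image count is illustrative and, per the parameter description, normalize_stats() is invoked through this call.

    # Illustrative: report metrics after predicting 10 test images.
    workflow.print_stats(10)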

after_merge_patches(pred)[source]

Steps that need to be done after merging all predicted patches into the original image.

Parameters:

pred (Torch Tensor) – Model prediction.

after_merge_patches_by_chunks_proccess_patch(filename)[source]

Steps that need to be done after merging all predicted patches into the original image, when the merging is performed chunk by chunk. This function operates patch by patch, with the patch shape defined by DATA.PATCH_SIZE.

Parameters:

filename (List of str) – Filename of the predicted image H5/Zarr.

after_full_image(pred)[source]

Steps that must be executed after generating the prediction by supplying the entire image to the model.

Parameters:

pred (Torch Tensor) – Model prediction.

normalize_stats(image_counter)[source]

Normalize statistics.

Parameters:

image_counter (int) – Number of images to average the metrics.