Select workflow
In bioimage analysis, the input and output data vary depending on the specific workflow being used. The following are the workflows implemented in BiaPy, together with the input and output data each requires. Once you’ve identified the workflow you wish to use, follow the running instructions on that workflow’s page (under “How to run”).
In semantic segmentation, the input is an image of the area or object of interest, while the output is another image of the same shape as the input, with a semantic label (a numerical value defining its category) assigned to each pixel. During the training phase, the expected label image for each input (i.e. the ground truth) also needs to be provided for the model to learn.
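As a rough illustration of this data layout (plain NumPy, not BiaPy code; the shapes, dtypes and category names are assumptions made for the example):

```python
import numpy as np

# Illustrative only: a 2D grayscale input image and its semantic label mask.
# Real data would be loaded from image files (e.g. TIFF/PNG).
image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # raw image
labels = np.zeros((256, 256), dtype=np.uint8)                       # ground truth

labels[100:150, 100:150] = 1  # pixels belonging to category 1 (hypothetical)
labels[30:60, 200:230] = 2    # pixels belonging to category 2 (hypothetical)

assert image.shape == labels.shape  # the output mask matches the input shape
```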
In instance segmentation, the input and output are similar to those of semantic segmentation, but the output also includes a unique identifier for each individual object of interest. During the training phase, the expected instance label image for each input (i.e. the ground truth) also needs to be provided for the model to learn.
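A minimal sketch of the difference (again plain NumPy, with assumed shapes): every object gets its own integer ID, even when two objects share the same category.

```python
import numpy as np

# Illustrative only: an instance label mask where each object carries a
# unique integer ID (0 is background).
instances = np.zeros((256, 256), dtype=np.uint16)
instances[100:150, 100:150] = 1  # first object
instances[30:60, 200:230] = 2    # second object, possibly the same category
instances[180:220, 40:90] = 3    # third object

n_objects = instances.max()  # 3 distinct instances
print(f"{n_objects} instances, IDs: {np.unique(instances)[1:]}")
```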
In object detection, the goal is to recognize objects in images without requiring pixel-level accuracy in the output. The input is an image, while the output is a CSV file containing the coordinates of the center point of each object. During the training phase, the list of coordinates of the objects in the input (i.e. the ground truth) also needs to be provided for the model to learn.
Additionally, BiaPy may output an image with the probability map of each object’s center.
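As an illustration of the kind of CSV file involved (stdlib only, not BiaPy code; the file name, column names and coordinates are assumptions):

```python
import csv

# Illustrative only: one row per object center, as a detection workflow
# would consume or produce.
centers = [(120, 87), (45, 201), (310, 152)]  # hypothetical (y, x) coordinates

with open("detected_centers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["axis-0", "axis-1"])  # header names are an assumption
    writer.writerows(centers)
```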
In image denoising, the goal is to remove noise from a given input image. The input is a noisy image, and the output is its denoised version. No ground truth is required, as the model uses an unsupervised learning technique (Noise2Void) to remove the noise.
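To make the input/output relationship concrete, the sketch below simulates a noisy acquisition with NumPy; it only illustrates the data, not the Noise2Void algorithm itself, and the image content and noise level are assumptions.

```python
import numpy as np

# Illustrative only: a clean signal (unknown in practice) corrupted by noise.
# Noise2Void trains directly on images like `noisy`; no clean ground truth
# is ever required.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 128, dtype=np.float32), (128, 1))
noisy = clean + rng.normal(0.0, 25.0, size=clean.shape).astype(np.float32)

# The workflow's input is `noisy`; its output is a denoised estimate of `clean`.
```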
In single image super-resolution, the goal is to reconstruct high-resolution (HR) images from low-resolution (LR) ones. The input is an LR image, and the output is an HR version of the same image (usually ×2 or ×4 larger). During the training phase, the expected HR image corresponding to the input LR image (i.e. the ground truth) also needs to be provided for the model to learn.
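The shape relationship between input and output is simple to state in code (plain NumPy; the image sizes and the ×2 factor are assumptions for the example):

```python
import numpy as np

# Illustrative only: LR input vs. expected HR ground truth for a x2 factor.
scale = 2
lr = np.random.rand(64, 64).astype(np.float32)            # low-resolution input
hr = np.random.rand(64 * scale, 64 * scale).astype(np.float32)  # HR ground truth

assert hr.shape == (lr.shape[0] * scale, lr.shape[1] * scale)
```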
In self-supervised pre-training, the model is trained without the use of labeled data. Instead, the model is presented with a so-called pretext task, such as predicting the rotation of an image, which allows it to learn useful features from the data. Once this initial training is complete, the model can be fine-tuned with labeled data for a specific task, such as image classification. The input of this workflow is simply a set of images, and the output is the pre-trained model.
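A minimal sketch of the rotation pretext task mentioned above (plain NumPy, not BiaPy code; the image set and sizes are assumptions): the labels are generated from the data itself, so no annotation is needed.

```python
import numpy as np

# Illustrative only: build (input, label) pairs from unlabeled images by
# rotating each image and asking the model to predict the rotation.
rng = np.random.default_rng(0)
images = rng.random((8, 64, 64)).astype(np.float32)  # unlabeled image set

k = rng.integers(0, 4, size=len(images))             # 0, 90, 180 or 270 degrees
rotated = np.stack([np.rot90(im, int(ki)) for im, ki in zip(images, k)])

# (rotated, k) is a self-generated training pair for the pretext task.
```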
In image classification, the goal is to match each input image to its corresponding class. The input is a set of images, and the output is a file (usually a CSV) containing the predicted class of each input image.
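For instance, a prediction file of this kind could be written as follows (stdlib only; the file name, column names and values are assumptions made for the example):

```python
import csv

# Illustrative only: one row per input image with its predicted class.
predictions = [("img_001.tif", 0), ("img_002.tif", 2), ("img_003.tif", 1)]

with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "class"])  # header names are an assumption
    writer.writerows(predictions)
```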
In image-to-image translation, the purpose of the workflow is to translate or map input images to corresponding target images. Often referred to simply as “image-to-image,” this process is versatile and can be applied to various goals, including image inpainting, colorization, and even super-resolution (with a scale factor of ×1). During the training phase, the expected “translated” image for each input image (i.e. the ground truth) also needs to be provided for the model to learn.
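As a rough illustration of such a training pair, here is an inpainting-style example (plain NumPy; the shapes and masked region are assumptions): input and target share the same shape, consistent with the ×1 scale factor.

```python
import numpy as np

# Illustrative only: a paired (source, target) example for image-to-image
# translation, here framed as inpainting.
rng = np.random.default_rng(0)
target = rng.random((128, 128)).astype(np.float32)  # "translated" ground truth
source = target.copy()
source[48:80, 48:80] = 0.0                          # masked region to inpaint

assert source.shape == target.shape  # scale factor x1: shapes match
```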