Henriques et al.

CycleGAN

Image Analysis
Step 1: Upload your data

Upload Training Images

Drag your file(s) or upload
  • Your file can be in the following format: zip
  • The training data are the images used to train CycleGAN. They should consist of a zipped folder named 'train' containing two subfolders named 'source' and 'target'. Images in the source folder are the original images; images in the target folder should belong to the domain you wish to translate your source images into. Because this method is unpaired, your source and target images do not have to be corresponding pairs. Images should be .png files.
or
Don’t have a file?
Use our demo data to run
Use Demo Data
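The expected upload layout above (a zip whose root folder holds 'source' and 'target' subfolders of .png images) can be built with a short script. This is a minimal sketch, not part of the app itself; the folder paths, file names, and the `make_dataset_zip` helper are all illustrative assumptions.

```python
import os
import tempfile
import zipfile

def make_dataset_zip(source_dir, target_dir, out_zip, root="train"):
    """Pack two image folders into the layout the app expects:
    root/source/*.png and root/target/*.png inside a single zip."""
    with zipfile.ZipFile(out_zip, "w") as zf:
        for sub, folder in (("source", source_dir), ("target", target_dir)):
            for name in sorted(os.listdir(folder)):
                if name.lower().endswith(".png"):
                    zf.write(os.path.join(folder, name), f"{root}/{sub}/{name}")

# Demo with empty placeholder files; a real run would point at folders of
# actual .png images.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "source")
tgt = os.path.join(tmp, "target")
os.makedirs(src)
os.makedirs(tgt)
open(os.path.join(src, "cell_01.png"), "wb").close()
open(os.path.join(tgt, "painted_01.png"), "wb").close()

out = os.path.join(tmp, "train.zip")
make_dataset_zip(src, tgt, out)
names = zipfile.ZipFile(out).namelist()
```

Note that the archive paths are written relative to the 'train' root, so unzipping reproduces exactly the folder structure described in the bullet above.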

Upload Test Images

Drag your file(s) or upload
  • Your file can be in the following format: zip
  • The test data are the images used to test your trained model. They should consist of a zipped folder named 'test' containing two subfolders named 'source' and 'target'. Images in the source folder are the original images; images in the target folder should belong to the domain you wish to translate your source images into. Because this method is unpaired, your source and target images do not have to be corresponding pairs. Images should be .png files.
or
Don’t have a file?
Use our demo data to run
Use Demo Data
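Before uploading, it can be worth checking that a zip actually matches the layout described above. The sketch below is an illustrative check, not an app feature; the `check_dataset_zip` helper and the root-folder name passed to it are assumptions.

```python
import os
import tempfile
import zipfile

def check_dataset_zip(zip_path, root):
    """Return True if the zip contains at least one .png under both
    root/source/ and root/target/."""
    names = zipfile.ZipFile(zip_path).namelist()

    def has(sub):
        prefix = f"{root}/{sub}/"
        return any(n.startswith(prefix) and n.lower().endswith(".png")
                   for n in names)

    return has("source") and has("target")

# Build a tiny well-formed archive in a temp folder to demonstrate the check.
tmp = tempfile.mkdtemp()
p = os.path.join(tmp, "test.zip")
with zipfile.ZipFile(p, "w") as zf:
    zf.writestr("test/source/img_01.png", b"")
    zf.writestr("test/target/img_01.png", b"")

ok = check_dataset_zip(p, "test")
```

The same function works for the training upload by passing the matching root folder name.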
Step 2: Set Parameters
  • 40
  • 100
  • 300
  • Grayscale
Step 3: Complete run profile

CycleGAN is a method that can capture the characteristics of one image domain and learn how these characteristics can be translated into another image domain, all in the absence of any paired training examples. Model saving and GPU access available soon.

Example use case: In silico cell painting, semantic segmentation, background removal, style transfer.

Limitations: If your dataset is paired, use the pix2pix app instead; paired training generally provides more information to the deep learning model and so can perform more effectively.

Technology: Two Generative Adversarial Networks that learn to transform images from the first domain to the second and vice versa.
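The core idea behind training the two generators without paired examples is the cycle-consistency loss: translating an image to the other domain and back should reconstruct the original. The sketch below shows just that term (a full CycleGAN adds adversarial losses from two discriminators); the toy generators `G` and `F` are illustrative stand-ins for the two networks.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency loss: F(G(x)) should reconstruct x (forward
    cycle) and G(F(y)) should reconstruct y (backward cycle)."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> target domain -> back
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> source domain -> back
    return forward + backward

# Toy "generators" that happen to be exact inverses of each other,
# so the cycle loss is zero.
G = lambda x: 2.0 * x
F = lambda y: y / 2.0

x = np.linspace(0.0, 1.0, 5)  # stand-in for a batch of source images
y = np.linspace(0.0, 2.0, 5)  # stand-in for a batch of target images
loss = cycle_consistency_loss(x, y, G, F)
```

During training this term is minimised jointly with the adversarial losses, which is what lets the model learn a domain translation without any corresponding image pairs.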

Citation:
von Chamier, L., Laine, R.F., Jukkala, J. et al. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat Commun 12, 2276 (2021). https://doi.org/10.1038/s41467-021-22518-0
Released:
Nov-16-2022
Previous Job Parameters
Your previous job parameters will show up here so you can keep track of your jobs.
Results
Parameters