Unsupervised image-to-image (I2I) translation aims to learn image conversion from one domain to another using unpaired training datasets from the two domains. Recent works (Zhu et al., 2017a; Liu et al., 2017; Yi et al., 2017; Zhu et al., 2017b; Huang et al., 2018; Lee et al., 2018; Liu et al., 2019; Kim et al., 2020) have shown impressive results using the generative adversarial network (GAN) framework (Goodfellow et al., 2014), with practical applications in areas such as computer vision (Yuan et al., 2018; Lu et al., 2019; Du et al., 2020), medical imaging (Kang et al., 2019; Oh et al., 2020; Kim et al., 2021), and remote sensing (Song et al., 2020; Luppino et al., 2021). However, most image-to-image translation methods such as CycleGAN (Zhu et al., 2017a) still require centrally collected unpaired datasets, which are often difficult to obtain due to privacy and security issues.
For instance, suppose one wants to train a neural network model that transfers photos to Van Gogh-style images, where one client has only photographs while another client (e.g., a museum) has digital copies of artwork by Van Gogh. Conventional image translation approaches require the photos and artwork to be collected centrally as shown in Fig. 1(a), which requires copyright clearance for the artwork and is also vulnerable to data security breaches even under a copyright agreement.
Recently, federated learning (FL) (McMahan et al., 2017) has drawn a lot of interest because it ensures data privacy by not sharing private data. In federated learning, a central server sends the parameters of the global model to multiple clients. Each client trains a local model using its own data and sends its current update to the server. Finally, the server aggregates the local updates to train the global model. FedAvg (McMahan et al., 2017) is a representative algorithm that trains a global model by averaging local updates from clients, and it has been extended to other variants (Smith et al., 2017; Yurochkin et al., 2019; Li et al., 2020; Wang et al., 2020). Moreover, peer-to-peer direct communication protocols without a central server have also been studied (He et al., 2018).
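The aggregation at the heart of FedAvg can be sketched in a few lines (a toy illustration with plain Python dicts standing in for model weights; the function name and structure are ours, not from any FL library):

```python
def fed_avg(local_updates, weights=None):
    """Average per-parameter updates from several clients (FedAvg-style).

    local_updates: list of dicts mapping parameter name -> value.
    weights: optional per-client weights (e.g., proportional to local
    dataset sizes); defaults to a uniform average.
    """
    n = len(local_updates)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    averaged = {}
    for name in local_updates[0]:
        averaged[name] = sum(w * u[name] for w, u in zip(weights, local_updates)) / total
    return averaged

# Two clients report their locally trained parameter values.
client_a = {"conv.w": 1.0, "conv.b": 0.0}
client_b = {"conv.w": 3.0, "conv.b": 2.0}
global_update = fed_avg([client_a, client_b])
```

In practice the weights are often proportional to each client's local dataset size, so that the average matches the gradient of the pooled data.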
Recently, several works (Augenstein et al., 2020; Chen et al., 2020) successfully trained GANs in a federated learning scenario. Unfortunately, the application of federated learning to unsupervised image-to-image translation using CycleGAN is still an open problem. This is because training image-to-image translation between two domains requires access to data from both domains. Specifically, as shown in Fig. 2(a), to translate images from a domain A to a different domain B, the generator G_AB converts images from domain A to domain B so that the discriminator D_B cannot distinguish them from real samples in domain B. Training the discriminator D_B therefore requires both real samples from domain B and fake samples synthesized by the generator G_AB from domain A images. However, in a federated learning setting in which one client (domain A) is not allowed to access data from the other client (domain B) and the central server also never accesses data, the discriminator D_B cannot be trained, since no single party holds both the real domain B samples and the domain A images needed to synthesize fakes.
To address this problem, here we propose a novel federated CycleGAN (FedCycleGAN), which can be trained without sharing local client data. FedCycleGAN is possible thanks to the following key innovation. In contrast to conventional FL, where the same form of the total loss is used across clients, our FedCycleGAN decomposes the total CycleGAN loss into the sum of domain-specific local objectives, each of which a client can compute using only its own data. The clients transmit the gradients of their own objective functions, and the server sums them to obtain the gradient of the total CycleGAN loss. The server thus trains the globally shared model using local gradients from clients without ever accessing private data, thereby protecting privacy (see Fig. 1(b)). The proposed local objective decomposition is also scalable, in the sense that any number of clients can participate in federated CycleGAN training. Specifically, as shown in Fig. 3, each client transmits the local gradient of its domain-specific objective function, and these gradients are averaged at the server to compute the global gradient.
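In symbols (our notation, with A and B denoting the two domains; the precise decomposition is derived in Section 3), the observation is simply the linearity of the gradient:

```latex
% The total CycleGAN loss splits into two domain-specific terms,
\mathcal{L}_{\mathrm{cycleGAN}} \;=\; \ell_A + \ell_B ,
% where $\ell_A$ depends only on domain-$A$ data and $\ell_B$ only on
% domain-$B$ data.  Because differentiation is linear,
\nabla_\theta \mathcal{L}_{\mathrm{cycleGAN}}
  \;=\; \nabla_\theta \ell_A \;+\; \nabla_\theta \ell_B ,
% so each client can compute its own gradient term locally and the
% server only needs to add the received terms together.
```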
Yet another innovation is a switchable architecture that significantly reduces the transmission bandwidth with negligible performance degradation. Specifically, inspired by AdaIN-based switchable CycleGAN (Gu and Ye, 2021; Yang et al., 2021), our framework uses switchable generator and discriminator architectures based on adaptive instance normalization (AdaIN) (Huang and Belongie, 2017) to reduce the transmission overhead. We therefore only need to transmit gradients for a single pair of generator and discriminator instead of two, which significantly reduces the bandwidth requirement. The main contributions of this paper can therefore be summarized as follows:
In contrast to conventional federated learning, which uses the same loss function across all clients, in our FedCycleGAN clients use their own domain-specific loss functions to compute local gradients, which are summed at the server to compute the global gradient. Our experimental results demonstrate that FedCycleGAN achieves comparable and even better performance than non-federated vanilla CycleGAN, thanks to the exact local objective decomposition.
Using the AdaIN-based switching scheme, the transmission overhead for federated learning can be significantly reduced, which makes federated CycleGAN more practical.
2 Related work
Federated learning (FL)
FL is a decentralized learning approach in which local clients train their local models without transmitting data to a central server, and the global model is updated by aggregating local updates from the clients (McMahan et al., 2017). To aggregate local computations from clients, FedAvg (McMahan et al., 2017) updates a global model by averaging local updates. Smith et al. (2017) show that federated learning can be applied to multi-task learning. Yurochkin et al. (2019) propose a probabilistic federated learning approach based on a Bayesian nonparametric framework. FedProx (Li et al., 2020) addresses the heterogeneous nature of federated learning, and FedMA (Wang et al., 2020) improves performance by matching and averaging local updates.
Extending the scope of classical FL, several recent approaches (Augenstein et al., 2020; Chen et al., 2020) have successfully integrated federated learning into the GAN framework. For example, Augenstein et al. (2020) use DP-FedAvg-GAN to train a GAN with differential privacy guarantees for image synthesis. Specifically, as shown in Fig. 4(a), the server in DP-FedAvg-GAN has a shared generator and discriminator that are passed to the clients. Each client updates a local discriminator using its own data and fake images produced by the shared generator, and then sends a local update to the server. Finally, the server updates the global discriminator using the local updates from the clients, and then trains the global generator. This process is repeated until convergence is achieved. GS-WGAN (Chen et al., 2020) employs a gradient-sanitized Wasserstein GAN approach to preserve privacy. As shown in Fig. 4(b), the server holds a centralized generator and transmits only synthesized images. Each client calculates a local update using its own discriminator and data, and then transmits a sanitized local gradient to the server. The server trains the generator using the local gradients from the clients.
CycleGAN for unsupervised image-to-image translation
The goal of unsupervised image-to-image translation is to learn how to translate an image from one domain (A) to a corresponding output image in another domain (B). Fig. 2(a) shows the architecture of CycleGAN for this purpose. Suppose that P_A is the probability distribution of domain A and P_B is that of domain B, and that a and b are images from A and B, respectively. The generator G_AB translates an image in A into an output image in B. The discriminator D_B distinguishes real samples in B from fake samples generated by G_AB from samples in A. Similarly, G_BA is the generator that translates an image in B into a corresponding output in A, and the discriminator D_A distinguishes real images in A from fake images produced by G_BA from images in B.
In CycleGAN, the total loss function consists of an adversarial loss and a cycle-consistency loss. The adversarial loss for the generator G_AB and the discriminator D_B is given by

ℓ_GAN(G_AB, D_B) = E_{b~P_B}[log D_B(b)] + E_{a~P_A}[log(1 - D_B(G_AB(a)))],   (1)

whereas the adversarial loss for G_BA and D_A is:

ℓ_GAN(G_BA, D_A) = E_{a~P_A}[log D_A(a)] + E_{b~P_B}[log(1 - D_A(G_BA(b)))].   (2)
As the adversarial loss alone does not guarantee a one-to-one mapping between an input and an output, a cycle-consistency loss is necessary, which is formulated as follows:

ℓ_cyc(G_AB, G_BA) = E_{a~P_A}[||G_BA(G_AB(a)) - a||_1] + E_{b~P_B}[||G_AB(G_BA(b)) - b||_1].   (3)
The total loss with the adversarial loss and the cycle-consistency loss is then defined as follows:

ℓ_total(G_AB, G_BA, D_A, D_B) = ℓ_GAN(G_AB, D_B) + ℓ_GAN(G_BA, D_A) + λ ℓ_cyc(G_AB, G_BA),   (4)

where λ controls the weight of each component in the total loss. To train an unsupervised image-to-image translation model in the CycleGAN framework, the following minimax problem needs to be solved:

min_{G_AB, G_BA} max_{D_A, D_B} ℓ_total(G_AB, G_BA, D_A, D_B).   (5)
By solving the minimax problem, the generators G_AB and G_BA aim to generate realistic images in order to deceive the discriminators D_B and D_A, respectively, while the discriminators try to distinguish real samples from fake samples synthesized by the generators. Additionally, we often add an identity loss to (4) to enforce that a generator retains an input that already belongs to its target domain (Kang et al., 2019):

ℓ_identity(G_AB, G_BA) = E_{b~P_B}[||G_AB(b) - b||_1] + E_{a~P_A}[||G_BA(a) - a||_1].   (6)
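As a toy numerical illustration of the cycle-consistency term (pure Python, with hypothetical scalar "generators" standing in for the networks): an exact inverse pair gives zero cycle loss, while a mismatched pair does not.

```python
def l1(xs, ys):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical scalar "generators": g_ab maps domain A to B, g_ba maps back.
g_ab = lambda x: 2.0 * x + 1.0
g_ba_good = lambda y: (y - 1.0) / 2.0   # exact inverse of g_ab
g_ba_bad = lambda y: y / 2.0            # forgets the shift

a_samples = [0.0, 1.0, 2.0]

def cycle_loss(g, f, xs):
    """L1 distance between x and f(g(x)), averaged over samples."""
    return l1(xs, [f(g(x)) for x in xs])

loss_good = cycle_loss(g_ab, g_ba_good, a_samples)  # perfect cycle: 0.0
loss_bad = cycle_loss(g_ab, g_ba_bad, a_samples)    # constant offset of 0.5
```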
Note that in DP-FedAvg-GAN, clients have access to both the random noise and the true images, which differs from our federated CycleGAN scenario, in which no client has access to images from both the A and B domains. Although GS-WGAN splits the random noise and the true images between the server and the clients, respectively, the server only holds a generator, whereas the discriminators exist only at the clients. This asymmetric architecture cannot be used for CycleGAN, since every client needs its own generator to translate to the other domain. Therefore, CycleGAN in a federated scenario has been an open problem so far.
3 Federated CycleGAN
3.1 Standard form
As shown in Fig. 4(c), to enable CycleGAN training in a federated setting, each client should have two generators (G_AB and G_BA) and two discriminators (D_A and D_B). Then, the key question is whether a gradient update is possible without accessing the other domain's data. Remarkably, this problem can be addressed with a simple observation. Specifically, note that the CycleGAN loss in (4) can be decomposed into two domain-specific local objectives:

ℓ_total = ℓ_A + ℓ_B,   (7)
where ℓ_A and ℓ_B are local objectives that only use data in the A and B domains, respectively:

ℓ_A = E_{a~P_A}[log D_A(a)] + E_{a~P_A}[log(1 - D_B(G_AB(a)))] + λ E_{a~P_A}[||G_BA(G_AB(a)) - a||_1],   (8)

ℓ_B = E_{b~P_B}[log D_B(b)] + E_{b~P_B}[log(1 - D_A(G_BA(b)))] + λ E_{b~P_B}[||G_AB(G_BA(b)) - b||_1].   (9)
Accordingly, in our FedCycleGAN, instead of using the original CycleGAN loss ℓ_total, the client with data in the A domain uses ℓ_A, whereas the other client in the B domain employs the loss ℓ_B. Each client computes the gradient of its own loss and transmits it to the server, after which the server trains the global models using the local gradients without accessing the data itself. The exact local objective decomposition (7) is indeed the key that enables FedCycleGAN to retain the original CycleGAN performance.
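Because the decomposition (7) holds for every parameter value, the summed client gradients equal the gradient of the total loss. A toy numerical check (finite differences on hypothetical scalar losses, purely illustrative and unrelated to the actual network training code):

```python
def grad(f, theta, eps=1e-6):
    """Central finite-difference gradient of a scalar function."""
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

# Toy domain-specific objectives: ell_a uses only "domain A data" (a = 2.0),
# ell_b only "domain B data" (b = -1.0); theta is a shared scalar parameter.
a, b = 2.0, -1.0
ell_a = lambda th: (th * a - 1.0) ** 2
ell_b = lambda th: (th * b + 3.0) ** 2
total = lambda th: ell_a(th) + ell_b(th)

theta = 0.5
# Clients compute local gradients; the server sums them.
server_grad = grad(ell_a, theta) + grad(ell_b, theta)
# Direct gradient of the total loss, for comparison.
direct_grad = grad(total, theta)
```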
3.2 Beyond two clients
Thanks to the use of the domain-specific loss functions in (8) and (9), the extension to multiple clients is in fact straightforward. Specifically, as shown in Fig. 3, clients holding images in the A domain use ℓ_A as their local training objective, whereas clients with B domain images employ ℓ_B as their loss. Then, the server randomly selects clients from the A and B domains to receive their local gradients. The local gradients from the selected clients are averaged as in (McMahan et al., 2017), and the server uses the averaged gradient to train the networks. Note that training FedCycleGAN with multiple clients can be viewed as training with a stochastic gradient using multiple mini-batches made up of specific domains. This training process is iterated for the specified number of epochs. Thanks to this scalability, our federated CycleGAN enables training unsupervised image-to-image translation models in a multi-client environment without sharing privately sensitive data.
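One aggregation round with client sampling can be sketched as follows (a toy Python sketch; the client count, the gradient dictionaries, and the uniform averaging are illustrative assumptions rather than the paper's implementation):

```python
import random

def server_round(clients, n_select, rng):
    """One aggregation round: sample n_select clients and average the
    local gradients they report (FedAvg-style averaging)."""
    chosen = rng.sample(clients, n_select)
    grads = [c() for c in chosen]  # each client returns a gradient dict
    return {k: sum(g[k] for g in grads) / len(grads) for k in grads[0]}

rng = random.Random(0)
# Two "domain A" clients and two "domain B" clients, each reporting the
# gradient of its own domain-specific objective on a local mini-batch.
clients = [
    lambda: {"w": 1.0},   # domain A client 1
    lambda: {"w": 3.0},   # domain A client 2
    lambda: {"w": -1.0},  # domain B client 1
    lambda: {"w": -3.0},  # domain B client 2
]
avg = server_round(clients, n_select=4, rng=rng)
```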
3.3 Switchable form
Note that the local losses ℓ_A and ℓ_B are functions of the two generators G_AB, G_BA and the two discriminators D_A, D_B, so each client transmits the gradients of four networks. Since generators and discriminators usually have complicated structures for image-to-image translation tasks, the bandwidth requirement for the gradient transmission can be demanding.
Recently, Gu and Ye (2021) proposed a switchable CycleGAN in which a single generator can transfer an image in A to an output image in B, and the same generator can also be switched, by changing the AdaIN code, into a generator that translates an input in B into an output in A. Inspired by this, here we propose a switchable FedCycleGAN in which both the generator and the discriminator have a switchable architecture, as shown in Fig. 2(b). Specifically, we apply different AdaIN codes to a shared discriminator and a shared generator when training on each client's domain.
Formally, our switchable generator and discriminator can be defined as:

G_AB(·) = G(· ; F_G(c_AB)),  G_BA(·) = G(· ; F_G(c_BA)),  D_A(·) = D(· ; F_D(c_A)),  D_B(·) = D(· ; F_D(c_B)),

where F_G and F_D denote the AdaIN code generators for the generator and the discriminator, respectively, each driven by a pre-defined input code index c. Fig. 5(a) shows the architecture of the switchable generator, which is composed of convolution layers, Leaky ReLU, upsampling layers, and AdaIN layers, with the AdaIN code generator consisting of fully connected layers. The AdaIN code generator produces the mean and variance for each feature map in the generator, and the generator can be switched by using these mean and variance vectors. The switchable discriminator consists of convolution layers, Leaky ReLU, and AdaIN layers with its own AdaIN code generator, as shown in Fig. 5(b).
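The AdaIN operation itself is simple: normalize a feature map to zero mean and unit variance, then rescale and shift it with a code-generated (gamma, beta) pair. A minimal pure-Python sketch on a 1-D feature list (our own toy version, not the paper's layer):

```python
def adain(features, gamma, beta, eps=1e-5):
    """Adaptive instance normalization on a single 1-D feature map:
    normalize to zero mean / unit std, then apply the code (gamma, beta)."""
    n = len(features)
    mean = sum(features) / n
    var = sum((f - mean) ** 2 for f in features) / n
    std = (var + eps) ** 0.5
    return [gamma * (f - mean) / std + beta for f in features]

feat = [1.0, 2.0, 3.0, 4.0]
# Switching the code (gamma, beta) re-styles the same shared features:
out_a = adain(feat, gamma=1.0, beta=0.0)   # plain instance normalization
out_b = adain(feat, gamma=2.0, beta=5.0)   # a different "domain code"
```

In the switchable networks, the fully connected code generator outputs one such (gamma, beta) pair per feature map, so changing the input code index changes every AdaIN layer at once.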
Using the switchable generator and discriminator, the local losses for the A and B domains are obtained by simply substituting the switchable networks into (8) and (9), so that all four roles are played by the single shared pair (G, D) under different AdaIN codes.
Table 1: Number of parameters of each network for the standard and switchable forms.
Therefore, in contrast to the standard form of FedCycleGAN, in the switchable form each client only needs to transmit the gradients with respect to the common generator and discriminator, in addition to those of the AdaIN code generators F_G and F_D. The key point is that the code generators F_G and F_D are very light, so the transmission bandwidth can be significantly reduced: in our experiments, the networks of the switchable form have approximately 35 million parameters in total, while the standard form requires approximately 69 million, as shown in Table 1.
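At 32-bit precision, the saving is roughly a factor of two per gradient transmission. A back-of-the-envelope sketch using the parameter counts above (ignoring compression and protocol overhead):

```python
# fp32 gradients: 4 bytes per parameter (compression/overhead ignored).
standard_params = 69e6    # standard form: two generators + two discriminators
switchable_params = 35e6  # switchable form: shared G, D + light code generators

standard_mb = standard_params * 4 / 1e6     # ~276 MB per transmission
switchable_mb = switchable_params * 4 / 1e6  # ~140 MB per transmission
ratio = standard_mb / switchable_mb          # roughly a 2x reduction
```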
The training process of our switchable FedCycleGAN in each training round is described in Fig. 3. First, the central server sends the current parameters of the generator, the discriminator, and the AdaIN code generators to the selected clients. Then, each selected client computes its local stochastic gradient of ℓ_A or ℓ_B using its own real samples and the corresponding fake samples, switching the AdaIN codes as needed. Each client then sends its local gradients to the central server. The central server updates the parameters of the generator, the discriminator, and the AdaIN code generators using the local stochastic gradients from the clients. This process is repeated until the networks converge.
4 Experimental results
To evaluate the performance of our method, we applied it to various style transfer tasks (Zhu et al., 2017a) and to a low-dose computed tomography (CT) denoising task (Kang et al., 2019). We also compared our method with non-federated CycleGAN. Specifically, we generated image-to-image translation results for each task using three different methods: (1) CycleGAN, (2) FedCycleGAN, and (3) switchable FedCycleGAN. For CycleGAN, we trained the networks in a centralized setting by optimizing Eq. (5) with a generator and a discriminator whose architectures are basically the same as in Fig. 5 except for the AdaIN layers (see Appendix). FedCycleGAN was trained using gradient information from two clients without sharing data, as described in Section 3, with the same generator and discriminator used for CycleGAN. For switchable FedCycleGAN, a switchable generator and a switchable discriminator were used, and training was carried out with local gradients from the clients as described in Section 3. All three methods were trained with the same settings, including the identity loss, and used the Adam optimizer (Kingma and Ba, 2014) for 200 epochs. During the first 100 epochs, the learning rate was fixed at 0.0002 and then gradually reduced to 0 over the remaining 100 epochs. PyTorch (Paszke et al., 2019) and an NVIDIA GeForce RTX 3090 were used for the implementation.
Table 2: Quantitative comparison of the three methods on the style transfer tasks and low-dose CT denoising.
Style transfer tasks
We conducted various style transfer tasks such as summer to winter, photo to Van Gogh, and horse to zebra using publicly available datasets from (Zhu et al., 2017a). For the federated learning scenario, we assume two clients, each holding the data from a different class. The summer-to-winter dataset consists of unpaired training sets (1231 summer images and 962 winter images) and test sets (309 summer images and 238 winter images). The photo-to-Van Gogh dataset is composed of unmatched training images (6287 photos and 400 Van Gogh images) and test images (751 photos and 400 Van Gogh images). The horse-to-zebra dataset is divided into a training set (1067 horse and 1334 zebra images) and test images (120 horse and 140 zebra images). Images from each class are unpaired. To train these style transfer tasks, we augmented the images by resizing them to 286×286 pixels and then cropping them to obtain patches of size 256×256 pixels. We also applied a random horizontal flip with a probability of 0.5. The batch size was set to 4.
Low-dose CT denoising
X-ray computed tomography (CT) is one of the most important imaging systems for clinical use. However, the radiation dose from a CT scan increases the cancer risk for patients. A low-dose CT scan reduces the radiation risk, but the resulting high level of noise can be an obstacle to clinical diagnosis, which is why low-dose CT denoising is required. We utilized the AAPM CT dataset used in (Kang et al., 2017, 2018), consisting of projection data from the AAPM 2016 Low Dose CT Grand Challenge (McCollough et al., 2017). All data were completely anonymized. For unsupervised image-to-image translation, we trained our method with unpaired image sets: one client has low-dose CT images while the other has routine-dose CT images. The CT image slices are 512×512 pixels. In the AAPM low-dose CT challenge, low-dose CT was simulated by adding Poisson noise to the projection data corresponding to 25% of the full dose. We used data from eight patients, comprising routine-dose and low-dose images, as the training set, and data from one patient as the test set. The number of training slices is 3236, while 350 slices are used for the test.
In our two-client experiment, one client has 3236 low-dose CT images while the other has 3236 routine-dose CT images from the AAPM CT dataset. We randomly shuffle the order of the data in each training round to keep the image sets unpaired. To train the low-dose CT denoising, we cropped the 512×512-pixel input images to obtain samples of size 128×128 pixels. We also flipped images horizontally and vertically for data augmentation. The batch size was set to 8. For the multi-client experiment, we consider a scenario where four clients participate in the training procedure. Routine-dose data from the eight patients are divided between two clients (1948 and 1646 slices, respectively), while the low-dose CT data from the eight patients are divided between two other clients (1948 and 1646 slices, respectively). The order of the data is shuffled. The other training settings are the same as those of the two-client experiment.
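The patch augmentation described above (random crop plus random flips) can be sketched as follows (a toy pure-Python version on nested lists, shown on a small stand-in image; an actual implementation would use PyTorch transforms):

```python
import random

def random_crop_flip(img, patch, rng):
    """Randomly crop a patch x patch window from a 2-D image (list of
    rows) and flip it horizontally/vertically, each with probability 0.5."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - patch + 1)
    left = rng.randrange(w - patch + 1)
    out = [row[left:left + patch] for row in img[top:top + patch]]
    if rng.random() < 0.5:   # horizontal flip
        out = [row[::-1] for row in out]
    if rng.random() < 0.5:   # vertical flip
        out = out[::-1]
    return out

rng = random.Random(0)
# Small stand-in for a 512x512 slice; the crop size stands in for 128.
image = [[float(r * 16 + c) for c in range(16)] for r in range(16)]
patch = random_crop_flip(image, patch=4, rng=rng)
```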
4.2 Style transfer results
Fig. 6 shows the results of various style transfer tasks. For all tasks, our federated learning frameworks (FedCycleGAN and switchable FedCycleGAN) successfully transfer source images to the target domain, with quality comparable to the results of non-federated CycleGAN. For a quantitative comparison of sample quality, we calculated the Inception Score (IS) (Salimans et al., 2016) and the Fréchet Inception Distance (FID) (Heusel et al., 2017). Table 2 shows IS and FID values of the three methods for the various style transfer tasks. In the summer-to-winter task, FedCycleGAN achieves the best FID score. In the photo-to-Van Gogh task, switchable FedCycleGAN achieves the best FID and IS among the compared methods. In the horse-to-zebra task, FedCycleGAN achieves the best FID score, and switchable FedCycleGAN has better FID and IS than CycleGAN, the non-federated baseline. In summary, our federated methods achieve comparable and even better performance than the non-federated baseline in unsupervised style transfer, without sharing data.
4.3 Low-dose CT denoising
To evaluate the denoising performance, we calculated the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM) (Wang et al., 2004). Table 2 also lists these evaluation metrics for the low-dose CT denoising results on the AAPM CT dataset. Notice that FedCycleGAN achieves the highest PSNR and SSIM values. The results of non-federated CycleGAN and switchable FedCycleGAN show only small differences: 0.045 dB in PSNR and 0.0017 in SSIM. Fig. 7 shows denoising results from the various methods. Our federated methods produce successful denoising results that are close to the routine-dose target domain images and comparable to those of non-federated CycleGAN.
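For reference, PSNR on images with a known peak value is 10·log10(peak² / MSE); a minimal sketch:

```python
import math

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio (in dB) between two flat image arrays."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    return 10.0 * math.log10(peak ** 2 / mse)

# Toy intensities in [0, 1]; every pixel is off by 0.1, so MSE = 0.01.
clean = [0.0, 0.5, 1.0, 0.25]
noisy = [0.1, 0.4, 0.9, 0.35]
value = psnr(clean, noisy)  # 10 * log10(1 / 0.01) = 20 dB
```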
Multiple clients experimental results
We also conducted a multi-client federated CycleGAN experiment in which four clients participate, using the switchable FedCycleGAN architecture. Here, two clients hold non-overlapping routine-dose CT images, while the two other clients hold different low-dose CT images. In each training round, we randomly select N clients to train switchable FedCycleGAN, where N is a fixed constant. Table 3 lists a quantitative comparison for various values of N. Note that as N increases, the performance improves because the number of accessible mini-batches increases. For the largest N, the PSNR value is comparable to that of switchable FedCycleGAN in Table 2, with a better SSIM value than the other methods in Table 2.
(Figure: low-dose CT input and denoising results for N = 1, 2, 3, 4 selected clients.)
5 Conclusion

In this paper, we proposed a federated CycleGAN and a switchable FedCycleGAN for privacy-preserving image-to-image translation. In our framework, the server does not require any private local data and needs only local gradients from the clients. Our experimental results demonstrate that our method can be successfully applied to various unsupervised image translation tasks and shows promising results compared to the non-federated counterpart. This is possible thanks to the exact local objective decomposition, which could be extended to other multi-domain federated translation tasks (Choi et al., 2020). We believe that our framework opens a new direction for applying federated, unsupervised image-to-image translation to real-world situations.
Limitation and negative societal impacts
Although our framework does not require local data from clients, gradient information is necessary to train the image translation model. If the gradient information is exposed, there is a possibility that hidden images of clients can be reconstructed, as studied in (Geiping et al., 2020). This means that the gradient information could be exploited for malicious purposes while our framework trains the networks. To protect the gradient information, techniques with privacy guarantees (e.g., differential privacy (Dwork et al., 2014)) need to be applied in future work.
- Augenstein et al.  S. Augenstein, H. B. McMahan, D. Ramage, S. Ramaswamy, P. Kairouz, M. Chen, R. Mathews, and B. A. y Arcas. Generative models for effective ML on private, decentralized datasets. In International Conference on Learning Representations (ICLR), 2020.
- Chen et al.  D. Chen, T. Orekondy, and M. Fritz. GS-WGAN: A gradient-sanitized approach for learning differentially private generators. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
- Choi et al.  Y. Choi, Y. Uh, J. Yoo, and J.-W. Ha. StarGAN v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8188–8197, 2020.
- Du et al.  W. Du, H. Chen, and H. Yang. Learning invariant representation for unsupervised image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 14483–14492, 2020.
- Dwork et al.  C. Dwork, A. Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211–407, 2014.
- Geiping et al.  J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller. Inverting gradients - how easy is it to break privacy in federated learning? In Advances in Neural Information Processing Systems (NeurIPS), 2020.
- Goodfellow et al.  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), pages 2672–2680, 2014.
- Gu and Ye  J. Gu and J. C. Ye. AdaIN-based tunable CycleGAN for efficient unsupervised low-dose CT denoising. IEEE Transactions on Computational Imaging, 7:73–85, 2021.
- He et al.  L. He, A. Bian, and M. Jaggi. COLA: Decentralized Linear Learning. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
- Heusel et al.  M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems (NeurIPS), volume 30, 2017.
- Huang and Belongie  X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1501–1510, 2017.
- Huang et al.  X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European conference on computer vision (ECCV), pages 172–189, 2018.
- Kang et al.  E. Kang, J. Min, and J. C. Ye. A deep convolutional neural network using directional wavelets for low-dose x-ray CT reconstruction. Medical physics, 44(10):e360–e375, 2017.
- Kang et al.  E. Kang, W. Chang, J. Yoo, and J. C. Ye. Deep convolutional framelet denosing for low-dose CT via wavelet residual network. IEEE transactions on medical imaging, 37(6):1358–1369, 2018.
- Kang et al.  E. Kang, H. J. Koo, D. H. Yang, J. B. Seo, and J. C. Ye. Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Medical physics, 46(2):550–562, 2019.
- Kim et al.  B. Kim, D. H. Kim, S. H. Park, J. Kim, J.-G. Lee, and J. C. Ye. CycleMorph: Cycle consistent unsupervised deformable image registration. Medical Image Analysis, 71:102036, 2021.
- Kim et al.  J. Kim, M. Kim, H. Kang, and K. H. Lee. U-GAT-IT: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. In International Conference on Learning Representations (ICLR), 2020.
- Kingma and Ba  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Lee et al.  H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang. Diverse image-to-image translation via disentangled representations. In Proceedings of the European conference on computer vision (ECCV), pages 35–51, 2018.
- Li et al.  T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith. Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems, volume 2, pages 429–450, 2020.
- Liu et al.  M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
- Liu et al.  M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz. Few-shot unsupervised image-to-image translation. In Proceedings of the IEEE international conference on computer vision (ICCV), pages 10551–10560, 2019.
- Lu et al.  B. Lu, J.-C. Chen, and R. Chellappa. Unsupervised domain-specific deblurring via disentangled representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10225–10234, 2019.
- Luppino et al.  L. T. Luppino, M. Kampffmeyer, F. M. Bianchi, G. Moser, S. B. Serpico, R. Jenssen, and S. N. Anfinsen. Deep image translation with an affinity-based change prior for unsupervised multimodal change detection. IEEE Transactions on Geoscience and Remote Sensing, 2021.
- Mao et al.  X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE international conference on computer vision (ICCV), pages 2794–2802, 2017.
- McCollough et al.  C. H. McCollough, A. C. Bartley, R. E. Carter, B. Chen, T. A. Drees, P. Edwards, D. R. Holmes III, A. E. Huang, F. Khan, S. Leng, et al. Low-dose CT for the detection and classification of metastatic liver lesions: results of the 2016 low dose CT grand challenge. Medical physics, 44(10):e339–e352, 2017.
- McMahan et al.  B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR, 2017.
- Oh et al.  G. Oh, B. Sim, H. Chung, L. Sunwoo, and J. C. Ye. Unpaired deep learning for accelerated MRI using optimal transport driven cycleGAN. IEEE Transactions on Computational Imaging, 6:1285–1296, 2020.
- Paszke et al.  A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pages 8024–8035, 2019.
- Ronneberger et al.  O. Ronneberger, P.Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 9351, pages 234–241, 2015.
- Salimans et al.  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems (NeurIPS), volume 29, 2016.
- Smith et al.  V. Smith, C.-K. Chiang, M. Sanjabi, and A. S. Talwalkar. Federated multi-task learning. In Advances in Neural Information Processing Systems (NeurIPS), volume 30, 2017.
- Song et al.  J. Song, J.-H. Jeong, D.-S. Park, H.-H. Kim, D.-C. Seo, and J. C. Ye. Unsupervised denoising for satellite imagery using wavelet directional CycleGAN. IEEE Transactions on Geoscience and Remote Sensing, 2020.
- Wang et al.  H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, and Y. Khazaeni. Federated learning with matched averaging. In International Conference on Learning Representations (ICLR), 2020.
- Wang et al.  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
- Yang et al.  S. Yang, E. Y. Kim, and J. C. Ye. Continuous conversion of CT kernel using switchable cyclegan with AdaIN. IEEE Transactions on Medical Imaging, 2021.
- Yi et al.  Z. Yi, H. Zhang, P. Tan, and M. Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE international conference on computer vision (ICCV), pages 2849–2857, 2017.
- Yuan et al.  Y. Yuan, S. Liu, J. Zhang, Y. Zhang, C. Dong, and L. Lin. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 701–710, 2018.
- Yurochkin et al.  M. Yurochkin, M. Agarwal, S. Ghosh, K. Greenewald, N. Hoang, and Y. Khazaeni. Bayesian nonparametric federated learning of neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pages 7252–7261. PMLR, 09–15 Jun 2019.
- Zhang et al.  K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE transactions on image processing, 26(7):3142–3155, 2017.
- Zhu et al. [2017a] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (ICCV), pages 2223–2232, 2017a.
- Zhu et al. [2017b] J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems (NeurIPS), pages 465–476. 2017b.
Appendix A
A.1 Algorithm 1: Switchable FedCycleGAN
Algorithm 1 describes the procedure for training a switchable FedCycleGAN. First, the central server initializes the parameters of the switchable generator G, the switchable discriminator D, and the AdaIN code generators (F_G and F_D). In each training round, each client calculates a local gradient using its own data (in the A or B domain) and the current network parameters received from the server, and then transmits the gradient information to the server. GetLocalGradient in Algorithm 1 describes how the local gradient is calculated at each client: the client computes its domain-specific local objective (ℓ_A or ℓ_B) using a batch from its own data, after which a stochastic gradient for each network can be calculated and transmitted to the server. After the clients transmit their local gradient information, the server sums the gradients and updates the network parameters by gradient descent with the chosen learning rate. The training round is repeated up to the total number of rounds (epochs).
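The control flow of Algorithm 1 can be summarized in a short sketch (toy Python mirroring the round structure; `get_local_gradient` here returns a hypothetical analytic gradient of a quadratic stand-in loss rather than real backpropagation):

```python
def get_local_gradient(params, local_batch):
    """Client side: evaluate a domain-specific objective on a local batch
    and return its gradient.  Stand-in loss: (theta - batch_mean)^2."""
    mean = sum(local_batch) / len(local_batch)
    return {k: 2.0 * (params[k] - mean) for k in params}

def train(params, client_batches, lr=0.1, rounds=50):
    """Server side: each round, gather local gradients from all clients,
    sum them, and take a gradient-descent step on the shared parameters."""
    for _ in range(rounds):
        grads = [get_local_gradient(params, batch) for batch in client_batches]
        for k in params:
            params[k] -= lr * sum(g[k] for g in grads)
    return params

# Two clients ("domain A" and "domain B" data); the shared parameter
# converges to the minimizer of the summed local objectives (midpoint 2.0).
params = {"theta": 0.0}
batches = [[1.0, 1.0], [3.0, 3.0]]
trained = train(params, batches)
```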
A.2 Network Architecture
Fig. 9 shows the architecture of the generator for the CycleGAN and FedCycleGAN models. The generator is based on the U-net [Ronneberger et al., 2015] structure and consists of several convolution layers, Leaky ReLU, upsampling layers, and instance normalization layers. For the upsampling layers, we used nearest-neighbor upsampling. Note that we use the same architecture in Fig. 9 for the style transfer tasks, while for the low-dose CT denoising task we added a skip connection between the input and output layers for residual learning, which leads to better performance on denoising problems [Zhang et al., 2017].
The architecture of the discriminator for CycleGAN and FedCycleGAN models is shown in Fig. 10. It consists of several convolution layers, Leaky ReLU, and instance normalization layers.
A.3 Convergence of GAN loss
Similar to the vanilla GAN loss [Goodfellow et al., 2014], the LSGAN loss [Mao et al., 2017] can also be decomposed into domain-specific local objectives, so we used the LSGAN loss in all experiments to stabilize training. Fig. 8 shows the GAN losses during training of the low-dose CT denoising task. Similar to centralized training, our federated CycleGAN training converges to the optimal state where the LSGAN loss is 0.25.
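To see where the 0.25 comes from, note that under the common LSGAN convention with labels 1 (real) and 0 (fake) and a 1/2 factor on each term (our assumption here), the optimal discriminator outputs 1/2 once the generated distribution matches the data distribution:

```latex
% LSGAN discriminator loss (labels 1/0, 1/2 factors); at equilibrium the
% generated distribution matches the data distribution and D^*(x) = 1/2:
\mathcal{L}_D
  = \tfrac{1}{2}\,\mathbb{E}_x\!\left[(D(x)-1)^2\right]
  + \tfrac{1}{2}\,\mathbb{E}_z\!\left[D(G(z))^2\right]
  \;\xrightarrow{\;D \to 1/2\;}\;
  \tfrac{1}{2}\cdot\tfrac{1}{4} + \tfrac{1}{2}\cdot\tfrac{1}{4}
  = \tfrac{1}{4}.
```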
A.4 Additional results
We present additional results for the style transfer tasks and the low-dose CT denoising task. Fig. 11 shows image translation results for summer-to-winter. Fig. 12 shows style transfer results for photo-to-Van Gogh. Fig. 13 shows denoising results for low-dose CT images. Note that our methods produce results comparable to those of non-federated CycleGAN.