XCloud: Design and Implementation of AI Cloud Platform with RESTful API Service

by Lu Xu, et al.

In recent years, artificial intelligence (AI) has attracted much attention in both industry and academia. However, building and maintaining efficient AI systems is quite difficult for many small companies and researchers who are not familiar with machine learning and AI. In this paper, we first examine the difficulties and challenges in building AI systems. We then construct a cloud platform termed XCloud, which provides several common AI services in the form of RESTful APIs. Technical details are discussed in Section 2. The project is released as open-source software and can be easily accessed for later research. Code is available at https://github.com/lucasxlu/XCloud.git.





1. Introduction

Recent years have witnessed many breakthroughs in AI (He et al., 2016; LeCun et al., 2015; Silver et al., 2016), especially in computer vision (Krizhevsky et al., 2012), speech recognition (Amodei et al., 2016) and natural language processing (Johnson et al., 2017). Deep learning models have surpassed humans in many fields, such as image recognition (He et al., 2015) and skin cancer diagnosis (Esteva et al., 2017). Face recognition has been widely used in smartphones (such as iPhone X FaceID, https://support.apple.com/en-us/HT208109) and security entrances. Recommendation systems (such as those of Alibaba, Amazon and ByteDance) help people easily find the information they want. Visual search systems allow us to find products by simply taking a picture with a cellphone (Zhang et al., 2018; Yang et al., 2017).

However, building an effective AI system is quite challenging (Sculley et al., 2015). Firstly, developers should collect, clean and annotate raw data to ensure satisfactory performance, which is time-consuming and takes a lot of money and energy. Secondly, machine learning experts should formulate the problems and develop corresponding computational models. Thirdly, programmers should train models, fine-tune hyper-parameters, and develop an SDK or API for later usage. Bad case analysis is also required if the performance of the baseline model is far from satisfactory. Last but not least, the above procedure should be iterated again and again to meet rapidly changing requirements (see Figure 1). The whole development procedure may fail if any step mentioned above fails.

Figure 1. Pipeline of building production-level AI service

Facing so many difficulties, cloud services (such as Amazon Web Services (AWS, https://aws.amazon.com/), Google Cloud (https://cloud.google.com/), AliYun (https://www.aliyun.com/) and Baidu Yun (https://cloud.baidu.com/)) are becoming increasingly popular in the market. Nevertheless, these platforms are developed for commercial production. Researchers have only limited access to the existing APIs and cannot inspect the inner design of the systems, so it is difficult for them to bridge the gap between research models and production applications.

Aiming at solving the problems mentioned above, in this paper we construct an AI cloud platform termed EXtensive Cloud (XCloud) with common recognition abilities for both research and production. XCloud is freely accessible and open-sourced on GitHub (https://github.com/lucasxlu/XCloud.git) to help researchers build production applications with their proposed models.

2. XCloud

In this section, we give a detailed description of the design and implementation of XCloud. XCloud is implemented based on PyTorch (Paszke et al., 2019) and Django (https://www.djangoproject.com). The machine learning models behind the services are derived from published work (He et al., 2016; Huang et al., 2017; Xu et al., 2018, 2019a, 2019b), which is beyond the scope of this paper. The architecture of XCloud is shown in Figure 2. When a user uploads an image and triggers the relevant JavaScript code, the controller of XCloud receives the HTTP request and calls the corresponding recognition API with the uploaded image as input. XCloud then returns the recognition results in the form of JSON. By leveraging RESTful APIs, developers can easily integrate the existing AI services into any type of terminal (such as PC web, Android/iOS apps and WeChat mini programs). The overall framework of XCloud is shown in Figure 3.
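As a minimal sketch of this request/response cycle: the endpoint path `cv/plant` and the `imgurl` parameter come from the API table later in this paper, while the host address, the helper function and the exact shape of the JSON reply are illustrative assumptions, not the platform's documented contract.

```python
import json
import urllib.parse
import urllib.request

# Assumption: a local XCloud deployment listening on this host.
XCLOUD_HOST = "http://localhost:8000"

def recognize_plant(img_url):
    """POST an image URL to the (hypothetical) plant-recognition endpoint
    and decode the JSON reply."""
    data = urllib.parse.urlencode({"imgurl": img_url}).encode()
    req = urllib.request.Request(f"{XCLOUD_HOST}/cv/plant", data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

# Illustrative shape of a JSON reply (the real field names may differ):
sample_reply = '{"status": 0, "results": [{"category": "rose", "confidence": 0.97}]}'
parsed = json.loads(sample_reply)
top1 = parsed["results"][0]
```

Because the reply is plain JSON, any terminal that can issue an HTTP POST (a browser, a mini program, a mobile app) can consume the service the same way.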

Figure 2. Architecture of XCloud
Figure 3. Framework of XCloud

2.1. Services

XCloud is composed of three modules, namely computer vision (CV), data mining (DM) and research (R). We briefly introduce the services of each module below.

2.1.1. Computer Vision

In the CV module, we implement and train several models to solve the following common vision problems.

  • Plant recognition is popular among plant enthusiasts and botanists. It can be treated as a fine-grained visual classification problem, since samples from different categories often have quite similar appearances. We train ResNet18 (He et al., 2016) to recognize over 998 plants.

  • Plant disease recognition can provide efficient and effective tools for intelligent agriculture. Farmers can identify the disease category and take relevant measures to avoid huge losses. ResNet50 (He et al., 2016) is trained to recognize over 60 plant diseases.

  • Face analysis models can predict several facial attributes from a given portrait image. We take HMTNet (Xu et al., 2019a) as the computational backbone. HMTNet is a multi-task deep model with a fully convolutional architecture that predicts facial beauty score, gender and race simultaneously from a single model. Details can be found in (Xu et al., 2019a).

  • Food recognition is popular among health-diet keepers and is widely used in New Retailing fields. DenseNet169 (Huang et al., 2017) is adopted to train the food recognition model.

  • Skin lesion analysis has gained increasing attention in medical AI. We train DenseNet121 (Huang et al., 2017) to recognize 198 common skin diseases.

  • Pornography image recognition models provide helpful tools to filter sensitive images on the Internet. We integrate this feature into XCloud by training DenseNet121 (Huang et al., 2017) to recognize pornographic images.

  • Garbage classification has recently been a hot topic in China (http://www.xinhuanet.com/english/2019-07/03/c_138195992.htm), as it is an environment-friendly behavior. However, the majority of people cannot tell different types of garbage apart. By leveraging computer vision and image recognition technology, we can easily classify diverse garbage. The dataset is collected from HUAWEI Cloud (https://developer.huaweicloud.com/competition/competitions/1000007620/introduction). We split 20% of the images as the test set and use the remainder as the training set. We train ResNet152 (He et al., 2016), achieving 90.12% accuracy on this dataset.

  • Insect pest recognition plays a vital part in intelligent agriculture. We train DenseNet121 (Huang et al., 2017) on the IP102 dataset (Wu et al., 2019), reaching 61.06% accuracy, which outperforms Wu et al. (Wu et al., 2019) with an improvement of 10.6%.

2.1.2. Data Mining

In the data mining module, we provide a useful toolkit (Xu et al., 2019b) related to an emerging research topic: online knowledge quality evaluation (e.g., Zhihu Live, https://www.zhihu.com/lives/). This API automatically calculates a Zhihu Live's score within a range of 0 to 5, which can provide useful information for customers.

2.1.3. Research

In this module, we provide the source code for training and testing the machine learning models mentioned above. Researchers can use the provided code to train their own models. Furthermore, we also reimplement several computer vision models (such as image quality assessment (Kang et al., 2014; Bosse et al., 2016; Talebi and Milanfar, 2018; Kang et al., 2015), facial beauty analysis (Xu et al., 2018, 2019a) and image retrieval (Liu et al., 2017; Wen et al., 2016)), which makes it easy for users to integrate these features into XCloud APIs.

2.2. Performance Metric

The performance of the above models is listed in Table 1. We adopt accuracy as the metric for the classification services (such as plant recognition, plant disease recognition, food recognition, skin lesion analysis and pornography image recognition), Pearson Correlation (PC) as the metric for the facial beauty prediction task, and Mean Absolute Error (MAE) as the metric for the Zhihu Live quality evaluation task. PC is defined as:


PC = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}

where x_i and y_i represent the predicted score and the groundtruth score, respectively, n denotes the number of data samples, and \bar{x} and \bar{y} stand for the means of x_i and y_i, respectively. A larger PC value represents better performance of the computational model.
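The two metrics above are straightforward to compute; the following sketch implements them directly from their definitions (the sample score lists are made up for illustration):

```python
import math

def pearson_correlation(pred, gt):
    """PC: covariance of predictions and groundtruth divided by the
    product of their standard deviations (up to the common 1/n factor)."""
    n = len(pred)
    xbar = sum(pred) / n
    ybar = sum(gt) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(pred, gt))
    den = (math.sqrt(sum((x - xbar) ** 2 for x in pred))
           * math.sqrt(sum((y - ybar) ** 2 for y in gt)))
    return num / den

def mean_absolute_error(pred, gt):
    """MAE: average absolute deviation between prediction and groundtruth."""
    return sum(abs(x - y) for x, y in zip(pred, gt)) / len(pred)

# Illustrative scores only, not real model outputs.
scores_pred = [3.1, 2.4, 4.0, 1.8]
scores_gt   = [3.0, 2.5, 4.2, 2.0]
pc  = pearson_correlation(scores_pred, scores_gt)
mae = mean_absolute_error(scores_pred, scores_gt)
```

A PC close to 1 means the predicted scores track the groundtruth ranking closely, while a smaller MAE means the absolute score errors are smaller.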

| Service | Model | Dataset | Performance | Result |
| --- | --- | --- | --- | --- |
| Plant Recognition | ResNet18 (He et al., 2016) | FGVC5 Flowers (https://sites.google.com/view/fgvc5/competitions/fgvcx/flowers) | Acc=0.8909 | Plant category and confidence |
| Plant Disease Recognition | ResNet50 (He et al., 2016) | PDD2018 Challenge (https://challenger.ai/dataset/pdd2018) | Acc=0.8700 | Plant disease category and confidence |
| Face Analysis | HMTNet (Xu et al., 2019a) | SCUT-FBP5500 (Liang et al., 2018) | PC=0.8783 | Facial beauty score |
| Food Recognition | DenseNet161 (Huang et al., 2017) | iFood (https://sites.google.com/view/fgvc5/competitions/fgvcx/ifood) | Acc=0.6689 | Food category and confidence |
| Garbage Classification | ResNet152 (He et al., 2016) | HUAWEI Cloud | Acc=0.9012 | Garbage category and confidence |
| Insect Pest Recognition | DenseNet121 (Huang et al., 2017) | IP102 (Wu et al., 2019) | Acc=0.6106 | Insect pest category and confidence |
| Skin Disease Recognition | DenseNet121 (Huang et al., 2017) | SD198 (Sun et al., 2016) | Acc=0.6455 | Skin disease category and confidence |
| Porn Image Recognition | DenseNet121 (Huang et al., 2017) | nsfw_data_scraper (https://github.com/alexkimxyz/nsfw_data_scraper.git) | Acc=0.9313 | Image category and confidence |
| Zhihu Live Rating | MTNet (Xu et al., 2019b) | ZhihuLiveDB (Xu et al., 2019b) | MAE=0.2250 | Zhihu Live score |

Table 1. Performance of Computational Models on Relevant Datasets

2.3. Design of RESTful API

Encapsulating services as RESTful APIs is regarded as standard practice in building cloud platforms. With RESTful APIs, related services can be easily integrated into terminal devices such as PC web, WeChat mini programs, Android/iOS apps and HTML5 pages, without compatibility problems. The RESTful APIs provided are listed in Table 2.

| API | Description | HTTP Method | Param |
| --- | --- | --- | --- |
| cv/mcloud/skin | skin disease recognition | POST | imgraw/imgurl |
| cv/fbp | facial beauty prediction | POST | imgraw/imgurl |
| cv/nsfw | pornography image recognition | POST | imgraw/imgurl |
| cv/pdr | plant disease recognition | POST | imgraw/imgurl |
| cv/food | food recognition | POST | imgraw/imgurl |
| cv/plant | plant recognition | POST | imgraw/imgurl |
| cv/facesearch | face retrieval | POST | imgraw/imgurl |
| dm/zhihuliveeval | Zhihu Live rating | GET | Zhihu Live ID |

Table 2. Definition of RESTful API

2.4. Backend Support

The backend of XCloud is developed with Django (https://www.djangoproject.com/). We follow the MVC design pattern (Leff and Rayfield, 2001), in which the model, view and controller are developed separately and can be easily extended in later development. To record the user information produced on XCloud, we construct two relational tables in MySQL, listed in Table 3 and Table 4, to store the relevant information.

| Attribute | Type | Length | Is Null? |
| --- | --- | --- | --- |
| username | varchar | 16 | False |
| api_name | varchar | 20 | False |
| api_elapse | float | 10 | False |
| api_call_datetime | datetime | - | False |
| terminal_type | int | 3 | False |
| img_path | varchar | 100 | False |

Table 3. API calling details table. The primary key is decorated with underline.
| Attribute | Type | Length | Is Null? |
| --- | --- | --- | --- |
| username | varchar | 16 | False |
| register_datetime | datetime | - | False |
| register_type | int | 11 | False |
| user_organization | varchar | 100 | False |
| email | varchar | 50 | False |
| userkey | varchar | 20 | False |
| password | varchar | 12 | False |

Table 4. User information table. The primary key is decorated with underline.
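The API calling details table can be sketched as a DDL statement. The following uses an in-memory SQLite database as a stand-in for MySQL (SQLite ignores VARCHAR lengths, but they are kept to mirror the schema of Table 3); the table name, the sample row and the omission of a primary-key clause are illustrative choices, not taken from the paper.

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL table described in Table 3.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE api_call_details (
        username          VARCHAR(16)  NOT NULL,
        api_name          VARCHAR(20)  NOT NULL,
        api_elapse        FLOAT        NOT NULL,
        api_call_datetime DATETIME     NOT NULL,
        terminal_type     INT          NOT NULL,
        img_path          VARCHAR(100) NOT NULL
    )
""")
# Record one hypothetical API call.
conn.execute(
    "INSERT INTO api_call_details VALUES (?, ?, ?, ?, ?, ?)",
    ("alice", "cv/plant", 0.018, "2019-11-01 10:00:00", 1, "/data/upload/1.jpg"),
)
row = conn.execute("SELECT api_name, api_elapse FROM api_call_details").fetchone()
```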

In addition, we provide a simple and easy-to-use script to convert the original PyTorch models to TensorRT (https://developer.nvidia.com/tensorrt) models for faster inference. TensorRT is a platform for high-performance deep learning inference; it includes an inference optimizer and a runtime that delivers low latency and high throughput for deep learning inference applications. With TensorRT, we are able to run DenseNet169 (Huang et al., 2017) at 97.63 FPS on two 2080 Ti GPUs, which is significantly faster than the native PyTorch inference engine (29.45 FPS).
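FPS figures like those above are typically obtained by timing repeated single-image inferences after a warm-up phase. A framework-agnostic sketch of such a harness follows; `dummy_infer` is a stand-in for a real forward pass (e.g. a TensorRT engine execution), and the iteration counts are arbitrary.

```python
import time

def measure_fps(infer, n_iters=200, warmup=20):
    """Average frames-per-second of a single-image inference callable."""
    for _ in range(warmup):          # warm-up iterations are excluded from timing
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Stand-in for a real model forward pass.
def dummy_infer():
    sum(i * i for i in range(1000))

fps = measure_fps(dummy_infer)
```

Excluding warm-up iterations matters in practice, since the first calls to a GPU engine include one-time setup costs that would skew the average.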

2.5. Extensibility

As indicated by the name of XCloud (EXtensive Cloud), it is quite easy to integrate new abilities. Apart from using the existing AI technology provided by XCloud, developers can easily build their own AI applications by referring to the model training code in the research module (https://github.com/lucasxlu/XCloud/tree/master/research). The developer only needs to prepare and clean a dataset; after training a model, the new AI interface is integrated into XCloud simply by writing a new controller class and adding a new Django view.
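The controller-plus-view pattern can be sketched in a framework-agnostic way. Everything below is hypothetical (the class, the stubbed prediction and the dict-shaped request); in the real project the view would be a Django view and the controller would invoke a trained network.

```python
import json

class GarbageClassifierController:
    """Hypothetical controller wrapping a trained model (stubbed here)."""
    def predict(self, image_bytes):
        # A real controller would preprocess the image and run the network;
        # a fixed answer stands in for the model output in this sketch.
        return {"category": "recyclable", "confidence": 0.91}

controller = GarbageClassifierController()

def garbage_view(request):
    """Minimal view: take an uploaded image, return recognition results as JSON."""
    result = controller.predict(request.get("imgraw", b""))
    return json.dumps({"status": 0, "results": [result]})

# Simulated request carrying raw image bytes under the "imgraw" parameter.
reply = json.loads(garbage_view({"imgraw": b"\x89PNG..."}))
```

Because every service follows this same controller/view shape, registering a new capability amounts to one new class and one new URL route.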

2.6. API Stress Testing

Performance and stability play key roles in a production-level service. To ensure the stability of XCloud, Nginx (http://nginx.org/) is adopted for load balancing. In addition, we use JMeter (https://jmeter.apache.org/) to stress-test all APIs provided by XCloud. The results of stress testing can be found in Table 5.

cv/mcloud/skin 16 20 0
cv/fbp 25 36 0
cv/nsfw 16 21 0
cv/pdr 16 23 0
cv/food 17 23 0
cv/plant 18 25 0
dm/zhihuliveeval 5 8 0
Table 5. Stress Testing Results on NVIDIA 2080TI GPU

From Table 5 we can conclude that the performance and stability of XCloud are quite satisfactory under the current software and hardware conditions; we believe the performance could be further improved with stronger hardware. The test environment with 2080 Ti GPUs and Intel Xeon CPUs is enough to support 20 QPS (queries per second). By deploying XCloud on your own machine and running the server, you will see the homepage shown in Figure 4.

Figure 4. Homepage of XCloud
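What a stress-testing tool like JMeter measures (latency and error counts under concurrent load, as reported in Table 5) can be approximated in a few lines. In this sketch `fake_endpoint` is a local stand-in for one HTTP request, and the worker and request counts are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_endpoint():
    """Stand-in for one HTTP request to an XCloud API."""
    time.sleep(0.005)   # pretend the service takes ~5 ms
    return 200

def timed_call(_):
    start = time.perf_counter()
    status = fake_endpoint()
    return (time.perf_counter() - start) * 1000.0, status  # latency in ms

# 20 concurrent "users" issuing 100 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(timed_call, range(100)))

latencies = sorted(ms for ms, _ in results)
errors = sum(1 for _, status in results if status != 200)
avg_ms = sum(latencies) / len(latencies)
p95_ms = latencies[int(0.95 * len(latencies))]
```

Average latency, a high percentile (e.g. the 95th) and the error count are the three numbers that matter most when judging whether a service can sustain a target QPS.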

3. Conclusion and Future Work

In this paper, we construct an AI cloud platform with high performance and stability that provides common AI services in the form of RESTful APIs, to ease the development of AI projects. In future work, we will integrate more services into XCloud and develop better models with advanced performance.


  • [1] D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, Q. Cheng, G. Chen, et al. (2016) Deep speech 2: end-to-end speech recognition in english and mandarin. In International conference on machine learning, pp. 173–182. Cited by: §1.
  • [2] S. Bosse, D. Maniry, T. Wiegand, and W. Samek (2016) A deep neural network for image quality assessment. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 3773–3777. Cited by: §2.1.3.
  • [3] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), pp. 115. Cited by: §1.
  • [4] K. He, X. Zhang, S. Ren, and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034. Cited by: §1.
  • [5] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, 1st item, 2nd item, 7th item, Table 1, §2.
  • [6] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: 4th item, 5th item, 6th item, 8th item, §2.4, Table 1, §2.
  • [7] M. Johnson, M. Schuster, Q. V. Le, M. Krikun, Y. Wu, Z. Chen, N. Thorat, F. Viégas, M. Wattenberg, G. Corrado, et al. (2017) Google's multilingual neural machine translation system: enabling zero-shot translation. Transactions of the Association for Computational Linguistics 5, pp. 339–351. Cited by: §1.
  • [8] L. Kang, P. Ye, Y. Li, and D. Doermann (2014) Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1733–1740. Cited by: §2.1.3.
  • [9] L. Kang, P. Ye, Y. Li, and D. Doermann (2015) Simultaneous estimation of image quality and distortion via multi-task convolutional neural networks. In 2015 IEEE international conference on image processing (ICIP), pp. 2791–2795. Cited by: §2.1.3.
  • [10] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • [11] Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436. Cited by: §1.
  • [12] A. Leff and J. T. Rayfield (2001) Web-application development using the model/view/controller design pattern. In Proceedings fifth ieee international enterprise distributed object computing conference, pp. 118–127. Cited by: §2.4.
  • [13] L. Liang, L. Lin, L. Jin, D. Xie, and M. Li (2018) SCUT-fbp5500: a diverse benchmark dataset for multi-paradigm facial beauty prediction. In 2018 24th International Conference on Pattern Recognition (ICPR), pp. 1598–1603. Cited by: Table 1.
  • [14] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song (2017) SphereFace: deep hypersphere embedding for face recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.3.
  • [15] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: §2.
  • [16] D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, M. Young, J. Crespo, and D. Dennison (2015) Hidden technical debt in machine learning systems. In Advances in neural information processing systems, pp. 2503–2511. Cited by: §1.
  • [17] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of go with deep neural networks and tree search. Nature 529 (7587), pp. 484. Cited by: §1.
  • [18] X. Sun, J. Yang, M. Sun, and K. Wang (2016) A benchmark for automatic visual classification of clinical skin disease images. In European Conference on Computer Vision, pp. 206–222. Cited by: Table 1.
  • [19] H. Talebi and P. Milanfar (2018) Nima: neural image assessment. IEEE Transactions on Image Processing 27 (8), pp. 3998–4011. Cited by: §2.1.3.
  • [20] Y. Wen, K. Zhang, Z. Li, and Y. Qiao (2016) A discriminative feature learning approach for deep face recognition. In European conference on computer vision, pp. 499–515. Cited by: §2.1.3.
  • [21] X. Wu, C. Zhan, Y. Lai, M. Cheng, and J. Yang (2019) IP102: a large-scale benchmark dataset for insect pest recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8787–8796. Cited by: 8th item, Table 1.
  • [22] L. Xu, H. Fan, and J. Xiang (2019) Hierarchical multi-task network for race, gender and facial attractiveness recognition. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 3861–3865. Cited by: 3rd item, §2.1.3, Table 1, §2.
  • [23] L. Xu, J. Xiang, Y. Wang, and F. Ni (2019) Data-driven approach for quality evaluation on knowledge sharing platform. arXiv preprint arXiv:1903.00384. Cited by: §2.1.2, Table 1, §2.
  • [24] L. Xu, J. Xiang, and X. Yuan (2018) CRNet: classification and regression neural network for facial beauty prediction. In Pacific Rim Conference on Multimedia, pp. 661–671. Cited by: §2.1.3, §2.
  • [25] F. Yang, A. Kale, Y. Bubnov, L. Stein, Q. Wang, H. Kiapour, and R. Piramuthu (2017) Visual search at ebay. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2101–2110. Cited by: §1.
  • [26] Y. Zhang, P. Pan, Y. Zheng, K. Zhao, Y. Zhang, X. Ren, and R. Jin (2018) Visual search at alibaba. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 993–1001. Cited by: §1.