A Novel Methodology For Crowdsourcing AI Models in an Enterprise

03/22/2021
by   Parthasarathy Suryanarayanan, et al.
IBM

The evolution of AI is advancing rapidly, creating both challenges and opportunities for industry-community collaboration. In this work, we present a novel methodology aiming to facilitate this collaboration through crowdsourcing of AI models. Concretely, we have implemented a system and a process that any organization can easily adopt to host AI competitions. The system allows them to automatically harvest and evaluate the submitted models against in-house proprietary data and also to incorporate them as reusable services in a product.


1 Introduction

From finance to healthcare, many businesses are turning to artificial intelligence (AI) to grow and optimize their operations. However, there is a shortage of AI expertise in the market [Gartner, 2019]. At the same time, AI technologies continue to evolve rapidly [Perrault et al., 2019]. Industries are increasingly relying on crowdsourcing to meet their AI needs [Vaughan, 2018].

Competitions are a popular method for crowdsourcing AI models. To develop product features, major technology firms such as Microsoft [Ronen et al., 2018], Google [Lee et al., 2018] and Facebook [Dolhansky et al., 2019] have turned to AI competitions that often draw a vast number of participants. These competitions are usually hosted on platforms such as Kaggle, CodaLab [CodaLab, 2017; Liang and Viegas, 2015] or EvalAI [Yadav et al., 2019]. Such platforms are often tightly coupled with a proprietary cloud infrastructure, so they cannot be used by organizations whose stringent data regulations require on-premise data retention. In addition, the terms and conditions governing the use of such platforms may not be amenable to the organization seeking to host a competition.

In this work, we propose a novel method and system that any organization can easily adopt to host AI competitions. The system allows them to automatically harvest and evaluate the submitted models against in-house proprietary data and also to incorporate them as reusable services in a product.

2 Proposed solution

Our proposed approach to efficiently crowdsourcing AI models consists of three key ideas.

  1. A public-facing AI Competition Portal operated by the organization, where the key AI needs of its products are presented as competitions. It is important that the portal is hosted in an environment completely managed by the organization.

  2. Each of these competitions is set up using a common source code template, such that a participant’s submission consists of a model plus code to invoke the model through a standard interface. This ensures interoperability, security and the ability to tune the submission source code (a sketch of such an interface follows this list).

  3. A Model Harvester that converts each submitted model into a runnable microservice and also provides a dashboard of all available models.
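
As a concrete illustration, the common template can pin down a small set of entry points that every submission implements. The following minimal sketch is hypothetical (the paper does not publish the template itself); the class and method names are ours.

```python
# Hypothetical submission template (model_template.py). Participants subclass
# CompetitionModel; the fixed signatures are what gives the portal a standard,
# interoperable interface to every submission.
from abc import ABC, abstractmethod
from typing import Any, List


class CompetitionModel(ABC):
    """Base class every submission must subclass."""

    @abstractmethod
    def train(self, train_data_dir: str) -> None:
        """Fit the model on the competition's training split."""

    @abstractmethod
    def predict(self, inputs: List[Any]) -> List[Any]:
        """Return one prediction per input example."""

    @abstractmethod
    def save(self, model_dir: str) -> None:
        """Serialize the trained model binaries for harvesting."""

    @classmethod
    @abstractmethod
    def load(cls, model_dir: str) -> "CompetitionModel":
        """Restore a trained model from its serialized binaries."""
```

Because the training and inference code is itself part of the submission, the organizer can retrain, tune or audit the model in-house rather than relying on predictions alone.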

This is depicted in Figure 1. The rest of this section explains each of these ideas.

Figure 1: Sequence of steps in the proposed method for leveraging crowdsourcing of AI models.

AI Competition Portal: Large organizations have many products serving different markets. A portal with well-defined competitions across product boundaries offers a centralized repository that maps and aligns AI research activities with the business needs of the organization. Organizational AI needs are formulated as competitions and hosted in the portal with a call for public participation from outside researchers and data scientists; external participation is encouraged through monetary rewards. Activities 1 through 5 in Figure 1 illustrate this workflow. Each competition is set up using a code template that participant teams must use for developing their model training and inference routines, so teams submit models together with code, not just predictions. The portal can be implemented using any open-source framework such as EvalAI [Yadav et al., 2019] or CodaLab [CodaLab, 2017].
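
To make the evaluation step concrete, the sketch below shows how an in-house harness might load a templated submission and score it against proprietary data that never leaves the organization’s premises. It assumes the hypothetical CompetitionModel interface sketched earlier; module, file and field names are illustrative.

```python
# Hypothetical evaluation harness, run entirely inside the organization's
# environment so proprietary test data stays on-premise.
import importlib
import json
from pathlib import Path


def evaluate_submission(submission_pkg: str, model_dir: str,
                        private_test_dir: str) -> float:
    """Score one harvested submission on the in-house test split."""
    module = importlib.import_module(submission_pkg)  # participant's code
    model = module.Submission.load(model_dir)         # their CompetitionModel

    examples = json.loads(Path(private_test_dir, "test.json").read_text())
    predictions = model.predict([ex["input"] for ex in examples])

    # Accuracy stands in for whatever metric a given competition defines.
    correct = sum(p == ex["label"] for p, ex in zip(predictions, examples))
    return correct / len(examples)
```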

Model Harvester: Submissions are automatically pulled from the portal into a catalog of models called the Model Registry. The registry also tracks model hyper-parameters, metrics and model binaries over time. A Model Serving component provides utilities that turn each model into a microservice after static code analysis (e.g., vulnerability scans). Product management teams can access these microservices via Swagger APIs and evaluate them further against additional proprietary datasets. An application feature can then be built quickly by assembling these microservices into the production service orchestration. Activities 6 through 9 in Figure 1 illustrate this workflow. The overall Model Harvester system can be built on top of an AI-lifecycle management framework such as MLflow [Zaharia et al., 2018] or ProvDB [Miao and Deshpande, 2018].
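
Assuming the Model Harvester is built on MLflow as suggested, the harvesting step might look like the following sketch: each pulled submission is wrapped as an MLflow pyfunc model and logged to the registry, which then versions its binaries, hyper-parameters and metrics. Team, run and path names are invented for illustration.

```python
# Minimal harvesting sketch on top of MLflow (illustrative names throughout).
import mlflow
import mlflow.pyfunc


class SubmissionWrapper(mlflow.pyfunc.PythonModel):
    """Adapts a templated submission to MLflow's generic pyfunc interface."""

    def load_context(self, context):
        from submission import Submission  # participant's templated code
        self.model = Submission.load(context.artifacts["model_dir"])

    def predict(self, context, model_input):
        return self.model.predict(list(model_input))


with mlflow.start_run():
    # Provenance tracked by the Model Registry over time.
    mlflow.log_param("competition", "emnlp-2020-task")
    mlflow.log_metric("leaderboard_accuracy", 0.87)  # score from the harness
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=SubmissionWrapper(),
        artifacts={"model_dir": "submissions/team42/model"},
        registered_model_name="team42-emnlp2020",
    )

# A product team can later load any registered version behind a microservice:
service = mlflow.pyfunc.load_model("models:/team42-emnlp2020/1")
```

Each registered version could then be exposed as a REST endpoint (for instance via MLflow’s built-in model serving) and fronted by the Swagger-documented microservices of Model Serving.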

3 Conclusion

The methodology described in this work facilitates rapid commercialization of crowdsourced models by providing a streamlined maturity process from problem definition to asset creation. A reference implementation of the system is available in the form of AI Leaderboard [IBM, 2020]. Built on the open-source EvalAI [Yadav et al., 2019] system, AI Leaderboard supports source-code-template-based submissions, and its integrated Model Harvester subsystem is based on MLflow [Zaharia et al., 2018]. The system is currently being used for two academic AI competitions, at EMNLP 2020 and ICDAR 2021, and we plan to use the platform to host industry challenges in the healthcare domain. As future work, we plan to publish an impact study from this crowdsourcing exercise.

The authors would like to acknowledge Ansu Varghese, Abhishek Malvankar, Ching-Huei Tsou, Jian Min Jiang, Jian Wang, Michele Payne and Sreeram Joopudi for their contributions.

References

  • CodaLab (2017) CodaLab. Note: https://github.com/codalab/codalab-competitions Cited by: §1, §2.
  • B. Dolhansky, R. Howes, B. Pflaum, N. Baram, and C. C. Ferrer (2019) The Deepfake Detection Challenge (DFDC) preview dataset. arXiv preprint arXiv:1910.08854. Cited by: §1.
  • Gartner (2019) 2019 CIO Survey: CIOs Have Awoken to the Importance of AI. Note: https://www.gartner.com/document/3897266 Cited by: §1.
  • IBM (2020) AI leaderboard. Note: https://ibm.biz/ai-leaderboard Cited by: §3.
  • J. Lee, W. Reade, R. Sukthankar, G. Toderici, et al. (2018) The 2nd YouTube-8M large-scale video understanding challenge. In Proceedings of the European Conference on Computer Vision (ECCV). Cited by: §1.
  • P. Liang and E. Viegas (2015) CodaLab worksheets for reproducible, executable papers. NIPS 2015. Note: https://nips.cc/Conferences/2015/Schedule Cited by: §1.
  • H. Miao and A. Deshpande (2018) ProvDB: provenance-enabled lifecycle management of collaborative data analysis workflows. IEEE Data Eng. Bull. 41 (4), pp. 26–38. Cited by: §2.
  • R. Perrault, Y. Shoham, E. Brynjolfsson, J. Clark, J. Etchemendy, B. Grosz, T. Lyons, J. Manyika, S. Mishra, and J. Niebles (2019) Artificial Intelligence Index Report 2019. Human-Centered AI Institute, Stanford University. Note: https://hai.stanford.edu Cited by: §1.
  • R. Ronen, M. Radu, C. Feuerstein, E. Yom-Tov, and M. Ahmadi (2018) Microsoft malware classification challenge. arXiv preprint arXiv:1802.10135. Cited by: §1.
  • J. W. Vaughan (2018) Making better use of the crowd: how crowdsourcing can advance machine learning research. Journal of Machine Learning Research 18, pp. 1–46. Cited by: §1.
  • D. Yadav, R. Jain, H. Agrawal, P. Chattopadhyay, T. Singh, A. Jain, S. B. Singh, S. Lee, and D. Batra (2019) EvalAI: towards better evaluation systems for AI agents. arXiv preprint arXiv:1902.03570. Cited by: §1, §2, §3.
  • M. Zaharia, A. Chen, A. Davidson, A. Ghodsi, S. A. Hong, A. Konwinski, S. Murching, T. Nykodym, P. Ogilvie, M. Parkhe, et al. (2018) Accelerating the machine learning lifecycle with MLflow. IEEE Data Eng. Bull. 41 (4), pp. 39–45. Cited by: §2, §3.