Vijay Arya



  • One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

    As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360, an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics. Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also those in the broader literature on explainability. For data scientists and other users of the toolkit, we have implemented an extensible software architecture that organizes methods according to their place in the AI modeling pipeline. We also discuss enhancements to bring research innovations closer to consumers of explanations, ranging from simplified, more accessible versions of algorithms, to tutorials and an interactive web demo that introduce AI explainability to different audiences and application domains. Together, our toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform to incorporate them as they are developed.
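    To make one point in the taxonomy concrete, here is a minimal sketch of a *local, post-hoc* explanation method (a LIME-style linear surrogate fit around a single prediction). This is an illustration of the idea only, not the AIX360 API; the `black_box` model and all function names are hypothetical.

    ```python
    import numpy as np

    # Hypothetical black-box model: predicts 1 when 2*x0 + 0.5*x1 > 1.
    def black_box(X):
        return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(float)

    def local_surrogate(x, predict, n_samples=500, scale=0.1, seed=0):
        """Fit a linear surrogate to the model in a neighborhood of x.
        The fitted weights approximate local feature importance."""
        rng = np.random.default_rng(seed)
        X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
        y = predict(X)
        # Least-squares fit with an intercept column appended.
        A = np.hstack([X, np.ones((n_samples, 1))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef[:-1]  # per-feature importance weights

    # Explain a prediction near the decision boundary (0.9 + 0.1 = 1.0).
    weights = local_surrogate(np.array([0.45, 0.2]), black_box)
    print(weights)  # x0 dominates x1, mirroring the 2.0 vs 0.5 coefficients
    ```

    Other cells of the taxonomy (global explanations, directly interpretable models, data-level explanations) would call for quite different sketches, which is exactly the "one explanation does not fit all" point.
    
    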

    09/06/2019 ∙ by Vijay Arya, et al.


  • Model Extraction Warning in MLaaS Paradigm

    Cloud vendors are increasingly offering machine learning services as part of their platform and services portfolios. These services enable the deployment of machine learning models on the cloud that are offered on a pay-per-query basis to application developers and end users. However, recent work has shown that the hosted models are susceptible to extraction attacks. Adversaries may launch queries to steal the model and compromise future query payments or the privacy of the training data. In this work, we present a cloud-based extraction monitor that can quantify the extraction status of models by observing the query and response streams of both individual and colluding adversarial users. We present a novel technique that uses information gain to measure the model learning rate of users as their number of queries increases. Additionally, we present an alternate technique that maintains intelligent query summaries to measure the learning rate relative to the coverage of the input feature space in the presence of collusion. Both approaches have low computational overhead and can easily be offered as services to model owners to warn them of possible extraction attacks from adversaries. We present performance results for these approaches for decision tree models deployed on the BigML MLaaS platform, using open-source datasets and different adversarial attack strategies.
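    A toy sketch of the underlying idea: treat the deployed decision tree as a partition of the input space, and warn when the entropy of the leaves reached by a user's query stream approaches the tree's maximum leaf entropy. The grid model, `extraction_status` function, and thresholds below are illustrative stand-ins, not the paper's actual monitor.

    ```python
    import numpy as np

    # Toy stand-in for a deployed decision tree: partitions [0,1)^2 into a
    # 4x4 grid of leaves (16 regions), each leaf holding a fixed label.
    GRID = 4

    def leaf_id(x):
        i = min(int(x[0] * GRID), GRID - 1)
        j = min(int(x[1] * GRID), GRID - 1)
        return i * GRID + j

    def extraction_status(queries):
        """Hypothetical monitor metric: entropy of the leaves reached by a
        query stream, normalized by the tree's maximum leaf entropy. Values
        near 1.0 warn that the user has explored most of the model's
        partitions (a stand-in for the information-gain measure)."""
        _, counts = np.unique([leaf_id(q) for q in queries], return_counts=True)
        p = counts / counts.sum()
        observed_entropy = -(p * np.log2(p)).sum()
        max_entropy = np.log2(GRID * GRID)  # uniform coverage of all 16 leaves
        return observed_entropy / max_entropy

    rng = np.random.default_rng(1)
    narrow = rng.uniform(0.0, 0.25, size=(200, 2))  # queries stuck in one leaf
    broad = rng.uniform(0.0, 1.0, size=(200, 2))    # queries spread everywhere
    print(extraction_status(narrow), extraction_status(broad))
    ```

    A monitor of this shape can pool the query streams of suspected colluding users before computing the metric, which is the intuition behind the collusion-aware variant described above.
    
    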

    11/20/2017 ∙ by Manish Kesarwani, et al.


  • Blockchain Enabled Trustless API Marketplace

    There has been an unprecedented surge in the number of service providers offering a wide range of machine learning prediction APIs for tasks such as image classification and language translation, thereby monetizing the underlying data and trained models. Typically, a data owner (API provider) develops a model, often over proprietary data, and leverages the infrastructure services of a cloud vendor for hosting and serving API requests. Clearly, this model assumes complete trust between the API provider and the cloud vendor. However, a malicious or buggy cloud vendor may copy the APIs and offer an identical service, under-report model usage metrics, or unfairly discriminate between different API providers by offering them a nominal share of the revenue. In this work, we present the design of a blockchain-based decentralized trustless API marketplace that enables all the stakeholders in the API ecosystem to audit the behavior of the parties without having to trust a single centralized entity. In particular, our system divides an AI model into multiple pieces and deploys them among multiple cloud vendors, who then collaboratively execute the APIs. Our design ensures that cloud vendors cannot collude with each other to steal the combined model, while individual cloud vendors and clients cannot repudiate their input or model executions.
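    The split-and-commit idea can be sketched as follows: each vendor holds only one piece of the model, executes its stage, and records a hash binding it to its input and output, so no single party holds the full model and no execution can later be repudiated. The vendor functions, ledger structure, and `commit` scheme below are simplified placeholders, not the paper's actual protocol (which must also prevent vendor collusion).

    ```python
    import hashlib
    import json

    # Hypothetical model split: two vendors each hold one stage of a
    # pipeline (simple arithmetic maps standing in for model pieces).
    def vendor_a(x):
        return [2 * v + 1 for v in x]   # piece 1 of the model

    def vendor_b(x):
        return [sum(x)]                 # piece 2 of the model

    def commit(payload):
        """Commitment a vendor would record on-chain: a hash binding the
        vendor to its input and output for this execution."""
        return hashlib.sha256(json.dumps(payload).encode()).hexdigest()

    def run_api(x):
        ledger = []                     # stand-in for the blockchain log
        h = vendor_a(x)
        ledger.append({"vendor": "A", "commit": commit({"in": x, "out": h})})
        y = vendor_b(h)
        ledger.append({"vendor": "B", "commit": commit({"in": h, "out": y})})
        return y, ledger

    result, ledger = run_api([1, 2, 3])
    print(result)  # [15]: vendor_a gives [3, 5, 7], vendor_b sums them
    ```

    Any auditor who later learns a stage's claimed input and output can recompute the hash and check it against the on-chain record, which is the non-repudiation property the abstract describes.
    
    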

    12/05/2018 ∙ by Vijay Arya, et al.
