Robust Artificial Intelligence and Robust Human Organizations

11/27/2018 ∙ by Thomas G. Dietterich, et al. ∙ Oregon State University

Every AI system is deployed by a human organization. In high-risk applications, the combined human-plus-AI system must function as a high-reliability organization in order to avoid catastrophic errors. This short note reviews the properties of high-reliability organizations and draws implications for the development of AI technology and the safe application of that technology.




References

  • [1] Bresina, J. L., Morris, P. H. (2007). Mixed-Initiative Planning in Space Mission Operations. AI Magazine, 28. 75-88.
  • [2] Chow, Y., Tamar, A., Mannor, S., and Pavone, M. (2015). Risk-Sensitive and Robust Decision-Making: a CVaR Optimization Approach. Advances in Neural Information Processing Systems (NIPS) 2015.
  • [3] Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Princeton University Press.
  • [4] Scharre, P. (2018). Army of None: Autonomous weapons and the future of war. W. W. Norton.
  • [5] South Wales Police (2018). https://www.south-wales.police.uk/en/advice/facial-recognition-technology/ Accessed November 12, 2018.
  • [6] Weick, K. E., Sutcliffe, K. M., Obstfeld D. (1999). Organizing for High Reliability: Processes of Collective Mindfulness. In R.S. Sutton and B.M. Staw (Eds.), Research in Organizational Behavior, Volume 1 (Stanford: Jai Press, 1999), Chapter 44, pp. 81–123.