A method for ethical AI in Defence: A case study on developing trustworthy autonomous systems

by Tara Roberson, et al.

What does it mean to be responsible and responsive when developing and deploying trusted autonomous systems in Defence? In this short reflective article, we describe a case study of building a trusted autonomous system - Athena AI - within an industry-led, government-funded project involving diverse collaborators and stakeholders. Using this case study, we draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches and principles throughout the development of technology at high translation readiness levels.
