Conflict and Cooperation: AI Research and Development in terms of the Economy of Conventions
Artificial Intelligence (AI) and its relation to society is increasingly becoming an object of study for sociology and other disciplines. Theories such as the Economy of Conventions (EC) are usually applied in the context of interpersonal relations, but there is still a clear lack of studies on how this and other theories can shed light on interactions between humans and autonomous systems. This work focuses on a preliminary step that is a key enabler for the subsequent interaction between machines and humans: how the processes of researching, designing, and developing AI-related systems reflect different moral registers, represented by conventions within the EC. A better understanding of the conventions guiding advances in AI is a first and necessary step towards understanding the conventions those autonomous systems later reflect in their interactions with society. For this purpose, we develop an iterative tool based on active learning to label a data set from the field of AI and Machine Learning (ML) research and present preliminary results of a supervised classifier trained on these conventions. To further demonstrate the feasibility of the approach, the results are contrasted with a classifier trained on software conventions.
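The abstract does not specify how the iterative labelling tool is built; the following is a minimal sketch of one plausible realisation, assuming uncertainty sampling with a scikit-learn TF-IDF/logistic-regression pipeline. The function names, the `oracle` callback, and the example convention labels are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of an iterative active-learning labelling loop for
# convention classification; model choice and sampling strategy are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def uncertainty_sampling(proba, k):
    """Return indices of the k pooled documents with the lowest top-class probability."""
    return np.argsort(proba.max(axis=1))[:k]


def active_learning_loop(texts, seed_idx, seed_labels, oracle, rounds=5, k=10):
    """Iteratively grow a labelled set of research documents.

    texts       : list of unlabelled documents (e.g. AI/ML paper abstracts)
    seed_idx    : indices of an initial hand-labelled seed set
    seed_labels : convention labels for the seed set (e.g. 'industrial', 'civic')
    oracle      : callable that returns a human-assigned label for a document
    """
    vectorizer = TfidfVectorizer(max_features=20_000, stop_words="english")
    X = vectorizer.fit_transform(texts)

    labelled = dict(zip(seed_idx, seed_labels))
    clf = None
    for _ in range(rounds):
        idx = list(labelled)
        # Retrain the supervised classifier on everything labelled so far.
        clf = LogisticRegression(max_iter=1000).fit(X[idx], [labelled[i] for i in idx])

        pool = [i for i in range(len(texts)) if i not in labelled]
        if not pool:
            break
        proba = clf.predict_proba(X[pool])
        # Query the human annotator on the most uncertain documents.
        for j in uncertainty_sampling(proba, k):
            labelled[pool[j]] = oracle(texts[pool[j]])
    return clf, vectorizer, labelled
```

In this reading, each round retrains the classifier on the current labelled set and routes only the most ambiguous documents back to the human annotator, which is the usual rationale for active learning when hand-labelling moral registers is expensive.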