By Gunther Teubner and Anna Beckers
Is a uniform liability regime required for algorithmic misconduct, or should fragmentation along sectoral rules prevail? The article argues for a middle path between the Scylla of one-size-fits-all and the Charybdis of situationism. To ground an appropriate diversity of liability regimes, it draws on a typology of machine behavior developed in IT studies, together with sociological and philosophical theories that identify the foundations of three emerging socio-legal institutions: (1) the personification of non-human actors, (2) the human-machine association as an emergent social system with the qualities of a collective actor, and (3) distributed cognition in the interconnectivity of algorithms.