Algorithmic accountability will lead to a better world

Algorithms will perpetuate bias unless subjected to a regime of rigorous transparency and accountability.


Algorithms are built on data, and that data records what has already happened. In this sense, algorithms perpetuate the past. This perpetuation is highly scalable and potentially dangerous. Algorithms can be WMDs: Widespread (and important), Mysterious (and opaque) and Destructive.

The cases of algorithms getting it wrong are numerous. Teachers were fired by an algorithm employed to strengthen accountability but which proved to be little more than a random number generator. Algorithms employed in the HR processes of many big companies have proved to be undermined by bias. From healthcare to credit scores, algorithms have been shown to get it wrong.

None of this is the fault of the algorithm per se. The algorithm is not smart; it does not understand the context in which it is being used. If algorithms are failing, it is because, knowingly or unknowingly, their owners and creators have failed to fully consider their implications.


Algorithms can be WMD – widespread, mysterious and destructive.
Cathy O'Neil, Founder, ORCAA, algorithmic auditing company

There are ways to make things better. The first is to identify for whom the algorithm is working and for whom it is failing. The definition of failure should be a broad one. If there is bias in a credit rating, it will harm the credit applicant; equally, it could create imbalances on balance sheets, uneven risk profiles and reputational risk. Big companies in particular have to ask: what will the newspapers print if this algorithm has not worked?
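
To make the "for whom is it failing" question concrete, the sketch below tallies approval and wrongful-denial rates per applicant group for a hypothetical credit algorithm. The group names, records and field names are invented for illustration; they are not drawn from the talk.

```python
from collections import defaultdict

# Hypothetical audit records for a credit decision algorithm:
# (applicant group, model approved?, applicant actually creditworthy?)
records = [
    ("group_a", True, True), ("group_a", False, True),
    ("group_a", True, True), ("group_a", True, False),
    ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]

stats = defaultdict(lambda: {"n": 0, "approved": 0, "wrongly_denied": 0})
for group, approved, creditworthy in records:
    s = stats[group]
    s["n"] += 1
    s["approved"] += approved                          # approvals per group
    s["wrongly_denied"] += (not approved) and creditworthy  # harmful errors

for group, s in sorted(stats.items()):
    print(f"{group}: approval rate {s['approved'] / s['n']:.0%}, "
          f"wrongly denied {s['wrongly_denied'] / s['n']:.0%}")
```

A gap like the one this sketch surfaces, with group_b denied far more often than group_a despite similar creditworthiness, is exactly the kind of broadly defined failure such an audit is meant to catch.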

This conceptualisation provides the basis for a methodology to assess the algorithm and its potential for harm or bias. A practical ethical approach can be found in a matrix cross-referencing stakeholders and their potential concerns. Each cell of this matrix can be assessed for the potential damage it could do and colour coded accordingly. Such an ethical matrix can take account of, say with a credit facility, the equality of access granted to applicants of different skin colour and how this ultimately squares with company profit. An ethical matrix shows customers and stakeholders you care.
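
One way to picture this is as a small grid of stakeholders against concerns, each cell scored for potential damage and mapped to a traffic-light colour. The stakeholders, concerns and scores below are placeholder assumptions for a credit-facility example, not O'Neil's actual matrix.

```python
# A minimal ethical-matrix sketch for a hypothetical credit facility.
# Rows are stakeholders, columns are concerns; each cell holds an
# assessed damage level that maps to a traffic-light colour.
COLOURS = {0: "green", 1: "yellow", 2: "red"}

stakeholders = ["applicants", "shareholders", "regulators"]
concerns = ["equal access", "accuracy", "profitability"]

# Placeholder damage scores (0 = low risk, 2 = high risk), one per cell.
scores = {
    ("applicants", "equal access"): 2,
    ("applicants", "accuracy"): 1,
    ("applicants", "profitability"): 0,
    ("shareholders", "equal access"): 1,
    ("shareholders", "accuracy"): 1,
    ("shareholders", "profitability"): 2,
    ("regulators", "equal access"): 2,
    ("regulators", "accuracy"): 1,
    ("regulators", "profitability"): 0,
}

# Print the colour-coded matrix so the riskiest cells stand out.
header = "".join(f"{c:>15}" for c in concerns)
print(f"{'':>13}{header}")
for s in stakeholders:
    row = "".join(f"{COLOURS[scores[(s, c)]]:>15}" for c in concerns)
    print(f"{s:>13}{row}")
```

In practice the red cells, here applicants' equal access, mark where the audit should focus first.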

Such a matrix is available to those wishing to ensure transparency and fairness in their algorithms. Not all wish to. There are companies that will game the system, as happened with US mortgage approvals around 2005-08. There are problems with bad methodologies, and with secret methodologies, underlying algorithms. And there are algorithms with so many stakeholders in play, such as Facebook's news feed, that it becomes impossible to create an ethical risk matrix.

An ethical matrix shows customers and stakeholders you care.
Cathy O'Neil, Founder, ORCAA, algorithmic auditing company

Nonetheless, those who do engage in the transparency and responsibility of algorithmic accountability will benefit from the long-term trust of their customers and stakeholders.

Summary based on the Swiss Re Institute event "Algorithms for hope": case studies with a positive global impact as we move from human to augmented intelligence, held in partnership with Gottlieb Duttweiler Institute and IBM Research.
