Bias Insurance in Artificial Intelligence

At Talla, we've had a lot of interesting overtures from large enterprises and have been thinking about the requirements to sell to them.  Big companies will almost always pay more to offset liability, and they expect even small startups to have good data access and data privacy processes in place.  This led to a lively discussion about A.I. and bias, and whether we could be held legally liable for any discriminatory biases in our algorithms.

Issues like this can sometimes slow the progress of innovation.  Part of the reason startups often best large companies isn't that the large companies miss the opportunities.  They often see them but can't pursue them because they aren't willing to wade into murky legal territory.

This made me wonder if we will see a business model emerge where a third-party testing house bears the risk of algorithmic bias.  How would it work?  Say you are BigCo and you want to put out a product that uses machine learning, but you can't fully predict the outcomes of using the product, and you don't know exactly what data it may be exposed to once out in the wild.  You feel good that it won't end up like Tay, but all it takes is one person to sue you claiming your algorithm is biased, and the odds are high that you'll have to settle.  So you turn to a third party who indemnifies you.

This third party would be AlgorithmCertCo, and for a very high initial fee plus an ongoing yearly re-evaluation fee, it will certify your app to be free from bias.  It would run tests on specially compiled data sets that provide "proof" that your algorithms don't exhibit whatever kind of bias might be legally harmful for your potential use case.  Think of it like a credit rating for algorithms.
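To make the idea concrete, here is a minimal sketch of what one such certification check might look like, assuming a simple demographic-parity test run on a curated data set.  The function name, threshold, and toy data below are hypothetical illustrations, not any real certifier's method:

```python
import numpy as np

def demographic_parity_gap(predictions, protected_attribute):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: array of 0/1 model outputs on the certification test set.
    protected_attribute: array of 0/1 group labels (e.g., two gender groups).
    """
    rate_a = predictions[protected_attribute == 0].mean()
    rate_b = predictions[protected_attribute == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical certification rule: pass if the gap is under a chosen threshold.
THRESHOLD = 0.05  # arbitrary example value; a real certifier would set this per use case

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # toy model outputs
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # toy group labels

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f} -> {'PASS' if gap < THRESHOLD else 'FAIL'}")
```

A real AlgorithmCertCo would presumably run many such metrics, across many protected attributes, tuned to whatever counts as legally harmful bias for the customer's use case.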

There is currently a small field of machine learning called adversarial machine learning, which looks at ways to hack, break, or fool neural networks and other learning algorithms.  It is a short step from there to probing neural nets in ways that reveal bias across attributes like gender, race, or sexual orientation.  Now if an algorithm passes these tests, BigCo can stamp it "bias free" and, if someone sues for bias, AlgorithmCertCo is on the hook.  And companies will pay a lot of money for that.
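In that adversarial spirit, one hypothetical probe is a counterfactual test: flip only the protected attribute in each input and measure how often the model's decision changes.  A minimal sketch, with a deliberately biased stub model standing in for the system under test (all names here are invented for illustration):

```python
import numpy as np

class StubModel:
    """Stand-in for the model under test; any object with .predict() works."""
    def predict(self, X):
        # Toy rule that deliberately (and unfairly) uses column 0,
        # the protected attribute, in its decision.
        return (X[:, 0] + X[:, 1] > 1).astype(int)

def counterfactual_flip_rate(model, X, protected_col=0):
    """Fraction of inputs whose prediction changes when only the protected
    attribute (assumed binary, in column protected_col) is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return np.mean(model.predict(X) != model.predict(X_flipped))

X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])  # toy inputs: [protected, feature]
rate = counterfactual_flip_rate(StubModel(), X)
print(f"Predictions changed on {rate:.0%} of inputs when the protected attribute flipped")
```

A model that truly ignored the protected attribute would score zero here; the stub above flips on half its inputs, which is exactly the kind of result AlgorithmCertCo would refuse to certify.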