This week we’re wondering how an algorithm might be able to explain itself. We’re joined by David Watson, a doctoral candidate at the Oxford Internet Institute. He focuses on the epistemological foundations of machine learning and was previously a data scientist at Queen Mary’s Centre for Translational Bioinformatics.
Previously we’ve spoken about the ethics of different automated systems making decisions, whether those decisions relate to policing, healthcare, justice or finance. But how can we understand such a decision? How can we ensure it was fair and unbiased? Explainability has both a legal and a technical aspect. The legal aspect asks how we audit systems and hold organisations accountable for the algorithms they build. The technical aspect asks how we build explainability into our systems in the first place.
Links mentioned whilst we chatted
We talked about FAT/ML – the community that looks into fairness, accountability and transparency in machine learning. Here is their website.
We also spoke about Sandra Wachter, who does loads of work in this area. Her Twitter can be found here.