The dead may well outnumber the living on Facebook within the next five decades. These people could be tagged in a photo, sent a friend request or wished a ‘happy birthday’. What kind of rules should be set here? Should your account be deactivated? Can someone else claim your username after you die?
So there are platform-specific issues around death. But there is also a whole host of services cropping up which Carl calls ‘the digital afterlife industry’. These include information management services, which help families cope with how digital assets (I assume photographs, messages and the like) are handled after death. Then we have posthumous messaging services. The company might send you an email, and if you don’t reply, they assume you’ve died. You can then arrange for other people to get specific messages when you’re dead. Online memorial services commemorate the deceased’s life in many interesting ways. ‘Re-creation services’ look to recreate the person who has died using their digital footprint.
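The posthumous messaging services described above are essentially dead-man's switches. A toy sketch of that logic, with an invented 90-day grace period (real services will use their own check-in schedules):

```python
from datetime import datetime, timedelta, timezone

# Toy dead-man's-switch check: ping the user periodically; if their last
# confirmed reply is older than the grace period, assume they have died
# and release the messages they prepared for others.
GRACE_PERIOD = timedelta(days=90)  # invented value for illustration

def should_release_messages(last_reply_at, now=None):
    """True once the user has been silent for longer than the grace period."""
    now = now or datetime.now(timezone.utc)
    return now - last_reply_at > GRACE_PERIOD

# A user who replied 10 days ago is presumed alive; 120 days of silence
# triggers the switch.
recently_active = datetime.now(timezone.utc) - timedelta(days=10)
long_silent = datetime.now(timezone.utc) - timedelta(days=120)
```
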
Carl Öhman is completing his doctorate at the Oxford Internet Institute. His research looks at the ethical challenges of commercially managing ‘digital human remains’ – the data left behind by deceased users on the internet. He is an expert when it comes to digital death.
Sam Johnston is an ex-technology consultant who is writing a book about freedom of speech. I’m hoping to recruit him to become a regular twaaat.
Interesting articles mentioned in the podcast
Carl’s research can be found here in Nature, and the study he mentioned on the growing number of dead people online can be found here. If you’re after something a bit more academic then take a look at his paper here.
If you liked the idea of bloody others and non-bloody others, please check out: are you in a relationship with a machine?
Please talk to me and leave a nice review
Email me or contact me – alice [at] twaaats [dot] com or @alicelthwaite on Twitter. Sam is @samuelbjohnston and Carl can be stalked on the internet (his words, my paraphrase).
I really appreciate the emails I get from you guys. And also that the listener body seems to be growing, and I have no idea why. So thanks for spreading the word!
Today we’re talking about facial recognition systems. These are technologies which can recognise and analyse your face. Used both for identification (e.g. tagging you in Facebook photos, opening security doors with restricted access) and for reading emotions (e.g. this film makes you happy, this product makes you sad), they are becoming more and more common, perhaps without you even noticing.
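The identification use-case above typically works by turning each face into a numeric ‘embedding’ and finding the closest known match. A minimal sketch, with invented 3-dimensional embeddings (real systems use hundreds of dimensions produced by a neural network):

```python
import math

# Hypothetical database of known people and their face embeddings.
KNOWN_FACES = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

def identify(embedding, threshold=0.8):
    """Return the best-matching known person, or None if nobody is close enough."""
    best_name, best_score = None, 0.0
    for name, known in KNOWN_FACES.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

match = identify([0.88, 0.12, 0.28])   # very close to "alice"
no_match = identify([0.0, 0.0, 1.0])   # not close to anyone
```

The `threshold` is where bias creeps in: if the embedding model represents some groups of faces less accurately, their similarity scores sit closer to the cut-off and misidentifications become more likely.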
I’m joined by Dr. Lisa Murphy, a clinical fellow at Public Health England, who wants to use technology to improve population health, clinical care and NHS operations. She is also a fellow at Newspeak House, a community in East London which we all make fairly regular use of. I’m also with Rula Awad, the founder of AIPoli. She is an advocate for information and algorithmic transparency, particularly when it comes to political and social content. Rula has built facial recognition systems in the past, and so can give us insider knowledge on how bias is perpetuated throughout these systems.
Facial Recognition and Medicine
Diagnosing genetic syndromes – read more here.
Read this medical journal article about using facial recognition systems to predict cardiovascular disease.
Shopping and Marketing
We mentioned an article on how shops are using facial recognition to improve their marketing, product placement and advertising. Read more here.
Rula mentioned the Gender Shades study. Read more about it here.
We still find bias in these algorithms – in January 2019 it was found that facial recognition systems still struggle to identify darker-skinned and female faces. Read more here.
Wanna contact us?
Email me or contact me – alice [at] twaaats [dot] com or @alicelthwaite on Twitter. Lisa is @LisaMurphy107 and Rula can be found here @rula_awad.
Thanks for all your continued support! Go ooonnn give us a nice review on your favourite podcasting network OR just tell all your friends whenever you see them.
We interact a lot with machines. Whether we’re using our mobile phones, giving Alexa commands or even playing with a (sex) robot, we’re now giving these devices more intimate information than ever before. In a way, this means that we might feel like we are in a private relationship with them, but the reality is that a lot of corporations hold this data.
But we’ve always had a relationship with machines. We’ve found dildos that are 28,000 years old, we’ve fantasised about cars and statues, and sometimes our social status depends on the machines we own and interact with. So what really has changed?
For this episode I’m joined by Tulsi Parida and Nika Mahnič. Tulsi cares about reducing digital inequality and promoting responsible, inclusive technology. She has worked in startups in India and the States that look at the intersection of literacy and tech. Tulsi recently completed an MSc at the Oxford Internet Institute, where she studied mobile learning in emerging markets through a gender and political economy lens, and is currently pursuing an MBA at Saïd Business School. Nika is an interdisciplinary researcher, writer and dancer. She wrote her BA thesis on sex robots at the Faculty of Social Sciences, University of Ljubljana, and obtained her MA in Big Data at King’s College London. Her MA thesis focused on the first proactive companion robot and manager of IoT devices employing emotional artificial intelligence. She is a co-founder of the Danes je nov dan / Today is a new day institute and an activist at the Campaign Against Sex Robots.
Useful links for the podcast
Here are some interesting articles and videos from Nika. Watch her talk about sex robots here, and read her articles on ‘bloody others’ here.
We spoke about Alexa and misogyny in the episode. There are a couple of articles on this listed below, but I couldn’t find anything about whether using Alexa the machine increases misogyny towards real women. Consequently, I’m loath to say this is true. If you have any more information please shoot it across!
Here’s a little pic of us recording the podcast. My flat is quite light and so we look dark in the foreground. But it’s still nice to ‘know’ that we might be real people and not machines 🙂
Spread the word!
This is a new series so please do help us spread the word that we’re back! Feel free to give us a 5* review on your platform of choice. Email me or contact me – alice [at] twaaats [dot] com or @alicelthwaite on Twitter.
Til next time!
WE’RE BACK! HAPPY NEW YEAR! This week I’m joined by two very special guests – Jazza John and James Young. We’re talking about gaming and data collection with a focus on Netflix’s latest Black Mirror episode, entitled Bandersnatch.
Last year we woke up to the consequences of Facebook collecting huge amounts of data about us. So for the first episode of the new series I wanted to take a look at the data collection that happens within games and games companies. Whereas 20 years ago, all your gaming data was stored on a local drive attached to your PlayStation or Nintendo, today that data is fed back to the gaming industry. Netflix launched a new game over Christmas as part of the new Black Mirror series called “Bandersnatch”. Should we be concerned about whether this behavioural data will be fed into their recommendation algorithm? Or instead is the biggest concern about the creativity of game designers? Will data collection mean that there are fewer innovative ideas and games as every developer seeks to make the next blockbuster?
James Young is a documentary maker. His recent documentaries have appeared on BBC3: Can Robots Love Us? and Sex Robots & Us: The Future of Sex. Jazza John is a YouTuber, podcaster and professional internet show-off. We all know a fair bit about technology and love a good game.
Interesting links mentioned in “Data, Games and Bandersnatch”
My article in Quartz, “Black Mirror isn’t just predicting the future – it’s causing it”, pretty much summarises my point of view on Netflix and Bandersnatch 🙂
We talked a bit about #Gamergate and narrative-driven gaming. My favourite article on Gamergate is in Vox and written by Todd VanDerWerff – check it out here.
One of my favourite mobile and console games is called Kingdom. Check it out.
Last year we recorded an episode with Jazza on Cambridge Analytica and the scandal that erupted. Check out that ep here.
There is NO LINK between playing violent video games and increased real-life violence
This is a really important point, and I feel we didn’t hammer this home enough in the episode. For more information check out the Oxford Internet Institute’s research here: “Poor behaviour ‘linked to time spent gaming not types of games'”
Tell all your friends and family and enemies about us
This is a new series so please do help us spread the word that we’re back! Feel free to give us a 5* review on your platform of choice, drop me a line telling me you love me (alice [at] twaaats [dot] com or @alicelthwaite on Twitter) or share this ep online.
Til next time!
In these last two episodes of TWAAATS we interview each other about our specialist subject areas. In this episode, Andrew Strait chats to Alice Thwaite about her work on echo chambers. This is an area that Alice has spent a number of years researching, through her work on The Echo Chamber Club. She came to the Oxford Internet Institute with the goal of updating and adapting the theory associated with echo chambers, since previous incarnations had a number of faults. For example, no one seemed to differentiate between groups that are different, groups that disagree, and groups that are polarised. No one had worked out the link between access to information and polarisation. Listen now to the conclusions she reached.
Get in touch
Email Alice here, and Andrew here. We can also be found on Twitter: @alicelthwaite and @agstrait.
To end this series the TWAAATS are going to chat with each other about our specialist subject areas. First up is Andrew who specialises in content moderation. Andrew wanted to understand how content is moderated on social media platforms. How do companies choose which comments to remove and which articles to block from the site? In other words, how do companies ensure that the content they show to users is not harmful? He interviewed a variety of different companies that specialise in moderation to learn more. Here he details his findings…
Email Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.
This week we’re joined by Carl Miller, who is the research director of the Centre for the Analysis of Social Media at Demos. His book, “The Death of the Gods: The New Global Power Grab”, is out and we wanted to speak with him about the key ideas he writes about.
You can buy the book on Amazon here.
We’re so sad to say that Andrew is leaving the show, and so we are ending Season 1. He is moving on to a great new role in a fab new company and won’t have the time to continue TWAAATing. I have to say that he’s been such a great co-host, and I can’t wait to collaborate with him on another project soon.
There will be a couple of specials coming up next week so stay tuned for those! And I’m hoping to announce a new development very soon!
If you’ve got questions…
Contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.
This week we’re wondering how an algorithm might be able to explain itself. And we’re joined by David Watson. David is a Doctoral candidate at the Oxford Internet Institute. He focusses on the epistemological foundations of machine learning and used to be a data scientist at Queen Mary’s Centre for Translational Bioinformatics.
Previously we’ve spoken about the ethics of different automated systems making decisions, whether those decisions relate to policing, healthcare, justice or finance. How can we understand such a decision? How can we ensure that it was fair and unbiased? There is both a legal and a technical aspect to explainability. The legal aspect asks how we audit systems and hold organisations accountable for the algorithms they build. The technical aspect asks how we build explainability into our systems.
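One common technique on the technical side is permutation importance: shuffle one input feature and see how much the model's accuracy degrades. A self-contained toy sketch, using an invented two-feature model (this is one illustrative method, not necessarily the one David discusses):

```python
import random

# Toy dataset: the target depends heavily on feature 0 ("size")
# and barely at all on feature 1 ("noise").
random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
targets = [3 * size + 0.1 * noise for size, noise in data]

def model(size, noise):
    # Pretend this is an opaque, already-trained model.
    return 3 * size + 0.1 * noise

def mean_squared_error(rows, ys):
    return sum((model(*row) - y) ** 2 for row, y in zip(rows, ys)) / len(rows)

def permutation_importance(feature_index):
    """How much does the error grow when this feature's column is shuffled?"""
    baseline = mean_squared_error(data, targets)
    column = [row[feature_index] for row in data]
    random.shuffle(column)
    permuted = [
        tuple(column[i] if j == feature_index else value
              for j, value in enumerate(row))
        for i, row in enumerate(data)
    ]
    return mean_squared_error(permuted, targets) - baseline

size_importance = permutation_importance(0)   # large: the model relies on it
noise_importance = permutation_importance(1)  # small: the model barely uses it
```

The appeal of this approach is that it treats the model as a black box, which is exactly the setting auditors find themselves in.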
Links mentioned whilst we chatted
David mentioned some papers about medical applications, and suggests taking a look at the following:
We talked about FATML – the organisation that looks into fairness, accountability and transparency in Machine Learning. Here is their website.
Books we like: Cathy O’Neil with Weapons of Math Destruction, Safiya Noble with Algorithms of Oppression and Virginia Eubanks with Automating Inequality.
We also spoke about Sandra Wachter who does loads on this stuff. Her Twitter can be found here.
Subscribe now on iTunes here
Subscribe now on Acast here
Today we’re looking at the new alt-right identities forming online. What is new about digital identities in an internet era compared to before? How should we taxonomise and think about them? Are there specific identities we should give attention to, and others we shouldn’t?
We couldn’t have recorded this episode without Sid. Sid Venkataramakrishnan is a Master’s student studying the digital identity of the far-right at the OII. He has an MS in Journalism from Columbia, likes Old Irish, and once survived a flash flood.
There was so much covered in this week’s episode – as discussed at the end we’ve decided not to link to too much of it. However, here are some links that we’re happy to share.
Links to topics covered in the podcast
If you wanna learn more about Old Irish then here’s an FAQ I found about how you can woo your partner of choice in this medieval language.
This article by Vox on Gamergate is fab. I’m also going to self-promote and say you should read this article by me from a couple of years ago on how I learned about the alt-right.
Here’s another piece of self-promotion from Alice about The Crisis of Meaning. I’ll be speaking about this a lot more in the coming months.
Show us some love!
Please contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait. Sid can be contacted via Twitter here: @SVR13
Please give us all the stars on iTunes or your podcast player of choice. Tell all your friends about us too!
See you next time!
Too many times we’ve heard that Reddit is the ‘most ethical’ social media site. So we wanted to explore what that really means with Jack La Violette. Jack’s background is in linguistics and anthropology. At the OII he researches the language of masculinity on reddit.com from a computational perspective.
Reddit is known as the ‘front page of the internet’. It’s effectively a link aggregator divided into different topics called ‘subreddits’. When you sign up to Reddit you’re automatically subscribed to about 50 default subreddits, like r/news and r/sport, and there are about 90,000 other active subreddits. Most accounts are anonymous, and an upvoting system determines which articles you’re more likely to see.
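That upvoting system feeds a ranking formula. A sketch of the ‘hot’ formula from the codebase Reddit open-sourced some years ago (current production code may differ): scores grow with the logarithm of net upvotes, and newer posts get a steady time bonus.

```python
from datetime import datetime, timezone
from math import log10

# Reference time used by the open-sourced ranking code.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(upvotes, downvotes, posted_at):
    """Rank by log of net votes plus a bonus for recency."""
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))          # 10x the votes ≈ +1 to the score
    sign = 1 if net > 0 else -1 if net < 0 else 0
    seconds = (posted_at - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)

# With equal votes, the post submitted a day later ranks higher,
# which is why the front page keeps turning over.
day_old = hot(100, 10, datetime(2019, 1, 1, tzinfo=timezone.utc))
fresh = hot(100, 10, datetime(2019, 1, 2, tzinfo=timezone.utc))
```

The logarithm means the first few hundred upvotes matter far more than the next few thousand, and the time term means even a heavily upvoted post eventually sinks, two design choices with real consequences for what communities end up seeing.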
We discussed Gamergate. My (Alice’s) favourite guide to Gamergate comes from Vox. Read it here. Andrew mentioned an article by Adrienne Massanari: “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”
Doxxing is potentially one of the worst things you can do to a person. It’s when you publish incredibly personal information about an individual, usually to shame them. I’ve heard of medical health records being amended by doxxers so a person cannot get a job. Here’s a bit more info from The Conversation.
Aaron Swartz was a genius who co-founded Reddit and was an activist for freedom of information. He committed suicide in January 2013 after the US authorities threatened to fine him $1 million and imprison him for 35 years. Here’s an obituary.
Here’s a brief guide to the rise and fall of Ken Bone.
Finally, here’s an excellent article by Alissa Centivany that gives an overview of the values of Reddit.
We don’t just want to speak at you
Please contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.
Subscribe now on iTunes here
Subscribe now on Acast here
Catch you next time