SPECIAL: Alice Thwaite on Echo Chambers

In these last two episodes of TWAAATS we interview each other about our specialist subject areas. In this episode, Andrew Strait chats to Alice Thwaite about her work on echo chambers. This is an area Alice has spent several years researching through her work on The Echo Chamber Club. She came to the Oxford Internet Institute with the goal of updating and adapting the theory associated with echo chambers, because earlier versions had a number of faults. For example, no one had differentiated between groups that are merely different, groups that disagree, and groups that are polarised, and no one had worked out the link between access to information and polarisation. Listen now to the conclusions she reached.

Get in touch

Email Alice here, and Andrew here. We can also be found on Twitter: @alicelthwaite and @agstrait.

SPECIAL: Andrew Strait on Content Moderation

Content Moderation with Andrew Strait

To end this series, we’re interviewing each other about our specialist subject areas. First up is Andrew, who specialises in content moderation. Andrew wanted to understand how content is moderated on social media platforms. How do companies choose which comments to remove and which articles to block from the site? In other words, how do companies ensure that the content they show to users is not harmful? He interviewed a range of companies that specialise in moderation to learn more. Here he details his findings…

Any comments?

Email Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.

Power and Technology

Power and Technology with Carl Miller

This week we’re joined by Carl Miller, director of research at the Centre for the Analysis of Social Media at Demos. His book, “The Death of the Gods: The New Global Power Grab”, is out now, and we wanted to speak with him about its key ideas.

You can buy the book on Amazon here.

Bye Andrew!

We’re so sad to say that Andrew is leaving the show, and so we are ending Season 1. He is moving on to a great new role at a fab new company and won’t have the time to continue TWAAATing. He’s been such a great co-host, and I can’t wait to collaborate with him on another project soon.

There will be a couple of specials coming up next week so stay tuned for those! And I’m hoping to announce a new development very soon!

If you’ve got questions…

Contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.

 

Explainability and AI

Explain Yourself Algorithm!

This week we’re wondering how an algorithm might be able to explain itself. And we’re joined by David Watson. David is a Doctoral candidate at the Oxford Internet Institute. He focusses on the epistemological foundations of machine learning and used to be a data scientist at Queen Mary’s Centre for Translational Bioinformatics.

Previously we’ve spoken about the ethics of automated systems making decisions, whether those decisions relate to policing, healthcare, justice or finance. How can we understand such a decision? How can we ensure that it was fair and unbiased? There is both a legal and a technical aspect to explainability. The legal aspect asks how we audit systems and hold organisations accountable for the algorithms they build. The technical aspect asks how we build explainability into our systems.
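To make the technical side a little more concrete, here is a toy sketch of one approach, a counterfactual explanation (the kind of thing Sandra Wachter, linked below, has written about): instead of opening up the model, you report the smallest change to an input that would have flipped the decision. Everything here is invented for illustration; the model, weights and feature names are not from the episode.

```python
from itertools import product

# Hypothetical linear "loan approval" model, invented purely for illustration.
WEIGHTS = {"income_k": 0.04, "debt_k": -0.06, "years_employed": 0.3}
THRESHOLD = 2.0  # approve when the weighted score reaches this value


def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def approved(applicant: dict) -> bool:
    return score(applicant) >= THRESHOLD


def counterfactual(applicant: dict, step: float = 1.0, max_steps: int = 200):
    """Brute-force the smallest single-feature change that flips a rejection
    into an approval; returns (feature, change, number_of_steps)."""
    best = None
    for feature, direction in product(WEIGHTS, (+1, -1)):
        for n in range(1, max_steps + 1):
            candidate = dict(applicant)
            candidate[feature] += direction * n * step
            if approved(candidate):
                if best is None or n < best[2]:
                    best = (feature, direction * n * step, n)
                break
    return best


applicant = {"income_k": 30, "debt_k": 10, "years_employed": 2}
print(approved(applicant))        # False – the toy model rejects this applicant
print(counterfactual(applicant))  # ('years_employed', 3.0, 3): three more years of employment would flip the decision
```

The appeal of this style of explanation is that it tells the person affected what would have had to be different, without requiring them (or the company) to expose the model’s internals.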

Links mentioned whilst we chatted

David mentioned some papers about medical applications. He suggests taking a look at the following: [1] and [2].

We talked about FATML – the community that works on fairness, accountability and transparency in machine learning. Here is their website.

Books we like: Weapons of Math Destruction by Cathy O’Neil, Algorithms of Oppression by Safiya Noble, and Automating Inequality by Virginia Eubanks.

We also spoke about Sandra Wachter who does loads on this stuff. Her Twitter can be found here.

Subscribe!

Subscribe now on iTunes here

Subscribe now on Acast here

Identity and the Alt-Right

Digital Identity and the Alt-Right

Today we’re looking at the new alt-right identities forming online. What is new about digital identities in the internet era compared to before? How should we taxonomise and think about them? And are there specific identities that deserve our attention, and others that don’t?

We couldn’t have recorded this episode without Sid. Sid Venkataramakrishnan is a Master’s student studying the digital identity of the far-right at the OII. He has an MS in Journalism from Columbia, likes Old Irish, and once survived a flash flood.

We covered so much in this week’s episode that, as discussed at the end, we’ve decided not to link to most of it. However, here are some links that we’re happy to share.

Links to topics covered in the podcast

If you wanna learn more about Old Irish then here’s an FAQ I found about how you can woo your partner of choice in this medieval language.

This article by Vox on Gamergate is fab. I’m also going to self-promote and say you should read this article by me from a couple of years ago on how I learned about the alt-right.

Here’s another piece of self-promotion from Alice about The Crisis of Meaning. I’ll be speaking about this a lot more in the coming months.

Show us some love!

Please contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait. Sid can be contacted via Twitter here: @SVR13

Please give us all the stars on iTunes or your podcast player of choice. Tell all your friends about us too!

See you next time!

The Ethics of Reddit

The Ethics of Reddit

Too many times we’ve heard that Reddit is the ‘most ethical’ social media site, so we wanted to explore what that really means with Jack La Violette. Jack’s background is in linguistics and anthropology, and he is now at the OII, where he researches the language of masculinity on reddit.com from a computational perspective.

Reddit is known as the ‘front page of the internet’. It’s effectively a link aggregator divided into different topics called ‘subreddits’. When you sign up to Reddit, you’re automatically subscribed to around 50 default subreddits, like r/news and r/sport, and there are roughly 90,000 other active subreddits. Most accounts are anonymous, and an upvoting system determines which articles you’re more likely to see.
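For anyone curious how an upvote-plus-recency ranking can work in practice, here is a rough sketch loosely modelled on the ‘hot’ formula from Reddit’s formerly open-source codebase; the constants are illustrative and the site’s current ranking may well differ.

```python
from math import log10

# Toy "hot" ranking: net votes count on a logarithmic scale, and newer posts
# get a steady recency bonus, so fresh posts can outrank older, popular ones.

REFERENCE_EPOCH = 1_134_028_003  # fixed timestamp that all scores are measured against


def hot(upvotes: int, downvotes: int, created_utc: float) -> float:
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))        # 10 net votes ~ 1.0, 100 ~ 2.0, ...
    sign = 1 if net > 0 else -1 if net < 0 else 0
    recency = (created_utc - REFERENCE_EPOCH) / 45_000  # ~12.5 hours per extra point
    return round(sign * order + recency, 7)


# A newer post with fewer votes can beat an older, more popular one:
old_popular = hot(100, 10, 1_500_000_000)
new_modest = hot(20, 2, 1_500_100_000)    # posted ~28 hours later
print(new_modest > old_popular)           # True
```

Because the vote term is logarithmic while the recency bonus grows steadily with posting time, the front page keeps turning over rather than being locked up by a handful of all-time favourites.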

Interesting Links

We discussed Gamergate. My (Alice’s) favourite guide to Gamergate comes from Vox. Read it here. Andrew mentioned an article by Adrienne Massanari: “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”.

Doxxing is potentially one of the worst things you can do to a person: making deeply personal information about an individual public, usually to shame them. I’ve heard of doxxers altering people’s medical records so that they cannot get a job. Here’s a bit more info from The Conversation.

Aaron Swartz was a genius who co-founded Reddit and was an activist for freedom of information. He took his own life in January 2013 while facing US federal charges that carried a fine of up to $1 million and up to 35 years in prison. Here’s an obituary.

Here’s a brief guide to the rise and fall of Ken Bone.

Finally, here’s an excellent article by Alissa Centivany that gives an overview of the values of Reddit.

We don’t just want to speak at you

Please contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.

Subscribe now on iTunes here

Subscribe now on Acast here

Catch you next time

Can the internet be safe?

Can the internet be safe?

Deputy Director of the Oxford Internet Institute, Vicki Nash, joins us this week to discuss internet safety and internet regulation. Vicki is responsible for presenting OII research as evidence at government policy meetings, so she has a unique insight into how the UK Parliament is approaching internet governance.

Internet Safety

We spoke about this Internet Safety Green Paper published by the Government. Its key points are as follows:

  • What is unacceptable offline should be unacceptable online;
  • All users should be empowered to manage online risks and stay safe;
  • Technology companies have a responsibility to their users.

Internet Regulation

Vicki recently gave evidence to this inquiry: “The internet: to regulate or not to regulate?”

Interesting Links

More information about the Australian Cyber Abuse Initiative can be found here.

Here’s a link to the Santa Clara Principles on Content Moderation.

Vicki mentioned Jonathan Zittrain’s views on Private Sheriffs. Watch a lecture he gave on this subject at the OII.

We mentioned video platform Musical.ly. Take a look here.

Our contact details

Please email Alice here, and Andrew here. Or tweet at us! We’re at: @alicelthwaite and @agstrait. Vicki Nash can be found on Twitter here: @VickiNashOII

Subscribe now on iTunes here

Subscribe now on Acast here

Please give us FIVE STARS and a REVIEW on iTunes. It doesn’t matter if you don’t mean it!!

Til next time!

Information Control in the Arab Region

Information Control in the Arab Region

We tend to focus on Western democracies on TWAAATS, but this week we wanted to understand the information controls being placed on citizens in the Arab region. Mona Elswah joins us to explain the impact social media is having in the region and how governments have responded since the Arab Spring. She is a researcher on the Computational Propaganda project and a DPhil student at the OII.

What is information control?

For this episode, we mean the control governments exert over what information is shared among their citizens and what is censored. In 2011, the Arab Spring resulted in the toppling of the Tunisian dictator, and the organisation of those protests was attributed, at least in part, to the power that social media sites like Facebook granted to citizens. The leaders and dictators in the region are now aware of the power of social media, and consequently they seek to control it.

Send us some love!

Please email Alice here, and Andrew here. Or tweet to us! We’re at: @alicelthwaite and @agstrait. Mona can be found on Twitter here: @monaelswah

Subscribe now on iTunes here

Subscribe now on Acast here

Catch you next time

 

Principled Artificial Intelligence

Google’s Artificial Intelligence (AI) Principles

Nahema Marchal joins us this week to discuss Google’s newly published ‘AI principles‘. Nahema is a colleague of ours at the OII; she is the co-chair of the Connected Life conference and is interested in the online drivers of partisan hostility.

Interesting Links and Definitions

Artificial Intelligence definition: the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. (Ironically, this has been taken directly from Google).

Google and Warfare: Google employees protest company involvement in warfare technology. Read more in the i.

Here’s the PAI – the Partnership on AI.

Here’s AI Now.

Finally, here’s Data & Society’s website.

We spoke about science fiction writer Arthur C. Clarke and his third law: “Any sufficiently advanced technology is indistinguishable from magic”. Read the wiki.

Slide into our DMs 😊

You may contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait. Nahema can be found on Twitter at @nahema_marchal.

Subscribe now on iTunes here

Subscribe now on Acast here

Apologies for the slight delay in uploading this podcast. We’ve had a backlog of work to get through – but we promise we’ve got some really exciting guests and topics coming up in the next few weeks! So bear with us! 🙂

Catch you next time

GDPR – the Aftermath

Finally, we’ve received our last email from a company that we didn’t even realise had our data, pleading with us to opt in to their newsletter. So how do we feel about the GDPR (General Data Protection Regulation) post-25th May? Alice and Andrew discuss.

Links and People

Helen Nissenbaum’s book ‘Privacy in Context’

James Williams’s book ‘Stand Out of Our Light’

Sandra Wachter’s Twitter is here and Brent Mittelstadt’s is here. Their paper on algorithmic explanation via counterfactuals can be found here.

Contact us 😊

Contact Alice here, and contact Andrew here. These are our Twitters: @alicelthwaite and @agstrait.

Subscribe now on iTunes here

Subscribe now on Acast here

If you’re in the business of giving reviews on iTunes, then please give a positive review for us!