What is impeding truth-telling in our post-truth world?

I came back to the UK in November 2016, after a month in India, to find that Oxford Dictionaries had named ‘post-truth’ its word of the year. In this article, I explore and expose the kinds of lies that seem to have gone without comment in the past.

Doing one thing and saying another in the workplace: obfuscation of motive is one of the most common ways to lie.

This kind of lie is told by people whose words or actions give offence but who flatly deny what they said or did when challenged, while others who saw them do not come forward to support the victim. This kind of behaviour in the workplace can be hugely demoralising.

Few effective mechanisms exist to stop workplace harassment, a problem encountered by many women all over the world.

Here is Julia Shaw (an academic whom I’d met in Dartington, talking about ‘false memory’) telling us how, in the future, it will be possible to pinpoint people who make inappropriate remarks at work; she and two colleagues have built an app for it.

Shaw lists five ways in which organisations can better support victims of harassment by supporting the witnesses to harassment and discrimination. Like Julia, I too was targeted, and I left that workplace. The world is changing in such a positive way.

Lies buried in places you wouldn’t expect

Algorithms fuelled by ‘machine learning’ on (often biased) ‘big data’ increasingly play a role in life-changing decisions, whether financial, legal, or medical. The trouble is that the original data from which the machines learn is itself subject to human bias, and the algorithms inherit it.

Here is young Joy Buolamwini at the Massachusetts Institute of Technology, finding it hard to be recognised as a human by a robot, because such systems were programmed overwhelmingly by white men, and the initial template for machine learning was a white face!
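
To make the mechanism concrete, here is a minimal, invented simulation (a sketch only, not the actual system Buolamwini encountered; all numbers, groups, and names are made up): a simple classifier trained on data dominated by one group learns that group’s patterns and does noticeably worse on the under-represented group.

```python
# A toy simulation of training-data bias: a 'detector' trained mostly on
# one group's examples. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n_per_class, centre, offset):
    """Toy features for one demographic group: positive and negative
    examples sit on opposite sides of a group-specific centre."""
    centre, offset = np.array(centre), np.array(offset)
    pos = rng.normal(centre + offset, 1.0, size=(n_per_class, 2))
    neg = rng.normal(centre - offset, 1.0, size=(n_per_class, 2))
    X = np.vstack([pos, neg])
    y = np.array([1] * n_per_class + [0] * n_per_class)
    return X, y

# Group A dominates the training set (900 examples per class vs 100),
# and its classes separate along a different direction from group B's.
XA, yA = sample(900, centre=[0, 0], offset=[2, 0])
XB, yB = sample(100, centre=[6, 0], offset=[0, 2])
model = LogisticRegression().fit(np.vstack([XA, XB]), np.hstack([yA, yB]))

# Per-group accuracy: high for the over-represented group, noticeably
# lower for the under-represented one.
for name, c, o in [("group A", [0, 0], [2, 0]), ("group B", [6, 0], [0, 2])]:
    X_test, y_test = sample(2000, c, o)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not the particular model but the imbalance: the majority group dominates the error the model is trained to minimise, so the mistakes concentrate on the minority.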

Lies sitting in plain sight in the ‘small print’

Social media platforms sell our data to advertisers, having told us in the small print that they will do so. The more open-heartedly we communicate online, the more we leave ourselves open to being targeted by advertisers, using an ever more finely honed profile of us assembled by the algorithms. I wrote about this profiling in an article about the Cambridge Analytica incident on the Women’s Business Network.
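
For a flavour of how such a profile might be honed, here is a minimal sketch in the spirit of the like-based prediction research associated with the Cambridge Analytica story. The pages, users, and attribute are entirely invented, and real systems use vastly more data, but the principle is the same: what you like predicts what you will respond to.

```python
# A toy sketch of profiling from 'likes': a user-by-page matrix used to
# predict a personal attribute. Everything here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["news_a", "band_b", "brand_c", "cause_d"]  # hypothetical pages

# Rows = users, columns = pages; 1 means the user liked that page.
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
])
# Something known about these users, e.g. whether they clicked a past ad.
clicked = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(likes, clicked)

# Score a new user from their likes alone: this probability is the kind
# of 'finely honed profile' an advertiser can target.
new_user = np.array([[1, 0, 1, 1]])
print("predicted click probability:", round(model.predict_proba(new_user)[0, 1], 2))
```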

Here is Katie Couric discussing the Netflix film The Social Dilemma. You get a flavour of the issues from who takes part and what is discussed; none of it is reassuring.

https://tinyurl.com/KatieCinterview

Lies by politicians feeding you advertising on social platforms

A set of seminars is starting on Wednesday 14th October at the Oxford Internet Institute, the first of the formal activities of this new collaborative initiative; John Chadfield, whom I met a while back in Oxford, is giving the first talk, on ‘who tracks you’. This is about political advertising, and, not surprisingly, there is now a home-grown app for it.

Legitimising particular lies as ‘academic viewpoint’

There is a particular kind of bias that tries to enter the collective conversation, and it is plainly dishonest. It says, in effect, “what I am saying is an ‘opinion’, and you should therefore entertain it as such”, when in fact the speaker is saying something even they know to be a direct lie. They want it discussed because it will serve their own ends, whether political or personal.

Here is an example of such a project; the perpetrator of this particular one even went to court to try to prove that his ‘viewpoint’ ought to be part of the academic debate.

Having lived near Belsen, in Germany, I was appalled at the barefaced nature of this particular lie. However, Deborah Lipstadt stopped him with a court ruling: she looked at his references and footnotes, and they were all rubbish, of course.

Hurray for far-sighted historians!

So what are we doing about the lies which we encounter?

It turns out that we in the UK are lagging behind in considering AI-related biases; I couldn’t see any UK representation in this global forum.

Oxford is creating an Institute for AI Ethics to promote broad conversation between relevant researchers and students ‘across the whole University’. The LSE’s equivalent takes a narrower, law-based view, but it already exists.

But we have to walk alone in our day-to-day lives, and we need to fight bias in what we do and in whom we believe, says young Alex Edmans.

In this talk he discusses CONFIRMATION BIAS. The antidote used to be called critical thinking, and detecting confirmation bias can be taught systematically, as his talk illustrates.

We must create a new habit of looking critically at the evidence we find in the world around us, holding it up to careful scrutiny before we decide that something is true.

Are we teaching this in any part of our curriculum?
