Why Facebook’s Data Ethics Is One of the Biggest Threats to our Future
It’s the fourth decade of the internet, and the landscape has shifted dramatically.
Connectivity and data capture have exploded to an unprecedented scale, and with them, both terrifying and amazing opportunities have surfaced.
We’ve recently seen a wave of criticism of Facebook and other big tech companies and data custodians, amid scandals around data usage during political referendums (Brexit) and presidential elections (the 2016 Trump election), as well as accusations of monopolisation and breaches of anti-competition law (see Facebook’s latest legal battles).
Transparency and communication of data capture and usage are at the forefront of cultural and political discourse. With Facebook’s global influence (almost 40% of the world’s population is an active user), much is at stake as we continue searching for better privacy practices, while still allowing data to be used for ground-breaking and positive societal change and growth.
The Benefits of Facebook
For the User
If you had told me 15 years ago that soon I could video call my grandparents for free while walking down the streets of Chelsea in Manhattan (with no delay!), transfer money or tap my phone to pay for my meal or drinks (at any hour of the day!), or use phones to map the spread of a highly contagious disease… I’d no doubt have been sceptical.
Dystopian books and movies have always been quick to focus on the existential threats of such technologies, but I’m going to start with the less popular angle: the benefits of Facebook (and, a little, of similar tech companies too).
As a user, you get instant, free access to anyone in your family or friends circle: you can see the videos, photos and words they share, or the messages they send to you. You can meet a sister you never knew you had, learn a skill in Facebook groups, get directions to a store, find out about an event, or watch a million cute dog videos (I highly recommend joining Dogspotting if you haven’t already).
The app is constantly being updated and redesigned to improve the experience and make it more personal for you. Don’t like cat videos? Don’t want to see posts from racist uncle Craig? You can tell Facebook you’re not interested, and it will remove them from your experience.
If you are interested in ticket offers for your favourite band touring your city, online courses in data science or coding (yes please!), or a sale from a brand you like, you are now more likely than ever to be informed about them.
The next big winners benefiting from Facebook are businesses!
Coca-Cola, Mazda, Commbank: all the big businesses are spending millions there.
But with a self-serve ad platform like Facebook’s, you can spend as little as $1 (or a few hundred) on presenting ad creative to relevant audiences.
Sure, I could have read the NY Times 50 years ago and been advertised to. Knowing basic things like the fact I probably lived in NY, was somewhat educated, and maybe I leaned a little left on the political spectrum meant brands could have targeted products toward me.
But do they know if I purchased? If I engaged? If that bump in sales was from that ad or the TV ad they were running?
As John Wanamaker, an American pioneer in marketing, once said: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Facebook and internet direct marketing are allowing this question to be addressed for the first time.
You can now put ads in front of people who are interested in similar products, have engaged with your page already, have consented to marketing newsletters or even visited your store.
You can measure how many people saw the ad, engaged with it, clicked on it, added a product to their cart, and purchased. You can segment your audience by these behaviours and serve them another ad or offer in the future. The power this puts into the hands of businesses, even very small ones, democratises access to advertising for new and existing businesses all over the world.
The Problems with Facebook
For the everyday user, the legal terminology within the ‘terms and conditions’ around data usage and capture is hard to read and usually hard to find.
These companies have teams of user-experience design experts who work hard to remove any friction as users sign up, get acquainted with the app, and continue using it. The absence of plain language or any emphasis on privacy and data usage in that same flow would seem deliberate.
The resulting lack of trust is leading many to delete the apps or turn off data sharing altogether (where possible).
An example of this kind of behaviour is the weather app WeatherBug, which highlights that it uses location data to personalise your ad experience (https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html), but is less clear about how granular that data is (precise location rather than regional) and how long it is stored (an app like this probably only needs your current location).
User Safety and Privacy
There’s a stark division between countries, cultures, racial and gender demographics, and socio-economic classes on trust and complacency around data sharing and privacy.
If you are attending pro-democratic protests in Hong Kong, you would likely have very different levels of fear around government surveillance with the use of face-identifying technology, than the Trump insurrectionists did when storming the Capitol only a few weeks ago.
For minorities, women and children, online security can be critical in reducing vulnerability to physical and sexual threats both online and offline (e.g. stalkers). Strong privacy practices reduce the possibility of deanonymisation should datasets fall into the wrong hands.
Big tech needs to take responsibility for educating users in data literacy around its products. By taking advantage of consumer ignorance, these companies have helped push consumers toward less-credible sources, which are dangerously uninformed. Distrust is the seed from which many internet conspiracy theories grow, and we fear what we don’t know more than what we do (https://www.theguardian.com/technology/2020/apr/07/how-false-claims-about-5g-health-risks-spread-into-the-mainstream).
Facebook incorporates little fact-checking into its product, when the reality is that its scale gives it the power to distort our view of culture, and even to shift it entirely, should it be allowed or incentivised to do so.
Even within a free market and a democratic society, these giants have proven to pose massive risks and to be hard to regulate.
It’s taken a long time for governments and traditional press and media laws to evolve, balancing free speech and unbiased information with profitability. These laws are ill-equipped to deal with these new mediums.
The traditional media companies aren’t devoid of bias, but now, with Facebook the biggest media gatekeeper in the world, internal mantras like ‘Move fast and break things’ rather than ‘Democracy dies in darkness’ (https://www.wired.com/story/the-solution-to-facebook-overload-isnt-more-facebook/) come under greater scrutiny.
They operate under a different set of rules and incentives, and we now have an editor-in-chief whose allegiance is to making money and who is devoid of societal responsibility.
Looking to the Future
If we lose trust in the internet and big data, we may lose access entirely to these powerful datasets and their potential use for positive change in the world (addressing inequality, stopping the spread of disease, global access to education).
As people migrate to apps like DuckDuckGo (a privacy-focused search engine) or Signal (a WhatsApp competitor that stores only anonymised basic profile information), they opt out of future data-capture processes, reducing access to valuable insights.
Data laws and regulations are playing catch-up and vary wildly across countries. There have been positive developments with Europe’s GDPR and California’s CCPA to regulate and limit data capture and usage, but this needs to be addressed worldwide to increase accountability within these multinational organisations. There are also additional challenges for countries operating under different governments, especially more authoritarian regimes.
Data Transparency and Education
Since the Cambridge Analytica scandal, Facebook has been forced to address its transparency and communication around user data practices. And while it may feel confident in the steps it is taking, many are questioning whether we should limit the amount of data captured about users.
The Facebook data and privacy section is much more robust than it ever was before, but with teams of the best user-experience designers in the world at their disposal, this area could be much improved through more visual communication and increased visibility and accessibility.
Stricter Guidelines Around the Selling and Use of Data
Outlining all potential future uses of consumer data will be a key part of companies building trust. You may consent to more personalised ads now, within the platform you use, but you likely have no idea how that data may be used in the future or whose hands it will end up in.
Regulation and communication can greatly improve this.
Online advertising has also experienced an overhaul since Cambridge Analytica, introducing stricter ad guidelines and requiring verification to be able to run political ads. These are all important steps.
As with any powerful new tool, we are still learning how to match its capability with our societal responsibilities and expectations.
There are big questions to ask: What data is being captured and stored? Who has access? What is their incentive? Can I opt out? What can happen if the database is breached?
Broken trust from data breaches and exploitation has led to an overall distrust of data capture, and yet, from a practical standpoint, a contact-tracing app to combat Covid is well within our available technologies and a potentially very effective tool for the task.
The issue lies in what data is being stored, who owns that data, what else it can be used for (if there is a breach, or the company sells it or simply uses it for other purposes), and whether personal freedoms and privacy can be infringed upon.
The data revolution is still in its infancy. Without a proper ethical framework, distrust could spread further, and access could be severely limited, along with the opportunities for progress that come with it.