Dear Ditto,

As social technologies get more influential, how do we prevent harm?

Advice from:
Graziella Jackson
Partner & CEO

By now, most people have seen, heard, or read about Congresswoman Alexandria Ocasio-Cortez questioning Facebook CEO Mark Zuckerberg on the company’s fact-checking and advertising practices. 

This sparked a flurry of debates about free speech versus regulation of social media platforms, debates that now extend into most of our lives. In the past several years, social media has significantly increased our exposure to misinformation and disinformation — not to mention security breaches and inadequate data privacy — to the point where government regulation appears inevitable.

What are the arguments for or against social media regulation?

The typical argument against regulation says that it is the responsibility of social media users to view and judge social media content for themselves, without censorship, much as consumers of print media do. It treats this as a fundamental aspect of upholding free speech. In this view, the consumer is the ultimate decision-maker and has control over their experience — choosing what content to engage with and believe. By acting together, consumers regulate their experience with a social media company or tool. In turn, the company needs to abstain from “unreasonably restrictive and oppressive conduct” (American Bar Association) in controlling, limiting, or censoring content.

The typical argument for regulation says that in our “attention-driven” economy, social media companies rely on invasive data practices and behavioral and cognitive manipulation to drive predictable, habitual usage over time. Research shows that most Facebook users are unaware that the platform gathers insights from their “personal interests and traits” for the company’s — and their advertisers’ — benefit. In this view, the user is the product, sold to advertisers and content producers to increase company profits. Attention manipulation is an essential component of this model and relies on mood-altering content to control behavior. Content is produced by a virtually unlimited number of users and monitored by company staff and technologies. The belief here is that this open, largely unmediated marketplace of content, coupled with behavioral surveillance and embedded psychological manipulation, has led to harm at both the individual and societal level. This is why social media use has been compared to substance abuse and addiction, and why it has raised concerns about its influence on psychological health.

So the big question is — who is right? We were interested in whether applying design thinking could help answer this question.

First, can we EMPATHIZE with both perspectives?

When we started to look into both perspectives, we decided it was important to start by reviewing the ways in which people engage with media content. 

Not all media is handled equally:

  • Some types of media have intentional separation between primary content and advertising content (ex: Traditional newspapers).

  • Some types of media separate primary content and advertising content into different formats, but do not have intentional separation of the messaging across those formats (ex: Super Bowl ads shown during the televised event, Political ads shown during debates, Special print advertising supplements).

  • Some types of media intentionally place advertising messages within the context of the primary content (ex: Product placement in movies, Sponsored content in magazines).

  • Some types of media do not clearly distinguish between primary content and advertising content, and manipulate both in order to increase views and revenue (ex: Facebook sponsored posts).

When we quickly researched people’s reactions to these distinctions and the particular challenge of social media content (the fourth kind), we uncovered five primary concerns:

  • I know content manipulation is there, but I don’t have time to spot and counteract it.
    “I probably engage with all four types of media almost daily, but more often my experiences are centered on the fourth type. While I get that there’s a lot of manipulation happening, I don’t always have the time to figure out what is real and what isn’t.”

  • If content comes from a trusted source or is very popular, I usually just attach that trust / credibility to the content itself.
    “It’s easy when content seems to come from a credible institution. If I see The New York Times and I like The New York Times, I know how to engage with that content. When I don’t know the producer, I might be a little bit more wary. But, if a friend shares it with me or it has 1,000 likes, I probably will just automatically trust it because it came from a friend and had high popularity. This can be confusing.”

  • There may be distinctions between paid-for content and organic content that seem important, but on social media they seem like two different forms of the same thing.
    “I can ignore advertising when I see something listed as ‘sponsored.’ But what’s the difference between sponsored content and someone organically publishing a piece of content that goes viral and gets tons of views and then ends up in my feed because an algorithm put it there? Isn’t that kind of a form of sponsoring content? And that content might actually have the same misinformation in it, but I’ll likely be less critical of the second type of content because it wasn’t officially listed as ‘sponsored.’ Bottom line is, it’s hard to tell what you’re working with and the labels we use for content don’t seem to fit.”

  • The notion of freedom of speech or freedom of choice only truly exists in what you post and who you choose to follow. The rest is mostly controlled on your behalf.
    “I don’t really have any understanding of how Facebook and other platforms are controlling the content I see and the content that my network sees. I don’t think there is a lot of consumer freedom in the platform as it is, because there’s no real transparency in what’s being shown to me and why. When I go on a newspaper, I see a byline. I can research that writer. I know there are editors. I can research them. I know what I’m working with. Same on television and radio, for the most part. On social media — no clue. Who’s behind that content? What’s more, I often feel like I'm being penalized by Facebook for not being a more active user. That takes away my freedom and control. The concept of ‘free speech’ on Facebook — I don’t think it actually exists.”
  • I can sense a constant potential for harm, yet continue interacting.
    “When we’re off Facebook, we feel pretty good. When we’re on Facebook we feel pretty bad. But we can’t stop getting on Facebook. That may be the most damning evidence. In this case, ‘I know it when I see it’ is still relevant.”

How can we use this perspective to better DEFINE the problem?

Ultimately, our findings came down to:

  • People do want to have more control over their social media experiences. They feel they have some control today, but to a large degree they have little transparency into how social media works and feel their control is limited. 

  • People want confidence that they are not being harmed or contributing to harm. Because of the lack of transparency into content policies and practices, it is hard to build this confidence. People generally feel guilty about having shared misinformation at some point and lack confidence that they will be able to prevent themselves and others from doing so in the future. This speaks to distrust of the way content is governed by social media companies.

  • Most people feel confident in their ability to critically judge content in general, but that confidence decreases as the volume of manipulated content increases. They often lack the time to intentionally evaluate everything presented to them, figure out how and why it is being presented, and prevent misinformation from causing harm.

  • People don’t always mind that there is advertising money behind content (advertising, product placement, sponsoring, etc.). They mind when it is presented in a way that makes it hard to tell what the source of the content is, who produced it, and how it was funded. Making this information more transparent would help them take control over their experience.

John Milton once wrote: “Let Truth and Falsehood grapple; whoever knew Truth put to the worse in a free and open encounter?” The challenge in today’s social media landscape is that people feel truth is indistinguishable from falsehood and falsehood is made to be more addictive — rendering the experience of interacting with both types together on social media inherently risky. 

How does this answer the original question?

To return to the original question of who is right on the issue of social media freedom versus social media regulation: our findings suggest that the question itself may be too narrow.

In our initial, informal, and qualitative discovery, we found users already have a sense that they are subject to excessive interference by social media companies in their day-to-day experience. They don’t feel free, and they want more freedom from manipulation and harm. They see their relationship with social media as related to both content consumption and psychological health.

They are primarily concerned about two things:

  • What content is allowed on the platform and how that content is governed.

  • How often they’re being manipulated in their interactions with that content.

While this can and should be investigated further, it allowed us to hypothesize that:

  • Government regulation will have an important role to play in the first item — defining what is allowed on the platforms and how that content is controlled to prevent issues like invasive surveillance, political manipulation, racial and social profiling and targeting, poor working conditions for social media employees, and national security breaches.

  • On the second item, there are actions social media companies should take right now to reduce the day-to-day interferences and psychological manipulations that are embedded in their technologies, particularly those causing harm.

Our next section focuses on the second bullet — the immediate actions social media companies can take to reduce harm through better design.

Let’s IDEATE some possible solutions ...

Taking our defined problems, we came up with 12 ideas for how social media companies may be able to tackle these issues:

  • As is the case with political advertising on broadcast media, publish the source of paid content and how it was paid for, particularly related to political content.

  • Introduce more tools that allow users to toggle between content that is curated on their behalf and an unfiltered view of content. 

  • Introduce options for users to engage with the platform without sponsored content (ex: an ad-free subscription model).

  • Create a version of the social media tool that eliminates violent or harmful content and allow users to opt-in to that version of the platform.

  • Allow users to publicly rank content as harmful, with indicators in the user interface that show the number of people who ranked a piece of content as harmful. Have a way to review and remove content once it crosses a certain threshold (see the sketch after this list).

  • Provide people with a dashboard of their time spent on social media and trends in their usage, for example: data that shows them how much they interact with different types of content. Show them how they engage over time, so they can see and learn from patterns of interactions.

  • Put time limits on engagement or suggest users take a break from the platform. Allow users to set target daily or weekly time limits on social media, and alert them when they have gone over those limits. Encourage them to spend time offline.

  • Indicate to users what data is being collected on them and how it is being used, stored, and secured. Quickly and openly notify users of data breaches.

  • Indicate to users what common identifying factors are being used to show them content and allow them to set greater privacy controls.

  • Eliminate partnerships with content producers who engage in information distortion or produce content that incites violent or lawless action.

  • Involve social media users explicitly in deciding what content, features, and functionality will be removed or introduced over time. 

  • Involve researchers, ethicists, public servants, and human rights and racial and social justice advocates in drafting the ethical standards of the platform and how those will influence content and design. Align these standards with the U.N. Sustainable Development Goals.
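
To make the harmful-content threshold idea above a bit more concrete, here is a minimal sketch in Python. Every name in it (HarmReportTracker, REVIEW_THRESHOLD, report_harm) is hypothetical and ours alone; no platform exposes this interface. It simply illustrates how per-user harm reports could be counted and how a piece of content could be queued for human review once a threshold is crossed.

    from collections import Counter, defaultdict

    REVIEW_THRESHOLD = 100  # hypothetical number of reports that triggers review

    class HarmReportTracker:
        """Counts user harm reports per piece of content (illustrative sketch only)."""

        def __init__(self, threshold=REVIEW_THRESHOLD):
            self.threshold = threshold
            self.reports = Counter()           # content_id -> number of harm reports
            self.reporters = defaultdict(set)  # content_id -> user_ids who reported it
            self.review_queue = []             # content_ids awaiting human review

        def report_harm(self, content_id, user_id):
            # Record one user's report; each user counts once per piece of content.
            if user_id in self.reporters[content_id]:
                return
            self.reporters[content_id].add(user_id)
            self.reports[content_id] += 1
            # Queue the content for human review the moment it reaches the threshold.
            if self.reports[content_id] == self.threshold:
                self.review_queue.append(content_id)

        def harm_count(self, content_id):
            # The public count shown in the interface next to the content.
            return self.reports[content_id]

    # Example usage, with a small threshold for illustration
    tracker = HarmReportTracker(threshold=3)
    for user in ("u1", "u2", "u3"):
        tracker.report_harm("post-42", user)
    print(tracker.harm_count("post-42"))  # 3
    print(tracker.review_queue)           # ['post-42']

The design choice worth noting is that the threshold only queues content for review; the decision to remove it would still rest with human reviewers and a published policy.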

While these are just a few ideas — each with varying levels of feasibility — any one of them may merit further investigation, prototyping, and testing. This is how we can apply design thinking as a tool for working with social media companies and consumers to create a future of positive, ethical social media that benefits our communities and our lives.

Do you have an idea for how we can improve our social media platforms? If so, send your idea to connect@echo.co with the subject line: Future of Social Media.

Ask Ditto your questions by emailing ditto@echo.co

(Don’t worry, your questions will remain anonymous.)