If you’ve ever been a user on any internet forum — but let’s be honest, mainly Twitter or Reddit — then you’ve definitely gotten into an argument online. And if not, kudos on your exemplary self-control.
If you're in the former category, you may have walked away from these online disagreements feeling drained or otherwise emotional, and confused as to why. For some, these arguments end so badly that they destroy relationships.
Now science has stepped in, with researchers at the University of Washington conducting a study of 260 participants in the hope of understanding these disagreements. Beyond that, the team aimed to develop “potential design interventions” that could make these discussions (fine, arguments) more productive and more centred on relationship building.
One thing the research team noted was that the features users reach for during an argument take a “no-road-back” approach: they end in unfollowing, unfriending, or blocking another user. The result? Relationships are cut off rather than repaired, and no common ground is found.
Regardless of political leaning, participants in the study shared similar preferences for how to have these conversations. According to the research, they preferred to have these discussions in private one-on-one chats, using an app like WhatsApp or Facebook Messenger, rather than on “more comment-heavy” public platforms.
Lead author Amanda Baughan says this mirrors what happens in real life: people pull each other aside for a private conversation rather than having a shouting match in a public area.
As for how technology can support those having ‘hard conversations’, both the research team and participants brainstormed technological design interventions, and came up with three preferred methods.
The first is democratising, in which community members use reactions, like upvoting, to boost constructive comments or content; Reddit currently uses something similar (not always to great effect). A rough sketch of how that kind of ranking might work appears below.
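To make the idea concrete, here is a minimal sketch of upvote-weighted ranking, written in Python. Everything in it is an assumption for illustration: the Comment structure, the field names, and the net-vote scoring rule are hypothetical, not the study's design or Reddit's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    # Hypothetical comment record; field names are illustrative only.
    author: str
    text: str
    upvotes: int = 0
    downvotes: int = 0

def score(comment: Comment) -> int:
    # Simple net-reaction score: comments the community upvotes
    # float to the top, and heavily downvoted ones sink.
    return comment.upvotes - comment.downvotes

def rank(comments: list[Comment]) -> list[Comment]:
    # Sort descending by score, so the thread leads with whatever
    # the community has collectively boosted.
    return sorted(comments, key=score, reverse=True)

# Example: the well-received comment surfaces first.
thread = [
    Comment("ash", "You're wrong and also bad.", upvotes=1, downvotes=9),
    Comment("sam", "Here's a source covering both sides.", upvotes=14, downvotes=2),
]
for c in rank(thread):
    print(c.author, score(c), c.text)
```

The design choice worth noticing is that the ordering comes entirely from community reactions; no moderator or platform operator decides what counts as constructive.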
The second is humanising, which is fairly self-explanatory: the goal of this intervention is to remind people that they are, in fact, interacting with other people. This could work by preventing anonymity, increasing the size of users' profile pictures, or even providing details about users, including their identity, background, or mood.
…Because background information definitely wouldn't give racist users an opportunity to be more openly hostile.
The final of the three options is channel switching, in which users would be able to move their conversation to a private space. As for how this could work, Baughan says it would shift the discussion out of a public arena, where the focus is on who's going to win, and towards trying to reach an understanding.
As for the next steps? Those would involve deploying the interventions in the real world, to see whether they help or hurt conversations outside of a research setting.