Could influencer marketing guidance be more lenient? That was the question posed by Talking Influence for its latest Deep Dive feature, and ITB Worldwide's Senior Account Director Aaron King had some thoughts to share. Here's what he had to say:
“The beauty of social media and the internet at large is being able to get advice and find the stories and content that you want. There are many restrictions on social media, and that's important, as we need to be conscious (not just as marketers but as people) about what we're putting out there and whether it's content we want young people to consume.
I'm pro-moderation, as long as we're not silencing the voices that matter. The platforms have to move with the times. If we want to have important conversations around sexual health or alcohol abuse, for example, there's currently no mechanism to ensure that educational content can stay on the platforms.
Think about the way media works: if a journalist writes an article for a newspaper that is offensive or racist, the newspaper is liable. The challenge with social platforms is scale; where newspapers might have hundreds of journalists, social platforms have billions of people posting content. If we're going to treat social media platforms in the same way, then the platform is liable; they need to take responsibility and have safeguards in place to protect their audience, especially when it's so easy to discover content.
The problem is that current self-imposed restrictions and regulations don't always allow for the nuances around certain words and topics, for example when disabled creators are flagged for community guideline violations. It's all about context. Platforms need to look left and right instead of having a laser focus. There has to be a way to recognise who is routinely creating valuable content so that their content isn't automatically blocked.
It requires a tweak to the approval algorithm, and there are a few ways to do this:
- ‘Send to moderator’ button: where any content which could be misconstrued as harmful is reviewed by human eyes to determine whether it should or shouldn’t be on the platform, rather than relying solely on a robotic review process.
- Extension of the blue tick verification model: giving the audience a way to navigate and identify sensitive content themselves. For example, a trusted voice who has built their career talking about disability could get a purple tick.
- Self-moderation options: an opportunity for creators to moderate their own content at the point of upload, so it's clearly flagged when it isn't suitable for children.”
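Taken together, the three ideas amount to a routing decision layered on top of automated review: honour a creator's own labels, give trusted voices on a topic the benefit of the doubt, and send borderline cases to a human rather than auto-blocking them. Below is a minimal sketch of how that routing could be expressed. The class names, thresholds, and the purple-tick-style `topic_verified` flag are hypothetical illustrations of the ideas above, not any platform's actual system or API.

```python
# Hypothetical sketch of the moderation routing described above.
# None of these names or thresholds correspond to a real platform.

from dataclasses import dataclass, field
from enum import Enum, auto


class Decision(Enum):
    APPROVE = auto()        # publish normally
    HUMAN_REVIEW = auto()   # "send to moderator": a person decides
    AGE_RESTRICT = auto()   # self-labelled as unsuitable for children
    BLOCK = auto()          # clear-cut violation


@dataclass
class Creator:
    topic_verified: bool = False                      # e.g. a "purple tick" for trusted voices
    verified_topics: set[str] = field(default_factory=set)


@dataclass
class Content:
    creator: Creator
    topics: set[str]
    self_label_adult_only: bool = False   # creator-applied "not for children" label
    harm_score: float = 0.0               # output of an automated classifier, 0..1


def route(content: Content,
          review_threshold: float = 0.4,
          block_threshold: float = 0.9) -> Decision:
    # High-confidence violations are still removed automatically.
    if content.harm_score >= block_threshold:
        return Decision.BLOCK

    # Self-moderation: honour the creator's own "adult only" label.
    if content.self_label_adult_only:
        return Decision.AGE_RESTRICT

    # Verification extension: a creator trusted on this topic isn't auto-blocked
    # just because the topic trips keyword filters (e.g. disability, sexual health).
    if content.creator.topic_verified and content.topics & content.creator.verified_topics:
        return Decision.APPROVE

    # "Send to moderator": borderline content gets human eyes rather than an automatic block.
    if content.harm_score >= review_threshold:
        return Decision.HUMAN_REVIEW

    return Decision.APPROVE
```

The key design point the sketch tries to capture is that the automated classifier's score is only one input: creator self-labels and topic-level trust are checked before any automatic block short of a clear-cut violation.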
Some of Aaron’s viewpoints were included in the Deep Dive feature from Talking Influence, along with other perspectives from across the industry. Read more here: https://talkinginfluence.com/2022/08/31/deep-dive-reverse-regulations-could-influencer-marketing-guidance-be-more-lenient/.