Instagram has new safety feature to turn off comments, DMs from non-followers

This is Bukayo Saka, one of the British soccer players who was subject to racist abuse online

Last month, Black British soccer players Marcus Rashford, Bukayo Saka, and Jadon Sancho were racially harassed on social media after missing their penalty kicks in the Euro 2020 final, which cost England the trophy. Earlier this week Twitter released an analysis showing that the bulk of the racist abuse came from the UK. Other than suspending accounts, Twitter hasn’t done much to address this. They stated that 99% of suspended accounts were identifiable and that ID verification wouldn’t have prevented the abuse. Instagram is also attempting to reduce abusive comments and DMs on its platform. Instagram announced that it recently implemented a feature called “limits,” which blocks comments and DMs from people who don’t follow you, or who have only recently followed you. Instagram already implemented another feature called “hidden words” back in April, which allows users to filter out DMs containing abusive words. Below are a few more details from The Verge:

Among them is something called “limits.” Turning this on prevents anyone who doesn’t follow you, or who recently followed you, from commenting or sending a DM. The feature is available to everyone globally today, and Instagram points out that it’ll likely be most useful to businesses and creators who expect a flurry of responses. Of course, turning comments or DMs off entirely would also work, but Instagram says this is a solution for people who still want the possibility of positively engaging with their community. The company says it’s also “exploring ways” to preemptively suggest that people turn this feature on when it detects a spike in activity.

Additionally, Instagram is building out its hidden words feature that launched in April, which allows people to automatically filter DMs with offensive words, phrases, and emoji, relegating them to a hidden folder. The feature now has a wider list of potentially offensive words, emoji, and hashtags. And finally, the app is issuing sterner warnings to people who try to post offensive comments. (This type of messaging already existed but only appeared if someone attempted to post multiple times.)

“We hope these new features will better protect people from seeing abusive content, whether it’s racist, sexist, homophobic or any other type of abuse,” the company writes in a press release. “We know there’s more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable.”

[From The Verge]

I swear these social media platforms are doing absolutely f*ck all about abuse on their platforms. Is Instagram joking with these silly features? I’ll admit the features will weed out some of the abuse, but it really isn’t enough. Every account needs to be verified as a real person and not a bot. Those who spew abuse on social media should suffer real consequences, like having their posts turned over to authorities. Folks need to fear that behaving poorly in a public forum will lead to terrible outcomes. I personally feel that our laws need to catch up to the digital age. Expecting people on social media to regulate themselves will never lead to real change. Companies make a sh*t ton of profit from the spread of hate.

These features filter out abuse from people who don’t follow YOU. There should also be a setting that limits comments on Instagram to accounts you follow, similar to how Twitter lets you restrict replies to people you follow. Of course we can also be bullied by people we follow, but this would be a good start. I guess I should be happy that Instagram has put these two features into place. But I continue to get DMs from people I don’t know digitally catcalling me, and I still see way too many abusive messages on Instagram and Twitter. All of these platforms need to do better, full stop.


13 Responses to “Instagram has new safety feature to turn off comments, DMs from non-followers”


  1. Snuffles says:

I find the existence of bots absolutely maddening. I have no doubt they can develop an algorithm that identifies bot behavior and gets those accounts flagged, investigated, and then banned.

    • Becks1 says:

Bots are so obvious, and I refuse to believe that IG and Twitter can’t get rid of them. Someone said that YouTube doesn’t allow them; if YT has it figured out, surely IG and Twitter can as well.

      • Snuffles says:

        I bet Twitter and Instagram are profiting from it in some way. People are buying followers and programming fake followers to comment for a fee. It’s not a stretch that Twitter and Instagram benefit in some way.

      • Mich says:

        @Snuffles They are absolutely profiting. If they got rid of the bots their user numbers would plummet and that would hit their company value. Social media has been responsible for true atrocities including actual genocides (see Myanmar). They don’t care. It is all about money.

  2. Becks1 says:

In general I see this as a feature that offers temporary protection to an account, but I’m not sure it’s a lasting solution. So with the soccer players, this feature would have stopped an onslaught of abuse from non-followers or new followers, but it would not have stopped any attacks from people who already followed them. But maybe it’s a good first step.

Where I can see this feature being helpful in my own social media realm (so to speak, lol) is with Meghan and Harry. An account that you don’t otherwise associate with them posts something related to them, and like clockwork the racists and derangers descend to attack. Using this feature would mean that people could still comment, but not the non-followers or the new followers. The account loses out on positive engagement from new followers, but it might be worth it. I’m also thinking of established social media presences like Gabrielle Union, who posted something in support of Meghan. I’m not sure if she received attacks for it; if she did, her team probably got rid of them in a hurry bc she’s on top of it like that. But a feature like this would allow her millions of followers to interact with the post while blocking out people who just want to attack her for supporting Meghan.

    • Snuffles says:

What about a feature that can identify bots and immediately block them from posting as well?

  3. Merricat says:

    I left most of social media behind when I was targeted by right wing incels for comments I made regarding the rights of women and girls. It hamstrung my efforts to publicize a theatre event I was involved in, but I slept a lot better. Eff these people.

  4. Annie says:

I said in a post a few days ago that social media needs regulations and consequences. These platforms incite and promote hate and have been a large, if not primary, factor in spreading misinformation.

It’s not a tool that most of the population (globally) is using successfully, and it’s become dangerous for so many reasons.

  5. lanne says:

    Maybe it’s time to end anonymity with social media. Social media is the new “public square.” If you would get shamed/criticized/punished for hate speech in public, you should face those consequences on social media as well. There has to be a way to verify all users, and still keep an option for whistleblowers and victims who need anonymity.

People policing themselves doesn’t work. And with the new deepfakes I read about, where people can “undress” women wearing clothes in pictures and share the images (it doesn’t work for men), there have to be public consequences for online behavior.

  6. Ninks says:

If you post anything remotely Covid-related on Instagram, they immediately identify it and flag it. But they can’t do anything about racist abuse unless the recipients go through the abuse that’s been sent to them, report the person or people who sent it individually, block the users, and delete the comments in question. It’s such BS.

    • Becks1 says:

      Someone on my FB posted a screenshot of a power point presentation used at our recent BoE meeting detailing the quarantine timeframes for students based on vaccination status. I don’t think it even said “COVID” in the slide, just mentioned “exposure” and “quarantine” and “masks.” The screenshot was flagged by FB for “misinformation.” So that can be flagged basically immediately but the racist abuse can stay?

  7. teehee says:

It’s always quantity over quality – high number of users, high volume of ads, and to hell with what happens amongst all those people.
We need more moderators – more rules in any social space and more people who enforce them – and their wages can come from the insane profits these apps generate, which are doing nothing but inflating stock values and buying yachts.
People want to speak their mind – it’s our nature. Even I try not to anywhere but here – but I just can’t help it; sometimes I just want to share something.
But nobody wants to be irrationally lambasted, and definitely not just for existing.
If people don’t like the rules or the attitudes in one group, they can just find another one.
How is this so hard to implement…
While I’m on it – I’m all for paid services. $5-50 a year to continue using a *quality* social media app, rather than selling all my data and being tracked everywhere I go, is absolutely fine with me.
Speed and quantity are killing quality and consent.

  8. SansaL says:

This is both good and bad, and completely pointless. It can protect people from abuse, but it also lets people post false information with absolutely no pushback.