Should You Monitor the Word Suicide on Twitter?

Ashley Billasano was an eighteen-year-old who cried for help many times. Twitter may have been the last place she voiced her pleas and frustrations. Not only did Twitter (metaphorically) ignore her; so did the very people she entrusted with her accusations of rape and other abuse.

Either nobody was paying attention to her Twitter stream, or nobody knew how to respond, or nobody cared (in this society, that seems to be an emerging trend). Not a single one of her tweets received a reply.

This is remarkable.

It’s an instance of why I continue my own plea to Healthcare Social Media hand-wavers to plumb the ethical ramifications of social media more deeply.

We can’t prevent all suicides. I get that.

But if nobody was paying attention to her tweets, then one question is: Who should, if not her followers?

Should agencies that specialize in handling suicides monitor the public stream for mentions of “suicide” and related keywords? And not just on Twitter, but elsewhere?

There’s no linear answer, of course: if you say “Yes,” then there are new arcs of questions that need answering. Is this an invasion of privacy (not in a legal sense, but an ethical one)?

Another question: what if you are a healthcare professional or organization and your account follows someone who begins an apparent plea for help (bearing in mind that suicidal pleas take different forms)?

Would a hospital be held legally responsible if it failed to take action (and how would the hospital “prove” that it didn’t see the tweet)?

If you set up an alert for suicide keywords (from any or all social media platforms), do you then put yourself in a position of *some* responsibility?

Years ago I asked these sorts of questions – long before there was even the phrase “Healthcare Social Media”. You should raise these questions too if you’re truly passionate about this stuff. (I was more optimistic then. I still am – but I’m not convinced enough people are taking these questions seriously enough.)

At conferences where so many people praise Twitter – especially marketers – I often counter with my own snarky (but closer-to-the-truth) aphorism: “Nobody reads your tweets.”

Now that I know my sarcastic insights into the marketing potential of Twitter are far clearer and more realistic than most marketers’, I’m heartbroken.

I wish I were wrong.

Phil Baumann




7 Replies to “Should You Monitor the Word Suicide on Twitter?”

  1. These are all excellent questions. Right now, Facebook does have procedures in place where people can ask for assistance in a case where they think a friend on the site may be suicidal. (It’s not easy to find on the site, but it’s there.)

    Facebook is an active partner in the National Action Alliance for Suicide Prevention, in which I participate in my role with the Entertainment Industries Council. We’re also working on reaching out to Twitter to put similar policies in place (though it’s amazingly hard to connect with the right person there).

    The National Suicide Prevention Lifeline (@800273TALK) currently appears to respond to people who directly tweet to them, but from what I can tell is not proactively reaching out. It may be a staffing issue, but monitoring and responding would seem to be a role they could play well. (Though I suspect that most posts that intimate suicidal thoughts don’t actually use the word “suicide.”)

    That’s a very sad story. I think all of us, whether we are health professionals or not, have a responsibility to take action — at least by reaching out to ask “Are you okay?” — if we see someone posting things that could indicate suicidal thoughts.


    1. Agree, Nedra

      This is one of those things where social media opens all sorts of doors – for good or ill.

      I don’t know if Twitter will be able to do much (Twitter’s leadership doesn’t appear to be very sprightly when it comes to maturing its own service – perhaps that will change, but I’m not very hopeful right now).

      As I said, it may be that we’ll see specialized non-profits get involved here. Monitoring is one thing – making effective decisions about what’s monitored and how to respond would probably be challenging as these media finally become the predominant means of communication.

      In the end, there’s no replacing education, a healthy supportive network and the kindness of good people with good ears.


  2. It would seem obvious to me that all sites would be monitoring the word. However, they don’t. When one of my followers, although not someone I follow, made such a reference, I contacted the authorities in the UK town where this person lived. The authorities said they would follow up. This person is still posting. You never know when a call for help is real, but it’s worth following up nonetheless.

    1. Hi Angela

      I don’t know if the platform companies themselves would be monitoring or not (and it wouldn’t just be for the word itself – but also related keywords/phrases).

      The more interesting questions go to the general public’s streams and for organizations.

      There’s no one way to handle these situations – and there are plenty of “false alarms”, which only illustrates the tangled webbing which social media continues to weave in its wake.

  3. Brilliant post, thanks Phil. I have forwarded it on to the CEO of Lifeline in Australia, as I am part of an eMental health strategy group that is looking at national strategies for how we use this new technology to improve how people find, access and use eMental health tools and services.
    I know Lifeline has done some really interesting stuff in this space and would probably have a lot of valuable experience and knowledge to add to the debate. From memory they have done something specific with localized Google searches and also within Facebook. Not sure about Twitter.
    I find that people’s behavior online mimics the real world: we avoid conversations we find difficult – in fact, it’s easier to “walk away” online and not respond.
    Providing a digital safety net that supports those cries for help by directing them to services that can help, while still retaining the privacy of all social media users, is an area we need to work on.

    Best regards,

  4. This has also come up on social gaming sites where the game includes a chat feed for you to talk with other players. One site I visit fairly regularly has this feature and although I don’t use the chat feed, others picked up on someone’s suicidal comments and reached out to the site’s administrators to step in. Many users on sites with chat rooms or feeds get to know each other to the point where they exchange personal e-mails offline and notice when someone hasn’t logged in for a while.

    Should healthcare orgs monitor these things? I don’t see how they could, but I do think there are ways for healthcare and mental health groups to provide links through Twitter and other social sites to direct people to where they can find help.
