WEB Notes: Did anyone ask for this? What if a friend or co-worker called "first responders" to your aid because they thought you were depressed or had suicidal thoughts? How would that make you feel? Like your privacy was invaded, maybe? Hey, we sign ourselves up for this type of thing, so I suppose we cannot complain about it. Just know that the more we continue to feed these machines, the more they learn and the more invasive things will become. It's the whole frog-in-the-pot idea, people: turn up the heat little by little and you will never feel a thing.
This is software to save lives. Facebook's new "proactive detection" artificial intelligence technology will scan all posts for patterns of suicidal thoughts and, when necessary, send mental health resources to the user at risk or their friends, or contact local first responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.