In May 2017, the LGBT organization GLAAD posted a video on YouTube of actress Debra Messing receiving an award.
In her acceptance speech, Messing praised Americans for supporting one another and reminisced about “Will & Grace,” the NBC show she starred in, and its influence in telling the stories of members of the gay community.
She also called on members of the Trump administration to “do right” by the LGBT community, including removing Steve Bannon (who has since left) from his post as President Donald Trump’s chief strategist. She did not specify what her criticism of Bannon was. She also said in her speech that Ivanka Trump should work for “women’s issues.”
“You can’t just write #womenwhowork and think you’re advancing feminism,” she said, referring to a hashtag Trump often uses when promoting her products, including a book by that title. “You need to be a woman who does good work.”
GLAAD did not have a robust YouTube following, but the video started trending — largely because of negative comments about Messing and her speech. It was part of a coordinated effort to “attack” the video with “vile hate speech,” said Jim Halloran, GLAAD’s chief digital officer.
That’s why GLAAD announced this week that it is working with Google’s parent company, Alphabet, to change the way artificial intelligence understands LGBT-related content online.
“The internet is such a vital resource for the LGBT community, especially for young people finding connection,” he said. And that extends from YouTube to Google, Facebook and Twitter. “That lifeline is under attack.”
Because content related to marginalized or minority groups — gay people, women, people of color and some religions — tends to generate more negative feedback than content not related to those groups does, artificially intelligent algorithms have started to learn in some cases that LGBT-related phrases are “bad,” Halloran said.
To combat this, GLAAD is working with Jigsaw, a division of Alphabet, to help train the artificial-intelligence models behind those algorithms, teaching them which phrases are offensive to the LGBT community and which are acceptable.
Once Jigsaw has a better data set of positive LGBT-related content, including stories and videos GLAAD creates, it will be able to “make a value judgment” about which content to surface in the future, rather than suppressing all content related to the LGBT community that might attract negative comments. Halloran hopes it will be easier to find videos and stories online that showcase positive LGBT role models, like YouTubers Tyler Oakley and Hannah Hart.
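The dynamic Halloran describes can be sketched in miniature. The toy classifier below is purely illustrative (it is not Jigsaw's actual system, and the training comments are invented): trained on skewed data where an identity term appears mostly in abusive comments, a simple word-frequency model scores a benign sentence as toxic; adding positive examples of the same term, as GLAAD's data set is meant to do, lowers that score.

```python
# Illustrative sketch only, with a made-up data set: a naive word-count
# toxicity scorer that learns an identity term is "bad" when the term
# co-occurs mostly with abusive training comments.
from collections import Counter

def train(comments):
    """Count how often each word appears in toxic vs. non-toxic comments."""
    toxic, clean = Counter(), Counter()
    for text, is_toxic in comments:
        (toxic if is_toxic else clean).update(text.lower().split())
    return toxic, clean

def toxicity(text, toxic, clean):
    """Average, over the input's words, the Laplace-smoothed fraction of
    each word's training occurrences that were in toxic comments."""
    words = text.lower().split()
    return sum((toxic[w] + 1) / (toxic[w] + clean[w] + 2) for w in words) / len(words)

# Skewed training data: the identity term shows up mainly in abuse.
skewed = [
    ("gay people are awful", True),
    ("gay agenda is terrible", True),
    ("you are awful", True),
    ("nice video thanks", False),
    ("great speech", False),
]
t, c = train(skewed)
biased = toxicity("proud to be gay", t, c)  # benign sentence scores as toxic

# Rebalance with positive content, analogous to GLAAD's data set.
balanced = skewed + [
    ("proud gay role models", False),
    ("gay rights matter", False),
    ("love this gay creator", False),
]
t2, c2 = train(balanced)
fixed = toxicity("proud to be gay", t2, c2)
assert fixed < biased  # the same benign sentence now scores less toxic
```

Real systems use far richer models and features, but the failure mode is the same: a model can only judge a phrase by the company it keeps in the training data, which is why the composition of that data matters.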
Of course, Twitter and Facebook both make money partly through advertisements on their sites, and they benefit when more people use their platforms. So it benefits those sites, as well as the users, to have a safe environment to browse and communicate with other users. (Neither company immediately responded to MarketWatch’s request for comment.)
This collaboration isn’t the only work that could make online conversations better for minority groups, Halloran said; social-media networks should also evaluate their own platforms to foster better-quality conversations, rather than “toxic” ones.
Networks including Twitter have received criticism from consumer groups and lawmakers for not doing enough to combat online trolls and false information. And Instagram has been linked to poor mental health for young people. “Before we can expect tech companies to be incentivized to do that, we have to have a conversation about what their financial models are and how they’re making money,” Halloran said.