With the COVID-19 pandemic pushing much of the tech industry into remote work, Twitter's content moderators are likewise staying home.
Twitter has long been a go-to place for people to air their thoughts. Its instant, open format helped connect the world with little to no barriers. Flagging a tweet or image as abusive or harmful is an important task for the platform, and Twitter has been tapping machine learning to filter content.
Just recently, the company announced that it will be increasing its use of artificial intelligence to automate the moderation of potentially abusive content.
To be clear, Twitter says it will not rely solely on machines for the task. It plans to treat this period as a stepping stone, looking for more ways in which machines and humans can work together to promote healthy online content.
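In practice, a hybrid setup like this often amounts to a simple confidence-routing scheme: a model scores each tweet, obvious cases are handled automatically, and borderline cases are queued for human reviewers. The sketch below is purely illustrative; the `score_abuse` function, the thresholds, and the routing labels are assumptions for the sake of the example, not anything Twitter has published.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical thresholds -- not Twitter's actual values.
AUTO_ACTION_THRESHOLD = 0.90   # above this, act automatically
HUMAN_REVIEW_THRESHOLD = 0.50  # between this and the auto threshold, ask a human


@dataclass
class Tweet:
    tweet_id: int
    text: str


def score_abuse(tweet: Tweet) -> float:
    """Placeholder for a trained abuse classifier (assumed, not a real model).

    A production system would return the probability that the tweet violates
    policy; here a crude keyword check lets the sketch run end to end."""
    flagged_terms = {"spam", "scam", "abuse"}
    words = set(tweet.text.lower().split())
    return 0.95 if words & flagged_terms else 0.10


def route(tweets: List[Tweet]) -> None:
    """Route each tweet to automatic action, human review, or no action."""
    for tweet in tweets:
        score = score_abuse(tweet)
        if score >= AUTO_ACTION_THRESHOLD:
            print(f"[auto-flag]   {tweet.tweet_id}: {tweet.text}")
        elif score >= HUMAN_REVIEW_THRESHOLD:
            print(f"[human queue] {tweet.tweet_id}: {tweet.text}")
        else:
            print(f"[no action]   {tweet.tweet_id}: {tweet.text}")


if __name__ == "__main__":
    route([
        Tweet(1, "Win a free prize, total scam alert"),
        Tweet(2, "Lovely weather in the park today"),
    ])
```

The point of the middle band is the human-in-the-loop part: anything the model is unsure about goes to a person rather than being actioned automatically.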
Earlier this year, other platforms such as YouTube and Facebook announced their intention to rely on AI to moderate their services. Videos were flagged by AI for potential policy violations, and Facebook is doing the same with posts shared across its site.
The sudden turn of events forced companies to put AI in the loop. Soon enough, automated systems will likely take on this task without needing human review.