- YouTube is implementing crowdsourced fact-checking where viewers can add notes to videos to provide context or additional information.
- Viewers will be able to rate notes as “helpful”, “somewhat helpful”, or “unhelpful”, influencing whether the notes remain visible based on feedback collected from diverse perspectives.
OUR TAKE
Because this feature relies on user ratings and feedback, YouTube needs clear standards and processes for handling false positives and abuse. That may mean making the ranking algorithm smarter at distinguishing valid notes from invalid ones.
–Sissy Li, BTW reporter
What happened?
YouTube’s latest effort to combat misinformation is the introduction of crowdsourced fact-checking. The platform will soon allow viewers to contribute notes to videos, offering context or additional information, similar to systems on other platforms. These notes will undergo evaluation by third-party assessors to determine their usefulness, and only those deemed sufficiently helpful will be displayed beneath the video. Viewers will be able to rate notes as “helpful”, “somewhat helpful”, or “unhelpful”, along with explanations for their choices. An algorithm will then decide whether to keep a note based on feedback from a diverse range of perspectives. YouTube plans to test the feature initially with a selected group of users meeting specific criteria, including account age and adherence to community guidelines. Notes will not be available on videos featuring minors, content designed for children, or private videos. The platform expects to learn from these tests and from user input to refine the system going forward.
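YouTube has not published how its note-ranking algorithm works, but the description above (ratings from raters with diverse perspectives deciding visibility) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the rating weights, the grouping of raters by perspective, and the display threshold are all hypothetical, loosely modeled on "bridging" aggregation, where a note is shown only if raters from different perspective groups each find it helpful on average.

```python
from collections import defaultdict

# Hypothetical weights for each rating label; the real scoring rules
# are not public, so these values are illustrative only.
RATING_WEIGHTS = {"helpful": 1.0, "somewhat helpful": 0.5, "unhelpful": 0.0}


def note_score(ratings):
    """Score a note from (group, label) rating pairs.

    `group` stands in for a rater's perspective cluster. The score is the
    minimum per-group average helpfulness, so every group must agree the
    note is helpful (a simplified 'bridging' aggregation).
    """
    by_group = defaultdict(list)
    for group, label in ratings:
        by_group[group].append(RATING_WEIGHTS[label])
    if len(by_group) < 2:
        return 0.0  # not enough perspective diversity to judge
    return min(sum(v) / len(v) for v in by_group.values())


def should_display(ratings, threshold=0.6):
    """Display the note only if all perspective groups rate it helpful enough."""
    return note_score(ratings) >= threshold
```

For example, a note rated helpful by both groups is shown, while a note that only one group likes is not: `should_display([("a", "helpful"), ("b", "unhelpful")])` returns `False` even though half the raters found it helpful.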
Also read: Can YouTube detect AI-generated content?
Also read: Apple co-founder Steve Wozniak wins fake video lawsuit v YouTube
Why it is important
Crowdsourced fact-checking through features like YouTube’s notes can improve the accuracy of information online by adding context and corrections to potentially misleading content. This can counteract the spread of misinformation and promote a more informed audience.
And by involving viewers in the process of fact-checking and adding context, platforms like YouTube can foster community accountability. Viewers become active participants in verifying information and contributing to a healthier online environment.
The system also promotes transparency: viewers can see how information was evaluated and verified. That transparency holds creators accountable for the content they publish and helps build trust between creators and audiences.
At the same time, platforms can use feedback from viewers and raters to continually improve their fact-checking processes and algorithms. In the long term, this iterative improvement could lead to more effective ways to combat disinformation.
Several social media and video-sharing platforms already offer similar features, though each implements them slightly differently:
1. Reddit’s post and comment voting: Reddit is a community-driven platform that lets users upvote and downvote posts and comments. Highly rated content is more likely to be seen, while low-rated content may be hidden or reach fewer people. This mechanism helps the community self-moderate and sort content.
2. Facebook’s commenting and reporting functions: Facebook lets users comment on posts and report misleading or inaccurate content. Facebook also applies community standards and algorithms to filter and handle reported content.
These features are not identical, but they all involve users in managing and evaluating content, which helps improve the accuracy of information and strengthens community interaction.