We’re constantly experimenting with the open-source algorithms that select which notes to show. So you can follow along, you can now easily see in “Note Details” which model computed the current status of a note. Read more about our open-source models:
@CommunityNotes This is the best app on this whole platform
What needs to be worked on is the fact that if you share a note with accurate sources, someone can just rate it as not helpful because they don't agree. People can be locked out of Community Notes for sharing accurate information, which is why I don't often write notes; I just rate them.
@CommunityNotes Oh cool, it’s in Python. And formatted with Black, it appears. Only thing I don’t like is that it uses print() instead of the native logging. Otherwise it’s sexy code.
@CommunityNotes I think with practice we all become better with anything we do. 💚
@CommunityNotes Informed, you say? Okay... let's play.
Genius. This could absolutely be a progressive step toward un-shadowbanning actual people who care about important information, allowing their posts to break the algorithmic barriers. That would mean the only one who can censor someone is the individual themselves. Important information and truth go hand in hand. Bias and conspiracy-theory rhetoric would be graded over time purely by the algorithmic process; for something to count as likely valid information, it would have to earn a high graded score. This would make engagement and the Twitter experience better for everyone on the platform.