To get you up to speed, a CAPTCHA is a method of discerning whether a comment is being submitted by an actual person behind a keyboard or by a bot (an automated program that does little but generate spam). A very common form of CAPTCHA presents a distorted image of a string of letters and numbers and requires the correct interpretation of those letters and numbers in order for the comment in question to be accepted.
The Civil Rights Defenders have put their own spin on this method of comment moderation:
Instead of visually decoding an image of distorted letters, the user has to take a stand regarding facts about human rights. Depending on whether the described situation is positively or negatively charged, the CAPTCHA generates three random words from a database. These words describe positive and negative emotions. The user selects the word that best matches how they feel about the situation, and writes the word in the CAPTCHA. Only one answer is correct, the answer showing compassion and empathy.
So this CAPTCHA system presents a potential commenter with a situation and expects them to give the right emotional response in order for their comment to be accepted.
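In mechanical terms, the check described above is simple: pair each situation with one "correct" empathetic word and a couple of distractors, and accept the comment only if the typed word matches. Here is a minimal sketch of that logic in Python; the situations, words, and function names are invented for illustration (the real database and matching rules belong to Civil Rights Defenders):

```python
import random

# Toy database: each situation maps to one empathetic "correct" word
# and two distractor words. All entries here are invented examples.
SITUATIONS = {
    "Activists were jailed for a peaceful protest.": {
        "correct": "outraged",
        "distractors": ["pleased", "indifferent"],
    },
    "A banned newspaper was allowed to publish again.": {
        "correct": "relieved",
        "distractors": ["annoyed", "horrified"],
    },
}

def make_challenge():
    """Pick a random situation and return it with its three shuffled word options."""
    situation, words = random.choice(list(SITUATIONS.items()))
    options = [words["correct"]] + words["distractors"]
    random.shuffle(options)
    return situation, options

def check_answer(situation, typed_word):
    """Accept the comment only if the typed word is the empathetic one."""
    return typed_word.strip().lower() == SITUATIONS[situation]["correct"]
```

Note that everything hinges on who decides which word counts as "correct" for each situation, which is exactly what the discussion questions below turn on.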
How effective do you think this would be in keeping trolls out of a given space?
Is it possible that people who are otherwise perfectly civil and reasonable would be effectively banned from a space simply because they disagree with someone on a given issue? (The Civil Rights Defenders themselves will maintain the database of situations and correct answers.)
Would this protect civil spaces from invasive people pushing their own agendas?
Could this lead to an even greater division between different schools of thought when unity is so crucial?
What do you think?
Troll image courtesy of Flickr/Goosemouse