Fannin's Place

MMC6612, Fall 2010

More than Crowdsourcing

Among the five cases, I’m most interested in the “Brian Lehrer Show.” How does it make an impact on listeners, and what do the project’s results actually show us?

First of all, there is the problem of sampling. The article “Demystifying Crowdsourcing: An Introduction to Non-Probability Sampling” argues that although a crowdsourced sample is not representative of the whole population, it is a cost-efficient way to collect data, reducing the time and effort needed to gather the information. In the case of counting SUVs, crowdsourcing certainly saved a lot of effort in building the map, but it remains doubtful whether the result is representative enough. Sampling, in other words, is the key drawback of crowdsourcing.
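
To make that sampling worry concrete, here is a minimal sketch in Python (my own toy example with made-up numbers, not data from the show or from the readings) of how a self-selected, opt-in sample can drift away from the true figure even when plenty of people respond:

    # Toy illustration only: none of these numbers come from the Brian Lehrer
    # project; block counts, SUV shares, and opt-in rates are all invented.
    import random

    random.seed(0)

    # A hypothetical city of 1,000 blocks. Suburban-style blocks have more SUVs
    # but (in this made-up scenario) are less likely to opt in to the count.
    blocks = []
    for _ in range(1000):
        suburban = random.random() < 0.5
        suv_share = 0.45 if suburban else 0.15    # assumed true share per block
        opt_in_prob = 0.05 if suburban else 0.30  # self-selection favors urban blocks
        blocks.append((suv_share, opt_in_prob))

    true_mean = sum(share for share, _ in blocks) / len(blocks)

    # Probability sample: every block has an equal chance of being counted.
    random_sample = random.sample(blocks, 100)
    random_estimate = sum(share for share, _ in random_sample) / len(random_sample)

    # Non-probability (crowdsourced) sample: only blocks that opt in get counted.
    opted_in = [share for share, prob in blocks if random.random() < prob]
    crowd_estimate = sum(opted_in) / len(opted_in)

    print(f"true SUV share:          {true_mean:.2f}")
    print(f"random-sample estimate:  {random_estimate:.2f}")
    print(f"opt-in (crowd) estimate: {crowd_estimate:.2f}")

In runs like this, the opt-in estimate tends to sit well below the true share, not because too few people responded but because the people most likely to respond are concentrated where SUVs are scarcer, which is exactly the representativeness worry above.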

Second, is credibility an issue for crowdsourcing, or can crowdsourcing earn credibility? Looking through the comments posted by listeners, I can’t help but wonder: are the numbers being reported true? It is easy for users to mess around, because no proof is required to justify a report. MENG conducted a survey in 2007 that “shows companies effectively using crowdsourcing for real-world innovation.” According to the results, 80 percent of executives considered it “probable that opportunities exist to source this expertise from business and knowledge networks,” which reflects strong interest among these executives in using crowdsourcing to enhance their research and development. They also had high expectations of the data crowdsourcing produced, with “84 percent rating this information as valuable or highly valuable.”

Third, Muthukumaraswamy mentions in the article that “audience attraction is a happy consequence” of crowdsourcing, so how does that work? I found an article reporting that the Netflix Prize attracted more than 50,000 teams to the contest, which was all about beating Netflix’s existing movie-prediction system, Cinematch. I believe one reason so many people participated is its Grand Prize of $1,000,000 (USD) in cash. The Brian Lehrer Show, however, offered no such rich prize, and it still drew a good number of participants. That shows the power of crowdsourcing: even without a prize, people still participate!

Muthukumaraswamy puts the Brian Lehrer Show into the category of Wisdom of Crowds in General-interest Reporting by Recruiting a General Audience. I think that placement is suitable and appropriate, and the show is also a good example of crowdsourcing. Though it is a very “simple” crowdsourcing case, it still embodies the general concept: an aggregate of people in a virtual world contributing something together in order to answer a question, whether the answer turns out to be right or wrong.

Written by fanninchen

November 13, 2010 at 3:10 am

Posted in Uncategorized

9 Responses

  1. I think your analysis is just about the same as mine. The first two issues you pointed out are already compelling enough to argue that the crowdsourcing wasn’t so effective. And the third point you made is also one of my concerns about its effectiveness. There were actually more than 400 responses to the crowdsourcing request within just a few days, but just as you said, since seemingly no incentives were available, the results they produced could be very inconsistent.

    Shine Lyui

    November 15, 2010 at 10:23 pm

  2. I think you pointed out several major problems of crowdsourcing. First, the sample is not representative enough; Brian Lehrer also mentioned this when the show announced the results of the SUV count. Second, there is no way of checking the reliability of each entry. We can only hope that the sample is large enough that we can neglect the unreliable results. Third, since in most cases you are asking people to help voluntarily, you really have to find the right motivation.

    tinamomo

    November 18, 2010 at 7:59 pm

  3. You mentioned that it is appropriate for Muthukumaraswamy to put the Brian Lehrer Show under the category of “Wisdom of Crowds in General-interest Reporting by Recruiting a General Audience.” I would say I do not totally agree with this perspective. I think Muthukumaraswamy should redefine the meaning of “general audience,” because it is hard for me to know what “general” means. For example, although I understand the U.S. has a high rate of internet usage, I believe not everyone in the U.S. has access to the internet (poorer people may have less chance to get online). Therefore, I think she should explain the term “general audience” further.

    Carol

    November 19, 2010 at 12:50 am

  4. I think people’s participation depends on the time they have to spend, not just the money they might gain. If participating in something I am interested in does not cost me a lot of time (say, I can just contribute my opinion in my leisure time), it becomes something fun to join. However, that leads to the problem you brought up: the credibility of crowdsourcing becomes questionable. Without verification or a gatekeeper, can we really trust information contributed by random people?

    chentingchen

    November 19, 2010 at 1:23 am

  5. I chose this case as well!

    Is the problem with sampling related to the fact that you are not taking a random sample? I wonder what a proper statistician would think of Brian Lehrer’s method?

    I think your point about participation is a good one, and one the SUV case highlighted well. Listeners of that show are typically left-leaning and already concerned about the environment. Since their motivation to bring attention to this issue was high and the investment of time was low, a lot of them participated.

    francescalyn

    November 19, 2010 at 9:41 am

  6. I hadn’t thought about how representative crowdsourcing is until I read your post. That’s a good point. It’s probably not.
    In general, I felt that Muthukumaraswamy idealized crowdsourcing as a concept, so I’m glad you brought up some of the negatives. Some people are critical of crowdsourcing because of its unreliability, as you mentioned. I wonder if people still found a reward in taking part in the SUV count because they felt they were part of something, a collective, that was bringing together information? Especially if they are loyal to the Brian Lehrer show. Sometimes that can be a reward too.

    paulacunniffe

    November 19, 2010 at 1:08 pm

  7. The question of contributor motivation is a great one when talking about crowdsourcing. I think in a lot of cases where there is no prize, people are interested in recognition, but there is also a certain level of anonymity in some crowdsourcing projects. So what then? I personally believe there is value in being part of something bigger than yourself. Maybe some people also just do it to help deepen humanity’s well of knowledge.

    makeyourself270

    November 19, 2010 at 3:46 pm

  8. The crowdsourcing ideas presented in the weekly reading don’t consider the aspects you raised in this post. One of the faults of crowdsourcing is the sampling error, just as you said. Many may think that the larger the sample, the higher its credibility, but that may only hold in a case as basic as the one you wrote about.

    morganyang

    November 19, 2010 at 3:53 pm

