Building Web Reputation Systems- P21

Số trang: 15      Loại file: pdf      Dung lượng: 599.80 KB      Lượt xem: 15      Lượt tải: 0    
Hoai.2512

Phí tải xuống: 1,000 VND Tải xuống file đầy đủ (15 trang) 0
Xem trước 2 trang đầu tiên của tài liệu này:

Thông tin tài liệu:

Building Web Reputation Systems- P21: Today's Web is the product of over a billion hands and minds. Around the clock and around the globe, people are pumping out contributions small and large: full-length features on Vimeo, video shorts on YouTube, comments on Blogger, discussions on Yahoo! Groups, and tagged-and-titled Del.icio.us bookmarks. User-generated content and robust crowd participation have become the hallmarks of Web 2.0.
Figure 10-8. Final model: Eliminating the cold-start problem by giving good users an upfront advantage as abuse reporters.

(Chapter 10: Case Study: Yahoo! Answers Community Content Moderation)

Process: Is Author Abusive?
The inputs and calculations for this process were the same as in the third iteration of the model; the process remained a repository for all confirmed and nonappealed user content violations. The only difference was that every time the system executed the process and updated AbusiveContent karma, it now sent an additional message to the Abuse Reporter Bootstrap process.

Process: Abuse Reporter Bootstrap
This process was the centerpiece of the final iteration of the model. The TrustBootstrap reputation represented the system's best guess at the reputation of users without a long history of transactions with the service. It was a weighted mixer process, taking positive input from CommunityInvestment karma and weighing that against two negative scores: the weaker score was the connection-based SuspectedAbuser karma, and the stronger score was the user-history-based AbusiveContent karma. Even though a high value for AbusiveContent karma implied a high level of certainty that a user would violate the rules, that karma made up only a share of the bootstrap, not all of it. The reason was that the context for that score was content quality, while the context of the bootstrap was reporter reliability; someone who is great at evaluating content might suck at creating it. Each time the bootstrap process was updated, the result was passed along to the final process in the model: Update Abuse Reporter Karma.

Process: Valued Contributor?
The input and calculations for this process were the same as in the second iteration of the model; the process updated ConfirmedReporter karma to reflect the accuracy of the user's abuse reports.
The only difference was that the system now sent a message for each reporter to the Update Abuse Reporter Karma process, where the claim value was incorporated into the bootstrap reputation.

Process: Update Abuse Reporter Karma
This process calculated AbuseReporter karma, which was used to weight the value of a user's abuse reports. To determine the value, it combined the inferred TrustBootstrap karma with a verified abuse-report accuracy rate, as represented by ConfirmedReporter. As a user reported more items, the share of TrustBootstrap in the calculation decreased; eventually, AbuseReporter karma became equal to ConfirmedReporter karma. Once the calculations were complete, the reputation statement was updated and the model terminated.

Analysis. With the final iteration, the designers had incorporated all the desired features, giving historically trusted users the power to hide spam and troll-generated content almost instantly while preventing abusive users from hiding content posted by legitimate users. This model was projected to reduce the load on customer care by at least 90%, and perhaps by as much as 99%. There was little doubt that the worst content would be removed from the site significantly faster than the typical 12+ hour response time; how much faster was difficult to estimate.

In a system with over a dozen processes, more than 20 unproven formulas, and about 50 best-guess constant values, a lot could go wrong. But iteration provided a roadmap for implementation and testing. The team started with one model, developed test data and testing suites for it, made sure it worked as planned, and then built outward from there, one iteration at a time.

Displaying Reputation

The Yahoo!
Answers example provides clear answers to many of the questions raised in Chapter 7, where we discussed the visible display of reputation.

Who Will See the Reputation?

All interested parties (content authors, abuse reporters, and other users) certainly could see the effects of the reputations generated by the system at work: content was hidden or reappeared, and appeals and their results generated email notifications. But the designers made no attempt to roll up the reputations and display them back to the community. The reputations definitely were not public reputations.

In fact, even showing the reputations only to the interested parties as personal reputations would likely have given those intending harm more information about how to assault the system. These reputations were best reserved for use as corporate reputations only.

How Will the Reputation Be Used to Modify Your Site's Output?

The Yahoo! Answers system used the reputation information that it gathered for one purpose only: to make a decision about whether to hide or show content. ...
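The calculations walked through above (the weighted-mixer bootstrap, the share-shifting AbuseReporter karma, and the hide/show decision) can be sketched as follows. This is a minimal illustration under stated assumptions, not the production model: every weight, threshold, and function name here is invented for the sketch, and the text itself notes the real system relied on roughly 50 tuned constants.

```python
# Illustrative sketch of the final-model karma calculations.
# All weights, thresholds, and names are assumptions, not Yahoo!'s
# production values.

def trust_bootstrap(community_investment, suspected_abuser, abusive_content,
                    w_pos=1.0, w_weak=0.5, w_strong=1.5):
    """Weighted mixer: one positive karma balanced against two negative
    karmas, with the history-based AbusiveContent score weighted more
    heavily than the connection-based SuspectedAbuser score."""
    score = (w_pos * community_investment
             - w_weak * suspected_abuser
             - w_strong * abusive_content)
    return max(0.0, min(1.0, score))  # clamp into [0, 1]

def abuse_reporter_karma(bootstrap, confirmed_reporter, reports_made,
                         fade_after=20):
    """Blend the inferred TrustBootstrap value with the verified
    ConfirmedReporter accuracy rate; the bootstrap's share shrinks as
    the user files more reports, until the karma equals
    ConfirmedReporter alone."""
    share = max(0.0, 1.0 - reports_made / fade_after)
    return share * bootstrap + (1.0 - share) * confirmed_reporter

def should_hide(reporter_karmas, hide_threshold=1.0):
    """Hide an item once the summed karma of everyone who flagged it
    crosses a threshold: one trusted reporter can hide content almost
    instantly, while unknown reporters must accumulate agreement."""
    return sum(reporter_karmas) >= hide_threshold
```

Under these assumed constants, a brand-new reporter (reports_made of 0) is judged entirely by the bootstrap, while a veteran (reports_made of 20 or more) is judged entirely by their confirmed accuracy, matching the convergence the text describes.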
