If you are coming to this article first, you may want to start at Part 1 of this series to be sure you have the full context.
A review of the data collected on those who rated Pandemic Legacy: Season 1 as a “1” provides some insight into rating and tracking practices. This is a short article reviewing some of the methods I saw in the data and some of the difficulties experienced in analyzing it.
Round 1: The Law of Averages
Game ratings on BGG average out around 7. I have tried to identify communities with specific sentiments and dig out some statistics about them. This is not an evaluation of the “average” Pandemic Legacy player experience – we already have that in the general game stats on BGG. I have tried to locate criteria that identify a community of sentiment, but have not applied any formal statistical tests to identify them. I had to go with what “seemed” right and tried to take a conservative approach when making distinctions – if it looked like the line would be somewhere between 80 and 100, I’d use 100 to make sure I was using a more exclusive/distinctive measurement. So, as precise as my numbers are, the data itself has inherent imprecision. The “law of averages” should be in my favor, but these are small populations of data and I have tried to find the edge cases, not the average case.
Round 2: Some Observed Methods
The following are some observed methods of recording and tracking user data on BGG that I saw in the data used for this study.
As we saw, about 18% of the users in this study have fewer than 3 Owned games recorded. It is impossible to guess what the average collection is for all BGG users, but this seems like a relatively small share of users who have not recorded their collection to some degree at some time. For users who recorded owning between 3 and 1000 games (eliminating those who recorded nothing and those who have extraordinary collections), we get the following average collections:
Average Owned (between 3 and 1000)
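The filtering step described above can be sketched in a few lines. This is a minimal illustration, not the article's actual pipeline: the sample counts are made up, and the 3–1000 bounds are the only values taken from the text.

```python
# Hypothetical owned-game counts for a handful of users.
owned_counts = [0, 2, 15, 120, 450, 999, 2500, 38]

# Keep only users who recorded between 3 and 1000 Owned games,
# dropping empty collections and extraordinary ones.
filtered = [n for n in owned_counts if 3 <= n <= 1000]

average_owned = sum(filtered) / len(filtered)
print(round(average_owned, 1))  # → 324.4 for this made-up sample
```

Note how sensitive the average is to the cutoffs chosen: a single 2500-game collection would otherwise dominate a small sample like this one.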
Since the data observed in this study is a specific subset of users based on their rating habits, it could be a bit dicey making generalities based on the data. However, as we dug into the data, we saw some fairly typical practices. It is impossible to know what rationale users incorporate into their rating methods, but a couple of patterns were obvious. (I’ve given them names.)
- Hater Raters: Rated only 1 or 2 games a “1”.
- Robo Raters: Rate all games a “1” or a “10”. (Since “0” is not available).
- Mathematicians: Rated games have a regular Gaussian distribution centered on their average rating.
- Protectors: Generally rate like Mathematicians, but use the “1” rating to attack specific games.
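As a rough illustration, the patterns above could be sorted with a simple heuristic. The function name, the decision order, and the thresholds here are my own illustrative guesses, not criteria from the study:

```python
def classify_rater(ratings):
    """Assign one of the article's pattern labels to a list of a
    user's ratings. Purely illustrative thresholds."""
    ones = sum(1 for r in ratings if r == 1)
    if len(ratings) <= 2 and ones == len(ratings):
        return "Hater Rater"    # rated only 1 or 2 games, all a "1"
    if all(r in (1, 10) for r in ratings):
        return "Robo Rater"     # binary 1/10 scale (no "0" exists)
    if ones > 0:
        return "Protector"      # normal spread plus targeted "1"s
    return "Mathematician"      # roughly bell-shaped around their mean

print(classify_rater([1]))              # → Hater Rater
print(classify_rater([1, 10, 10, 1]))   # → Robo Rater
print(classify_rater([7, 8, 6, 1]))     # → Protector
print(classify_rater([5, 6, 7, 8, 6]))  # → Mathematician
```

A real classifier would need to look at the whole shape of the distribution (e.g. fit against a Gaussian for the Mathematicians), but this captures the distinctions the labels are drawing.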
Recording plays is certainly the most effort for a user and, not surprisingly, the least used feature. Basically half of the users recorded at least one play. I discussed my play-tracking practices in Part 1 and acknowledged there that even my own method, which is fairly “accurate,” has oddities. Although not much can be deduced here, a couple of practices seen in the data were:
- Entering multiple (presumably estimated) plays in a single entry. (as many as 3000)
- Entering individual plays.
- Entering more than the date is relatively rare.
Round 3: Whose Data Is This?
The data on BGG is a gold mine, but whose gold is it? Rightfully, each user records and tracks data as suits their needs/wants, which may not be consistent with other users or the community at large. So ultimately the data in aggregate is a reflection of all the users’ needs/wants, not planned or architected toward a specific outcome. In data sets like this, it is best to have a large volume of data, which wasn't available in this case (nor did I have the tools available to handle large volumes of data).
Do you track your collection or plays on BGG? Do you rate the games that you play? What methods do you use? Do you consider only your own needs/wants from the data or do you consider what you think the community expects of the data?
Continue with this series or catch up on previous parts:
- Part 1: Know Thyself
- Part 2: Hate Rating – Don’t Mess with #1
- Part 3: Hate Rating – That’s Just Stupid
- Part 4: Disposable Games
- Part 5: Rating and Tracking Data and Methods
- Part 6: Conclusions and the Future
If you find this article interesting, you may want to check out all the articles in the Analysis Paralysis category where we analyze and discuss issues in the tabletop industry and community.