Ok so as of 21:15 UTC, 21st November, there have been
Of these,
I mean, I guess those numbers are bad enough on their own, but believe it or not, they're actually an improvement on this afternoon's picture, where the divide was closer to 70-20.
Now obviously, the reception looks bad, doesn't it? Yes? Yes?
Yes.
But what's worth noting is that even dealing strictly with these 2 extremes, & handing the advantage over completely to the positive side (i.e. the 10/10 ratings), you end up with something like this:
But the average rating is 6.6/10. I would've taken the distribution of the ratings directly, but I don't see any means to access it on IMDb (maybe I'm just dumb).
Which leaves us to infer, indirectly from the data above, that the people rating the episode positively aren't reviewing as much as the people rating it negatively.
From over 7,000 ratings, let's try and estimate the general distribution of responses & see how it compares to that of the subset which reviewed.
Again, dealing strictly with extremes, we can reduce the problem to two variables, for which we have two linear equations.
Let x be the percentage of 10/10 ratings, & y that of 1/10 ratings, then
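(The equations themselves seem to have gone missing with the original image, but reconstructing them from the setup above, assuming every rating is either a 10/10 or a 1/10 and that the average works out to 6.6, they'd be something like:)

```latex
x + y = 100, \qquad \frac{10x + 1 \cdot y}{100} = 6.6
```

Substituting y = 100 − x into the second equation gives 9x = 560.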
I would've gone for a linear programming solution, but there simply aren't enough constraints without removing the other 8 variables from the equation.
62% said 10/10.
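As a sanity check, the little 2×2 system can be solved directly. A sketch, assuming (as above) that only 10/10 and 1/10 ratings exist and the average is 6.6:

```python
import numpy as np

# Two-extremes model: x = % of 10/10 ratings, y = % of 1/10 ratings.
# Constraint 1: the percentages sum to 100.
# Constraint 2: the weighted average equals 6.6,
#   i.e. (10*x + 1*y) / 100 = 6.6  ->  10x + y = 660.
A = np.array([[1.0, 1.0],
              [10.0, 1.0]])
b = np.array([100.0, 660.0])
x, y = np.linalg.solve(A, b)
print(f"{x:.0f}% rated 10/10, {y:.0f}% rated 1/10")  # 62% and 38%
```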
An almost completely inverted image of the reviewer response.
It's
Interesting.
There's really nothing this proves. True to the spirit of statistics, this was mostly pointless. All that can be substantially concluded is that at the very least, the people who were disappointed by the finale felt far more strongly about it than the people who weren't. The people who cared more were the ones left heartbroken. Whatever that's worth.