The voting period is over; however, I am not entirely sure of the results. As of writing this post, there is no “Overall” category on the game jam’s results page. There are only results for each individual category.
There were 33 ratings this time around, with an average of 4.7 ratings per submission. More importantly, there was a median of 4 votes. That seems like a decent turnout, but I am still wondering if there will be any external judges weighing in on the final results.
If voting is all done then these are the final results:
(Feel free to check my data entry and math.)
The ratings for Robosses, Robo, and Scratch were all adjusted because their number of ratings was less than the median. I understand why itch includes such a system, but I am not sure it is really necessary when the total number of votes is already so low. In case you are wondering how the ratings are adjusted, here is the formula:
Note that this formula only applies to entries whose total number of ratings is less than the median. If an entry’s total number of ratings is greater than or equal to the median, then its raw rating is its final rating.
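To make the adjustment concrete, here is a small Python sketch. It assumes the proportional scaling that itch describes (raw rating multiplied by the entry’s rating count divided by the median); the function name and numbers are my own illustration, not itch’s actual code.

```python
def adjusted_rating(raw_rating, num_ratings, median_ratings):
    """Scale down the raw rating when an entry has fewer
    ratings than the jam-wide median; otherwise keep it as-is."""
    if num_ratings < median_ratings:
        return raw_rating * (num_ratings / median_ratings)
    return raw_rating

# A hypothetical entry rated 4.5 by only 2 people, with a median of 4:
print(adjusted_rating(4.5, 2, 4))  # 2.25 -- scaled down
print(adjusted_rating(4.5, 4, 4))  # 4.5  -- at the median, unchanged
```

As the second call shows, once an entry reaches the median number of ratings the adjustment stops mattering entirely, which is why the raw and final charts can disagree.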
I only bring this up because I initially used the raw ratings to compile the chart, and the final results came out different.
Ideally, every entry would be rated by the same number of people so this scenario never comes up. Considering that there are prizes at stake, we really have to nail the voting portion of the game jam to avoid any controversy.
Besides the idea of having a set amount of external judges that play and rate every game, I am not really sure how to tackle this.
All in all, I think we made great progress on the quality of this game jam by incorporating the new Warehouse feature. That said, it looks like we still have to streamline the final voting process and how we handle the end results.