6. Discussed but Not Implemented

The following suggestions were seriously discussed and considered, but could not be implemented in the proposed judging system at this time:


Catch/drop ratio for Execution judging

The idea behind scoring catches in proportion to drops (a catch/drop ratio) was to encourage competitors to do more speed flow and shorter combos, which are currently not rewarded by the judging system (as they usually result in lower Execution scores). The judging committee picked up the idea from the FPA forum discussion thread of putting Execution scores in relation to the number of combos, with the number of combos defined through the number of throws (or catches/drops) during a routine. Put simply: teams with many catch attempts would be penalized less (in points) for a drop than teams with fewer catch attempts.
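To make the idea concrete, here is a minimal sketch of one way such a ratio could be applied, assuming a points-based Execution score. The function name, the drop_penalty parameter, and the formula are purely illustrative; the committee tested several variants, none of which is reproduced here.

# Illustrative sketch only: scale each drop deduction by the team's drop
# rate (drops / catch attempts). The function name, drop_penalty parameter,
# and formula are hypothetical, not one of the committee's tested variants.

def ratio_adjusted_execution(base_score, drops, catch_attempts, drop_penalty=1.0):
    """Teams with many catch attempts lose fewer points per drop."""
    if catch_attempts == 0:
        return base_score
    drop_rate = drops / catch_attempts
    return base_score - drop_penalty * drops * drop_rate

# 3 drops out of 40 attempts vs. 3 drops out of 15 attempts:
print(ratio_adjusted_execution(10.0, 3, 40))  # 9.775 -> milder penalty
print(ratio_adjusted_execution(10.0, 3, 15))  # 9.4   -> harsher penalty

Even a simple variant like this hints at the practical problem: the penalty now depends on counting every throw during a routine, which is exactly the kind of bookkeeping that proved error-prone in testing.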

What sounds like a viable idea at first is, in actuality, very hard to implement mathematically. Several approaches to catch/drop ratios were tested on videotaped freestyle routines, but each one either led to awkward scoring results or was judged too hard to apply, or too error-prone, in a real-life tournament situation. The conclusion after looking into the catch/drop ratio as an option was that the disadvantages clearly outweighed the advantages.

 

Dropping Execution as a judging category

Execution is often perceived to be the decisive factor for winning tournaments, which in turn is seen as leading to risk-averse play during competitions. To change this, the committee discussed dropping Execution as a judging category entirely, as a possible way of encouraging competitive players to attempt their more difficult moves without feeling they are risking too much in terms of Execution.

Advantages:

–      adds freedom to routine design, as the dominating influence of Execution (avoiding drops) is reduced => more risky, creative and varied routines are expected;

–      at major tournaments we could have 5 judges each for AI and Diff, with the highest and lowest score in each category discarded => subjectivity in judging AI and Diff is reduced (a minimal sketch of this trimmed-score tally follows at the end of this subsection)

Disadvantages:

–      Execution would carry too little weight, and teams with 6+ drops could start winning tournaments; this would be hard to understand for spectators and some competitors;

–      Execution would have to be folded into AI and Diff to some extent, but how could this be done objectively? => the subjectivity of judging would increase even more

After a long discussion, dropping Execution was considered too radical a change from the current judging system, and one that would probably not be supported by the majority of players. Instead, the committee decided that other measures can be implemented to reduce the influence of Execution.
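As referenced under the advantages above, here is a minimal sketch of the trimmed-score tally, assuming five judges per category. The judge count and scores are illustrative, not official values.

# Minimal sketch of the trimmed-score idea: with 5 judges per category,
# the single highest and lowest scores are discarded before summing.
# Judge count and scores here are illustrative, not official values.

def trimmed_total(scores):
    """Sum the scores after dropping the single highest and lowest."""
    if len(scores) < 3:
        raise ValueError("need at least 3 judges to trim high and low")
    return sum(sorted(scores)[1:-1])

ai_scores = [7.5, 8.0, 6.0, 9.5, 7.0]  # one judge well above, one well below the rest
print(trimmed_total(ai_scores))        # 22.5 -- the 9.5 and 6.0 outliers are dropped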

 

Ranking system

Statistical analyses show that the scoring weights of the three judging categories (AI, Difficulty, Execution) are not balanced. In the current judging system, Artistic Impression (AI) and Execution have more influence on the outcome of a competition than Difficulty. This is because Difficulty judges tend to give scores mainly between 3 and 7, while the variance of the other two categories is higher, which gives them more weight when the scores are tallied up.
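A toy illustration of this effect, with invented numbers: a category scored within a narrow band contributes less to the spread between teams than one scored across the whole scale, even when the categories are nominally weighted equally.

# Toy illustration with invented numbers: the narrower a category's score
# range, the less it separates the teams in the final tally, even when the
# categories are nominally weighted equally.

import statistics

difficulty = [4.0, 5.0, 5.5, 6.0]  # judges staying roughly within 3-7
execution  = [2.0, 5.0, 8.0, 9.5]  # judges using the whole scale

print(statistics.pstdev(difficulty))  # ~0.74 -> small spread, little influence
print(statistics.pstdev(execution))   # ~2.88 -> large spread, dominates the outcome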

One possibility for standardizing (and balancing out) the variance of the three categories is to sum the teams' ranks instead of their scores: each judge ranks the teams within his/her category, the ranks from all judges are added up, and the team with the lowest sum of ranks wins the pool.
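A minimal sketch of this rank-sum tally follows; the team names and scores are invented, and tie handling is omitted for simplicity.

# Minimal sketch of the rank-sum tally: each judge's scores are converted
# to ranks (1 = best) and the ranks are summed across judges. Team names
# and scores are invented; tie handling is omitted for simplicity.

def rank_sums(judge_sheets):
    totals = {team: 0 for team in judge_sheets[0]}
    for scores in judge_sheets:
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, team in enumerate(ordered, start=1):
            totals[team] += rank
    return totals

sheets = [
    {"Team A": 8.2, "Team B": 7.9, "Team C": 6.5},  # AI judge
    {"Team A": 5.0, "Team B": 6.5, "Team C": 5.5},  # Difficulty judge
    {"Team A": 9.0, "Team B": 9.2, "Team C": 7.0},  # Execution judge
]
print(rank_sums(sheets))  # {'Team A': 6, 'Team B': 4, 'Team C': 8} -> Team B wins the pool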

Another advantage of this approach is the opportunity for a final review: if, after tallying everything and seeing the whole pool, a judge whose scores put Team A ahead of Team B nevertheless believes Team B should be ranked higher, s/he could still change the ranking. Scores would thus be an important criterion for a judge's ranking, but not necessarily the only one (of course, a judge should have good arguments ready for not ranking teams according to their scores). The ranking system could therefore provide more precise and fairer results.

The disadvantage of the ranking system (and the reason it wasn't adopted) is that a very small difference in one category counts as much as a huge difference in another: e.g., Team A is ahead of Team B by 0.1 points in both Execution and AI, while Team B is ahead of Team A by 1.5 points in Difficulty. Under the ranking system, Team A would finish ahead of Team B, which cannot be considered fair given the points scored. The ranking system fails in this scenario because it does not differentiate the team performances enough. Instead, other measures will be implemented to better balance the weight of the judging categories (see the corresponding documents).
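For completeness, the failure case above can be reproduced as a self-contained variant of the rank-sum sketch, with scores invented to match the example:

# Reproducing the failure case above with invented scores: Team A leads by
# a tiny 0.1 in AI and Execution, Team B leads by a large 1.5 in Difficulty,
# yet the rank sum puts Team A first despite Team B's higher point total.

sheets = [
    {"Team A": 8.0, "Team B": 7.9},  # AI:         A ahead by 0.1
    {"Team A": 4.5, "Team B": 6.0},  # Difficulty: B ahead by 1.5
    {"Team A": 9.0, "Team B": 8.9},  # Execution:  A ahead by 0.1
]

rank_totals = {team: 0 for team in sheets[0]}
for scores in sheets:
    for rank, team in enumerate(sorted(scores, key=scores.get, reverse=True), 1):
        rank_totals[team] += rank

point_totals = {team: sum(s[team] for s in sheets) for team in sheets[0]}
print(rank_totals)   # {'Team A': 4, 'Team B': 5} -> Team A wins on rank sum
print(point_totals)  # {'Team A': 21.5, 'Team B': 22.8} -> Team B scored more points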