Re: Winning versus scoring

This argument doesn't seem to make much sense.
The team that wins the tournament (almost always)
gets an automatic bid; the logic of course being that
the winner of a tournament should be rewarded for it.
Should the strategy employed in an attempt to win a
tournament--taking all the history and lit, being a
generalist, or simply ringing in early on every other
tossup--fail, it is up to the tournament organizers'
discretion to decide how to reward (or penalize) the
team in question.

Beyond that, how teams rank should be a function of
overall performance. However, the question then becomes
how to "rank" overall performance. Were there enough
data points--that is, tournaments--then NAQT would
simply be able to take the average win-loss record of
each team, rank them, and crank out a list of
invitations.
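For concreteness, a minimal sketch of that hypothetical
procedure in Python (the teams and records below are
invented purely for illustration):

    # Average each team's winning percentage across its
    # tournaments, then rank. All data here is made up.
    records = {
        "Team A": [(10, 2), (8, 4)],   # (wins, losses) per tournament
        "Team B": [(11, 1), (6, 6)],
        "Team C": [(9, 3), (10, 2)],
    }

    def avg_win_pct(results):
        """Mean winning percentage over (wins, losses) pairs."""
        return sum(w / (w + l) for w, l in results) / len(results)

    for team in sorted(records, key=lambda t: avg_win_pct(records[t]),
                       reverse=True):
        print(team, round(avg_win_pct(records[team]), 3))
    # Team C 0.792, Team A 0.75, Team B 0.708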

If there were only one tournament to rank, then
everything would be fine. But how, then, does one compare
tournaments? As Mr. Hilleman has mentioned, there are multiple
tournaments whose results have to be compiled together.
Claiming that a win-loss record from one tournament can
be directly compared to a record from another tournament
is simply not sensible.

Likewise, even within a single tournament, the final
standings are not necessarily indicative of the actual
quality of the teams. Here is my scenario:

Tournament A has 15 teams, of which team X is clearly
better than everyone else, while teams Y and Z are of
roughly equal ability and superior to the remaining 12
teams. During the tournament, team X goes 14-0
(average = 400 points/20 TU), team Y goes 12-2 (average =
350 points/20 TU), and team Z goes 13-1 (average =
300 points/20 TU). [All three teams beat everyone
else in the tournament; X beats both Y and Z, and Z
beats Y.]

The question then becomes, does Z
deserve to be invited to a tournament before team Y?
Therein lies a dilemma:

1. If you are in the
Hayeslip-McKenzie camp, then you would believe that since Z had a
better win-loss record than Y, Z should get the
invitation first.

2. If you are in the NAQT camp, you would (probably)
prefer team Y, which over the course of the tournament
outscored team Z.

Personally, if I see a 50-point-per-game scoring
discrepancy between two "neighbors" in the win-loss
rankings who are separated by only one game, I'd tend to
go with the "statistical fluke" theory and give
preference to the team with the worse record. If the
scoring difference were smaller [less than 20 points per
game or so], I'd tend to think that the two teams are
roughly comparable, and give preference to the team with
the better record.
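In code, that rule of thumb might look something like the
following sketch (the function, tuple layout, and exact
thresholds are my own, not anything official):

    def prefer(team_a, team_b):
        """Pick between two "neighbors" in the standings.

        Each team is a (name, wins, points_per_game) tuple,
        and the two are assumed to be within one game of each
        other. The 50- and 20-point thresholds are the rough
        figures given above.
        """
        gap = abs(team_a[2] - team_b[2])
        if gap >= 50:
            # Big scoring gap: treat the record difference as
            # a statistical fluke; take the stronger scorer.
            return max(team_a, team_b, key=lambda t: t[2])
        # Comparable scoring (or the 20-50 gray area, left to
        # organizer judgment): fall back on the better record.
        return max(team_a, team_b, key=lambda t: t[1])

    # Teams Y and Z from the scenario above:
    print(prefer(("Y", 12, 350), ("Z", 13, 300)))  # ('Y', 12, 350)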

So
what's the point of all of this?

I think that
perhaps the solution lies in between the extremes of (1)
and (2). For example, NAQT could consider *adding* a
term representing win-loss record to their formula.
However, even this poses difficulties: how do you account
for differences in tournament size? Surely it should be
worth more to do well in a larger tournament than in a
smaller one (right?).

So, something to consider might be giving all
tournaments the same base value, then adding extra points
for each team in the field, and then multiplying the
total by the team's winning percentage. That way, if two
teams are otherwise very close, the team with the better
win-loss record should still prevail with the higher
ranking; if not, and the weaker record was a "one-time"
occurrence, then the team with the superior overall
statistics will not be overly punished.
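As a rough sketch of how that might be computed (the base
value and per-team bonus below are placeholder constants,
not anything NAQT has published):

    def tournament_points(field_size, wins, losses,
                          base=100, per_team=10):
        """Value of one team's finish at one tournament.

        Every tournament carries the same base value, plus
        extra points for each team in the field; the total is
        then scaled by the team's winning percentage. 'base'
        and 'per_team' are placeholders for illustration.
        """
        return (base + per_team * field_size) * (wins / (wins + losses))

    # The 15-team scenario from above:
    for name, w, l in [("X", 14, 0), ("Y", 12, 2), ("Z", 13, 1)]:
        print(name, round(tournament_points(15, w, l), 1))
    # X 250.0, Y 214.3, Z 232.1 -- on this term alone Z still
    # edges Y, which is why it would be *added to* a
    # scoring-based term rather than replace it outright.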

I don't think it would be too hard to implement such a
change; moreover, it would incorporate the differences
between tournaments, making it more "valuable" to do
above average in a big tournament than to do well in a
small field.

--AEI
