Homebrew competitions invite organizers and judges to strike a delicate balance: honoring, on the one hand, the inherently fun and casual nature of the homebrewing community, while being serious and diligent enough to give entrants their money's worth in feedback and attention.
Those who have been through a quality judging course – as part of BJCP training or otherwise – are familiar with some of the best practices: fill out the entire sheet; comment on all aroma/flavor/etc. characteristics as prompted by the sheet; write legibly; tell the brewer where deducted points went and how to reclaim them.
With this in mind, take a look at the two scoresheets presented here (click the images for a larger view), corresponding to a couple of entries of mine from a recent competition. Both were filled out by the same judge (same flight), an "experienced" judge who, based on information included on the scoresheet, appears to have recently taken the BJCP exam and to be awaiting his score and rank.
Neither scoresheet is exactly a case study in how to evaluate beer. On the first scoresheet, there is plenty of unused white space, the handwriting is poor, and within each scoring section there are characteristics the judge does not comment on. Still, at the end the judge does provide an evaluative statement and offers a recommendation for improvement.
Consider now the second scoresheet. The handwriting is practically illegible, and as your eye moves down the page it encounters less and less writing, to the point where the final scoring section (the one where, incidentally, the most space is given for comments) is left entirely blank. The brewer is left to guess how the judge arrived at the assigned score, for there is little besides careless pencil scratches to offer any clues.
It's natural to wonder whether the judge had simply "evaluated" too much beer by this point and was worse off for it. Indeed, in fairness, Exhibit A was judged fairly early in the flight and Exhibit B fairly late. Nevertheless, the second judge (entries are usually evaluated by a pair of judges) managed to write perfectly legible and thorough comments on both sheets, and at any rate a brewer should not have to fret over whether his beers will be evaluated by adequately sober judges.
In terms of providing useful feedback on how to improve the beer in question, much less a careful analysis of the entry, these scoresheets (the second especially) are unfortunate failures. It is just as well, for my sake, that I trust my own evaluative abilities enough that I generally do not enter competitions looking for feedback on how to improve my beers. (On this particular go-around I had been experimenting with blending beers and entering off-style; the judges tended not to be terribly impressed and my scores reflected that, as you can see; I had half expected as much.)
As a judge, I know that fatigue can set in near the end of a flight or after a long day of evaluating beers. Nevertheless, I believe that each entry is entitled to the same thorough critique and feedback as every other. To see such woefully inadequate scoresheets is discouraging, all the more so coming from a person just now entering the ranks of the BJCP. Know that I do not write these words out of sour grapes: I am not so much troubled by the scores, or personally distressed by the sparse comments, as I am dismayed by what appear to be bad habits in the making and by the prospect that the next victim will be a brewer who truly relies on judging feedback to improve his beer.
Competition organizers and the BJCP would do well to take heed: I don't think it's too much to say that the very credibility of homebrew competitions, of the BJCP, and of my fellow BJCP judges hinges in no small measure on the quality of judging entrants receive in exchange for their time, effort, and entry fees. We can, and should, do better.