A new rubric, no sample paper, an upgrade to M-level, memories of DB3, and no idea even of how many marks would be up for grabs. It was perhaps reasonable that there was some trepidation ahead of this morning's exam. What was unexpected, then, was that the total should have been slashed from 60 marks to 50 while individual questions received seemingly arbitrary mark inflation.
Take question 1(c), on robots exclusion. This year it was worth 4 marks; last year, the very same question (4(a), right down to the wording) was worth 2. Yet the question on parallel crawling (3(e) this year, 5(b)(ii) last year) was worth 6 marks both times.
Don't get me wrong: this has to be a good thing, and nobody seemed particularly aggrieved by the paper. The new material on language modelling was synthesised neatly into the exam (spread across all three questions); it reflected the tutorial questions on the same topic, and didn't probe too deeply into the material on which I, for one, was rather shaky. Furthermore, all of the questions seemed to be to the point, rather than piddly 2-mark invitations to discuss some vague topic.
All good stuff, I'm sure you'll agree. But if you're anything like me, you'll be thinking, "Where's the catch*?"
* Our best guess is that the catch will appear this Friday in Grid. With what lingering statistical knowledge I have, I can still reckon that the probability of that paper doing us any favours is nil.