I think the analysis of what the cap meant for the forecast may have been the result of confirmation bias. The assumption was that the main failure mode would be too many storms forming, not too few, so people took the stronger cap to mean that the main failure mode had been eliminated rather than seeing it as a potential failure mode in its own right. It didn’t help that none of the models really forecast a cap of that strength, and that it wasn’t obvious in the early observations, but I think there was a preconceived idea of what a stronger-than-expected cap would mean.
I feel like there’s probably also a debate to be had about over-reliance on CAMs, and about the tendency of social media to create feedback loops and promote groupthink, but that struck me as the most obvious issue with the nowcasting yesterday evening.
I don’t think they were wrong to go with the high risk given the model output and observed conditions yesterday, but I do think they may have communicated more confidence in the forecast than was actually warranted (particularly in the extreme probabilities in the two PDS watches). You’re never going to get it 100% right, and you’ll inevitably be criticized whether your forecast underperforms or overperforms, but being transparent about uncertainty and communicating the level of confidence accurately takes some of the sting out of those criticisms. Hopefully this will be a learning experience both scientifically (it’s still mind-blowing that a day with those parameters ended in atmospheric anti-climax) and in terms of communication and public relations.