
Svr Wx Event - April 15-17, 2024

I understand the strong verbiage around their initial outlook 6-7 days ago because of what the models were showing at the time. But man, the Monday portion of this threat really fell apart fast, questions around Tuesday's ceiling notwithstanding.
Seems like the usual with most big threats in the 6-7 day range here lately.
 
It seems even ensemble consensus at the Day 5-7 range is meaningless. This is why I always kind of roll my eyes when I hear people talk about AI "solving" tornado forecasting, at least in the medium range. I just don't believe it's possible. Forecasting a high-end parameter space (shear & instability) seems to be relatively easy. However, the critical things that make or break a tornado outbreak - especially a daylight one with visible tornadoes - namely, whether storms will exist in that parameter space and what mode they will take - are also the hardest to forecast.
 
Not to mention, those particular types of machine learning models rely on global model data - like with so much ML stuff, they rely on other stuff to make their stuff.

Note: not saying it has no utility, but like every tool, it has its uses and limitations.
 

That's one of the things that needs to be kept in mind regarding AI, especially of the machine-learning variety. These models are only as good as the data they ingest will allow them to be. "Garbage in, garbage out" is an idiom in the computing world for a reason--it's long been known that training an AI on biased or inaccurate data will lead it to give out biased or inaccurate results. In addition, training an AI on AI-generated output will cause it to become "dumber" (roughly the same "generational loss" phenomenon encountered with VHS tapes--recording one tape's data onto another caused the audiovisual quality to deteriorate noticeably due to analog loss, and after enough iterations the tapes were literally unwatchable).
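A toy numeric sketch of that generational-loss idea (purely illustrative; the signal and noise level are made up and have nothing to do with any real model): each copy is made from the previous copy, and the error against the original keeps compounding.

```python
import random

signal = [float(i % 10) for i in range(100)]  # stand-in for the "original" data

def lossy_copy(data, noise=0.2):
    """Each generation copies the previous copy and adds a bit of noise."""
    return [x + random.gauss(0, noise) for x in data]

copy = signal
for generation in range(1, 11):
    copy = lossy_copy(copy)
    rmse = (sum((a - b) ** 2 for a, b in zip(copy, signal)) / len(signal)) ** 0.5
    print(f"generation {generation}: error vs. original ~ {rmse:.2f}")  # grows every pass
```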

Thus, I would personally recommend keeping that kind of stuff in mind whenever someone talks about AI in basically any context.
 
I'm anxiously excited for this event. I just drove from Fort Lauderdale, FL to St. Louis for renovation and property management work in KS, MI, and TN. Scary drive. The highways from Chattanooga to Paducah looked like Deer Vietnam. There must have been a hundred deer carcasses on the road, and dozens more of their comrades were literally walking between the trenches on both sides of the highway. Had my headlights on the whole time 'cause these F###ers would just run out into traffic like suicide bombers.

Anyway, I'll be hooking up security cameras in Lerner, KS; Kansas City, MO; and NW of Nashville, TN. Did you guys know there's a second Nashville in Illinois? Hopefully I can capture something and stream it from my underground bunker in Independence, MO. I still have to set up the internet and battery generator before Tuesday afternoon. Any suggestions would be appreciated.
 
Right! Heck, AI can't even help the GFS. This is why the human side of forecasting isn't going away any time soon. Models be models. They are tools, not gospel.
 
I agree. These 4-7 day outlooks are beginning to get ridiculous, IMO. It seems that, especially over the last 2-3 years, SPC forecasts haven't verified well, even at Day 1-2 - the recent Moderate didn't become robust enough for that elevated of a threat. We had what, 8-10 tornadoes, and the strongest was an EF2? Yeah, the damage to any one person or family is serious, but the atmospheric environment should be in a very hostile, serious state for a Moderate Risk or higher to be introduced.

A Moderate Risk should really tell the public to prepare for a serious severe weather threat. EF3+, IMO.
And yes, I know it's much more complicated than that after all these years as well.
 
I think many, many, many in the weather community (including here) don't understand the very real limitations of current technology, and also the probabilistic nature of the forecasts. I tried to illustrate this on the last event thread with the simulation of tornadoes.

For instance, let's consider the previous event, on April 10. The probabilities were 15-29% for tornadoes in the MDT area and 10-15% in the ENH. Additionally, there was a hatched area over most of this region. According to the SPC, these probabilities are the forecasted probability that a given point experiences a tornado within a 25-mile radius. But humans have a notoriously difficult time visualizing or conceptualizing spatial statistics like this. Let's play with this a bit.

Imagine, if it were possible, dividing the MDT and ENH regions into perfect 25-mile-radius circles, non-overlapping, so that the entire area was covered without any extra coverage. Obviously, this is not actually possible, but it's a useful simplifying assumption (more sophisticated work could hash this out with integrals, for instance).

In the MDT area (33,478 square miles), that divides into about 17 such 25-mile radius circles. The ENH area (63,626 square miles) divides into about 32 25-mile radius circles. The ENH area OUTSIDE of the MDT area (30,148 square miles) divides into about 15 circles.

Now, in the ENH, if each circle has about a 10-15% chance of producing a tornado, that means in the ENH area (excluding the MDT area inside it) you should expect maybe 1 or 2 tornadoes. In the MDT area, with its 15-29% probabilities, you'd expect 3-5 tornadoes.

If we expand this out to the SLGT and MRG areas, you'd additionally expect 0-1 more tornadoes. This brings the entire day's expectancy to 4-8 tornadoes total, with the possibility of a couple/few of those being EF2+.
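If it helps, here's that same back-of-the-envelope arithmetic as a quick script (purely a sketch using the simplified non-overlapping-circle assumption above; the only inputs are the area figures quoted in this post):

```python
import math

# Simplifying assumption from above: tile each risk area with
# non-overlapping 25-mile-radius circles and treat each circle's
# point probability as its chance of producing a tornado.
CIRCLE_AREA = math.pi * 25 ** 2  # ~1963.5 sq mi per circle

def expected_tornadoes(area_sq_mi, p_low, p_high):
    """Rough expected tornado count for an outlook area (not a real model)."""
    circles = area_sq_mi / CIRCLE_AREA
    return circles * p_low, circles * p_high

mdt = 33_478                 # MDT area, 15-29% tornado probability
enh_only = 63_626 - 33_478   # ENH area outside the MDT, 10-15%

print(expected_tornadoes(mdt, 0.15, 0.29))       # ~ (2.6, 4.9) -> "3-5 tornadoes"
print(expected_tornadoes(enh_only, 0.10, 0.15))  # ~ (1.5, 2.3) -> "1 or 2 tornadoes"
```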

The verification shows 10 tornadoes, one of which was an EF2. In my mind, this verifies very nicely, given the areal size of the alerted area.

But by all means, keep throwing out "Forecasted Convective Amplification Deficiency!" every time there aren't 10-15 EF4+ tornadoes in a day. I think ONE of these approaches only serves to erode confidence in existing forecasts, and spoiler, it's not the SPC's approach.

 
Interesting discussion regarding ML/AI and its limitations on medium- to long-range predictions. I think part of it is that in a chaotic system like the Earth's weather, it's stupidly difficult to make medium- to long-range forecasts for micro-events that are usually only 100 or 200 yards across, with ingredient setups that largely depend on mesoscale interactions the day of. I think another part of it is the models that are fed into the AI/ML models and the extreme sensitivity to initial conditions inherent in any chaotic system. This gives the AI/ML outputs high variance from one run to the next. And even if they are consistent, if they are consistently eating garbage, they'll be consistently spitting out worse garbage.
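If anyone wants to see what that sensitivity looks like, here's a minimal sketch using the classic Lorenz-63 toy system (not a weather model; the step size and perturbation are arbitrary). Two runs whose starting points differ by one part in a million end up in completely different states:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (toy chaos demo)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x0, steps=5000):
    state = (x0, 1.0, 1.05)
    for _ in range(steps):
        state = lorenz_step(*state)
    return state

print(run(1.0))        # "control" run
print(run(1.000001))   # start one part in a million away: no resemblance after 5,000 steps
```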

Some of the more recent AI/ML efforts, like STORM-NET, use extra-model inputs, such as radar data, in their 1- and 3-hour predictions. But any AI/ML model's error is going to be at least whatever error is fed into it; that systematic error propagates through to the output. So if you're using Euro data, for instance, in an AI-assisted model, its accuracy is inherently going to be hampered by the intrinsic error of the Euro feeding it.

And at the end of the day, I still think the global models have basically reached their limit in terms of accuracy, at least for the foreseeable future. The inputs into these models have wide gaps over deserts, oceans, and sparsely populated areas. Until we can find a cost-effective way to assimilate initial conditions that more accurately reflect global conditions at time t=0, the accuracy of global models will struggle to improve.

I think the SPC is stretching the limits of what is possible, given the chaos in the systems, to predict these mesoscale threats 5-8 days out, and we have to recognize that such forecasting is done with extreme skill but also an unavoidable amount of guesswork. I don't foresee AI/ML making great advances in this space for some years, at least until we can figure out the input-data problems discussed here.
 
I can't thank you enough for doing your best to visualize this. I keep telling people that a 15% chance of tornadoes is not a significant number by any stretch of the imagination. It's almost impossible to be 50% sure that tornadoes are going to occur, which is why a 60% hatched probability is pretty much never used.
A Moderate risk does not mean a tornado outbreak is going to occur; it simply highlights the MODERATE chance of a tornado occurring within a general area. Unfortunately, most of the public thinks of these risks as an indicator of how strong and numerous tornadoes are going to be, not just as a risk.
 
I don’t think many on here were doing that.

You can throw statistics and probabilities at us on here all you want, but the SPC also has to balance the "social science" aspect of this among a weather-illiterate public, down to even the color of the risks.

A level 4/5 alert yielding 10 tornado reports is going to absolutely dent some of the public's trust and perception more than some teenage weenie on social media hollering "Forecasted Convective Amplification Deficiency" ever will. I can almost guarantee you there were hot-wash/after-action meetings at the SPC on the previous 2 underperformers to see exactly what happened. It's one of the reasons they don't throw out high risks often; they want the public to know just how serious the event could be.
 
I didn't say there were many on here doing that. But some, yes, even in this thread. Sure, there's the social science aspect of it, and we can talk about that, too. But the SPC has opted for a probabilistic framework to do their forecasts, and I'm merely showing what that looks like. Given the SPC's own definitions, this past outbreak was NOT a Forecasted Convective Amplification Deficiency, no matter how much social science is thrown into it. I stand by what I said.
 
Lol. It most certainly underperformed relative to other moderate risks, historically, and I would bet the SPC would acknowledge that. Say that all you want, but 10 tornado reports for a 4/5 risk is not a verification. The SPC certainly doesn't need folks white knighting for them on weather hobbyist message boards either. They care more about what the general public thinks.
 
In a nutshell, for an event boom/Forecasted Convective Amplification Deficiency, it's this: View attachment 25398
Exactly. This shows the previous event performed as expected. The evidence points to that. The "opposing" side (Forecasted Convective Amplification Deficiency! Forecasted Convective Amplification Deficiency!) just spouts uncited "historical" trends and provides no real evidence.
 
You do know that's a spreadsheet that user created, with no scientific backing whatsoever, if it's anything like the EF-scale sheets he posts? Not even looking at it. Again, the SPC doesn't need you to white knight.
 