
2025 Atlantic Hurricane Season

@IdaliaHelene If you aren't going to listen to anything I have to say, then there isn't really any point in me replying anymore. So this is the last thing I will say on this topic: you need to stop downplaying actual, hardworking scientists' forecasts in favor of pushing AI-generated, mystical slop that is not applicable to real-world science in any way, shape, or form. Stop doing it and be better.

Again, this is my last post on this. I'm sorry for derailing the thread.
 
That’s very interesting! I’d love to learn more about your model process!
It uses ADT satellite data compiled globally from 2014 to 2022 (2023 - 2025 served as the test set). The model is an ensemble of several different statistical models whose outputs are weighted based on each model's performance on the training set. I've tested it on the "big" ones like Helene, Milton, etc., but also on weaker storms. On the A storm last year (forget the name lol), it gave RI probabilities of like 0.2%.
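
In case anyone wants a concrete picture of that weighting idea, here's a minimal sketch (my own reading of the description, not the actual code; the member models, the inverse-MAE weighting, and all the numbers are assumptions):

```python
import numpy as np

def inverse_error_weights(train_errors):
    """Weight each member model by the inverse of its training-set error.

    train_errors: mean absolute intensity errors (kt), one per member model.
    Returns weights that sum to 1; lower-error members get more weight.
    """
    inv = 1.0 / np.asarray(train_errors, dtype=float)
    return inv / inv.sum()

def blended_forecast(member_forecasts, weights):
    """Weighted average of the members' intensity forecasts (kt)."""
    return float(np.dot(weights, member_forecasts))

# Hypothetical example: three statistical members with training MAEs of 6, 8, and 10 kt
weights = inverse_error_weights([6.0, 8.0, 10.0])
print(weights)                                     # ~[0.43, 0.32, 0.26]
print(blended_forecast([95.0, 110.0, 120.0], weights))
```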
 
The A Storm of last year was Alberto, a sloppy, broad 45 kt (50 mph) TS that struck Mexico after forming in the Bay of Campeche from a CAG.

How well did it do on Hurricane Milton?
 
Ah yes, forgot Alberto.

Here's Milton about 18 hours before it exploded to Cat 5 the first time. Like most models, it greatly undersold Milton's final intensity, but it did strongly pick up on the possibility of RI. (The model very rarely goes above 50% RI, so percentages in the 40s are really high.)

I think most statistical modeling today can't get the intensity right in the extreme cases, and this is one of them. I think some really outside-the-box solutions are needed to fully answer the RI problem.

[Attached image: model output for Milton]
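
Side note on why blends undersell cases like Milton: averaging toward the consensus pulls down any lone member that does catch the extreme. Toy illustration with made-up numbers (not the model's actual members):

```python
import numpy as np

# Toy member intensity forecasts (kt): only one member captures the extreme outcome.
members = np.array([110.0, 115.0, 120.0, 155.0])
weights = np.full(len(members), 0.25)   # equal weights just for illustration

blend = float(np.dot(weights, members))
print(blend)          # 125.0 kt -- the blend sits well below the 155 kt member
print(members.max())  # 155.0 kt -- the signal exists, but averaging softens it
```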
 
How well did it do on Ernesto and Rafael?
 
I’m very curious now. How did it do on Hurricane Beryl?
 
Haha, lots of requests! I'm getting off the computer in a few, so those will have to wait, but I'll say this: on the test set (2023 - 2025 storms to date), the average 12-hour intensity error was about 7 knots, growing to about 12 knots at 48 hours. Those errors were not evenly distributed, though; the RI storms accounted for most of the intensity error. For RI probabilities, when the model showed > 20% RI, it occurred about 50% of the time, and when the model showed > 35% RI, it occurred about 75% of the time.

The problem with the model currently is that it does somewhat okay at predicting whether RI happens but doesn't do very well at predicting the outcome of RI -- it undersells it most of the time. This isn't a new statistical problem, but it's frustrating. Part of the issue is that in order to achieve higher accuracy I used an ensemble approach, but that "deadens" or softens the peak predictions, so it's a sucky tradeoff. Meanwhile, if I don't ensemble it, some of the models are waaaay too sensitive. So no good solution yet.

If anyone has any answer to the RI problem, you probably can make a bunch of money lol
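
If anyone wants to run the same kind of verification on their own model, here's a rough sketch of the two checks described above (mean absolute intensity error by lead time, and how often RI verified when the forecast probability cleared a threshold). Every number in it is made up purely to show the calculation:

```python
import numpy as np

def mae_by_lead(errors_by_lead):
    """Mean absolute intensity error (kt) for each forecast lead time (hours)."""
    return {lead: float(np.mean(np.abs(errs))) for lead, errs in errors_by_lead.items()}

def ri_hit_rate(ri_probs, ri_occurred, threshold):
    """Of the forecasts with RI probability above `threshold`, the fraction where RI actually happened."""
    probs = np.asarray(ri_probs, dtype=float)
    occurred = np.asarray(ri_occurred, dtype=bool)
    flagged = probs > threshold
    return float(occurred[flagged].mean()) if flagged.any() else float("nan")

# Made-up verification data, just to show the shape of the calculation
errors_by_lead = {12: [5, -8, 6, -9], 48: [10, -15, 11, -12]}   # forecast minus observed, kt
print(mae_by_lead(errors_by_lead))   # {12: 7.0, 48: 12.0} with these made-up numbers
print(ri_hit_rate([0.25, 0.40, 0.10, 0.45], [True, True, False, False], threshold=0.20))
```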
 
Muh “ThErE iS a ReAsOn ThE sCiEnTiStS aReN’T pReDiCtInG a HyPeRaCtiVe SeaSon”

 