I think it's an engineering-pedantry-dominated mindset. The whole thing reminds me of the saying that "an expert is a person who avoids the small errors while sweeping on to the grand fallacy". In this case it's literally true: they try to account for every factor that could compromise the 'load path', while arriving at a figure that other evidence indicates is probably drastically wrong.
I don't know what the incentive is, but it's one shared across that scientific community - not just the engineers, but some of the meteorologists as well. Though if you push them, they'll admit that winds are probably much higher. For the engineers, it at least allows blame to be shifted to the builders.
One thing I've noticed is that even though it's often tacitly admitted that EF DI-determined winds are frequently inaccurate, they are usually treated as accurate for the purposes of engineering and climatology. If it were admitted that higher speeds occur, this would indicate the risk is higher than EF-speed-based assessments suggest. On the other hand, if someone obtained evidence of significantly higher winds for a given degree of damage, this could indicate that houses are more resistant than thought, which would be a good thing.
But overall it's a hard question to answer - you can't ask the people involved, because they don't believe they are underrating tornadoes. That's why I think it's a mindset problem that will take some fresh minds, or really decisive evidence, to shift.
As someone who works professionally in both the engineering and scientific communities, I find this slightly offensive for obvious reasons, but I understand the sentiment and wholly love the quote "an expert is a person who avoids the small errors while sweeping on to the grand fallacy". What I will say in response is that scientists, engineers, and meteorologists apply prescribed methods to a problem with the goal of determining a definitive answer. If they are not approaching a problem in this way, then they are acting neither as scientists nor as engineers.
I know we look like a bunch of idiots running around pointing fingers whenever, let's say, a bridge collapses, and the media is always excellent at finding an "expert" who will espouse their opinions with zero liability or ownership for the statements made. So how can a science, civil engineering in this case, that has been around for centuries totally miss the mark with catastrophic results? Or how can a survey team come up with a rating that, on re-review, was obviously grossly underrated?
As an aside, I am not even going to touch on building codes, because those are a whole other issue unto themselves, one that many times has much more to do with politics than with science or engineering.
Science is a beautiful thing because we are constantly discovering new things, but therein lies the problem: by making that statement, I have conversely said that previously we did not have as clear a picture, or may even have had things wrong. It can be easy to pass judgment on people who applied the correct method but unknowingly came to an erroneous conclusion. Modern meteorology is a very new science when compared to others, so aspects of it may have been incorrect, but like other sciences it has constantly evolved and will continue to. The 1925 Tri-State tornado is an excellent example. The analyses performed and conclusions arrived at in 1925, 1966, 1992, and finally 2013 are substantially different, yet each of those analyses held up to peer review based on the scientific understanding of its time.
The rating system is an evidence-based system, which at this point in time is the best we have. There may be a time in the near future when technology advances to the point that we can record the instantaneous wind speed of any tornado and have a full picture of its genesis and decay. The method used to interpret the evidence has constantly evolved since its introduction and has well-known limitations and blind spots. A Guide to F-Scale Damage Assessment (Doswell, 2003) is a good breakdown of these limits and of the problems with the methods used, as well as the pitfalls a surveyor might encounter. Meteorology, as a science, is many times not able to provide fully definitive answers and still requires subjective input for a qualified answer. Without knowing the actual wind speed, there will always be a tendency to err on the side of being conservative with a subjective conclusion, because without strong evidence to the contrary, that conclusion will not hold up to scrutiny. As the methods, and the guidance on how to apply them, evolve, the accuracy and precision have improved and will continue to improve.
I doubt I am going to change anyone's mind, and I may be blatantly exhibiting the aforementioned mindset myself, but I truly do not believe that, on the whole, poor survey results have anything to do with nefarious or predisposed mindsets. Whether it be a survey or a forecast, sometimes well-meaning professionals just get things wrong, not out of ignorance or a lack of thoroughness, but because the methods used were flawed or improperly applied.