azure
Thanks. You would think so, but that's not how it's done in FAA TERPS. I believe the reasons are historical. If you're manually plotting each obstacle by hand, evaluating every single obstacle's height AND accuracy in order to determine the worst one would be very workload intensive. There could be hundreds of obstacles in a final segment (and with LIDAR and other modern surveying techniques, it's not uncommon for there to be even more). So, back in the manual days, the simple expedient that got the work done was to find the tallest (or otherwise worst) obstacle FIRST, then apply any associated accuracy adjustments to that one obstacle.
Now, of course, computers do most of the analysis, so the reason is no longer as compelling.
However, there is an exception. For RNAV (RNP) procedures, the accuracy of ALL obstacles is considered when determining the controlling obstacle. This is also how it's done in USAF TERPS, if I understand correctly. I don't know the "why" here, other than perhaps that the RNAV (RNP) criteria were developed more recently than the others and were never really evaluated manually.
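To make the difference concrete, here's a minimal sketch of the two methods described above. The obstacle heights and accuracy adjustments are made-up numbers, and the field names are my own; this is just to show that the two approaches can pick different controlling obstacles.

```python
# Hypothetical obstacles in a final segment: surveyed height plus an
# accuracy adjustment that depends on the quality of the survey data.
obstacles = [
    {"height": 1200, "accuracy_adj": 50},
    {"height": 1190, "accuracy_adj": 125},  # slightly lower, but less accurate survey
    {"height": 1100, "accuracy_adj": 10},
]

# Legacy manual expedient: find the tallest obstacle FIRST,
# then apply only that obstacle's accuracy adjustment.
tallest = max(obstacles, key=lambda o: o["height"])
legacy_controlling = tallest["height"] + tallest["accuracy_adj"]

# RNAV (RNP) / USAF approach: adjust EVERY obstacle for accuracy,
# then take the worst adjusted value.
rnp_controlling = max(o["height"] + o["accuracy_adj"] for o in obstacles)

print(legacy_controlling)  # 1250
print(rnp_controlling)     # 1315 -- the second obstacle controls once adjusted
```

Note that the second obstacle, though not the tallest, becomes controlling once its (larger) accuracy adjustment is applied, which the tallest-first shortcut would miss.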
Now, does anyone know why the LNAV/VNAV DA is significantly lower than the LPV DA on the RNAV 16 X at KRNO? Inquiring minds and all...