This is a very good point, and there's no question that climate is orders of magnitude more complex than the kinds of systems engineers need to model. But demanding engineering-level accuracy in climate modeling may be missing the point somewhat. The models aren't intended to predict features that are within the bounds of "natural variability". The paradigm that's usually invoked is chaos: the system is so exquisitely sensitive to initial conditions that nearly identical initial states diverge fairly quickly. That's why we can't accurately predict the weather more than a day or two out, and beyond a week or so forecasts are no better than guessing.

But a climate model doesn't need to predict the weather decades in the future; it only needs to reproduce the *statistics* of the weather over timescales long enough that natural variability averages out. The exact state of a chaotic system may be impossible to predict, but it can still be bounded by what chaos theorists call an "attractor": a limited region of the parameter space. The system never leaves the attractor as long as the boundary conditions don't change, and it spends a predictable fraction of its time in each part of the attractor.

So the models don't care about predicting whether California has a drought in 2014 or the eastern US gets clobbered with snow this year or next year. They only care about whether California gets 200 inches of precipitation per decade vs. 20,000, and what fraction of the time California has extreme drought conditions, i.e., long-term statistics given the known forcings and initial conditions such as landform configuration, ice cover, etc.

To my understanding, the forcings are boundary conditions on the models. I don't know whether the known sources of year-to-year and decadal-scale variability (ENSO, PDO, AMO, etc.) are put in to tune the models or whether they emerge from the physics; that's something I've yet to learn. But models that reproduce the long-term climate statistics for a given set of forcings are the ones of interest for predicting how those statistics change if the forcings change, e.g. if the Sun's output changes or if more GHGs are added to the atmosphere.
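To make the attractor point concrete, here's a minimal sketch in Python (assuming NumPy is available) using the Lorenz system, the textbook chaotic attractor, as a stand-in for a chaotic climate. Two trajectories launched from nearly identical states decorrelate quickly (the "weather" failure), yet their long-run statistics over the attractor agree (the "climate" success). Nothing here is tuned to real climate data; the parameters are the standard textbook values.

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with classical 4th-order Runge-Kutta."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def trajectory(state0, n_steps, dt=0.01):
    """Integrate from state0 and record the state at every step."""
    traj = np.empty((n_steps, 3))
    s = np.asarray(state0, dtype=float)
    for i in range(n_steps):
        s = lorenz_step(s, dt)
        traj[i] = s
    return traj

# Two initial states differing by one part in a billion.
a = trajectory([1.0, 1.0, 1.0], 100_000)
b = trajectory([1.0, 1.0, 1.0 + 1e-9], 100_000)

# Point prediction fails: the endpoints are nowhere near each other...
print("endpoint difference:", np.linalg.norm(a[-1] - b[-1]))

# ...but the statistics over the attractor agree to good accuracy.
for name, run in (("a", a), ("b", b)):
    print(f"run {name}: mean(z) = {run[:, 2].mean():.2f}, std(z) = {run[:, 2].std():.2f}")
```

The point of the toy is only that point prediction and statistical prediction are different problems for a chaotic system; the second can succeed where the first is hopeless.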
One avenue for skepticism that I think is well-founded (though that might be because I don't yet know enough about the subject) is whether we know enough about the timescales of the natural variability that is not connected with changes in forcings. If you can't tell the difference between forced and unforced variations, you have no way to validate your models against long-term statistics. How long does it take a chaotic system to visit every corner of its attractor? How do you establish a good timescale for telling the difference between "weather" and "climate"? How did the WMO arrive at 30 years as the cutoff? Should it be shorter, or maybe much longer? There's an interesting discussion going on about these issues here.
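As a toy version of the averaging-timescale question (not an answer to it; the WMO's 30-year convention is an empirical judgment about the real climate system, which this toy can't settle), one can ask how long an averaging window must be before windowed means of a chaotic variable stop scattering like "weather". This sketch assumes SciPy is available and again uses the Lorenz system:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz equations.
    x, y, z = s
    return [sigma * (y - x), x * (rho - z), x * y - beta * z]

# One long run; the z coordinate plays the role of a "weather" variable.
dt, t_end = 0.01, 2_000.0
ts = np.arange(0.0, t_end, dt)
sol = solve_ivp(lorenz, (0.0, t_end), [1.0, 1.0, 1.0], t_eval=ts, rtol=1e-8)
z = sol.y[2]

# Chop the run into non-overlapping windows of increasing length and see
# how much the per-window means still scatter around the long-run mean.
for window in (100, 1_000, 10_000, 50_000):
    n = len(z) // window
    means = z[: n * window].reshape(n, window).mean(axis=1)
    print(f"window={window:>6} steps: spread of windowed means = {means.std():.3f}")
```

The spread of windowed means shrinks as the window grows; "climate" begins roughly where that spread is small compared with the forced changes you hope to detect.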
That depends on the process used to evaluate proposals. Based on my experience (I'll admit, mostly from 20 years ago), political bias in favor of particular results has little effect on whether individual proposals are funded, because politicians are not the ones judging merit; scientists are. I would worry more about bias on the part of the scientists themselves, not political bias so much as conceptual bias. As Feynman said, the easiest person to fool is yourself.
The AGU fall conference in San Francisco wraps up today. I listened to some of it yesterday via live streaming, mostly talks on global environmental change and mitigation. This is a scientific union, not a political body, yet I saw not a single paper or lecture that doubted that AGW is real.