How and/or by whom do decisions get made about what patches get included in the software that is installed on users' computers? If someone develops a change that turns out not to be beneficial, how does that get weeded out?
Depends on whether you're talking about the kernel (the base functionality of the OS... it gets the machine booted and running and talking to the hardware), or the entire Linux "suite" of tools... everything from the start-up scripts to the desktop browser.
Kernel stuff is pretty well guarded by Linus Torvalds and the "minions" he trusts, with tons of discussion over at kernel.org.
Everything else is handled by each individual developer or team.
Then it all gets rolled into a big, hopefully-organized ball called a "distribution". The distro maintainers also look over the code and can make changes themselves, or recommend that the "upstream" project make them for all distros.
It's collaborative, and really the same thing happens behind closed doors on closed-source stuff... it's just a bit more out in the open on Linux. And even then, we all know there's "powerful" folks who converse privately and figure out the "tough" decisions.
So basically it sounds like security is not guaranteed, but is likely to be much better for a number of reasons.
I'm thinking of switching to Linux for most Internet access purposes, and only using Windows when there's no other way to run the software I need to run.
I'm not sure it's any more secure, really. For SOME things it is, but a Firefox bug is often a Firefox bug on either OS. It depends on how the exploit interacts with the OS after it lands, and privilege escalation is privilege escalation.
SANS and other organizations seem to publish about the same number of exploits for different OSs in their alert lists these days. Doesn't seem to matter much which one it is.
Most folks who are "security-conscious" and have to browse various things on the Net are starting to recommend virtualizing a copy of the OS and doing any "non-trusted" browsing in that virtual machine. If it gets infected with crap, blow it away and start over. Isolation tactics.
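If you want the "blow it away" step to be a one-liner, it scripts easily. A quick sketch, assuming VirtualBox with a VM named "browser-vm" and a known-good snapshot named "clean" (both names made up here, adjust to taste):

    # Reset a disposable browsing VM to a clean snapshot.
    # Assumes VirtualBox is installed and a snapshot was taken
    # right after a fresh install.
    import subprocess

    VM = "browser-vm"
    SNAPSHOT = "clean"

    def reset_vm():
        # Hard power-off; the VM may be infected, so no graceful shutdown.
        subprocess.run(["VBoxManage", "controlvm", VM, "poweroff"], check=False)
        # Roll back to the known-good snapshot, discarding everything since.
        subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
        # Boot it back up for the next browsing session.
        subprocess.run(["VBoxManage", "startvm", VM], check=True)

    if __name__ == "__main__":
        reset_vm()

Run that between browsing sessions and whatever crud accumulated in the VM simply ceases to exist.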
Sad, but all OSs are way too prone to attacks, and updates are usually days/weeks behind the publication of the exploit. The whole security industry hopes and prays that "zero-day" exploits aren't really taken advantage of for just long enough to get patches out.
Windows has a slight disadvantage in that way too many devs are still acting like Admin-level privs are needed to do things, and writing their code that way. Unix comes at it from "no one needs to be an admin" and works its way up from there, usually. So it's MAYBE a little better, and the devs' heads think that way more.
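That mindset difference shows up right in the code. Here's a minimal Python sketch of the Unix habit of dropping privileges before doing any real work (it assumes the standard Unix "nobody" account; the helper name is mine):

    # The Unix "no one needs to be an admin" habit: hold root only long
    # enough to grab a privileged resource, then drop to an unprivileged
    # account before doing anything else.
    import os
    import pwd

    def drop_privileges(username="nobody"):
        if os.geteuid() != 0:
            return  # already unprivileged, nothing to do
        user = pwd.getpwnam(username)
        os.setgroups([])        # shed any supplementary root groups
        os.setgid(user.pw_gid)  # drop group first, while we still can
        os.setuid(user.pw_uid)  # after this, there is no way back to root

    drop_privileges()
    # Any exploit that lands past this point runs as "nobody", not Admin.
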
But I'm not sure that's something to hang your hat on if you're protecting critical data. Only strong encryption of the entire portable machine (with tools like TrueCrypt) and many layers of defense on business networks ever seem to contain the real world enough to make it manageable for security folks. Otherwise they'd be overwhelmed quickly.
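Just to illustrate the principle (file-level only, not the whole-disk encryption TrueCrypt does): with the third-party Python "cryptography" package it takes a few lines to make stolen data useless without the key. The filenames here are made up:

    # Toy illustration of encryption at rest: without the key, a stolen
    # laptop's files are just noise. Same principle as full-disk
    # encryption, minus all the boot/driver plumbing.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in real life: derived from a passphrase,
    fernet = Fernet(key)         # never stored next to the data

    with open("customer-list.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("customer-list.txt.enc", "wb") as f:
        f.write(ciphertext)

    # Only someone holding the key gets the plaintext back.
    plaintext = fernet.decrypt(ciphertext)
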
One of the most shocking faults of the Net, and also an interesting example of "Freedom", is that most e-mail today is still unauthenticated, easily spoofable, and unencrypted. People don't want to deal with personal encryption certificates, since most implementations are a complete PITA... and until some HUGE entity starts requiring that all incoming e-mail be digitally signed and/or encrypted with PKI or you don't do business with them... it'll never really take off.
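The crypto underneath isn't actually the hard part. Here's a toy sign-and-verify sketch with the Python "cryptography" package; I'm using Ed25519 for brevity, where real signed e-mail wraps this in X.509 certificates and S/MIME plumbing... which is exactly where the PITA comes in:

    # The core idea behind digitally signed mail: the sender signs with a
    # private key, the receiver verifies with the matching public key.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"From: alice@example.com\nSubject: wire the funds\n..."
    signature = private_key.sign(message)

    # Tamper with one byte of the message and verification fails.
    try:
        public_key.verify(signature, message)
        print("signature valid: message really came from the key holder")
    except InvalidSignature:
        print("forged or altered: reject it")
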
Spammers would be gone virtually overnight if mail servers had to authenticate and use TLS between one another with real signed verifiable SSL keys. The problem is, no exec or decision-maker at a giant firm will lead the way and say, "If you can't prove who you are, we won't even accept an e-mail from you."
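The technical side of that is trivial these days. A sketch of an "enforced TLS" sending policy using Python's standard smtplib (the hostnames and addresses are placeholders); the point is that the connection fails loudly instead of silently falling back to plaintext:

    # Refuse to transmit unless the other server offers TLS with a
    # certificate that actually verifies against a trusted CA.
    import smtplib
    import ssl

    context = ssl.create_default_context()  # verifies cert chain + hostname

    with smtplib.SMTP("mail.example.com", 25) as server:
        server.starttls(context=context)  # raises if the cert doesn't check out
        server.sendmail(
            "me@example.com",
            ["you@example.com"],
            "Subject: test\r\n\r\nDelivered over verified TLS only.",
        )

If every big mail operator enforced that, the unverifiable junk would have nowhere to connect.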
And the backbone folks get paid a lot to carry the spam traffic, no matter how "hard" they say they work to fight it. It's serious cash for "someone" carrying the traffic.
Authenticated/Encrypted e-mail is just one example of "should be done, but won't be"... we know how to build properly authenticated and identified networks, but the Internet attitude is resistant to such things.
The Internet at large still values anonymity, which is a complete fallacy if you have the carrier's cooperation to track down the source, unless obfuscation tools like Tor (The Onion Router) and other similar things are utilized. That's why the NSA had fiber taps in AT&T POPs on the West Coast, and presumably in a lot more places than just the one where they got caught...