Linux Questions

Am I likely to have trouble finding device drivers if I try Linux on one of my computers?

How do the people who support free Linux distributions, such as Ubuntu, pay the grocer and their mortgages?
 
Am I likely to have trouble finding device drivers if I try Linux on one of my computers?

Maybe. It's better than it once was. Googling specific models you're interested in will typically answer the question, and there are sites (too many of them, actually) that catalogue various machines and what works and what doesn't.

It's pretty rare that things don't work since there are repositories of non-free drivers from manufacturers that refuse to give details on how to access the hardware they sell. NVIDIA comes to mind. Loading the non-free NVIDIA drivers works almost all the time.
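If you want a quick inventory before deciding, here's a rough Python sketch (assuming a Linux live session or an existing install with pciutils available) that lists your PCI devices and which kernel driver, if any, is bound to each one -- anything with no driver listed is worth googling:

    #!/usr/bin/env python3
    # Rough sketch: show each PCI device and the kernel driver bound to it.
    # Assumes "lspci" (from pciutils) is installed, which it is on most distros.
    import subprocess

    # "-k" asks lspci to print a "Kernel driver in use:" line per device.
    output = subprocess.run(["lspci", "-k"], capture_output=True, text=True).stdout
    print(output)

    # Devices with no "Kernel driver in use:" line are the ones most likely to
    # need a non-free driver (certain NVIDIA and Broadcom parts, for example).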

How do the people who support free Linux distributions, such as Ubuntu, pay the grocer and their mortgages?

Depends on the distribution. According to a recent study, something like 80% of the code updates to the mainline kernel came from programmers paid by their employers to make the kernel work with their hardware and/or improve the kernel's performance. Sure, there are unpaid programmers who find it interesting to patch major things for their own use and share them -- the quintessential "basement programmer" -- but most Linux development these days is done by mainstream, paid programmers.

Ubuntu, as one example, is created by Canonical, which sells support and other services to make its money -- the company was started by a multi-millionaire. Ubuntu itself was a spin-off of Debian, a long-time "free-as-in-freedom" distro.

Other big distros do similar things. Red Hat, for example, is a public company. Most small distros are just spin-offs of these bigger, paid distros.

Some distros go the other route, taking the open-source/GPL-licensed pieces of the paid distros and re-releasing them as a distro. (CentOS, for example, is Red Hat Enterprise Linux -- a pay-to-play distro -- with all proprietary, non-free items removed... it works great for most servers and is a virtual copy of RHEL minus the value-added things Red Hat provides to its commercial customers.)

So... each distro has a story and a history of its own. Users can choose the full pay-to-play, copies of those that have had any value-added stuff removed, or a totally "free as in freedom" distro made by one or more people in their basements. Choosing wisely is probably a good idea, and reading the distro's story/history on their website is probably important. Or just stick to the biggies with lots of momentum and coders working on them at all times.

At the end of the day, many distros just repackage the "upstream" code, which anyone can do by trawling SourceForge or the developers' websites or whatever. Distros just make it easier to find it all. Often a distro's "package maintainer" for each individual package is someone with a particular interest in that piece of software -- and there's often a closed loop between them and the upstream developers, where they send in patches and comments as problems are reported by the distro's users.

Is it a better system than the closed-source commercial one? Having seen both, it seems to depend a lot more on whether the coders are any good and whether they care about good code and user input. There are religious zealots on both sides, but in the end either the code is good or it sucks... there's no better way to measure it than that.
 
Am I likely to have trouble finding device drivers if I try Linux on one of my computers?

I've not had any problems with this in recent times. At one time, this was a very big problem. Try it out....I think you'll be pleased.

How do the people who support free Linux distributions, such as Ubuntu, pay the grocer and their mortgages?

Canonical, the developers of Ubuntu, sell their support services to commercial operations, and they make a lot of money doing so. That is their primary means of support.

Add to that the fact that a good chunk of their code is written by the community, and you'll see that it's probably not as expensive for them to maintain a codebase as it would appear.

Ubuntu is awesome. If I might make a suggestion, start off with the 6 month versions until you get used to it. Then, switch to the LTS version of Ubuntu, so that you do major upgrades every 2 years instead. You'll get a very stable OS, and the same security patches, without having to do major upgrades quite so often.
 
I should point out that almost all big/good distros can run from a LiveCD or USB stick these days... so you can load up a CD or USB stick with the distro, boot it, and see if your hardware works without much trouble. Each distro chooses whether or not to take the time to develop this type of "try before you buy" stuff.
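For what it's worth, writing the downloaded ISO onto a USB stick is just a raw byte-for-byte copy. Here's a minimal Python sketch of the idea (both the filename and device path are placeholders -- triple-check the real device with lsblk first, because writing to the wrong disk destroys its contents, and you'll need root):

    #!/usr/bin/env python3
    # Minimal sketch: copy a live ISO image byte-for-byte onto a USB stick.
    # Both paths below are examples only -- substitute your real ISO and device.
    import shutil

    iso_path = "distro-live.iso"   # hypothetical filename for the downloaded image
    usb_device = "/dev/sdX"        # placeholder -- find the real device with lsblk

    with open(iso_path, "rb") as src, open(usb_device, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4 MB chunks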
 
I should point out that almost all big/good distros can run from a LiveCD or USB stick these days... so you can load up a CD or USB stick with the distro, boot it, and see if your hardware works without much trouble. Each distro chooses whether or not to take the time to develop this type of "try before you buy" stuff.

Gotta remember a few things with the LiveCDs though -

1. They perform horribly because they usually run from a RAM disk or directly off the CD.
2. Any changes you make WILL NOT be saved.
3. I'm not sure if this is true anymore, but it used to be that you could not load the non-free hardware drivers when running the LiveCD because it required a restart to take effect.
4. Not all computers will boot off of USB. I've wasted MANY hours trying to get a thumbdrive to boot before I found out that the BIOS did not support it.
 
Thanks for the info. All my computers have drive trays, and I have a bunch of spare hard drives, so it would be easy for me to do a test installation while leaving the old OS undisturbed.
 
Thanks for the info. All my computers have drive trays, and I have a bunch of spare hard drives, so it would be easy for me to do a test installation while leaving the old OS undisturbed.

Sounds like a plan. Be cautious if you install to a drive other than the boot drive... the Master Boot Record of your boot drive will be overwritten with GRUB or GRUB 2.

So say, for example, that you take the disk Windows calls "D:" (your second drive), which you've been using for data, and want to install Linux on it... be aware that most installers will still modify the MBR on your "C:" drive.

It can be put back to the Windows MBR if you have Windows installation media... just letting you know...
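One cheap bit of insurance before you let any installer near the machine: back up the first 512 bytes of the boot drive (that's the MBR -- boot code plus partition table) so you can put it back later. A sketch in Python, with /dev/sda as an assumed device name (needs root):

    #!/usr/bin/env python3
    # Sketch: save the MBR (first 512 bytes) of the boot drive to a file.
    # "/dev/sda" is an assumed device name -- adjust for your machine; run as root.
    MBR_SIZE = 512
    disk = "/dev/sda"

    with open(disk, "rb") as d, open("mbr-backup.bin", "wb") as out:
        out.write(d.read(MBR_SIZE))

    # To restore, write the file back to the same device. If the partition table
    # has changed since the backup, restore only the first 446 bytes (boot code).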
 
Thanks for the warning. I was thinking in terms of having the old boot drive out of the computer in most cases. However I do have one computer whose BIOS allows me to specify which is the boot drive. Perhaps I'd better update my image copy of the old boot drive before experimenting.
 
... How do the people who support free Linux distributions, such as Ubuntu, pay the grocer and their mortgages?
Mostly a combination of:
A ) "I have a day job (or I'm a student), I wrote this code in my spare time and decided to give it away for free"​
and:
B ) "I wrote this code as part of my day job, and the GPL license requires that we make the source available"​

Note that B ) only works after a whole lot of A) has taken place.

Note that distributions are more about bundling and packaging and updating and releasing than they are about writing code.
-harry
 
How do the people who support free Linux distributions, such as Ubuntu, pay the grocer and their mortgages?
I don't know about Ubuntu, or should I say Canonical, but Red Hat mostly derives revenue from enterprise subscriptions for Red Hat Enterprise Linux, its layered products (such as RHEV), subscriptions to JBoss, and OEM installations. There is also a trickle from Global Support Services specials (e.g. 7-year contracts for RHEL 3), Global Professional Services (which is mostly stuff like Red Hat Federal and such), and other odds and ends like that. The information is filed with the SEC and is public. You can research it all from the comfort of your home if you mean to invest.
 
When I installed a version of Ubuntu (10.04, I think) on a Dell netbook that had Ubuntu 8.04, I had a problem with the Wi-Fi on the device. The solution, for which I found several hints from the user base/experts, was to hardwire the machine, boot it up, and let the software figure it out.
I've also used the user base/experts to figure out how to set up my network drives and my HP 7260 printer. To date, the Ubuntu machines are the only ones that DUPLEX on that printer.
The information for upgrading particular hardware drivers is out there. Some research might be needed, but you do not have to invent the process yourself.
 
When I installed a version of Ubuntu (10.04, I think) on a Dell netbook that had Ubuntu 8.04, I had a problem with the Wi-Fi on the device. The solution, for which I found several hints from the user base/experts, was to hardwire the machine, boot it up, and let the software figure it out.

So, after that process was complete, were you able to disconnect the hardwire connection and use the Wi-Fi connection?
 
According to a recent study, something like 80% of the code updates to the mainline kernel came from programmers paid by their employers to make the kernel work with their hardware and/or improve the kernel's performance. Sure, there are unpaid programmers who find it interesting to patch major things for their own use and share them -- the quintessential "basement programmer" -- but most Linux development these days is done by mainstream, paid programmers.

How and/or by whom do decisions get made about what patches get included in the software that is installed on users' computers? If someone develops a change that turns out not to be beneficial, how does that get weeded out?
 
I saw on a ZDNet quiz that switching to Linux is a way to avoid malware. If this is true, is it because there's something fundamental about the way Linux is structured that makes malware impossible or harder to implement, is it because a smaller user base is less attractive to malware authors, is it both, or is there some other reason?
 
... is it because there's something fundamental about the way Linux is structured that makes malware impossible or harder to implement, is it because a smaller user base is less attractive to malware authors, is it both, or is there some other reason?
There are aspects of the design that make malware harder to implement, and there is also a smaller user base.
-harry
 
I saw on a ZDNet quiz that switching to Linux is a way to avoid malware. If this is true, is it because there's something fundamental about the way Linux is structured that makes malware impossible or harder to implement, is it because a smaller user base is less attractive to malware authors, is it both, or is there some other reason?
I'm going to be curt here, sorry... But here's a short list of keywords in case you want to research. Approaches, techniques, and implementations used by Linux to address the changing nature of attacks (in particular the shift from attacking the base OS to attacking applications such as the browser):
- Privilege separation, root account (available on Windows too) - protects from escalations; relatively less relevant now, but important for system integrity
- Address-space layout randomization (ASLR, available on OS X too) - defends against specific overflow attacks that used to be popular
- NX (augments ASLR, replaces Exec Shield) - same role as ASLR
- SELinux - extremely important for breach containment
- Automated updates - a prevention mechanism (same as on Windows)

Personally I also think that the culture of security-conscious programming plays an enormous role. I do not think that mitigation techniques such as SELinux can substitute for proper application and library development.
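If you want to see one of those in action, here's a tiny Python sketch (Linux assumed -- the /proc path is Linux-specific) showing ASLR at work: run it a few times and the buffer will usually land at a different address each run, which is what breaks exploits that depend on hard-coded addresses:

    #!/usr/bin/env python3
    # Tiny ASLR demo: print the address of a freshly allocated buffer, plus the
    # kernel's global randomization setting. Linux-specific /proc path assumed.
    import ctypes

    buf = ctypes.create_string_buffer(64)
    print("buffer address this run:", hex(ctypes.addressof(buf)))

    # 0 = off, 1 = partial randomization, 2 = full randomization
    with open("/proc/sys/kernel/randomize_va_space") as f:
        print("randomize_va_space =", f.read().strip())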
 
Another big difference is that most software installation on modern distributions now happens through the distribution's package management system. That provides one common interface and method of updating all software on the system.

Windows Update pretty much just updates Windows. All of the other software you install needs to be updated in different ways, and that almost never happens for most users.
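As a concrete example, here's what "one common interface" looks like in practice on a Debian/Ubuntu-style system -- a hedged sketch using Python to drive the package tools (Fedora would use dnf, openSUSE zypper, and so on):

    #!/usr/bin/env python3
    # Sketch (Debian/Ubuntu assumed): refresh the package index, then list every
    # package with a pending update -- kernel, browser, office suite, all in one place.
    import subprocess

    subprocess.run(["sudo", "apt-get", "update"], check=True)
    result = subprocess.run(["apt", "list", "--upgradable"],
                            capture_output=True, text=True)
    print(result.stdout)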
 
I'm going to be curt here, sorry... But here's a short list of keywords in case you want to research. Approaches, techniques, and implementations used by Linux to address the changing nature of attacks (in particular the shift from attacking the base OS to attacking applications such as the browser):
- Privilege separation, root account (available on Windows too) - protects from escalations; relatively less relevant now, but important for system integrity
- Address-space layout randomization (ASLR, available on OS X too) - defends against specific overflow attacks that used to be popular
- NX (augments ASLR, replaces Exec Shield) - same role as ASLR
- SELinux - extremely important for breach containment
- Automated updates - a prevention mechanism (same as on Windows)

Personally I also think that the culture of security-conscious programming plays an enormous role. I do not think that mitigation techniques such as SELinux can substitute for proper application and library development.

Hmmm...Looks like verbose mode to me. Thanks for the info!

So basically it sounds like security is not guaranteed, but is likely to be much better for a number of reasons.

I'm thinking of switching to Linux for most Internet access purposes, and only using Windows when there's no other way to run the software I need to run.
 
How and/or by whom do decisions get made about what patches get included in the software that is installed on users' computers? If someone develops a change that turns out not to be beneficial, how does that get weeded out?

Depends on whether you're talking about the kernel (the base functionality of the OS... it gets the machine booted and running and talking to the hardware) or the entire Linux "suite" of tools... everything from the start-up scripts to the desktop browser.

Kernel stuff is pretty well guarded by Linus Torvalds and the "minions" he trusts, with tons of discussion over at kernel.org.

For everything else, each individual developer or team handles their own piece.

Then it all gets rolled into a big hopefully-organized ball called a "distribution" and the distro also looks over the code and can make changes or recommend that the "upstream" make them for all distros.

It's collaborative, and really the same thing happens behind closed doors on closed-source stuff... it's just a bit more out in the open on Linux. And even then, we all know there's "powerful" folks who converse privately and figure out the "tough" decisions.

So basically it sounds like security is not guaranteed, but is likely to be much better for a number of reasons.

I'm thinking of switching to Linux for most Internet access purposes, and only using Windows when there's no other way to run the software I need to run.

I'm not sure it's any more secure, really. For SOME things it is, but a Firefox bug is often a Firefox bug regardless of OS. It depends on how it interacts with the OS after the exploit is done, and privilege escalation is privilege escalation.

SANS and other organizations seem to publish about the same number of exploits for different OSs in their alert lists these days. Doesn't seem to matter much which one it is.

Most folks who are "security-conscious" but have to browse various things on the Net are starting to recommend virtualizing a copy of the OS and doing any "non-trusted" browsing in that virtual machine. If it gets infected with crap, blow it away and start over. Isolation tactics.
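To make that concrete, here's a rough sketch of the "disposable browsing VM" routine using VirtualBox's command-line tool from Python. The VM name and snapshot name are made up for illustration, and it assumes a VM with a known-clean snapshot already exists and is powered off:

    #!/usr/bin/env python3
    # Rough sketch: roll a browsing VM back to a known-clean snapshot, then start it.
    # Names are hypothetical; assumes VirtualBox/VBoxManage is installed and the VM is off.
    import subprocess

    VM = "browse-box"        # hypothetical VM name
    SNAPSHOT = "clean"       # hypothetical snapshot taken right after a fresh install

    subprocess.run(["VBoxManage", "snapshot", VM, "restore", SNAPSHOT], check=True)
    subprocess.run(["VBoxManage", "startvm", VM], check=True)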

Sad, but all OSs are way too prone to attacks, and updates are usually days or weeks behind the publication of the exploit. The whole security industry hopes and prays that "zero-day" exploits aren't really taken advantage of for just long enough to get patches out.

Windows has a slight disadvantage in that way too many devs still act like Admin-level privs are needed to do things, and write their code that way. Unix comes at it from "no one needs to be an admin" and works its way up from there, usually -- so it's MAYBE a little better, and the devs' heads are already in that mindset.

But I'm not sure that's something to hang your hat on if you're protecting critical data. Only strong encryption of the entire portable machine (with tools like TrueCrypt) plus many layers of defense on business networks ever seems to contain the real world enough to make it manageable for security folks. Otherwise they'd be overwhelmed quickly.

One of the most shocking faults of the Net, and also an interesting example in "Freedom", is that most e-mail today is still unauthenticated, easily spoofable, and unencrypted. People don't want to deal with personal encryption certificates, since most implementations are a complete PITA... and until some HUGE entity starts requiring that all incoming e-mail be digitally signed and/or encrypted with PKI or you don't do business with them... it'll never really take off.

Spammers would be gone virtually overnight if mail servers had to authenticate and use TLS between one another with real signed verifiable SSL keys. The problem is, no exec or decision-maker at a giant firm will lead the way and say, "If you can't prove who you are, we won't even accept an e-mail from you."

And the backbone folks get paid a lot to carry the spam traffic, no matter how "hard" they say they work to fight it. It's serious cash for "someone" carrying the traffic.

Authenticated/Encrypted e-mail is just one example of "should be done, but won't be"... we know how to build properly authenticated and identified networks, but the Internet attitude is resistant to such things.

The Internet at large still values anonymity, which is largely a fallacy if the carrier cooperates in tracking down the source, unless obfuscation tools like the Tor onion router and similar things are used. Thus why the NSA had fiber taps in AT&T POPs on the West Coast, and presumably in a lot more places than just the one where they got caught...
 
Most folks who are "security-conscious" but have to browse various things on the Net are starting to recommend virtualizing a copy of the OS and doing any "non-trusted" browsing in that virtual machine. If it gets infected with crap, blow it away and start over. Isolation tactics.
These "security-conscious" seem to trust the hypervisor's magical ability to prevent the guest escape.
 
These "security-conscious" seem to trust the hypervisor's magical ability to prevent the guest escape.
It's a hell of a lot safer in most cases than without the hypervisor. It's not bulletproof but it's a very effective shield for most things. It takes a very specialized targeted attack to have a chance at getting around it. Not very common at all.
 
Agreed. I'd rather put my trust in the hypervisor than in the known security Swiss-cheese holes built into any modern OS by itself! :D
 