CrowdStrike

But is it a hacker's paradise precisely because of its massive user base and wide range of variants/versions?

Linux/Unix OSes aren't immune to security vulnerabilities.
Something like 70% of the servers on the internet run Linux, so its security is not due to obscurity.

That said, many companies do put add-on security software on Linux. But whether that is due to technical necessity or to corporate policy is an open question.
 
Do public-facing Linux systems not also need add-on security software?

Depends on the risk assessment.

But there aren't a lot of stories about linux machines getting a fresh install, connecting to the interweb to download the latest security hacks, and being hacked before all the updates can be installed. Of course that NEVER happens with windoze machines... nope.

For kicks and giggles, count the number of vulnerabilities and patches/updates/whatever that get published for windoze and compare that with linux. Before I retired <mumble> years ago, it wasn't funny how many more patches/updates/whatever needed to be applied to the windoze machines and how long it took to get it done.
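
If anyone wants to repeat that comparison today, a rough sketch along these lines would do it (assuming the public NVD REST API at services.nvd.nist.gov and the `requests` library; the keyword search is crude, so treat the counts as ballpark only):

```python
# Rough count of published CVEs matching two keywords, via the public NVD
# REST API (https://services.nvd.nist.gov/rest/json/cves/2.0).
# Keyword search is crude and CVE attribution differs between a single vendor
# and a whole distro ecosystem, so these are ballpark numbers only.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_count(keyword: str, start: str, end: str) -> int:
    """Return totalResults for CVEs matching `keyword` published between start and end."""
    params = {
        "keywordSearch": keyword,
        "pubStartDate": start,   # extended ISO-8601; NVD caps the date range at 120 days
        "pubEndDate": end,
        "resultsPerPage": 1,     # we only care about the totalResults field
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["totalResults"]

if __name__ == "__main__":
    start = "2024-01-01T00:00:00.000"
    end = "2024-03-31T23:59:59.999"
    for kw in ("Microsoft Windows", "Linux kernel"):
        print(f"{kw}: {cve_count(kw, start, end)} CVEs published in Q1 2024")
```

Advisory count isn't the same thing as patching pain, of course, but it gets you the raw numbers to argue over.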
 
It is astonishing how 1 bad update from 1 vendor can tank our global systems: stranding people at airports, bringing down hospital operations and corporations. All because 1 team at 1 vendor f'd up a block of code.
And honestly taking systems down like this is something most malicious cyber actors can only dream of, and here CrowdStrike is ******* it all up and getting paid by clients to boot.
And I'm sure after it passes we'll all collectively forget how reliant we are on so many single points of failure :)
 
Nothing I preside over seems to be impacted, probably because I avoid anything cloud-based or third-party security solutions I have no control over. Blows my mind that this is the way of IT now... I guess it's cheaper than doing everything in-house. I have to concede the usefulness of canned management solutions as well, but man... this shows the weakness of it.

The other thing is, I don't think we're as disabled as we are because the individuals manning the counters/phones/etc. are incapable of operating without the computer. I'd bet most of them could get out a notebook, pen, and phone and be able to muddle through for the day. Maybe that wouldn't work for things requiring complex scheduling and whatnot, but surely you could manage for a lot of stuff.

But taking that kind of initiative is against modern corporate culture these days and there's absolutely no incentive for anyone to stick their neck out like that... don't blame them at all. From what I hear, most workers unable to do their jobs are occupying a chair and goofing off until the clock runs out on the day. Probably what I'd do too if I were still a corporate employee... silently fuming that my time was being wasted when I had stuff to do at home.
 
It is astonishing how 1 bad update from 1 vendor can tank our global systems: stranding people at airports, bringing down hospital operations and corporations. All because 1 team at 1 vendor f'd up a block of code.
And honestly taking systems down like this is something most malicious cyber actors can only dream of, and here CrowdStrike is ******* it all up and getting paid by clients to boot.
And I'm sure after it passes we'll all collectively forget how reliant we are on so many single points of failure :)
I'm absolutely certain the risk presented by the vulnerability they "fixed" with this release was far lower than the actual harm caused.

But hey, a bunch of idiots at a bunch of companies can point the finger at CrowdStrike and say "not my fault", which is really all you're gaining when you use such a product.
 
But hey, a bunch of idiots at a bunch of companies can point the finger at CrowdStrike and say "not my fault", which is really all you're gaining when you use such a product.
I get the impression there are a lot of decisions being made these days based on that sort of logic. “Due diligence has been done, it’s not my fault.”
 
Nothing I preside over seems to be impacted, probably because I avoid anything cloud-based or third-party security solutions I have no control over. Blows my mind that this is the way of IT now... I guess it's cheaper than doing everything in-house. I have to concede the usefulness of canned management solutions as well, but man... this shows the weakness of it.
I've never been convinced that it's cheaper. It's easier. It doesn't require that anyone -- especially middle and senior management -- actually know anything about what the hell they're doing. Farm it out to a vendor who could, if anyone asks (but it seems no one does any more), provide some pretty PowerPoint slides showing how it is theoretically cheaper to pay them to do it than it is to actually hire, manage, and keep trained people to do it right. And of course most importantly, it wrongly absolves the managers and execs making these decisions of the responsibility when a vendor drops an anvil on their heads.

The response never seems to be, "Hey, we're way too dependent on vendors to do this stuff". It's almost always, "We'll get them to knock a few bucks off our bill for torpedoing our entire business", or maybe "We'll find a new vendor and let them find new ways to screw us".
 
I clicked 4 toggle switches forming a 4 bit entry, then pushed the enter button until I had entered all the code to read punch tape slowly.

Then the computer read a short tape that programmed it to read fast.

The next tape was the system operating system, and now it could think!

The system normal status data was next.

The final step of rebooting after an outage from a voltage blip was the RUN button, and a deluge of alarms came in for every event that had occurred while the computer was down, plus alarms for every device that was in an alternate state on purpose.

That ended my troubles; the operators took over and took whatever action the system needed to get customer power back on after the thunderstorm that took the computer down.

All the peripherals were on leased lines.
 
And of course most importantly, it wrongly absolves the managers and execs making these decisions of the responsibility when a vendor drops an anvil on their heads.
Yessir! One lesson I learned in the earliest days of my career was how pervasive the culture of "risk transfer" is across all levels of an organization, from junior level to C-suite, whether it's hiring decisions, evaluating buy-vs-build options, or just plain old ordinary transactional decisions. A lot of choices are dictated by what is safest for the person making them and presents the lowest career risk. By no coincidence, a lot of these decisions are also made by committee, which further increases the surface area of blame so that no one person bears enormous career risk.

Doesn't really matter if you make the right decision if you never have to bear the consequences of a bad one :cool:
 
Linux in the early days was a complete s***t show. Bind and Sendmail should make ANY *nix admin start to shiver (I ran qmail; postfix was also a horror show... still is). How many TLS/SSH/SSL CERT advisories are there? SQL injection attacks are also common... all because devs only want to learn the latest language and never the entire stack.
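
To make the SQL injection point concrete, here's a minimal sketch using Python's built-in sqlite3 and a made-up `users` table, showing the string-built query next to the parameterized one:

```python
# Minimal illustration of SQL injection, using Python's built-in sqlite3
# and a hypothetical `users` table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "bob' OR '1'='1"   # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows_bad)   # returns every row, including alice the admin

# Safe: a parameterized query treats the input as data, not SQL.
rows_ok = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows_ok)    # empty -- no user is literally named "bob' OR '1'='1"
```

Placeholders have been in every mainstream driver for decades; the holes come from skipping them.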

Also, you cannot blame MS for this... a third party uploaded an update that was pushed out. As I said before, the real issue is that companies are allowing updates to be applied automatically. Again, FireEye anyone? The exact same thing.
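
Staging updates instead of letting them flow straight to the whole fleet isn't exotic, either. Below is a toy sketch of the ring/canary idea; `apply_update` and `hosts_healthy` are purely hypothetical stand-ins for whatever your endpoint-management tooling actually provides:

```python
# Toy sketch of a ring/canary rollout instead of fleet-wide automatic updates.
# apply_update() and hosts_healthy() are hypothetical placeholders for whatever
# your endpoint-management tooling actually exposes.
import time

RINGS = {
    "canary":  ["test-vm-01", "test-vm-02"],                # sacrificial lab boxes first
    "early":   ["it-ws-01", "it-ws-02", "it-ws-03"],        # then IT's own workstations
    "general": [f"prod-ws-{i:03d}" for i in range(1, 51)],  # the rest of the fleet last
}

def apply_update(host: str, version: str) -> None:
    print(f"pushing {version} to {host}")   # placeholder for the real push

def hosts_healthy(hosts: list[str]) -> bool:
    return True                             # placeholder: check heartbeats, boot loops, etc.

def staged_rollout(version: str, soak_minutes: int = 60) -> None:
    """Push to each ring in order, let it soak, and halt if the ring looks unhealthy."""
    for ring_name, hosts in RINGS.items():
        for host in hosts:
            apply_update(host, version)
        time.sleep(soak_minutes * 60)       # soak before widening the blast radius
        if not hosts_healthy(hosts):
            raise RuntimeError(f"halting rollout: ring {ring_name!r} unhealthy on {version}")

if __name__ == "__main__":
    staged_rollout("content-update-2024-07-19", soak_minutes=0)  # zero soak just so the demo finishes
```

The version string and ring names are made up; the point is simply that the "general" ring never sees an update the canaries haven't survived.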

My personal server is a cloud-based CentOS 5 box that has been happily doing web/db/email for damn near 15 years.
 
I clicked 4 toggle switches forming a 4 bit entry, then pushed the enter button until I had entered all the code to read punch tape slowly.

Then the computer read a short tape that programmed it to read fast.

The next tape was the system operating system, and now it could think!

The system normal status data was next.

The final step of rebooting after an outage from a voltage blip was the RUN button, and a deluge of alarms came in for every event that had occurred while the computer was down, plus alarms for every device that was in an alternate state on purpose.

That ended my troubles; the operators took over and took whatever action the system needed to get customer power back on after the thunderstorm that took the computer down.

All the peripherals were on leased lines.
On board the sub I was on, the MDF had a panel of switches and indicators that let you run embedded code (ROM), or if you were really bored you could position the heads manually, then enter the READ ROM start program code into the register and read either the sector, the track, or the entire binary. All the disk file locations were hard-coded. Crazy, crazy times.

Early IBM HDDs (this would have been the mid-1990s) had a Cessna landing-gear hydraulic pump and hydraulically positioned heads. Yes, servicing the HDD with 5056 hydraulic oil was a PM item.
 
Linux in the early days was a complete s***t show. Bind and Sendmail should make ANY *nix admin start to shiver (I ran qmail; postfix was also a horror show... still is). How many TLS/SSH/SSL CERT advisories are there? SQL injection attacks are also common... all because devs only want to learn the latest language and never the entire stack.

Also, you cannot blame MS for this... a third party uploaded an update that was pushed out. As I said before, the real issue is that companies are allowing updates to be applied automatically. Again, FireEye anyone? The exact same thing.

My personal server is a cloud-based CentOS 5 box that has been happily doing web/db/email for damn near 15 years.
What are you using as an email server now?
 