FS - VMware Lab

sferguson524

Hey guys,

I finished up my VCP and am looking to sell my lab servers. I have two Dell 1950s: one with dual 3.3 GHz Xeons, 16GB of RAM, and two 900GB 10K SAS drives. The other has a single Xeon (can't remember the speed; I have two 3.3s that can be installed, but it needs a Gen III motherboard), 8GB of RAM, and three 73GB 10K SAS disks. Shoot me an offer and we'll talk.
 
The 1950s are a bit long in the tooth. I don't want to blast your post though. They're fine for exactly what you used them for. Ran a lot of them at the last company. Solid hardware.

If the price is right and folks have the space and don't mind the noise and power draw of these, they'll work great.

Problem is, we just built a couple of screaming-fast desktop machines for the DBAs at the office, brand new for about $1,000, and they smoke "server class" hardware.

The first hard test of the machines was firing up 25 VMs on them just to see how they behaved. 12 cores, piles of RAM, they're just screamers for virtually (heh) no money.

I suspect most IT folk have a similar machine in their home office.

Conversely, a fairly wimpy Dell R515 with 3TB of 15K drives and 64GB of RAM ran $6,500. Not worth it. It'll be the last one unless we decide something truly has to stay on-site only.

I think VMware is going to get the doors blown off by some of the cloud services.

Microsoft's cloud offering is actually unbelievably brilliant, and I'm no MSFT fan. One of our devs did a demo for me two days ago. It's impressive.

We currently use a provider that runs our VMware farm for us. I'm actively searching for the right vendor to switch to, so that if I need a new VM, I have an API and a way to simply script the build and startup of VMs as needed. No "turn in a ticket and wait".

VMware is too expensive and doesn't scale that seamlessly. If I have to build the farm myself, it'll be KVM or XenServer, but I don't want to bother with building a farm. BTDT; not worth it anymore.

Meanwhile, MSFT just offered $200 in free service as a demo last week with 30 days to use it. Google responded with $300 in free service and 60 days to use it.

The virtual cloud price war is now getting serious. Amazon proved the model and got it going. Their pricing is way too complex and wastes my time, though. Their newer competitors just fixed that.

Using tools like Ansible, I'll eventually be able to just pick which cloud to fire up a machine on, sit back, and watch Ansible build the machine and move services to it. The hardest part is waiting for the data to copy over.
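For the curious, the shape of that is roughly the playbook below. This is only a sketch assuming EC2 as the chosen cloud and Ansible's ec2 module; the AMI, key pair, security group, and host group names are all placeholders.

```yaml
# Illustrative: fire up a VM on EC2 and register it in the play's
# inventory so later plays can configure it and move services over.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch a small instance
      ec2:
        region: us-east-1
        image: ami-12345678        # placeholder AMI
        instance_type: t2.small
        key_name: ops              # hypothetical key pair
        group: web                 # hypothetical security group
        wait: yes
      register: launched

    - name: Add the new host to the in-memory inventory
      add_host:
        name: "{{ item.public_ip }}"
        groups: new_web
      with_items: "{{ launched.instances }}"

# Later plays against "new_web" would install packages, sync data,
# and cut services over. Swapping clouds means swapping this first play.
```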

Meanwhile, the hosted VMware place called yesterday to warn me that they're upgrading the VMware version on my private VMware farm. They claim there will be no downtime. When I asked what new features I get, they said, "Oh, do you do multi-site replication?" No. "Ummm, well... It'll help us manage it better for you!"

Great. Put my production environment at risk of human error for your convenience. Sounds like a great plan. Friday night into Saturday night works for me. Sure. Great.

Soooo... good hardware and useful for that cert, but I'm in the lucky position that they neglected their virtual farm long enough that I can skip physically owned virtualization hosts and go straight to "virtual virtuals," for lack of a better term. For me there'll be little benefit to physically owned virtuals in a year or so.

Just an opinion.
 
I'm a pretty huge AWS fanboy and will be speaking again at their conference in a few weeks. IMO there isn't a cloud provider out there that comes even close right now to offering everything they offer. They're also not just riding on their success; they're innovating faster than anyone can catch up. The price drops just never end. It's amazing how often and how aggressively the prices keep getting cut.
 
Understand. I just don't need most of their complexity. But I'll look at it again. Lower pricing is nice. :)

The vast majority of our virtuals are CentOS minimal installs with Apache/PHP and tiny storage and RAM requirements.

Just a lot of them.

My largest "server" only needs 4GB of RAM and only two machines have more than 4GB of disk. The few big data machines are attached to a NAS/SAN type setup.

Fast creation and destruction of machines that do single tasks is the current methodology. That could change with scale, but not for a while unless something really takes off.
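For a sense of how small these builds are, one of those single-task boxes is roughly the playbook below. It's a sketch only; the host group name and package list are illustrative.

```yaml
# Illustrative build for one of the small CentOS web VMs.
- hosts: web
  become: yes
  tasks:
    - name: Install Apache and PHP from the distro repos
      yum:
        name:
          - httpd
          - php
        state: present

    - name: Make sure Apache is running and starts on boot
      service:
        name: httpd
        state: started
        enabled: yes
```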

The more urgent problem is that the former admin never designed continuous patching into the environment. If development never figures out how to handle that, I'll be doing it by moving whole servers and forcing a regression test cycle.
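If it lands on my plate, the interim fix would probably be a scheduled play along these lines; a rough sketch assuming CentOS/yum, with a made-up group name and a serial rollout so a bad update doesn't take a whole tier out at once.

```yaml
# Illustrative patch play: update a couple of hosts at a time.
- hosts: web
  become: yes
  serial: 2
  tasks:
    - name: Apply all pending yum updates
      yum:
        name: '*'
        state: latest
```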

He also painted himself into a bad corner with cfengine; I already killed it and went to Ansible. And he stuck with Big Brother for monitoring?! Beat head here. Nagios, here we come.

Devs now develop on virtuals on their desktops, and the Ansible playbooks that build those machines via Vagrant are reusable in the QA and production VM environments. That got done for almost all environments this week, after a lengthy effort to get the idea into everyone's brains. Dev gets it now.
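The reuse itself isn't fancy: one top-level playbook, different inventories. Roughly (the role and group names here are placeholders):

```yaml
# site.yml - the same playbook for every environment.
#   dev:  run by Vagrant's ansible provisioner against its generated inventory
#   qa:   ansible-playbook -i inventories/qa   site.yml
#   prod: ansible-playbook -i inventories/prod site.yml
- hosts: web
  become: yes
  roles:
    - common
    - apache_php
```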

I had to be a mean old sysadmin and deny a playbook pull request today that included a shell script to build a Perl module from CPAN... that won't scale. No. Build it once, put it on the internal package repo, and let Ansible install it.
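In other words, instead of a shell task compiling from CPAN on every host, the playbook task becomes a one-liner against the internal repo. A sketch, with a made-up package name and repo id:

```yaml
# Preferred: install the pre-built RPM from the internal yum repo
# rather than compiling it from CPAN on every single host.
- name: Install the packaged Perl module
  yum:
    name: perl-Some-Module      # hypothetical package name
    state: present
    enablerepo: internal        # hypothetical internal repo id
```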

The above is part of why I haven't flown in months. I'm tired. Ha. And it continues.

New firewall next month. New internal network design implementation after that. New backup server/system in parallel. New XenServer to get us by until we have auto-build on AWS, MSFT, DigitalOcean, whatever service we like... And a new phone system in late Dec or Jan.

I'll come up for air around Feb. Ha.

If they weren't paying well enough, there's no way I'd kill myself like this.

Next year had better include additional staff. If it doesn't, the plan will start to move toward "personal exit strategy".
 

A t2.small with 2GB of memory, bought as a 1-year reserved instance, works out to $12/month.

A t2.medium with 4GB of memory, 1-year reserved, works out to $25/month.

Storage is basically free for how much you're talking about using. It'd be like pennies.

The real power of AWS is all the services they provide. You end up running MUCH less infrastructure as a result. I use CloudWatch for monitoring, RDS for MySQL, ElastiCache for memcached and Redis, S3 for storage, GREAT load balancing, etc., etc. Each one of those services costs me far less than if I ran it myself, and quite frankly Amazon does a HELL of a job running them. Better than I could do myself.

Plus, if you build it all in a VPC, you get a real private network within their cloud, and you can hook it up to your network with a VPN if you want. It won't even feel like it's remote.

DigitalOcean is nice if you want a box to run WordPress on for a small site. If you're trying to build out a real environment to run **** that really matters on, they're really lacking in features.

If the company grows to where you suddenly have people who care about security standards and compliance, there is no cloud that is compliant with as many standards (PCI, etc.) as AWS.

The number of times a server actually fails (and when it does, you just start it again and it comes back on new hardware) is incredibly low. I've only seen it happen ONCE. I can't say the same about the hardware in our traditional DC.

Plus, I'm migrating everything to run on Auto Scaling now, so if a server were to **** out, there'll be another one running before I even know what happened.
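For what it's worth, even that piece can be driven from the same kind of playbooks mentioned earlier in the thread. A minimal sketch using Ansible's ec2_lc and ec2_asg modules, where the names, AMI, and sizes are all placeholders:

```yaml
# Illustrative: a launch configuration plus an Auto Scaling group that
# keeps at least two instances running and replaces any that die.
- hosts: localhost
  connection: local
  tasks:
    - name: Create the launch configuration
      ec2_lc:
        name: web-lc                 # hypothetical name
        image_id: ami-12345678       # placeholder AMI
        instance_type: t2.small
        key_name: ops
        security_groups: [web]
        region: us-east-1

    - name: Create the Auto Scaling group
      ec2_asg:
        name: web-asg                # hypothetical name
        launch_config_name: web-lc
        min_size: 2
        max_size: 4
        desired_capacity: 2
        availability_zones: [us-east-1a, us-east-1b]
        region: us-east-1
```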

AWS makes it incredibly easy to build out infrastructure that is quite fault tolerant and lives in multiple data centers with extremely fast fiber between them. All of their services play well across multiple availability zones, and their load balancers make it stupid easy to route traffic.

Yes, it is a bit complicated and perhaps overwhelming while you learn their technology, but it's damn good technology. Finest I've ever used.

I would **LOVE** to move PoA to AWS to get it off the 5-year-old Tyan 1U sitting in our data center, but it would cost money and we have no stream to pay for that. At some point we'll probably have to solve that problem, because we won't have our traditional data center forever. We are moving everything out to AWS as fast as we can.
 
Good info. I'll have to do some research into the few bigger servers. They have terabytes of PDFs on them; those are the PITA. They also suck down lots of monthly bandwidth shuffling those files around. (Worse, the files start life as faxes, but we third-party the fax crap. They're PDFs by the first time we touch them.)
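If those PDF stores ever do head to S3, the upload side is the easy part; it's roughly one Ansible task per file. A sketch, with the bucket name and paths entirely made up:

```yaml
# Illustrative: push one archived PDF up to S3.
- name: Upload a PDF to the archive bucket
  s3:
    bucket: pdf-archive              # hypothetical bucket
    object: /2014/doc-0001.pdf       # hypothetical key
    src: /srv/pdfs/doc-0001.pdf      # hypothetical local path
    mode: put
```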

BTW: Didn't want to give the impression I'd use DigitalOcean for anything Production. Even they don't recommend that. That's trashable Dev if anything at all. Probably no point in using them at all.

Someone asked how many users; that's a complex question. Which company? ;)

About 80 in the building and 10 or so fully remote. And one of the companies is a 50-person call center.

If the whole environment hadn't been significantly neglected for a few years, it wouldn't be awful for two admins. It's the mix of maintaining the current status quo while upgrading EVERYTHING in the right order by priority that's a butt buster right now. We have the option of bringing in consultants, but they usually cause more harm than help for the price tag. I do have one excellent Windows guru who may want a side gig updating and splitting the main internal Windows server into a farm properly split by server role.
 
I would **LOVE** to move PoA to AWS to get it off the 5-year-old Tyan 1U sitting in our data center, but it would cost money and we have no stream to pay for that. At some point we'll probably have to solve that problem, because we won't have our traditional data center forever. We are moving everything out to AWS as fast as we can.

What would it cost for the year?
 
You could probably sell enough PoA swag to cover that cost if you wanted.

And there are some painless ways to offer swag via online providers these days (although I'm not sure how much of a cut they take vs. the old way of buying a batch of printed stuff and reselling it).


Maybe a special T-shirt or hat with text that alludes to what is happening: "PoA, cleared direct AWS via ____," etc. Or make it look like a flight plan form with a route from ____ (the airport near the current server location) to "AWS."
 