I take a great deal of pride, and spend a great deal of time, pulling together my blog posts. Why? I don't want to waste your time. Sure, I could slap together some meaningless drivel each month, perhaps just poaching links and re-blogging what others have said. What would be the point, right? So instead, like a good chicken soup, my blogs are hand-crafted using an original recipe and the finest ingredients. Most important, a post is never rushed; it's allowed to simmer a little. I'm a bit of a perfectionist, so I will sample it a few times, walk away, and come back. It'll be done when it's done. By the way, be sure to click the links in this blog. Makes everything taste better.
"The blacksmith and the artist / Reflect it in their art / They forge their creativity / Closer to the heart" - Rush, "Closer to the Heart"

Let's start from the very beginning. A very good place to start. When you read you begin with A, B, C. When you count your servers you say 1, 2, 3…and 4 and 5 and beyond. Jeez. The server room seems to fill up like John Belushi at the Faber College lunch buffet in Animal House. Need to store some data? Order a server and add it to the network. Want to start using that new accounting system? No problem. Get with Dell or HP and a new server will arrive about ten days later. Your company needs to move from an old version of Microsoft Exchange to a new one? Call IT and tell them to order a few more of those metal monsters. Before long, your server "corner" has become a server room. And it isn't a little PC with a power strip any longer. No, your server room now looks like the bridge of the starship Enterprise. You can't help but stare at it and say, "Jeez, when did we get so advanced?" Then, in the next breath, you exclaim, "How the hell did anyone get anything done before?"
We in the biz call this "server sprawl": that seemingly endless, ever-growing footprint of dedicated server hardware. And how did this happen? Blame it on software. Software is kind of like that spoiled kid you knew in high school who talked like a Valley Girl; a bit selfish, slightly conceited, and a total prima donna. See, long ago, everyone in the software business pretty much decided that their software was special. So special that each program required its own dedicated server. For the software companies, it was a rather obvious (albeit expensive for the customer) way to reduce the amount of headbanging in the support department. Put another way: if my software, and only my software, is running on this here server, nothing else should interfere with it. That should give me stability, and troubleshooting should be easier. There is validity in this thinking. However, this widely adopted practice led many a propeller-head to stack companies full of single-purpose, dedicated servers, each with lots of computing power. When they worked, they worked hard; yet they spent a fair amount of time kinda hanging out, waiting...waiting...Bueller...Bueller. Unfortunately, there wasn't much we could do about it. Remember, you expect us IT guys to predict the future. What do I mean?
Well, we're given a budget to buy gear; we're also given a directive that it
must last for five years, perhaps longer. Therefore, we need to anticipate
heavy and light work days; we have to be ready for a company that grows, adding
new or even temp employees at a moment's notice; we must have enough power to
push through the crush of end-of-month reporting or the addition of 10,000 new
SKUs. And of course, we know how to set phasers to stun. So while you believe we all speak Klingon, or think we sit around half the day debating whether the technology in Star Trek or Star Wars is actually better, there is usually logic, knowledge, and a bit of clairvoyance behind our decisions. Your servers, each and every one of them, were likely sized right given the rules by which we played. But man, there was so much computing horsepower just sitting around, unused most of the time. If only there were a way for one server to use the idle resources of another; if only we could fuse these metal boxes together instead of duplicating or triplicating the same resources across all these individual servers. Now that is a technology you would jump all over, right? I feel the water in the soup pot heating up a bit. Read on.

First Ingredient:
Hypervisor
A hypervisor is a thin layer of software that slips in between the server hardware and the operating systems running on it. Its trick: it lets one physical server host several independent "virtual" servers at once, each convinced it has the whole machine to itself. Fun bit of trivia: IBM invented the hypervisor way back in the 1960s for its mainframes, and it took decades, and companies like VMware, to bring the idea to the everyday servers the rest of us buy. Remember that story; it comes up again shortly.

Second Ingredient:
Core Technology
So why did virtualization take so long to hit the mainstream? Because back in the '90s and early 2000s, server hardware was still pretty expensive. The same was true for hypervisor licenses. So while this virtualization technology looked promising, it was still cheaper to have multiple dedicated servers. So we bought servers. A lot of them. And while we were all bopping down the network road with all these servers humming along, we learned a few things. First, like anything else, it takes money and expertise to maintain them. Second, about every five years, they need to be replaced with something newer and faster.
See, for all these years, we had been taught to chase the fastest processor (aka CPU) possible. After all, the faster the processor, the quicker the server. Then we were taught that two processors were better than one. And just like Noah did with the Ark, we started buying servers with processors two-by-two. Man, these suckers were pricey. Thousands of dollars just for the CPUs alone. Did it matter? Not so much. Those multi-processor servers never seemed to give us that gut-wrenching speed we wanted, let alone paid for. Ugh. We needed a technology that could deliver and also be affordable. A VW with the horsepower of an aircraft engine. Flash forward to 2005. Intel and AMD introduced the masses to something called multi-core processors. You'll notice I said "introduced the masses." Guess who actually developed the first known multi-core processor? Yup…IBM. You don't need to know anything else about that story except to say it ends like the last one I just told you.
Anyway, remember those two processors I was just talking about? Well, they take up a lot of space inside of a server, mostly because they sit side by side in separate sockets. And if you think space is at a premium in a server, think about a tiny smartphone. Which raised the question: why spread out when you can consolidate? And Intel did just that; they took two processors, stripped them down to their cores and, instead of setting them in separate sockets, they packed them together on a single chip, the way you stack dishes in one cabinet instead of spreading them across the counter. Doing so used much less space and power, reduced heat, and gave us lightning-fast performance. And, thanks to shedding all that extra baggage, prices dropped significantly. What started as 2 cores on a chip in 2005 has climbed to 10 cores now. Just unbelievable. My friends, this was the turning point. Core technology is the Emancipation Proclamation for virtualization. What happens next is quite amazing.
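(Quick aside for the hands-on crowd: you can ask your own machine how many cores it reports with one line of Python. Note that `os.cpu_count()` returns logical cores, so hyper-threading can make the number higher than the physical core count.)

```python
import os

# os.cpu_count() reports logical cores; with hyper-threading enabled,
# this can be double the number of physical cores.
print(f"This machine reports {os.cpu_count()} logical cores")
```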
Mixing It Together
So now we have this hypervisor thingy, and we have servers with all of these cores inside. What can we do? Say we have a single server with a processor containing four cores and a bunch of memory. We can now run, say, four different virtual servers, electronically divvying up the cores and memory amongst them, with each one running Windows all by itself. In olden times, we would have needed four physical servers to get this done. Not any longer.
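If you like to see ideas in code, here is a back-of-the-napkin sketch of that divvying-up, written in Python. To be clear, this is a toy model I made up for illustration; real hypervisors like VMware ESXi, Hyper-V, and KVM do this with far more sophistication.

```python
# Toy model of a hypervisor divvying up one physical host.
# Illustrative only -- not a real hypervisor API.

class Host:
    def __init__(self, cores, memory_gb):
        self.free_cores = cores          # unassigned CPU cores
        self.free_memory_gb = memory_gb  # unassigned RAM
        self.vms = {}                    # name -> (cores, memory_gb)

    def add_vm(self, name, cores, memory_gb):
        # Refuse the new VM if the host can't cover the request.
        if cores > self.free_cores or memory_gb > self.free_memory_gb:
            raise ValueError(f"Not enough resources left for {name}")
        self.free_cores -= cores
        self.free_memory_gb -= memory_gb
        self.vms[name] = (cores, memory_gb)

# One physical box, four "servers" living inside it.
host = Host(cores=8, memory_gb=32)
for name in ["email", "accounting", "files", "web"]:
    host.add_vm(name, cores=1, memory_gb=6)
print(host.vms)
```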
Hmmm, I just re-read all that stuff, and maybe it is a bit hard to wrap your mind around. Try this out. Think of your software programs as people. Naturally, people share some of the same attributes: things like arms, legs, eyes, and feet. Yet we're also all rather unique. Software is exactly the same. Programs share common ground in compatibility; hundreds if not thousands of them are written to work with Windows. Yet they are unique in what they do for you and your business. I have software for email, accounting, even for playing music and movies. They are all compatible (i.e., the shared attributes) yet do very different things. Now then, think of servers as small cars, perhaps Mini Coopers. Cars hold people just like servers hold programs. Get it? Good. Let's keep going.
So, imagine that you have a group of people, each in
their own little cars, who all work together on the same floor. We typically
call them a department. Like a Sales Department. In the computer world, a group
of programs (ie people) that each run on their own servers (ie cars) are linked
together to form a network (ie department). Now you've got it. Stay with me
here. These people all take the same highway to and from work, and, on most
days, are all sitting in traffic together. And we all know about traffic; it
slows you down at the worst time…all the time. Believe me, I am a connoisseur
of traffic; I live in Atlanta. Everyone drives their own car. Hence the reason
for the traffic. I get it though. You have your own space, your own music, and
you don't have to interact with anyone unless you wish to do so. So we have all
of these "cars" full of "people" that make up the
"department" that is your network. And, like the interstates,
networks have their share of traffic and congestion. Sometimes the speed is good
while other times it is slow. When rush hour comes and you are hammered in
gridlock traffic, you can't help but wonder what the heck we are doing. All
these cars. All this gas. All these people. All driving their own cars. Down to
your bones you know there is a better way. There is, it's called carpooling. In
the network world, when servers are allowed to carpool, it is called
virtualization! I know, I know, it just clicked. Keep going.
So let's park all those little cars and pile everyone into one big Chevy Suburban. Same people, same jobs; they're just sharing one big vehicle instead of each driving their own. In computer terms, the Suburban is the host server and the riders are your programs.


Now, each "person" no longer needs a car. That is a
huge savings. Second, we're making better use of the one resource we are
sharing, in this case the Suburban. Take space for instance. The space within the
Suburban can be dynamically assigned to each of the people based on their needs on
any given day. If they are travelling light, they don't need as much space. If
they have a laptop bag, luggage, a projector, they can take up more space. It's ok, everyone understands and we've got some room to spare here. Now, let me hook the fish for you and bring it all together.
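Hypervisors do the same seat-shuffling with cores and memory; VMware's "memory ballooning" is one real-world example of reclaiming and re-granting RAM on the fly. Here's how that might look in the toy Host model from earlier; again, a made-up sketch, not a real API.

```python
def resize_vm(host, name, new_cores, new_memory_gb):
    """Grow or shrink a VM's share of the host -- the 'more room
    in the Suburban today' move, using the toy Host from above."""
    old_cores, old_mem = host.vms[name]
    # Check the new request fits once the old share is handed back.
    if (new_cores > host.free_cores + old_cores or
            new_memory_gb > host.free_memory_gb + old_mem):
        raise ValueError(f"Host can't fit the new size for {name}")
    host.free_cores += old_cores - new_cores
    host.free_memory_gb += old_mem - new_memory_gb
    host.vms[name] = (new_cores, new_memory_gb)

# The accounting VM needs more muscle at month-end close.
resize_vm(host, "accounting", new_cores=2, new_memory_gb=12)
```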
Now, let me hook the fish for you and bring it all together. In the networking world, carpooling means you don't have to buy as many servers. Thanks to core technology, the performance that used to require multiple servers is now available in one large server…for less money than, say, eight smaller ones. And the performance is better. Much better. Fewer servers also means less infrastructure to buy and maintain; less infrastructure means lower operating costs; lower operating costs means savings for everyone. Cloud services would not have become reality without virtualization; the cost of buying servers alone would have placed those services out of reach for most people. Ever asked yourself how services like Gmail or Dropbox are offered for next to nothing? Because they can now put upwards of fifty dedicated servers (aka virtual machines) inside one robust server (aka the host server). That wasn't a typo. Fifty. To the users and the software, there are fifty different servers. And that remains true. However, they all live in one physical piece of hardware about three inches tall. Ponder that for a moment. An eight-foot-tall stack of individual servers reduced to a single box less than three inches in height. With virtualization, you really get to use the server you have paid for.
Here is another advantage: suppose you wish to test a new email program or accounting package. That used to mean buying another physical server, a pretty expensive investment considering you are unsure which program you will actually use. With virtualization, you just add another virtual machine to the host. Easy. Like adding another person to the Suburban. What if you want to upgrade the host server? Or it fails due to a hardware issue? This used to be a catastrophe: days spent reloading the server, your programs, and all your data. Not any longer when you are virtual. Remember, each virtual machine is really just a set of files. Here is how easy it becomes: get a new server, plug in your USB thumb drive, and start up your virtual machines. No restaging, no day of slow Internet thanks to massive Windows updates. You are back in business in hours instead of days.
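And here's that recovery story in the toy model one last time. In real life you'd reach for tools like VMware vMotion or Hyper-V live migration; this sketch just shows the idea of picking up VMs and setting them down on new hardware.

```python
def evacuate(old_host, new_host):
    """Move every VM off a failed or retired host onto a new one.
    In real life: copy the VM files over and re-register them."""
    for name, (cores, mem) in list(old_host.vms.items()):
        new_host.add_vm(name, cores, mem)
        del old_host.vms[name]

# The replacement box arrives; everyone piles into the new Suburban.
new_host = Host(cores=8, memory_gb=64)
evacuate(host, new_host)
print(new_host.vms)  # back in business in hours, not days
```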

I'll tell you a tale of soup, servers, and SUVs.
A story which is sure to please.
And if you will pardon the tease,
I'll impress you with my expertise.