Old 11-07-2013, 10:34 AM
 
Location: Wandering.
3,549 posts, read 6,661,462 times
Reputation: 2704


Quote:
Originally Posted by Knight2009 View Post
Please understand, I am definitely not trying to ask a dumb or uninformed question here, but is there a particular reason why you would want to use a VM environment if you could have the actual physical hardware itself instead?

Also, just curious: do you mind if I ask what VM software you are currently using? I have not really had a chance to experiment with commercial VM software yet, so that may be worth looking into more, for me...
There are lots of reasons to use VMs, even if you have the physical hardware.

The main reason is to not have to have the physical hardware in the first place:
I have probably a dozen test environments, ranging from full installs of various OSes (both desktop and server) to clean debug environments, and I only have a single machine.
I run a complete dev environment for a client in a VM, just to keep their code, email, etc. completely separate from mine (I work in this environment 5+ days a week, and on my i7-based rig I can't tell the difference between being in the VM and being on the desktop). This used to be a physical machine that sat on my desk for a number of years.
I also use the VMware tool for dumping physical machines to virtual machines whenever I retire a physical box that I may want to access later.

Even with dedicated hardware there are a bunch of solid reasons to use virtualization:

Multiple copies of the same machine, in different states: I usually keep a bare, clean copy of an OS as a VHD, then copy it to a new file when I need a new machine. That means a single install, but multiple different machines / configurations. You can also use a base image and create child images from the base installation.
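
In case it helps picture that workflow, here is a minimal sketch of the "one clean install, many machines" idea: keep a golden VHD around and copy it whenever a fresh machine is needed. The paths and file names are hypothetical.

```python
# Clone a "golden" base disk image into a new per-machine disk.
# Paths are placeholders for illustration only.
import shutil
from pathlib import Path

GOLDEN = Path("D:/VMs/base/win2008r2-clean.vhd")   # the one-time clean install
TARGET_DIR = Path("D:/VMs")

def new_machine(name: str) -> Path:
    """Copy the golden image into a new disk file for a new VM."""
    dest = TARGET_DIR / name / f"{name}.vhd"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(GOLDEN, dest)   # plain file copy; the hypervisor just sees a new disk
    return dest

print(new_machine("sharepoint-test"))
```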

The ability to run multiple VMs at once, each with a different installation / configuration. This is especially helpful for replicating multi-server setups where each server has a specific job, or for working with client / server environments where each side needs to be on a different machine.

The snapshot feature of a VM is invaluable. In a few seconds you can create a snapshot of a clean environment; then, if you install something and it fails, you just click one button and revert to that point in time. This is image-based, not something like a Windows system restore point that is "kind of a backup of some things, maybe, if you're lucky." You can also keep multiple snapshots of different points in time.
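
The snapshot / revert cycle can also be scripted. This is only a rough sketch using VMware Workstation's vmrun utility from Python; the .vmx path and snapshot name are assumptions, so check vmrun's own help on your install for the exact syntax.

```python
# Take and restore VM snapshots by shelling out to vmrun (VMware Workstation).
import subprocess

VMX = r"C:\VMs\dev\dev.vmx"        # hypothetical VM definition file

def take_snapshot(name: str) -> None:
    """Create a named snapshot of the VM."""
    subprocess.run(["vmrun", "-T", "ws", "snapshot", VMX, name], check=True)

def revert_to(name: str) -> None:
    """Roll the VM back to a previously taken snapshot."""
    subprocess.run(["vmrun", "-T", "ws", "revertToSnapshot", VMX, name], check=True)

take_snapshot("clean-os")          # before trying a risky install
# ... install something, watch it blow up ...
revert_to("clean-os")              # back to the known-good state in seconds
```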

Another advantage is that you can upgrade the hardware (even do a complete hardware swap) without changing anything about the VM. If you go from a 4-core to a 12-core machine, you just assign additional cores to the VM in its configuration. Basically, it insulates the VM from the physical box. I had to reload my personal machine a few weeks ago, and while it took me a couple of days to get everything put back together, my client's VM was fully functioning as soon as I loaded VMware.


As for tools: I use VMware Workstation, since I run my VMs on top of my desktop OS. For servers there are better tools, bare-metal hypervisors that install directly on the hardware and let you run multiple simultaneous VMs.

 
Old 11-07-2013, 11:09 AM
 
Location: Wandering.
3,549 posts, read 6,661,462 times
Reputation: 2704
Quote:
Originally Posted by Knight2009 View Post
One other reason -- aside from the server programs that I had planned to install on the second system -- that I chose this particular configuration was expandability, scalability, and relative future-proofing. For example, I bought the first system I listed almost 6 years ago, and it still meets or exceeds contemporary hardware specs today. You can't really do that (i.e., future-proof against inevitable hardware inflation) if you buy a mainstream consumer system, because those are locked down in terms of maximum hardware expandability. If I had bought my first system with mainstream consumer hardware 6 years ago, it would very likely have been limited to a maximum of 4-8 GB of RAM. I like to buy hardware that lasts for a 7+ year lifecycle, not the typical 2-3 year lifecycle that mainstream hardware faces. With my recent upgrades to the first system (32 GB RAM and 2x 3.4 GHz processors), I can probably still get at least a few more years out of it, perhaps even a total lifecycle of 10 years, since the RAM maxes out at 128 GB if I decide to do further memory upgrades later on.

The second system is intended as the successor to the first, also intended for a long lifecycle. I figure that if and when I need to later on, I can simply upgrade the processors. The RAM maxes out at 512 GB. Accordingly, I am planning on a 7+ year lifecycle for this one as well... we'll see if it works out that way.

So it is not just the server software intended for testing -- the hardware specs also matter a great deal to me, which is probably why, in my case, a VM by itself would still not be enough... again, since I prefer hardware specs that are much more scalable than conventional consumer desktop systems.

ETA: I take backup images of the 2 systems I mentioned almost religiously, so that I can always roll back to a stable configuration if I need to. Right now Windows Server is installed, configured, and working fine (I'm glad I made several backup images, because I have already had to roll back to one once); I just have to install and configure the other software items that I mentioned...

ETA: Agreed, the test rig is very expensive, but it is the only way I am currently able to train myself hands-on with the referenced software packages such as Windows Server, SharePoint Server, etc., since my workplace does not really offer any hands-on training for these programs... and although I work with SharePoint Server every day with elevated rights, my workplace locks down permissions so that I do not have access to components such as Central Admin that would let me train myself on the job. So training at home on my own time, on my own home server test rig, is really my only viable option...
Still not sure I see the point of building server-grade hardware for a test / learning machine, unless it's actually going to see production loads (load testing, or something like that) or full-time use, but I probably don't fully understand everything about the situation.


Even for production servers, I don't buy or build servers anymore; it's all VPS. For years I would buy or build servers and colocate them, but most of the time that hardware sat idle, and it was costing several hundred dollars a month for each server in the colo, plus the cost of the server itself. The hardware changes so fast that a few-year-old server can't really keep up with newer consumer-grade equipment, and the resale value on these boxes is really bad.
 
Old 11-07-2013, 12:13 PM
 
5,460 posts, read 7,757,868 times
Reputation: 4631
Quote:
Originally Posted by Skunk Workz View Post
Still not sure I see the point of building server-grade hardware for a test / learning machine, unless it's actually going to see production loads (load testing, or something like that) or full-time use, but I probably don't fully understand everything about the situation.
I have a separate, more basic, consumer-grade desktop that I use for production purposes. The 2 server boxes have been used continuously over the years to test, evaluate, and train myself on various enterprise software packages, such as Windows Server 2008, Server 2008 R2, Server 2012, and some others.

Quote:
Even for production servers, I don't buy or build servers anymore; it's all VPS. For years I would buy or build servers and colocate them, but most of the time that hardware sat idle, and it was costing several hundred dollars a month for each server in the colo, plus the cost of the server itself. The hardware changes so fast that a few-year-old server can't really keep up with newer consumer-grade equipment, and the resale value on these boxes is really bad.
My apologies, and please forgive my unfamiliarity with them, but can you please elaborate on what "VPS" and "COLO" refer to?

As far as servers being able to keep up with consumer-grade equipment, below are the hardware specs on some of the server-class makes and models that I have used in the past. For example, the HP xw8600 was released around 2007 if I remember correctly, and even today in 2013 / close to 2014 it can still run a configuration of two quad-core 3.4 GHz Xeons and 128 GB total RAM. Even the ultra-high-end consumer systems of today still only come with 64 GB RAM, and while today's processors might be a little faster or more efficient than the processors of back then, I think the ability to keep reusing a test server from 2008 to almost 2014 and beyond, without the hassle of replacing it with a brand-new server, is a major plus IMHO in favor of buying long-lasting, ahead-of-the-curve hardware.

http://h18000.www1.hp.com/products/q...a/12849_na.pdf

http://h18000.www1.hp.com/products/q...a/13278_na.PDF

http://h18004.www1.hp.com/products/q...a/14264_na.pdf

ETA: I also at one time had a server test rig running 2x 4.4 GHz Xeon X5698 processors with a maximum memory capacity of 192 GB, but the hardware and motherboard unfortunately died prematurely on me, and it was subsequently replaced by my current, second test rig.
 
Old 11-07-2013, 12:36 PM
 
14,780 posts, read 43,668,651 times
Reputation: 14622
Quote:
Originally Posted by Knight2009 View Post
I was wondering if I could please ask a question about processor hardware: specifically, the issue of having more cores vs. higher stock clock speeds.

For example, I currently have 2 desktop workstation systems set up at home (see specs below): one that is older, with a higher clock speed and fewer cores, and one with a lower clock speed but more cores. Just wondering how a novice hardware user can determine how fast each of these is relative to the other, rather than just looking at the stock clock speed alone?

[System 1]

x2 Xeon (Harpertown series, Quad Core, LGA 771) X5492 processors, running at 3.4 GHz each
32 GB FB-DIMM RAM installed (max RAM capacity of 128 GB)

[System 2]

x2 Xeon (Sandy Bridge-EP series, Six Core, LGA 2011) E5-2640 processors, running at 2.5 GHz each
16 GB DDR3(?) RAM installed (max RAM capacity of 512 GB)

The irony is that even though System 1 is almost 1 GHz faster in stock clock speed and has twice as much RAM installed, System 2 still outperforms it in overall speed. I'm not really sure I understand exactly how, to be honest, other than that the processors in System 2 are newer in terms of when they were released?
While your conversation has progressed to the very technical, I have a pretty easy, pedestrian explanation for this original question.

Think of each processor core as a highway. The more modern the architecture, the more lanes on that section of highway. The clock speed is how fast the traffic can drive on the highway. A one-lane highway where traffic moves at 80 mph still moves less volume in a given timeframe than a 5-lane highway where traffic moves at 65 mph. Each core you add is another identical parallel highway, and hence more volume, assuming the "traffic" knows how to use those parallel routes.

This is why modern processors with a lower clock are generally faster than older processors with a higher clock. They have way more lanes to send the traffic down, even if the traffic isn't moving as fast.

The issue of the traffic not knowing about the parallel highways is the reason that some processors with 4+ cores aren't any faster in performing certain tasks than a 1 or 2 core processor.
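
To make the "lanes vs. speed" picture concrete, here is a small, hypothetical Python sketch: the same CPU-bound job is run on one core and then spread across all cores with a process pool. The workload (counting primes) and the chunk sizes are invented purely for illustration.

```python
# Compare one "lane" (serial) against one lane per core (multiprocessing).
import multiprocessing as mp
import time

def count_primes(bounds):
    """Crude CPU-bound work: count primes in [lo, hi)."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(8)]

    start = time.perf_counter()
    serial_total = sum(count_primes(c) for c in chunks)      # single lane
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool() as pool:                                   # one lane per core
        parallel_total = sum(pool.map(count_primes, chunks))
    t_parallel = time.perf_counter() - start

    print(f"serial:   {serial_total} primes in {t_serial:.2f}s")
    print(f"parallel: {parallel_total} primes in {t_parallel:.2f}s")
    # The parallel run only wins because this work splits cleanly across
    # chunks; a task that can't be split sees no benefit from extra cores.
```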
 
Old 11-25-2013, 02:28 PM
 
24 posts, read 32,020 times
Reputation: 22
Most of the posters have told you everything you need to know already, except for one thing: single-core performance.

There are a few points to be made here. For one, there are very few meaningful programs left that can only use a single core yet still require enough power that you need a high-clock processor to run them. A software developer in 2013 would be laughed at for writing a high-load program that only runs on a single core. Your cellphone is a quad-core, for Pete's sake, and the PlayStation 3 game console from 2007 is an octo-core.

Two, even with a surface-level disadvantage in clock speed, an LGA 2011 processor will still put such an outdated opponent to shame, even measuring only single-core performance. There have been so many improvements to the efficiency of the processor's design that even experts, myself included, don't fully understand all of them. The engineers at Intel are the best in the world at what they do.

In fact, I can offer you an easy way to see what I mean about the age difference, with a simple hard drive test. Both of your hard drives should be rated at 7200 RPM, right? Of course they are. Go ahead and run an HDTune free speed test. Your older HDD should get around 80 MB/s read speeds, while your newer one will score deep into the 100s (assuming each HDD is about the same age as the processor it's paired with). That's drastically different performance... but weren't they rated for the same speed?
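
If you want a rough do-it-yourself version of that disk test without HDTune, something like the following sketch times a large sequential read and reports MB/s. The file path is a placeholder; point it at a big existing file on the drive you want to measure.

```python
# Time a sequential read of a large file and report throughput.
import time

PATH = "C:/temp/bigfile.bin"   # hypothetical test file, ideally several GB
CHUNK = 1024 * 1024            # read in 1 MiB blocks

read_bytes = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:   # unbuffered to reduce caching effects
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        read_bytes += len(block)
elapsed = time.perf_counter() - start

print(f"{read_bytes / elapsed / 1e6:.0f} MB/s over {read_bytes / 1e9:.1f} GB")
# Note: the OS cache can inflate this number on a re-read; a cold first
# pass over a file larger than RAM gives a fairer figure.
```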

It's a bit like comparing a bronze sword to a titanium sword by measuring the length of the blade. That LGA 2011 system is your titanium sword. Just because they're both shiny and similar in shape doesn't mean they're on a comparable level with each other.

Just a secondary note: the Xeon X5492s are still just barely holding onto some worthwhile resale value. It may be a good time to sell those processors and memory on eBay before they tank for good. And yes, I mean only the processors and memory; the rest of the machine won't be worth its shipping weight.
 
Old 11-28-2013, 12:35 AM
 
5,460 posts, read 7,757,868 times
Reputation: 4631
Quote:
Originally Posted by townsendigital View Post
...

Two, even with a surface-level disadvantage in clock speed, an LGA 2011 processor will still put such an outdated opponent to shame, even measuring only single-core performance. There have been so many improvements to the efficiency of the processor's design that even experts, myself included, don't fully understand all of them. The engineers at Intel are the best in the world at what they do.

...
Many thanks for providing the very helpful and fascinating background info. I have a brief follow-up question regarding the bolded portion of your quote above: what confuses me a little about modern processors is why Intel has been recycling the same stock (non-overclocked) clock speeds of 1.x GHz to 3.x GHz for almost 14 years now, dating back to the release of the Pentium 4 in 2000. The only exception to this rule that I am aware of was the Intel Xeon X5698, released in 2010: a dual-core chip that ran permanently at 4.4 GHz, which Intel did not even mass-market and released only in limited quantities to OEMs. Even the highest-end contemporary i7s and Xeons of today cannot reach a stock clock speed of over 4 GHz without overclocking or being placed into (temporary) turbo mode.

Regardless of the number of cores a processor has, why is Intel rigidly sticking to this not-to-exceed-4 GHz ceiling after all these years (again, excepting the X5698)? Why not build Ivy Bridge or Haswell chips with a stock (non-turbo) speed of 4-5 GHz, or even 10 GHz for that matter? Is there something limiting them from exceeding 3.9 GHz for maximum non-overclocked speeds?
 
Old 11-28-2013, 06:46 AM
 
Location: Reno
843 posts, read 2,215,795 times
Reputation: 586
Quote:
Originally Posted by Knight2009 View Post
...

Regardless of the number of cores a processor has, why is Intel rigidly sticking to this not-to-exceed-4 GHz ceiling after all these years (again, excepting the X5698)? Why not build Ivy Bridge or Haswell chips with a stock (non-turbo) speed of 4-5 GHz, or even 10 GHz for that matter? Is there something limiting them from exceeding 3.9 GHz for maximum non-overclocked speeds?
Lowering heat and power consumption has been driving Intel since the Pentium 4. The way to do that is to increase efficiency through various means. There have also been a lot of improvements in chipsets, memory, and storage.
 
Old 11-28-2013, 07:44 AM
 
Location: Wandering.
3,549 posts, read 6,661,462 times
Reputation: 2704
Quote:
Originally Posted by Knight2009 View Post
...

Regardless of the number of cores a processor has, why is Intel rigidly sticking to this not-to-exceed-4 GHz ceiling after all these years (again, excepting the X5698)? Why not build Ivy Bridge or Haswell chips with a stock (non-turbo) speed of 4-5 GHz, or even 10 GHz for that matter? Is there something limiting them from exceeding 3.9 GHz for maximum non-overclocked speeds?
There's no real need for, or advantage to, it. By making the cores work more efficiently and adding more of them, they can get more work done in the same number of cycles. Faster-clocked processors need more power and produce more heat, and thus cost more to operate. Having had equipment in colocation where we paid for our own power, moving to newer, more efficient machines let us pay less to keep them in the DC (along with getting much faster machines).
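
To put a rough number on that power/heat trade-off: dynamic CPU power scales roughly with C x V^2 x f (the standard CMOS approximation, not Intel-specific data), and raising the clock usually forces the voltage up as well. A small, hypothetical calculation:

```python
# Back-of-the-envelope: why chasing clock speed costs more than it gains.
def dynamic_power(capacitance, voltage, frequency_ghz):
    """Relative dynamic power for a CMOS chip, P ~ C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency_ghz

base = dynamic_power(1.0, 1.00, 3.5)   # hypothetical part at 3.5 GHz, 1.00 V
hot  = dynamic_power(1.0, 1.25, 5.0)   # same die pushed to 5 GHz at 1.25 V

print(f"{hot / base:.2f}x the power for {5.0 / 3.5:.2f}x the clock")
# ~2.2x the power (and heat) for ~1.4x the clock -- adding cores or
# improving work-per-cycle is usually the better trade, as noted above.
```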