Can you help me with my capacity planning?



132
















This is a canonical question about capacity planning.



Related:



  • How do you do load testing and capacity planning for web sites?

  • How do you do load testing and capacity planning for databases?



I have a question regarding capacity planning. Can the Server Fault community please help with the following:




  • What kind of server do I need to handle some number of users?

  • How many users can a server with some specifications handle?

  • Will some server configuration be fast enough for my use case?

  • I'm building a social networking site: what kind of hardware do I need?

  • How much bandwidth do I need for some project?

  • How much bandwidth will some number of users use in some application?









capacity-planning

asked Apr 30 '12 at 19:20 by voretaq7

We're looking for long answers that provide some explanation and context. Don't just give a one-line answer; explain why your answer is right, ideally with citations. Answers that don't include explanations may be removed.




















          3 Answers
























          96














          The Server Fault community generally can't help you with capacity planning - the best answer we can offer is "Benchmark your code on hardware similar to what you'll be using in production, identify any bottlenecks, then determine how much of a workload your current hardware can handle, and/or how much hardware horsepower you need to handle your target workload".
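
          (As an illustration of that benchmarking advice, here is a minimal load-test sketch. The URL, worker count, and request count are assumptions to replace with your own staging endpoint and expected traffic; dedicated tools such as JMeter or ab do this far more thoroughly.)

              # Hammer one URL with N concurrent workers; report throughput and latency.
              # Run it against hardware that matches production while watching CPU, RAM,
              # and disk on the server to find the bottleneck. Errors abort the run,
              # which is fine for a rough probe.
              import statistics
              import time
              import urllib.request
              from concurrent.futures import ThreadPoolExecutor

              URL = "http://localhost:8080/"   # assumption: your staging endpoint
              WORKERS = 50                     # simulated concurrent clients
              REQUESTS = 1000                  # total requests in the run

              def fetch(_):
                  start = time.perf_counter()
                  with urllib.request.urlopen(URL, timeout=10) as resp:
                      resp.read()
                  return time.perf_counter() - start

              t0 = time.perf_counter()
              with ThreadPoolExecutor(max_workers=WORKERS) as pool:
                  latencies = sorted(pool.map(fetch, range(REQUESTS)))
              elapsed = time.perf_counter() - t0

              print(f"throughput: {REQUESTS / elapsed:.1f} req/s")
              print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
              print(f"p95 latency: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")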




          There are a number of factors at play in capacity planning which we can't adequately assess on a Question and Answer site:



          • The requirements of your particular code/software

          • External resources (databases, other software/sites/servers)

          • Your workload (peak, average, queueing)

          • The business value of performance (cost/benefit analysis)

          • The performance expectations of your users

          • Any service level agreements/contractual obligations you may have

          Doing a proper analysis of these factors, and others, is beyond the scope of a simple question-and-answer site: it requires detailed knowledge about your environment and requirements that only your team (or an adequately compensated consultant) can gather efficiently.




          Some Capacity Planning Axioms




          1. RAM is cheap

            If you expect your application to use a lot of RAM you should put in as much RAM as you can afford / fit.


          2. Disk is cheap

            If you expect to use a lot of disk you should buy big drives - lots of them.

            SAN/NAS storage is less cheap, and should also usually be spec'd large rather than small to avoid costly upgrades later.


          3. Workloads grow over time

            Assume your resource needs will increase.

            Bear in mind that the increase may not be symmetrical (CPU and RAM may rise faster than disk), and it may not be linear.


          4. Electricity is expensive

            Even though RAM and disks have decreased in price considerably, the cost of electricity has gone up steadily. All those extra disks and RAM, not to mention CPU power, will increase your electricity bill (or the bill you pay to your provider). Plan accordingly.
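
          (To put rough numbers on axiom 4, a back-of-envelope sketch; the wattage, electricity rate, and overhead factor below are assumptions to swap for your own figures.)

              # Rough annual power cost for one server, from wall-plug watts to dollars.
              WATTS = 350            # assumption: average draw under load
              RATE_PER_KWH = 0.15    # assumption: $/kWh, check your bill or provider
              PUE = 1.6              # assumption: facility overhead (cooling, etc.)

              kwh_per_year = WATTS / 1000 * 24 * 365 * PUE
              print(f"~${kwh_per_year * RATE_PER_KWH:,.0f} per year")   # ~$736 per year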





          community wiki: 4 revs, 2 users 90% voretaq7 (edited Nov 25 '15 at 20:34)




















          • You should totally drop that and use integration by parts! – Gilles, May 28 '13 at 19:13

          • +1. And RAM, as you suggest in axiom #1, is one of those things that has massive benefits. For instance, it increases your ability to better utilize caching, which in turn allows you to make fewer database queries, which in turn lightens the load on the disk and CPU. I'm often frustrated by hosting providers that offer a fast CPU with their servers and a minimal amount of RAM. – Steve Wortham, May 28 '13 at 23:03

          • I'd add to this: Disk capacity is cheap. Disk performance gets expensive. Especially as we see growth in disk sizes over 10 years, but the laws of physics haven't changed. The rule of thumb I use (as of June 2014) for optimal performance: 75 IOPS per spindle on SATA, 200 IOPS per spindle on FC, and 1500 IOPS per SSD. Big SATA drives give really quite poor IO-per-gigabyte ratios. – Sobrique, Jun 11 '14 at 10:52


















          43














          Virtual Machine Count planning



          When it comes to figuring out how many VMs you should plan for on a single host, there are actually no really good rules of thumb. In fact, there is only one, and it is only kind of good:




          Virtual-Machine counts are usually bounded by RAM, except for when they're not.




          Which isn't terribly helpful. If those VMs are going to be running low-CPU applications, then your limiter is going to be based on RAM. Each VM platform has its own abilities to oversubscribe RAM, so it isn't as easy as TOTAL_RAM / Per-VM-RAM = MachineCount, but that number is a good planning item.
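
          (As a back-of-envelope illustration of that RAM bound; every figure here, especially the oversubscription ratio, is an assumption to check against your hypervisor and workload.)

              # First-order VM count from RAM, the usual initial planning number.
              HOST_RAM_GB = 256          # physical RAM in the host
              HYPERVISOR_RESERVE_GB = 8  # assumption: RAM held back for the hypervisor
              PER_VM_RAM_GB = 4          # RAM allocated to each guest
              OVERSUBSCRIPTION = 1.25    # assumption: gain from page sharing/ballooning

              usable = (HOST_RAM_GB - HYPERVISOR_RESERVE_GB) * OVERSUBSCRIPTION
              print(f"planning estimate: {int(usable // PER_VM_RAM_GB)} VMs per host")
              # -> planning estimate: 77 VMs per host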



          But what if your VMs are doing things besides low-CPU packet-slinging?




          Virtual-machine counts are bounded by seven discrete resources available to the host machine:




          • Hypervisor VMware, Xen, Hyper-V, KVM, whatever. Each has its own count-impacting features. Some are very good at memory-page deduplication, others not so much. Some don't permit oversubscription of CPU capacity; some do.


          • CPU Core Speed This limits the maximum single-threaded performance a VM will be able to run. 36 cores of a 1.8 GHz CPU may be 64.8 GHz of CPU on a host, but no single thread will run faster than 1.8 GHz.


          • CPU Core Count This, with core-speed, describes the ceiling of maximal CPU performance you can experience.


          • System RAM As described above, this limits the number of VMs you can run. Certain hypervisors are better than others at things like memory-page deduplication, so if you're running 100 identical VMs you can pack a lot more of these on such deduplicating systems than if you were running 100 completely different VMs.


          • Disk Size Each OS image takes a certain amount of space. You need enough space to store it all. Therefore, disk-size puts an upper limit on how many VMs you can host.


          • I/O Bandwidth The disk underlying the VMs has a maximum on how many I/Os per second it can handle. If you throw too much at it, systems will bog down waiting for the I/O to complete. This puts an upper limit on how many I/O consuming VMs you can run.


          • Network Bandwidth For network-using VMs, the available network bandwidth will put a ceiling on how many such VMs you can run on a given host.

          Any of these can be the thing you trip over; it all depends on what you're doing with your VMs. Some things to remember:



          • There is no such thing as a generic system.


          • There is no such thing as a generic web-server, since application code can run from barely-moves-the-needle CDN-style serving, to big deep-crack stuff like video transcoding.


          • There is no such thing as a generic database server. These can run from tiny systems used just for session-state-tracking, to very big ones.


          To figure out how many VMs you can pack into a host-system, you need to know how your systems run and what they require to run well. Once you know that, you can then do the count-planning. And better yet, figure out how beefy you need to make your host-systems!
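
          (One way to frame that count-planning, as a minimal sketch: take the minimum across the resource bounds above. All per-VM figures here are made-up assumptions; substitute measurements from your own systems.)

              # Whichever resource runs out first sets the VM count per host.
              host   = {"ram_gb": 256, "iops": 20000, "net_mbps": 10000}
              per_vm = {"ram_gb": 4,   "iops": 300,   "net_mbps": 100}

              limits = {res: host[res] // per_vm[res] for res in host}
              bottleneck = min(limits, key=limits.get)
              print(limits)   # {'ram_gb': 64, 'iops': 66, 'net_mbps': 100}
              print(f"binding constraint: {bottleneck} -> {limits[bottleneck]} VMs")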






          answered Jan 17 '13 at 15:46 by sysadmin1138 (edited Nov 20 '16 at 11:05 by Peter Mortensen)

























          • Above all else, do use VM-based systems on two separate physical servers with unbound VMs. This allows for hardware failure without loss of the entire system. VMs can move between identical servers without loss of data; just sessions get lost, then rebuilt. Personally, I would outsource to a hosting company that offers these services (Google or Amazon). They are expensive, but a lot less than running your own. – Random-IT, Mar 19 '15 at 16:07

          • The thing that I've seen undersized most often in VM implementations is disk I/O. Most people understand disk space, CPU speed, and memory. They forget about disk performance. – Dan Pritts, Feb 19 '16 at 17:31


















          5














          Make sure you're asking the right question.



          • Computers are cheap

          • Future needs are very hard to predict

          • Plan how to scale, not what to buy ahead of time

          If you don't know what you'll need, that implies you don't need very much. If you have a hot web site, you probably also have an operations team who knows how much RAM, disk, I/O, network, etc. your app needs. If you're in the dreaming stage, you should start with your desktop and work your way up.



          Make sure you have some idea how you're going to scale when things get bigger. Can you add more servers behind the load balancer? Can you shard the Redis server?



          Also, having your own data center sucks. A data center (even if it's just one computer) is a distraction from your actual purpose. You cannot just buy a computer, turn it on, and walk away. You need air conditioning, air filtration, reliable power, reliable internet, backups, spare parts, physical room to grow, power capacity to grow, power cables that don't get tripped over, and a zillion other headaches.






          answered Feb 6 '17 at 20:32 by Dylan Martin (edited Feb 6 '17 at 21:16)




































