Planning my first server with Ubuntu KVM Virtual Machines
I am putting together a dual quad-core Xeon (i.e., 8 cores total), 12GB RAM Linux server to replace several older, smaller servers. I would like to use virtualization both to learn about it and because the individuals who were using the old servers need to be kept separated.
I will have two 120GB SSDs in a RAID mirror and two 2TB SATA II drives in a RAID mirror.
I believe I will use Ubuntu 10.04 LTS with KVM as the host system and Ubuntu 10.04 for the primary, resource-intensive guest VM. The three additional guest VMs will probably be Debian Lenny and will be low-usage and low-priority.
Does the following resource allocation plan make sense or do more experienced users see pitfalls?
- Host system: 24GB of the SSD, i.e. 12GB for files + 12GB for swap
- Primary guest VM: 96GB SSD + 1,900GB SATA (4 CPUs + 8GB RAM)
- DNS server VM: 8GB SATA (1 CPU + 1GB RAM)
- Web server VM: 8GB SATA (1 CPU + 1GB RAM)
- Mail server VM: 8GB SATA (1 CPU + 1GB RAM)
- Reserved for future use: 76GB SATA
In particular, will 12GB be enough space for the host system's files?
Will 12GB of swap be adequate? Is it a bad idea to put the swap space on the SSD?
The primary guest VM is the most heavily used server: it needs fast disk I/O, frequently rebuilds a roughly 30GB MySQL database, needs a lot of file storage space, and runs Apache and a mail server. This entire hardware purchase is wasted if that server isn't performing well.
How should I partition the disks so that I can most easily tell the host system where to put the various guest VMs? That is, I want the primary VM to take advantage of the faster SSDs for its core/OS files and use the SATA drives for its bulk storage, and I want the less important VMs to use just a portion of the SATA drives and stay off the SSDs.
Can I allocate more RAM or CPUs to the guest VMs (overcommit) without causing problems, or is that just not worth it?
Thanks for any suggestions.
virtualization ubuntu virtual-machines kvm-virtualization capacity-planning
asked Oct 24 '10 at 20:24 by brianwc
7 Answers
My setup is somewhat similar and works well. virt-manager makes it really easy (it even works well over SSH X forwarding). Some random thoughts:
I would use LVM + virtio in this scenario (except perhaps for the very large volumes; there appears to be a "1TB problem" with virtio). You can put the I/O-intensive VM's volume on the fastest part of the SATA RAID.
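For illustration, a minimal sketch of that layout (the volume group names vg_ssd/vg_sata, the sizes, and the guest name are assumptions of mine, not the poster's):

    # Carve per-VM logical volumes out of each mirror (vg_ssd = SSD RAID1, vg_sata = SATA RAID1)
    lvcreate -L 90G  -n primary_os   vg_ssd
    lvcreate -L 1.8T -n primary_data vg_sata

    # Hand both to the guest as virtio block devices at install time
    virt-install --name primary --ram 8192 --vcpus 4 \
        --disk path=/dev/vg_ssd/primary_os,bus=virtio \
        --disk path=/dev/vg_sata/primary_data,bus=virtio \
        --cdrom /isos/ubuntu-10.04-server-amd64.iso

Each small VM gets its own logical volume on vg_sata the same way, which also keeps them off the SSDs.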
Swap: unless you know exactly why you need it, you probably don't need 12GB at all.
On the small systems I would recommend splitting the data volume off from the system volume. You'll probably be using ~4 of 8GB for system files, leaving only 4GB for those "oops" moments. Systems behave a lot better when their root volume isn't full.
What kind of RAID are you using? dm-softraid or some battery-backed hardware controller?
Putting the system files on an SSD will give you nice boot-up times but not much after that. Putting data files (especially seek-intensive stuff) on the SSD will give you intense joy for a very long time.
As far as I know there is still some gain to be had if you do not fill up your SSDs all the way; leaving 20% unused (never written to) is easy with LVM, just make a volume for it.
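A sketch of one way to do that, assuming the SSD mirror shows up as /dev/md0:

    # Put the whole SSD mirror in a volume group...
    pvcreate /dev/md0
    vgcreate vg_ssd /dev/md0
    # ...but allocate only ~80% of its extents; the remainder is never written to
    lvcreate -l 80%VG -n primary_os vg_ssd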
As with any hardware rebuild, I urge you to use ECC memory.
answered Oct 25 '10 at 7:05 by Joris
+1 for pointing out that swap won't be needed (at least not 12GB's worth) if planned correctly. – Coops, Jun 30 '11 at 20:06
"It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server."
I think you are wrong: KVM is a very good choice for a fail-over solution. Just bring the XML definition file to another server, and use shared storage and/or an identical network card configuration for all servers in the cluster. Tested; it worked. Remember LACP and link aggregation; that also works :)
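A hedged sketch of the workflow being described ('myguest' and the 'standby' hostname are placeholders):

    # Export the guest definition from the running host
    virsh dumpxml myguest > myguest.xml
    # Register it on the standby host, which sees the same shared storage
    scp myguest.xml standby:/tmp/
    ssh standby virsh define /tmp/myguest.xml
    # If the primary host dies, the guest can simply be started over there
    ssh standby virsh start myguest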
answered Jun 30 '11 at 19:52 by TooMeeK
+1 for bringing up interesting associated points. – Coops, Jun 30 '11 at 20:08
12GB should be adequate for your system.
12GB should be more than adequate for swap. I wouldn't worry too much about swap access speed, as swap is typically not used much; with your available memory you shouldn't see any significant swapping. If you want a large temp space, you may want to use a larger swap size and use tmpfs for /tmp.
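For example, one way to set that up (the 2G cap is an arbitrary example value):

    # Back /tmp with RAM, spilling to swap only under memory pressure
    echo 'tmpfs  /tmp  tmpfs  defaults,size=2G  0  0' >> /etc/fstab
    mount /tmp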
You can manually place the virtual systems' file systems, either as files or as partitions. They will be wherever you put them.
You have way more RAM and CPU than appears needed. Watch the memory use on the servers and increase it as needed.
I would install a Munin server process on the host, and Munin clients on the host and all virtual servers. This will allow you to quickly determine whether you have any bottlenecks that need tending to.
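On Ubuntu/Debian that is roughly:

    apt-get install munin        # the grapher/collector, on the host
    apt-get install munin-node   # the agent, on the host and in every guest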
I wouldn't overcommit RAM, but depending on load you should be able to overcommit CPUs. Given your increased power, this shouldn't be necessary. KVM allows you to specify maximum values for both of these that are higher than what is used at startup. I haven't tried dynamically changing these values.
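A sketch of what that looks like with virsh on a reasonably recent libvirt ('guest1' and the sizes are placeholders; live resizing needs the balloon driver in the guest):

    # Set a memory ceiling above the boot-time allocation (values in KiB)
    virsh setmaxmem guest1 4194304 --config
    virsh setmem    guest1 1048576 --config
    # Later, grow the running guest up to that ceiling
    virsh setmem    guest1 2097152 --live
    virsh setvcpus  guest1 2 --live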
answered Oct 25 '10 at 3:26 by BillThor
It all sounds like a test server we already have :)
Well, avoid OCZ Vertex SSDs even if they can do 550MB/s reads and 530MB/s writes: they are probably just too failure-prone; you can read about that on the 'Net. But I haven't tested them myself.
For me the best option is still SAS or FC drives in a RAID 10 array. An SSD will do more I/Os, but its lifetime is limited (get that data from SMART!). When a disk fails you just replace it; what's going to happen when all your SSDs are from the same series and all fail at once? Uh.
Yes, I can confirm that storage for VMs is very I/O intensive. One day I turned on the screen and Ubuntu Server was saying the I/O queue was too big, or something like that, and it hung for a long time.
I allocate as much CPU/RAM as I need for the VMs; for example, if a new VM is to be deployed, I reduce the RAM for the rest during maintenance, not by too much but by enough for the new VM.
Now I'm testing bonding together with bridging, exactly for KVM VMs. I successfully set up bonding in LACP and round-robin modes (the test showed 1 packet lost when a cable was unplugged). Now I'm wondering whether it's possible to reach 2Gbit over the network to a KVM VM...
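Roughly what I mean, as an /etc/network/interfaces sketch for Ubuntu with the ifenslave and bridge-utils packages installed (the address and NIC names are examples):

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

    auto br0
    iface br0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0

The KVM guests then attach their virtual NICs to br0 instead of a physical interface.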
The next thing is to set up a cluster.
answered Jul 11 '11 at 21:06 by TooMeeK (edited Oct 7 '15 at 6:58 by Deer Hunter)
As a system admin responsible for keeping things running, I would have an uneasy feeling about this plan. It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server. Any kind of downtime for this super-server will mean downtime for ALL of your services, not just a subset of them.
I would recommend using your budget to provision, say, two or three smaller servers instead. You can still use virtualization to partition your services into separate containers, both for security, and for ease of backup and migration.
answered Oct 24 '10 at 21:19 by Steven Monday
The decision to consolidate has already been made, due to 1) really old hardware on the old servers that is at large risk of failure; 2) the other servers sucking so much power that it noticeably affects the electric bill. Also, the three small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on them is expected and not a big deal. – brianwc, Oct 24 '10 at 22:01
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. Use DRBD to mirror your VMs to another machine dedicated to backups, or even build a fairly similar machine that can start up the VMs if the first server goes down. 2. Back up the VM images somewhere else. I had a server whose many disks all went bad when the (non-redundant) power supply blew. At the time I was only backing up data, but backing up the entire VM images would have made restoring so much easier. – senorsmile, Dec 24 '10 at 7:38
I don't like your discs. This looks like totally the wrong focus.
I will have two 120GB SSD drives in a RAID mirror and 2 2TB SATA II drives in a RAID mirror.
Assuming you use the 120GB mirror for the operating system and the 2TB mirror for the virtual machines: welcome to sucking I/O. OK, granted, your server is small.
Anyhow, here are some of my servers:
A 10GB AMD-based Hyper-V box (it fits 16GB; we had BIOS problems). The OS and discs are on an Adaptec RAID 10 of 4x 320GB WD Scorpio Blacks. The I/O load is BAD; I can feel it being overloaded. It is getting an upgrade to 16GB now, but the number of VMs will be reduced: too much I/O load during patching etc.
A 64GB, 8-core AMD Opteron box. I had 4x 300GB VelociRaptors in a RAID 10 on it. It was getting full and I WAS FEELING THE LOAD. Really feeling it. I just upgraded to 6 Raptors in a RAID 10 and may go higher. This server has a number of database servers on it, but they pretty much all have separate discs for the DB work. The RAID controller is an Adaptec 5805 on a SAS infrastructure.
As you can see, your I/O subsystem is really weak. Memory overcommit will just make it a LOT worse. SSDs can work nicely but are still way too pricey. If you put the VMs on the 2TB drives, your I/O will just suck: they likely do around 250 IOPS or so each, compared to the 450 I measured on my Raptors, and as I said, I use a lot of them AND they are on a high-end RAID controller.
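If you want to measure your own spindles the same way, a random-read test with fio is one option (assuming it is installed; /dev/sdX is a placeholder for a disk you can safely read from):

    # 4k random reads, queue depth 32, for 60 seconds; fio reports IOPS at the end
    fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based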
I got a nice SuperMicro cage with 24 disc slots for the larger server ;)
answered Oct 24 '10 at 21:40 by TomTom
I currently have capacity for six SATA drives, and this plan uses four, so I could add two more 120GB SSDs to create a RAID 10 if you think that would dramatically increase I/O performance; but as you say, that's expensive, and here it would increase the hardware cost by over 15%. – brianwc, Oct 24 '10 at 22:04
SSDs have a LOT more IOPS capacity than normal discs. Depending on what you pay and buy, you get about 400 IOPS from a VelociRaptor, and I know of SSDs able to do about 40,000 (!) IOPS. They are expensive, though, which is why I go with VelociRaptors rather than even SAS drives: best bang for the buck. 2TB drives are just slow: lots of space, little IOPS capacity (and IOPS is all that matters for speed in virtualization). You have to plan around what you need and want; a LOT depends on the servers you will run. – TomTom, Oct 25 '10 at 5:51
Most important thing: avoid swap at all costs. Give the machines enough RAM and do not overcommit RAM. Swap kills your IOPS budget faster than you can say "shit". Really. Do not turn off swapping (it makes sense to swap out unused stuff), but make sure the machines are not starving for memory. Memory is CHEAP. – TomTom, Oct 25 '10 at 5:52
Here are your available resources:
- 8 cores
- 12GB RAM
- 120GB SSD storage
- 2TB SATA storage
A few thoughts come to mind with your plan:
- First off, 12GB of RAM? Spend more money and get more RAM!
- I would consider running the host system from a separate small SSD (say 32GB) or from the SATA drives, assuming all the host does is run KVM. My main reason is that I would want to pass the entire 120GB SSD directly to your workhorse VM.
- I also use "CPU Pinning" for my main VM (Pin an entire CPU since you have two!)
- I also use RAM huge pages, which basically reserve the RAM exclusively for that VM, like CPU pinning but for RAM; this is also sketched below.
- I would want to give every small VM at least 1 core with 2 threads, and 4GB RAM.
- Swap shouldn't be needed; if you're using swap a lot, your system is underpowered. It's there as a last resort, and RAM is cheap enough now that it shouldn't be needed.
- I would be fine giving the host system a small amount of storage, as long as it has access to the 2TB SATA drives for doing backups etc.
- Overcommitting is the way to go IMO, since the hypervisor can then allocate as needed; but if you're hitting bottlenecks often, you may want to tighten up your resource allocations so that priority processes run more smoothly.
- Finally, I realize you and your workplace are likely more familiar with Debian-based OSes, but it's not that hard to jump to another Linux distro if you understand Linux. You just swap 'apt-get' for 'yum' or 'dnf', and a few files are located in different places, but Google will help you. My main reason for saying this is that I will always want to run KVM on a RHEL-based distro, since Red Hat develops KVM. I personally use Fedora. KVM is new, IMO, and I have found improvements that only Fedora had while other OSes were still working on importing them.
- 8GB of storage for a Linux OS is really small; I would want 32GB. SATA storage is cheap. The best price point at the moment seems to be 8TB drives, but that might be overkill; regardless, 8GB is small.
- Find some sort of monitoring solution that can alert you to issues like RAM/CPU/storage bottlenecks. I like Xymon, but I've looked into Zabbix as well.
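Sketches of the pinning and huge-page points above (the guest name 'bigvm', the core numbering, and the sizes are my assumptions):

    # CPU pinning: tie each of the big guest's 4 vCPUs to its own physical core
    virsh vcpupin bigvm 0 4
    virsh vcpupin bigvm 1 5
    virsh vcpupin bigvm 2 6
    virsh vcpupin bigvm 3 7

    # Huge pages: reserve 4096 x 2MB pages (= 8GB) and expose hugetlbfs to KVM
    echo 4096 > /proc/sys/vm/nr_hugepages
    mkdir -p /dev/hugepages
    mount -t hugetlbfs hugetlbfs /dev/hugepages

The guest's libvirt XML then needs a <memoryBacking><hugepages/></memoryBacking> element so its RAM is actually backed by the reserved pages.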
Enjoy KVM! Make sure to back up your domain XMLs, and preferably keep a copy of each VM offsite.
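One way to script that, assuming a virsh new enough to support 'list --name':

    # Dump every defined guest's XML so the domains can be re-registered after a rebuild
    for vm in $(virsh list --all --name); do
        virsh dumpxml "$vm" > "/backup/${vm}.xml"
    done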
Don't get too hung up on that server's stats. The post is over eight years old, after all. – Michael Hampton♦, May 22 at 5:51
Whoops :P I totally didn't notice; somehow I ended up here just browsing the site tags, and I think I just assumed it was recent. – FreeSoftwareServers, May 22 at 23:52
add a comment |
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "2"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f194307%2fplanning-my-first-server-with-ubuntu-kvm-virtual-machines%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
My setup is somewhat similar and works well. Virt-manager makes it really easy (even over ssh X forwarding it works well). Some random thoughts:
I would use LVM + virtio (perhaps except for the very large volumes; there appears to be a "1TB problem" with virtio) in this scenario. You can put the IO-intensive vm's volume on the fastest part of the sata raid.
Swap: unless you know exactly why you probably don't need 12GB at all.
On the small systems I would recommend splitting off the data volume from the system volume. You'll probably be using ~4 out of 8GB for system files leaving only 4GB for those "oops" moments. Systems behave a lot better when their root volume isn't full.
What kind of raid are you using? DM-softraid or some battery-backed hardware controller?
Putting the system files on a SSD will give you nice bootup times but not much after that. Putting data files (esp seek intensive stuff) on the SSD will give you intense joy for a very long time.
Afaik there is still some gain to be had if you do not fill up your SSD's all the way, leaving 20% unused (never written to) is easy with LVM, just make a volume for it.
As with any hardware rebuild I urge to use ECC memory.
+1 for pointing out that swap won't been need (at least not 12GBs worth) if planned correctly.
– Coops
Jun 30 '11 at 20:06
add a comment |
My setup is somewhat similar and works well. Virt-manager makes it really easy (even over ssh X forwarding it works well). Some random thoughts:
I would use LVM + virtio (perhaps except for the very large volumes; there appears to be a "1TB problem" with virtio) in this scenario. You can put the IO-intensive vm's volume on the fastest part of the sata raid.
Swap: unless you know exactly why you probably don't need 12GB at all.
On the small systems I would recommend splitting off the data volume from the system volume. You'll probably be using ~4 out of 8GB for system files leaving only 4GB for those "oops" moments. Systems behave a lot better when their root volume isn't full.
What kind of raid are you using? DM-softraid or some battery-backed hardware controller?
Putting the system files on a SSD will give you nice bootup times but not much after that. Putting data files (esp seek intensive stuff) on the SSD will give you intense joy for a very long time.
Afaik there is still some gain to be had if you do not fill up your SSD's all the way, leaving 20% unused (never written to) is easy with LVM, just make a volume for it.
As with any hardware rebuild I urge to use ECC memory.
+1 for pointing out that swap won't been need (at least not 12GBs worth) if planned correctly.
– Coops
Jun 30 '11 at 20:06
add a comment |
My setup is somewhat similar and works well. Virt-manager makes it really easy (even over ssh X forwarding it works well). Some random thoughts:
I would use LVM + virtio (perhaps except for the very large volumes; there appears to be a "1TB problem" with virtio) in this scenario. You can put the IO-intensive vm's volume on the fastest part of the sata raid.
Swap: unless you know exactly why you probably don't need 12GB at all.
On the small systems I would recommend splitting off the data volume from the system volume. You'll probably be using ~4 out of 8GB for system files leaving only 4GB for those "oops" moments. Systems behave a lot better when their root volume isn't full.
What kind of raid are you using? DM-softraid or some battery-backed hardware controller?
Putting the system files on a SSD will give you nice bootup times but not much after that. Putting data files (esp seek intensive stuff) on the SSD will give you intense joy for a very long time.
Afaik there is still some gain to be had if you do not fill up your SSD's all the way, leaving 20% unused (never written to) is easy with LVM, just make a volume for it.
As with any hardware rebuild I urge to use ECC memory.
My setup is somewhat similar and works well. Virt-manager makes it really easy (even over ssh X forwarding it works well). Some random thoughts:
I would use LVM + virtio (perhaps except for the very large volumes; there appears to be a "1TB problem" with virtio) in this scenario. You can put the IO-intensive vm's volume on the fastest part of the sata raid.
Swap: unless you know exactly why you probably don't need 12GB at all.
On the small systems I would recommend splitting off the data volume from the system volume. You'll probably be using ~4 out of 8GB for system files leaving only 4GB for those "oops" moments. Systems behave a lot better when their root volume isn't full.
What kind of raid are you using? DM-softraid or some battery-backed hardware controller?
Putting the system files on a SSD will give you nice bootup times but not much after that. Putting data files (esp seek intensive stuff) on the SSD will give you intense joy for a very long time.
Afaik there is still some gain to be had if you do not fill up your SSD's all the way, leaving 20% unused (never written to) is easy with LVM, just make a volume for it.
As with any hardware rebuild I urge to use ECC memory.
answered Oct 25 '10 at 7:05
JorisJoris
5,55411113
5,55411113
+1 for pointing out that swap won't been need (at least not 12GBs worth) if planned correctly.
– Coops
Jun 30 '11 at 20:06
add a comment |
+1 for pointing out that swap won't been need (at least not 12GBs worth) if planned correctly.
– Coops
Jun 30 '11 at 20:06
+1 for pointing out that swap won't been need (at least not 12GBs worth) if planned correctly.
– Coops
Jun 30 '11 at 20:06
+1 for pointing out that swap won't been need (at least not 12GBs worth) if planned correctly.
– Coops
Jun 30 '11 at 20:06
add a comment |
"It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server."
I think You are wrong. KVM is very good choice for fail-over solution. Just bring XML definition file to another server and use shared storage and/or identical network card config for all severs in the cluster. Tested, worked. Remember about LACP and link aggregation - also works :)
+1 for bringing up interesting associated points.
– Coops
Jun 30 '11 at 20:08
add a comment |
"It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server."
I think You are wrong. KVM is very good choice for fail-over solution. Just bring XML definition file to another server and use shared storage and/or identical network card config for all severs in the cluster. Tested, worked. Remember about LACP and link aggregation - also works :)
+1 for bringing up interesting associated points.
– Coops
Jun 30 '11 at 20:08
add a comment |
"It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server."
I think You are wrong. KVM is very good choice for fail-over solution. Just bring XML definition file to another server and use shared storage and/or identical network card config for all severs in the cluster. Tested, worked. Remember about LACP and link aggregation - also works :)
"It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server."
I think You are wrong. KVM is very good choice for fail-over solution. Just bring XML definition file to another server and use shared storage and/or identical network card config for all severs in the cluster. Tested, worked. Remember about LACP and link aggregation - also works :)
answered Jun 30 '11 at 19:52
TooMeeKTooMeeK
512
512
+1 for bringing up interesting associated points.
– Coops
Jun 30 '11 at 20:08
add a comment |
+1 for bringing up interesting associated points.
– Coops
Jun 30 '11 at 20:08
+1 for bringing up interesting associated points.
– Coops
Jun 30 '11 at 20:08
+1 for bringing up interesting associated points.
– Coops
Jun 30 '11 at 20:08
add a comment |
12 GB should be adequate for your system.
12GB should be more than adequate for swap. I wouldn't worry to much about swap access speed as swap is typically not used much. With your available memory you shouldn't see any significant swapping. If you want a large temp space, you may want to use a larger swap size and use tmpfs for /tmp.
You can manually place the virtual systems file systems, either as files, or partitions. They will be wherever you placed them.
You have way more RAM and CPU than appear needed. Watch the memory use on the servers and increase as needed.
I would install a munin server process on the server, and munin clients on the server and all virtual servers. This will allow you to quickly determine if you have any bottlenecks that need tending to.
I wouldn't overcommit RAM, but depending on load you should be able to overcommit CPUs. Given your increased power, this shouldn't be necessary. KVM allows you to specify max values for both of these which are higher than used at startup. I haven't tried dynamically changing these values.
add a comment |
12 GB should be adequate for your system.
12GB should be more than adequate for swap. I wouldn't worry to much about swap access speed as swap is typically not used much. With your available memory you shouldn't see any significant swapping. If you want a large temp space, you may want to use a larger swap size and use tmpfs for /tmp.
You can manually place the virtual systems file systems, either as files, or partitions. They will be wherever you placed them.
You have way more RAM and CPU than appear needed. Watch the memory use on the servers and increase as needed.
I would install a munin server process on the server, and munin clients on the server and all virtual servers. This will allow you to quickly determine if you have any bottlenecks that need tending to.
I wouldn't overcommit RAM, but depending on load you should be able to overcommit CPUs. Given your increased power, this shouldn't be necessary. KVM allows you to specify max values for both of these which are higher than used at startup. I haven't tried dynamically changing these values.
add a comment |
12 GB should be adequate for your system.
12GB should be more than adequate for swap. I wouldn't worry to much about swap access speed as swap is typically not used much. With your available memory you shouldn't see any significant swapping. If you want a large temp space, you may want to use a larger swap size and use tmpfs for /tmp.
You can manually place the virtual systems file systems, either as files, or partitions. They will be wherever you placed them.
You have way more RAM and CPU than appear needed. Watch the memory use on the servers and increase as needed.
I would install a munin server process on the server, and munin clients on the server and all virtual servers. This will allow you to quickly determine if you have any bottlenecks that need tending to.
I wouldn't overcommit RAM, but depending on load you should be able to overcommit CPUs. Given your increased power, this shouldn't be necessary. KVM allows you to specify max values for both of these which are higher than used at startup. I haven't tried dynamically changing these values.
12 GB should be adequate for your system.
12GB should be more than adequate for swap. I wouldn't worry to much about swap access speed as swap is typically not used much. With your available memory you shouldn't see any significant swapping. If you want a large temp space, you may want to use a larger swap size and use tmpfs for /tmp.
You can manually place the virtual systems file systems, either as files, or partitions. They will be wherever you placed them.
You have way more RAM and CPU than appear needed. Watch the memory use on the servers and increase as needed.
I would install a munin server process on the server, and munin clients on the server and all virtual servers. This will allow you to quickly determine if you have any bottlenecks that need tending to.
I wouldn't overcommit RAM, but depending on load you should be able to overcommit CPUs. Given your increased power, this shouldn't be necessary. KVM allows you to specify max values for both of these which are higher than used at startup. I haven't tried dynamically changing these values.
answered Oct 25 '10 at 3:26
BillThor BillThor
24.9k22662
24.9k22662
add a comment |
add a comment |
It all sounds like a test server we have already :)
Well, avoid OCZ Vertex SSD even if they can do 550MB/s read and 530MB/s write - they are probablly just too faulty, you can read that on the 'Net. But I haven't tested them myself.
For me the best option is still SAS or FC drives in a RAID10 array, even if SSD will do more IOs, but its lifetime is limited (get from SMART that data!). When a disk fails you just replace it, what's going to happen when all SSD are the same series and all will fail at once? Uh.
Yes, I can confirm storage for VM is very IO intensive. One day I turned on screen and Ubuntu Server was saying IO queue too big or something like that and it hanged for long time.
I allocate as much CPU/RAM as I need for VMs, for example if a new VM is to be deployed I reduce RAM for the rest while in maintenance, not too much but enough for new VM.
Now I'm testing bonding together with bridging exactly for KVM VMs. I successfuly set bonding mode LACP and round-robin (test says 1 packet lost when cable unplugged). Now I'm wondering is it possible to reach 2Gbit over network to KVM VM...
Next thing is to set up cluster.
add a comment |
It all sounds like a test server we have already :)
Well, avoid OCZ Vertex SSD even if they can do 550MB/s read and 530MB/s write - they are probablly just too faulty, you can read that on the 'Net. But I haven't tested them myself.
For me the best option is still SAS or FC drives in a RAID10 array, even if SSD will do more IOs, but its lifetime is limited (get from SMART that data!). When a disk fails you just replace it, what's going to happen when all SSD are the same series and all will fail at once? Uh.
Yes, I can confirm storage for VM is very IO intensive. One day I turned on screen and Ubuntu Server was saying IO queue too big or something like that and it hanged for long time.
I allocate as much CPU/RAM as I need for VMs, for example if a new VM is to be deployed I reduce RAM for the rest while in maintenance, not too much but enough for new VM.
Now I'm testing bonding together with bridging exactly for KVM VMs. I successfuly set bonding mode LACP and round-robin (test says 1 packet lost when cable unplugged). Now I'm wondering is it possible to reach 2Gbit over network to KVM VM...
Next thing is to set up cluster.
add a comment |
It all sounds like a test server we have already :)
Well, avoid OCZ Vertex SSD even if they can do 550MB/s read and 530MB/s write - they are probablly just too faulty, you can read that on the 'Net. But I haven't tested them myself.
For me the best option is still SAS or FC drives in a RAID10 array, even if SSD will do more IOs, but its lifetime is limited (get from SMART that data!). When a disk fails you just replace it, what's going to happen when all SSD are the same series and all will fail at once? Uh.
Yes, I can confirm storage for VM is very IO intensive. One day I turned on screen and Ubuntu Server was saying IO queue too big or something like that and it hanged for long time.
I allocate as much CPU/RAM as I need for VMs, for example if a new VM is to be deployed I reduce RAM for the rest while in maintenance, not too much but enough for new VM.
Now I'm testing bonding together with bridging exactly for KVM VMs. I successfuly set bonding mode LACP and round-robin (test says 1 packet lost when cable unplugged). Now I'm wondering is it possible to reach 2Gbit over network to KVM VM...
Next thing is to set up cluster.
It all sounds like a test server we have already :)
Well, avoid OCZ Vertex SSD even if they can do 550MB/s read and 530MB/s write - they are probablly just too faulty, you can read that on the 'Net. But I haven't tested them myself.
For me the best option is still SAS or FC drives in a RAID10 array, even if SSD will do more IOs, but its lifetime is limited (get from SMART that data!). When a disk fails you just replace it, what's going to happen when all SSD are the same series and all will fail at once? Uh.
Yes, I can confirm storage for VM is very IO intensive. One day I turned on screen and Ubuntu Server was saying IO queue too big or something like that and it hanged for long time.
I allocate as much CPU/RAM as I need for VMs, for example if a new VM is to be deployed I reduce RAM for the rest while in maintenance, not too much but enough for new VM.
Now I'm testing bonding together with bridging exactly for KVM VMs. I successfuly set bonding mode LACP and round-robin (test says 1 packet lost when cable unplugged). Now I'm wondering is it possible to reach 2Gbit over network to KVM VM...
Next thing is to set up cluster.
edited Oct 7 '15 at 6:58
Deer Hunter
91841625
91841625
answered Jul 11 '11 at 21:06
TooMeeKTooMeeK
111
111
add a comment |
add a comment |
As a system admin responsible for keeping things running, this plan would give me an uneasy feeling. It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server. Any kind of downtime for this super server will mean downtime for ALL of your services, not just a subset of them.
I would recommend using your budget to provision, say, two or three smaller servers instead. You can still use virtualization to partition your services into separate containers, both for security, and for ease of backup and migration.
Decision to consolidate has already been made due to 1) really old hardware on old servers that is a large risk of failure; 2) other servers suck so much power it noticeably affects electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal.
– brianwc
Oct 24 '10 at 22:01
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use drbd to mirror your vm's to another machine dedicated to backups. Or you could even build a fairly similar machine to start up the vm's if the first server goes down. 2. backup the vm images to somewhere else. I had a server that had lots of disks in it all go bad when the power supply (non redundant) blew. At the time I was only backing up data, but backing up the entire vm images would have made restoring so much easier.
– senorsmile
Dec 24 '10 at 7:38
add a comment |
As a system admin responsible for keeping things running, this plan would give me an uneasy feeling. It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server. Any kind of downtime for this super server will mean downtime for ALL of your services, not just a subset of them.
I would recommend using your budget to provision, say, two or three smaller servers instead. You can still use virtualization to partition your services into separate containers, both for security, and for ease of backup and migration.
Decision to consolidate has already been made due to 1) really old hardware on old servers that is a large risk of failure; 2) other servers suck so much power it noticeably affects electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal.
– brianwc
Oct 24 '10 at 22:01
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use drbd to mirror your vm's to another machine dedicated to backups. Or you could even build a fairly similar machine to start up the vm's if the first server goes down. 2. backup the vm images to somewhere else. I had a server that had lots of disks in it all go bad when the power supply (non redundant) blew. At the time I was only backing up data, but backing up the entire vm images would have made restoring so much easier.
– senorsmile
Dec 24 '10 at 7:38
add a comment |
As a system admin responsible for keeping things running, this plan would give me an uneasy feeling. It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server. Any kind of downtime for this super server will mean downtime for ALL of your services, not just a subset of them.
I would recommend using your budget to provision, say, two or three smaller servers instead. You can still use virtualization to partition your services into separate containers, both for security, and for ease of backup and migration.
As a system admin responsible for keeping things running, this plan would give me an uneasy feeling. It looks like you plan to concentrate a number of separate, working server machines into one massive single-point-of-failure server. Any kind of downtime for this super server will mean downtime for ALL of your services, not just a subset of them.
I would recommend using your budget to provision, say, two or three smaller servers instead. You can still use virtualization to partition your services into separate containers, both for security, and for ease of backup and migration.
answered Oct 24 '10 at 21:19
Steven MondaySteven Monday
10.7k22840
10.7k22840
Decision to consolidate has already been made due to 1) really old hardware on old servers that is a large risk of failure; 2) other servers suck so much power it noticeably affects electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal.
– brianwc
Oct 24 '10 at 22:01
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use drbd to mirror your vm's to another machine dedicated to backups. Or you could even build a fairly similar machine to start up the vm's if the first server goes down. 2. backup the vm images to somewhere else. I had a server that had lots of disks in it all go bad when the power supply (non redundant) blew. At the time I was only backing up data, but backing up the entire vm images would have made restoring so much easier.
– senorsmile
Dec 24 '10 at 7:38
add a comment |
Decision to consolidate has already been made due to 1) really old hardware on old servers that is a large risk of failure; 2) other servers suck so much power it noticeably affects electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal.
– brianwc
Oct 24 '10 at 22:01
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use drbd to mirror your vm's to another machine dedicated to backups. Or you could even build a fairly similar machine to start up the vm's if the first server goes down. 2. backup the vm images to somewhere else. I had a server that had lots of disks in it all go bad when the power supply (non redundant) blew. At the time I was only backing up data, but backing up the entire vm images would have made restoring so much easier.
– senorsmile
Dec 24 '10 at 7:38
Decision to consolidate has already been made due to 1) really old hardware on old servers that is a large risk of failure; 2) other servers suck so much power it noticeably affects electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal.
– brianwc
Oct 24 '10 at 22:01
Decision to consolidate has already been made due to 1) really old hardware on old servers that is a large risk of failure; 2) other servers suck so much power it noticeably affects electric bill. Also, the 3 small VMs proposed, while internet-facing, are used in such a limited way that some occasional downtime on those is expected/not a big deal.
– brianwc
Oct 24 '10 at 22:01
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use drbd to mirror your vm's to another machine dedicated to backups. Or you could even build a fairly similar machine to start up the vm's if the first server goes down. 2. backup the vm images to somewhere else. I had a server that had lots of disks in it all go bad when the power supply (non redundant) blew. At the time I was only backing up data, but backing up the entire vm images would have made restoring so much easier.
– senorsmile
Dec 24 '10 at 7:38
I would recommend a combination of the following, in case you hadn't already had this in mind: 1. use drbd to mirror your vm's to another machine dedicated to backups. Or you could even build a fairly similar machine to start up the vm's if the first server goes down. 2. backup the vm images to somewhere else. I had a server that had lots of disks in it all go bad when the power supply (non redundant) blew. At the time I was only backing up data, but backing up the entire vm images would have made restoring so much easier.
– senorsmile
Dec 24 '10 at 7:38
add a comment |
I dont like your discs. Looks like totally wrong focus.
I will have two 120GB SSD drives in a RAID mirror and 2 2TB SATA II drives in a RAID
mirror.
Assuming you use the 120gb for operatoing system and the 32tb for the virtual machiens - welcome to sucking IO. Ok, granted, your server is small.
Anyhow, here some of my servers:
10gb AMD based Hyper-V (fits 16gb, we did have BIOS problems). The OS+Discs are on a RAID 10, Adaptec, 4x 320gb Black Scorptio. IO load is BAD. I feel it being overloaded. It gets an upgrade now to 16gb, but the number of VM's will be reduced - too much IO load during patching etc.
64gb, 8 core AMD opterons. I had a 4x300gb Velocirraptors i na RAID 10 on it. Was getting full and I WAS FEELING THE LOAD. Really feeling it. I just upgraded to 6 raptors in a RAID 10 and may go higher. This server had a number of database server on it, but they pretty much all have separate discs for the db work. The RAID controller is an Adaptec 5805 on a SAS infrastructure.
As oyu can see - your IO subsystem is really bad. Memory overcommit will just make it a LOT worse. SSD can work nicely, but are way too pricy still. If you put the VM's on the 2tb drives, your IO will just suck. They likely have around 250 IOPS or so each - compared to the 450 I mearsured on my raptors, and as I said, I use a lot of them AND they are on a high end raid controlller.
I got a nice SuperMicro cage with 24 disc slots for the larger server ;)
I currently have capacity for six SATA, and this plan uses 4, so I could add two more 120GB SSDs to create a RAID10 if you think that would dramatically increase IO performance, but as you say, that's expensive, and here would increase the hardware cost by over 15%.
– brianwc
Oct 24 '10 at 22:04
SSD hhave a LOT more IOPS capacity than normal discs - depending on what you pay and buy you get about 400 IOPS from a velociraptor and I know of SSD able to do about 40.000 (!) IOPS. They are expensive, though, this is why I go with Valociraptors, not even SAS drives. Best bang for the buck. 2gb drives are just slow - lots of space, little IOPS capacity (and that is all that talks about speed in virtualization). You have to plan about what you need and want - a LOT depends on the servers you will run.
– TomTom
Oct 25 '10 at 5:51
2
Most important thing: do avoid swap by all costs. Give the machines good RAM and do not overcommit RAM. Swap kils your IOPS budget faster than you can say "shit". Really. Do not turn off swapping (it makes sense to swap out unussed stuff) but make sure the machines are not starving for memory. Memory is CHEAP.
– TomTom
Oct 25 '10 at 5:52
add a comment |
I dont like your discs. Looks like totally wrong focus.
I will have two 120GB SSD drives in a RAID mirror and 2 2TB SATA II drives in a RAID
mirror.
Assuming you use the 120gb for operatoing system and the 32tb for the virtual machiens - welcome to sucking IO. Ok, granted, your server is small.
Anyhow, here some of my servers:
10gb AMD based Hyper-V (fits 16gb, we did have BIOS problems). The OS+Discs are on a RAID 10, Adaptec, 4x 320gb Black Scorptio. IO load is BAD. I feel it being overloaded. It gets an upgrade now to 16gb, but the number of VM's will be reduced - too much IO load during patching etc.
64gb, 8 core AMD opterons. I had a 4x300gb Velocirraptors i na RAID 10 on it. Was getting full and I WAS FEELING THE LOAD. Really feeling it. I just upgraded to 6 raptors in a RAID 10 and may go higher. This server had a number of database server on it, but they pretty much all have separate discs for the db work. The RAID controller is an Adaptec 5805 on a SAS infrastructure.
As oyu can see - your IO subsystem is really bad. Memory overcommit will just make it a LOT worse. SSD can work nicely, but are way too pricy still. If you put the VM's on the 2tb drives, your IO will just suck. They likely have around 250 IOPS or so each - compared to the 450 I mearsured on my raptors, and as I said, I use a lot of them AND they are on a high end raid controlller.
I got a nice SuperMicro cage with 24 disc slots for the larger server ;)
I currently have capacity for six SATA, and this plan uses 4, so I could add two more 120GB SSDs to create a RAID10 if you think that would dramatically increase IO performance, but as you say, that's expensive, and here would increase the hardware cost by over 15%.
– brianwc
Oct 24 '10 at 22:04
SSD hhave a LOT more IOPS capacity than normal discs - depending on what you pay and buy you get about 400 IOPS from a velociraptor and I know of SSD able to do about 40.000 (!) IOPS. They are expensive, though, this is why I go with Valociraptors, not even SAS drives. Best bang for the buck. 2gb drives are just slow - lots of space, little IOPS capacity (and that is all that talks about speed in virtualization). You have to plan about what you need and want - a LOT depends on the servers you will run.
– TomTom
Oct 25 '10 at 5:51
2
Most important thing: do avoid swap by all costs. Give the machines good RAM and do not overcommit RAM. Swap kils your IOPS budget faster than you can say "shit". Really. Do not turn off swapping (it makes sense to swap out unussed stuff) but make sure the machines are not starving for memory. Memory is CHEAP.
– TomTom
Oct 25 '10 at 5:52
add a comment |
I dont like your discs. Looks like totally wrong focus.
I will have two 120GB SSD drives in a RAID mirror and 2 2TB SATA II drives in a RAID
mirror.
Assuming you use the 120gb for operatoing system and the 32tb for the virtual machiens - welcome to sucking IO. Ok, granted, your server is small.
Anyhow, here some of my servers:
10gb AMD based Hyper-V (fits 16gb, we did have BIOS problems). The OS+Discs are on a RAID 10, Adaptec, 4x 320gb Black Scorptio. IO load is BAD. I feel it being overloaded. It gets an upgrade now to 16gb, but the number of VM's will be reduced - too much IO load during patching etc.
64gb, 8 core AMD opterons. I had a 4x300gb Velocirraptors i na RAID 10 on it. Was getting full and I WAS FEELING THE LOAD. Really feeling it. I just upgraded to 6 raptors in a RAID 10 and may go higher. This server had a number of database server on it, but they pretty much all have separate discs for the db work. The RAID controller is an Adaptec 5805 on a SAS infrastructure.
As oyu can see - your IO subsystem is really bad. Memory overcommit will just make it a LOT worse. SSD can work nicely, but are way too pricy still. If you put the VM's on the 2tb drives, your IO will just suck. They likely have around 250 IOPS or so each - compared to the 450 I mearsured on my raptors, and as I said, I use a lot of them AND they are on a high end raid controlller.
I got a nice SuperMicro cage with 24 disc slots for the larger server ;)
answered Oct 24 '10 at 21:40
TomTom
I currently have capacity for six SATA drives and this plan uses four, so I could add two more 120GB SSDs to create a RAID 10 if you think that would dramatically increase IO performance. But as you say, that's expensive; here it would increase the hardware cost by over 15%.
– brianwc
Oct 24 '10 at 22:04
SSDs have a LOT more IOPS capacity than normal discs. Depending on what you pay and buy, you get about 400 IOPS from a VelociRaptor, and I know of SSDs that do about 40,000 (!) IOPS. They are expensive, though, which is why I go with VelociRaptors rather than even SAS drives: best bang for the buck. 2TB drives are just slow; lots of space, little IOPS capacity, and IOPS is all that matters for speed in virtualization. Plan around what you need and want; a LOT depends on the servers you will run.
– TomTom
Oct 25 '10 at 5:51
2
Most important thing: avoid swap at all costs. Give the machines enough RAM and do not overcommit it. Swapping kills your IOPS budget faster than you can say "shit". Really. Don't disable swap entirely (it makes sense to swap out unused stuff), but make sure the machines are not starving for memory. Memory is CHEAP.
– TomTom
Oct 25 '10 at 5:52
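In practice, TomTom's "keep swap enabled but never rely on it" advice maps to lowering vm.swappiness on the host. A minimal sketch (the value 10 is a common conservative choice, not something mandated by this thread):

    # Show the current value; Ubuntu's default is 60.
    cat /proc/sys/vm/swappiness
    # Tell the kernel to swap only under real memory pressure.
    sudo sysctl vm.swappiness=10
    # Persist the setting across reboots.
    echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-kvm-swap.conf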
add a comment |
Here are your available resources:
- 8 cores
- 12GB RAM
- 120GB SSD storage
- 2TB SATA storage
A few thoughts come to mind on your plan:
- First off, 12GB of RAM? Spend more money and get more RAM!
- I would consider running the host system from a separate small SSD (say 32GB) or from the SATA drives, assuming all the host does is run KVM. My main reason: I would want to pass the entire 120GB SSD directly to your workhorse VM (see the attach-disk sketch below).
- I also use CPU pinning for my main VM; pin an entire CPU to it, since you have two (sketch below).
- I also use hugepages, which reserve RAM exclusively for that VM; think of it as CPU pinning, but for memory (sketch below).
- I would give each of the small VMs at least 1 core with 2 threads and 4GB RAM.
- Swap shouldn't be needed; if you're using swap a lot, your system is underpowered. It's there as a last resort, and RAM is cheap enough now that you shouldn't need it.
- I would be fine giving the host system a small amount of storage, as long as it has access to the 2TB SATA drives for backups etc.
- Overcommitting is the way to go IMO, since the hypervisor can then allocate as needed (a ballooning sketch is below); but if you're hitting bottlenecks often, consider tightening up your allocations so that priority workloads run smoother.
- Finally, I realize you and your colleagues are probably more familiar with Debian-based OSs, but it's not hard to jump to another Linux distro if you understand Linux. You swap 'apt-get' for 'yum' or 'dnf', a few files live in different places, and Google will help with the rest. My main reason for saying this: I will always run KVM on a RHEL-based distro, since Red Hat develops KVM. I personally use Fedora. KVM is comparatively young IMO, and I have found fixes and improvements that only Fedora had while other distros were still importing them.
- 8GB of storage for a Linux OS is really small; I would want 32GB. SATA storage is cheap. The best price point at the moment seems to be 8TB drives, which might be overkill, but regardless, 8GB is small.
- Find a monitoring solution that can alert you to RAM/CPU/storage bottlenecks. I like Xymon, but I've looked into Zabbix as well.
Enjoy KVM! Back up your domain XMLs (a dump loop is sketched at the end of this answer) and keep a copy of each VM offsite, preferably.
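On passing the whole SSD to the workhorse VM: with KVM/libvirt you can hand a raw block device to a guest instead of a disk image. A sketch, assuming the SSD mirror shows up as /dev/sdb and the guest's libvirt domain is named "workhorse" (both hypothetical); the device must not be mounted or otherwise in use on the host:

    # Attach the whole device to the guest as virtio disk vdb.
    virsh attach-disk workhorse /dev/sdb vdb --targetbus virtio --persistent
    # Confirm the guest now sees it.
    virsh domblklist workhorse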
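For the CPU-pinning suggestion, virsh can bind each vCPU to a host core. A sketch, again assuming a guest named "workhorse" with 4 vCPUs, pinned to cores 4-7 (one whole physical CPU in a dual quad-core box; all names and numbers here are illustrative):

    # Pin vCPU 0 -> core 4, vCPU 1 -> core 5, and so on.
    for vcpu in 0 1 2 3; do
        virsh vcpupin workhorse "$vcpu" "$((vcpu + 4))" --config
    done
    # Verify the resulting pinning map.
    virsh vcpupin workhorse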
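For hugepages, the host first needs a hugepage pool, and the guest definition then opts in. A sketch, assuming a 4GB guest and the default 2MB hugepage size (4GB / 2MB = 2048 pages):

    # Reserve 2048 x 2MB = 4GB of hugepages on the host.
    sudo sysctl vm.nr_hugepages=2048
    grep HugePages_Total /proc/meminfo
    # Then add this to the guest definition (e.g. via "virsh edit workhorse"):
    #   <memoryBacking>
    #     <hugepages/>
    #   </memoryBacking>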
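On the overcommit point: KVM's balloon driver is what lets the hypervisor "allocate as needed", shrinking or growing a running guest's memory without a reboot. A sketch (the guest name and sizes are illustrative; the new size cannot exceed the guest's configured maximum, and older virsh versions want the size in KiB rather than with a suffix):

    # Temporarily reclaim memory from an idle guest...
    virsh setmem lowpriority-vm 1G --live
    # ...and give it back when the guest is busy again.
    virsh setmem lowpriority-vm 4G --live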
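And for the closing advice about backing up domain XMLs, a short loop covers every defined guest. A sketch (the backup directory is an assumption):

    # Dump the libvirt definition of every guest, running or not.
    mkdir -p /backup/libvirt
    for dom in $(virsh list --all --name); do
        virsh dumpxml "$dom" > "/backup/libvirt/${dom}.xml"
    done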
answered May 22 at 5:41
FreeSoftwareServers
2
Don't get too hung up on that server's stats. The post is over eight years old, after all.
– Michael Hampton♦
May 22 at 5:51
Whoops :P I totally didn't notice; somehow I ended up here just browsing the site tags, and I just assumed it was recent.
– FreeSoftwareServers
May 22 at 23:52
add a comment |