Incredibly low KVM disk performance (qcow2 disk files + virtio)
I'm having some serious disk performance problems while setting up a KVM guest. Using a simple dd test, the partition on the host that the qcow2 images reside on (a mirrored RAID array) writes at over 120MB/s, while my guest gets writes ranging from 0.5 to 3MB/s.
- The guest is configured with a couple of CPUs and 4G of RAM and isn't currently running anything else; it's a completely minimal install at the moment.
- Performance is tested using time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000.
- The guest is configured to use virtio, but this doesn't appear to make a difference to the performance.
- The host partitions are 4kb aligned (and performance is fine on the host, anyway).
- Using writeback caching on the disks increases the reported performance massively, but I'd prefer not to use it; even without it performance should be far better than this.
- Host and guest are both running Ubuntu 12.04 LTS, which comes with qemu-kvm 1.0+noroms-0ubuntu13 and libvirt 0.9.8-2ubuntu17.1.
- Host has the deadline IO scheduler enabled and the guest has noop.
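For reference, the scheduler can be inspected and switched via sysfs; a minimal sketch, assuming the guest's virtio disk shows up as vda (the device name is illustrative):
cat /sys/block/vda/queue/scheduler    # the active scheduler is shown in [brackets]
echo noop | sudo tee /sys/block/vda/queue/scheduler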
There seem to be plenty of guides out there on tweaking KVM performance, and I'll get to those eventually, but I should be seeing vastly better performance than this even before tuning, so it seems like something is already very wrong.
Update 1
And suddenly, when I go back and test now, it's 26.6 MB/s; this is more like what I expected with qcow2. I'll leave the question up in case anyone has any ideas as to what might have been the problem (and in case it mysteriously returns again).
Update 2
I stopped worrying about qcow2 performance and just cut over to LVM on top of RAID1 with raw images, still using virtio but setting cache='none' and io='native' on the disk drive. Write performance is now approx. 135MB/s using the same basic test as above, so there doesn't seem to be much point in figuring out exactly what the problem was when it can be so easily worked around entirely.
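For reference, the corresponding libvirt disk definition looks something like the sketch below; the LV path and target device name are illustrative, not the exact values from this setup:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/guest-root'/>
  <target dev='vda' bus='virtio'/>
</disk>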
performance kvm-virtualization qcow2
You didn't mention the distribution and software versions in use.
– dyasny
Jul 15 '12 at 7:42
Added some info on versions.
– El Yobo
Jul 15 '12 at 7:55
ah, as expected, ubuntu... any chance you can reproduce this on fedora?
– dyasny
Jul 15 '12 at 8:13
The server is in Germany and I'm currently in Mexico, so that could be a little tricky. And if it did suddenly work... I still wouldn't want to have to deal with a Fedora server ;) I have seen a few comments suggesting that Debian/Ubuntu systems did have more issues than Fedora/CentOS for KVM as much of the development work was done there.
– El Yobo
Jul 15 '12 at 8:18
my point exactly. and in any case, if you are after a server grade OS you need RHEL, not Ubuntu
– dyasny
Jul 15 '12 at 8:51
6 Answers
Well, yeah, qcow2 files aren't designed for blazingly fast performance. You'll get much better luck out of raw partitions (or, preferably, LVs).
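As a rough sketch of that route (the volume group and LV names below are placeholders, and LVM is assumed to already be set up on the host), create a dedicated LV for the guest:
lvcreate -L 20G -n guest-root vg0
The host block device /dev/vg0/guest-root is then passed to the guest as a raw virtio disk, with no image-format overhead.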
Obviously, but they're also not meant to be quite as crap as the numbers I'm getting either.
– El Yobo
Jul 15 '12 at 7:52
Most examples out there show similar performance with qcow2, which seems to be a significant improvement over the older version. The KVM site itself has some numbers up at linux-kvm.org/page/Qcow2 which show comparable times for a range of cases.
– El Yobo
Jul 15 '12 at 8:15
18:35 (qcow2) vs 8:48 (raw) is "comparable times"?
– womble♦
Jul 15 '12 at 8:19
I've switched them over to LVM backed raw images on top of RAID1, set the io scheduler to noop on the guest and deadline on the host and it now writes at 138 MB/s. I still don't know what it was that caused the qcow2 to have the 3MB/s speeds, but clearly it can be sidestepped by using raw, so thanks for pushing me in that direction.
– El Yobo
Jul 15 '12 at 16:57
That is not quite true - latest patches in qemu speeds qcow2 a lot! We are almost on par.
– lzap
Nov 25 '13 at 20:43
How to achieve top performance with QCOW2:
qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on imageXYZ
The most important one is preallocation, which gives a nice boost according to the qcow2 developers. It is almost on par with LVM now! Note that this is usually enabled in modern (Fedora 25+) Linux distros.
You can also use the unsafe cache mode if this is not a production instance (this is dangerous and not recommended, only good for testing):
<driver name='qemu' cache='unsafe' />
Some users report that this configuration beats the LVM/unsafe configuration in some tests.
All of these parameters require a recent QEMU (1.5+); again, most modern distros have this.
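As a usage note, qemu-img create also expects a size argument (imageXYZ.qcow2 and 20G below are placeholders), and qemu-img info will confirm the options an image was created with:
qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on imageXYZ.qcow2 20G
qemu-img info imageXYZ.qcow2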
It is not a good idea to use cache=unsafe: an unexpected host shutdown can wreak havoc on the entire guest filesystem. It is much better to use cache=writeback: similar performance, but much better reliability.
– shodanshok
Apr 2 '15 at 17:18
As I've said: if this is not production instance (good for testing)
– lzap
Apr 3 '15 at 13:11
Fair enough. I missed it ;)
– shodanshok
Apr 3 '15 at 15:34
I achieved great results for a qcow2 image with this setting:
<driver name='qemu' type='raw' cache='none' io='native'/>
which bypasses the host page cache and enables AIO (asynchronous IO). Running your dd command gave me 177MB/s on the host and 155MB/s on the guest. The image is placed on the same LVM volume where the host's test was done.
My qemu-kvm version is 1.0+noroms-0ubuntu14.8 and the kernel is 3.2.0-41-generic, from stock Ubuntu 12.04.2 LTS.
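These driver attributes live inside the <disk> element of the domain XML; assuming a libvirt-managed guest, they can be applied with virsh (the domain name is a placeholder, and the change takes effect on the next guest start):
virsh edit myguest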
You set a qcow2 image type to "raw"?
– Alex
Aug 18 '13 at 11:02
I guess I have copied an older entry; I suppose the speed benefits should be the same for type='qcow2', could you check that before I edit? I have no more access to such a configuration - I migrated to LXC with mount bind directories to achieve real native speeds in guests.
– gertas
Aug 18 '13 at 11:15
If you're running your VMs with a single command, you can use these arguments:
kvm -drive file=/path_to.qcow2,if=virtio,cache=off <...>
It got me from 3MB/s to 70MB/s.
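On more recent QEMU builds, the same idea is typically spelled cache=none, optionally with native AIO; treat this as a sketch, not a measured configuration:
kvm -drive file=/path_to.qcow2,if=virtio,cache=none,aio=native <...>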
On old Qemu/KVM versions, the Qcow2 backend was very slow when not preallocated, more so if used without writeback cache enabled. See here for more information.
On more recent Qemu versions, Qcow2 files are much faster, even when using no preallocation (or metadata-only preallocation). Still, LVM volumes remain faster.
A note on the cache modes: writeback cache is the preferred mode, unless the guest has no (or disabled) support for disk cache flushes/barriers. In practice, Win2000+ guests and any Linux guest using EXT4, XFS or EXT3 with barrier mount options are fine.
On the other hand, cache=unsafe should never be used on production machines, as cache flushes are not propagated to the host system. An unexpected host shutdown can literally destroy the guest's filesystem.
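In libvirt terms, the recommended writeback mode looks something like this (the type attribute should match your image format):
<driver name='qemu' type='qcow2' cache='writeback'/>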
I experienced exactly the same issue.
Within a RHEL7 virtual machine I run LIO iSCSI target software to which other machines connect. As the underlying storage (backstore) for my iSCSI LUNs I initially used LVM, but then switched to file-based images.
Long story short: when the backing storage is attached to a virtio_blk (vda, vdb, etc.) storage controller, performance from an iSCSI client connecting to the iSCSI target was ~20 IOPS in my environment, with throughput (depending on IO size) of ~2-3 MiB/s. I changed the virtual disk controller within the virtual machine to SCSI and I'm now able to get 1000+ IOPS and 100+ MiB/s throughput from my iSCSI clients. The configuration:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='native'/>
<source file='/var/lib/libvirt/images/station1/station1-iscsi1-lun.img'/>
<target dev='sda' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
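For the <address> above to resolve, the domain also needs a matching SCSI controller; a common choice is virtio-scsi, though the answer doesn't state which controller model was used:
<controller type='scsi' index='0' model='virtio-scsi'/>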