Incredibly low KVM disk performance (qcow2 disk files + virtio)


I'm having some serious disk performance problems while setting up a KVM guest. Using a simple dd test, the partition on the host that the qcow2 images reside on (a mirrored RAID array) writes at over 120MB/s, while my guest gets writes ranging from 0.5 to 3MB/s.



  • The guest is configured with a couple of CPUs and 4G of RAM and isn't currently running anything else; it's a completely minimal install at the moment.

  • Performance is tested using time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000 (the command is annotated just after this list).

  • The guest is configured to use virtio, but this doesn't appear to make a difference to the performance.

  • The host partitions are 4kb aligned (and performance is fine on the host, anyway).

  • Using writeback caching on the disks increases the reported performance massively, but I'd prefer not to use it; even without it performance should be far better than this.

  • Host and guest are both running Ubuntu 12.04 LTS, which comes with qemu-kvm 1.0+noroms-0ubuntu13 and libvirt 0.9.8-2ubuntu17.1.

  • Host has the deadline IO scheduler enabled and the guest has noop.
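For reference, the write benchmark from the list above, annotated in place (the command itself is unchanged):

# oflag=direct opens /tmp/test with O_DIRECT, bypassing the guest's page cache,
# so the result reflects the virtual disk rather than RAM; 16000 blocks of 64k
# amounts to roughly 1 GB of sequential writes.
time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000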

There seem to be plenty of guides out there on tweaking KVM performance, and I'll get there eventually, but it seems like I should be getting vastly better performance than this even at this point, so something already seems to be very wrong.



Update 1



And suddenly, when I go back and test now, it's 26.6 MB/s; this is more like what I expected with qcow2. I'll leave the question up in case anyone has any ideas as to what might have been the problem (and in case it mysteriously returns again).



Update 2



I stopped worrying about qcow2 performance and just cut over to LVM on top of RAID1 with raw images, still using virtio but setting cache='none' and io='native' on the disk drive. Write performance is now appx. 135MB/s using the same basic test as above, so there doesn't seem to be much point in figuring out what the problem was when it can be so easily worked around entirely.
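For anyone wanting to replicate that setup, a minimal libvirt disk definition along these lines matches the description above; the volume group and LV names are placeholders for illustration, not the ones on the actual host:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- hypothetical LV carved out of the RAID1-backed volume group -->
  <source dev='/dev/vg0/guest-root'/>
  <target dev='vda' bus='virtio'/>
</disk>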










Tags: performance, kvm-virtualization, qcow2






25 votes · asked Jul 15 '12 at 6:36 by El Yobo · edited May 2 at 8:21 by poige












  • You didn't mention the distribution and software versions in use.

    – dyasny
    Jul 15 '12 at 7:42











  • Added some info on versions.

    – El Yobo
    Jul 15 '12 at 7:55











  • ah, as expected, ubuntu... any chance you can reproduce this on fedora?

    – dyasny
    Jul 15 '12 at 8:13











  • The server is in Germany and I'm currently in Mexico, so that could be a little tricky. And if it did suddenly work... I still wouldn't want to have to deal with a Fedora server ;) I have seen a few comments suggesting that Debian/Ubuntu systems did have more issues than Fedora/CentOS for KVM as much of the development work was done there.

    – El Yobo
    Jul 15 '12 at 8:18












  • my point exactly. and in any case, if you are after a server grade OS you need RHEL, not Ubuntu

    – dyasny
    Jul 15 '12 at 8:51

















6 Answers


















14 votes














Well, yeah, qcow2 files aren't designed for blazingly fast performance. You'll get much better luck out of raw partitions (or, preferably, LVs).
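A minimal sketch of the LV route, assuming a volume group already exists on the host (the VG/LV names and the size are placeholders, not taken from the question):

# carve out a dedicated logical volume for the guest
lvcreate -L 20G -n guest1-disk vg0
# then hand /dev/vg0/guest1-disk to the guest as a raw virtio disk, e.g. via a
# type='block' <disk> definition like the one sketched under Update 2 above.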






answered Jul 15 '12 at 7:45 by womble


















  • Obviously, but they're also not meant to be quite as crap as the numbers I'm getting either.

    – El Yobo
    Jul 15 '12 at 7:52






  • Most examples out there show similar performance with qcow2, which seems to be a significant improvement over the older version. The KVM site itself has some numbers up at linux-kvm.org/page/Qcow2 which show comparable times for a range of cases.

    – El Yobo
    Jul 15 '12 at 8:15






  • 18:35 (qcow2) vs 8:48 (raw) is "comparable times"?

    – womble
    Jul 15 '12 at 8:19






  • I've switched them over to LVM backed raw images on top of RAID1, set the io scheduler to noop on the guest and deadline on the host and it now writes at 138 MB/s. I still don't know what it was that caused the qcow2 to have the 3MB/s speeds, but clearly it can be sidestepped by using raw, so thanks for pushing me in that direction.

    – El Yobo
    Jul 15 '12 at 16:57






  • That is not quite true - the latest patches in qemu speed up qcow2 a lot! We are almost on par.

    – lzap
    Nov 25 '13 at 20:43


















7 votes














How to achieve top performance with QCOW2:



qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on imageXYZ


The most important option is preallocation, which gives a nice boost according to the qcow2 developers. It is almost on par with LVM now! Note that this is usually enabled in modern (Fedora 25+) Linux distros.



You can also use the unsafe cache mode if this is not a production instance (this is dangerous and not recommended, only good for testing):



<driver name='qemu' cache='unsafe' />


Some users report that this configuration beats the LVM/unsafe configuration in some tests.



All of these parameters require QEMU 1.5+! Again, most modern distros have it.
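To sanity-check the result, qemu-img can report the format-specific options back (assuming a reasonably recent qemu-img; the image name simply follows the example above):

# on recent versions, the "Format specific information" section should list
# compat: 1.1 and lazy refcounts: true for an image created as above
qemu-img info imageXYZ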






answered Nov 25 '13 at 21:04 by lzap · edited Apr 7 '17 at 8:26




















  • It is not a good idea to use cache=unsafe: an unexpected host shutdown can wreak havoc on the entire guest filesystem. It is much better to use cache=writeback: similar performance, but much better reliability.

    – shodanshok
    Apr 2 '15 at 17:18






  • As I've said: if this is not a production instance (good for testing)

    – lzap
    Apr 3 '15 at 13:11











  • Fair enough. I missed it ;)

    – shodanshok
    Apr 3 '15 at 15:34


















6 votes














I achieved great results for a qcow2 image with this setting:



<driver name='qemu' type='raw' cache='none' io='native'/>


which disables host page caching and enables AIO (asynchronous IO). Running your dd command gave me 177MB/s on the host and 155MB/s in the guest. The image is placed on the same LVM volume where the host's test was done.



My qemu-kvm version is 1.0+noroms-0ubuntu14.8 and kernel 3.2.0-41-generic from stock Ubuntu 12.04.2 LTS.
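Given the comment below about the type mismatch, the qcow2 variant of the same idea would presumably just swap the type attribute (untested here, per the caveat in the comments):

<driver name='qemu' type='qcow2' cache='none' io='native'/>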






answered Jun 15 '13 at 21:29 by gertas


















  • You set a qcow2 image type to "raw"?

    – Alex
    Aug 18 '13 at 11:02











  • I guess I copied an older entry; I suppose the speed benefits should be the same for type='qcow2' - could you check that before I edit? I no longer have access to such a configuration - I migrated to LXC with bind-mounted directories to achieve real native speeds in guests.

    – gertas
    Aug 18 '13 at 11:15


















2 votes














If you're running your VMs with a single command, you can use these arguments:




kvm -drive file=/path_to.qcow2,if=virtio,cache=off <...>




It got me from 3MB/s to 70MB/s
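On newer QEMU builds the equivalent invocation usually spells the cache mode as cache=none and calls the emulator binary directly; the image path below is only the same placeholder as above:

qemu-system-x86_64 -enable-kvm -drive file=/path_to.qcow2,if=virtio,cache=none <...>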






answered Apr 2 '15 at 16:06 by Kourindou Hime






























2 votes














    On old Qemu/KVM versions, the Qcow2 backend was very slow when not preallocated, and more so when used without writeback cache enabled. See here for more information.



    On more recent Qemu versions, Qcow2 files are much faster, even when using no preallocation (or metadata-only preallocation). Still, LVM volumes remain faster.



    A note on the cache modes: writeback cache is the preferred mode, unless using a guest with no or disabled support for disk cache flushes/barriers. In practice, Win2000+ guests and any Linux guest using EXT4, XFS or EXT3+barrier mount options are fine.
    On the other hand, cache=unsafe should never be used on production machines, as cache flushes are not propagated to the host system. An unexpected host shutdown can literally destroy the guest's filesystem.
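    In libvirt terms, the recommended mode corresponds to a driver line like the following (a sketch only - keep whatever disk type and image settings you already use):

    <driver name='qemu' type='qcow2' cache='writeback'/>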






answered Apr 2 '15 at 17:23 by shodanshok






























2 votes














      I experienced exactly the same issue.
      Within a RHEL7 virtual machine I run LIO iSCSI target software to which other machines connect. As the underlying storage (backstore) for my iSCSI LUNs I initially used LVM, but then switched to file-based images.



      Long story short: when the backing storage was attached to the virtio_blk (vda, vdb, etc.) storage controller, performance from an iSCSI client connecting to the iSCSI target was, in my environment, ~20 IOPS, with throughput (depending on IO size) of ~2-3 MiB/s. I changed the virtual disk controller within the virtual machine to SCSI and I'm now able to get 1000+ IOPS and 100+ MiB/s throughput from my iSCSI clients.



      <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='native'/>
      <source file='/var/lib/libvirt/images/station1/station1-iscsi1-lun.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
      </disk>
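      For bus='scsi' to take the paravirtualized path rather than an emulated SCSI adapter, the domain also needs a virtio-scsi controller element; a minimal sketch, with the index assumed to match the address above:

      <controller type='scsi' index='0' model='virtio-scsi'/>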





answered Jul 24 '18 at 14:40 by Greg W






















