Incredibly low KVM disk performance (qcow2 disk files + virtio)


Tags: performance, kvm-virtualization, qcow2

25 votes, asked Jul 15 '12 at 6:36 by El Yobo, edited May 2 at 8:21 by poige
I'm having some serious disk performance problems while setting up a KVM guest. Using a simple dd test, the partition on the host that the qcow2 images reside on (a mirrored RAID array) writes at over 120MB/s, while my guest gets writes ranging from 0.5 to 3MB/s.



  • The guest is configured with a couple of CPUs and 4G of RAM and isn't currently running anything else; it's a completely minimal install at the moment.

  • Performance is tested using time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000.

  • The guest is configured to use virtio, but this doesn't appear to make a difference to the performance.

  • The host partitions are 4 KB aligned (and performance is fine on the host, anyway).

  • Using writeback caching on the disks increases the reported performance massively, but I'd prefer not to use it; even without it performance should be far better than this.

  • Host and guest are both running Ubuntu 12.04 LTS, which comes with qemu-kvm 1.0+noroms-0ubuntu13 and libvirt 0.9.8-2ubuntu17.1.

  • Host has the deadline IO scheduler enabled and the guest has noop.

There seem to be plenty of guides out there for tweaking KVM performance, and I'll get there eventually, but I should be getting vastly better performance than this already, so something seems to be very wrong.
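
For reference, here is a minimal sketch of the checks and the test described above. The device names (sda on the host, vda in the guest) are assumptions for a typical virtio setup and may differ:

# Confirm the schedulers (deadline on the host disk, noop in the guest);
# sda/vda are typical device names and may differ on your system
cat /sys/block/sda/queue/scheduler    # run on the host
cat /sys/block/vda/queue/scheduler    # run in the guest

# The direct-IO write test from the question (16000 x 64k writes, ~1 GB)
time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000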



Update 1



And suddenly when I go back and test now, it's 26.6 MB/s; this is more like what I expected with qcow2. I'll leave the question up in case anyone has any ideas as to what might have been the problem (and in case it mysteriously returns again).



Update 2



I stopped worrying about qcow2 performance and just cut over to LVM on top of RAID1 with raw images, still using virtio but setting cache='none' and io='native' on the disk drive. Write performance is now approx. 135MB/s using the same basic test as above, so there doesn't seem to be much point in figuring out what the problem was when it can be so easily sidestepped entirely.
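
For anyone following the same route, here is a minimal sketch of that cut-over. The volume group, LV name, size, and domain name are all placeholder values; the commented XML mirrors the cache='none' / io='native' settings mentioned above:

# Create a raw logical volume for the guest on the RAID1-backed VG
# ("vg0", "guest1-disk" and 20G are placeholder values)
lvcreate -L 20G -n guest1-disk vg0

# Then, in the domain definition (virsh edit guest1), point the disk
# at the LV with host caching disabled and native AIO:
#   <disk type='block' device='disk'>
#     <driver name='qemu' type='raw' cache='none' io='native'/>
#     <source dev='/dev/vg0/guest1-disk'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>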


































  • You didn't mention the distribution and software versions in use.
    – dyasny, Jul 15 '12 at 7:42

  • Added some info on versions.
    – El Yobo, Jul 15 '12 at 7:55

  • Ah, as expected, Ubuntu... any chance you can reproduce this on Fedora?
    – dyasny, Jul 15 '12 at 8:13

  • The server is in Germany and I'm currently in Mexico, so that could be a little tricky. And if it did suddenly work... I still wouldn't want to have to deal with a Fedora server ;) I have seen a few comments suggesting that Debian/Ubuntu systems did have more issues than Fedora/CentOS for KVM, as much of the development work was done there.
    – El Yobo, Jul 15 '12 at 8:18

  • My point exactly. And in any case, if you are after a server-grade OS you need RHEL, not Ubuntu.
    – dyasny, Jul 15 '12 at 8:51


















6 Answers


















14 votes – answered Jul 15 '12 at 7:45 by womble














Well, yeah, qcow2 files aren't designed for blazingly fast performance. You'll have much better luck with raw partitions (or, preferably, LVs).
























  • Obviously, but they're also not meant to be quite as crap as the numbers I'm getting either.
    – El Yobo, Jul 15 '12 at 7:52

  • Most examples out there show similar performance with qcow2, which seems to be a significant improvement over the older version. The KVM site itself has some numbers up at linux-kvm.org/page/Qcow2 which show comparable times for a range of cases.
    – El Yobo, Jul 15 '12 at 8:15

  • 18:35 (qcow2) vs 8:48 (raw) is "comparable times"?
    – womble, Jul 15 '12 at 8:19

  • I've switched them over to LVM-backed raw images on top of RAID1, set the IO scheduler to noop on the guest and deadline on the host, and it now writes at 138 MB/s. I still don't know what caused the qcow2 setup's 3MB/s speeds, but clearly it can be sidestepped by using raw, so thanks for pushing me in that direction.
    – El Yobo, Jul 15 '12 at 16:57

  • That is not quite true - the latest patches in qemu speed qcow2 up a lot! We are almost on par.
    – lzap, Nov 25 '13 at 20:43


















7 votes – answered Nov 25 '13 at 21:04 by lzap (edited Apr 7 '17 at 8:26)














How to achieve top performance with QCOW2:

qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on imageXYZ

The most important option is preallocation, which gives a nice boost according to the qcow2 developers. It is almost on par with LVM now! Note that this is usually enabled in modern (Fedora 25+) Linux distros.

You can also use an unsafe cache mode if this is not a production instance (this is dangerous and not recommended, only good for testing):

<driver name='qemu' cache='unsafe' />

Some users report that this configuration beats the LVM/unsafe configuration in some tests.

For all these parameters, QEMU 1.5+ is required! Again, most modern distros have these.
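
As an illustration, a minimal sketch of creating and checking such an image (the image name and the 20G size are placeholder values; the command above omits the size):

# Create a qcow2 image with metadata preallocation and lazy refcounts
qemu-img create -f qcow2 \
    -o preallocation=metadata,compat=1.1,lazy_refcounts=on \
    imageXYZ.qcow2 20G

# Verify that the options took effect
qemu-img info imageXYZ.qcow2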


























  • It is not a good idea to use cache=unsafe: an unexpected host shutdown can wreak havoc on the entire guest filesystem. It is much better to use cache=writeback: similar performance, but much better reliability.
    – shodanshok, Apr 2 '15 at 17:18

  • As I've said: if this is not a production instance (good for testing).
    – lzap, Apr 3 '15 at 13:11

  • Fair enough. I missed it ;)
    – shodanshok, Apr 3 '15 at 15:34


















6 votes – answered Jun 15 '13 at 21:29 by gertas














I achieved great results for a qcow2 image with this setting:

<driver name='qemu' type='raw' cache='none' io='native'/>

which disables host-side caching and enables AIO (asynchronous IO). Running your dd command gave me 177MB/s on the host and 155MB/s in the guest. The image is placed on the same LVM volume where the host's test was done.

My qemu-kvm version is 1.0+noroms-0ubuntu14.8 with kernel 3.2.0-41-generic, from stock Ubuntu 12.04.2 LTS.
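
If you manage the guest through libvirt, a minimal sketch of applying this driver line (the domain name "guest1" is a placeholder):

# Edit the domain XML and set the <driver> element as shown above
virsh edit guest1

# Restart the guest so the new cache/io settings take effect
virsh shutdown guest1
virsh start guest1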
























  • You set a qcow2 image type to "raw"?
    – Alex, Aug 18 '13 at 11:02

  • I guess I copied an older entry; I suppose the speed benefits should be the same for type='qcow2'. Could you check that before I edit? I no longer have access to such a configuration, as I migrated to LXC with bind-mounted directories to achieve real native speeds in guests.
    – gertas, Aug 18 '13 at 11:15


















2 votes – answered Apr 2 '15 at 16:06 by Kourindou Hime














If you're running your VMs with a single command, you can use these arguments:

kvm -drive file=/path_to.qcow2,if=virtio,cache=off <...>

It got me from 3MB/s to 70MB/s.
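
Newer QEMU documentation spells this option cache=none; a minimal sketch of the equivalent invocation (the image path is a placeholder):

# Bypass the host page cache for the virtio drive
kvm -drive file=/var/lib/libvirt/images/guest.qcow2,if=virtio,cache=none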




































2 votes – answered Apr 2 '15 at 17:23 by shodanshok














On old QEMU/KVM versions, the qcow2 backend was very slow when not preallocated, more so if used without writeback cache enabled. See here for more information.

On more recent QEMU versions, qcow2 files are much faster, even when using no preallocation (or metadata-only preallocation). Still, LVM volumes remain faster.

A note on the cache modes: writeback cache is the preferred mode, unless using a guest with no (or disabled) support for disk cache flushes/barriers. In practice, Win2000+ guests, and Linux guests using EXT4, XFS, or EXT3 with barrier mount options, are fine. On the other hand, cache=unsafe should never be used on production machines, as cache flushes are not propagated to the host system. An unexpected host shutdown can literally destroy the guest's filesystem.
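
As an illustration, a minimal sketch of selecting writeback caching on the QEMU command line (the image path is a placeholder):

# Writeback caching: good performance, and safe as long as the guest
# honours flush/barrier requests
kvm -drive file=/var/lib/libvirt/images/guest.qcow2,if=virtio,cache=writeback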




































2 votes – answered Jul 24 '18 at 14:40 by Greg W














I experienced exactly the same issue. Within a RHEL7 virtual machine I run the LIO iSCSI target software, to which other machines connect. As the underlying storage (backstore) for my iSCSI LUNs I initially used LVM, but then switched to file-based images.

Long story short: when the backing storage was attached to the virtio_blk (vda, vdb, etc.) storage controller, performance from an iSCSI client connecting to the iSCSI target was, in my environment, ~20 IOPS, with throughput (depending on IO size) of ~2-3 MiB/s. After changing the virtual disk controller within the virtual machine to SCSI, I get 1000+ IOPS and 100+ MiB/s throughput from my iSCSI clients.

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/station1/station1-iscsi1-lun.img'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
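
For reference, a minimal sketch of checking which bus each guest disk is on before and after such a change (the domain name "station1" is a placeholder taken from the path above):

# List the guest's block devices: vdX targets indicate virtio_blk,
# sdX targets indicate the SCSI bus
virsh domblklist station1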



























