Can I use ZFS to replicate (fast) EC2 instance store to (slow) EBS store?


I love the idea of using SSD instance stores as L2ARC and ZIL for a zpool backed by EBS.
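
For context, a minimal sketch of that baseline setup is below; the pool name tank and the EBS device xvdf are assumptions for illustration, while xvdb and xvdc are the instance-store devices used later in this question.

# Illustrative only: tank and xvdf are placeholder names
zpool create tank xvdf       # data pool backed by an EBS volume
zpool add tank cache xvdb    # instance-store SSD as L2ARC (read cache)
zpool add tank log xvdc      # instance-store SSD as ZIL/SLOG (sync write log)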



Going further (and into more dangerous territory), could I instead create a zpool mirror with the 2 instance stores:



zpool create vol1 mirror xvdb xvdc



and then use ZFS snapshotting/replication to keep a "warm"/eventually consistent spare on EBS?



  • I would be OK with losing a few seconds of data.

  • I don't want to add the EBS volume as a hot spare, because that would limit the speed of the whole pool.

amazon-ec2 zfs amazon-ebs
asked Jun 27 '14 at 21:28 by Seamus Abshere

  • I wouldn't recommend it. The cloud isn't really a good application for ZFS in this manner.

    – ewwhite
    Jun 27 '14 at 21:44

  • This is contradictory. Without getting into implementation details, how can slow storage keep up with replicating fast storage? It can't. Use EBS with S3 snapshots; that snapshot capability is the single most important benefit of paying for an expensive AWS server. If you don't want that, just use a cheap VPS from elsewhere.

    – sivann
    May 29 '15 at 8:20

3 Answers

This seems to me a perfectly acceptable setup if your spare writes to EBS (and you snapshot that) and you have some failover scenario in place, since restoring your instance store obviously costs time.

What you describe was actually the only setup we had before EBS existed. People survived for years doing exactly that.

Finally, Netflix moved away from EBS-backed disks due to the extra risk of failure. They just replicate using instance storage (with Cassandra).

answered May 29 '15 at 7:09 by Berend de Boer

I would not mix such different disks in a mirrored volume. I would rather use frequent send/receive iterations to keep consistent, point-in-time backups of the main volume.
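
For example, one such iteration could look like the sketch below, run from cron every few minutes. The source pool vol1 comes from the question; the dataset vol1/data and the target pool ebsbackup (a single-disk pool on the EBS volume) are assumed names.

#!/bin/sh
# Take a snapshot of the fast pool's dataset and ship it to the EBS-backed pool.
PREV=$(zfs list -H -t snapshot -d 1 -o name -s creation vol1/data | tail -n 1)
NOW="vol1/data@$(date +%Y%m%d%H%M%S)"
zfs snapshot "$NOW"
if [ -n "$PREV" ]; then
    zfs send -i "$PREV" "$NOW" | zfs receive -F ebsbackup/data   # incremental update
else
    zfs send "$NOW" | zfs receive -F ebsbackup/data              # first run: full copy
fi

This keeps EBS out of the write path of the fast pool and gives you an eventually consistent copy that lags by at most one interval.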






answered May 29 '15 at 9:13 by shodanshok

I would suggest not mirroring the drives. Instead, create two zpools with one drive each: one on the ephemeral drive and another on the EBS drive. Create a dataset, then zfs send from the ephemeral pool's snapshots to the EBS zpool at frequent intervals. You can easily grow the EBS drive and its zpool while the pool stays online, using the AWS CLI to grow the EBS volume, fdisk or parted to resize the partition, and zpool online -e to expand the device. With snapshot rotation you can also save space: for example, keep only the last 24 hours of snapshots; if you snapshot and send/recv at a 10-minute interval, you would keep about 144 snapshots at any time.
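
A rough sketch of that layout and of the online grow step; the pool names fastpool and slowpool, the devices xvdb and xvdf, and the volume ID/size are placeholders, not values from this answer.

# Illustrative only: all names below are placeholders
zpool create fastpool xvdb      # single-disk pool on the ephemeral/instance-store drive
zpool create slowpool xvdf      # single-disk pool on the EBS volume
zfs create fastpool/data        # dataset to snapshot and zfs send to slowpool

# Growing the EBS side later, while slowpool stays online:
aws ec2 modify-volume --volume-id <vol-id> --size 200   # enlarge the EBS volume (placeholder ID and size)
# if the disk is partitioned, resize the partition with fdisk/parted first
zpool online -e slowpool xvdf   # expand the vdev into the new space
# prune old snapshots periodically, e.g. zfs destroy slowpool/data@<old-snapshot>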






answered Apr 21 at 21:33 by soyayix