Can I use ZFS to replicate (fast) EC2 instance store to (slow) EBS store?
I love the idea of using SSD instance stores as L2ARC and ZIL for a zpool backed by EBS.
Going further (and into more dangerous territory), could I instead create a zpool mirror from the two instance stores:
zpool create vol1 mirror xvdb xvdc
and then use ZFS snapshotting/replication to keep a "warm"/eventually consistent spare on EBS?
- I would be OK with losing a few seconds of data
- I don't want to add the EBS volume as a hot spare, because that would limit the speed of the whole pool
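For reference, a minimal sketch of the baseline setup mentioned in the first sentence (instance-store SSDs as L2ARC cache and ZIL/SLOG for an EBS-backed pool); the pool name and the EBS device name xvdf are placeholders, not from the question:
zpool create tank xvdf       # data pool on the (slower, persistent) EBS volume
zpool add tank cache xvdb    # instance-store SSD as L2ARC read cache
zpool add tank log xvdc      # instance-store SSD as ZIL/SLOG for synchronous writes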
amazon-ec2 zfs amazon-ebs
asked Jun 27 '14 at 21:28
Seamus Abshere
I wouldn't recommend it. The cloud isn't really a good application for ZFS in this manner.
– ewwhite
Jun 27 '14 at 21:44
This is contradictory. Without getting into implementation details, how can slow storage keep up with replicating fast storage? It can't. Use EBS with S3 snapshots; that's the single most important benefit of using an expensive AWS server, the S3 snapshot capability. If you don't want that, just use a cheap VPS from elsewhere.
– sivann
May 29 '15 at 8:20
3 Answers
Seems to me a perfectly acceptable setup if your spare writes to EBS (and you snapshot that) and you have some failover scenario, as restoring your instance store obviously costs time.
What you describe was actually the only setup we had before EBS existed. People survived for years doing exactly that.
Finally, Netflix moved away from EBS-backed disks due to the extra risk of failure; they just replicate using instance storage (with Cassandra).
answered May 29 '15 at 7:09
Berend de Boer
I would not mix such different disks in a mirrored volume. I would rather use frequent send/receive iterations to keep consistent, point-in-time backups of the main volume.
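A minimal sketch of that send/receive loop, with hypothetical pool names fast (the instance-store mirror) and ebsbackup (the EBS-backed pool); the snapshot names are illustrative only:
# one-time full replication
zfs snapshot fast/data@base
zfs send fast/data@base | zfs receive ebsbackup/data
# then, e.g. every few minutes from cron: incremental replication on top of the last common snapshot
zfs snapshot fast/data@latest
zfs send -i fast/data@base fast/data@latest | zfs receive -F ebsbackup/data
In practice each iteration would create a uniquely named snapshot and advance the incremental base.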
answered May 29 '15 at 9:13
shodanshok
I would suggest not mirroring the drives. Instead, create two zpools with one drive each: one on the ephemeral drive and another on the EBS drive. Create a dataset, then zfs send from the ephemeral zpool's snapshots to the EBS zpool at frequent intervals. You can easily grow the EBS drive and its zpool while the pool stays online, using the aws CLI to grow the EBS volume, fdisk or parted to grow the partition, and zpool online -e to expand. With snapshot rotation you can save space: for example, keep only the last 24 hours of snapshots; if you snapshot and send/recv at 10-minute intervals, that is at minimum 144 snapshots per day.
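A rough sketch of those grow-and-rotate steps, assuming a hypothetical EBS-backed pool named ebsbackup on device xvdf with a placeholder volume ID (exact device names and IDs depend on the instance):
aws ec2 modify-volume --volume-id vol-0abc12345 --size 200   # grow the underlying EBS volume (size in GiB)
# if the pool sits on a partition, grow the partition first with fdisk/parted/growpart, then:
zpool online -e ebsbackup xvdf                               # expand the vdev into the new capacity
# snapshot rotation: destroy snapshots that have aged out of the retention window
zfs destroy ebsbackup/data@old-snapshot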
answered Apr 21 at 21:33
soyayix