Secure offsite backup, even in the case of hacker root access
I'm looking for a more secure way of doing offsite backups that also protects my data in the situation where a malicious hacker has gained root access to my server. Even though the chance of that happening is smaller than other kinds of risk, provided SSH and password security are properly set up and the system is kept up to date, the amount of damage that could be done permanently is really high, so I'd like to find a solution that limits it.

I've already tried two approaches to offsite backups:

- A simple root-writable WebDAV mount (configured in fstab) onto which the backed-up data is copied. Problem: it's not really an offsite backup, because the connection - and, more importantly, access - to the offsite location is constantly left open as a folder in the filesystem. This is sufficient protection against many kinds of attack if the mount has limited access privileges (root-only read access), but it doesn't protect against a malicious person with root access.

- Borg backup over SSH with key authentication. Problem: if a malicious user has root access to the host, they can connect to the offsite server using the key stored on that host.
As a solution I'm considering the following approaches, but I don't know how to implement them or with which tools:

- Backups that can only be written or appended at the destination, but never deleted.
- Backup software that handles the offsite backups and doesn't allow mass deletion of the offsite backups from the first host.

Solutions that aren't really interesting in my situation:

- An extra backup job on the offsite host which transfers the backups to a location that isn't accessible by the first host (not possible due to technical limitations).
Can anyone give advice on how to implement a proper offsite backup for my case?
backup offsite-backup

asked May 31 at 8:05 by EarthMind, edited Jun 3 at 19:55 by terdon
First you make a local backup inside the server. Then, from another server, you download the backup to yourself (without mounting directories).
– TheDESTROS
May 31 at 8:33
30 or 40 years ago, there were FTP servers with an anonymous "incoming" directory. You could upload files but not overwrite or list them. This worked simply by setting the directory's permissions accordingly. So... local root or not, no difference.
– Damon
Jun 1 at 17:01
@TheDESTROS Answer in answers, please, not in comments.
– wizzwizz4
Jun 2 at 10:38
I don't think anonymous FTP should be used anymore.
– Lucas Ramage
Jun 5 at 16:04
7 Answers
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise of the server might compromise your backups, too.

What if the backup solution had access to the server instead? The backup system can do its job with read-only access, so a compromise of the backup system would be unlikely to compromise the server. Also, the backup system could be dedicated to that purpose alone, making the contents of the backup the only attack vector; exploiting that would be very unlikely and would require a really sophisticated attack.

To avoid overwriting the backups with tampered or damaged content, do incremental backups that allow you to restore any previous state within the defined retention period.

– Esa Jokinen, answered May 31 at 8:44
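As an illustration, here is a minimal sketch of this pull model as a cron job on a dedicated backup host, assuming a hypothetical read-only account ("backup-reader") on the production server and rsync over SSH; hostnames and paths are placeholders:

```
#!/bin/sh
# Runs on the dedicated backup host (e.g. from cron). The production
# server never holds credentials for the backup host; "backup-reader"
# is a hypothetical unprivileged, read-only account on that server.
SRC="backup-reader@prod.example.com"
DEST="/srv/backups/prod/$(date +%F)"
LINK="/srv/backups/prod/latest"

mkdir -p "$DEST"
# --link-dest hard-links files that are unchanged since the previous
# run, giving cheap daily snapshots the source host cannot touch.
rsync -a -e ssh --link-dest="$LINK" \
    "$SRC:/etc" "$SRC:/var/www" "$DEST" \
  && ln -sfn "$DEST" "$LINK"
```

Since the SSH key lives on the backup host and the production account is read-only, root on the production server gains no path to the existing snapshots.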
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
This is how I back up things over SSH: the backup server SSHes into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
rsync over SSH is also a good option and allows for incremental backups; plain scp is better suited for full backups.
– Sampo Sarrala
Jun 1 at 2:39
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
Immutable Storage
One good option is to make your backup storage immutable, or at least to provide reliable versioning, which gives you effective immutability. To be clear: immutable means unable to be changed, i.e. permanent.

There are multiple services that can do this for you. AWS S3 and Backblaze B2 offer it, and I suspect Azure and Google offer similar services. You could probably set up a server of your own to do this, but I'm not sure how.

When you have an immutable / version-controlled repository, you can restore your backup to any point, so even if your host is compromised you can still restore to any point in time.

AWS S3

I'm most familiar with AWS S3. S3 provides versioned, encrypted storage with a high level of durability.

S3 supports versioning, which gives you effective immutability. You can use lifecycle rules to delete old versions of files after a configurable time period. You can also archive versions to cold storage (Glacier Deep Archive), which costs about $1/TB/month.

You can use the intelligent tiering storage class to reduce costs. I choose to use a lifecycle rule to move all static data to the infrequent access class, which is durable and has moderate (hot) performance but doesn't have the scalability or performance of S3 Standard.

S3 uses IAM (Identity and Access Management, i.e. user management) users and policies. These give you very granular control over what your backup software can do with your storage. You can give the backup user permission to upload but deny update and delete. You can also require multi-factor authentication to delete files, or even apply an object lock so that files can't be deleted at all.

Suggested Software

I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files but, in general operation, doesn't update or delete any files.

Borg is another option. I used to use Borg, but I found that with my moderately sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but I don't have the link.

There are dozens of pieces of software that can upload to cloud storage.

Protected Storage

With some backup software you could try giving the IAM user permission to write new files but not to update existing files. It's easy to make this restriction with AWS IAM but, as per the comment below, Restic will not work with those permissions. You can also require multi-factor authentication for deleting files from S3.

You could have another IAM user, run from, say, your PC, that does a periodic clean-up of the archive, discarding versions according to the policy you set.

– Tim, answered May 31 at 9:37, edited Jun 3 at 19:52
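As a sketch of the upload-but-never-delete restriction described above, assuming a hypothetical IAM user named "backup-writer" and an illustrative bucket name (check the exact action list against current AWS documentation):

```
# Attach an inline policy to the hypothetical "backup-writer" user:
# uploads and reads are allowed, deletes are explicitly denied.
aws iam put-user-policy \
  --user-name backup-writer \
  --policy-name backup-put-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::example-backup-bucket",
          "arn:aws:s3:::example-backup-bucket/*"
        ]
      },
      {
        "Effect": "Deny",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": "arn:aws:s3:::example-backup-bucket/*"
      }
    ]
  }'
```

With versioning enabled on the bucket, an overwriting PutObject creates a new version rather than destroying the old data, so even these credentials can't erase history.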
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
I suspect Object Lock may be a bit much for backups, as sometimes you will want to delete files. It can expire, so you can delete files later.
– Tim
Jun 1 at 2:45
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send traffic through one of two rclone instances depending on the request path, and each rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can then result only in new backups being created, not in old ones being overwritten or deleted.

– Jacob, answered May 31 at 21:53, edited Jun 2 at 3:48 by Nonny Moose
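This is typically enforced on the repository host through the forced command in the backup key's authorized_keys entry; a sketch, with placeholder path and key:

```
# /home/borg/.ssh/authorized_keys on the repository host: every login
# with this key is forced into an append-only "borg serve", confined
# to one repository path ("restrict" disables forwarding and PTYs).
command="borg serve --append-only --restrict-to-path /srv/borg/prod",restrict ssh-ed25519 AAAA...key... backup@prod
```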
One thing I don't like about Borg is that if your incremental backup is under some given size, it just uploads everything each backup. I moved to Restic because Borg was inefficient with bandwidth. I don't know what the threshold is, but it was enough to be mildly annoying.
– Tim
Jun 1 at 21:23
So who removes the old backups in such a system? I once tried only ever adding and never removing stuff from hard drives; it turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
Solutions that aren't really interesting in my situation: an extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host.

The fundamental problem is that if you can remotely access your backups, then so can the hacker.

(Due to technical limitations)

Technical limitations are made to be overcome.

Can anyone give advice on how to implement a proper offsite backup for my case?

Tape drives have been providing secure, offsite protection against all sorts of disasters -- including hackers -- for almost 70 years.

– RonJohn, answered Jun 1 at 3:51
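For a sense of how simple the workflow can be, a sketch with standard Linux tape tools; the device name and directories are illustrative:

```
# /dev/nst0 is the first tape drive (non-rewinding); adjust to your hardware.
mt -f /dev/nst0 rewind
# Write a full backup to tape.
tar -cvf /dev/nst0 /etc /var/www
# Rewind and verify the archive by comparing it against the filesystem.
mt -f /dev/nst0 rewind
tar -dvf /dev/nst0
# Eject the tape so it can be taken offsite.
mt -f /dev/nst0 offline
```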
I don't understand why this answer is not higher up. The best way to protect against an online attack is to take the backups offline. Tape is simple and proven.
– Greg
Jun 1 at 23:12
@Greg It's not a solution for everyone - in my case, the limitations of the service I'm using only allow WebDAV, Borg, SMB and NFS connections. Plus it's a very expensive solution (compared to decent alternatives) and not interesting in every case. I don't see myself backing up my €10/month VPS with an expensive offline backup solution. If the data were gone, it wouldn't be the end of the world for me. It's good to see recommendations in different price ranges here, but this solution is the least interesting for my use case.
– EarthMind
Jun 2 at 5:23
This. Back up onto physical media and rotate the physical media through a secure offsite location, ideally one with a different risk profile for natural disasters.
– arp
Jun 3 at 15:53
@arp Two of my sysadmins (I'm a DBA) kept the tapes in their car trunks... One of them was late to work at the WTC on 9/11 (these systems were at different DCs), so on 9/12 or 9/13 (I forget which) he drove to the backup DC with his week-old tapes.
– RonJohn
Jun 3 at 15:58
You can use storage services like AWS S3 (or probably Google's or Azure's equivalent), where you can give your root account PUT permissions on your bucket but not DELETE permissions. That way you can use a push model, and the attacker won't be able to delete the backups.

There are further security measures you can take with AWS, like requiring MFA to perform DELETEs on the bucket while allowing PUTs and GETs without MFA. That way you can both back your data up and retrieve it to restore your services without using your MFA device, which might be useful in some extreme (and probably too obscure to even mention) case where accessing the MFA device could compromise it.

Also, as an out-of-scope comment you might find interesting or useful: there are several ways to configure S3 and similar services for automatic failover in case the main data source is offline.
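A sketch of enabling versioning together with MFA Delete from the AWS CLI; the bucket name, account ID, MFA device and code are placeholders:

```
# Enable versioning plus MFA Delete on the backup bucket. This call
# must be made by the bucket owner's root credentials, supplying the
# serial (ARN) of the root MFA device and a current code.
aws s3api put-bucket-versioning \
  --bucket example-backup-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
```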
I'd recommend creating a dedicated "push" client in IAM with write access and no delete access. Also, turn on versioning on the bucket, so earlier versions remain available. As a cost saving, "retire" old versions to Glacier.
– Criggie
Jun 1 at 6:47
Borg backup through SSH with key authentication. Problem: connection to that offsite server can be done with the key that's stored on the host if the malicious user has root access to the host.

You can use the command option in your authorized_keys file on the offsite server to fix the command that the key is allowed to run (see: how to add commands in SSH authorized_keys).

Even if an attacker obtains the root login, they will not be able to do anything other than run the defined command.
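A generic sketch of such an entry; the receive-backup script is a hypothetical wrapper you would write yourself:

```
# ~/.ssh/authorized_keys on the offsite server: whatever command the
# client requests, only the fixed receive-backup script ever runs.
command="/usr/local/bin/receive-backup",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA...key... backup@prod
```

The command the client originally requested is exposed to the wrapper in $SSH_ORIGINAL_COMMAND, so the script can whitelist exactly the backup operations it expects.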
A technique you could set up is to use Syncthing between your server and a remote backup server, and let the remote backup server take snapshots (or similar) on its end, so that deletion on the server side doesn't result in deletion offsite.
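One way to do the snapshot half, assuming the backup server keeps the synced folder on Btrfs (ZFS or LVM snapshots would work equally well); paths are illustrative:

```
#!/bin/sh
# Runs daily from cron on the backup server. Syncthing keeps
# /srv/sync/prod mirrored from the production host; each read-only
# snapshot preserves that day's state even if the source is wiped.
btrfs subvolume snapshot -r /srv/sync/prod "/srv/sync/snapshots/prod-$(date +%F)"
```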
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "2"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f969606%2fsecure-offsite-backup-even-in-the-case-of-hacker-root-access%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.
What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.
To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
5
This is how I backup things over ssh: the backup server will ssh into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
4
rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups
– Sampo Sarrala
Jun 1 at 2:39
10
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
1
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
|
show 4 more comments
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.
What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.
To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
5
This is how I backup things over ssh: the backup server will ssh into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
4
rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups
– Sampo Sarrala
Jun 1 at 2:39
10
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
1
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
|
show 4 more comments
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.
What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.
To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.
What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.
To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.
answered May 31 at 8:44
Esa JokinenEsa Jokinen
24.9k23662
24.9k23662
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
5
This is how I backup things over ssh: the backup server will ssh into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
4
rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups
– Sampo Sarrala
Jun 1 at 2:39
10
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
1
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
|
show 4 more comments
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
5
This is how I backup things over ssh: the backup server will ssh into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
4
rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups
– Sampo Sarrala
Jun 1 at 2:39
10
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
1
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
Any advice on where to search for a guide to set up a read-only access solution?
– EarthMind
May 31 at 16:08
5
5
This is how I backup things over ssh: the backup server will ssh into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
This is how I backup things over ssh: the backup server will ssh into the server to be backed up.
– Michael Hampton♦
May 31 at 17:49
4
4
rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups
– Sampo Sarrala
Jun 1 at 2:39
rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups
– Sampo Sarrala
Jun 1 at 2:39
10
10
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
+1 - "pull" instead of "push"
– Criggie
Jun 1 at 6:48
1
1
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.
– Dubu
Jun 4 at 8:36
|
show 4 more comments
Immutable Storage
One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.
There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.
When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.
*AWS S3**
I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.
S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.
You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.
S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.
Suggested Software
I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.
Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.
There are dozens of pieces of software that can upload to cloud storage.
Protected Storage
With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.
You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.
– Tim
Jun 1 at 2:45
1
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
add a comment |
Immutable Storage
One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.
There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.
When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.
*AWS S3**
I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.
S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.
You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.
S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.
Suggested Software
I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.
Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.
There are dozens of pieces of software that can upload to cloud storage.
Protected Storage
With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.
You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.
– Tim
Jun 1 at 2:45
1
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
add a comment |
Immutable Storage
One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.
There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.
When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.
*AWS S3**
I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.
S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.
You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.
S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.
Suggested Software
I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.
Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.
There are dozens of pieces of software that can upload to cloud storage.
Protected Storage
With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.
You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.
Immutable Storage
One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.
There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.
When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.
*AWS S3**
I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.
S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.
You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.
S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.
Suggested Software
I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.
Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.
There are dozens of pieces of software that can upload to cloud storage.
Protected Storage
With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.
You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.
edited Jun 3 at 19:52
answered May 31 at 9:37
TimTim
18.8k41951
18.8k41951
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.
– Tim
Jun 1 at 2:45
1
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
add a comment |
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.
– Tim
Jun 1 at 2:45
1
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.
– user71659
May 31 at 23:54
I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.
– Tim
Jun 1 at 2:45
I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.
– Tim
Jun 1 at 2:45
1
1
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.
– Scott Dudley
Jun 3 at 13:44
add a comment |
Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.
2
One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.
– Tim
Jun 1 at 21:23
So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
add a comment |
Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.
2
One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.
– Tim
Jun 1 at 21:23
So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
add a comment |
Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.
Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.
edited Jun 2 at 3:48
Nonny Moose
1033
1033
answered May 31 at 21:53
JacobJacob
811
811
2
One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.
– Tim
Jun 1 at 21:23
So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
add a comment |
2
One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.
– Tim
Jun 1 at 21:23
So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
2
2
One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.
– Tim
Jun 1 at 21:23
One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.
– Tim
Jun 1 at 21:23
So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.
– Mast
Jun 3 at 6:45
add a comment |
Solutions that aren't really interesting in my situation:
An extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host.
The fundamental problem is that if you can remotely access your backups then so can the hacker.
(Due to technical limitation)
Technical limitations are made to be overcome.
Can anyone give advice on how to implement a proper offsite backup for my case?
Tape drives have been providing secure, off-site protection against all sorts of disasters -- including hackers -- for almost 70 years.
1
I don't understand why this answer is not higher up. The best way to prevent from online attack is to take it offline. Tape is simple and proven.
– Greg
Jun 1 at 23:12
@Greg It's not a solution for every, like in my case due to the limitations of the service I'm using, which only allows webdav, Borg, SMB and NFS connections. Plus it's a very expensive (compared to decent alternatives) solution and not interesting in every case. I'm not seeing myself backup my € 10/m VPS with an expensive offline backup solution. If the data would be gone, it's not the end of the world for me. It's good to see recommendations of different price ranges here, but this solutions is the least interesting for my use case.
– EarthMind
Jun 2 at 5:23
This. Backup onto physical media and rotate the physical media through a secure off-site location, ideally one with a different risk profile for natural disasters.
– arp
Jun 3 at 15:53
@asp two of my sysadmins (I'm a DBA) kept the tapes in their car trunks... One of them was late to work at the WTC on 9/11 (these systems were at different DCs), so on 9/12 or 9/13 (I forget which) he drove to the backup DC with his week-old tapes.
– RonJohn
Jun 3 at 15:58
add a comment |
Solutions that aren't really interesting in my situation:
An extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host.
The fundamental problem is that if you can remotely access your backups then so can the hacker.
(Due to technical limitation)
Technical limitations are made to be overcome.
Can anyone give advice on how to implement a proper offsite backup for my case?
Tape drives have been providing secure, off-site protection against all sorts of disasters -- including hackers -- for almost 70 years.
1
I don't understand why this answer is not higher up. The best way to prevent from online attack is to take it offline. Tape is simple and proven.
– Greg
Jun 1 at 23:12
@Greg It's not a solution for every, like in my case due to the limitations of the service I'm using, which only allows webdav, Borg, SMB and NFS connections. Plus it's a very expensive (compared to decent alternatives) solution and not interesting in every case. I'm not seeing myself backup my € 10/m VPS with an expensive offline backup solution. If the data would be gone, it's not the end of the world for me. It's good to see recommendations of different price ranges here, but this solutions is the least interesting for my use case.
– EarthMind
Jun 2 at 5:23
This. Backup onto physical media and rotate the physical media through a secure off-site location, ideally one with a different risk profile for natural disasters.
– arp
Jun 3 at 15:53
@asp two of my sysadmins (I'm a DBA) kept the tapes in their car trunks... One of them was late to work at the WTC on 9/11 (these systems were at different DCs), so on 9/12 or 9/13 (I forget which) he drove to the backup DC with his week-old tapes.
– RonJohn
Jun 3 at 15:58
add a comment |
Solutions that aren't really interesting in my situation:
An extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host.
The fundamental problem is that if you can remotely access your backups then so can the hacker.
(Due to technical limitation)
Technical limitations are made to be overcome.
Can anyone give advice on how to implement a proper offsite backup for my case?
Tape drives have been providing secure, off-site protection against all sorts of disasters -- including hackers -- for almost 70 years.
Solutions that aren't really interesting in my situation:
An extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host.
The fundamental problem is that if you can remotely access your backups then so can the hacker.
(Due to technical limitation)
Technical limitations are made to be overcome.
Can anyone give advice on how to implement a proper offsite backup for my case?
Tape drives have been providing secure, off-site protection against all sorts of disasters -- including hackers -- for almost 70 years.
answered Jun 1 at 3:51
RonJohnRonJohn
1825
1825
1
I don't understand why this answer is not higher up. The best way to prevent from online attack is to take it offline. Tape is simple and proven.
– Greg
Jun 1 at 23:12
@Greg It's not a solution for every, like in my case due to the limitations of the service I'm using, which only allows webdav, Borg, SMB and NFS connections. Plus it's a very expensive (compared to decent alternatives) solution and not interesting in every case. I'm not seeing myself backup my € 10/m VPS with an expensive offline backup solution. If the data would be gone, it's not the end of the world for me. It's good to see recommendations of different price ranges here, but this solutions is the least interesting for my use case.
– EarthMind
Jun 2 at 5:23
This. Back up onto physical media and rotate that media through a secure off-site location, ideally one with a different risk profile for natural disasters.
– arp
Jun 3 at 15:53
@arp two of my sysadmins (I'm a DBA) kept the tapes in their car trunks... One of them was late to work at the WTC on 9/11 (these systems were at different DCs), so on 9/12 or 9/13 (I forget which) he drove to the backup DC with his week-old tapes.
– RonJohn
Jun 3 at 15:58
You can use storage services like AWS S3 (or Google's or Azure's equivalents) where you can give your account PUT permissions on your bucket but not DELETE permissions. That way you can use a push model, and the attacker won't be able to delete the backup.
There are further security measures you can take with AWS, such as requiring MFA to perform DELETEs on the bucket while allowing PUTs and GETs without MFA. That way you can both back your data up and retrieve it to restore your services without using your MFA device, which might be useful in some extreme (and probably too obscure to even mention) case where accessing the MFA device could compromise it.
Also, as an out-of-scope note you might find interesting or useful: there are several ways to configure S3 and similar services for automatic failover in case the main data source goes offline.
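As a sketch of the push model (the user name and bucket are placeholders), a policy granting only PutObject and GetObject could be attached to a dedicated IAM backup user:

    # Attach a PUT/GET-only policy to a dedicated backup user; note the
    # deliberate absence of s3:DeleteObject.
    aws iam put-user-policy --user-name backup-pusher \
      --policy-name backup-put-get-only \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::example-backup-bucket/*"
        }]
      }'

One caveat: PutObject alone still allows overwriting existing objects, so enable bucket versioning (as a comment below also suggests) so that every overwrite preserves the previous version.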
answered May 31 at 17:45 by Blueriver
I'd recommend creating a dedicated "push" client with write and no delete access in IAM. Also, turn on versioning on the bucket, so earlier versions are still available. As a cost saving, "retire" old versions to Glacier.
– Criggie
Jun 1 at 6:47
"Borg backup through SSH with key authentication. Problem: connection to that offsite server can be done with the key that's stored on the host if the malicious user has root access to the host."
You can use the command option in your authorized_keys file to fix which command is allowed on the remote side; see: how to add commands in ssh authorized_keys
Even if an attacker obtains the root login, they will not be able to run anything other than the defined command.
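For Borg specifically, a forced command on the backup host can pin the client's key to an append-only borg serve (a sketch; the repository path and key are placeholders):

    # ~/.ssh/authorized_keys on the backup host: this key may only run
    # "borg serve" in append-only mode against one repository path, so a
    # stolen key cannot delete or prune existing archives.
    command="borg serve --append-only --restrict-to-path /srv/borg/myserver",restrict ssh-ed25519 AAAA...truncated myserver-backup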
answered Jun 1 at 18:44 by Snorky
A technique you could set up is using Syncthing between your server and a remote backup server, and letting the remote backup server take snapshots on its end, so that erasure on the server side doesn't result in erasure offsite.
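A sketch of the snapshot side, assuming the synced folder on the backup server lives on a btrfs subvolume (the paths are hypothetical; any snapshotting filesystem or an rsync hard-link scheme would work as well):

    #!/bin/sh
    # /etc/cron.daily/snapshot-syncthing, on the backup server only:
    # take a read-only snapshot of the synced data, so deletions that
    # Syncthing propagates from a compromised server can't touch
    # yesterday's copy.
    btrfs subvolume snapshot -r /srv/syncthing /srv/snapshots/syncthing-$(date +%F)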
answered Jun 1 at 18:05 by john
First you make a local backup inside the server. Then, from another server you download the backup to yourself (without mounting directories).
– TheDESTROS
May 31 at 8:33
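A minimal sketch of that pull model (hostnames, user, and paths are hypothetical): the backup machine initiates the transfer, so the production server never holds credentials for it.

    # Run from cron on the backup machine. Each day's pull lands in its
    # own directory, so a wiped source can't erase earlier copies.
    rsync -a backup-reader@server.example.com:/var/backups/local/ \
        /srv/pulled-backups/server/$(date +%F)/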
30 or 40 years ago, there existed FTP servers with an anonymous "incoming" directory: you could upload files but not overwrite or list them. That worked simply by setting the directory's permissions accordingly. So... local root or not, no difference.
– Damon
Jun 1 at 17:01
@TheDESTROS Answer in answers, please, not in comments.
– wizzwizz4
Jun 2 at 10:38
I don't think anonymous FTP should be used anymore.
– Lucas Ramage
Jun 5 at 16:04