Secure offsite backup, even in the case of hacker root access


I'm looking for a more secure way of doing offsite backups that also protects my data in the situation where a malicious hacker has gained root access to my server. Even though the chance of that happening is smaller than other kinds of risk, provided SSH and password security are set up properly and the system is kept up to date, the amount of damage that can be done permanently is really high, so I'd like to find a solution to limit it.

I've already tried two approaches to offsite backups:

  • A simple root-writable WebDAV mount (configured in fstab) onto which the backed-up data is copied. Problem: not really an offsite backup, because the connection - and, more importantly, access - to the offsite location is constantly left open as a folder in the filesystem. This is sufficient protection against many kinds of attack if the mount has limited access privileges (root-only read access), but it doesn't protect against a malicious person with root access.

  • Borg backup over SSH with key authentication. Problem: the connection to that offsite server can be made with the key stored on the host, so a malicious user with root access to the host can reach the backups.

As a solution I'm thinking about these potential approaches, but I don't know how to implement them or with what tools:

  • Backups can only be written or appended to the destination, but not deleted.

  • Backup software that handles the offsite backups and doesn't support mass deletion of the offsite backups from the first host.

Solutions that aren't really interesting in my situation:

  • An extra backup job on the offsite host which transfers the backups to a location that isn't accessible by the first host (due to technical limitations).

Can anyone give advice on how to implement a proper offsite backup for my case?










Tags: backup, offsite-backup

Asked May 31 at 8:05 by EarthMind. Edited Jun 3 at 19:55 by terdon. Score: 23.
Comments:

  • First you make a local backup inside the server. Then, from another server, you download the backup to yourself (without mounting directories). – TheDESTROS, May 31 at 8:33

  • 30 or 40 years ago, there existed FTP servers with an anonymous "incoming" directory. You could upload files but not overwrite or list them. It worked simply by setting the directory's permissions accordingly. So... local root or not, no difference. – Damon, Jun 1 at 17:01

  • @TheDESTROS Answer in answers, please, not in comments. – wizzwizz4, Jun 2 at 10:38

  • I don't think anonymous FTP should be used anymore. – Lucas Ramage, Jun 5 at 16:04






7 Answers
Answer (score 53) – Esa Jokinen, answered May 31 at 8:44
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.

What if the backup solution has access to the server instead? The backup system can do its job with read-only access, so a compromise of the backup system probably wouldn't compromise the server. Also, the backup system could be dedicated to that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and would require a really sophisticated attack.

To avoid overwriting the backups with tampered or damaged content, do incremental backups that allow you to restore any previous state within the defined retention period.
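Below is a minimal sketch of what such a pull-based setup could look like, run from the backup host via cron. The hostname, the backupreader account, the key path and the /backups layout are all illustrative; backupreader is assumed to have read-only access on the production server.

    #!/bin/sh
    # Runs on the BACKUP host, which holds the SSH key; the production
    # server never receives credentials for the backup storage.
    DATE=$(date +%F)

    # Pull the data; --link-dest hardlinks unchanged files against the
    # previous snapshot, giving cheap incremental history.
    rsync -a --delete \
          --link-dest=/backups/web01/latest \
          -e "ssh -i /root/.ssh/backup_pull_key" \
          backupreader@web01.example.com:/srv/data/ \
          "/backups/web01/$DATE/"

    # Point "latest" at the snapshot that was just taken.
    ln -sfn "/backups/web01/$DATE" /backups/web01/latest

Because the dated directories only ever exist on the backup host, a compromised production server cannot reach back and destroy the history.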






Comments:

  • Any advice on where to search for a guide to set up a read-only access solution? – EarthMind, May 31 at 16:08

  • This is how I backup things over ssh: the backup server will ssh into the server to be backed up. – Michael Hampton, May 31 at 17:49

  • rsync over ssh is also a good option and allows for incremental backups; straight scp is better suited for full backups. – Sampo Sarrala, Jun 1 at 2:39

  • +1 - "pull" instead of "push" – Criggie, Jun 1 at 6:48

  • This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work. – Dubu, Jun 4 at 8:36


















Answer (score 9) – Tim, answered May 31 at 9:37, edited Jun 3 at 19:52
Immutable Storage

One good option is to make your backup storage immutable, or at least to provide reliable versioning, which gives you effective immutability. To be clear: immutable means unable to be changed, or permanent.

There are multiple services that can do this for you. AWS S3, Backblaze B2, and I suspect Azure and Google all offer a similar service. You could probably set up a server to do this yourself, but I'm not sure how.

When you have an immutable / version-controlled repository you can restore your backup to any point, so if your host is compromised you can still restore to any point in time.

AWS S3

I'm most familiar with AWS S3. S3 provides versioned, encrypted storage with a high level of durability.

S3 supports versioning, which gives you effective immutability. You can use lifecycle rules to delete old versions of files after a time period you configure. You can also archive versions to cold storage (Glacier cold archive), which costs about $1/TB/month.

You can use the Intelligent-Tiering storage class to reduce costs. I choose to use a lifecycle rule to move all of the static data to the Infrequent Access class, which is durable and has moderate (hot) performance but doesn't have the scalability or performance of S3 Standard.

S3 uses IAM (Identity and Access Management, i.e. user management) users and policies. This gives you very granular control over what your backup software can do with your storage. You can give the backup user permission to upload but deny update and delete. You can also require multi-factor authentication to delete files, or even apply an object lock so that files can't be deleted.

Suggested Software

I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, and that in general operation it doesn't update or delete any files.

Borg is another option. I used to use Borg, but I found that with my moderately sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but I don't have the link.

There are dozens of pieces of software that can upload to cloud storage.

Protected Storage

With some backup software you could try giving the IAM user permission to write new files but not to update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below, Restic will not work with those permissions. You can also require multi-factor authentication for deleting files from S3.

You could have another IAM user, run from, say, your PC, which does a periodic clean scrub of the archive, discarding versions as per the policy you set.
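A minimal sketch of the Restic-to-S3 part is below. The bucket name, paths and credentials are illustrative, and a dedicated backup-only IAM user is assumed; since (as the comment below notes) Restic needs to manage its lock files, this sketch relies on bucket versioning rather than a deny-delete policy.

    # Run once from an admin machine: turn on bucket versioning so
    # overwritten or deleted objects keep their earlier versions.
    aws s3api put-bucket-versioning --bucket example-backup-bucket \
        --versioning-configuration Status=Enabled

    # On the server being backed up, with credentials of the backup-only IAM user:
    export AWS_ACCESS_KEY_ID=<backup-user-access-key>
    export AWS_SECRET_ACCESS_KEY=<backup-user-secret-key>
    export RESTIC_PASSWORD_FILE=/root/.restic-pass

    restic -r s3:s3.amazonaws.com/example-backup-bucket init      # first run only
    restic -r s3:s3.amazonaws.com/example-backup-bucket backup /srv/data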






Comments:

  • See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period. – user71659, May 31 at 23:54

  • I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later. – Tim, Jun 1 at 2:45

  • Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket. – Scott Dudley, Jun 3 at 13:44


















Answer (score 8) – Jacob, answered May 31 at 21:53, edited Jun 2 at 3:48 by Nonny Moose
Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can only result in creating new backups, not in overwriting or deleting old ones.
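A minimal sketch of how this is typically enforced on the backup server's side, roughly following the Borg documentation's append-only setup via an SSH forced command. The user, key and repository path are illustrative.

    # On the backup server, in ~backup/.ssh/authorized_keys (a single line, key shortened):
    # every connection made with this key is forced into an append-only "borg serve",
    # so the client can create new archives, while prune/delete operations do not
    # actually remove data from the repository.
    command="borg serve --append-only --restrict-to-path /home/backup/repos/web01",restrict ssh-ed25519 AAAAC3Nza...EXAMPLE root@web01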






Comments:

  • One thing I don't like about Borg is that if your incremental backup is under some given size, it just uploads it all each backup. I moved to Restic because Borg was inefficient with bandwidth. I don't know what the threshold is, but it was enough to be mildly annoying. – Tim, Jun 1 at 21:23

  • So, who removes the old backups in such a system? I've tried only ever adding and never removing stuff from hard drives; it turns out they quickly run out of storage. – Mast, Jun 3 at 6:45


















Answer (score 7)
"Solutions that aren't really interesting in my situation: an extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host."

The fundamental problem is that if you can remotely access your backups, then so can the hacker.

"(Due to technical limitations)"

Technical limitations are made to be overcome.

"Can anyone give advice on how to implement a proper offsite backup for my case?"

Tape drives have been providing secure, offsite protection against all sorts of disasters -- including hackers -- for almost 70 years.
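For completeness, a minimal sketch of a tape workflow; the device node and path are illustrative and assume a standard SCSI tape drive exposed as /dev/nst0.

    # Write a full backup to tape, verify it, then eject so the cartridge can go offsite.
    tar -cf /dev/nst0 /srv/data        # write the archive at the current tape position
    mt -f /dev/nst0 rewind             # rewind to the beginning of the tape
    tar -tf /dev/nst0 > /dev/null      # read the archive back as a basic verification
    mt -f /dev/nst0 offline            # rewind and eject the cartridge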






Comments:

  • I don't understand why this answer is not higher up. The best way to protect against an online attack is to take the backup offline. Tape is simple and proven. – Greg, Jun 1 at 23:12

  • @Greg It's not a solution for everyone; in my case the limitations of the service I'm using only allow WebDAV, Borg, SMB and NFS connections. Plus it's a very expensive solution (compared to decent alternatives) and not interesting in every case. I'm not seeing myself back up my €10/month VPS with an expensive offline backup solution. If the data were gone, it wouldn't be the end of the world for me. It's good to see recommendations in different price ranges here, but this solution is the least interesting for my use case. – EarthMind, Jun 2 at 5:23

  • This. Back up onto physical media and rotate the physical media through a secure offsite location, ideally one with a different risk profile for natural disasters. – arp, Jun 3 at 15:53

  • @arp Two of my sysadmins (I'm a DBA) kept the tapes in their car trunks... One of them was late to work at the WTC on 9/11 (these systems were at different DCs), so on 9/12 or 9/13 (I forget which) he drove to the backup DC with his week-old tapes. – RonJohn, Jun 3 at 15:58


















Answer (score 3)
You can use storage services like AWS S3 (or probably Google's or Azure's equivalent), where you can give your root account PUT permissions on your bucket but not DELETE permissions. That way you can use a push model, and the attacker won't be able to delete the backup.

There are further security measures you can take with AWS, like requiring MFA to perform DELETEs on the bucket while allowing PUTs and GETs without MFA. That way you can both back your data up and retrieve it to restore your services without using your MFA device, which might be useful in some extreme (and probably too obscure to even mention) case where accessing the MFA device could compromise it.

Also, as an out-of-scope comment you might find interesting or useful: there are several ways to configure S3 and similar services for automatic failover in case the main data source is offline.
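A minimal sketch of such a put-only setup, along the lines of the dedicated "push" user suggested in the comment below; the bucket and user names are illustrative.

    # Policy allowing uploads only: with no s3:DeleteObject (or s3:GetObject)
    # granted, a compromised server can add new objects but not remove existing ones.
    cat > backup-put-only.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject"],
          "Resource": "arn:aws:s3:::example-backup-bucket/*"
        }
      ]
    }
    EOF

    aws iam put-user-policy --user-name backup-push \
        --policy-name backup-put-only \
        --policy-document file://backup-put-only.json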






Comments:

  • I'd recommend creating a dedicated "push" client with write and no delete access in IAM. Also, turn on versioning on the bucket, so earlier versions are still available. As a cost saving, "retire" old versions to Glacier. – Criggie, Jun 1 at 6:47


















Answer (score 2)
"Borg backup through SSH with key authentication. Problem: connection to that offsite server can be done with the key that's stored on the host if the malicious user has root access to the host."

You can use the command option in your authorized_keys file on the remote side to fix the command that is allowed to run.

See: how to add commands in ssh authorized_keys

Even if an attacker recovers the root login, they will not be able to do anything other than run the defined command.
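A minimal sketch of such an entry; the key is shortened and /usr/local/bin/receive-backup is a placeholder for whatever fixed command you want to allow (for example a "borg serve --append-only" invocation or an rrsync wrapper).

    # On the offsite server, in the backup account's ~/.ssh/authorized_keys (one line):
    # whatever command the client asks for, only the forced command is executed,
    # and the extra options disable forwarding and PTY allocation.
    command="/usr/local/bin/receive-backup",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAAC3Nza...EXAMPLE root@web01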






Answer (score 1)
A technique you could set up is to use Syncthing between your server and a remote backup server, and let the remote backup server take snapshots on its end, so that erasure on the server side doesn't result in erasure offsite.
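A minimal sketch of the snapshot side, assuming the synced folder lives under /srv/sync on the backup server; the paths and schedule are illustrative.

    #!/bin/sh
    # Run daily from cron on the backup server. Makes a dated, hardlinked copy of
    # the folder Syncthing keeps in sync, so files deleted (or encrypted) on the
    # source remain available in older snapshots while using little extra space.
    cp -al /srv/sync/web01 "/srv/snapshots/web01-$(date +%F)"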






    share|improve this answer























      Your Answer








      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "2"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













      draft saved

      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f969606%2fsecure-offsite-backup-even-in-the-case-of-hacker-root-access%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      7 Answers
      7






      active

      oldest

      votes








      7 Answers
      7






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      53














      All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.



      What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.



      To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.






      share|improve this answer























      • Any advice on where to search for a guide to set up a read-only access solution?

        – EarthMind
        May 31 at 16:08






      • 5





        This is how I backup things over ssh: the backup server will ssh into the server to be backed up.

        – Michael Hampton
        May 31 at 17:49






      • 4





        rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups

        – Sampo Sarrala
        Jun 1 at 2:39







      • 10





        +1 - "pull" instead of "push"

        – Criggie
        Jun 1 at 6:48






      • 1





        This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.

        – Dubu
        Jun 4 at 8:36















      53














      All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.



      What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.



      To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.






      share|improve this answer























      • Any advice on where to search for a guide to set up a read-only access solution?

        – EarthMind
        May 31 at 16:08






      • 5





        This is how I backup things over ssh: the backup server will ssh into the server to be backed up.

        – Michael Hampton
        May 31 at 17:49






      • 4





        rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups

        – Sampo Sarrala
        Jun 1 at 2:39







      • 10





        +1 - "pull" instead of "push"

        – Criggie
        Jun 1 at 6:48






      • 1





        This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.

        – Dubu
        Jun 4 at 8:36













      53












      53








      53







      All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.



      What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.



      To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.






      share|improve this answer













      All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise on the server might compromise your backups, too.



      What if the backup solution has access to the server, instead? The backup system can do its job with a read-only access, so any compromise on the backup system wouldn't probably compromise the server. Also, the backup system could be dedicated for that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and need a really sophisticated attack.



      To avoid overwriting the backups with tampered or damaged content, do incremental backups that allows you to restore any previous state within the restoration period defined.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered May 31 at 8:44









      Esa JokinenEsa Jokinen

      24.9k23662




      24.9k23662












      • Any advice on where to search for a guide to set up a read-only access solution?

        – EarthMind
        May 31 at 16:08






      • 5





        This is how I backup things over ssh: the backup server will ssh into the server to be backed up.

        – Michael Hampton
        May 31 at 17:49






      • 4





        rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups

        – Sampo Sarrala
        Jun 1 at 2:39







      • 10





        +1 - "pull" instead of "push"

        – Criggie
        Jun 1 at 6:48






      • 1





        This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.

        – Dubu
        Jun 4 at 8:36

















      • Any advice on where to search for a guide to set up a read-only access solution?

        – EarthMind
        May 31 at 16:08






      • 5





        This is how I backup things over ssh: the backup server will ssh into the server to be backed up.

        – Michael Hampton
        May 31 at 17:49






      • 4





        rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups

        – Sampo Sarrala
        Jun 1 at 2:39







      • 10





        +1 - "pull" instead of "push"

        – Criggie
        Jun 1 at 6:48






      • 1





        This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.

        – Dubu
        Jun 4 at 8:36
















      Any advice on where to search for a guide to set up a read-only access solution?

      – EarthMind
      May 31 at 16:08





      Any advice on where to search for a guide to set up a read-only access solution?

      – EarthMind
      May 31 at 16:08




      5




      5





      This is how I backup things over ssh: the backup server will ssh into the server to be backed up.

      – Michael Hampton
      May 31 at 17:49





      This is how I backup things over ssh: the backup server will ssh into the server to be backed up.

      – Michael Hampton
      May 31 at 17:49




      4




      4





      rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups

      – Sampo Sarrala
      Jun 1 at 2:39






      rsync over ssh is also good option and allows for incremental backups. straight scp is better suited for full backups

      – Sampo Sarrala
      Jun 1 at 2:39





      10




      10





      +1 - "pull" instead of "push"

      – Criggie
      Jun 1 at 6:48





      +1 - "pull" instead of "push"

      – Criggie
      Jun 1 at 6:48




      1




      1





      This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.

      – Dubu
      Jun 4 at 8:36





      This is also how backup solutions like BackupPC or IBM Tivoli Storage Manager (aka Spectrum Protect) work.

      – Dubu
      Jun 4 at 8:36













      9














      Immutable Storage



      One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.



      There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.



      When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.



      *AWS S3**



      I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.



      S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.



      You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.



      S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.



      Suggested Software



      I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.



      Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.



      There are dozens of pieces of software that can upload to cloud storage.



      Protected Storage



      With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.



      You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.






      share|improve this answer

























      • See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.

        – user71659
        May 31 at 23:54











      • I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.

        – Tim
        Jun 1 at 2:45






      • 1





        Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.

        – Scott Dudley
        Jun 3 at 13:44















      9














      Immutable Storage



      One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.



      There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.



      When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.



      *AWS S3**



      I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.



      S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.



      You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.



      S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.



      Suggested Software



      I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.



      Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.



      There are dozens of pieces of software that can upload to cloud storage.



      Protected Storage



      With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.



      You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.






      share|improve this answer

























      • See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.

        – user71659
        May 31 at 23:54











      • I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.

        – Tim
        Jun 1 at 2:45






      • 1





        Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.

        – Scott Dudley
        Jun 3 at 13:44













      9












      9








      9







      Immutable Storage



      One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.



      There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.



      When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.



      *AWS S3**



      I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.



      S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.



      You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.



      S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.



      Suggested Software



      I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.



      Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.



      There are dozens of pieces of software that can upload to cloud storage.



      Protected Storage



      With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.



      You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.






      share|improve this answer















      Immutable Storage



      One good option is to make your backup storage immutable, or at least provide reliable versioning which gives you effectively immutability. To be clear: immutable means unable to be changed, or permanent.



      There are multiple services that can do this for you. AWS S3, BackBlaze B2, and I suspect Azure and Google both offer a similar service. You could probably set up a server to do this, but I'm not sure how.



      When you have an immutable / version controlled repository you can restore your backup to any point, so if your host is compromised you can still restore as at any point in time.



      *AWS S3**



      I'm most familiar with AWS S3. S3 provides versioned, encrypted storage, with a high level of durability.



      S3 supports versioning, which gives you effective immutability. You can choose to use lifecycle rules to delete old version of files after a time period you can configure. You can also archive versions to cold storage (glacier cold archive), which costs about $1/TB/month.



      You can use the intelligent storage tiering class to reduce costs. I choose to use a lifecycle rule to move all of the static data to infrequent access class, which is durable and moderate (hot) performance but doesn't have the scalability or performance of S3 standard.



      S3 uses IAM (identity access management, i.e. user management) users and policies. This gives you very granular control of what your backup software can do with your storage. You can give the backup user permission for uploads but deny update and delete. You can also require multi-factor authentication to delete files, or even provide an object lock so that files can't be deleted.



      Suggested Software



      I create incremental backups using Restic. Restic uploads the new files to your storage location. I believe (but I could be incorrect) that it creates new files, but in general operation it doesn't update or delete any files.



      Borg is another option. I used to use Borg, but I found that with my moderate sized backups of hundreds of MB it effectively uploaded all my data each day from EC2 to S3. To me this isn't incremental, so I stopped using it. I did find documentation on this, but don't have the link.



      There are dozens of pieces of software that can upload to cloud storage.



      Protected Storage



      With some backup software you could try giving the IAM user permission to write new files but not update existing files. It's easy to make this restriction with AWS IAM, but as per the comment below Restic will not work with those permissions. You can also have multi-factor authentication required for deleting files from S3.



      You could have another IAM user, run from say your PC, which does the periodic clean scrub of the archive, discarding versions as per the policy you set.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Jun 3 at 19:52

























      answered May 31 at 9:37









      TimTim

      18.8k41951




      18.8k41951












      • See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.

        – user71659
        May 31 at 23:54











      • I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.

        – Tim
        Jun 1 at 2:45






      • 1





        Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.

        – Scott Dudley
        Jun 3 at 13:44

















      • See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.

        – user71659
        May 31 at 23:54











      • I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.

        – Tim
        Jun 1 at 2:45






      • 1





        Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.

        – Scott Dudley
        Jun 3 at 13:44
















      See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.

      – user71659
      May 31 at 23:54





      See also S3 Object Lock. It can be configured such that nobody, not even the root user, can delete or overwrite an object for the defined period.

      – user71659
      May 31 at 23:54













      I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.

      – Tim
      Jun 1 at 2:45





      I suspect object lock may be a bit much for backups, as sometimes you will want to delete files. It can expire so you can delete files later.

      – Tim
      Jun 1 at 2:45




      1




      1





      Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.

      – Scott Dudley
      Jun 3 at 13:44





      Restic likes to create and remove files in the "locks" directory to control exclusive access, so if you take away the permission to remove files on the back end, it breaks. One solution proposed here is to use two buckets (one read/write bucket for locks, and one append-only bucket for everything else). It then uses a local tinyproxy instance to send stuff through one of two Rclone instances depending on the request path, and each Rclone dispatches commands to the appropriate bucket.

      – Scott Dudley
      Jun 3 at 13:44











      8














      Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.






      share|improve this answer




















      • 2





        One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.

        – Tim
        Jun 1 at 21:23











      • So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.

        – Mast
        Jun 3 at 6:45















      8














      Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.






      share|improve this answer




















      • 2





        One thing I don't like about Borg is if your incremental backup is under some given size it just uploads it all each backup. I moved to Restic because it was inefficient with bandwidth. I don't know what the threshold is, but enough that it was mildly annoying.

        – Tim
        Jun 1 at 21:23











      • So, who removes the old back-ups in such a system? I've tried only adding and never removing stuff to harddrives once, turns out they quickly run out of storage.

        – Mast
        Jun 3 at 6:45













      8












      8








      8







      Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.






      share|improve this answer















      Borg Backup supports append-only remote repositories. Any compromise of the server being backed up can result only in creating new backups, not overwriting only old ones.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Jun 2 at 3:48









      Nonny Moose

      1033




      1033










      answered May 31 at 21:53









      JacobJacob

      811




      811







      7















      > Solutions that aren't really interesting in my situation:
      > An extra backup job on the offsite host which transfers them to a location that isn't accessible by the first host.

      The fundamental problem is that if you can remotely access your backups then so can the hacker.

      > (Due to technical limitation)

      Technical limitations are made to be overcome.

      > Can anyone give advice on how to implement a proper offsite backup for my case?

      Tape drives have been providing secure, off-site protection against all sorts of disasters -- including hackers -- for almost 70 years.
























      • 1





        I don't understand why this answer is not higher up. The best way to protect against an online attack is to take the backup offline. Tape is simple and proven.

        – Greg
        Jun 1 at 23:12











      • @Greg It's not a solution for everyone; in my case the service I'm using only allows WebDAV, Borg, SMB and NFS connections. Plus it's a very expensive solution (compared to decent alternatives) and not interesting in every case. I don't see myself backing up my €10/month VPS with an expensive offline backup solution. If the data were gone, it wouldn't be the end of the world for me. It's good to see recommendations in different price ranges here, but this solution is the least interesting for my use case.

        – EarthMind
        Jun 2 at 5:23











      • This. Back up onto physical media and rotate the physical media through a secure off-site location, ideally one with a different risk profile for natural disasters.

        – arp
        Jun 3 at 15:53











      • @arp two of my sysadmins (I'm a DBA) kept the tapes in their car trunks... One of them was late to work at the WTC on 9/11 (these systems were at different DCs), so on 9/12 or 9/13 (I forget which) he drove to the backup DC with his week-old tapes.

        – RonJohn
        Jun 3 at 15:58















      answered Jun 1 at 3:51 by RonJohn








      3














      You can use storage services like AWS S3 (or probably Google's or Azure's equivalent) where you can give your root account PUT permissions to your bucket but not DELETE permissions. That way, you can use a push model and the attacker won't be able to delete the backup.

      There are further security measures you can take with AWS, such as requiring MFA to perform DELETEs on the bucket while allowing PUTs and GETs without MFA. That way, you can both back your data up and retrieve it to restore your services without using your MFA device, which might be useful in some extreme (and probably too obscure to even mention) case where accessing the MFA device could compromise it.

      Also, as an out-of-scope comment you might find interesting or useful: there are several ways to configure S3 and similar services for automatic failover in case the main data source is offline.
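
      A sketch of the kind of IAM policy this describes (the bucket name is a placeholder and the action list may need tuning for your backup tool): the backup credentials can write, read and list, but get no s3:DeleteObject permission, and bucket versioning keeps even overwritten objects recoverable.

      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::example-backup-bucket",
                       "arn:aws:s3:::example-backup-bucket/*"]
        }]
      }

      # enable versioning so an overwrite leaves the previous object version intact:
      aws s3api put-bucket-versioning --bucket example-backup-bucket --versioning-configuration Status=Enabled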
























      • 1





        I'd recommend creating a dedicated "push" client with write and no delete access in IAM. Also, turn on versioning on the bucket, so earlier versions are still available. As a cost saving, "retire" old versions to Glacier.

        – Criggie
        Jun 1 at 6:47
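
      A sketch of the versioning-plus-lifecycle setup Criggie suggests (untested; the bucket name and the 30-day threshold are placeholder choices), moving non-current object versions to Glacier to keep costs down:

      # lifecycle.json
      {
        "Rules": [{
          "ID": "retire-old-versions",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "NoncurrentVersionTransitions": [
            { "NoncurrentDays": 30, "StorageClass": "GLACIER" }
          ]
        }]
      }

      aws s3api put-bucket-lifecycle-configuration --bucket example-backup-bucket --lifecycle-configuration file://lifecycle.json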















      answered May 31 at 17:45 by Blueriver














      2















      > Borg backup through SSH with key authentication. Problem: connection to that offsite server can be done with the key that's stored on the host if the malicious user has root access to the host.

      You can use the command option in your authorized_keys file to fix which command is allowed on the remote side.

      See: how to add commands in ssh authorized_keys

      Even if an attacker recovers the root login, they will not be able to do anything other than run the defined command.
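
      The general shape of such an authorized_keys entry (receive-backup.sh is a hypothetical placeholder script and the key is truncated), combining the forced command with the usual hardening options:

      # on the backup host, in the backup user's ~/.ssh/authorized_keys:
      command="/usr/local/bin/receive-backup.sh",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... root@host1

      On OpenSSH 7.2 and newer, the single restrict option can be used in place of the list of no-* options.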






      answered Jun 1 at 18:44 by Snorky





















              1














      A technique you could set up is to use Syncthing between your server and a remote backup server, and let the remote backup server take snapshots (or similar) on its end, so that erasure on the server side doesn't result in erasure offsite.
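
      A sketch of the snapshot side on the backup host (assuming the Syncthing folder is a btrfs subvolume; paths and schedule are placeholders). Syncthing's built-in file versioning is another option, but snapshots kept outside the synced folder cannot be reached from the source server at all:

      # /etc/cron.d/sync-snapshots on the backup host: nightly read-only snapshot of the synced data
      0 3 * * * root btrfs subvolume snapshot -r /srv/sync/host1 /srv/snapshots/host1-$(date +\%F)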






      answered Jun 1 at 18:05 by john


























