Synology NAS - rsync messing up versioning / deduplication


Is it true that Synology DSM 4.3's default rsync implementation cannot handle "vast" amounts of data and could mess up versioning / deduplication? Could any of the variables (see detailed info below) make this significantly more difficult?



Edit: I'm looking for nothing more than an answer as to whether the above claims are nonsense or could be true.



Detailed info:



At work, we've got a Synology NAS running at the office. It is used by a few designers who work directly from it. Their projects consist of high-resolution stock photos, large PSDs, PDFs, and the like. One folder, approximately 430 GB in size, holds only the currently running projects. This folder is supposed to be backed up to a datacenter weekly over our internet connection.



All of our IT is handled by a third party, which claims that our backup has grown to a size ("100 GB+") at which the default rsync implementation in DSM 4.3 can no longer handle the amount of data going to the online backup (on one of their machines in their datacenter). They say the backup amounts to about 10 TB of data because rsync has problems with "versioning / de-duplication" (retention: 30 days) and goes haywire.



Because of this, they suggest using a "professional online backup service", which would significantly increase our per-GB cost for the online backup.










Tags: rsync, synology, backup






asked Jan 2 '14 at 14:03 by Ambidex




















1 Answer




















Rsync in and of itself doesn't choke on large file sizes or "too many" files. Depending on your situation, it could be (though it is unlikely) that the weekly rsync job is taking more than a week to complete, causing a new rsync job to begin before the previous one has finished.
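If overlapping runs are the concern, a guard around the job can be sketched with flock(1). The lock-file path is a hypothetical choice, and the real rsync command is left as a comment because the actual source and destination are site-specific:

```shell
#!/bin/sh
# Sketch: skip this week's backup if the previous run is still active.
# /tmp/weekly-backup.lock is a hypothetical lock path; the rsync line is
# commented out because the real paths/host are site-specific.
LOCK=/tmp/weekly-backup.lock
(
    flock -n 9 || { echo "previous backup still running, skipping" >&2; exit 1; }
    # rsync -a --delete /volume1/projects/ backup@dc.example:/backups/projects/
    echo "backup job would run here"
) 9>"$LOCK"
```

With `-n`, flock fails immediately instead of queueing, so a slow week simply skips a run rather than stacking jobs.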



It is common knowledge among IT folks that transferring tons of little files takes far longer than transferring a few very large files, all else being equal (same internet speed, same amount of data, etc.). Take a look at "Transferring millions of images" for an example discussion on Stack Overflow, as well as "Which is faster, and why: transferring several small files or few large files?" here on Server Fault.



So the fix may be to compress the files/folders before running rsync, and then copy the compressed archive to your off-site datacenter. That would also save you on off-site storage costs, although it does open up another can of worms.
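A minimal sketch of that pre-compression idea, using a throwaway local directory as a stand-in for the real project share (all paths here are illustrative, not the NAS's actual layout):

```shell
# Pack the project tree into one compressed archive, so a later rsync moves
# a single large file instead of thousands of small ones.
# /tmp/demo-projects stands in for the real share (e.g. /volume1/projects).
SRC=/tmp/demo-projects
mkdir -p "$SRC" && echo "sample data" > "$SRC/file.psd"
tar -czf /tmp/projects-backup.tar.gz -C "$(dirname "$SRC")" "$(basename "$SRC")"
# rsync -a /tmp/projects-backup.tar.gz backup@dc.example:/backups/   # hypothetical target
ls -l /tmp/projects-backup.tar.gz
```

Note the trade-off: a freshly built archive changes wholesale each week, so rsync's delta transfer and any target-side deduplication gain little from it.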



Your first step, of course, would be to figure out how long the rsync job takes to run. Then decide whether you need to change your backup methodology, either by compressing the data beforehand or by moving to an alternative backup solution.
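Measuring the job can start with a timed dry run; the directories below are local stand-ins for the real source and target:

```shell
# Time how long rsync spends just walking the file list: --dry-run transfers
# nothing, --stats reports the file count that drives per-file overhead on a
# slow link. Paths are local stand-ins, not the real NAS share.
SRC=/tmp/rsync-timing-src
DST=/tmp/rsync-timing-dst
mkdir -p "$SRC" "$DST"
echo "data" > "$SRC/a.txt"
time rsync -a --dry-run --stats "$SRC/" "$DST/"
```

If even the dry-run scan takes a large fraction of a week against the real 430 GB share, overlapping jobs become a plausible explanation.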



By the way, as of this posting, Synology DSM 5.1 is the latest version, and 5.2 is in beta. You should update to DSM 5.1 if you haven't already; it certainly won't hurt your situation.































• I'll give +2 for "update to 5.1" and for small vs. big files, but I don't agree with compression. Since it's clearly stated they are using deduplication and versioning, the dedup ratio for compressed data will be about 1:1.2 at most, not to mention versioning, unless you are referring to something that fully supports that approach. It's not always about having a copy, but about having a copy in a consistent state. Most importantly, rsync doesn't have such limitations (the only possibility would be running out of RAM, but only with very old versions of rsync on the target).

  – Michal
  Oct 24 '16 at 12:45












edited May 23 '17 at 12:41 by Community










answered Apr 1 '15 at 10:36 by David W











