
Synology NAS - rsync messing up versioning / deduplication


Is it true that Synology DSM 4.3's default rsync implementation cannot handle "vast" amounts of data and can mess up versioning / deduplication? Could any of the variables (see the detailed info below) make this significantly more difficult?



Edit: I'm looking for nothing more than an answer to whether the above claims are nonsense or could be true.



Detailed info:



At work, we have a Synology NAS running at the office. A few designers work directly from it, on projects consisting of high-resolution stock photos, large PSDs, PDFs and the like. One folder of approximately 430 GB holds only the currently running projects. This folder is supposed to be backed up to a datacenter weekly, over our internet connection.



All of our IT is handled by a third party, which claims that our backup has reached a size ("100 GB+") at which the default rsync implementation in DSM 4.3 is unable to handle the amount of data going to the online backup (one of their machines in their datacenter). They say the backup now consists of about 10 TB of data because rsync has problems with "versioning / de-duplication" (retention: 30 days) and goes haywire.



Because of this, they suggest using a "professional online backup service", which would increase our per-GB cost for the online backup significantly.










Tags: rsync, synology, backup






asked Jan 2 '14 at 14:03 by Ambidex









1 Answer






































Rsync in and of itself doesn't choke on large files or on "too many" files. Depending on your situation, it could be (though it is unlikely) that the weekly rsync job is taking more than a week to complete, causing a new rsync job to begin before the previous one has finished.
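
One simple way to rule out (or catch) overlapping runs is to take an exclusive lock before the transfer starts and to record how long each run takes. Below is a minimal sketch in Python; the lock file path, source folder and rsync destination are hypothetical placeholders for your own setup, not anything DSM provides:

```python
#!/usr/bin/env python3
"""Run the weekly rsync under an exclusive lock so overlapping jobs can't
start, and report how long each run takes. All paths are placeholders."""
import fcntl
import subprocess
import sys
import time

LOCK_FILE = "/tmp/weekly-backup.lock"   # hypothetical lock file
SRC = "/volume1/projects/"              # placeholder: the 430 GB shared folder
DST = "backup-host::projects/"          # placeholder: off-site rsync target

with open(LOCK_FILE, "w") as lock:
    try:
        # Non-blocking lock: if last week's job is still running, bail out
        # instead of starting a second transfer on top of it.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("Previous backup is still running; skipping this run.")

    start = time.time()
    result = subprocess.run(["rsync", "-a", "--delete", "--stats", SRC, DST])
    hours = (time.time() - start) / 3600
    print(f"rsync exited with code {result.returncode} after {hours:.1f} hours")
```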



It is common knowledge among IT folks that transferring tons of little files takes a whole lot more time than transferring a few very large files, all else being equal (same internet speed, same amount of data, and so on). Take a look at "Transferring millions of images" for an example discussion on Stack Overflow, and "Which is faster, and why: transferring several small files or few large files?" for one here on Server Fault.
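
To get a feel for how much per-file overhead is actually in play, it can help to count how many files the 430 GB folder contains. A quick sketch along the same lines (the path is again a placeholder):

```python
#!/usr/bin/env python3
"""Count the files and total size under the project folder, to gauge how much
per-file overhead a weekly rsync of it has to deal with."""
import os

ROOT = "/volume1/projects"   # placeholder: the shared project folder

count = 0
total = 0
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            total += os.path.getsize(os.path.join(dirpath, name))
            count += 1
        except OSError:
            pass  # skip files that vanish while designers are working

print(f"{count} files, {total / 2**30:.1f} GiB in total")
```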



So one option may be to compress the files/folders before running rsync and then copy the compressed archive to your off-site data center. That would also save you on off-site data storage costs, although it does open up another can of worms.
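
As a rough illustration of that approach, packing the projects into one dated archive before the transfer could look like the sketch below (paths and destination are placeholders, and keep in mind the deduplication trade-off raised in the comment further down):

```python
#!/usr/bin/env python3
"""Pack the project folder into a single dated archive, then send only that
archive off-site. Paths are placeholders for illustration."""
import subprocess
from datetime import date

SRC = "/volume1/projects"                                            # placeholder source
ARCHIVE = f"/volume1/backup-staging/projects-{date.today()}.tar.gz"  # placeholder archive
DST = "backup-host::archives/"                                       # placeholder destination

# One big compressed file transfers with far less per-file overhead than
# hundreds of thousands of small files, at the cost of poorer deduplication.
subprocess.run(["tar", "-czf", ARCHIVE, SRC], check=True)
subprocess.run(["rsync", "-av", "--partial", ARCHIVE, DST], check=True)
```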



Your first step would be, of course, to figure out how long the rsync job takes to run. Then decide whether you need to change your backup methodology, either by compressing the data beforehand or by moving to an alternative backup solution.



By the way, as of this posting, Synology DSM 5.1 is the latest version, and 5.2 is in beta. You should update to DSM 5.1 if you haven't already; it certainly wouldn't hurt your situation.






answered Apr 1 '15 at 10:36 by David W, edited May 23 '17 at 12:41

Comment – Michal (Oct 24 '16 at 12:45): I'll give +2 for the update to 5.1 and for the small-vs-big-files point, but I don't agree with compression. Since it's clearly stated they are using deduplication and versioning, the dedup ratio for compressed data will be 1:1.2 at most, not to mention the effect on versioning, unless you are referring to a tool that really and fully supports that approach. It's not only about having a copy, but having a copy in a consistent state. Most importantly, rsync doesn't have such limitations (the only possibility would be not enough RAM, but only with very old rsync versions on the target).











