
Why don't Linux distributions default to mounting tmpfs with infinite inodes?


According to this answer it is possible to mount at least tmpfs with "infinite" inodes.



Consider this specific situation (numbers chosen for illustration; I know they're not realistic):



  • The tmpfs partition is 50% used by volume

  • 90% of that data is inodes (i.e. 45% of the disk is used by inodes, and 5% by "real" data)

  • tmpfs was mounted with nr_inodes=1000

  • all 1000 of those inodes are consumed by the files currently written

This means the tmpfs is 50% full, yet any attempt to write to it will fail with an out-of-space error.



It seems to me that setting nr_inodes=0 (i.e. infinite inodes) would make this situation go away.



  • Is there a reason that infinite inodes is not the default?

  • What reasons are there to limit the number of inodes on a filesystem?
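For context, the mismatch described above shows up in the gap between block usage and inode usage, which df reports separately (a quick check, assuming GNU coreutils):

```shell
# Plain df reports block (byte) usage; df -i reports inode usage.
# A filesystem can return ENOSPC while Use% is low if IUse% is at 100%.
df -h /tmp    # block usage: Size / Used / Avail / Use%
df -i /tmp    # inode usage: Inodes / IUsed / IFree / IUse%
```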









  • tmpfs is not the default file system because it doesn't survive a reboot; it would be a very bad choice for a default FS. Other FSes don't have infinite inodes because they take up space, so a file system filled to the brim with inodes couldn't actually hold any data.

    – MadHatter
    Jul 18 '13 at 14:45

  • I didn't know that the dynamic limit was only available to tmpfs and new filesystems. My question then becomes "why does tmpfs limit the number of inodes it can hold?" I'm going to update the question with a rationale from my comment to @Gregg Leventhal.

    – quodlibetor
    Jul 18 '13 at 17:58

  • There is no dynamic limit with modern filesystems like ZFS and btrfs; there is just no limit at all (outside, of course, the physically available disk space).

    – jlliagre
    Jul 18 '13 at 21:12


















linux filesystems inode






edited Apr 13 '17 at 12:14 by Community
asked Jul 18 '13 at 14:35









quodlibetor








3 Answers


















Usually (e.g. ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time, so no mount option can work around it.



Some filesystems, like XFS, treat the ratio of space used by inodes as a tunable, so it can be increased at any time.



Modern file systems like ZFS or btrfs have no hardcoded limit on the number of files they can store; inodes (or their equivalent) are created on demand.




Edit: narrowing the answer to the updated question.



With tmpfs, the default number of inodes is computed to be large enough for most realistic use cases. The only situation where this setting wouldn't be optimal is when a large number of empty files are created on tmpfs. If you are in that case, the best practice is to adjust the nr_inodes parameter to a value large enough for all the files to fit, but not to use 0 (= unlimited). The tmpfs documentation states this shouldn't be the default setting because of the risk of memory exhaustion by non-root users:



if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.


However, it is unclear how this could happen given that tmpfs RAM usage is by default limited to 50% of the RAM:



size: The limit of allocated bytes for this tmpfs instance. The 
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.





answered Jul 18 '13 at 15:29 by jlliagre, edited Jul 18 '13 at 21:45
  • so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

    – quodlibetor
    Jul 18 '13 at 18:10
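Both points in this answer can be sketched without root (assumes e2fsprogs is installed; mkfs.ext4 may round the requested inode count up to fit its block groups):

```shell
# 1) ext-style filesystems fix their inode table at mkfs time. Demonstrate
#    this by formatting a plain file rather than a real disk:
truncate -s 64M /tmp/inode-demo.img
mkfs.ext4 -q -F -N 1024 /tmp/inode-demo.img   # request ~1024 inodes
dumpe2fs -h /tmp/inode-demo.img 2>/dev/null | grep 'Inode count'
rm /tmp/inode-demo.img

# 2) tmpfs computes its default nr_inodes at mount time instead:
#    roughly half the number of physical RAM pages.
pages=$(getconf _PHYS_PAGES)
echo "default tmpfs nr_inodes is about $((pages / 2))"
```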


















Like MadHatter said, inodes take up some space, and it isn't a trivial amount when you're talking about an infinite number of them.






answered Jul 18 '13 at 17:51 by user160910
  • They're created dynamically, though, right? It seems counter-productive to refuse writes because too much metadata has been written while there is still free space: even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data), refusing to write to the remaining space because I have too much metadata is strange.

    – quodlibetor
    Jul 18 '13 at 17:56

  • No, the number is frozen at file system creation time, for most file systems, as jlliagre says.

    – MadHatter
    Jul 20 '13 at 15:45


















The memory consumed by tmpfs inodes is not counted towards the allocated blocks of the mount. There can be no tmpfs usage that is "90% inodes"; only the "real" data is counted.



A tmpfs mount of any size will continue to appear "empty" as long as none of its files contain any bytes, so the total memory consumed in maintaining the mount can greatly exceed its size limit:



$ find /mnt | wc -l
60000
$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           4.0K     0  4.0K   0% /mnt


Therefore, to prevent memory exhaustion, both size= and nr_inodes= have to be limited. If a very large inode limit (or none at all) were set by default, a runaway process could stall the system without the source of the problem being easy to determine.






answered Jun 10 at 9:45 by anx
  • I have been unable to mount any tmpfs with nr_inodes set below 2 or above 2^31-1.

    – anx
    Jun 10 at 9:50













Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "2"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f524419%2fwhy-dont-linux-distributions-default-to-mounting-tmpfs-with-infinite-inodes%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









8














Usually (ex: ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time so no mount option can workaround it.



Some filesystems like xfs have the ratio of space used by inodes a tunable so it can be increased at any time.



Modern file systems like ZFS or btrfs have no hardcoded limitation on the number of files a file system can store, inodes (or their equivalent) are created on demand.




Edit: narrowing the answer to the updated question.



With tmpfs, the default number of inodes is computed to be large enough for most of the realistic use cases. The only situation where this setting wouldn't be optimal would be if a large number of empty files are created on tmpfs. If you are in that case, the best practice is to adjust the nr_inode parameter to a value large enough for all the files to fit but not use 0 (=unlimited). tmpfs documentation states this shouldn't be the default setting because of a risk of memory exhaustion by non root users:



if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.


However, it is unclear how this could happen given the fact tmpfs RAM usage is by default limited to 50% of the RAM:



size: The limit of allocated bytes for this tmpfs instance. The 
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.





share|improve this answer

























  • so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

    – quodlibetor
    Jul 18 '13 at 18:10















8














Usually (ex: ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time so no mount option can workaround it.



Some filesystems like xfs have the ratio of space used by inodes a tunable so it can be increased at any time.



Modern file systems like ZFS or btrfs have no hardcoded limitation on the number of files a file system can store, inodes (or their equivalent) are created on demand.




Edit: narrowing the answer to the updated question.



With tmpfs, the default number of inodes is computed to be large enough for most of the realistic use cases. The only situation where this setting wouldn't be optimal would be if a large number of empty files are created on tmpfs. If you are in that case, the best practice is to adjust the nr_inode parameter to a value large enough for all the files to fit but not use 0 (=unlimited). tmpfs documentation states this shouldn't be the default setting because of a risk of memory exhaustion by non root users:



if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.


However, it is unclear how this could happen given the fact tmpfs RAM usage is by default limited to 50% of the RAM:



size: The limit of allocated bytes for this tmpfs instance. The 
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.





share|improve this answer

























  • so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

    – quodlibetor
    Jul 18 '13 at 18:10













8












8








8







Usually (ex: ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time so no mount option can workaround it.



Some filesystems like xfs have the ratio of space used by inodes a tunable so it can be increased at any time.



Modern file systems like ZFS or btrfs have no hardcoded limitation on the number of files a file system can store, inodes (or their equivalent) are created on demand.




Edit: narrowing the answer to the updated question.



With tmpfs, the default number of inodes is computed to be large enough for most of the realistic use cases. The only situation where this setting wouldn't be optimal would be if a large number of empty files are created on tmpfs. If you are in that case, the best practice is to adjust the nr_inode parameter to a value large enough for all the files to fit but not use 0 (=unlimited). tmpfs documentation states this shouldn't be the default setting because of a risk of memory exhaustion by non root users:



if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.


However, it is unclear how this could happen given the fact tmpfs RAM usage is by default limited to 50% of the RAM:



size: The limit of allocated bytes for this tmpfs instance. The 
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.





share|improve this answer















Usually (ex: ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time so no mount option can workaround it.



Some filesystems like xfs have the ratio of space used by inodes a tunable so it can be increased at any time.



Modern file systems like ZFS or btrfs have no hardcoded limitation on the number of files a file system can store, inodes (or their equivalent) are created on demand.




Edit: narrowing the answer to the updated question.



With tmpfs, the default number of inodes is computed to be large enough for most of the realistic use cases. The only situation where this setting wouldn't be optimal would be if a large number of empty files are created on tmpfs. If you are in that case, the best practice is to adjust the nr_inode parameter to a value large enough for all the files to fit but not use 0 (=unlimited). tmpfs documentation states this shouldn't be the default setting because of a risk of memory exhaustion by non root users:



if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.


However, it is unclear how this could happen given the fact tmpfs RAM usage is by default limited to 50% of the RAM:



size: The limit of allocated bytes for this tmpfs instance. The 
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.






share|improve this answer














share|improve this answer



share|improve this answer








edited Jul 18 '13 at 21:45

























answered Jul 18 '13 at 15:29









jlliagrejlliagre

8,06513 silver badges34 bronze badges




8,06513 silver badges34 bronze badges












  • so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

    – quodlibetor
    Jul 18 '13 at 18:10

















  • so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

    – quodlibetor
    Jul 18 '13 at 18:10
















so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

– quodlibetor
Jul 18 '13 at 18:10





so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.

– quodlibetor
Jul 18 '13 at 18:10













0














Like Madhatter said, inodes take up some space, and it isn't a trivial amount when talking about using an infinite number of them.






share|improve this answer























  • They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive, even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data) refusing to write the remaining 20% because I have too much metadata is strange.

    – quodlibetor
    Jul 18 '13 at 17:56











  • No, the number is frozen at file system creation time, for most file systems, as jiliagre says.

    – MadHatter
    Jul 20 '13 at 15:45















0














Like Madhatter said, inodes take up some space, and it isn't a trivial amount when talking about using an infinite number of them.






share|improve this answer























  • They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive, even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data) refusing to write the remaining 20% because I have too much metadata is strange.

    – quodlibetor
    Jul 18 '13 at 17:56











  • No, the number is frozen at file system creation time, for most file systems, as jiliagre says.

    – MadHatter
    Jul 20 '13 at 15:45













0












0








0







Like Madhatter said, inodes take up some space, and it isn't a trivial amount when talking about using an infinite number of them.






share|improve this answer













Like Madhatter said, inodes take up some space, and it isn't a trivial amount when talking about using an infinite number of them.







share|improve this answer












share|improve this answer



share|improve this answer










answered Jul 18 '13 at 17:51







user160910



















  • They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive, even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data) refusing to write the remaining 20% because I have too much metadata is strange.

    – quodlibetor
    Jul 18 '13 at 17:56











  • No, the number is frozen at file system creation time, for most file systems, as jiliagre says.

    – MadHatter
    Jul 20 '13 at 15:45

















  • They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive, even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data) refusing to write the remaining 20% because I have too much metadata is strange.

    – quodlibetor
    Jul 18 '13 at 17:56











  • No, the number is frozen at file system creation time, for most file systems, as jiliagre says.

    – MadHatter
    Jul 20 '13 at 15:45
















They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive, even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data) refusing to write the remaining 20% because I have too much metadata is strange.

– quodlibetor
Jul 18 '13 at 17:56





They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata while you still have 30% free space is counter-productive, even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data) refusing to write the remaining 20% because I have too much metadata is strange.

– quodlibetor
Jul 18 '13 at 17:56













No, the number is frozen at file system creation time, for most file systems, as jiliagre says.

– MadHatter
Jul 20 '13 at 15:45





No, the number is frozen at file system creation time, for most file systems, as jiliagre says.

– MadHatter
Jul 20 '13 at 15:45











0














The memory consumption for tmpfs inodes is not counted towards the allocated blocks of the mount. There can be no tmpfs usage that is "90% inodes", only the "real" data is counted.



A tmpfs mount of any size will continue to appear "empty" as long as none of its files contain any bytes. Total memory consumption for maintaining the mount can greatly exceed any size limit.



$ find /mnt | wc -l
60000
$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
tmpfs 4.0K 0 4.0K 0% /mnt


Therefore, to prevent memory exhaustion, both size= and nr_inodes= have to be limited. If a very large or no inode limit was set by default, a runaway process might stall the system without making the source of the issue easily determinable.






share|improve this answer























  • I have been unable to mount any tmpfs with nr_inodes set below 2 or above 2^31-1.

    – anx
    Jun 10 at 9:50















answered Jun 10 at 9:45









anx

2,160 reputation • 1 gold badge • 8 silver badges • 25 bronze badges



