Why don't linux distributions default to mounting tmpfs with infinite inodes?
According to this answer it is possible to mount at least tmpfs with "infinite" inodes.
Consider this specific situation (numbers chosen for illustration; I know they're not realistic):
- The tmpfs partition is 50% used by volume
- 90% of that data is inodes (i.e. 45% of the disk is used by inodes, and 5% is used by "real" data)
- tmpfs was mounted with nr_inodes=1000
- all 1000 of those inodes are consumed by the files currently written
This means that the tmpfs is only 50% full, but also that any attempt to write to it will fail with an out-of-space error.
It seems to me that setting nr_inodes=0 (i.e. infinite inodes) would make this situation go away. A rough reproduction of the failure mode is sketched below.
- Is there a reason that infinite inodes is not the default?
- What reasons are there to limit the number of inodes on a filesystem?
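For reference, a minimal sketch of the failure mode (the mount point /mnt/t and all sizes are made-up example values; mounting requires root):
$ sudo mount -t tmpfs -o size=64m,nr_inodes=1000 tmpfs /mnt/t
$ for i in $(seq 1 1100); do touch "/mnt/t/f$i" || break; done   # touch fails with ENOSPC near file 1000
$ df -h /mnt/t   # byte usage: essentially empty
$ df -i /mnt/t   # inode usage: IUse% at 100%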
linux filesystems inode
asked Jul 18 '13 at 14:35 by quodlibetor
tmpfs is not the default file system because it doesn't survive a reboot; it would be a very bad choice for a default FS. Other FSes don't have infinite inodes because they take up space, so a file system filled to the brim with inodes couldn't actually hold any data.
– MadHatter
Jul 18 '13 at 14:45
I didn't know that the dynamic limit was only available to tmpfs and newer filesystems. My question then becomes "why does tmpfs limit the number of inodes it can hold?" I'm going to update the question with a rationale from my comment to @Gregg Leventhal
– quodlibetor
Jul 18 '13 at 17:58
There is no dynamic limit with modern filesystems like ZFS and btrfs; there is just no limit at all (outside, of course, of the physically available disk space).
– jlliagre
Jul 18 '13 at 21:12
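To check these per-filesystem limits on a real system, the standard tools below work (device and mount point are example values; on ext2/3/4 the inode count is fixed when the filesystem is created):
$ df -i /                                             # inode totals and usage per mount
$ sudo tune2fs -l /dev/sda1 | grep -i 'inode count'   # ext*: count set at mkfs time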
3 Answers
Usually (e.g. ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time, so no mount option can work around it.
Some filesystems, like xfs, expose the ratio of space used by inodes as a tunable, so it can be increased at any time (see the sketch below).
Modern file systems like ZFS or btrfs have no hardcoded limit on the number of files they can store; inodes (or their equivalent) are created on demand.
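For illustration, on xfs the tunable in question is the maximum percentage of space that may be allocated to inodes, which xfs_growfs can change on a mounted filesystem (mount point and percentage are example values):
$ sudo xfs_growfs -m 25 /mnt/xfs   # allow up to 25% of the filesystem to hold inodes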
Edit: narrowing the answer to the updated question.
With tmpfs, the default number of inodes is computed to be large enough for most realistic use cases. The only situation where this default wouldn't be optimal is when a large number of empty files are created on tmpfs. If you are in that case, the best practice is to set the nr_inodes parameter to a value large enough for all the files to fit, but not to use 0 (= unlimited). The tmpfs documentation states that unlimited inodes shouldn't be the default setting because of the risk of memory exhaustion by non-root users:
if nr_inodes=0, inodes will not be limited. It is generally unwise to
mount with such options, since it allows any user with write access to
use up all the memory on the machine; but enhances the scalability of
that instance in a system with many cpus making intensive use of it.
However, it is unclear how this could happen given that tmpfs RAM usage is limited by default to 50% of the RAM:
size: The limit of allocated bytes for this tmpfs instance. The
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.
– jlliagre, answered Jul 18 '13 at 15:29, edited Jul 18 '13 at 21:45
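As a concrete sketch of the tuning this answer recommends, nr_inodes can be raised on a live tmpfs with a remount (mount point and value are example figures):
$ sudo mount -o remount,nr_inodes=200000 /dev/shm
$ df -i /dev/shm   # verify the new inode ceiling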
so I actually ran into this on tmpfs and didn't realize that it doesn't apply to other filesystems. Thanks.
– quodlibetor
Jul 18 '13 at 18:10
Like MadHatter said, inodes take up some space, and it isn't a trivial amount when you're talking about using an infinite number of them.
– user160910, answered Jul 18 '13 at 17:51
They're created dynamically, though, right? It seems like refusing to write to the disk because you are writing too much metadata, while you still have 30% free space, is counter-productive. Even if the disk is 60% inodes, if it's still only 70% full (i.e. there is only 10% data), refusing to write the remaining 30% because I have too much metadata is strange.
– quodlibetor
Jul 18 '13 at 17:56
No, the number is frozen at file system creation time for most file systems, as jlliagre says.
– MadHatter
Jul 20 '13 at 15:45
The memory consumption for tmpfs inodes is not counted towards the allocated blocks of the mount. There can be no tmpfs usage that is "90% inodes"; only the "real" data is counted.
A tmpfs mount of any size will continue to appear "empty" as long as none of its files contain any bytes. Total memory consumption for maintaining the mount can greatly exceed any size limit.
$ find /mnt | wc -l
60000
$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
tmpfs 4.0K 0 4.0K 0% /mnt
Therefore, to prevent memory exhaustion, both size= and nr_inodes= have to be limited. If a very large inode limit (or none at all) were set by default, a runaway process could stall the system without the source of the problem being easy to identify.
– anx, answered Jun 10 at 9:45
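The demonstration above can be reproduced along these lines (the /mnt mount point and the file count mirror the answer; the 4k size is deliberately minimal, and treating slab as the rough home of this inode memory is an assumption):
$ sudo mount -t tmpfs -o size=4k,nr_inodes=100000 tmpfs /mnt
$ for i in $(seq 1 59999); do : > "/mnt/f$i"; done   # 59,999 empty files (+ the root dir = 60,000 inodes)
$ df -h /mnt                 # still reports 0 bytes used
$ df -i /mnt                 # ...but tens of thousands of inodes in use
$ grep Slab /proc/meminfo    # kernel memory backing those inodes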
I have been unable to mount any tmpfs with nr_inodes set below 2 or above 2^31-1.
– anx
Jun 10 at 9:50