Disk full, du tells different. How to further investigate?










102















I have a SCSI disk in a server (hardware RAID 1), 32G, ext3 filesystem. df tells me that the disk is 100% full. If I delete 1G, this is correctly shown.



However, if I run du -h -x /, then du tells me that only 12G are used (I use -x because of some Samba mounts).



So my question is not about the subtle differences between the du and df commands, but about how I can find out what causes this huge difference.



I rebooted the machine for an fsck, which completed without errors. Should I run badblocks? lsof shows me no open deleted files, lost+found is empty, and there is no obvious warn/err/fail statement in the messages file.



Feel free to ask for further details of the setup.










linux ext3 disk-space-utilization scsi






asked May 30 '11 at 12:29









initall







This is very close to the question: linux - du vs. df difference (serverfault.com/questions/57098/du-vs-df-difference). The solution was files under a mount point as OldTroll answered.

– Chris Ting
May 30 '11 at 16:45















17 Answers


















94














Check for files located under mount points. Frequently, if you mount a directory (say, a sambafs) onto a filesystem that already had files or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies made in single-user mode dump files into directories that I couldn't see except in single-user mode (because other filesystems were mounted on top of them).
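A quick way to check this from the command line (a sketch; /tmp/rootview is just an example path): either temporarily unmount whatever sits on top of the suspect directory, or, with no downtime, bind-mount / somewhere else and inspect the copy, as Marcel G's answer below describes:

mkdir -p /tmp/rootview
mount -o bind / /tmp/rootview
du -shx /tmp/rootview/*    # submounts aren't visible through the bind mount, so hidden files show up
umount /tmp/rootview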






– OldTroll, answered May 30 '11 at 12:35
    You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

    – mhsekhavat
    Jul 23 '17 at 7:39












  • You should show the CLI commands to do this in your answer

    – Jonathan
    Oct 10 '18 at 17:26






    DO CHECK even if you think that it does not make sense for you!

    – Chris
    Oct 26 '18 at 14:32







    Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

    – mwfearnley
    Nov 27 '18 at 15:18


















86














Just stumbled on this page when trying to track down an issue on a local server.



In my case, df -h and du -sh disagreed by about 50% of the hard disk size.



This was caused by apache (httpd) holding open large log files that had already been deleted from disk, so their space was never released.



This was tracked down by running lsof | grep "/var" | grep deleted where /var was the partition I needed to clean up.



The output showed lines like this:
httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)



The situation was then resolved by restarting apache (service httpd restart), which freed up 2 GB of disk space by releasing the handles on the deleted files.
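If restarting the service is not an option, the space held by a deleted-but-open file can often be reclaimed by truncating it through its /proc file descriptor instead (a sketch based on the truncate approach mentioned in the comments below; the PID 32617 and fd 106 are taken from the example lsof line above):

lsof -a +L1 /var                    # deleted-but-still-open files on /var (run as root)
truncate -s 0 /proc/32617/fd/106    # zero out the file httpd still holds open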






– KHobbits, answered Mar 12 '14 at 11:10
  • For me, the locks were not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. E.g., for your httpd it would have been kill -9 32617.

    – Micka
    Jun 17 '15 at 8:57






    Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

    – ChrisWue
    Aug 8 '16 at 1:34











  • I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

    – Desty
    Sep 26 '16 at 14:57











  • In my case, the space wasn't released even when I restarted httpd. When I ran /etc/init.d/rsyslog restart it worked :D

    – Thanh Nguyen Van
    Sep 29 '16 at 1:11






    You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

    – kbolino
    May 13 '18 at 2:50



















47














I agree with OldTroll's answer as the most probable cause for your "missing" space.



On Linux you can easily remount the whole root partition (or any other partition, for that matter) to another place in your filesystem, say /mnt for example; just issue a



mount -o bind / /mnt


then you can do a



du -h /mnt


and see what uses up your space.



PS: Sorry for adding a new answer and not a comment, but I needed some formatting for this post to be readable.
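To jump straight to the largest directories under the bind mount, the du output can be sorted (a small sketch, assuming GNU coreutils for sort -h):

du -xh --max-depth=1 /mnt | sort -h | tail -n 20
umount /mnt    # when you're done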






– Marcel G, answered May 30 '11 at 13:54
    Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

    – choover
    Feb 28 '13 at 13:47











  • Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

    – naught101
    Aug 5 '15 at 3:29


















24














See what df -i says. It could be that you are out of inodes, which can happen if there is a large number of small files in that filesystem, using up all the available inodes without consuming all the available space.
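For example (a sketch; the / target and the GNU find -printf option are assumptions about the setup), check inode usage and then hunt for directories holding unusually many files:

df -i /
find / -xdev -type f -printf '%h\n' | sort | uniq -c | sort -n | tail    # directories with the most files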






– eirescot, answered May 30 '11 at 14:10
    The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

    – Marcin
    May 30 '11 at 15:00


















19














In my case this had to do with large deleted files. It was fairly painful to solve before I found this page, which set me on the correct path.



I finally solved the problem by using lsof | grep deleted, which showed me which program was holding two very large log files (totalling 5GB of my available 8GB root partition).






– Adrian, answered Nov 14 '14 at 18:15
    This answer makes me wonder why you are storing log files on the root partition, especially one that small... but to each their own, I suppose...

    – a CVn
    Nov 14 '14 at 18:47











  • I had a similar issue; I had restarted all the applications that were using the deleted file. I guess there was a zombie process still holding on to a large deleted file.

    – user1965449
    Dec 15 '15 at 2:37











  • This was the case for us, a log processing linux app known as filebeat kept files open.

    – Pykler
    Dec 7 '16 at 20:53











  • @Pykler For us it was filebeat as well. Thanks for the tip!

    – Martijn Heemels
    Jan 29 at 9:30


















5














Files that are open by a program do not actually go away (stop consuming disk space) when you delete them; they go away when the program closes them. A program might have a huge temporary file that you (and du) can't see. If it's a zombie program, you might need to reboot to clear those files.






  • OP said he'd rebooted the system and the problem persisted.

    – OldTroll
    May 30 '11 at 12:58











  • I had zombies that wouldn't release the locks on the files; I had to kill -9 'pid' them to release the locks and get the disk space back.

    – Micka
    Jun 17 '15 at 8:58


















4














Try this to see if a dead/hung process is still holding files open (and possibly still writing) on the disk:

lsof | grep "/mnt"

Then try killing off any PIDs which are stuck (especially look for lines ending in "(deleted)").






  • Thanks! I was able to find that the SFTP server process was holding the deleted file

    – lyomi
    Aug 30 '13 at 4:43


















4














This is the easiest method I have found to date to find large files!



Here is an example if your root mount point (/) is full:



cd / (so you are in root)



ls | xargs du -hs



Example Output:




9.4M bin
63M boot
4.0K cgroup
680K dev
31M etc
6.3G home
313M lib
32M lib64
16K lost+found
61G media
4.0K mnt
113M opt
du: cannot access `proc/6102/task/6102/fd/4': No such file or directory
0 proc
19M root
840K run
19M sbin
4.0K selinux
4.0K srv
25G store
26M tmp


Then you would notice that store is large, so do a
cd /store



and run again



ls | xargs du -hs




Example output:
109M backup
358M fnb
4.0G iso
8.0K ks
16K lost+found
47M root
11M scripts
79M tmp
21G vms


In this case the vms directory is the space hog.
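As the comments below point out, du -sh /* gives much the same overview in one step, and ncdu does it interactively (both shown here as alternatives, assuming they are installed):

du -shx /* 2>/dev/null | sort -h    # non-interactive, largest directories last
ncdu -x /                           # interactive drill-down, staying on one filesystem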






    Why not use simpler tools like baobab? (see marzocca.net/linux/baobab/baobab-getting-started.html)

    – Yvan
    May 5 '15 at 7:11







    Hm ls + xargs seems like overkill, du -sh /* works just fine by itself

    – ChrisWue
    Aug 8 '16 at 1:35






    if you don't know about ncdu ... you'll thank me later: dev.yorhel.nl/ncdu

    – Troy Folger
    Dec 5 '16 at 22:56


















2














For me, I needed to run sudo du, as there were a large number of docker files under /var/lib/docker that a non-sudo user doesn't have permission to read.
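For example (a sketch; /var/lib/docker is Docker's default data root):

du -sh /var/lib/docker 2>/dev/null    # as a regular user: permission errors, undercounted total
sudo du -sh /var/lib/docker           # as root: the real figure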






  • This was my problem. I forgot I switched storage systems in docker and the old volumes were still hanging around.

    – Richard Nienaber
    Jan 9 at 9:21


















1














So I had this problem in CentOS 7 as well, and found a solution after trying a bunch of things like bleachbit and cleaning /usr and /var, even though they only showed about 7G each. It was still showing 50G of 50G used in the root partition, but only 9G of file usage. I ran a live Ubuntu CD, unmounted the offending 50G partition, opened a terminal and ran xfs_check and xfs_repair on the partition. I then remounted the partition, and my lost+found directory had expanded to 40G. I sorted lost+found by size and found a 38G text log file for Steam that eventually just repeated an mp3 error. After removing the large file I now have space, and my disk usage agrees with my root partition size. I would still like to know how to keep the Steam log from growing that big again.
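On keeping that log from growing unbounded again, a logrotate rule is the usual approach (a sketch only; /path/to/steam.log is a placeholder, since the log's real location isn't given above):

/path/to/steam.log {
    size 100M
    rotate 3
    compress
    # truncate in place so the writing program doesn't need to reopen the file
    copytruncate
    missingok
}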






  • Did this happen to you at work? serverfault.com/help/on-topic

    – chicks
    May 4 '17 at 20:37











  • No just on my home computer.

    – Justin Chadwick
    May 6 '17 at 2:57






    xfs_fsr fixed this issue for us

    – Druska
    Aug 17 '17 at 18:34


















0














If the mounted disk is a shared folder on a Windows machine, then it seems that df will show the size and disk use of the entire Windows disk, but du will show only the part of the disk that you have access to (and that is mounted). So in this case the problem must be fixed on the Windows machine.






    0














One more possibility to consider - you are almost guaranteed to see a big discrepancy if you are using docker, and you run df/du inside a container that is using volume mounts. In the case of a directory mounted to a volume on the docker host, df will report the HOST's df totals. This is obvious if you think about it, but when you get a report of a "runaway container filling the disk!", make sure you verify the container's filespace consumption with something like du -hs <dir>.
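For example (a sketch; mycontainer and /data are hypothetical names):

docker exec mycontainer df -h /data     # reports the host filesystem backing the volume
docker exec mycontainer du -sh /data    # what the container actually stores there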






      0














A similar thing happened to us in production: disk usage went to 98%. We did the following investigation:



a) df -i to check inode usage; inode usage was only 6%, so it was not lots of small files



b) Bind-mounting root and checking for hidden files. Could not find any extra files; du results were the same as before the mount.



c) Finally, checked the nginx logs. nginx was configured to write to disk, but a developer had deleted the log file directly, which caused nginx to keep writing to the deleted (but still open) file. Because /var/log/nginx/access.log had been removed from disk with rm, it was not visible to du, but nginx still had it open, so its space was not released (a recovery sketch follows below).
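A sketch of the usual recovery when a live log has been rm'd out from under nginx (the paths are the common defaults and may differ per distribution):

touch /var/log/nginx/access.log    # recreate the file for future writes
nginx -s reopen                    # close and reopen log files, releasing the deleted inode
# equivalently: kill -USR1 $(cat /var/run/nginx.pid)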






        0














I had the same problem that is mentioned in this topic, but on a VPS.
I tested everything described in this topic, without success.
The solution was to contact our VPS provider's support, who performed a quota recalculation and corrected the space difference between df -h and du -sh /.






          0














I ran into this problem on a FreeBSD box today. The issue was that it was an artifact of vi (not vim; not sure if vim would create this problem). The file was consuming space but hadn't fully been written to disk.

You can check that with:

$ fstat -f /path/to/mount/point | sort -nk8 | tail

This looks at all open files and sorts (numerically via -n) by the 8th column (key, -k8), showing the last ten items.

In my case, the final (largest) entry looked like this:

bob vi 12345 4 /var 97267 -rwx------ 1569454080 rw

This meant process (PID) 12345 was consuming 1.46G (the eighth column divided by 1024³) of disk despite du not noticing it. vi is horrible at viewing extremely large files; even 100MB is large for it. 1.5G (or however large that file actually was) is ridiculous.

The solution was sudo kill -HUP 12345 (if that didn't work, I'd sudo kill 12345, and if that also failed, the dreaded kill -9 would come into play).

Avoid text editors on large files. Sample workarounds for quick skimming:

Assuming reasonable line lengths:

• { head -n1000 big.log; tail -n1000 big.log; } | vim -R -

• wc -l big.log | awk -v n=2000 'NR==FNR{L=$1;next}FNR%int(L/n)==1' - big.log | vim -R -

Assuming unreasonably large line(s):

• { head -c8000 big.log; tail -c8000 big.log; } | vim -R -

These use vim -R in place of view because vim is nearly always better ... when it's installed. Feel free to pipe them into view or vi -R instead.

If you're opening such a large file to actually edit it, consider sed or awk or some other programmatic approach.






            0














Check whether your server has the OSSEC agent installed, or whether some process is still using deleted log files. In my case, a while ago, it was the OSSEC agent.






              OP mentioned that the machine was rebooted, so there should be no deleted files left.

              – RalfFriedl
              Mar 5 at 18:06


















            -3














Check /lost+found. I had a system (CentOS 7) where some files in /lost+found ate up all the space.






            • How would this account for the difference in reported disk usage as described in the question?

              – roaima
              Nov 30 '16 at 23:36











            Your Answer








            StackExchange.ready(function()
            var channelOptions =
            tags: "".split(" "),
            id: "2"
            ;
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function()
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled)
            StackExchange.using("snippets", function()
            createEditor();
            );

            else
            createEditor();

            );

            function createEditor()
            StackExchange.prepareEditor(
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader:
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            ,
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            );



            );













            draft saved

            draft discarded


















            StackExchange.ready(
            function ()
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f275206%2fdisk-full-du-tells-different-how-to-further-investigate%23new-answer', 'question_page');

            );

            Post as a guest















            Required, but never shown

























            17 Answers
            17






            active

            oldest

            votes








            17 Answers
            17






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            94














            Check for files on located under mount points. Frequently if you mount a directory (say a sambafs) onto a filesystem that already had a file or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies while in single user mode dump files into directories that I couldn't see except in single usermode (due to other directory systems being mounted on top of them).






            share|improve this answer


















            • 1





              You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

              – mhsekhavat
              Jul 23 '17 at 7:39












            • You should show the CLI commands to do this in your answer

              – Jonathan
              Oct 10 '18 at 17:26






            • 1





              DO CHECK even if you think that it does not make sense for you!

              – Chris
              Oct 26 '18 at 14:32







            • 1





              Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

              – mwfearnley
              Nov 27 '18 at 15:18















            94














            Check for files on located under mount points. Frequently if you mount a directory (say a sambafs) onto a filesystem that already had a file or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies while in single user mode dump files into directories that I couldn't see except in single usermode (due to other directory systems being mounted on top of them).






            share|improve this answer


















            • 1





              You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

              – mhsekhavat
              Jul 23 '17 at 7:39












            • You should show the CLI commands to do this in your answer

              – Jonathan
              Oct 10 '18 at 17:26






            • 1





              DO CHECK even if you think that it does not make sense for you!

              – Chris
              Oct 26 '18 at 14:32







            • 1





              Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

              – mwfearnley
              Nov 27 '18 at 15:18













            94












            94








            94







            Check for files on located under mount points. Frequently if you mount a directory (say a sambafs) onto a filesystem that already had a file or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies while in single user mode dump files into directories that I couldn't see except in single usermode (due to other directory systems being mounted on top of them).






            share|improve this answer













            Check for files on located under mount points. Frequently if you mount a directory (say a sambafs) onto a filesystem that already had a file or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies while in single user mode dump files into directories that I couldn't see except in single usermode (due to other directory systems being mounted on top of them).







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered May 30 '11 at 12:35









            OldTrollOldTroll

            1,2611018




            1,2611018







            • 1





              You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

              – mhsekhavat
              Jul 23 '17 at 7:39












            • You should show the CLI commands to do this in your answer

              – Jonathan
              Oct 10 '18 at 17:26






            • 1





              DO CHECK even if you think that it does not make sense for you!

              – Chris
              Oct 26 '18 at 14:32







            • 1





              Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

              – mwfearnley
              Nov 27 '18 at 15:18












            • 1





              You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

              – mhsekhavat
              Jul 23 '17 at 7:39












            • You should show the CLI commands to do this in your answer

              – Jonathan
              Oct 10 '18 at 17:26






            • 1





              DO CHECK even if you think that it does not make sense for you!

              – Chris
              Oct 26 '18 at 14:32







            • 1





              Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

              – mwfearnley
              Nov 27 '18 at 15:18







            1




            1





            You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

            – mhsekhavat
            Jul 23 '17 at 7:39






            You can find these hidden files without needing to unmount directories. Take a look at Marcel G's answer below which explains how.

            – mhsekhavat
            Jul 23 '17 at 7:39














            You should show the CLI commands to do this in your answer

            – Jonathan
            Oct 10 '18 at 17:26





            You should show the CLI commands to do this in your answer

            – Jonathan
            Oct 10 '18 at 17:26




            1




            1





            DO CHECK even if you think that it does not make sense for you!

            – Chris
            Oct 26 '18 at 14:32






            DO CHECK even if you think that it does not make sense for you!

            – Chris
            Oct 26 '18 at 14:32





            1




            1





            Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

            – mwfearnley
            Nov 27 '18 at 15:18





            Note: this answer is talking about files located underneath mount points (i.e. hidden on the original filesystem), not within mount points. (Don't be an idiot like me.)

            – mwfearnley
            Nov 27 '18 at 15:18













            86














            Just stumbled on this page when trying to track down an issue on a local server.



            In my case the df -h and du -sh mismatched by about 50% of the hard disk size.



            This was caused by apache (httpd) keeping large log files in memory which had been deleted from disk.



            This was tracked down by running lsof | grep "/var" | grep deleted where /var was the partition I needed to clean up.



            The output showed lines like this:
            httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)



            The situation was then resolved by restarting apache (service httpd restart), and cleared up 2gb of disk space, by allowing the locks on deleted files to be cleared.






            share|improve this answer

























            • For me, the locks where not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. eg: For your httpd it would have been kill -9 32617.

              – Micka
              Jun 17 '15 at 8:57






            • 5





              Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

              – ChrisWue
              Aug 8 '16 at 1:34











            • I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

              – Desty
              Sep 26 '16 at 14:57











            • In my case, even when restart httpd space doesn't released. When I ran /etc/init.d/rsyslog restart it worked :D

              – Thanh Nguyen Van
              Sep 29 '16 at 1:11






            • 1





              You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

              – kbolino
              May 13 '18 at 2:50
















            86














            Just stumbled on this page when trying to track down an issue on a local server.



            In my case the df -h and du -sh mismatched by about 50% of the hard disk size.



            This was caused by apache (httpd) keeping large log files in memory which had been deleted from disk.



            This was tracked down by running lsof | grep "/var" | grep deleted where /var was the partition I needed to clean up.



            The output showed lines like this:
            httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)



            The situation was then resolved by restarting apache (service httpd restart), and cleared up 2gb of disk space, by allowing the locks on deleted files to be cleared.






            share|improve this answer

























            • For me, the locks where not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. eg: For your httpd it would have been kill -9 32617.

              – Micka
              Jun 17 '15 at 8:57






            • 5





              Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

              – ChrisWue
              Aug 8 '16 at 1:34











            • I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

              – Desty
              Sep 26 '16 at 14:57











            • In my case, even when restart httpd space doesn't released. When I ran /etc/init.d/rsyslog restart it worked :D

              – Thanh Nguyen Van
              Sep 29 '16 at 1:11






            • 1





              You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

              – kbolino
              May 13 '18 at 2:50














            86












            86








            86







            Just stumbled on this page when trying to track down an issue on a local server.



            In my case the df -h and du -sh mismatched by about 50% of the hard disk size.



            This was caused by apache (httpd) keeping large log files in memory which had been deleted from disk.



            This was tracked down by running lsof | grep "/var" | grep deleted where /var was the partition I needed to clean up.



            The output showed lines like this:
            httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)



            The situation was then resolved by restarting apache (service httpd restart), and cleared up 2gb of disk space, by allowing the locks on deleted files to be cleared.






            share|improve this answer















            Just stumbled on this page when trying to track down an issue on a local server.



            In my case the df -h and du -sh mismatched by about 50% of the hard disk size.



            This was caused by apache (httpd) keeping large log files in memory which had been deleted from disk.



            This was tracked down by running lsof | grep "/var" | grep deleted where /var was the partition I needed to clean up.



            The output showed lines like this:
            httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)



            The situation was then resolved by restarting apache (service httpd restart), and cleared up 2gb of disk space, by allowing the locks on deleted files to be cleared.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Feb 15 at 10:29









            DDS

            1208




            1208










            answered Mar 12 '14 at 11:10









            KHobbitsKHobbits

            86162




            86162












            • For me, the locks where not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. eg: For your httpd it would have been kill -9 32617.

              – Micka
              Jun 17 '15 at 8:57






            • 5





              Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

              – ChrisWue
              Aug 8 '16 at 1:34











            • I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

              – Desty
              Sep 26 '16 at 14:57











            • In my case, even when restart httpd space doesn't released. When I ran /etc/init.d/rsyslog restart it worked :D

              – Thanh Nguyen Van
              Sep 29 '16 at 1:11






            • 1





              You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

              – kbolino
              May 13 '18 at 2:50


















            • For me, the locks where not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. eg: For your httpd it would have been kill -9 32617.

              – Micka
              Jun 17 '15 at 8:57






            • 5





              Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

              – ChrisWue
              Aug 8 '16 at 1:34











            • I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

              – Desty
              Sep 26 '16 at 14:57











            • In my case, even when restart httpd space doesn't released. When I ran /etc/init.d/rsyslog restart it worked :D

              – Thanh Nguyen Van
              Sep 29 '16 at 1:11






            • 1





              You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

              – kbolino
              May 13 '18 at 2:50

















            For me, the locks where not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. eg: For your httpd it would have been kill -9 32617.

            – Micka
            Jun 17 '15 at 8:57





            For me, the locks where not released even after I stopped the program (zombies?). I had to kill -9 'pid' to release the locks. eg: For your httpd it would have been kill -9 32617.

            – Micka
            Jun 17 '15 at 8:57




            5




            5





            Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

            – ChrisWue
            Aug 8 '16 at 1:34





            Minor note: You may have to run lsof as sudo or not all open file descriptors will show up

            – ChrisWue
            Aug 8 '16 at 1:34













            I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

            – Desty
            Sep 26 '16 at 14:57





            I ran into this with H2, which was adding several gigs to a logfile every day. Instead of restarting H2 (slow), I used sudo truncate -s0 /proc/(h2 PID)/(descriptor number obtained from ls /proc/h2pid/fd).

            – Desty
            Sep 26 '16 at 14:57













            In my case, even when restart httpd space doesn't released. When I ran /etc/init.d/rsyslog restart it worked :D

            – Thanh Nguyen Van
            Sep 29 '16 at 1:11





            In my case, even when restart httpd space doesn't released. When I ran /etc/init.d/rsyslog restart it worked :D

            – Thanh Nguyen Van
            Sep 29 '16 at 1:11




            1




            1





            You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

            – kbolino
            May 13 '18 at 2:50






            You can skip the greps and just do lsof -a +L1 /var, where -a means AND all conditions (default is OR), +L1 means list only files with link counts less than 1 (i.e., deleted files with open file descriptors), and /var constrains to files under that mount point

            – kbolino
            May 13 '18 at 2:50












            47














            I agree with OldTroll's answer as the most probable cause for your "missing" space.



            On Linux you can easily remount the whole root partition (or any other partition for that matter) to another place in you filesystem say /mnt for example, just issue a



            mount -o bind / /mnt


            then you can do a



            du -h /mnt


            and see what uses up your space.



            Ps: sorry for adding a new answer and not a comment but I needed some formatting for this post to be readable.






            share|improve this answer


















            • 3





              Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

              – choover
              Feb 28 '13 at 13:47











            • Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

              – naught101
              Aug 5 '15 at 3:29















            47














            I agree with OldTroll's answer as the most probable cause for your "missing" space.



            On Linux you can easily remount the whole root partition (or any other partition for that matter) to another place in you filesystem say /mnt for example, just issue a



            mount -o bind / /mnt


            then you can do a



            du -h /mnt


            and see what uses up your space.



            Ps: sorry for adding a new answer and not a comment but I needed some formatting for this post to be readable.






            share|improve this answer


















            • 3





              Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

              – choover
              Feb 28 '13 at 13:47











            • Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

              – naught101
              Aug 5 '15 at 3:29













            47












            47








            47







            I agree with OldTroll's answer as the most probable cause for your "missing" space.



            On Linux you can easily remount the whole root partition (or any other partition for that matter) to another place in you filesystem say /mnt for example, just issue a



            mount -o bind / /mnt


            then you can do a



            du -h /mnt


            and see what uses up your space.



            Ps: sorry for adding a new answer and not a comment but I needed some formatting for this post to be readable.






            share|improve this answer













            I agree with OldTroll's answer as the most probable cause for your "missing" space.



            On Linux you can easily remount the whole root partition (or any other partition for that matter) to another place in you filesystem say /mnt for example, just issue a



            mount -o bind / /mnt


            then you can do a



            du -h /mnt


            and see what uses up your space.



            Ps: sorry for adding a new answer and not a comment but I needed some formatting for this post to be readable.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered May 30 '11 at 13:54









            Marcel GMarcel G

            1,6791123




            1,6791123







            • 3





              Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

              – choover
              Feb 28 '13 at 13:47











            • Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

              – naught101
              Aug 5 '15 at 3:29












            • 3





              Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

              – choover
              Feb 28 '13 at 13:47











            • Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

              – naught101
              Aug 5 '15 at 3:29







            3




            3





            Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

            – choover
            Feb 28 '13 at 13:47





            Thanks so much for this tip. Allowed me to find and delete my large, "hidden" files without downtime!

            – choover
            Feb 28 '13 at 13:47













            Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

            – naught101
            Aug 5 '15 at 3:29





            Thanks - this showed that docker was filling up my hard drive with diffs in /var/lib/docker/aufs/diff/

            – naught101
            Aug 5 '15 at 3:29











            24














            See what df -i says. It could be that you are out of inodes, which might happen if there are a large number of small files in that filesystem, which uses up all the available inodes without consuming all the available space.






            share|improve this answer




















            • 1





              The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

              – Marcin
              May 30 '11 at 15:00















            24














            See what df -i says. It could be that you are out of inodes, which might happen if there are a large number of small files in that filesystem, which uses up all the available inodes without consuming all the available space.






            share|improve this answer




















            • 1





              The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

              – Marcin
              May 30 '11 at 15:00













            24












            24








            24







            See what df -i says. It could be that you are out of inodes, which might happen if there are a large number of small files in that filesystem, which uses up all the available inodes without consuming all the available space.






            share|improve this answer















            See what df -i says. It could be that you are out of inodes, which might happen if there are a large number of small files in that filesystem, which uses up all the available inodes without consuming all the available space.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Dec 15 '15 at 8:17









            HBruijn

            57.4k1190153




            57.4k1190153










            answered May 30 '11 at 14:10









            eirescoteirescot

            48428




            48428







            • 1





              The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

              – Marcin
              May 30 '11 at 15:00












            • 1





              The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

              – Marcin
              May 30 '11 at 15:00







            1




            1





            The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

            – Marcin
            May 30 '11 at 15:00





            The size of a file and the amount of space it takes on a filesystem are two separate things. The smaller the files tend to be, the bigger the discrepancy between them. If you write a script that sums up the sizes of files and compare it to the du -s of the same subtree, you're going to get a good idea if that's the case here.

            – Marcin
            May 30 '11 at 15:00











            19














            In my case this had to do with large deleted files. It was fairly painful to solve before I found this page, which set me on the correct path.



            I finally solved the problem by using lsof | grep deleted, which showed me which program was holding two very large log files (totalling 5GB of my available 8GB root partition).






            share|improve this answer




















            • 1





              This answer makes me wonder why you are storing log files on the root partition, especially one that small... but to each their own, I suppose...

              – a CVn
              Nov 14 '14 at 18:47











            • I had a similar issue , I had restarted all the applications that were using the deleted file , I guess there was a zombie process still holding on to a large deleted file

              – user1965449
              Dec 15 '15 at 2:37











            • This was the case for us, a log processing linux app known as filebeat kept files open.

              – Pykler
              Dec 7 '16 at 20:53











            • @Pykler For us it was filebeat as well. Thanks for the tip!

              – Martijn Heemels
              Jan 29 at 9:30
















            5














            Files that are open by a program do not actually go away (stop consuming disk space) when you delete them; they go away when the program closes them. A program might have a huge temporary file that you (and du) can't see. If it's a zombie program, you might need to reboot to clear those files.
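
            A small demonstration of the effect, assuming an interactive shell and a scratch filesystem (the paths are placeholders):

            dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024   # write a 1 GiB file
            tail -f /tmp/bigfile &                             # keep a handle on it
            rm /tmp/bigfile                                    # unlink it
            df -h /tmp                                         # the space still shows as used
            kill %1; df -h /tmp                                # close the handle and the space returns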






            answered May 30 '11 at 12:51 by Paul Tomblin

            • OP said he'd rebooted the system and the problem persisted. – OldTroll, May 30 '11 at 12:58

            • I had zombies that wouldn't release the locks on the files; I kill -9'd them to release the locks and got the disk space back. – Micka, Jun 17 '15 at 8:58















            4














            Try this to see if a dead/hung process is still holding the disk while writing to it:

            lsof | grep "/mnt"

            Then try killing off any PIDs which are stuck (especially look for lines ending in "(deleted)").
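
            A slightly narrower variant of the same idea (the mount point is a placeholder): pointing lsof at the mount point limits the listing to files open on that filesystem, and the grep keeps only the deleted ones that still hold space.

            lsof /mnt 2>/dev/null | grep '(deleted)'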






            answered Jun 26 '11 at 10:38 by Phirsk

            • Thanks! I was able to find that the SFTP server process was holding the deleted file. – lyomi, Aug 30 '13 at 4:43
















            4














            This is the easiest method I have found to date for finding large files.

            Here is an example for when your root mount (/) is full:

            cd /    (so you are in the root directory)

            ls | xargs du -hs



            Example Output:




            9.4M bin
            63M boot
            4.0K cgroup
            680K dev
            31M etc
            6.3G home
            313M lib
            32M lib64
            16K lost+found
            61G media
            4.0K mnt
            113M opt
            du: cannot access `proc/6102/task/6102/fd/4': No such file or directory
            0 proc
            19M root
            840K run
            19M sbin
            4.0K selinux
            4.0K srv
            25G store
            26M tmp


            Then you would notice that store is large, so cd /store and run it again:

            ls | xargs du -hs




            Example output:
            109M backup
            358M fnb
            4.0G iso
            8.0K ks
            16K lost+found
            47M root
            11M scripts
            79M tmp
            21G vms


            In this case the vms directory is the space hog.
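
            A compact alternative sketch (GNU du and sort assumed): stay on one filesystem and sort the per-directory totals so the biggest consumer ends up at the bottom.

            du -xh --max-depth=1 / 2>/dev/null | sort -h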






            answered Jun 26 '11 at 13:05 by Riaan

            • Why not use simpler tools like baobab? (see marzocca.net/linux/baobab/baobab-getting-started.html) – Yvan, May 5 '15 at 7:11

            • Hm, ls + xargs seems like overkill; du -sh /* works just fine by itself. – ChrisWue, Aug 8 '16 at 1:35

            • If you don't know about ncdu... you'll thank me later: dev.yorhel.nl/ncdu – Troy Folger, Dec 5 '16 at 22:56





















            2














            For me, I needed to run sudo du, as there was a large number of Docker files under /var/lib/docker that a non-sudo user doesn't have permission to read.
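
            A minimal sketch of that check: re-run the per-directory totals as root so paths like /var/lib/docker are actually counted.

            sudo du -sh /* 2>/dev/null
            sudo du -sh /var/lib/docker/* 2>/dev/null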






            answered Feb 9 '18 at 11:51 by jobevers

            • This was my problem. I forgot I switched storage systems in Docker and the old volumes were still hanging around. – Richard Nienaber, Jan 9 at 9:21
















            1














            So I had this problem on CentOS 7 as well, and found a solution after trying a bunch of things like BleachBit and cleaning /usr and /var, even though they only showed about 7G each. The root partition was still showing 50G of 50G used, but only 9G of file usage. I booted a live Ubuntu CD, unmounted the offending 50G partition, opened a terminal and ran xfs_check and xfs_repair on the partition. When I remounted the partition, my lost+found directory had expanded to 40G. Sorting lost+found by size turned up a 38G text log file for Steam that eventually just repeated an mp3 error. After removing that large file I had space again, and my disk usage now agrees with my root partition size. I would still like to know how to keep the Steam log from growing so big again.
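
            Not from the answer itself, but one common way to keep a runaway log in check is logrotate; the path below is a placeholder for wherever the Steam log actually lives. Something like this in /etc/logrotate.d/steam:

            /home/<user>/.local/share/Steam/logs/*.log {
                size 100M
                rotate 3
                compress
                missingok
                notifempty
            }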






            answered May 4 '17 at 18:01 by Justin Chadwick

            • Did this happen to you at work? serverfault.com/help/on-topic – chicks, May 4 '17 at 20:37

            • No, just on my home computer. – Justin Chadwick, May 6 '17 at 2:57

            • xfs_fsr fixed this issue for us. – Druska, Aug 17 '17 at 18:34
















            0














            If the mounted disk is a shared folder on a Windows machine, then it seems that df will show the size and disk use of the entire Windows disk, but du will show only the part of the disk that you have access to (and that is mounted). So in this case the problem must be fixed on the Windows machine.






            answered Jun 21 '12 at 10:33 by Sverre





















                    0














                    One more possibility to consider: you are almost guaranteed to see a big discrepancy if you are using Docker and you run df/du inside a container that is using volume mounts. In the case of a directory mounted to a volume on the Docker host, df will report the host's totals. This is obvious if you think about it, but when you get a report of a "runaway container filling the disk!", make sure you verify the container's filespace consumption with something like du -hs <dir>.
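
                    A minimal sketch of that verification (the container name and path are placeholders): df inside the container reflects the host filesystem backing the volume, while du measures only what is actually stored under the mounted directory.

                    docker exec mycontainer df -h /data
                    docker exec mycontainer du -hs /data
                    docker system df        # host-side summary of space used by images, containers and volumes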






                    answered Dec 5 '16 at 23:02 by Troy Folger





















                            0














                            A similar thing happened to us in production: disk usage went to 98%. We did the following investigation:

                            a) df -i to check inode usage; inode usage was 6%, so it was not a case of many small files.

                            b) Mounted root and checked for hidden files. Could not find any extra files; du results were the same as before the mount.

                            c) Finally, checked the nginx logs. nginx was configured to write to disk, but a developer had deleted the log file directly. Because /var/log/nginx/access.log was removed with rm, it was no longer visible to du, yet nginx still had it open, so the space was still held (a sketch of how to release it follows below).
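
                            Not part of the original answer: a minimal sketch of how that space can be released without restarting nginx is to ask it to reopen its log files, or to truncate the still-open file through /proc (the PID-file path and <FD> are placeholders).

                            nginx -s reopen                                    # equivalent to sending USR1 to the master process
                            : > /proc/$(cat /var/run/nginx.pid)/fd/<FD>        # or truncate the deleted file directly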






                            answered Apr 20 '18 at 19:23 by darxtrix (edited Apr 20 '18 at 19:36)





















                                    0














                                    I had the same problem mentioned in this topic, but on a VPS. I tested everything described here, without success. The solution was to contact our VPS provider's support, who performed a quota recalculation and corrected the discrepancy between df -h and du -sh /.
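
                                    Not from the answer: if you administer the machine yourself, the standard filesystem quota tools can report and rebuild the accounting (this assumes the quota package is installed and quotas are enabled on the filesystem).

                                    repquota -avug          # report current usage and limits for all users and groups
                                    quotacheck -avugm       # rescan the filesystem and rebuild the quota files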






                                    answered Jul 18 '18 at 13:33 by ldxd





















                                            0














                                            I ran into this problem on a FreeBSD box today. The issue was that it was an artifact of vi (not vim, not sure if vim would create this problem). The file was consuming space but hadn't fully been written to disk.



                                            You can check that with:



                                            $ fstat -f /path/to/mount/point |sort -nk8 |tail


                                            This looks at all open files and sorts (numerically via -n) by the 8th column (key, -k8), showing the last ten items.



                                            In my case, the final (largest) entry looked like this:



                                            bob vi 12345 4 /var 97267 -rwx------ 1569454080 rw


                                            This meant process (PID) 12345 was consuming 1.46G (the eighth column divided by 1024³) of disk even though du hadn't noticed it. vi is horrible at viewing extremely large files; even 100MB is large for it. 1.5G (or however large that file actually was) is ridiculous.



                                            The solution was to sudo kill -HUP 12345 (if that didn't work, I'd sudo kill 12345 and if that also fails, the dreaded kill -9 would come into play).



                                            Avoid text editors on large files. Sample workarounds for quick skimming:



                                            Assuming reasonable line lengths:



                                            • head -n1000 big.log; tail -n1000 big.log |vim -R -

                                            • wc -l big.log |awk -v n=2000 'NR==FNR{L=$1;next} FNR%int(L/n)==1' - big.log |vim -R -

                                            Assuming unreasonably large line(s):



                                            • head -c8000 big.log; tail -c8000 big.log |vim -R -

                                            These use vim -R in place of view because vim is nearly always better ... when it's installed. Feel free to pipe them into view or vi -R instead.



                                            If you're opening such a large file to actually edit it, consider sed or awk or some other programmatic approach.






                                            answered Oct 12 '18 at 21:58 by Adam Katz





















                                                    0














                                                    Check whether your server has the OSSEC agent installed, or whether some process is still using the deleted log files. In my case, a while ago, it was the OSSEC agent.






                                                    answered Mar 5 at 17:43 by Richard Mérida

                                                    • OP mentioned that the machine was rebooted, so there should be no deleted files left. – RalfFriedl, Mar 5 at 18:06
















                                                    -3














                                                    Check /lost+found. I had a system (CentOS 7) where some files in /lost+found ate up all the space.






                                                    answered Nov 23 '16 at 22:24 by Jude Zhu (edited Nov 23 '16 at 23:01 by Michael Hampton)

                                                    • How would this account for the difference in reported disk usage as described in the question? – roaima, Nov 30 '16 at 23:36















                                                    -3














                                                    check the /lost+found, I had a system (centos 7) and some of file in the /lost+found ate up all the space.






                                                    share|improve this answer

























                                                    • How would this account for the difference in reported disk usage as described in the question?

                                                      – roaima
                                                      Nov 30 '16 at 23:36













                                                    -3












                                                    -3








                                                    -3







                                                    check the /lost+found, I had a system (centos 7) and some of file in the /lost+found ate up all the space.






                                                    share|improve this answer















                                                    check the /lost+found, I had a system (centos 7) and some of file in the /lost+found ate up all the space.







                                                    share|improve this answer














                                                    share|improve this answer



                                                    share|improve this answer








                                                    edited Nov 23 '16 at 23:01









                                                    Michael Hampton

                                                    176k27322653




                                                    176k27322653










                                                    answered Nov 23 '16 at 22:24









                                                    Jude ZhuJude Zhu

                                                    1




                                                    1












                                                    • How would this account for the difference in reported disk usage as described in the question?

                                                      – roaima
                                                      Nov 30 '16 at 23:36

































