Various folks in my office dumped just shy of 100K files onto a Linux
file server I set up for backups. But they dumped them all into one
directory, and there are so many files there that ls times out.
I think the cure would be to recompile the kernel, editing
/usr/src/linux/include/linux/fs.h to bump NR_FILE from 8192 to
8192*10 or even 8192*100, which I hope would let things like ls and rm
work again. Or am I barking up the wrong tree?
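For concreteness, all I really want back is the ability to walk and prune
the heap. Something like this rough, untested Python sketch (the path is
made up) is the kind of thing I have in mind:

#!/usr/bin/env python
# Rough sketch, untested: list the big directory without the sorting
# that makes a plain ls crawl, then delete by pattern without handing
# rm a 100K-entry glob.
import os

BIG_DIR = "/backup/dump"   # made-up path to the one big directory

names = os.listdir(BIG_DIR)            # raw, unsorted entries
print("%d files in %s" % (len(names), BIG_DIR))

for name in names:
    if name.endswith(".tmp"):          # e.g. clean out temp junk
        os.unlink(os.path.join(BIG_DIR, name))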
Has anyone else run into something like this? I know I could also cure
it by making subdirectories and forcing them to put their files there.
But chiding from the Windoze users spurs me to try to let them leave
their crap all in one big heap.
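If I do cave in and split things up, I figure a short script could do the
shoveling for them, say by first letter of the filename. A rough,
untested Python sketch (paths made up):

#!/usr/bin/env python
# Rough sketch, untested: move every file out of the one big directory
# into per-letter buckets so no single directory gets out of hand.
import os

SRC = "/backup/dump"      # made-up: the one big directory
DEST = "/backup/sorted"   # made-up: where the buckets go

os.makedirs(DEST, exist_ok=True)
for name in os.listdir(SRC):
    src_path = os.path.join(SRC, name)
    if not os.path.isfile(src_path):
        continue                          # leave any existing subdirs alone
    bucket = os.path.join(DEST, name[0].lower())
    if not os.path.isdir(bucket):
        os.mkdir(bucket)                  # create the bucket on first use
    # os.rename only works within one filesystem; both paths sit under
    # /backup here, so this is just a cheap directory-entry move.
    os.rename(src_path, os.path.join(bucket, name))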
Logan
--
 7:30pm  up 5 days, 12:13,  2 users,  load average: 0.25, 0.21, 0.48
"What's the use of a good quotation if you can't change it?" -- The Doctor