19

How do I find all the open files in a process (from inside itself)?

This seems useful to know after a fork() (before exec()).

I know of the existence of getdtablesize() and the more portable sysconf(_SC_OPEN_MAX), but it seems inefficient to attempt closing every valid file descriptor, whether there's an open file behind it or not. (I am also aware of the dangers of premature optimisation; this is more about aesthetics, I guess :-)
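For reference, the brute-force version I'd like to avoid looks something like this (a minimal sketch; error handling omitted, and the fallback value is just a common historical default):

    #include <unistd.h>

    /* Close every descriptor above stderr, open or not. close() on an
     * unused descriptor simply fails with EBADF, which is harmless here. */
    void close_all_fds(void)
    {
        long max = sysconf(_SC_OPEN_MAX);
        if (max < 0)
            max = 1024;  /* limit unknown; assume a common default */
        for (long fd = 3; fd < max; fd++)
            close((int)fd);
    }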

codeforester
Magnus

4 Answers

9

If your program will be calling fork and exec, you really should open all file descriptors with the O_CLOEXEC flag so you don't have to manually close them before exec. You can also use fcntl to add this flag after a file is opened, but that's subject to race conditions in multithreaded programs.
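Both techniques, as a minimal sketch (the path and the helper names are placeholders, not anything standard):

    #include <fcntl.h>

    /* Atomically set close-on-exec at open time (POSIX.1-2008). */
    int open_cloexec(const char *path)
    {
        return open(path, O_RDONLY | O_CLOEXEC);
    }

    /* Retrofit the flag on a descriptor opened elsewhere. Between the
     * original open() and this fcntl(), another thread may fork+exec
     * and leak the descriptor -- that's the race mentioned above. */
    int set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);
        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }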

R.. GitHub STOP HELPING ICE
6

It may sound inefficient to attempt to close all file descriptors, but it actually is not that bad. The system call path that looks up a file descriptor should be fairly efficient if the system is any good.

If you want to close only the file descriptors that are actually open, you can use the proc filesystem on systems where it exists. E.g. on Linux, /proc/self/fd will list all open file descriptors. Iterate over that directory, and close everything >2, excluding the file descriptor that denotes the directory you are iterating over.
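A Linux-only sketch of that loop (assuming /proc is mounted; a real program would fall back to the brute-force loop when opendir fails):

    #include <dirent.h>
    #include <stdlib.h>
    #include <unistd.h>

    void close_open_fds(void)
    {
        DIR *dir = opendir("/proc/self/fd");
        if (dir == NULL)
            return;  /* no /proc here; caller should fall back */

        int dir_fd = dirfd(dir);
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            int fd = atoi(entry->d_name);  /* "." and ".." parse as 0 */
            if (fd > 2 && fd != dir_fd)
                close(fd);
        }
        closedir(dir);
    }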

Martin v. Löwis
5

On systems that support it (which basically means any unix other than Linux) there's the closefrom(2) system call, designed specifically for this purpose.
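A sketch of the call (note that glibc has since gained closefrom() too, in version 2.34, where it needs _GNU_SOURCE; on the BSDs and Solaris it is available out of the box, though the required header can vary):

    #define _GNU_SOURCE  /* for closefrom() on glibc >= 2.34 */
    #include <unistd.h>

    void close_inherited(void)
    {
        closefrom(3);  /* close every descriptor numbered 3 and up */
    }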

3

Having just spent many hours tracking down a bug, yes, closing all file descriptors can cause problems.

The question is, how many file descriptors are there?

1024 used to be a very common limit, and 1024 is not an entirely unreasonable number of file handles to try to close. Since most of them are already closed, each attempt amounts to little more than checking a byte in memory.

My operating system ships with a default of 1,048,576. On this (admittedly slow) server, it apparently can take over 4.7 microseconds to try to close a file handle; 1,048,576 attempts at 4.7 microseconds each comes to roughly 4.9 seconds, which blew through a 5-second timeout. And there's no telling how high the number will grow. At least put a (reasonable) upper limit on it.
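A sketch of such a cap on the brute-force loop (4096 is an arbitrary ceiling chosen for illustration):

    #include <unistd.h>

    void close_all_capped(void)
    {
        long max = sysconf(_SC_OPEN_MAX);
        if (max < 0 || max > 4096)
            max = 4096;  /* don't loop over a million descriptors */
        for (long fd = 3; fd < max; fd++)
            close((int)fd);
    }

Note the trade-off: any descriptor numbered above the ceiling stays open.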

Iterating /proc/self/fd is not ideal, but bugs like this one are very hard to find.

AMADANON Inc.