
I have some gperftools CPU profile files:

the first one came from a run of about 2 minutes; the file is 18 MB;

the others came from runs of about 2 hours, and those files are about 800 MB.

When I use `pprof --text` to get the report, I find that the first one has 1300 samples, but the 2-hour runs have only about 5500 samples.

I expected the larger files to have about 2*3600*100 = 720,000 samples (because by default gperftools takes 100 samples a second).

It is the same program and the same operating environment, so why are there so few samples? Sorry for my poor English.

– asafu
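For concreteness, here is a minimal sketch of the kind of setup the question describes, assuming explicit instrumentation with gperftools' `ProfilerStart`/`ProfilerStop` (setting the `CPUPROFILE` environment variable on an instrumented binary works too); the file name and the workload are illustrative:

```c
/* Build:  gcc job.c -o job -lprofiler
   Report: pprof --text ./job job.prof */
#include <gperftools/profiler.h>

static volatile double sink;  /* volatile so the loop is not optimized away */

static void do_work(void) {
    /* Pure CPU work: each second spent here accrues ~100 samples by default. */
    for (long i = 0; i < 500000000L; i++)
        sink += (double)i;
}

int main(void) {
    ProfilerStart("job.prof");  /* default rate: ~100 samples per CPU-second */
    do_work();
    ProfilerStop();             /* flush the samples to job.prof */
    return 0;
}
```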
  • Try the `CPUPROFILE_REALTIME=1` environment variable of the CPU profiler: https://gperftools.googlecode.com/git/doc/cpuprofile.html. Also, the profile file will be huge if there are deep stacks... (PS: it may be more useful to try the recent `perf` profiler rather than doing Mike-style manual sampling.) – osgx Jan 09 '16 at 01:33
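To make the comment's suggestion concrete: by default the profiler's timer only ticks while the process is on the CPU, so time spent blocked (sleeping, waiting on I/O) produces no samples; per the linked doc, `CPUPROFILE_REALTIME=1` switches to a wall-clock timer instead. A sketch, assuming the variable is honored when profiling starts, with a sleep standing in for blocking I/O:

```c
/* Default run:    ./job
     -> samples come almost entirely from the spin loop.
   Wall-clock run: CPUPROFILE_REALTIME=1 ./job
     -> the 10 seconds of sleep show up as samples too. */
#include <gperftools/profiler.h>
#include <unistd.h>

int main(void) {
    ProfilerStart("job.prof");

    sleep(10);  /* stands in for blocking I/O: no CPU-timer samples here */

    for (volatile long i = 0; i < 300000000L; i++)
        ;       /* on-CPU work: accrues samples under either timer */

    ProfilerStop();
    return 0;
}
```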

1 Answer


It looks like the job is I/O bound. The profiler only accumulates samples while your code is actually on the CPU, so at 100 samples per second, the 1300 samples in the 120-second job mean about 13 seconds of CPU time, and the 5500 samples in the 120-minute job mean only about a minute of CPU time. The actual fraction of time spent computing vs. doing I/O can vary pretty widely, especially if there is some constant startup overhead.

If run time ought to be roughly linear in file size, the 800 MB job should take only about (800/18) × 2 ≈ 90 minutes, not two hours, so I would do some manual sampling on the big job to see what's happening.

– Mike Dunlavey
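As a quick first check before (or alongside) the manual sampling the answer suggests, one can compare wall-clock time against CPU time; if CPU time is a small fraction of wall time, the job really is blocked most of the time, which matches the sample counts above. A sketch assuming a POSIX system (this `getrusage` comparison is an illustration, not something the answer prescribes); the two-second sleep stands in for the real workload:

```c
/* Sketch: compare wall time vs CPU time to confirm an I/O-bound job. */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

static double now_wall(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static double cpu_seconds(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
         + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
}

int main(void) {
    double w0 = now_wall(), c0 = cpu_seconds();

    sleep(2);  /* stands in for the real workload (here: all blocked time) */

    double wall = now_wall() - w0, cpu = cpu_seconds() - c0;
    /* For the asker's big job: wall ~7200 s but cpu ~55 s would match the
       5500 samples seen at 100 samples per CPU-second. */
    printf("wall=%.2fs cpu=%.2fs (on-CPU fraction %.1f%%)\n",
           wall, cpu, wall > 0 ? 100.0 * cpu / wall : 0.0);
    return 0;
}
```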