
A friend asked for a command that can be used to find the real system execution time for a program in Linux. I replied that the time command is a great one.
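For example, a minimal run of the shell's `time` builtin looks like this (`sleep 1` is just a stand-in for whatever program you want to measure):

```shell
# Time an arbitrary command; "sleep 1" stands in for the real program.
time sleep 1

# Typical output (exact formatting varies between bash's builtin
# and /usr/bin/time):
#   real  0m1.00s   # wall-clock time, start to finish
#   user  0m0.00s   # CPU time spent in user mode
#   sys   0m0.00s   # CPU time spent in the kernel on the program's behalf
```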

He then asked: is the execution time that `time` reports when you run it inside a virtual machine the same as the real system execution time of the program?

My instinct was to say it depends:

  1. If there are enough resources on the machine, then the VM time returned would be the same as the real/wall clock time of the program execution.

  2. If the system does NOT have sufficient resources, then the VM time can differ substantially from the real (system) execution time. Moreover, the VM is an application running on top of the host OS, which has its own scheduler. This means the VM must invoke system calls that are processed by the host OS, which in turn communicates with the hardware before a real (system) execution time can be produced. Hence, the time returned can differ from real time in this situation.

  3. If the executed program is simple, then the VM time could be equal to real (system) time.

  4. If the executed program is NOT simple, then the VM time could be much different.

Are my assumptions correct?

I now wonder: how could you find/calculate the real execution time of a program run on a virtual machine? Any ideas?
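One data point you can get from inside a Linux guest is "steal" time: CPU time during which the hypervisor ran something else while this VM wanted to run. It appears as the `st` column in `top`/`vmstat`, and as the eighth value after `cpu` in `/proc/stat`. A quick sketch (the field position assumes the standard `/proc/stat` layout: user, nice, system, idle, iowait, irq, softirq, steal, ...):

```shell
# Print steal time (in clock ticks) from the aggregate "cpu" line.
# $1 is the literal word "cpu", so the steal field is $9.
grep '^cpu ' /proc/stat | awk '{print "steal ticks:", $9}'

# Sampling this before and after a run gives a rough idea of how much
# CPU the hypervisor withheld from the guest during that interval.
```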

lucidgold
  • This doesn't really seem to be about programming but rather about VMs and the Linux `time` command. Voted to move to Super User. – Chris Hayes Oct 01 '14 at 04:10
  • Can I move it myself? I thought this was relevant since a lot of us deal with measuring execution time of program on Linux. At least a lot of us who are in graduate school. – lucidgold Oct 01 '14 at 04:11
  • I don't think you can move it yourself (except to delete it here and post it there). You could flag it for moderator attention if you want it moved. It's relevant to the community here on SO, but unfortunately not everything relevant is on-topic here. :) – Chris Hayes Oct 01 '14 at 04:12
  • I would believe it depends if the program does a lot of syscalls or not. I guess that syscalls are slower on a VM. – Basile Starynkevitch Oct 01 '14 at 13:44

1 Answer


The complexity of the program doesn't matter.

The guest OS doesn't have visibility into the host. If you run 'time' on the guest, the 'sys' value returned is describing guest system resources used, only.

Or to rephrase: in a typical virtual machine setup, you're going to allocate only a portion of the host CPU resources to the guest OS. But that's all the guest can see, and thus, that's all the 'time' command can report on. Since you allocated this when you launched the VM, the guest cannot exhaust all of the host resources.
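As a quick illustration of that limited view: commands run inside the guest only ever see the CPUs allocated to the VM, not the host's full hardware.

```shell
# Both report the CPUs visible to this OS instance; inside a guest,
# that is the allocation chosen at VM creation, not the host's
# full core count.
nproc
grep -c '^processor' /proc/cpuinfo
```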

This is true of pretty much any VM: it would be a major security issue if the guest had knowledge of the hypervisor.

So yes, the sys time could absolutely differ between a VM and real hardware, because the guest doesn't have full resources. You could also see variability depending on whether you're using hardware or software virtualization.
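One rough way to see this in practice: for a purely CPU-bound job, `user + sys` should be close to `real` on an idle machine, and a large gap suggests the guest was descheduled (by the host scheduler or the hypervisor). A sketch using a trivial busy loop:

```shell
# CPU-bound busy loop: no I/O, no sleeping. On an uncontended system,
# user + sys should roughly equal real; on an oversubscribed VM,
# real can be noticeably larger.
time sh -c 'i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'
```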

Some good reading here (sections 10.3 through 10.5):

https://www.virtualbox.org/manual/ch10.html#hwvirt

spudone