Louis I. answered 05/01/19
Computer Science Instructor/Tutor: Real World and Academia Experienced
Well, they all matter; which one matters most depends on what you're benchmarking and what you're looking for.
First let's define each part:
real: wall-clock (elapsed) time from start to finish, including any time spent blocked or waiting
user: time spent executing application text (user-space, non-kernel code)
sys: time spent executing kernel-space code on the process's behalf (system calls, etc.)
Here's the output I typically get when I time-profile "sleep 10":
$ time sleep 10
real 0m10.049s
user 0m0.000s
sys 0m0.031s
So this tells me it took about 10 seconds to sleep - that's good ;-) - with virtually no user-space execution time, and the kernel-side overhead of "waiting doing nothing" amounting to a small fraction of a second.
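If you want to capture those three numbers in a script rather than eyeball them, note that bash's time keyword writes its report to stderr. A minimal sketch (the filename timing.txt is just an illustration):

```shell
# bash's `time` keyword reports to stderr, so redirect stderr
# from the whole { ...; } group to capture the three lines.
{ time sleep 2 ; } 2> timing.txt
cat timing.txt   # the real / user / sys lines
```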
But if I profile something a little more CPU-intensive - like asking "find" to list all directories under a given point in my file system - I'd expect more interesting numbers than we got while sleeping for 10 seconds.
$ time find . -type d
real 0m0.107s
user 0m0.046s
sys 0m0.062s
This tells us that the find command executed very cleanly, with essentially no blocked time (waiting for input, etc.), since the real time is roughly the sum of kernel and non-kernel execution time. And it's not unexpected that find, given what it does, spends time executing system calls as well as application text.
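For contrast, a purely CPU-bound command should show real roughly equal to user, with sys near zero. A sketch using a tight shell arithmetic loop (the loop count is arbitrary):

```shell
# A tight arithmetic loop runs almost entirely in user space:
# expect real to be close to user, with sys near zero.
time bash -c 'i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'
```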
NOTE: in the universe of application benchmarking/profiling, the Linux "time" utility is a first step ... it gives you a first estimate of the big picture:
- the ratio of real time to (sys + user)
- how much of the actual computation time (sys + user) is spent executing kernel code vs. custom app code
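The "blocked" share itself is just real - (user + sys). In bash you can set TIMEFORMAT to get machine-readable numbers and do the subtraction; a sketch (t.txt is just a scratch file):

```shell
# TIMEFORMAT makes bash's `time` keyword print plain seconds:
# %R = real, %U = user, %S = sys, on one space-separated line.
TIMEFORMAT='%R %U %S'
{ time sleep 0.5 ; } 2> t.txt
read real user sys < t.txt
# awk handles the floating-point subtraction
awk -v r="$real" -v u="$user" -v s="$sys" \
    'BEGIN { printf "blocked for ~%.3fs\n", r - u - s }'
```

For sleep, nearly all of the real time shows up as blocked time, as you'd expect.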
Based on this feedback, you may decide to dig deeper.
For instance, why is there so much "blocked" time spent effectively doing nothing?
What's the app typically blocked on?
Or why is 90% of the [sys+user] time system time? Which system calls are primarily being used?
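To answer that system-call question, strace with its -c summary flag is a common next step (assuming strace is installed; the find invocation below just reuses the earlier example, and syscalls.txt is an arbitrary output name):

```shell
# strace -c counts each system call and the time spent in it;
# -o writes the summary table to a file instead of stderr.
strace -c -o syscalls.txt find . -type d > /dev/null
cat syscalls.txt
```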
Does all that make sense given the application? ....