31

I cannot find any info on agner.org on the latency or throughput of the RDRAND instruction. However, this processor exists, so the information must be out there.

Edit: Actually, the newest optimization manual mentions this instruction. It is documented as <200 cycles, with a total bandwidth of at least 500MB/s on Ivy Bridge. But some more in-depth statistics on this instruction would be great, since the latency and throughput are variable.

mbq
  • 18,025
  • 6
  • 45
  • 70
user239558
  • 6,096
  • 1
  • 25
  • 33
  • I don't know the answer, without running a benchmark, but as an interested party may I ask "How fast do you want it to be?" I.e. what apps need lots of RDRANDs? By the way, there are two separate questions here: (a) how fast the instruction is, in terms of latency and throughput, but also (b) can it be read faster than the entropy pool accumulates? I.e. can you exhaust the entropy pool, and just be running off pseudo-random numbers? – Krazy Glew Jun 05 '12 at 06:13
  • 3
    The only reason I can think of why anyone would care is to decide whether to use `RDRAND` directly or through a PRNG. You'll get the same observable behavior in both cases, but one might be significantly faster than the other, and it's not immediately obvious which one that would be. (KrazyGlew: Your `b` is kind of irrelevant. It's like asking how much Holy water you get before it switches to water. There is no detectable difference between the two, and the distinction is essentially meaningless in this context.) – David Schwartz Jun 06 '12 at 11:11
  • @KrazyGlew A use-case is generating random numbers for statistical sampling on a GPU. – user239558 Jun 27 '12 at 22:51
  • Related: [Is there any legitimate use for Intel's RDRAND?](https://stackoverflow.com/questions/26771329/is-there-any-legitimate-use-for-intels-rdrand) has a benchmark against a `std::mt19937` PRNG. If anything, RDRAND is probably slower than in that test, because they don't use the result (which is problematic in asm as David's answer explains). – Peter Cordes Dec 01 '17 at 03:14
  • Agner's testing includes RDRAND numbers now. IvB throughput: one per 104-117 clocks. SKL throughput: one per ~460 clocks. (But presumably this is dependent on core clock speed, if the DRNG runs at constant clock. Still, Agner tested on an i7-3770k so the IvB shouldn't have been clocked extremely low, making RDRAND look fast. Unless it was at idle clock speed? Or maybe his testing didn't use the result either, and IvB squashed the "dead" uops better than SKL.) – Peter Cordes Dec 01 '17 at 03:17

4 Answers

32

I wrote librdrand. It's a very basic set of routines to use the RdRand instruction to fill buffers with random numbers.

The performance data we showed at IDF is from test software I wrote that spawns a number of threads using pthreads in Linux. Each thread fills a memory buffer with random numbers using RdRand. The program measures the average speed and can iterate while varying the number of threads.

Since there is a round trip communications latency from each core to the shared DRNG and back that is longer than the time needed to generate a random number at the DRNG, the average performance obviously increases as you add threads, up until the maximum throughput is reached. The physical maximum throughput of the DRNG on IVB is 800MBytes/s. A 4 core IVB with 8 threads manages something of the order of 780Mbytes/s. With fewer threads and cores, lower numbers are achieved. The 500MB/s number is somewhat conservative, but when you're trying to make honest performance claims, you have to be.

Since the DRNG runs at a fixed frequency (800MHz) while the core frequencies may vary, the number of core clock cycles per RdRand varies, depending on the core frequency and the number of other cores simultaneously accessing the DRNG. The curves given in the IDF presentation are a realistic representation of what to expect. The total performance is affected a little by core clock frequency, but not much. The number of threads is what dominates.

One should be careful when measuring RdRand performance to actually 'use' the RdRand result. If you don't, i.e. if you execute RdRand R6, RdRand R6, ..., RdRand R6 many times in a row without reading R6, the performance will read as artificially high. Since the data isn't used before it is overwritten, the CPU pipeline doesn't wait for the data to come back from the DRNG before it issues the next instruction. The tests we wrote write the resulting data to memory that will be in on-chip cache, so the pipeline stalls waiting for the data. That is also why hyperthreading is so much more effective with RdRand than with other sorts of code.

The details of the specific platform, clock speed, Linux version and GCC version were given in the IDF slides. I don't remember the numbers off the top of my head. There are chips available that are slower and chips available that are faster. The number we gave for <200 cycles per instruction is based on measurements of about 150 core cycles per instruction.

The chips are available now, so anyone well versed in the use of rdtsc can do the same sort of test.

David Johnston
  • 344
  • 2
  • 2
  • 4
    Please add a link to the IDF presentation. – Nathan Jul 03 '13 at 18:21
  • 6
    "I wrote librdrand" - 'nuf said. – JebaDaHut Dec 01 '13 at 01:19
  • So `rdrand` is like a high-latency load? Agner Fog's numbers indicate a throughput of one per ~110c on IvB, or one per ~460cycles on Skylake. I'm curious how much computation can overlap with `rdrand`, since most code that uses random numbers actually has lots of work to do other than generating random numbers. So I'm curious how much it would slow down some real code to use `RDRAND` instead of a super-fast PRNG like xorshift, or even vs. the fastest-possible non-random number generator: `xor eax, eax`. – Peter Cordes May 20 '16 at 20:12
  • Do you get better results from sofware-pipelining? Generating the next iteration's random number before some slow calculation that hides the latency? Or does that not help much, because the `rdrand` itself can't retire, so it's stuck in the ROB? – Peter Cordes May 20 '16 at 20:14
7

You'll find some relevant information at Intel Digital Random Number Generator (DRNG) Software Implementation Guide.

A verbatim quote follows:

Measured Throughput:

Up to 70 million RDRAND invocations per second
500+ million bytes of random data per second
Throughput ceiling is insensitive to the number of contending parallel threads
jww
  • 83,594
  • 69
  • 338
  • 732
Eugene Smith
  • 8,550
  • 6
  • 32
  • 38
  • @user434507 - Always good to include the relevant bit. That link could break and this answer would become meaningless. I've done this for you this time :) – ArjunShankar Jun 08 '12 at 08:07
  • Quote: `This has the effect of distilling the entropy into more concentrated samples`. Awesome, isn't it? – Hans Passant Jun 08 '12 at 10:20
  • 1
    @ArjunShankar, you are right and I considered doing that too, but there's also a number of interesting charts in the article. – Eugene Smith Jun 08 '12 at 17:14
  • `70 million invocations per second`. At what clock speed? That kinda matters too. – Mysticial Jun 10 '12 at 16:25
  • 1
    In case somebody reads the last comment, no it doesn't since the DRNG runs at 800 MHz regardless of the CPU speed (on Ivy Bridge anyways), see [David's answer](http://stackoverflow.com/a/11042778/589259) – Maarten Bodewes Jan 11 '15 at 16:04
4

I have done some preliminary throughput tests on an actual Ivy Bridge i7-3770 using Intel's "librdrand" wrapper, and it generates 33-35 million 32-bit numbers per second on a single core.

Intel's 70M figure is for about 8 cores; for a single core they report only about 10M, so my single-core result is over 3x better :-/

mbq
  • 18,025
  • 6
  • 45
  • 70
  • 3
    Did you actually use the result? David's answer says that the CPU discards incomplete `rdrand` uops if the result register is simply overwritten. (So e.g. store to memory or `XOR` it into something.) – Peter Cordes Dec 01 '17 at 03:21
4

Here are some performance figures I get with rdrand: http://smackerelofopinion.blogspot.co.uk/2012/10/intel-rdrand-instruction-revisited.html

On an i5-3210M (2.5GHz) Ivy Bridge (2 cores, 4 threads) I get a peak of ~99.6 million 64-bit rdrands per second with 4 threads, which equates to ~6.374 billion bits per second.

On an i7-3770 (3.4GHz) Ivy Bridge (4 cores, 8 threads) I hit a peak throughput of 99.6 million 64-bit rdrands a second with 3 threads.

Colin King
  • 41
  • 1
  • How do you invoke `stress-ng` to get the throughput numbers? The best I have been able to do is `stress-ng --rdrand 1 --metrics -t 60`, but the metrics (like BogoMIPS) are not very useful to me. – jww Mar 07 '17 at 12:04