[Cryptography] Dieharder & /dev/urandom
/dev/urandom is supposed to be as solid as /dev/random, barring acute
paranoia: https://www.2uo.de/myths-about-urandom/

I played with Dieharder (the evolution of the famous Diehard battery of
statistical tests). There was a known bug with the 500 and 501 generators
(/dev/random and /dev/urandom):
https://bugzilla.redhat.com/show_bug.cgi?id=803292
https://bugs.gentoo.org/677386

Even once this bug is fixed, or when using a file filled with binary data
(dieharder -g 201 ...), I still find weaknesses with urandom:
1. I get at least one WEAK result nearly every time I run "dieharder -a -g 501".
2. These weaknesses do not appear with /dev/random.
3. The tests which fail are not always the same. Failed tests I got so far:
   rgb_lagged_sum (7 times), rgb_bitdist (6), sts_serial (3),
   rgb_minimum_distance (2), rgb_permutations (2), diehard_craps,
   diehard_rank_32x32, rgb_kstest_test, sts_runs.

Am I doing something wrong, or is there a true weakness in /dev/urandom?

Example:

$ dieharder -a -g 501
#=============================================================================#
#            dieharder version 3.31.1 Copyright 2003 Robert G. Brown          #
#=============================================================================#
   rng_name    |rands/second|   Seed   |
  /dev/urandom |  1.79e+07  |1200973343|
#=============================================================================#
        test_name   |ntup| tsamples |psamples|  p-value |Assessment
#=============================================================================#
   diehard_birthdays|   0|       100|     100|0.46753772|  PASSED
      diehard_operm5|   0|   1000000|     100|0.84242621|  PASSED
  diehard_rank_32x32|   0|     40000|     100|0.22126282|  PASSED
[snip]
          sts_serial|  12|    100000|     100|0.59332179|  PASSED
          sts_serial|  12|    100000|     100|0.99723352|   WEAK   <---
          sts_serial|  13|    100000|     100|0.73984626|  PASSED
          sts_serial|  13|    100000|     100|0.75394894|  PASSED
          sts_serial|  14|    100000|     100|0.53569954|  PASSED
          sts_serial|  14|    100000|     100|0.84443587|  PASSED
          sts_serial|  15|    100000|     100|0.70062655|  PASSED
          sts_serial|  15|    100000|     100|0.86398789|  PASSED
          sts_serial|  16|    100000|     100|0.77044189|  PASSED
          sts_serial|  16|    100000|     100|0.40665896|  PASSED
         rgb_bitdist|   1|    100000|     100|0.23117788|  PASSED
         rgb_bitdist|   2|    100000|     100|0.72830922|  PASSED
         rgb_bitdist|   3|    100000|     100|0.96816091|  PASSED
         rgb_bitdist|   4|    100000|     100|0.58267893|  PASSED
         rgb_bitdist|   5|    100000|     100|0.42065873|  PASSED
         rgb_bitdist|   6|    100000|     100|0.71015893|  PASSED
         rgb_bitdist|   7|    100000|     100|0.99864266|   WEAK   <---
         rgb_bitdist|   8|    100000|     100|0.59789616|  PASSED
[snip]
$
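As an aside on the -g 201 route mentioned above, here is a minimal Python
sketch of capturing raw /dev/urandom output to a file for dieharder's
file_input_raw generator. The file name and sizes are arbitrary choices for
illustration, not anything dieharder requires.

    # Minimal sketch: dump raw /dev/urandom bytes to a file so dieharder
    # can test them via its file_input_raw generator (-g 201) instead of
    # the buggy built-in 500/501 generators.
    CHUNK = 1 << 20            # read 1 MiB at a time
    TOTAL = 1 << 30            # 1 GiB total; 'dieharder -a' consumes a lot

    with open("/dev/urandom", "rb") as src, open("urandom.bin", "wb") as dst:
        written = 0
        while written < TOTAL:
            buf = src.read(CHUNK)
            dst.write(buf)
            written += len(buf)

    # Then run:  dieharder -a -g 201 -f urandom.bin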
On Tue, 14 May 2019 at 18:37, Michel Arboi <michel.arboi@gmail.com> wrote:

> /dev/urandom is supposed to be as solid as /dev/random, barring acute
> paranoia: https://www.2uo.de/myths-about-urandom/
> [...]
> Even once this bug is fixed, or when using a file filled with binary data
> (dieharder -g 201 ...), I still find weaknesses with urandom:
> 1. I get at least one WEAK result nearly every time I run "dieharder -a -g 501".
> 2. These weaknesses do not appear with /dev/random.
> 3. The tests which fail are not always the same.

While I have to admit I don't know exactly how the dieharder tests are
implemented, I'd like to point one thing out: all these tests are heuristic
by design. Randomness does not lie in numbers, it lies in the sources, and
heuristic tests for randomness need to rely on randomness themselves. That
means a statistical test will sometimes claim truly random numbers aren't
random, and sometimes claim predictable numbers are random, because all it
does is estimate how *probable* it is that a random source would come up
with the series of numbers you gave it.

So passing or failing some tests is not the interesting question - it is
*how many* tests you pass or fail, and of what kind.
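To put a number on that point: each test's p-value is uniform on [0,1] by
construction, so even an ideal generator occasionally lands in the WEAK
region. A small Python simulation, assuming dieharder's default WEAK cutoff
of roughly 0.5% per tail and roughly 114 p-values per full "-a" run (both
figures are assumptions; the cutoffs are configurable):

    import random

    # Even a perfect generator triggers occasional WEAK flags, because
    # p-values from a valid test are uniform on [0,1].
    N_PVALUES = 114                      # roughly one full 'dieharder -a' run
    WEAK_LO, WEAK_HI = 0.005, 0.995      # assumed default WEAK cutoffs
    TRIALS = 10_000

    runs_with_weak = sum(
        any(not WEAK_LO <= random.random() <= WEAK_HI
            for _ in range(N_PVALUES))
        for _ in range(TRIALS)
    )
    # Analytically: 1 - 0.99**114 ~= 0.68, i.e. about two thirds of all
    # full runs of an ideal RNG show at least one WEAK result.
    print(runs_with_weak / TRIALS)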
On May 14, 2019, at 9:17 AM, Michel Arboi <michel.arboi@gmail.com> wrote:
...
> I played with Dieharder (the evolution of the famous Diehard battery of
> statistical tests). There was a known bug with the 500 and 501 generators
> (/dev/random and /dev/urandom):
> https://bugzilla.redhat.com/show_bug.cgi?id=803292
> https://bugs.gentoo.org/677386
>
> Even once this bug is fixed, or when using a file filled with binary data
> (dieharder -g 201 ...), I still find weaknesses with urandom:
> 1. I get at least one WEAK result nearly every time I run "dieharder -a -g 501".
> 2. These weaknesses do not appear with /dev/random.
> 3. The tests which fail are not always the same.

/dev/urandom is giving you cryptographically processed bits, so I’m like
99.99% sure what you’re seeing is that you ran lots of tests, each with a
small probability of giving a false positive, and a couple of false
positives happened.

The practical issue with /dev/urandom is that it’s never allowed to block,
so in some extreme circumstances you could be getting output bits even
though the system hasn’t managed to collect any entropy. This was
apparently behind the finding a few years back of a bunch of appliance
routers and firewalls whose RSA keys shared primes. (This demonstrates a
disastrous lack of entropy!) Note that the statistics of those systems’
/dev/urandom outputs would have been fine if checked; the problem was only
visible when you looked at many different machines’ outputs.

--John
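For anyone curious how that shared-prime failure is detected: if two RSA
moduli were built from overlapping randomness and share a prime factor, a
plain gcd recovers it and factors both keys at once. A toy Python sketch
(the real surveys over millions of certificates use batch-GCD rather than
pairwise loops, and the primes below are tiny stand-ins):

    from math import gcd

    # Three toy primes standing in for real 1024-bit RSA primes.
    p, q, r = 1000003, 1000033, 1000037
    # Three "moduli" that pairwise share a prime, mimicking keys
    # generated from a starved entropy pool.
    moduli = [p * q, p * r, q * r]

    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g > 1:
                # g is the shared prime; both moduli are now factored.
                print(f"moduli {i} and {j} share the prime factor {g}")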
Michel Arboi wrote on 15/05/19 1:17 AM:
> I played with Dieharder (the evolution of the famous Diehard battery of
> statistical tests). There was a known bug with the 500 and 501 generators
> (/dev/random and /dev/urandom):

Here is an article which goes through a very thorough step-by-step analysis
of an RNG using Dieharder, demonstrating how to interpret WEAK results by
using them to refine the testing and get more certain results:

http://www.bitbabbler.org/test-data/dieharder.html

One important point of the article is that a result of PASSED does not mean
that the RNG has passed a test; it means that it has, with high
probability, not failed. A result of FAILED means that it has, with high
probability, failed. And a result of WEAK indicates uncertainty in the
results, not that the RNG is close to passing or close to failing a test.
WEAK is an indication that you need to make the test stronger (e.g. with
more psamples) until the uncertainty is resolved one way or another. The
article is a demonstration of how to do that.
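In the same spirit, the "make the test stronger" step boils down to
collecting more p-values and checking them for uniformity, since a good
generator's p-values should be uniform on [0,1]. A hand-rolled Python
sketch of that check (illustrative only; dieharder's own KS machinery
differs in detail):

    import math
    import random

    def ks_statistic(pvals):
        """Kolmogorov-Smirnov distance between a sample and Uniform(0,1)."""
        xs = sorted(pvals)
        n = len(xs)
        return max(max((i + 1) / n - x, x - i / n)
                   for i, x in enumerate(xs))

    # Stand-in for p-values gathered from repeated runs of a WEAK test;
    # a real workflow would parse the p-values dieharder prints.
    pvals = [random.random() for _ in range(1000)]
    d = ks_statistic(pvals)
    # Asymptotic rule of thumb: D * sqrt(n) above ~1.63 rejects
    # uniformity at about the 1% level; adding psamples shrinks the
    # noise floor and resolves a borderline WEAK one way or the other.
    print(d, d * math.sqrt(len(pvals)))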
Slightly off-topic:

On Tuesday, 14 May 2019 at 19:41 -0400, John Kelsey wrote:
> The practical issue with /dev/urandom is that it’s never allowed to
> block, so in some extreme circumstances you could be getting output
> bits even though the system hasn’t managed to collect any entropy.
> [...]

It's "only" a boot-time problem (albeit a big issue). As soon as "enough"
entropy has been gathered and the kernel CSPRNG is "well" seeded, the
outputs of /dev/random and /dev/urandom are of equal quality.
https://www.2uo.de/myths-about-urandom

Anyway, getrandom(,, 0) should be preferred now, as:
1) it blocks until the kernel CSPRNG is seeded, eliminating the boot issue
   (except when you're PID 1, in charge of initializing the system);
2) it doesn't block after that;
3) it doesn't require opening a file.

Entropy tracking as done for /dev/random is controversial. See this thread
from Filippo Valsorda:
https://twitter.com/FiloSottile/status/1125843366837616640

Regards.

-- 
Yann Droneaud
OPTEYA
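For what it's worth, Python exposes the same syscall as os.getrandom()
(Linux, Python 3.6+), so the blocking semantics are easy to see:

    import os

    # getrandom(2) with flags=0: blocks only until the kernel CSPRNG has
    # been seeded once, never blocks after that, and needs no file
    # descriptor.
    key = os.getrandom(32)        # 32 fresh random bytes
    print(key.hex())

    # With GRND_NONBLOCK it raises BlockingIOError instead of blocking
    # if called before the pool is initialized (e.g. very early at boot):
    try:
        early = os.getrandom(32, os.GRND_NONBLOCK)
    except BlockingIOError:
        early = None              # pool not ready yet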
On Tue, 14 May 2019 at 18:51, Natanael <natanael.l@gmail.com> wrote:
> So passing or failing some tests is not the interesting question - it is
> *how many* tests you pass or fail, and of what kind.

I (nearly?) always get a couple of WEAK results with urandom, but not with
random. I'm confused, as these are supposed to be the same PRNG in fact --
random is just guaranteed to be regularly reseeded, if I understood
correctly.
On Wed, 15 May 2019 at 01:36, John Kelsey <crypto.jmk@gmail.com> wrote:
> /dev/urandom is giving you cryptographically processed bits, so I’m like
> 99.99% sure what you’re seeing is that you ran lots of tests, each with a
> small probability of giving a false positive, and a couple of false
> positives happened.

I never get that with random, or with a much lower probability than with
urandom. That's odd.

> The practical issue with /dev/urandom is that it’s never allowed to
> block, so in some extreme circumstances you could be getting output bits
> even though the system hasn’t managed to collect any entropy. [...]

I'm doing all these tests on computers that have been up and running for
days. Moreover, one of them has a OneRNG hardware key. This makes no
difference to the results.
On Wed, 15 May 2019 at 06:24, Sidney Markowitz <sidney@sidney.com> wrote:
> One important point of the article is that a result of PASSED does not
> mean that the RNG has passed a test; it means that it has, with high
> probability, not failed. A result of FAILED means that it has, with high
> probability, failed. And a result of WEAK indicates uncertainty in the
> results, not that the RNG is close to passing or close to failing a test.
> WEAK is an indication that you need to make the test stronger (e.g. with
> more psamples) until the uncertainty is resolved one way or another. The
> article is a demonstration of how to do that.

So I should run the tests in "resolve ambiguity" mode, like this?

dieharder -a -g 501 -k 2 -Y 1

man dieharder says:

-k ks_flag - ks_flag

    0 is fast but slightly sloppy for psamples > 4999 (default).

    1 is MUCH slower but more accurate for larger numbers of psamples.

    2 is slower still, but (we hope) accurate to machine precision for
    any number of psamples up to some as yet unknown numerical upper
    limit (it has been tested out to at least hundreds of thousands).

    3 is kuiper ks, fast, quite inaccurate for small samples, deprecated.

-Y Xtrategy - the Xtrategy flag controls the new "test to failure" (T2F)
modes. These flags and their modes act as follows:

    0 - just run dieharder with the specified number of tsamples and
    psamples, do not dynamically modify a run based on results. This is
    the way it has always run, and is the default.

    1 - "resolve ambiguity" (RA) mode. If a test returns "weak", this is
    an undesired result. What does that mean, after all? If you run a
    long test series, you will see occasional weak returns for a perfect
    generator because p is uniformly distributed and will appear in any
    finite interval from time to time. Even if a test run returns more
    than one weak result, you cannot be certain that the generator is
    failing. RA mode adds psamples (usually in blocks of 100) until the
    test result ends up solidly not weak or proceeds to unambiguous
    failure. This is morally equivalent to running the test several times
    to see if a weak result is reproducible, but eliminates the bias of
    personal judgement in the process since the default failure threshold
    is very small and very unlikely to be reached by random chance even
    in many runs. This option should only be used with -k 2.
Using "dieharder -a -g 501 -k 2 -Y 1 -P 10000000" I still get WEAK results. I can hardly increase the -P parameter, as the memory use skyrockets. I got a FAIL once but was unable to reproduce ut. _______________________________________________ The cryptography mailing list cryptography@metzdowd.com http://www.metzdowd.com/mailman/listinfo/cryptography