C++ Random Number Question

JC724

This may be a dumb question, but I've been googling and looking for an answer.

How can I create a 16 BIT random number?
 
I want all 16 bits to be random. I know I can do this:

int16_t bitNum = rand() % 100; // bitNum in the range of 0 to 99

Can I do something like that for a full 16-bit value?
 
Yes, something like this will work:
uint16_t bitNum = (uint16_t)(rand() % 0x10000); // bitNum in the range of 0 to 0xffff

Note that rand() is not considered random enough for applications where a truly random number is required. Also, on implementations where RAND_MAX is only 32767 (the minimum the standard requires), rand() can't produce a full 16 random bits at all, and when RAND_MAX+1 isn't a multiple of 0x10000 the modulo causes the result to not be perfectly uniformly distributed. Nonetheless, for most applications it will probably be good enough.

Also, you may want to look into srand() if you want to initialize your random number generator with a random seed first (otherwise your code will produce the same sequence of random numbers every time it runs, which you may or may not care about).
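
To put that together, here is a minimal sketch of the rand()/srand() approach (my own illustration, not from the thread); it glues two calls together so it still yields 16 random bits even on implementations where RAND_MAX is only 32767:

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    std::srand((unsigned)std::time(nullptr)); // seed once, or every run repeats the same sequence

    // rand() is only guaranteed to provide 15 random bits (RAND_MAX >= 32767),
    // so take 8 bits from each of two calls and glue them together.
    // (Note: the low bits of some rand() implementations are of poor quality.)
    uint16_t bitNum = (uint16_t)(((std::rand() & 0xFF) << 8) | (std::rand() & 0xFF));

    std::printf("0x%04x\n", bitNum);
    return 0;
}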
 
not considered random enough for applications where a truly random number is required

Would you mind explaining that further? Layman's terms a bonus. Not a software person, but not a stranger to math either; the above made me curious.
 
A 16-bit number can range from 0 to 65535. If you apply %100 to get a value from 0-99, then you will have 655 complete sets of numbers that go from 0 to 99 and 1 partial set that only goes from 0 to 35. So the numbers 0 to 35 will appear with slightly more frequency than 36 to 99. It is "fairly" random but not truly random.

There are also problems in generating random numbers because the generation relies on techniques that are not random and can possibly be predicted. Many encryption exploits have been based on this. Some programs, when generating a random number, will have you wiggle a mouse in a box and use that as input to their RNG.
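
To make the bias concrete, here is a small illustration of my own: it reduces every possible 16-bit value modulo 100 and tallies how often each result shows up.

#include <cstdio>

int main() {
    int counts[100] = {0};

    // Reduce every possible 16-bit value modulo 100 and count the results.
    for (int v = 0; v <= 0xFFFF; ++v)
        ++counts[v % 100];

    // 65536 = 655 * 100 + 36, so remainders 0..35 occur 656 times each,
    // while 36..99 occur only 655 times each.
    std::printf("count[0]  = %d\n", counts[0]);   // 656
    std::printf("count[99] = %d\n", counts[99]);  // 655
    return 0;
}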
 
rand() is a pseudo-random number generator. In math terms, it is deterministic. You can "predict" its outputs based on its inputs.

Ideally, for something to be truly random, you want something leaning towards non-deterministic. What constitutes a "good" random number generator is still a matter of great debate today.

Basically, digital computers are imperfect (or is that too perfect? hmm) for generating random numbers. Still, there are some really good RNG engines out there, such as those used by the C++ library I mentioned above. There are even some RNG engines that get assistance from the underlying hardware for introducing even more "randomness" into the result. Most of these hardware implementations rely on "noise" that is inherent in the analog signals found in the power delivery of the underlying circuits. This "noise" is often referred to as "entropy", which is just a fancy word for chaos. The more chaos in a system, the more potential for harvesting randomness.

Unless you need NSA level encryption robustness, most of these standard library implementations are good enough for most projects.
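
For a concrete picture of the kind of standard library engine being described, here is a minimal sketch using the C++11 <random> header (my assumption about which library is meant): std::random_device pulls entropy from the OS/hardware where available, and is used here only to seed a deterministic Mersenne Twister engine.

#include <cstdint>
#include <cstdio>
#include <random>

int main() {
    std::random_device rd;      // entropy source for the seed
    std::mt19937 gen(rd());     // deterministic engine, seeded once

    // Uniform over the full 16-bit range, with no modulo bias.
    std::uniform_int_distribution<uint16_t> dist(0, 0xFFFF);

    uint16_t bitNum = dist(gen);
    std::printf("0x%04x\n", bitNum);
    return 0;
}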

Coming up with one's own random number generator isn't that hard. You could literally use twitter as a source for a random seed. There's more random shit on the internet these days than any super AI could ever predict.
 
Would you mind explaining that further? Layman's terms a bonus. Not a software person, but not a stranger to math either; the above made me curious.
I'm assuming you are specifically asking about rand() and not why %100 makes it non-uniform. I don't know too much about this subject, but here are some links that might give you more info:
https://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful
https://wiki.sei.cmu.edu/confluence...+function+for+generating+pseudorandom+numbers
 
A 16-bit number can range from 0 to 65535. If you apply %100 to get a value from 0-99, then you will have 655 complete sets of numbers that go from 0 to 99 and 1 partial set that only goes from 0 to 35.

Would never have thought of that...
Thanks to everyone for replying as well, I like learning new things :)

Expected this to be complicated, but more like the other way round; like conceiving the way for a simple program to "pick" a truly "random" moment, but never thought of the pool itself.
 
Dunno anything about C or C++, so this might be a complete waste of your time.

Pseudorandom number generators are typically based on chaotic cellular automata. Even the linear feedback shift register with a couple of XORs is just computing a CA in a bit-serial manner. Applying the same or similar rule in parallel to an entire generation is probably faster than discarding at least 15 iterations to hide the obvious shifting pattern you would otherwise see.

Wolfram's Rule 30 is popular these days, not sure why. For some peculiar reason they only harvest one bit right down in the middle, which doesn't save any steps toward sixteen useful bits. The rest of the pattern grows sideways, but eventually has to loop upon itself or abuse some other termination rule. It can't grow sideways forever, so the randomness of the middle column is suspect whenever the growth is constrained.

http://www.stephenwolfram.com/publi...dom-sequence-generation-cellular-automata.pdf

I use a small lookup table of 1024 bytes: 10 bits of address in, 8 bits of data out, with an extra input bit on each end for expanding the CA generator to arbitrary width. An 18-bits-in, 16-bits-out lookup table might be too big for a microcontroller, but a PC can handle it no problem. How fast and wide do you want your CA? It's not efficient to roll pseudorandoms one bit at a time, the shift register way.
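
Here is a rough C++ sketch of that lookup-table idea, purely my own illustration rather than the poster's code; it assumes Rule 30 as the update rule and a 16-cell ring, with the table indexed by 8 cells plus one extra neighbour bit on each end.

#include <cstdint>

// 10-bit window in (left neighbour, 8 centre cells, right neighbour),
// 8 bits out (the next generation of those 8 cells) under Rule 30:
//   new_cell = left XOR (centre OR right)
static uint8_t ca_table[1024];

void build_ca_table() {
    for (int window = 0; window < 1024; ++window) {
        uint8_t next_byte = 0;
        for (int i = 1; i <= 8; ++i) {             // bits 8..1 are the centre cells
            int left   = (window >> (i + 1)) & 1;
            int centre = (window >> i) & 1;
            int right  = (window >> (i - 1)) & 1;
            next_byte |= (uint8_t)((left ^ (centre | right)) << (i - 1));
        }
        ca_table[window] = next_byte;
    }
}

// Advance a 16-cell generation one step, a byte at a time instead of a bit at a time.
uint16_t ca_step(uint16_t state) {
    // Wrap the edges around so the row forms a ring of 16 cells.
    uint32_t ring = ((uint32_t)(state & 1) << 17) | ((uint32_t)state << 1) | (state >> 15);
    uint16_t high = ca_table[(ring >> 8) & 0x3FF];  // cells 8..15 plus their neighbours
    uint16_t low  = ca_table[ring & 0x3FF];         // cells 0..7 plus their neighbours
    return (uint16_t)((high << 8) | low);
}

Call build_ca_table() once; after that, each ca_step() call rolls the whole 16-bit generation in just two table lookups.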

I wrote this example in FUZE BASIC, which can run on a PC or a Pi. Just a quick and dirty proof that a Fibonacci LFSR is a dumbed-down serial CA in disguise. Not Wolfram, but it's a CA, no doubt about it. No lookup tables here. We do it the hard way, and 48 bits wide, so you can see that the CA behavior is the source of the pseudorandomness, and not any cheats...

REM *** Fibonacci Linear Feedback Shift Register ***
REM *** Immediate copy bits omitted from display by allowing ***
REM *** an entire register width to quietly roll unseen ***
DIM LFSR(49)
FOR X = 1 TO 49 LOOP
LFSR(X) = 1
REPEAT
FOR Z = 1 TO 28 LOOP
FOR X = 2 TO 48 LOOP
PRINT LFSR(X);
REPEAT
PRINT LFSR(49)
FOR Y = 1 TO 48 LOOP
FOR X = 1 TO 48 LOOP
LFSR(X) = LFSR(X + 1)
REPEAT
REM *** The default Pseudorandom feedback rule from Wikipedia ***
REM *** LFSR(49) = LFSR(1) XOR LFSR(3) XOR LFSR(4) XOR LFSR(6) ***
REM *** Simplified feedback rule to better illustrate a point ***
LFSR(49) = LFSR(1) XOR LFSR(3)
REPEAT
REPEAT

[Attached image: CA.png]


The default rule from Wikipedia samples a wider group of neighbors and does a better
job hiding what's really going on. But once you know what to look for, CA is there too.
Obviously you wouldn't want to use this sort of pattern for random numbers, but that's
exactly what the lazy man's solution does, once you unscramble it for easier viewing.
Going to need further hashing before we could even pretend that this was random.

[Attached image: Fibonacci.png]


These numbers may not be random, but they do have a use for very fast counting. Regular counting involves waiting for the carry to ripple from the lowest bit to the highest. Counting with a Fibonacci LFSR involves no carry. Each number is represented only once, except zero is excluded (sorry, rules)...
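
For the 16-bit case the thread started with, here is a small sketch of such a counter: a maximal-length Fibonacci LFSR with the textbook taps at bits 16, 14, 13 and 11. It visits every non-zero 16-bit value exactly once, with no carry chain.

#include <cstdint>
#include <cstdio>

int main() {
    uint16_t lfsr = 0xACE1u;   // any non-zero start value works
    uint32_t period = 0;

    do {
        // Taps at bits 16, 14, 13 and 11 (counting from 1), XORed into the new top bit.
        uint16_t bit = (uint16_t)((lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u);
        lfsr = (uint16_t)((lfsr >> 1) | (bit << 15));
        ++period;
    } while (lfsr != 0xACE1u);

    std::printf("period = %u\n", period);   // prints 65535: every non-zero state, once
    return 0;
}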

If you ever tackle the challenge of designing your own CPU from scratch, you
may need a counter to sequence the last five or six address bits of microcode.
And the sequence doesn't matter, just store your microcode in the same goofy
pseudorandom sequence that would be abused to read it back.

I would just tack on six extra microcode bits to look up the next address, and not need a goofy sequence. But I was helping another dude that hated tables and was barely tolerant that any microcode was necessary. His counter used ripple carry, and was limited to only 2MHz before it made counting mistakes. Dunno if he ever upgraded to the LFSR counter I redesigned, or left it slow.

----

So my questions for you to consider: Exactly what kind of random do you get
when you let the compiler decide for you? Is modulo 100 enough extra hash
to walk away and say "good enough"?
 
So my questions for you to consider: Exactly what kind of random do you get
when you let the compiler decide for you? Is modulo 100 enough extra hash
to walk away and say "good enough"?

Intel's RDRAND instruction is good enough for me. There's enough entropy there to keep me sleeping like a baby for years.
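
For reference, a minimal sketch of pulling 16 bits straight from RDRAND through the compiler intrinsic (this assumes an x86 part that supports the instruction and, on GCC/Clang, compiling with -mrdrnd; the retry loop is there because the instruction can transiently report failure):

#include <cstdint>
#include <cstdio>
#include <immintrin.h>

int main() {
    uint16_t value = 0;

    // _rdrand16_step() returns 1 on success, 0 if no random data was ready yet.
    while (!_rdrand16_step(&value)) {
        // retry; persistent failure usually means the CPU lacks RDRAND
    }

    std::printf("0x%04x\n", value);
    return 0;
}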

If I required something that scales a little bit better, I'd use Nvidia's cuRAND.

I don't think ARM has any solution in hardware yet, but I'm sure there are some embedded solutions out there.

These are all hardware solutions, obviously, and they are going to be overkill for 99.9% of the projects I'd ever work on. The last time I used cuRAND was something like 2012, and that project was more of a genetic algorithm PoC than anything.

Being hardware implementations, though, I find they are less likely to be reverse engineered. They were designed by teams that are a lot smarter than I ever will be. The only issue is that of trust: do you trust the company that provides you with this highly touted, entropy-rich solution? Tinfoil hat much?

I mean how much entropy does one need these days?
 