Randomize?

Is there any better random number generator than Visual C++'s own???

Thanx…

rand() ?

random()

VC++'s own?

Hi there!

Combining rand() with something else (like the current mouse position and/or the time)?

I prefer using the actual clock speed, because it is REALLY random (since it comes from a quartz oscillator, it changes frequently).

So long,

LG

How do I write code to use the actual clock speed??? It’s what I’ve been looking for; if you don’t have a good seed, the randomization sucks…

I’m pretty sure rand() and random() already use the clock, or something similar, to generate their numbers.

I find I can get any range of numbers I need with rand():

((rand() % 8001) - 4000) / 1000.0f;

gives me a nice random range of -4.000 to 4.000… (the 8001 makes +4.000 reachable, and the floating-point divisor keeps the fractional part; plain integer division by 1000 would truncate it away).

rand() (dunno about random()) is not using the clock to generate numbers. It’s using an algorithm to generate numbers based on a seed. This means you will get the same numbers each time you call rand() with the same seed.
One way to get a unique series of numbers is to call srand(time(NULL)) once before using rand(). (Reseeding before every call would be slow, and since time() only changes once per second, you would keep restarting the series with the same seed.) This way the functions indirectly use the clock to generate random numbers (what lgrosshennig mentioned).

Well, I could talk your ear off about random numbers, but that’s not what you’re looking for…

rand(), as it is implemented on most systems, applies a simple multiply-and-add with some relatively prime constants to a seed integer (a linear congruential formula). The seed is updated each time rand() is called, so you can regenerate the same list of numbers from the same seed. This is called Pseudo-Random Number Generation.

If you want a ‘stronger’ pseudo-random number generator, just type “cryptographic random number generator” into Google and see what pops up. ISAAC is a pretty good one: http://burtleburtle.net/bob/rand/isaacafa.html

If you want TRUE random numbers (i.e. you will never get the same series, EVER), you will have to tap some real-world source. Minor fluctuations in interrupt timing, keyboard taps, and wiggles in the mouse actuators are all good sources. However, the best by far is the HotBits server: http://www.fourmilab.ch/hotbits/

You can request up to 2K of random bytes at a time. Then just poll from this set for your random number, or use it as the seed for a pseudo-random number series.

–Bryan

Hi there!

I am sorry for the delay, but I was busy.
It looks like the thread draws some attention. Well, doesn’t matter

As Bob already said, rand() uses some bull**** formula to generate (always) the same numbers. As I said, the current clock speed is REALLY random. I present to you the proof: a code snippet that shows you how to get it.

Since the Windows timeGetTime() function is limited to millisecond resolution, the results of my function are also limited to this. This means the last three digits of the actual clock speed are missing. BUT you should still get UNIQUE results (i.e. for a 400MHz system you should get values between 380000 and 420000). Well, as I said, they depend on your system oscillator.

BTW, you have to link winmm.lib to the sample.

— code snippet of main.cpp ----
#include <iostream.h>
#include <stdlib.h>
#include <windows.h>
#include <mmsystem.h>

DWORD GetClockSpeed()
{
    DWORD i;
    DWORD StartTimeRDTSC, EndTimeRDTSC;
    DWORD StartTimeWIN, EndTimeWIN;
    DWORD ProcessorMHZ = 0;
    float TimeT = 0.0f;

    __asm
    {
        __emit 0fh   // rdtsc (reads the time-stamp counter into EDX:EAX)
        __emit 031h
        mov StartTimeRDTSC, eax
    }
    StartTimeWIN = timeGetTime();

    // busy work only; the results of rand() are never used
    for (i = 0; i < 1000000; i++)
        TimeT += ((float)(rand() % 19) / 2.0f * (float)i);

    __asm
    {
        __emit 0fh   // rdtsc
        __emit 031h
        mov EndTimeRDTSC, eax
    }
    EndTimeWIN = timeGetTime();

    ProcessorMHZ = (EndTimeRDTSC - StartTimeRDTSC) / (EndTimeWIN - StartTimeWIN);
    return ProcessorMHZ;
}

int main(void)
{
    for (int i = 0; i < 20; i++)
        cout << GetClockSpeed() << "\n";

    // wait till the user enters something
    char something;
    cin >> something;
    return 0;
}

— code snippet ends ----

Hm, it is probably in a bad format now (due to the post client…), but you will figure it out.

Don’t get nervous about the rand() inside the function; it is only there to keep the CPU busy

If you still have questions, simply ask me

Have fun, and may the vector be with you!

LG

Thanks, I’ll try that…

Sorry LG, this code is NOT random. The actual ‘randomness’ from your algorithm is again coming from the rand() function.

Let me explain:
When the processor executes an instruction, the cycle counter increments based on the cycles it took to execute that instruction. This is great for simple stuff like Shift, Move, and memory accesses because these instructions execute in exactly the same number of cycles regardless.

When you throw in Mod, Mult, Div, etc… the number of cycles becomes a bit less static. Depending on the number of bits that are set in the mantissa of the numbers, the Div operation takes differing amounts of time.

It is this differing amount of time in the Div and Mod operations of your loop that is generating the apparently random numbers you see, not clock cycle fluctuation.

This is much worse than rand() in that the randomness of rand() is mathematically bounded and known, allowing you to work within a known level of error and compensate for it. All computers will generate the same error within these bounds.

With your function, the bounds of the randomness are not well defined: on my overclocked AMD K6-2 450 I would get pretty good results, but on a very stable Intel PIII 1GHz your output would be almost exactly the same each time if you reset the rand() seed (because the same computations would be performed, causing the same cycles to be used, and thus the same resulting clock speed).

Here’s an algorithm for capturing random real-time interrupt fluctuation:

  1. Trap the real-time interrupt to occur at the shortest allowable interval.
  2. Start a VERY optimized counter routine at zero.
  3. When the interrupt fires, store your counter.
  4. Do this again to get a second count.
  5. If the first count is less, return one bit = 1. If the first count is more, return one bit = 0. If they were equal, do not return any bits.

After producing 8 bits, you have one random byte. Repeat for all the bytes you need.

I’ve used this system on three platforms and tested the results with the random-number generator monkey tests. All the streams passed the tests. This still isn’t good enough for me, but it works in a pinch.

Hope that helps,
–Bryan

Sorry Bryan, but you are wrong.

You know why? You think you always have a constant clock speed, but that isn’t right.

That means if you have a 450MHz system, the time for ONE clock cycle will probably vary between 2.0e-9 and 2.4e-9 seconds. And this is where I get my “randomness”.

The point is, I’m getting my “random value” from the natural instability of the clock generator. In your case the clock speed is around 450MHz, but it is NOT constant.

It doesn’t matter what kind of system you have. If you have a 1GHz system, the oscillator will generate, for example, a frequency between 0.95GHz and 1.05GHz, if not a wider range.

So what I’m doing, is the following:

  1. I get the current millisecond count.
  2. Then I read the RealTimeStampCounter.
    (The counter is increased by one with each clock tick.)
  3. I do something (in this case I only generate random numbers, but I don’t use them anywhere; it’s all to keep the CPU busy). ANYTHING will do the job: you can calc prime numbers if you want, or execute NOPs, it doesn’t matter.
  4. I read the RealTimeStampCounter again.
    By subtracting the first value from this one I know how many clock cycles got “wasted”.
  5. I read the current millisecond counter and subtract the former value from it to get the time the whole thing took, and voilà, I have a value that is based on the instability of the system clock oscillator.

Since this value is completely independent of the code that is actually executed, it is random.

Do you agree that it is the nature of an oscillator to be unstable?

Regards,

LG

LG,

Good debate

Well, of course I have to say I’m right again and support it with more than words this time to remain credible.

I agree oscillators are unstable; I disagree that this code is ‘detecting’ that instability. I’m only continuing the argument to prove the lack of ‘true’ randomness with this method, not its utility as a pseudo-random number generator.

Given that the function should calculate the CPU’s current clock speed, averaging its results over many iterations should give the actual clock speed, right?

So, first I replaced the inner loop of the function with a stable equation:

for (i = 0; i < 1000000; i++)
    nShift = (nShift >> 1) & 0x01000000;

Where nShift is a ‘long’. This loop is stable in that its operations all consist of fixed-cycle instructions (shifts, increments and logical ANDs).

This gave me results in the range 300MHz up to 600MHz on a 400MHz system; oscillators aren’t THAT unstable. So I looked up the rdtsc instruction: http://developer.intel.com/drg/pentiumII/appnotes/RDTSCPM1.HTM

It suggested adding a cpuid instruction before and after the timed section. This stabilized the function’s output to the 370 to 450 range; much better.

Now, you say the resulting value is independent of the code executed. So I ran this function 1000 times, averaged the results and displayed the average. To mitigate any system-level interruptions in the first run of an application, I ran the application three times.

The results were:
1: 411783
2: 411180
3: 411112

Not bad; this machine is a 400MHz Win NT box, so 411MHz is not far off. One might leave it at that, but…

I changed the inner loop count from 1000000 to 2000000… let’s see what happened:

1: 402570
2: 402734
3: 402821

My machine slowed down? Let’s do it one more time, with 3000000.

1: 398787
2: 399297
3: 398648

Yep. Monotonically decreasing CPU speeds; seems a bit suspect now? If the function does not measure the actual CPU speed, then it can’t measure fluctuations of the CPU speed. Again, my argument holds: this function does not generate random numbers.

Just for validation, I reinstated the original inner loop and ran it with 100000 and 200000 iterations; the results were as I expected:
1: 441232
2: 441675
3: 441559

1: 405355
2: 405281
3: 405392

–Bryan

Maybe a bit off-topic, but I have a question about oscillators varying in clock speed. As far as I know, oscillators are quite stable. I have seen 100MHz oscillators for not more than $5, varying ±10 ppm per year.
What lgrosshennig said was that one cycle of a 450MHz clock could vary from 2.0 to 2.4 ns. In my eyes, this looks quite far from the oscillator I mentioned above.
lgrosshennig also said a 1GHz system would vary ±50MHz.

Are oscillators really THAT unstable? What about the oscillator I mentioned above? Do manufacturers really use the crappiest oscillators they can get (if they vary that much)?

No offence lgrosshennig, it could have been anyone who said that about frequency variations; I just wanted to know if this is true…

Hi Bob!

No offence taken

Because I know what I’m talking about, I think I should give you a little lesson
regarding oscillators (no offence intended) :stuck_out_tongue:

You said that you saw an oscillator with ±10 ppm per year. Well, this value only
gives you the ageing factor of the oscillator (and in this case it is bad) :stuck_out_tongue:

Mainstream oscillators usually have something in the range of ±4 to 7 ppm for
the first year and a lower number for the following years. However, there
is also the operating error, which ranges from ±15 ppm for super-accurate ones
to more than ±100 ppm for the midrange, and everything above that is the rest.

BTW: ppm means parts per million.

Besides these two values there are also the rise and release times. Since an oscillator does not generate a perfectly square waveform, these times specify the max. time to reach the voltage level required for a connected device to detect a clock edge (this is probably badly translated).

Well, why should a mainboard manufacturer build in a quality oscillator? Nobody ever
notices, and it is not required anyway.

To perform a little calculation, let’s assume the following: we have an oscillator running at 100MHz with an error of ±100 ppm. So we get a variation of 100MHz * 100 ppm = 10000 pulses per second. Since the CPU takes
this clock and multiplies it by, for example, 5 to get a CPU speed of 500MHz, the error is also multiplied by 5, so we end up with an error of ±50000 pulses per second.

Since we haven’t yet accounted for the rise/release times, nor the changes in
frequency due to changes in the ambient temperature, it sums up to a ±1% change in the CPU frequency.

Since I don’t think you will believe that, I suggest you grab yourself a book about electronics and/or
oscillators and read it.

Regarding the last statement you made:

I know that it is true; I don’t have to ask anybody else because I saw it often enough on an oscilloscope. Did you?

But could it be that someone reading an oscillator specification does not know how to interpret it :rolleyes: ?

Have a nice day!

LG

Hi Bryan!

First of all, I have to say that I am really starting to enjoy our little discussion (even if it is NOT OpenGL related).

Well, again I disagree with you.

But first let me explain something.

In the past I worked with an AMD K6-2 300MHz system and did some 3DNow! stuff.
To be honest, I did not really get into the internals of the K6-2 (like the pipelining, out-of-order execution, register renaming, code translation…) as I did with the Intel CPUs, so the following
may be Intel P2/P3 specific.

In general you are right about the cpuid instruction, but I didn’t have to put it in, because
the internal code (look at the disassembly) uses the EAX and EDX registers as well. Since the
rdtsc instruction also writes to the EAX/EDX registers, the Intel P2/P3 instruction scheduler
detects this as a dependency and thus executes the instructions “in order” (it seems the AMD handles this differently, but I don’t have proof of this yet; since I still have the documentation
I will check that later). So the code is executed in order on my system, making the serializing cpuid
instruction unnecessary (in this case).

I agree with you that it is probably better to use a different inner loop, if only not to confuse
everybody. I still say the result is code independent.

Yes, I did check your results, and came to this conclusion: I mentioned in a former post that
the result is limited by the accuracy of the timeGetTime() function (1ms; I wish it were 1ns).

Your results are not monotonically decreasing. I am pretty sure you will also get a result around 398750 if you increase the loop count to something like 5000000 to 50000000, because the higher counts
nullify the inaccuracy of the time function. Maybe you should look at the milliseconds it took
to execute the 1000000 and the 2000000 loops. The result for the second is not twice the first one, right?

The same is true for my original inner loop: I cross-checked it on my system with various loop counts
and got almost the same result.

What do you think?

Besides, I agree that I made the mistake of choosing a loop count that is too low, and that I only performed
a binary test on an AMD system (is working/is not working).

Could anybody with an Intel-based system validate my results, please?

Oh, by the way

Since you are the one who got picky:
You don’t really think your “static” loop always executes within the same number of clock cycles, do you?
While this is true in theory, it is not in reality, because there is always an interrupt/task switch
trashing your caches, branch prediction and so on.
You also said the execution time of a division depends on the bits of the mantissa; this was true for
older CPUs like the 486 or the Motorola 68040, but it is not true for an Intel Pentium 1/2/3 or an AMD K5/K6/K6-2/K6-3/Athlon and their predecessors. BTW: Did you know that Intel CPUs perform the INTEGER multiply/divide inside the FPU? Scary, eh?

You mentioned random-number generator monkey tests? What are those? I want them

Regards,

LG

hmm. OpenRandomLibrary.

Computers can’t generate truly random numbers, and seeding the random function with the clock speed is a stab at alleviating the problem, but it is still caught by the fact that CPUs are finite state machines. If you know the seed, and you know the state of the random function, then you know with utter certainty what number it is going to produce. Purely random generators do not have this property. (This is just a nod towards the argument: someone said it wasn’t truly random, and then the original poster said he was wrong.)

Seeding the random generator with a value taken from physical measurements (i.e. measuring something physical, and thus inherently analogue) means the program DOESN’T know what seed it will start with, but from then on it’s entirely deterministic.

my 2c worth

cheers
John

John, you’d be a good Maths teacher

Anyway, that’s why we only have “pseudo-random” number generation functions. Usually we have to use a really random seed, like the last digits of the clock or the mouse position, as lgrosshennig said. But the seed determines the sequence of numbers we get.
[same seed <=> same numbers generated]

Using a combination of the techniques mentioned (mouse position, clock and CPU speed) we can randomize the seed, and by doing this from time to time we would surely not get the same numbers.
That’s what Bob intended to do, but if he calls srand(time(NULL)) regularly (I mean periodically) he will get the same numbers with the same seed, because the instructions are executed within the same period of time: he would call srand() with the same timer values.
—> We must consider the whole program as a pseudo-random generator then.

Does anybody have an idea about using the error spread of floating-point calculations to get a different “random” sequence from the same sequence?
Or maybe combining two really random seeds, then?
Or even better, randomly using any of these techniques?!

Sorry I have no code to submit, I’m just making suggestions. Hope this can help!

[This message has been edited by namu (edited 05-13-2000).]

Oops, I forgot some things:

You have to consider the numbers you generate as one long sequence, determined once and forever by your algorithm. The seed only indicates where you start in the sequence. So you would have to change your algorithm each time you wanna get really random numbers?!
What about randomly generating algorithms for generating random numbers…
Looks like this problem is more complicated than I thought.

Well, me again…

LG: OK, I admit I mixed up a few terms here; the ppm figure was per year, not runtime variation. I also talked to a teacher here at university, and he said that a crystal can vary in speed, but he also said that ±50MHz was maybe a bit too much.

namu: Yeah, calling srand() with the same seed will indeed cause rand() to generate the same numbers. However, if you call srand() regularly with a changing value, you will get an almost unique series of numbers. This is what I wanted to say.

We also must remember one thing: how much “randomness” do we want/need? I don’t need a “true” random generator in my apps, so I can safely use rand().

I came up with another idea as well. I found this code on the net, and I have seen it in other apps I have downloaded, so I suppose it’s not too uncommon.

function IntNoise(32-bit integer: x)
x = (x << 13) ^ x;
return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
end IntNoise function

You could make the function take no parameters and calculate a value for x when you enter the function. Here you can take values like CPU speed, time, mouse position, <insert_more_components_here_if_you_like> and add/multiply them (or whatever you like).
The function returns a float value from -1.0 to 1.0 (not 0.0 to 1.0: the masked term divided by 1073741824.0 spans 0 to 2, so the subtraction from 1.0 can go negative). Quite handy when you want a random value in a range from 0 to x: shift and scale the return value with (result + 1.0) / 2.0 * x.

LG,

I must concede. We’ve reached the limits of my hardware knowledge and I can’t think of a better software argument. Good work!

Monkey tests: a guy named Marsaglia came up with statistical tests for random-number bit streams. I think there were five originally.

Here are the PostScript versions of Marsaglia’s work: http://www.evensen.org/marsaglia/keynote.ps http://www.evensen.org/marsaglia/monkey.ps

After the introduction of the original suite, more tests have been added; at last count I think there were fifteen. Some are specific to pseudo-random number generators (like the cycle-length test and the keyspace-flatness tests).

Marsaglia wrote a Fortran program that ran his tests, called DieHard; a C translation can be found here:
http://stat.fsu.edu/~geo/diehard.html

A good introduction to random numbers and their importance can be found in “Applied Cryptography” by Bruce Schneier: http://www.counterpane.com/applied.html
http://www.amazon.com/exec/obidos/ASIN/0…0864803-8496018

This book is an excellent reference and has some very insightful discussion of all manner of security topics. Be sure to get the latest printing (6th?); there were over 100 errors in the first two runs.

Of course, anything used in cryptography gets attacked simply for claiming mathematical security, so here’s Bruce et al. attacking some PRNGs:
http://www.counterpane.com/pseudorandom_number.html

Hope you all find this stuff as fascinating as I do. Sorry for the extreme off-topic wandering.

–Bryan