rand
Pseudo random number generation.
Module rand was introduced in OTP 18.0.

This module provides a pseudo random number generator. The module contains a number of algorithms. The uniform distribution algorithms are based on the Xoroshiro and Xorshift algorithms by Sebastiano Vigna. The normal distribution algorithm uses the Ziggurat Method by Marsaglia and Tsang on top of the uniform distribution algorithm.

For most algorithms, jump functions are provided for generating non-overlapping sequences for parallel computations. The jump functions perform calculations equivalent to performing a large number of repeated calls for calculating new states, but execute in a time roughly equivalent to one regular iteration per generator bit.
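
For example, the jump functions might be used like this to hand each of a few parallel workers its own non-overlapping sequence (a sketch; worker/1 is an assumed function that threads the explicit state through, for example, uniform_s/1):

%% Each state is 2^64 iterations ahead of the previous one (for exsss),
%% so the workers' sequences do not overlap.
S0 = rand:seed_s(exsss),
S1 = rand:jump(S0),
S2 = rand:jump(S1),
[spawn(fun () -> worker(S) end) || S <- [S0, S1, S2]],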

At the end of this module documentation there are also some niche algorithms to be used without this module's normal plug-in framework API. They may be useful for special purposes like short generation time when quality is not essential, for seeding other generators, and such.

The following algorithms are provided:

exsss

Xorshift116**, 58 bits precision and period of 2^116-1

Jump function: equivalent to 2^64 calls

This is the Xorshift116 generator combined with the StarStar scrambler from the 2018 paper by David Blackman and Sebastiano Vigna: Scrambled Linear Pseudorandom Number Generators

The generator does not need 58-bit rotates so it is faster than the Xoroshiro116 generator, and when combined with the StarStar scrambler it does not have any weak low bits like exrop (Xoroshiro116+).

Alas, this combination is about 10% slower than exrop, but it is nevertheless the default algorithm thanks to its statistical qualities.

exro928ss

Xoroshiro928**, 58 bits precision and a period of 2^928-1

Jump function: equivalent to 2^512 calls

This is a 58 bit version of Xoroshiro1024**, from the 2018 paper by David Blackman and Sebastiano Vigna: Scrambled Linear Pseudorandom Number Generators. On a 64 bit Erlang system it executes only about 40% slower than the default exsss algorithm, but has a much longer period and better statistical properties, at the cost of a larger state.

Many thanks to Sebastiano Vigna for his help with the 58 bit adaptation.

exrop

Xoroshiro116+, 58 bits precision and period of 2^116-1

Jump function: equivalent to 2^64 calls

exs1024s

Xorshift1024*, 64 bits precision and a period of 2^1024-1

Jump function: equivalent to 2^512 calls

exsp

Xorshift116+, 58 bits precision and period of 2^116-1

Jump function: equivalent to 2^64 calls

This is a corrected version of the previous default algorithm, which has now been superseded by Xoroshiro116+ (exrop). Since there is no native 58 bit rotate instruction, this algorithm executes a little (say < 15%) faster than exrop. See the algorithms' homepage.

The current default algorithm is exsss (Xorshift116**). If a specific algorithm is required, make sure to always use seed/1 to initialize the state.

Which algorithm is the default may change between Erlang/OTP releases; it is selected to be one with high speed, small state and "good enough" statistical properties.

Undocumented (old) algorithms are deprecated but still implemented so old code relying on them will produce the same pseudo random sequences as before.

Note

There were a number of problems in the implementation of the now undocumented algorithms, which is why they are deprecated. The new algorithms are a bit slower but do not have these problems:

Uniform integer ranges had a skew in the probability distribution that was not noticeable for small ranges, but for large ranges less than the generator's precision the probability of producing a low number could be twice that of a high number.

Uniform integer ranges larger than or equal to the generator's precision used a floating point fallback that only calculated with 52 bits, which is smaller than the requested range, so not all numbers in the requested range were even possible to produce.

Uniform floats had a non-uniform density, so small values, that is, values less than 0.5, got smaller intervals, decreasing as the generated value approached 0.0, although the values were still uniformly distributed for sufficiently large subranges. The new algorithms produce uniformly distributed floats of the form N * 2.0^(-53), hence equally spaced.

Every time a random number is requested, a state is used to calculate it and a new state is produced. The state can either be implicit or be an explicit argument and return value.

The functions with implicit state use the process dictionary variable rand_seed to remember the current state.

If a process calls uniform/0, uniform/1 or uniform_real/0 without setting a seed first, seed/1 is called automatically with the default algorithm and creates a non-constant seed.

The functions with explicit state never use the process dictionary.

Examples:

Simple use; creates and seeds the default algorithm with a non-constant seed if not already done:

R0 = rand:uniform(),
R1 = rand:uniform(),

Use a specified algorithm:

_ = rand:seed(exro928ss),
R2 = rand:uniform(),

Use a specified algorithm with a constant seed:

_ = rand:seed(exro928ss, {123, 123534, 345345}),
R3 = rand:uniform(),

Use the functional API with a non-constant seed:

S0 = rand:seed_s(exsss),
{R4, S1} = rand:uniform_s(S0),

Textbook basic form Box-Muller standard normal deviate:

R5 = rand:uniform_real(),
R6 = rand:uniform(),
SND0 = math:sqrt(-2 * math:log(R5)) * math:cos(math:pi() * R6)

Create a standard normal deviate:

{SND1, S2} = rand:normal_s(S1),

Create a normal deviate with mean -3 and variance 0.5:

{ND0, S3} = rand:normal_s(-3, 0.5, S2),

Note

The builtin random number generator algorithms are not cryptographically strong. If a cryptographically strong random number generator is needed, use something like crypto:rand_seed/0.

For all these generators except exro928ss and exsss the lowest bit(s) have slightly less random behaviour than all other bits: 1 bit for exrop (and exsp), and 3 bits for exs1024s. See for example the explanation in the Xoroshiro128+ generator source code:

Beside passing BigCrush, this generator passes the PractRand test suite
up to (and included) 16TB, with the exception of binary rank tests,
which fail due to the lowest bit being an LFSR; all other bits pass all
tests. We suggest to use a sign test to extract a random Boolean value.

If this is a problem, then to generate a boolean with these algorithms, use something like this:

(rand:uniform(256) > 128) % -> boolean()
((rand:uniform(256) - 1) bsr 7) % -> 0 | 1

For a general range, with N = 1 for exrop, and N = 3 for exs1024s:

(((rand:uniform(Range bsl N) - 1) bsr N) + 1)

The floating point generating functions in this module waste the lowest bits when converting from an integer so they avoid this snag.

A seed value for the generator.

A list of integers sets the generator's internal state directly, after algorithm-dependent checks of the value and masking to the proper word size. The number of integers must be equal to the number of state words in the generator.

An integer is used as the initial state for a SplitMix64 generator. The output values of that generator are then used for setting the generator's internal state, after masking to the proper word size and, if needed, avoiding zero values.

A traditional 3-tuple of integers seed is passed through algorithm-dependent hashing functions to create the generator's initial state.
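
For example, the three seed forms might be used like this (a sketch; the integer values are arbitrary, and the two-element list assumes the exsss algorithm with its two 58-bit state words):

%% 3-tuple of integers, passed through hashing functions
S0 = rand:seed_s(exsss, {123, 456, 789}),
%% A single integer, expanded through a SplitMix64 generator
S1 = rand:seed_s(exsss, 4711),
%% A list setting the state words directly (must have the right length,
%% and must not produce an all-zero state)
S2 = rand:seed_s(exsss, [12345678901234567, 98765432109876543]),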

0 .. (2^58 - 1)

0 .. (2^64 - 1)

Returns, for a specified integer N >= 0, a binary() with that number of random bytes. Generates as many random numbers as required using the selected algorithm to compose the binary, and updates the state in the process dictionary accordingly.

Returns, for a specified integer N >= 0 and a state, a binary() with that number of random bytes, and a new state. Generates as many random numbers as required using the selected algorithm to compose the binary, and the new state.
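
For example (a minimal sketch of both variants):

%% 16 random bytes using the implicit state in the process dictionary
Bin0 = rand:bytes(16),
%% The same thing with an explicit state
S0 = rand:seed_s(exsss),
{Bin1, S1} = rand:bytes_s(16, S0),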

Returns the random number state in an external format. To be used with seed/1.

Returns the random number generator state in an external format. To be used with seed/1.
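
For example, the exported seed can be used to reproduce a sequence later (a sketch using the implicit state):

_ = rand:uniform(),          % ensure a state exists in the process dictionary
Exported = rand:export_seed(),
R0 = rand:uniform(),
_ = rand:seed(Exported),     % restore the exported state
R0 = rand:uniform(),         % matches: the same value is generated again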

Returns the state after performing jump calculation to the state in the process dictionary.

This function generates a not_implemented error exception when the jump function is not implemented for the algorithm specified in the state in the process dictionary.

Returns the state after performing jump calculation to the given state.

This function generates a not_implemented error exception when the jump function is not implemented for the algorithm specified in the state.

Returns a standard normal deviate float (that is, the mean is 0 and the standard deviation is 1) and updates the state in the process dictionary.

Returns a normal N(Mean, Variance) deviate float and updates the state in the process dictionary.

Returns, for a specified state, a standard normal deviate float (that is, the mean is 0 and the standard deviation is 1) and a new state.

Returns, for a specified state, a normal N(Mean, Variance) deviate float and a new state.
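
For example, using the implicit state (a minimal sketch; the mean and variance values are arbitrary):

%% Standard normal deviate, mean 0 and standard deviation 1
Z = rand:normal(),
%% Normal deviate with mean 100.0 and variance 25.0 (standard deviation 5.0)
T = rand:normal(100.0, 25.0),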

Seeds random number generation with the specified algorithm and time-dependent data if AlgOrStateOrExpState is an algorithm. Alg = default is an alias for the default algorithm.

Otherwise recreates the exported seed in the process dictionary, and returns the state. See also export_seed/0.

Seeds random number generation with the specified algorithm and integers in the process dictionary and returns the state. Alg = default is an alias for the default algorithm.

Seeds random number generation with the specified algorithm and time-dependent data if AlgOrStateOrExpState is an algorithm. Alg = default is an alias for the default algorithm.

Otherwise recreates the exported seed and returns the state. See also export_seed/0.

Returns a random float uniformly distributed in the value range 0.0 =< X < 1.0 and updates the state in the process dictionary.

The generated numbers are of the form N * 2.0^(-53), that is, equally spaced in the interval.

Warning

This function may return exactly 0.0 which can be fatal for certain applications. If that is undesired you can use (1.0 - rand:uniform()) to get the interval 0.0 < X =< 1.0, or instead use uniform_real/0.

If neither endpoint is desired you can test and re-try like this:

my_uniform() ->
    case rand:uniform() of
        0.0 -> my_uniform();
        X -> X
    end.

Returns a random float uniformly distributed in the value range DBL_MIN =< X < 1.0 and updates the state in the process dictionary.

Conceptually, a random real number R is generated from the interval 0 =< R < 1 and then the closest rounded down normalized number in the IEEE 754 Double precision format is returned.

Note

The numbers generated by this function have better granularity for small numbers than the regular uniform/0 because all bits in the mantissa are random. This property, in combination with the fact that exactly zero is never returned, is useful for algorithms doing for example 1.0 / X or math:log(X).

See uniform_real_s/1 for more explanation.

Returns, for a specified integer N >= 1, a random integer uniformly distributed in the value range 1 =< X =< N and updates the state in the process dictionary.
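
For example (a sketch; List is assumed to be bound to a non-empty list):

%% Roll a six-sided die: integer in 1..6
Roll = rand:uniform(6),
%% Pick a random element from a list
Elem = lists:nth(rand:uniform(length(List)), List),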

Returns, for a specified state, a random float uniformly distributed in the value range 0.0 =< X < 1.0 and a new state.

The generated numbers are of the form N * 2.0^(-53), that is, equally spaced in the interval.

Warning

This function may return exactly 0.0 which can be fatal for certain applications. If that is undesired you can subtract the generated value from 1.0 to get the interval 0.0 < X =< 1.0, or instead use uniform_real_s/1.

If neither endpoint is desired you can test and re-try like this:

my_uniform(State) ->
    case rand:uniform_s(State) of
        {0.0, NewState} -> my_uniform(NewState);
        Result -> Result
    end.

Returns, for a specified state, a random float uniformly distributed in the value range DBL_MIN =< X < 1.0 and a new state.

Conceptually, a random real number R is generated from the interval 0 =< R < 1 and then the closest rounded down normalized number in the IEEE 754 Double precision format is returned.

Note

The numbers generated by this function have better granularity for small numbers than the regular uniform_s/1 because all bits in the mantissa are random. This property, in combination with the fact that exactly zero is never returned, is useful for algorithms doing for example 1.0 / X or math:log(X).

The concept implies that the probability of getting exactly zero is extremely low; so low that this function is in fact guaranteed to never return zero. The smallest number that it might return is DBL_MIN, which is 2.0^(-1022).

The value range stated at the top of this function description is technically correct, but 0.0 =< X < 1.0 is a better description of the generated numbers' statistical distribution, except that exactly 0.0 is never returned, which is not possible to observe statistically.

For example, for all subranges N*2.0^(-53) =< X < (N+1)*2.0^(-53) where 0 =< integer(N) < 2.0^53 the probability is the same. Compare that with the form of the numbers generated by uniform_s/1.

Having to generate extra random bits for small numbers costs a little performance. This function is about 20% slower than the regular uniform_s/1.

Returns, for a specified integer N >= 1 and a state, a random integer uniformly distributed in the value range 1 =< X =< N and a new state.

Returns a random 64-bit integer X and a new generator state NewAlgState, according to the SplitMix64 algorithm.

This generator is used internally in the rand module for seeding other generators since it is of a quite different breed which reduces the probability for creating an accidentally bad seed.
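
For example, a single integer can be expanded into a few pseudo random words (a sketch; the starting value 4711 is arbitrary):

Seed0 = 4711,
{W1, Seed1} = rand:splitmix64_next(Seed0),
{W2, Seed2} = rand:splitmix64_next(Seed1),
{W3, _Seed3} = rand:splitmix64_next(Seed2),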

Returns a random 58-bit integer X and a new generator state NewAlgState, according to the Xorshift116+ algorithm.

This is an API function into the internal implementation of the exsp algorithm that enables using it without the overhead of the plug-in framework, which might be useful for time critical applications. On a typical 64 bit Erlang VM this approach executes in just above 30% (1/3) of the time for the default algorithm through this module's normal plug-in framework.

To seed this generator use {_, AlgState} = rand:seed_s(exsp) or {_, AlgState} = rand:seed_s(exsp, Seed) with a specific Seed.

Note

This function offers no help in generating a number on a selected range, nor in generating a floating point number. It is easy to accidentally mess up the fairly good statistical properties of this generator when doing either. Note also the caveat about weak low bits that this generator suffers from. The generator is exported in this form primarily for performance.
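
As a sketch of such use, here is a tight loop that sums N raw 58-bit outputs without going through the plug-in framework (the function names are illustrative):

sum_exsp(N) ->
    {_, AlgState} = rand:seed_s(exsp),
    sum_exsp(N, AlgState, 0).

sum_exsp(0, _AlgState, Acc) ->
    Acc;
sum_exsp(N, AlgState, Acc) ->
    {X, NewAlgState} = rand:exsp_next(AlgState),
    sum_exsp(N - 1, NewAlgState, Acc + X).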

Returns a new generator state equivalent to the state after iterating over exsp_next/1 2^64 times.

See the description of jump functions at the top of this module description.

Returns a generated pseudo random number X1, which is also the new generator state, according to a classical Multiplicative Congruential Generator (a.k.a. Multiplicative Linear Congruential Generator, Lehmer random number generator, Park-Miller random number generator).

This generator uses the modulus 2^35 - 31 and the multiplication constant 185852 from the paper "Tables of Linear Congruential Generators of different sizes and good lattice structure" by Pierre L'Ecuyer (1997) and they are selected for performance to keep the computation under the Erlang bignum limit.

The generator may be written as X1 = (185852*X0) rem ((1 bsl 35)-31), but the properties of the chosen constants have allowed an optimization of the otherwise expensive rem operation.

On a typical 64 bit Erlang VM this generator executes in just below 10% (1/10) of the time for the default algorithm in this module.

Note

This generator is only suitable for insensitive special niche applications since it has a short period (2^35 - 32), few bits (under 35), is not a power of 2 generator (range 1 .. (2^35 - 32)), offers no help in generating numbers on a specified range, and so on.

But for pseudo random load distribution and such it might be useful, since it is very fast. It normally beats even the known-to-be-fast trick erlang:phash2(erlang:unique_integer()).
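
For example, pseudo random load distribution over a small number of queues might look like this (a sketch; taking the initial value from the default generator and mapping with rem, which introduces a slight bias, are assumptions acceptable for this use case):

%% X0 must be in the range 1 .. (2^35 - 32)
X0 = rand:uniform((1 bsl 35) - 32),
X1 = rand:mcg35(X0),
Queue1 = X1 rem 8,
X2 = rand:mcg35(X1),
Queue2 = X2 rem 8,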

Returns a generated pseudo random number X1, which is also the new generator state, according to a classical Linear Congruential Generator, a power of 2 mixed congruential generator.

This generator uses the modulus 2^35 and the multiplication constant 15319397 from the paper "Tables of Linear Congruential Generators of different sizes and good lattice structure" by Pierre L'Ecuyer (1997) and they are selected for performance to keep the computation under the Erlang bignum limit. The addition constant has been selected as 15366142135 (it has to be odd), which looks more interesting than simply 1.

The generator may be written as X1 = ((15319397*X0) + 15366142135) band ((1 bsl 35)-1).

On a typical 64 bit Erlang VM this generator executes in just below 7% (1/15) of the time for the default algorithm in this module, which is the fastest generator the author has seen. It can hardly be beaten by even a BIF implementation of any generator since the execution time is close to the overhead of a BIF call.

Note

This generator is only suitable for insensitive special niche applications since it has a short period (2^35), few bits (35), offers no help in generating numbers on a specified range, and has, among others, the known statistical artifact that the lowest bit simply alternates, the next to lowest has a period of 4, and so on, so it is only the highest bits that achieve any form of statistical quality.

But for pseudo random load distribution and such it might be useful, since it is extremely fast. The mcg35/1 generator above has fewer statistical artifacts, but instead it has other peculiarities since it is not a power of 2 generator.
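
Since it is only the highest bits that achieve any form of statistical quality, a small range should be taken from the top bits, for example like this (a sketch; the starting value is arbitrary):

%% Pick one of 16 shards from the top 4 of the 35 bits
X0 = 12345,
X1 = rand:lcg35(X0),
Shard1 = X1 bsr (35 - 4),
X2 = rand:lcg35(X1),
Shard2 = X2 bsr (35 - 4),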