I want to study the limit of the Gini coefficient over time assuming random transactions. The goal is to approximate the inverse function, to support designing systems for a particular Gini index given some trading volume (amounts and number of transactions) versus the number of players. Random transactions can later serve as a baseline for comparing the decision bias of more complex systems. The lower limit makes it rather hard to derive the limit mathematically, so I plan to run simulations and measure it.
The problem is to get a source of approximately perfect randomness, and an algorithm that converges efficiently while keeping high numerical accuracy. These two things are interdependent.
Ideally I want a guarantee that doesn't require storing the sum or using a floating average, since those might accumulate rounding errors. If the random source is biased, I want a method to detect it.
One approach is to treat the measurement as a spring, which gradually gets stiffer and less sensitive to new measurements over time. When the last decimal stops changing, one can say it has "converged to a limit". The rate at which it stiffens should be low enough to pick up the next digit. If the random source is biased, two runs on the same input (differing only in the random source) should yield different limits.
The problem is: I don't know what change in stiffness is sufficient for picking up the next digit. If I knew that, I could detect bias in the random source earlier.
Such an algorithm might itself be susceptible to rounding errors, but I am not sure.
Another approach is to repeat the spring algorithm in two layers, since the limits of many runs should themselves converge to a limit. Alternatively, use a plain average in the second layer.
Neither thread_rng nor OsRng could reproduce the limit accurately. I have been thinking about why, and believe the stiffness should slow down exponentially. This is equivalent to increasing the number of samples exponentially to gain accuracy.
For now, I think it is best to settle for a goal of 1/1000 accuracy.