
Rate Limiter


There are currently two forms of rate limiting (both quite primitive):

  1. Burst
  2. Spread

Burst

Burst is the default rate limiter.

For example, with a rate limit of 10 requests per 10 seconds (10r/10s), say you want to send 35 requests at roughly the same time:

  1. Request #1 is accepted, and your rate limit starts.

  2. Requests #2-10 are accepted in almost the same second.

  3. You are now rate limited until 10 seconds after Request #1.

  4. The rate limit is then lifted.

  5. Requests #11-20 go through the same way, and you are rate limited for another 10 seconds.

  6. Requests #21-30 go through the same way, and you are rate limited for another 10 seconds.

  7. The rate limit lifts one more time, Requests #31-35 are processed in about a second, and we're done.

This should net us an execution time of around 30000ms (three full 10-second windows) plus code execution time.
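
If you want a rough mental model of how the burst behavior plays out, here is a small standalone sketch of a fixed-window burst limiter. This is only an illustration of the idea described above, not kindred-api's actual internals, and the names (BurstLimiter, dummyRequest) are made up for the example:

// Fire requests immediately while the current window has room,
// and queue the rest until that window expires.
function BurstLimiter(limit, windowMs) {
  this.limit = limit
  this.windowMs = windowMs
  this.count = 0
  this.windowStart = null
  this.queue = []
}

BurstLimiter.prototype.schedule = function (fn) {
  var now = Date.now()
  if (this.windowStart === null || now - this.windowStart >= this.windowMs) {
    this.windowStart = now // a fresh window starts with this request
    this.count = 0
  }
  if (this.count < this.limit) {
    ++this.count
    fn() // burst: run immediately
  } else {
    // window is full: try again once the current window expires
    var self = this
    this.queue.push(fn)
    setTimeout(function () {
      var next = self.queue.shift()
      if (next) self.schedule(next)
    }, this.windowStart + this.windowMs - now)
  }
}

// 35 dummy "requests" against a 10r/10s limit finish in roughly 30 seconds,
// matching the walkthrough above.
var burst = new BurstLimiter(10, 10000)
function dummyRequest(n) {
  burst.schedule(function () { console.log('request', n, 'at', Date.now()) })
}
for (var i = 1; i <= 35; ++i) dummyRequest(i)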

You can test out the rate limiter (and see that it supports simultaneous requests to multiple regions) with the following code:

// k is an initialized Kindred instance (see the Spread section below for setup)
var num = 45 // total number of requests (15 per region x 3 regions)

function count(err, data) {
  if (data) --num // only count successful responses
  if (err) console.error(err)
  if (num == 0) console.timeEnd('api') // timer ends only if all 45 succeeded
}

console.time('api')
for (var i = 0; i < 15; ++i) {
  k.Champion.list('na', count)
  k.Champion.list('kr', count)
  k.Champion.list('euw', count)
}

This should output something like api: 11820.972ms. Rate limits are tracked per region, so each region's 15 requests fit into roughly one 10-second window plus a final burst, and the three regions run in parallel.

var num = 300 // total number of requests (100 per region x 3 regions)

function count(err, data) {
  if (data) --num
  if (err) console.error(err)
  if (num == 0) console.timeEnd('api')
}

console.time('api')
for (var i = 0; i < 100; ++i) {
  k.Champion.list('na', count)
  k.Champion.list('kr', count)
  k.Champion.list('euw', count)
}

This should output something like api: 100186.515ms.

To test that it works with retry headers, just run the program while sending a few requests from your browser to intentionally rate limit yourself.
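
For reference, the general shape of honoring a Retry-After header in Node looks something like the sketch below. This is not kindred-api's actual code, just an illustration of the idea; getWithRetry and the url parameter are placeholders:

// On a 429 response, wait however many seconds the server asks for,
// then retry the same request.
var https = require('https')

function getWithRetry(url, cb) {
  https.get(url, function (res) {
    if (res.statusCode === 429) {
      // Riot sends Retry-After in seconds; fall back to 1 second if it's missing
      var waitMs = (parseInt(res.headers['retry-after'], 10) || 1) * 1000
      res.resume() // discard the body so the socket is freed up
      return setTimeout(function () { getWithRetry(url, cb) }, waitMs)
    }
    var body = ''
    res.on('data', function (chunk) { body += chunk })
    res.on('end', function () { cb(null, body) })
  }).on('error', cb)
}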

Because of the lines if (data) --num and if (num == 0) console.timeEnd('api'), the timer only stops once every request has succeeded, so you can tell whether all your requests went through.

Spread

To use the spread rate limiter, initialize Kindred in the standard way, but add spread: true to the config object.

var KindredAPI = require('kindred-api')

var RIOT_API_KEY = 'whatever'
var REGIONS = KindredAPI.REGIONS
var LIMITS = KindredAPI.LIMITS
var CACHE_TYPES = KindredAPI.CACHE_TYPES

var k = new KindredAPI.Kindred({
  key: RIOT_API_KEY,
  defaultRegion: REGIONS.NORTH_AMERICA,
  debug: true,
  limits: LIMITS.DEV,
  spread: true, // this!
  cacheOptions: CACHE_TYPES[0]
})

Since spreading requests out means they fill the whole window instead of bursting at its start, the execution time should be longer. Right now, I spread the requests by basically adding a rate limiter per ~1-second slice (it's not exactly 1s).

So with a DEV key you'll make roughly 1 request per second, and with a PROD key roughly 50 requests per second.
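
As a rough illustration of the spread idea (again, not kindred-api's actual internals; SpreadLimiter is a made-up name for this sketch), you can think of it as a queue that dispatches one request every windowMs / limit milliseconds instead of bursting:

// Queue every request and dispatch one per interval instead of bursting.
function SpreadLimiter(limit, windowMs) {
  this.interval = windowMs / limit // 10r/10s -> one request per ~1000ms
  this.queue = []
  this.timer = null
}

SpreadLimiter.prototype.schedule = function (fn) {
  this.queue.push(fn)
  if (!this.timer) this.drain()
}

SpreadLimiter.prototype.drain = function () {
  var next = this.queue.shift()
  if (!next) {
    this.timer = null // queue drained; the next schedule() restarts it
    return
  }
  next()
  var self = this
  this.timer = setTimeout(function () { self.drain() }, this.interval)
}

// 15 dummy requests against a 10r/10s limit take roughly 14-15 seconds,
// which lines up with the measurement below.
var spread = new SpreadLimiter(10, 10000)
for (var i = 1; i <= 15; ++i) {
  spread.schedule(function () { console.log('request at', Date.now()) })
}

Running the same 45-request test from the Burst section, now with spread enabled: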

var num = 45 // # of requests

function count(err, data) {
  if (data) --num
  if (err) console.error(err)
  if (num == 0) console.timeEnd('api')
}

console.time('api')
for (var i = 0; i < 15; ++i) {
  k.Champion.list('na', count)
  k.Champion.list('kr', count)
  k.Champion.list('euw', count)
}

This should output something like api: 15779.552ms, versus 11820.972ms in the Burst example. The final 5 requests per region were spread over an extra 3-4 seconds instead of firing as a burst.

Note: if you had sent 20 requests per region instead of 15, you would naturally land at around api: 20000ms, since the spread works out to roughly 1 request per second per region.

Running the 300-request example from the Burst section with spread enabled should output something like api: 109209.904ms. That's an extra 9 seconds or so over the expected ~100 seconds (100 requests per region at roughly 1 request per second), but I'm pretty sure this is because of code execution time and faulty math. Nonetheless, the requests are still spread out.