diff --git a/.circleci/config.yml b/.circleci/config.yml index 5d2e5941..47eac8fd 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -23,7 +23,7 @@ jobs: name: Check Code Format command: | set -e - pip install "black==19.10b0" "isort>=4.3.15" + pip install "black>=20.8b1" "isort>=5.4.2" make format-check - run: name: Run Tests @@ -33,6 +33,7 @@ jobs: set -e make test no_output_timeout: 30m + - store_test_results: path: /home/circleci/project/tests/test-reports - store_artifacts:
diff --git a/dockerfiles/Dockerfile b/dockerfiles/Dockerfile index f537098a..04d98f59 100644 --- a/dockerfiles/Dockerfile +++ b/dockerfiles/Dockerfile @@ -29,8 +29,7 @@ RUN apk --no-cache upgrade && \ chmod 0700 /root/.ssh && \ passwd -u root && \ # install dependencies of conductor2 used by perf - pip2 install filelock twisted requests queuelib psutil crochet msgpack-python unidecode attrdict service_identity && \ - pip2 install git+https://github.com/esnme/ultrajson.git@v1.35 + pip2 install filelock twisted requests queuelib ujson psutil crochet msgpack-python unidecode attrdict service_identity COPY dockerfiles/sshd_config /etc/ssh/sshd_config COPY dockerfiles/entrypoint.sh /sbin/entrypoint.sh
diff --git a/docs/BASICS.md b/docs/BASICS.md index b0834083..0fa79364 100644 --- a/docs/BASICS.md +++ b/docs/BASICS.md @@ -1,42 +1,45 @@ # Welcome Welcome to the basics of Eventgen.
-This should hopefully get you through setting up a working eventgen instance. For a complete reference of all of the available configuration options, please check out the [eventgen.conf.spec](REFERENCE.md#eventgenconfspec). With that, feel free to dig right in, and please post to the Issues page if you have any questions.
+This should hopefully get you through setting up a working eventgen instance. For a complete reference of all available configuration options, please check out the [eventgen.conf.spec](REFERENCE.md#eventgenconfspec). In the event you hit an issue, please post to the Issues page of the eventgen GitHub repository (github.com/splunk/eventgen).
## Replay Example
-The first example we'll show you should likely cover 90% of the use cases you can imagine. Eventgen can take an export from another Splunk instance, or just a plain text file, and replay those events while replacing the time stamps. Eventgen will pause the amount of time between each event just like it happened in the original, so the events will appear to be coming out in real time. When Eventgen reaches the end of the file, it will automatically start over from the beginning.
+Replay mode is likely to cover 90% of the use cases you can imagine. Eventgen can take an export from another Splunk instance, or just a plain text file, and replay those events while replacing the time stamps. Eventgen will pause the amount of time between each event just as it happened in the original, so the events will appear to be coming out in real time. When Eventgen reaches the end of the file, it can be configured to start over, stop, or rest for an interval and then begin again. By default, replay mode rests for the default interval (60s) and then automatically starts over from the beginning.
### Making a Splunk Export
-To build a seed for your new Eventgen, I recommend taking an export from an existing Splunk instance. You could also take a regular log file and use it for replay (in which case, you can omit sampletype=csv). There are a few considerations.
+To build a seed for your new Eventgen, start by taking an export from an existing Splunk instance.
Replay also can take a regular log file (in which case, you can omit sampletype=csv). There are a few considerations.
* First, Eventgen assumes its sample files are in chronological order.
-* Second, it only uses `index`, `host`, `source`, `sourcetype` and `_raw` fields. To accommodate that, whatever search you write, we recommend appending `| reverse | fields index, host, source, sourcetype, _raw` to your Splunk search and then doing an export to CSV format.
+* Second, csv only uses the `index`, `host`, `source`, `sourcetype` and `_raw` fields. When using Splunk search to build your replay, please append `| reverse | fields index, host, source, sourcetype, _raw` to your Splunk search and then export to CSV format.
* Third, make sure you find all the different time formats inside the log file and set up tokens to replace for them, so limiting your initial search to a few sourcetypes is probably advisable.
+* Fourth, if not using a csv, `token.0.` should always be used to find and replace the `replaytimestamp`. Eventgen needs to be told which field/regex to use to find the difference in time between events.
+
+Please note, `replaytimestamp` replaces each event's time while preserving the time differences between the original events, whereas `timestamp` will always replace the time with "now".
### Running the example
-You can easily run these examples by hand. In fact, for testing purposes, I almost always change `outputMode = stdout` to visually examine the data. Run the command below from directory `$EVENTGEN_HOME/splunk_eventgen`.
+You can easily run these examples by hand. For testing purposes, change `outputMode = stdout` or `outputMode = modinput` to visually examine the data. Run the command below from directory `$EVENTGEN_HOME/splunk_eventgen`.
    python -m splunk_eventgen generate README/eventgen.conf.tutorial1
-You should now see events showing up on your terminal window. You can see Eventgen will sleep between events as it sees gaps in the events in the source log.
+You should now see events showing up on your terminal window. Eventgen will sleep between events as it sees gaps in the events in the source log.
### Wrapping up the first example
-This will cover most, if not all, of most people's use cases. Find a real world example of what you want to generate events off, extract it from Splunk or a log file, and toss it into Eventgen. Assuming that meets all your needs, you might want to skip to the [Deployment](#deployment) section.
+Find a real world example of what you want to generate events off, extract it from Splunk or a log file, and toss it into Eventgen. Assuming that meets all your needs, you might want to skip to the [Deployment](#deployment) section.
## Basic Sample
-Next, lets build a basic noise generator from a log file. This will use sample mode, which take a file and replay all or a subset of that file every X seconds, defined by the interval. Sample mode is the original way eventgen ran, and it's still very useful for generating random data where you want to engineer the data generated from the ground up. Run the command below from directory `$EVENTGEN_HOME/splunk_eventgen`:
+Next, let's build a basic noise generator from a log file. This will use sample mode, which takes a file and either dumps the entire file or randomly selects a subset of that file every X seconds, as defined by the count and interval. It's important to remember that the default interval is 60s; even if you do not specify an interval, one will be added to your stanza.
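As a rough sketch, a minimal sample-mode stanza pulling these defaults together might look like the following (the stanza name and values here are illustrative, not taken from the tutorials):
```
# hypothetical stanza for illustration only
[sample.mylog]
mode = sample
# interval defaults to 60s even when omitted
interval = 60
count = 10
outputMode = stdout
```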
`Count` is used to specify how many events should leak out per `interval`. Sample mode is the original way eventgen ran, and it's still very useful for generating random data where you want to engineer the data generated from the ground up. Run the command below from directory `$EVENTGEN_HOME/splunk_eventgen`:
    python -m splunk_eventgen generate README/eventgen.conf.tutorial2
### Grabbing and rating events
-We have a file in the samples directory called `sample.tutorial2` that we'll use as the seed for our event generator. It contains some random noise pulled from Router and Switch logs. It will provide a good basis of showing how we can very quickly take a customer's log file and randomly sample it and make it show up in real time. We won't get too sophisticated with substitutions in this example, just a timestamp, and some more varied interfaces to make it look interesting.
+In the samples directory there is a file called `sample.tutorial2`. It contains some random noise pulled from Router and Switch logs. The sample will select 20 events from the file every 15s, and then allow that output to change slightly based on the time of day.
-When we're defining a new config file, we need to decide which defaults we're going to override. By default for example, we'll rate events by time of day and day of week. Do we want to override that? There's a variety of defaults we should consider. They're listed in the [eventgen.conf.spec](https://github.com/splunk/eventgen/blob/master/README/eventgen.conf.spec) in the README directory for reference.
+When defining a new config file, decide which defaults to override and place them in your `eventgen.conf`. In this example, the time of day and day of week rates are overridden. There's a variety of defaults that can be overwritten. See [eventgen.conf.spec](https://github.com/splunk/eventgen/blob/master/README/eventgen.conf.spec) in the README directory for reference.
-Let's list out the file here and then break down the config directives we've not seen before:
+Below are the configuration directives used in `sample.tutorial2`:
```
[sample.tutorial2]
@@ -57,14 +60,14 @@ token.0.token = \w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %b %d %H:%M:%S
```
-
+Eventgen has 3 major sections: rating, generating, and outputting. The first block lets the `generator` know how many events to create, and how. First:
```
interval = 15
earliest = -15s
latest = now
```
-Let's us decide how often we want to generate events and how we want to generate time stamps for these events. In this case, every 15 seconds should be sufficient, but depending on your use case you may want to generate only once an hour, once every minute, or every second. We'll generally want to set earliest to a value that's equal to a splunk relative time specifier opposite of interval. So, if we set it to an hour, or 3600, we'll want earliest to be -3600s or -1h. For this example, lets generate every 15 seconds.
+In the first three lines, the generator is told to run every 15s and to place the earliest event 15s in the past. The last event will land exactly when the generator started (otherwise known as `now`), effectively creating a 15s backfill.
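As a sketch of how `interval` and `earliest` should mirror each other at other rates (the values here are illustrative, not part of the tutorial), an hourly generator would look like:
```
# hypothetical hourly variant for illustration
interval = 3600
earliest = -3600s
latest = now
```
Returning to `sample.tutorial2`, the next block tells the generator how much data to create: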
```
count = 20
hourOfDayRate = { "0": 0.8, "1": 1.0, "2": 0.9, "3": 0.7, "4": 0.5, "5": 0.4, "6": 0.4, "7": 0.4, "8": 0.4, "9": 0.4, "10": 0.4, "11": 0.4, "12": 0.4, "13": 0.4, "14": 0.4, "15": 0.4, "16": 0.4, "17": 0.4, "18": 0.4, "19": 0.4, "20": 0.4, "21": 0.4, "22": 0.5, "23": 0.6 }
@@ -72,32 +75,30 @@ dayOfWeekRate = { "0": 0.7, "1": 0.7, "2": 0.7, "3": 0.5, "4": 0.5, "5": 1.0, "6
randomizeCount = 0.2
randomizeEvents = true
```
-Eventgen by default will rate events by the time of day and the day of the week and introduce some randomness every interval. Also by default, we'll only grab the first X events from the log file every time. For this example, we're looking at router and switch events, which actually is the opposite of the normal business flow. We expect to see more events overnight for a few hours during maintenance windows and calm down during the day, so we'll need to override the default rating which looks like a standard business cycle.
-
-`hourOfDayRate` is a JSON formatted hash, with a string identifier for the current hour and a float representing the multiplier we want to use for that hour. In general, I've always configured the rate to be between 0 and 1, but nothing limits you from putting it at any valid floating point value. `dayOfWeekRate` is similar, but the number is the day of the week, starting with Sunday. In this example, Saturday and Sunday early mornings should have the greatest number of events, with fewer events evenly distributed during the week. `randomizeCount` says to introduce 20% randomness, which means plus or minus 10% of the rated total, to every rated count just to make sure we don't have a flat rate of events. `randomizeEvents` we discussed previously, it makes sure we don't grab the same lines from the file every time.
+The next 5 lines in the first section tell the generator how much data to generate: a base count of 20, which is then multiplied by the ratios for `hourOfDayRate`, `dayOfWeekRate`, and `randomizeCount`. `hourOfDayRate` is a JSON formatted hash, with a string identifier for the current hour and a float representing the multiplier we want to use for that hour. These ratios can be any valid floating point value. `dayOfWeekRate` is similar, but the number is the day of the week, starting with Sunday. `randomizeCount` says to introduce 20% randomness, which means plus or minus 10% of the rated total, to every rated count. `randomizeEvents` makes sure we don't grab the same lines from the file every time.
+The next section configures the `output` plugin.
```
outputMode = file
fileName = /tmp/ciscosample.log
```
-As you saw with the last example, we can output straight to Splunk, but in this case we're going to do a simple output to file. The file outputMode rotates based on size (by default 10 megabytes) and keeps the most recent 5 files around.
+The output plugin is set to output to a file. The file outputMode rotates based on size (by default 10 megabytes) and keeps the most recent 5 files around.
+
+The last section deals with data manipulation.
```
## Replace timestamp Feb 4 07:52:53
token.0.token = \w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %b %d %H:%M:%S
```
-As we've seen before, here's a simple token substitution for the timestamp. This will make the events appear to be coming in sometime during the last 15 seconds, based on earliest and latest configs above.
-
-Let's look in detail at this configuration format.
`token` is the configuration statement, `0` is the token number (we'll want a different number for every token we define, although they can be non-contiguous). The third part defines the three subitems of token configuration. The first, `token`, defines a regular expression we're going to look for in the events as they stream through Eventgen. The second, `replacementType`, defines what type of replacement we're going to need. This is a timestamp, but we also offer a variety of other token replacement types such as random for randomly generated values, file for grabbing lines out of files, static for replacing with static strings, etc. We'll cover those in detail later. The third subitem, `replacement`, is specific for the `replacementType`, and in this case defines a strptime format we're going to use to output the time using strftime. For a reference on how to configure strptime, check python's documentation on strptime format strings.
-
-This should now replay random events from the file we have configured. Go ahead and cd to `$EVENTGEN_HOME/splunk_eventgen` and run `python -m splunk_eventgen generate README/eventgen.conf.tutorial2`. In another shell, `tail -f /tmp/ciscosample.log` and you should see events replaying from the `sample.tutorial2` file! You can reuse this same example to easily replay a customer log file, of course accounting for the different regular expressions and strptime formats you'll need for their timestamps. Remember to customize `interval`, `earliest`, and `count` for the number of events you want the generator to build.
+This token substitution is for the timestamp. Events will have their timestamp overridden based on the earliest and latest configs above.
+`token` is the configuration statement, `0` is the token number (use a different number for every token). The third part of the token name defines the three subitems of token configuration. The first, `token`, defines a regular expression used on the events to match a given field. The second, `replacementType`, defines how to replace the matched field (for a list of different `replacementType`s please see [eventgen.conf.spec](https://github.com/splunk/eventgen/blob/master/README/eventgen.conf.spec)). The third subitem, `replacement`, specifies the configuration for the `replacementType`; in this case, it defines a strptime format to use on output. For a reference on how to configure strptime, please see Python's documentation on strptime format strings.
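As an aside, here is a sketch of what an additional token might look like, this one pulling replacement values out of a file rather than generating a timestamp (the token number, regex, and sample path are hypothetical):
```
# hypothetical token for illustration only
token.1.token = dns\d+\.mycompany\.com
token.1.replacementType = file
token.1.replacement = $SPLUNK_HOME/etc/apps/eventgen/samples/hostnames.sample
```
Each matching string in a generated event would then be swapped for a line grabbed from `hostnames.sample`.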
## Second example, building events from scratch
Replaying random events from a file is an easy way to build an eventgen. Sometimes, like in the Eventgen we're building for VMware, the events you're modeling are so complicated that replaying is the simplest approach, without investing a lot of time modeling all the tokens you want to substitute, etc. Also, sometimes so many tokens need to move together that it's easiest just to replay the file with new timestamps. However, if we're building a new demo from scratch, a lot of times we want to generate events from a basic template with values we're providing from files. Let's look at an example:
```
-# Note, these samples assume you're installed as an app or a symbolic link in
+# Note, these samples assume you're installed as an app or a symbolic link in
# $SPLUNK_HOME/etc/apps/eventgen. If not, please change the paths below.
[sample.tutorial3]
@@ -208,7 +209,7 @@ The first challenge with modeling transactions is that they often contain multip
[sample.mobilemusic.csv]
sampletype = csv
-
+
If you look at sample.mobilemusic.csv, you'll see the CSV file has fields for index, host, source and sourcetype. Just as we can specify those directives with `outputmode = splunkstream`, in `sampletype = csv` we'll pull those values directly from the file. This allows us to model a transaction with different \_raw events with individual values per event for index, host, source and sourcetype, but define tokens which will work across them.
### The second challenge and result: bundlelines
@@ -228,7 +229,7 @@ Because of what we went through earlier, inside Eventgen, this three line CSV fi
token.0.token = ((\w+\s+\d+\s+\d{2}:\d{2}:\d{2}:\d{3})|(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}:\d{3}))
token.0.replacementType = replaytimestamp
token.0.replacement = ["%b %d %H:%M:%S:%f", "%Y-%m-%d %H:%M:%S:%f"]
-
+
The first line shows a really complicated RegEx. This is essentially using RegEx to match both timestamp formats contained in the file. If you look at the tutorial, you'll see both of these formats as they exist in other sample types, and in this case we bundled two capture groups together with a `|` to have our RegEx parser match both.
Secondly, in the replacement clause, we have a JSON formatted list. This allows us to pass a user-determined number of strptime formats. Replaytimestamp will use these formats to parse the timestamps it finds with the RegEx. It will then figure out the time differences between the events in the original sample, introduce some randomness between them, and then output them back in the strptime format it matched.
diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md index b91c7be4..e01d12ce 100644 --- a/docs/CHANGELOG.md +++ b/docs/CHANGELOG.md @@ -1,3 +1,7 @@
+**7.2.0**:
+
+- Check the release note and download the package/source from [Here](https://github.com/splunk/eventgen/releases/tag/7.2.0)
+
**7.1.1**:
- Check the release note and download the package/source from [Here](https://github.com/splunk/eventgen/releases/tag/7.1.1)
diff --git a/docs/REFERENCE.md b/docs/REFERENCE.md index 2009fb74..5d5dabf6 100644 --- a/docs/REFERENCE.md +++ b/docs/REFERENCE.md @@ -136,7 +136,7 @@ scsEndPoint =
 * Should be a full url to the scs endpoint
scsAccessToken =
- * Should be a scs access token. Do not include "Bearer".
+ * Should be a scs access token. Do not include "Bearer".
scsClientId =
 * Optional
@@ -329,7 +329,7 @@ sampletype = raw | csv
 OVERRIDES FOR DEFAULT FIELDS WILL ONLY WORK WITH outputMode SPLUNKSTREAM.
interval =
- * Only valid in mode = sample
+ * Delay between executions. In replay mode, this delay occurs after the replay has finished.
 * How often to generate sample (in seconds).
 * 0 means disabled.
 * Defaults to 60 seconds.
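As a sketch of the replay-mode behavior described above (the stanza name and sample file are illustrative), the following would rest 30 seconds after each complete replay before starting the file over:
```
# hypothetical replay stanza for illustration only
[sample.myexport.csv]
mode = replay
sampletype = csv
interval = 30
```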
diff --git a/poetry.lock b/poetry.lock index 2c32b56a..9e45d8ec 100644 --- a/poetry.lock +++ b/poetry.lock @@ -12,7 +12,7 @@ description = "A small Python module for determining appropriate platform-specif name = "appdirs" optional = false python-versions = "*" -version = "1.4.3" +version = "1.4.4" [[package]] category = "dev" @@ -28,13 +28,12 @@ description = "Classes Without Boilerplate" name = "attrs" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "19.3.0" +version = "20.1.0" [package.extras] -azure-pipelines = ["coverage", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "pytest-azurepipelines"] -dev = ["coverage", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "sphinx", "pre-commit"] -docs = ["sphinx", "zope.interface"] -tests = ["coverage", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface"] +dev = ["coverage (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "sphinx", "sphinx-rtd-theme", "pre-commit"] +docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"] +tests = ["coverage (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface"] [[package]] category = "dev" @@ -42,18 +41,20 @@ description = "The uncompromising code formatter." name = "black" optional = false python-versions = ">=3.6" -version = "19.10b0" +version = "20.8b1" [package.dependencies] appdirs = "*" -attrs = ">=18.1.0" -click = ">=6.5" +click = ">=7.1.2" +mypy-extensions = ">=0.4.3" pathspec = ">=0.6,<1" -regex = "*" -toml = ">=0.9.4" +regex = ">=2020.1.8" +toml = ">=0.10.1" typed-ast = ">=1.4.0" +typing-extensions = ">=3.7.4" [package.extras] +colorama = ["colorama (>=0.4.3)"] d = ["aiohttp (>=3.3.2)", "aiohttp-cors"] [[package]] @@ -62,10 +63,10 @@ description = "The AWS SDK for Python" name = "boto3" optional = false python-versions = "*" -version = "1.13.3" +version = "1.14.53" [package.dependencies] -botocore = ">=1.16.3,<1.17.0" +botocore = ">=1.17.53,<1.18.0" jmespath = ">=0.7.1,<1.0.0" s3transfer = ">=0.3.0,<0.4.0" @@ -75,7 +76,7 @@ description = "Low-level, data-driven core of boto 3." name = "botocore" optional = false python-versions = "*" -version = "1.16.3" +version = "1.17.53" [package.dependencies] docutils = ">=0.10,<0.16" @@ -92,7 +93,7 @@ description = "Python package for providing Mozilla's CA Bundle." name = "certifi" optional = false python-versions = "*" -version = "2020.4.5.1" +version = "2020.6.20" [[package]] category = "main" @@ -100,7 +101,7 @@ description = "Foreign Function Interface for Python calling C code." 
name = "cffi" optional = false python-versions = "*" -version = "1.14.0" +version = "1.14.2" [package.dependencies] pycparser = "*" @@ -144,17 +145,17 @@ description = "cryptography is a package which provides cryptographic recipes an name = "cryptography" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*" -version = "2.9.2" +version = "3.1" [package.dependencies] cffi = ">=1.8,<1.11.3 || >1.11.3" six = ">=1.4.1" [package.extras] -docs = ["sphinx (>=1.6.5,<1.8.0 || >1.8.0)", "sphinx-rtd-theme"] +docs = ["sphinx (>=1.6.5,<1.8.0 || >1.8.0,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1)", "sphinx-rtd-theme"] docstest = ["doc8", "pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"] -idna = ["idna (>=2.1)"] -pep8test = ["flake8", "flake8-import-order", "pep8-naming"] +pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"] +ssh = ["bcrypt (>=3.1.5)"] test = ["pytest (>=3.6.0,<3.9.0 || >3.9.0,<3.9.1 || >3.9.1,<3.9.2 || >3.9.2)", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,<3.79.2 || >3.79.2)"] [[package]] @@ -198,14 +199,6 @@ optional = false python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" version = "0.15.2" -[[package]] -category = "dev" -description = "Discover and load entry points from installed packages." -name = "entrypoints" -optional = false -python-versions = ">=2.7" -version = "0.3" - [[package]] category = "dev" description = "execnet: rapid multi-Python deployment" @@ -222,17 +215,20 @@ testing = ["pre-commit"] [[package]] category = "dev" -description = "the modular source code checker: pep8, pyflakes and co" +description = "the modular source code checker: pep8 pyflakes and co" name = "flake8" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "3.7.9" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" +version = "3.8.3" [package.dependencies] -entrypoints = ">=0.3.0,<0.4.0" mccabe = ">=0.6.0,<0.7.0" -pycodestyle = ">=2.5.0,<2.6.0" -pyflakes = ">=2.1.0,<2.2.0" +pycodestyle = ">=2.6.0a1,<2.7.0" +pyflakes = ">=2.2.0,<2.3.0" + +[package.dependencies.importlib-metadata] +python = "<3.8" +version = "*" [[package]] category = "main" @@ -259,7 +255,7 @@ description = "A comprehensive HTTP client library." name = "httplib2" optional = false python-versions = "*" -version = "0.17.3" +version = "0.17.4" [[package]] category = "main" @@ -267,36 +263,35 @@ description = "Internationalized Domain Names in Applications (IDNA)" name = "idna" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "2.9" +version = "2.10" [[package]] -category = "dev" +category = "main" description = "Read metadata from Python packages" name = "importlib-metadata" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" -version = "1.6.0" +version = "1.7.0" [package.dependencies] zipp = ">=0.5" [package.extras] docs = ["sphinx", "rst.linker"] -testing = ["packaging", "importlib-resources"] +testing = ["packaging", "pep517", "importlib-resources (>=1.3)"] [[package]] category = "dev" description = "A Python utility / library to sort Python imports." 
name = "isort" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "4.3.21" +python-versions = ">=3.6,<4.0" +version = "5.4.2" [package.extras] -pipfile = ["pipreqs", "requirementslib"] -pyproject = ["toml"] -requirements = ["pipreqs", "pip-api"] -xdg_home = ["appdirs (>=1.4.0)"] +colors = ["colorama (>=0.4.3,<0.5.0)"] +pipfile_deprecated_finder = ["pipreqs", "requirementslib", "tomlkit (>=0.5.3)"] +requirements_deprecated_finder = ["pipreqs", "pip-api"] [[package]] category = "main" @@ -325,8 +320,8 @@ category = "main" description = "JSON Matching Expressions" name = "jmespath" optional = false -python-versions = "*" -version = "0.9.5" +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" +version = "0.10.0" [[package]] category = "main" @@ -334,7 +329,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li name = "lxml" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, != 3.4.*" -version = "4.5.0" +version = "4.5.2" [package.extras] cssselect = ["cssselect (>=0.7)"] @@ -378,7 +373,15 @@ marker = "python_version > \"2.7\"" name = "more-itertools" optional = false python-versions = ">=3.5" -version = "8.2.0" +version = "8.5.0" + +[[package]] +category = "dev" +description = "Experimental type system extensions for programs checked with the mypy typechecker." +name = "mypy-extensions" +optional = false +python-versions = "*" +version = "0.4.3" [[package]] category = "dev" @@ -386,7 +389,7 @@ description = "Core utilities for Python packages" name = "packaging" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "20.3" +version = "20.4" [package.dependencies] pyparsing = ">=2.0.2" @@ -422,7 +425,7 @@ description = "library with cross-python path, ini-parsing, io, code, log facili name = "py" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "1.8.1" +version = "1.9.0" [[package]] category = "dev" @@ -430,7 +433,7 @@ description = "Python style guide checker" name = "pycodestyle" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "2.5.0" +version = "2.6.0" [[package]] category = "main" @@ -446,7 +449,7 @@ description = "passive checker of Python programs" name = "pyflakes" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "2.1.1" +version = "2.2.0" [[package]] category = "main" @@ -515,26 +518,27 @@ category = "dev" description = "Pytest plugin for measuring coverage." 
name = "pytest-cov" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "2.8.1" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +version = "2.10.1" [package.dependencies] coverage = ">=4.4" -pytest = ">=3.6" +pytest = ">=4.6" [package.extras] -testing = ["fields", "hunter", "process-tests (2.0.2)", "six", "virtualenv"] +testing = ["fields", "hunter", "process-tests (2.0.2)", "six", "pytest-xdist", "virtualenv"] [[package]] category = "dev" description = "run tests in isolated forked subprocesses" name = "pytest-forked" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -version = "1.1.3" +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +version = "1.3.0" [package.dependencies] -pytest = ">=3.1.0" +py = "*" +pytest = ">=3.10" [[package]] category = "dev" @@ -542,7 +546,7 @@ description = "Thin-wrapper around the mock package for easier use with pytest" name = "pytest-mock" optional = false python-versions = ">=3.5" -version = "3.1.0" +version = "3.2.0" [package.dependencies] pytest = ">=2.7" @@ -556,7 +560,7 @@ description = "pytest xdist plugin for distributed testing and loop-on-failing m name = "pytest-xdist" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" -version = "1.32.0" +version = "1.34.0" [package.dependencies] execnet = ">=1.1" @@ -585,7 +589,7 @@ marker = "sys_platform == \"win32\" and python_version >= \"3.6\"" name = "pywin32" optional = false python-versions = "*" -version = "227" +version = "228" [[package]] category = "main" @@ -612,7 +616,7 @@ description = "Alternative regular expression module, to replace re." name = "regex" optional = false python-versions = "*" -version = "2020.4.4" +version = "2020.7.14" [[package]] category = "main" @@ -620,7 +624,7 @@ description = "Python HTTP for Humans." 
name = "requests" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" -version = "2.23.0" +version = "2.24.0" [package.dependencies] certifi = ">=2017.4.17" @@ -660,7 +664,7 @@ description = "Python 2 and 3 compatibility utilities" name = "six" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" -version = "1.14.0" +version = "1.15.0" [[package]] category = "dev" @@ -668,7 +672,7 @@ description = "Python Library for Tom's Obvious, Minimal Language" name = "toml" optional = false python-versions = "*" -version = "0.10.0" +version = "0.10.1" [[package]] category = "dev" @@ -678,6 +682,14 @@ optional = false python-versions = "*" version = "1.4.1" +[[package]] +category = "dev" +description = "Backported and Experimental Type Hints for Python 3.5+" +name = "typing-extensions" +optional = false +python-versions = "*" +version = "3.7.4.3" + [[package]] category = "main" description = "Ultra fast JSON encoder and decoder for Python" @@ -698,21 +710,13 @@ version = "1.24.2" secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"] socks = ["PySocks (>=1.5.6,<1.5.7 || >1.5.7,<2.0)"] -[[package]] -category = "main" -description = "UUID object and generation functions (Python 2.3 or higher)" -name = "uuid" -optional = false -python-versions = "*" -version = "1.30" - [[package]] category = "dev" -description = "Measures number of Terminal column cells of wide-character codes" +description = "Measures the displayed width of unicode strings in a terminal" name = "wcwidth" optional = false python-versions = "*" -version = "0.1.9" +version = "0.2.5" [[package]] category = "main" @@ -738,7 +742,7 @@ dev = ["pytest", "pytest-timeout", "coverage", "tox", "sphinx", "pallets-sphinx- watchdog = ["watchdog"] [[package]] -category = "dev" +category = "main" description = "Backport of pathlib-compatible object wrapper for zip files" name = "zipp" optional = false @@ -750,8 +754,9 @@ docs = ["sphinx", "jaraco.packaging (>=3.2)", "rst.linker (>=1.9)"] testing = ["jaraco.itertools", "func-timeout"] [metadata] -content-hash = "bf73dc1864dd064476086c4e0cd96891db1b431afdccd147b61b1c2e30ecb404" -python-versions = "^3.6" +content-hash = "500f9636a2d4932c0524595cb6cb44b4c715c29f03dab497219da2d4cda3fa85" +lock-version = "1.0" +python-versions = "^3.7" [metadata.files] apipkg = [ @@ -759,62 +764,62 @@ apipkg = [ {file = "apipkg-1.5.tar.gz", hash = "sha256:37228cda29411948b422fae072f57e31d3396d2ee1c9783775980ee9c9990af6"}, ] appdirs = [ - {file = "appdirs-1.4.3-py2.py3-none-any.whl", hash = "sha256:d8b24664561d0d34ddfaec54636d502d7cea6e29c3eaf68f3df6180863e2166e"}, - {file = "appdirs-1.4.3.tar.gz", hash = "sha256:9e5896d1372858f8dd3344faf4e5014d21849c756c8d5701f78f8a103b372d92"}, + {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"}, + {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"}, ] atomicwrites = [ {file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"}, {file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"}, ] attrs = [ - {file = "attrs-19.3.0-py2.py3-none-any.whl", hash = "sha256:08a96c641c3a74e44eb59afb61a24f2cb9f4d7188748e76ba4bb5edfa3cb7d1c"}, - {file = "attrs-19.3.0.tar.gz", hash = 
"sha256:f7b7ce16570fe9965acd6d30101a28f62fb4a7f9e926b3bbc9b61f8b04247e72"}, + {file = "attrs-20.1.0-py2.py3-none-any.whl", hash = "sha256:2867b7b9f8326499ab5b0e2d12801fa5c98842d2cbd22b35112ae04bf85b4dff"}, + {file = "attrs-20.1.0.tar.gz", hash = "sha256:0ef97238856430dcf9228e07f316aefc17e8939fc8507e18c6501b761ef1a42a"}, ] black = [ - {file = "black-19.10b0-py36-none-any.whl", hash = "sha256:1b30e59be925fafc1ee4565e5e08abef6b03fe455102883820fe5ee2e4734e0b"}, - {file = "black-19.10b0.tar.gz", hash = "sha256:c2edb73a08e9e0e6f65a0e6af18b059b8b1cdd5bef997d7a0b181df93dc81539"}, + {file = "black-20.8b1-py3-none-any.whl", hash = "sha256:70b62ef1527c950db59062cda342ea224d772abdf6adc58b86a45421bab20a6b"}, + {file = "black-20.8b1.tar.gz", hash = "sha256:1c02557aa099101b9d21496f8a914e9ed2222ef70336404eeeac8edba836fbea"}, ] boto3 = [ - {file = "boto3-1.13.3-py2.py3-none-any.whl", hash = "sha256:dc4d17a9b0bd6fb03b2650a0f7c7b6a458583fe7bebbcd6bbefd299d7169fb5b"}, - {file = "boto3-1.13.3.tar.gz", hash = "sha256:989ede38b9f69743e2536ccf371941354e77103b47b37d5ba90f77718368a248"}, + {file = "boto3-1.14.53-py2.py3-none-any.whl", hash = "sha256:defd1aa0bbc8eb9c3b6aefb5060972447f73a4f1c3ba995ba28d6f0b6b9059dd"}, + {file = "boto3-1.14.53.tar.gz", hash = "sha256:b240ac281de363e25a8e1a4c862559d6a056d98dcb9f487fc94d73c6f6599dfc"}, ] botocore = [ - {file = "botocore-1.16.3-py2.py3-none-any.whl", hash = "sha256:e2c384f378d96c09079d9fa44be4f6a849c4d5295de68c9c8afc8783f7a5a2a2"}, - {file = "botocore-1.16.3.tar.gz", hash = "sha256:d291035e643c353029df8985cbc0bbdcdf9117fff81c715dd688aadd51816f41"}, + {file = "botocore-1.17.53-py2.py3-none-any.whl", hash = "sha256:7e0272ceeb7747ed259a392e8d7b624cfd037085a8c59ef2b9f8916e7c556267"}, + {file = "botocore-1.17.53.tar.gz", hash = "sha256:d37a83ac23257c85c48b74ab81173980234f8fc078e7a9d312d0ee7d057f90e6"}, ] certifi = [ - {file = "certifi-2020.4.5.1-py2.py3-none-any.whl", hash = "sha256:1d987a998c75633c40847cc966fcf5904906c920a7f17ef374f5aa4282abd304"}, - {file = "certifi-2020.4.5.1.tar.gz", hash = "sha256:51fcb31174be6e6664c5f69e3e1691a2d72a1a12e90f872cbdb1567eb47b6519"}, + {file = "certifi-2020.6.20-py2.py3-none-any.whl", hash = "sha256:8fc0819f1f30ba15bdb34cceffb9ef04d99f420f68eb75d901e9560b8749fc41"}, + {file = "certifi-2020.6.20.tar.gz", hash = "sha256:5930595817496dd21bb8dc35dad090f1c2cd0adfaf21204bf6732ca5d8ee34d3"}, ] cffi = [ - {file = "cffi-1.14.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1cae98a7054b5c9391eb3249b86e0e99ab1e02bb0cc0575da191aedadbdf4384"}, - {file = "cffi-1.14.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:cf16e3cf6c0a5fdd9bc10c21687e19d29ad1fe863372b5543deaec1039581a30"}, - {file = "cffi-1.14.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:f2b0fa0c01d8a0c7483afd9f31d7ecf2d71760ca24499c8697aeb5ca37dc090c"}, - {file = "cffi-1.14.0-cp27-cp27m-win32.whl", hash = "sha256:99f748a7e71ff382613b4e1acc0ac83bf7ad167fb3802e35e90d9763daba4d78"}, - {file = "cffi-1.14.0-cp27-cp27m-win_amd64.whl", hash = "sha256:c420917b188a5582a56d8b93bdd8e0f6eca08c84ff623a4c16e809152cd35793"}, - {file = "cffi-1.14.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:399aed636c7d3749bbed55bc907c3288cb43c65c4389964ad5ff849b6370603e"}, - {file = "cffi-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:cab50b8c2250b46fe738c77dbd25ce017d5e6fb35d3407606e7a4180656a5a6a"}, - {file = "cffi-1.14.0-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:001bf3242a1bb04d985d63e138230802c6c8d4db3668fb545fb5005ddf5bb5ff"}, - {file = "cffi-1.14.0-cp35-cp35m-manylinux1_i686.whl", 
hash = "sha256:e56c744aa6ff427a607763346e4170629caf7e48ead6921745986db3692f987f"}, - {file = "cffi-1.14.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:b8c78301cefcf5fd914aad35d3c04c2b21ce8629b5e4f4e45ae6812e461910fa"}, - {file = "cffi-1.14.0-cp35-cp35m-win32.whl", hash = "sha256:8c0ffc886aea5df6a1762d0019e9cb05f825d0eec1f520c51be9d198701daee5"}, - {file = "cffi-1.14.0-cp35-cp35m-win_amd64.whl", hash = "sha256:8a6c688fefb4e1cd56feb6c511984a6c4f7ec7d2a1ff31a10254f3c817054ae4"}, - {file = "cffi-1.14.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:95cd16d3dee553f882540c1ffe331d085c9e629499ceadfbda4d4fde635f4b7d"}, - {file = "cffi-1.14.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:66e41db66b47d0d8672d8ed2708ba91b2f2524ece3dee48b5dfb36be8c2f21dc"}, - {file = "cffi-1.14.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:028a579fc9aed3af38f4892bdcc7390508adabc30c6af4a6e4f611b0c680e6ac"}, - {file = "cffi-1.14.0-cp36-cp36m-win32.whl", hash = "sha256:cef128cb4d5e0b3493f058f10ce32365972c554572ff821e175dbc6f8ff6924f"}, - {file = "cffi-1.14.0-cp36-cp36m-win_amd64.whl", hash = "sha256:337d448e5a725bba2d8293c48d9353fc68d0e9e4088d62a9571def317797522b"}, - {file = "cffi-1.14.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e577934fc5f8779c554639376beeaa5657d54349096ef24abe8c74c5d9c117c3"}, - {file = "cffi-1.14.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:62ae9af2d069ea2698bf536dcfe1e4eed9090211dbaafeeedf5cb6c41b352f66"}, - {file = "cffi-1.14.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:14491a910663bf9f13ddf2bc8f60562d6bc5315c1f09c704937ef17293fb85b0"}, - {file = "cffi-1.14.0-cp37-cp37m-win32.whl", hash = "sha256:c43866529f2f06fe0edc6246eb4faa34f03fe88b64a0a9a942561c8e22f4b71f"}, - {file = "cffi-1.14.0-cp37-cp37m-win_amd64.whl", hash = "sha256:2089ed025da3919d2e75a4d963d008330c96751127dd6f73c8dc0c65041b4c26"}, - {file = "cffi-1.14.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3b911c2dbd4f423b4c4fcca138cadde747abdb20d196c4a48708b8a2d32b16dd"}, - {file = "cffi-1.14.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:7e63cbcf2429a8dbfe48dcc2322d5f2220b77b2e17b7ba023d6166d84655da55"}, - {file = "cffi-1.14.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:3d311bcc4a41408cf5854f06ef2c5cab88f9fded37a3b95936c9879c1640d4c2"}, - {file = "cffi-1.14.0-cp38-cp38-win32.whl", hash = "sha256:675686925a9fb403edba0114db74e741d8181683dcf216be697d208857e04ca8"}, - {file = "cffi-1.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:00789914be39dffba161cfc5be31b55775de5ba2235fe49aa28c148236c4e06b"}, - {file = "cffi-1.14.0.tar.gz", hash = "sha256:2d384f4a127a15ba701207f7639d94106693b6cd64173d6c8988e2c25f3ac2b6"}, + {file = "cffi-1.14.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:da9d3c506f43e220336433dffe643fbfa40096d408cb9b7f2477892f369d5f82"}, + {file = "cffi-1.14.2-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:23e44937d7695c27c66a54d793dd4b45889a81b35c0751ba91040fe825ec59c4"}, + {file = "cffi-1.14.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:0da50dcbccd7cb7e6c741ab7912b2eff48e85af217d72b57f80ebc616257125e"}, + {file = "cffi-1.14.2-cp27-cp27m-win32.whl", hash = "sha256:76ada88d62eb24de7051c5157a1a78fd853cca9b91c0713c2e973e4196271d0c"}, + {file = "cffi-1.14.2-cp27-cp27m-win_amd64.whl", hash = "sha256:15a5f59a4808f82d8ec7364cbace851df591c2d43bc76bcbe5c4543a7ddd1bf1"}, + {file = "cffi-1.14.2-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:e4082d832e36e7f9b2278bc774886ca8207346b99f278e54c9de4834f17232f7"}, + {file = "cffi-1.14.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = 
"sha256:57214fa5430399dffd54f4be37b56fe22cedb2b98862550d43cc085fb698dc2c"}, + {file = "cffi-1.14.2-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:6843db0343e12e3f52cc58430ad559d850a53684f5b352540ca3f1bc56df0731"}, + {file = "cffi-1.14.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:577791f948d34d569acb2d1add5831731c59d5a0c50a6d9f629ae1cefd9ca4a0"}, + {file = "cffi-1.14.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:8662aabfeab00cea149a3d1c2999b0731e70c6b5bac596d95d13f643e76d3d4e"}, + {file = "cffi-1.14.2-cp35-cp35m-win32.whl", hash = "sha256:837398c2ec00228679513802e3744d1e8e3cb1204aa6ad408b6aff081e99a487"}, + {file = "cffi-1.14.2-cp35-cp35m-win_amd64.whl", hash = "sha256:bf44a9a0141a082e89c90e8d785b212a872db793a0080c20f6ae6e2a0ebf82ad"}, + {file = "cffi-1.14.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:29c4688ace466a365b85a51dcc5e3c853c1d283f293dfcc12f7a77e498f160d2"}, + {file = "cffi-1.14.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:99cc66b33c418cd579c0f03b77b94263c305c389cb0c6972dac420f24b3bf123"}, + {file = "cffi-1.14.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:65867d63f0fd1b500fa343d7798fa64e9e681b594e0a07dc934c13e76ee28fb1"}, + {file = "cffi-1.14.2-cp36-cp36m-win32.whl", hash = "sha256:f5033952def24172e60493b68717792e3aebb387a8d186c43c020d9363ee7281"}, + {file = "cffi-1.14.2-cp36-cp36m-win_amd64.whl", hash = "sha256:7057613efefd36cacabbdbcef010e0a9c20a88fc07eb3e616019ea1692fa5df4"}, + {file = "cffi-1.14.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6539314d84c4d36f28d73adc1b45e9f4ee2a89cdc7e5d2b0a6dbacba31906798"}, + {file = "cffi-1.14.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:672b539db20fef6b03d6f7a14b5825d57c98e4026401fce838849f8de73fe4d4"}, + {file = "cffi-1.14.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:95e9094162fa712f18b4f60896e34b621df99147c2cee216cfa8f022294e8e9f"}, + {file = "cffi-1.14.2-cp37-cp37m-win32.whl", hash = "sha256:b9aa9d8818c2e917fa2c105ad538e222a5bce59777133840b93134022a7ce650"}, + {file = "cffi-1.14.2-cp37-cp37m-win_amd64.whl", hash = "sha256:e4b9b7af398c32e408c00eb4e0d33ced2f9121fd9fb978e6c1b57edd014a7d15"}, + {file = "cffi-1.14.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e613514a82539fc48291d01933951a13ae93b6b444a88782480be32245ed4afa"}, + {file = "cffi-1.14.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:9b219511d8b64d3fa14261963933be34028ea0e57455baf6781fe399c2c3206c"}, + {file = "cffi-1.14.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:c0b48b98d79cf795b0916c57bebbc6d16bb43b9fc9b8c9f57f4cf05881904c75"}, + {file = "cffi-1.14.2-cp38-cp38-win32.whl", hash = "sha256:15419020b0e812b40d96ec9d369b2bc8109cc3295eac6e013d3261343580cc7e"}, + {file = "cffi-1.14.2-cp38-cp38-win_amd64.whl", hash = "sha256:12a453e03124069b6896107ee133ae3ab04c624bb10683e1ed1c1663df17c13c"}, + {file = "cffi-1.14.2.tar.gz", hash = "sha256:ae8f34d50af2c2154035984b8b5fc5d9ed63f32fe615646ab435b05b132ca91b"}, ] chardet = [ {file = "chardet-3.0.4-py2.py3-none-any.whl", hash = "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"}, @@ -863,25 +868,28 @@ coverage = [ {file = "coverage-4.5.4.tar.gz", hash = "sha256:e07d9f1a23e9e93ab5c62902833bf3e4b1f65502927379148b6622686223125c"}, ] cryptography = [ - {file = "cryptography-2.9.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:daf54a4b07d67ad437ff239c8a4080cfd1cc7213df57d33c97de7b4738048d5e"}, - {file = "cryptography-2.9.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:3b3eba865ea2754738616f87292b7f29448aec342a7c720956f8083d252bf28b"}, - {file = 
"cryptography-2.9.2-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:c447cf087cf2dbddc1add6987bbe2f767ed5317adb2d08af940db517dd704365"}, - {file = "cryptography-2.9.2-cp27-cp27m-win32.whl", hash = "sha256:f118a95c7480f5be0df8afeb9a11bd199aa20afab7a96bcf20409b411a3a85f0"}, - {file = "cryptography-2.9.2-cp27-cp27m-win_amd64.whl", hash = "sha256:c4fd17d92e9d55b84707f4fd09992081ba872d1a0c610c109c18e062e06a2e55"}, - {file = "cryptography-2.9.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:d0d5aeaedd29be304848f1c5059074a740fa9f6f26b84c5b63e8b29e73dfc270"}, - {file = "cryptography-2.9.2-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:1e4014639d3d73fbc5ceff206049c5a9a849cefd106a49fa7aaaa25cc0ce35cf"}, - {file = "cryptography-2.9.2-cp35-abi3-macosx_10_9_x86_64.whl", hash = "sha256:96c080ae7118c10fcbe6229ab43eb8b090fccd31a09ef55f83f690d1ef619a1d"}, - {file = "cryptography-2.9.2-cp35-abi3-manylinux1_x86_64.whl", hash = "sha256:e993468c859d084d5579e2ebee101de8f5a27ce8e2159959b6673b418fd8c785"}, - {file = "cryptography-2.9.2-cp35-abi3-manylinux2010_x86_64.whl", hash = "sha256:88c881dd5a147e08d1bdcf2315c04972381d026cdb803325c03fe2b4a8ed858b"}, - {file = "cryptography-2.9.2-cp35-cp35m-win32.whl", hash = "sha256:651448cd2e3a6bc2bb76c3663785133c40d5e1a8c1a9c5429e4354201c6024ae"}, - {file = "cryptography-2.9.2-cp35-cp35m-win_amd64.whl", hash = "sha256:726086c17f94747cedbee6efa77e99ae170caebeb1116353c6cf0ab67ea6829b"}, - {file = "cryptography-2.9.2-cp36-cp36m-win32.whl", hash = "sha256:091d31c42f444c6f519485ed528d8b451d1a0c7bf30e8ca583a0cac44b8a0df6"}, - {file = "cryptography-2.9.2-cp36-cp36m-win_amd64.whl", hash = "sha256:bb1f0281887d89617b4c68e8db9a2c42b9efebf2702a3c5bf70599421a8623e3"}, - {file = "cryptography-2.9.2-cp37-cp37m-win32.whl", hash = "sha256:18452582a3c85b96014b45686af264563e3e5d99d226589f057ace56196ec78b"}, - {file = "cryptography-2.9.2-cp37-cp37m-win_amd64.whl", hash = "sha256:22e91636a51170df0ae4dcbd250d318fd28c9f491c4e50b625a49964b24fe46e"}, - {file = "cryptography-2.9.2-cp38-cp38-win32.whl", hash = "sha256:844a76bc04472e5135b909da6aed84360f522ff5dfa47f93e3dd2a0b84a89fa0"}, - {file = "cryptography-2.9.2-cp38-cp38-win_amd64.whl", hash = "sha256:1dfa985f62b137909496e7fc182dac687206d8d089dd03eaeb28ae16eec8e7d5"}, - {file = "cryptography-2.9.2.tar.gz", hash = "sha256:a0c30272fb4ddda5f5ffc1089d7405b7a71b0b0f51993cb4e5dbb4590b2fc229"}, + {file = "cryptography-3.1-cp27-cp27m-macosx_10_10_x86_64.whl", hash = "sha256:969ae512a250f869c1738ca63be843488ff5cc031987d302c1f59c7dbe1b225f"}, + {file = "cryptography-3.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:b45ab1c6ece7c471f01c56f5d19818ca797c34541f0b2351635a5c9fe09ac2e0"}, + {file = "cryptography-3.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:247df238bc05c7d2e934a761243bfdc67db03f339948b1e2e80c75d41fc7cc36"}, + {file = "cryptography-3.1-cp27-cp27m-win32.whl", hash = "sha256:10c9775a3f31610cf6b694d1fe598f2183441de81cedcf1814451ae53d71b13a"}, + {file = "cryptography-3.1-cp27-cp27m-win_amd64.whl", hash = "sha256:9f734423eb9c2ea85000aa2476e0d7a58e021bc34f0a373ac52a5454cd52f791"}, + {file = "cryptography-3.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e7563eb7bc5c7e75a213281715155248cceba88b11cb4b22957ad45b85903761"}, + {file = "cryptography-3.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:94191501e4b4009642be21dde2a78bd3c2701a81ee57d3d3d02f1d99f8b64a9e"}, + {file = "cryptography-3.1-cp35-abi3-macosx_10_10_x86_64.whl", hash = "sha256:dc3f437ca6353979aace181f1b790f0fc79e446235b14306241633ab7d61b8f8"}, + {file 
= "cryptography-3.1-cp35-abi3-manylinux1_x86_64.whl", hash = "sha256:725875681afe50b41aee7fdd629cedbc4720bab350142b12c55c0a4d17c7416c"}, + {file = "cryptography-3.1-cp35-abi3-manylinux2010_x86_64.whl", hash = "sha256:321761d55fb7cb256b771ee4ed78e69486a7336be9143b90c52be59d7657f50f"}, + {file = "cryptography-3.1-cp35-abi3-manylinux2014_aarch64.whl", hash = "sha256:2a27615c965173c4c88f2961cf18115c08fedfb8bdc121347f26e8458dc6d237"}, + {file = "cryptography-3.1-cp35-cp35m-win32.whl", hash = "sha256:e7dad66a9e5684a40f270bd4aee1906878193ae50a4831922e454a2a457f1716"}, + {file = "cryptography-3.1-cp35-cp35m-win_amd64.whl", hash = "sha256:4005b38cd86fc51c955db40b0f0e52ff65340874495af72efabb1bb8ca881695"}, + {file = "cryptography-3.1-cp36-abi3-win32.whl", hash = "sha256:cc6096c86ec0de26e2263c228fb25ee01c3ff1346d3cfc219d67d49f303585af"}, + {file = "cryptography-3.1-cp36-abi3-win_amd64.whl", hash = "sha256:2e26223ac636ca216e855748e7d435a1bf846809ed12ed898179587d0cf74618"}, + {file = "cryptography-3.1-cp36-cp36m-win32.whl", hash = "sha256:7a63e97355f3cd77c94bd98c59cb85fe0efd76ea7ef904c9b0316b5bbfde6ed1"}, + {file = "cryptography-3.1-cp36-cp36m-win_amd64.whl", hash = "sha256:4b9e96543d0784acebb70991ebc2dbd99aa287f6217546bb993df22dd361d41c"}, + {file = "cryptography-3.1-cp37-cp37m-win32.whl", hash = "sha256:eb80a288e3cfc08f679f95da72d2ef90cb74f6d8a8ba69d2f215c5e110b2ca32"}, + {file = "cryptography-3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:180c9f855a8ea280e72a5d61cf05681b230c2dce804c48e9b2983f491ecc44ed"}, + {file = "cryptography-3.1-cp38-cp38-win32.whl", hash = "sha256:fa7fbcc40e2210aca26c7ac8a39467eae444d90a2c346cbcffd9133a166bcc67"}, + {file = "cryptography-3.1-cp38-cp38-win_amd64.whl", hash = "sha256:548b0818e88792318dc137d8b1ec82a0ab0af96c7f0603a00bb94f896fbf5e10"}, + {file = "cryptography-3.1.tar.gz", hash = "sha256:26409a473cc6278e4c90f782cd5968ebad04d3911ed1c402fc86908c17633e08"}, ] docker = [ {file = "docker-3.7.3-py2.py3-none-any.whl", hash = "sha256:2434b396e616a5ef682fbf80e04839a59e8b81880ece5662c33dff34b8863519"}, @@ -896,37 +904,33 @@ docutils = [ {file = "docutils-0.15.2-py3-none-any.whl", hash = "sha256:6c4f696463b79f1fb8ba0c594b63840ebd41f059e92b31957c46b74a4599b6d0"}, {file = "docutils-0.15.2.tar.gz", hash = "sha256:a2aeea129088da402665e92e0b25b04b073c04b2dce4ab65caaa38b7ce2e1a99"}, ] -entrypoints = [ - {file = "entrypoints-0.3-py2.py3-none-any.whl", hash = "sha256:589f874b313739ad35be6e0cd7efde2a4e9b6fea91edcc34e58ecbb8dbe56d19"}, - {file = "entrypoints-0.3.tar.gz", hash = "sha256:c70dd71abe5a8c85e55e12c19bd91ccfeec11a6e99044204511f9ed547d48451"}, -] execnet = [ {file = "execnet-1.7.1-py2.py3-none-any.whl", hash = "sha256:d4efd397930c46415f62f8a31388d6be4f27a91d7550eb79bc64a756e0056547"}, {file = "execnet-1.7.1.tar.gz", hash = "sha256:cacb9df31c9680ec5f95553976c4da484d407e85e41c83cb812aa014f0eddc50"}, ] flake8 = [ - {file = "flake8-3.7.9-py2.py3-none-any.whl", hash = "sha256:49356e766643ad15072a789a20915d3c91dc89fd313ccd71802303fd67e4deca"}, - {file = "flake8-3.7.9.tar.gz", hash = "sha256:45681a117ecc81e870cbf1262835ae4af5e7a8b08e40b944a8a6e6b895914cfb"}, + {file = "flake8-3.8.3-py2.py3-none-any.whl", hash = "sha256:15e351d19611c887e482fb960eae4d44845013cc142d42896e9862f775d8cf5c"}, + {file = "flake8-3.8.3.tar.gz", hash = "sha256:f04b9fcbac03b0a3e58c0ab3a0ecc462e023a9faf046d57794184028123aa208"}, ] flask = [ {file = "Flask-1.1.2-py2.py3-none-any.whl", hash = "sha256:8a4fdd8936eba2512e9c85df320a37e694c93945b33ef33c89946a340a238557"}, {file = "Flask-1.1.2.tar.gz", hash = 
"sha256:4efa1ae2d7c9865af48986de8aeb8504bf32c7f3d6fdc9353d34b21f4b127060"}, ] httplib2 = [ - {file = "httplib2-0.17.3-py3-none-any.whl", hash = "sha256:6d9722decd2deacd486ef10c5dd5e2f120ca3ba8736842b90509afcdc16488b1"}, - {file = "httplib2-0.17.3.tar.gz", hash = "sha256:39dd15a333f67bfb70798faa9de8a6e99c819da6ad82b77f9a259a5c7b1225a2"}, + {file = "httplib2-0.17.4-py3-none-any.whl", hash = "sha256:743cff16beadd128511e786474740264aa805fba106d6fc90e3586829ad0298b"}, + {file = "httplib2-0.17.4.tar.gz", hash = "sha256:1e9340ecf0187a621bdcfb407c32e04e8e09fc6ab28b050efa38f20eae0e975f"}, ] idna = [ - {file = "idna-2.9-py2.py3-none-any.whl", hash = "sha256:a068a21ceac8a4d63dbfd964670474107f541babbd2250d61922f029858365fa"}, - {file = "idna-2.9.tar.gz", hash = "sha256:7588d1c14ae4c77d74036e8c22ff447b26d0fde8f007354fd48a7814db15b7cb"}, + {file = "idna-2.10-py2.py3-none-any.whl", hash = "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0"}, + {file = "idna-2.10.tar.gz", hash = "sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6"}, ] importlib-metadata = [ - {file = "importlib_metadata-1.6.0-py2.py3-none-any.whl", hash = "sha256:2a688cbaa90e0cc587f1df48bdc97a6eadccdcd9c35fb3f976a09e3b5016d90f"}, - {file = "importlib_metadata-1.6.0.tar.gz", hash = "sha256:34513a8a0c4962bc66d35b359558fd8a5e10cd472d37aec5f66858addef32c1e"}, + {file = "importlib_metadata-1.7.0-py2.py3-none-any.whl", hash = "sha256:dc15b2969b4ce36305c51eebe62d418ac7791e9a157911d58bfb1f9ccd8e2070"}, + {file = "importlib_metadata-1.7.0.tar.gz", hash = "sha256:90bb658cdbbf6d1735b6341ce708fc7024a3e14e99ffdc5783edea9f9b077f83"}, ] isort = [ - {file = "isort-4.3.21-py2.py3-none-any.whl", hash = "sha256:6e811fcb295968434526407adb8796944f1988c5b65e8139058f2014cbe100fd"}, - {file = "isort-4.3.21.tar.gz", hash = "sha256:54da7e92468955c4fceacd0c86bd0ec997b0e1ee80d97f67c35a78b719dccab1"}, + {file = "isort-5.4.2-py3-none-any.whl", hash = "sha256:60a1b97e33f61243d12647aaaa3e6cc6778f5eb9f42997650f1cc975b6008750"}, + {file = "isort-5.4.2.tar.gz", hash = "sha256:d488ba1c5a2db721669cc180180d5acf84ebdc5af7827f7aaeaa75f73cf0e2b8"}, ] itsdangerous = [ {file = "itsdangerous-1.1.0-py2.py3-none-any.whl", hash = "sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749"}, @@ -937,37 +941,41 @@ jinja2 = [ {file = "Jinja2-2.10.3.tar.gz", hash = "sha256:9fe95f19286cfefaa917656583d020be14e7859c6b0252588391e47db34527de"}, ] jmespath = [ - {file = "jmespath-0.9.5-py2.py3-none-any.whl", hash = "sha256:695cb76fa78a10663425d5b73ddc5714eb711157e52704d69be03b1a02ba4fec"}, - {file = "jmespath-0.9.5.tar.gz", hash = "sha256:cca55c8d153173e21baa59983015ad0daf603f9cb799904ff057bfb8ff8dc2d9"}, + {file = "jmespath-0.10.0-py2.py3-none-any.whl", hash = "sha256:cdf6525904cc597730141d61b36f2e4b8ecc257c420fa2f4549bac2c2d0cb72f"}, + {file = "jmespath-0.10.0.tar.gz", hash = "sha256:b85d0567b8666149a93172712e68920734333c0ce7e89b78b3e987f71e5ed4f9"}, ] lxml = [ - {file = "lxml-4.5.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:0701f7965903a1c3f6f09328c1278ac0eee8f56f244e66af79cb224b7ef3801c"}, - {file = "lxml-4.5.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:06d4e0bbb1d62e38ae6118406d7cdb4693a3fa34ee3762238bcb96c9e36a93cd"}, - {file = "lxml-4.5.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5828c7f3e615f3975d48f40d4fe66e8a7b25f16b5e5705ffe1d22e43fb1f6261"}, - {file = "lxml-4.5.0-cp27-cp27m-win32.whl", hash = "sha256:afdb34b715daf814d1abea0317b6d672476b498472f1e5aacbadc34ebbc26e89"}, - {file = 
"lxml-4.5.0-cp27-cp27m-win_amd64.whl", hash = "sha256:585c0869f75577ac7a8ff38d08f7aac9033da2c41c11352ebf86a04652758b7a"}, - {file = "lxml-4.5.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:8a0ebda56ebca1a83eb2d1ac266649b80af8dd4b4a3502b2c1e09ac2f88fe128"}, - {file = "lxml-4.5.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:fe976a0f1ef09b3638778024ab9fb8cde3118f203364212c198f71341c0715ca"}, - {file = "lxml-4.5.0-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:7bc1b221e7867f2e7ff1933165c0cec7153dce93d0cdba6554b42a8beb687bdb"}, - {file = "lxml-4.5.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:d068f55bda3c2c3fcaec24bd083d9e2eede32c583faf084d6e4b9daaea77dde8"}, - {file = "lxml-4.5.0-cp35-cp35m-win32.whl", hash = "sha256:e4aa948eb15018a657702fee0b9db47e908491c64d36b4a90f59a64741516e77"}, - {file = "lxml-4.5.0-cp35-cp35m-win_amd64.whl", hash = "sha256:1f2c4ec372bf1c4a2c7e4bb20845e8bcf8050365189d86806bad1e3ae473d081"}, - {file = "lxml-4.5.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:5d467ce9c5d35b3bcc7172c06320dddb275fea6ac2037f72f0a4d7472035cea9"}, - {file = "lxml-4.5.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:95e67224815ef86924fbc2b71a9dbd1f7262384bca4bc4793645794ac4200717"}, - {file = "lxml-4.5.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:ebec08091a22c2be870890913bdadd86fcd8e9f0f22bcb398abd3af914690c15"}, - {file = "lxml-4.5.0-cp36-cp36m-win32.whl", hash = "sha256:deadf4df349d1dcd7b2853a2c8796593cc346600726eff680ed8ed11812382a7"}, - {file = "lxml-4.5.0-cp36-cp36m-win_amd64.whl", hash = "sha256:f2b74784ed7e0bc2d02bd53e48ad6ba523c9b36c194260b7a5045071abbb1012"}, - {file = "lxml-4.5.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:fa071559f14bd1e92077b1b5f6c22cf09756c6de7139370249eb372854ce51e6"}, - {file = "lxml-4.5.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:edc15fcfd77395e24543be48871c251f38132bb834d9fdfdad756adb6ea37679"}, - {file = "lxml-4.5.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:fd52e796fee7171c4361d441796b64df1acfceb51f29e545e812f16d023c4bbc"}, - {file = "lxml-4.5.0-cp37-cp37m-win32.whl", hash = "sha256:90ed0e36455a81b25b7034038e40880189169c308a3df360861ad74da7b68c1a"}, - {file = "lxml-4.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:df533af6f88080419c5a604d0d63b2c33b1c0c4409aba7d0cb6de305147ea8c8"}, - {file = "lxml-4.5.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b4b2c63cc7963aedd08a5f5a454c9f67251b1ac9e22fd9d72836206c42dc2a72"}, - {file = "lxml-4.5.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e5d842c73e4ef6ed8c1bd77806bf84a7cb535f9c0cf9b2c74d02ebda310070e1"}, - {file = "lxml-4.5.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:63dbc21efd7e822c11d5ddbedbbb08cd11a41e0032e382a0fd59b0b08e405a3a"}, - {file = "lxml-4.5.0-cp38-cp38-win32.whl", hash = "sha256:4235bc124fdcf611d02047d7034164897ade13046bda967768836629bc62784f"}, - {file = "lxml-4.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:d5b3c4b7edd2e770375a01139be11307f04341ec709cf724e0f26ebb1eef12c3"}, - {file = "lxml-4.5.0.tar.gz", hash = "sha256:8620ce80f50d023d414183bf90cc2576c2837b88e00bea3f33ad2630133bbb60"}, + {file = "lxml-4.5.2-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:74f48ec98430e06c1fa8949b49ebdd8d27ceb9df8d3d1c92e1fdc2773f003f20"}, + {file = "lxml-4.5.2-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e70d4e467e243455492f5de463b72151cc400710ac03a0678206a5f27e79ddef"}, + {file = "lxml-4.5.2-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:7ad7906e098ccd30d8f7068030a0b16668ab8aa5cda6fcd5146d8d20cbaa71b5"}, + {file = "lxml-4.5.2-cp27-cp27m-win32.whl", hash 
= "sha256:92282c83547a9add85ad658143c76a64a8d339028926d7dc1998ca029c88ea6a"}, + {file = "lxml-4.5.2-cp27-cp27m-win_amd64.whl", hash = "sha256:05a444b207901a68a6526948c7cc8f9fe6d6f24c70781488e32fd74ff5996e3f"}, + {file = "lxml-4.5.2-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:94150231f1e90c9595ccc80d7d2006c61f90a5995db82bccbca7944fd457f0f6"}, + {file = "lxml-4.5.2-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:bea760a63ce9bba566c23f726d72b3c0250e2fa2569909e2d83cda1534c79443"}, + {file = "lxml-4.5.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c3f511a3c58676147c277eff0224c061dd5a6a8e1373572ac817ac6324f1b1e0"}, + {file = "lxml-4.5.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:59daa84aef650b11bccd18f99f64bfe44b9f14a08a28259959d33676554065a1"}, + {file = "lxml-4.5.2-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:c9d317efde4bafbc1561509bfa8a23c5cab66c44d49ab5b63ff690f5159b2304"}, + {file = "lxml-4.5.2-cp35-cp35m-win32.whl", hash = "sha256:9dc9006dcc47e00a8a6a029eb035c8f696ad38e40a27d073a003d7d1443f5d88"}, + {file = "lxml-4.5.2-cp35-cp35m-win_amd64.whl", hash = "sha256:08fc93257dcfe9542c0a6883a25ba4971d78297f63d7a5a26ffa34861ca78730"}, + {file = "lxml-4.5.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:121b665b04083a1e85ff1f5243d4a93aa1aaba281bc12ea334d5a187278ceaf1"}, + {file = "lxml-4.5.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:5591c4164755778e29e69b86e425880f852464a21c7bb53c7ea453bbe2633bbe"}, + {file = "lxml-4.5.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:cc411ad324a4486b142c41d9b2b6a722c534096963688d879ea6fa8a35028258"}, + {file = "lxml-4.5.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:1fa21263c3aba2b76fd7c45713d4428dbcc7644d73dcf0650e9d344e433741b3"}, + {file = "lxml-4.5.2-cp36-cp36m-win32.whl", hash = "sha256:786aad2aa20de3dbff21aab86b2fb6a7be68064cbbc0219bde414d3a30aa47ae"}, + {file = "lxml-4.5.2-cp36-cp36m-win_amd64.whl", hash = "sha256:e1cacf4796b20865789083252186ce9dc6cc59eca0c2e79cca332bdff24ac481"}, + {file = "lxml-4.5.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:80a38b188d20c0524fe8959c8ce770a8fdf0e617c6912d23fc97c68301bb9aba"}, + {file = "lxml-4.5.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:ecc930ae559ea8a43377e8b60ca6f8d61ac532fc57efb915d899de4a67928efd"}, + {file = "lxml-4.5.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:a76979f728dd845655026ab991df25d26379a1a8fc1e9e68e25c7eda43004bed"}, + {file = "lxml-4.5.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:cfd7c5dd3c35c19cec59c63df9571c67c6d6e5c92e0fe63517920e97f61106d1"}, + {file = "lxml-4.5.2-cp37-cp37m-win32.whl", hash = "sha256:5a9c8d11aa2c8f8b6043d845927a51eb9102eb558e3f936df494e96393f5fd3e"}, + {file = "lxml-4.5.2-cp37-cp37m-win_amd64.whl", hash = "sha256:4b4a111bcf4b9c948e020fd207f915c24a6de3f1adc7682a2d92660eb4e84f1a"}, + {file = "lxml-4.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5dd20538a60c4cc9a077d3b715bb42307239fcd25ef1ca7286775f95e9e9a46d"}, + {file = "lxml-4.5.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:2b30aa2bcff8e958cd85d907d5109820b01ac511eae5b460803430a7404e34d7"}, + {file = "lxml-4.5.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:aa8eba3db3d8761db161003e2d0586608092e217151d7458206e243be5a43843"}, + {file = "lxml-4.5.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:8f0ec6b9b3832e0bd1d57af41f9238ea7709bbd7271f639024f2fc9d3bb01293"}, + {file = "lxml-4.5.2-cp38-cp38-win32.whl", hash = "sha256:107781b213cf7201ec3806555657ccda67b1fccc4261fb889ef7fc56976db81f"}, + {file = "lxml-4.5.2-cp38-cp38-win_amd64.whl", 
hash = "sha256:f161af26f596131b63b236372e4ce40f3167c1b5b5d459b29d2514bd8c9dc9ee"}, + {file = "lxml-4.5.2.tar.gz", hash = "sha256:cdc13a1682b2a6241080745b1953719e7fe0850b40a5c71ca574f090a1391df6"}, ] markupsafe = [ {file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"}, @@ -1013,12 +1021,16 @@ mock = [ {file = "mock-4.0.2.tar.gz", hash = "sha256:dd33eb70232b6118298d516bbcecd26704689c386594f0f3c4f13867b2c56f72"}, ] more-itertools = [ - {file = "more-itertools-8.2.0.tar.gz", hash = "sha256:b1ddb932186d8a6ac451e1d95844b382f55e12686d51ca0c68b6f61f2ab7a507"}, - {file = "more_itertools-8.2.0-py3-none-any.whl", hash = "sha256:5dd8bcf33e5f9513ffa06d5ad33d78f31e1931ac9a18f33d37e77a180d393a7c"}, + {file = "more-itertools-8.5.0.tar.gz", hash = "sha256:6f83822ae94818eae2612063a5101a7311e68ae8002005b5e05f03fd74a86a20"}, + {file = "more_itertools-8.5.0-py3-none-any.whl", hash = "sha256:9b30f12df9393f0d28af9210ff8efe48d10c94f73e5daf886f10c4b0b0b4f03c"}, +] +mypy-extensions = [ + {file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"}, + {file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"}, ] packaging = [ - {file = "packaging-20.3-py2.py3-none-any.whl", hash = "sha256:82f77b9bee21c1bafbf35a84905d604d5d1223801d639cf3ed140bd651c08752"}, - {file = "packaging-20.3.tar.gz", hash = "sha256:3c292b474fda1671ec57d46d739d072bfd495a4f51ad01a055121d81e952b7a3"}, + {file = "packaging-20.4-py2.py3-none-any.whl", hash = "sha256:998416ba6962ae7fbd6596850b80e17859a5753ba17c32284f67bfff33784181"}, + {file = "packaging-20.4.tar.gz", hash = "sha256:4357f74f47b9c12db93624a82154e9b120fa8293699949152b22065d556079f8"}, ] pathspec = [ {file = "pathspec-0.8.0-py2.py3-none-any.whl", hash = "sha256:7d91249d21749788d07a2d0f94147accd8f845507400749ea19c1ec9054a12b0"}, @@ -1029,20 +1041,20 @@ pluggy = [ {file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"}, ] py = [ - {file = "py-1.8.1-py2.py3-none-any.whl", hash = "sha256:c20fdd83a5dbc0af9efd622bee9a5564e278f6380fffcacc43ba6f43db2813b0"}, - {file = "py-1.8.1.tar.gz", hash = "sha256:5e27081401262157467ad6e7f851b7aa402c5852dbcb3dae06768434de5752aa"}, + {file = "py-1.9.0-py2.py3-none-any.whl", hash = "sha256:366389d1db726cd2fcfc79732e75410e5fe4d31db13692115529d34069a043c2"}, + {file = "py-1.9.0.tar.gz", hash = "sha256:9ca6883ce56b4e8da7e79ac18787889fa5206c79dcc67fb065376cd2fe03f342"}, ] pycodestyle = [ - {file = "pycodestyle-2.5.0-py2.py3-none-any.whl", hash = "sha256:95a2219d12372f05704562a14ec30bc76b05a5b297b21a5dfe3f6fac3491ae56"}, - {file = "pycodestyle-2.5.0.tar.gz", hash = "sha256:e40a936c9a450ad81df37f549d676d127b1b66000a6c500caa2b085bc0ca976c"}, + {file = "pycodestyle-2.6.0-py2.py3-none-any.whl", hash = "sha256:2295e7b2f6b5bd100585ebcb1f616591b652db8a741695b3d8f5d28bdc934367"}, + {file = "pycodestyle-2.6.0.tar.gz", hash = "sha256:c58a7d2815e0e8d7972bf1803331fb0152f867bd89adf8a01dfd55085434192e"}, ] pycparser = [ {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"}, {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"}, ] pyflakes = [ - {file = "pyflakes-2.1.1-py2.py3-none-any.whl", hash = 
"sha256:17dbeb2e3f4d772725c777fabc446d5634d1038f234e77343108ce445ea69ce0"}, - {file = "pyflakes-2.1.1.tar.gz", hash = "sha256:d976835886f8c5b31d47970ed689944a0262b5f3afa00a5a7b4dc81e5449f8a2"}, + {file = "pyflakes-2.2.0-py2.py3-none-any.whl", hash = "sha256:0d94e0e05a19e57a99444b6ddcf9a6eb2e5c68d3ca1e98e90707af8152c90a92"}, + {file = "pyflakes-2.2.0.tar.gz", hash = "sha256:35b2d75ee967ea93b55750aa9edbbf72813e06a66ba54438df2cfac9e3c27fc8"}, ] pyopenssl = [ {file = "pyOpenSSL-19.1.0-py2.py3-none-any.whl", hash = "sha256:621880965a720b8ece2f1b2f54ea2071966ab00e2970ad2ce11d596102063504"}, @@ -1061,38 +1073,38 @@ pytest = [ {file = "pytest-4.6.4.tar.gz", hash = "sha256:b77ae6f2d1a760760902a7676887b665c086f71e3461c64ed2a312afcedc00d6"}, ] pytest-cov = [ - {file = "pytest-cov-2.8.1.tar.gz", hash = "sha256:cc6742d8bac45070217169f5f72ceee1e0e55b0221f54bcf24845972d3a47f2b"}, - {file = "pytest_cov-2.8.1-py2.py3-none-any.whl", hash = "sha256:cdbdef4f870408ebdbfeb44e63e07eb18bb4619fae852f6e760645fa36172626"}, + {file = "pytest-cov-2.10.1.tar.gz", hash = "sha256:47bd0ce14056fdd79f93e1713f88fad7bdcc583dcd7783da86ef2f085a0bb88e"}, + {file = "pytest_cov-2.10.1-py2.py3-none-any.whl", hash = "sha256:45ec2d5182f89a81fc3eb29e3d1ed3113b9e9a873bcddb2a71faaab066110191"}, ] pytest-forked = [ - {file = "pytest-forked-1.1.3.tar.gz", hash = "sha256:1805699ed9c9e60cb7a8179b8d4fa2b8898098e82d229b0825d8095f0f261100"}, - {file = "pytest_forked-1.1.3-py2.py3-none-any.whl", hash = "sha256:1ae25dba8ee2e56fb47311c9638f9e58552691da87e82d25b0ce0e4bf52b7d87"}, + {file = "pytest-forked-1.3.0.tar.gz", hash = "sha256:6aa9ac7e00ad1a539c41bec6d21011332de671e938c7637378ec9710204e37ca"}, + {file = "pytest_forked-1.3.0-py2.py3-none-any.whl", hash = "sha256:dc4147784048e70ef5d437951728825a131b81714b398d5d52f17c7c144d8815"}, ] pytest-mock = [ - {file = "pytest-mock-3.1.0.tar.gz", hash = "sha256:ce610831cedeff5331f4e2fc453a5dd65384303f680ab34bee2c6533855b431c"}, - {file = "pytest_mock-3.1.0-py2.py3-none-any.whl", hash = "sha256:997729451dfc36b851a9accf675488c7020beccda15e11c75632ee3d1b1ccd71"}, + {file = "pytest-mock-3.2.0.tar.gz", hash = "sha256:7122d55505d5ed5a6f3df940ad174b3f606ecae5e9bc379569cdcbd4cd9d2b83"}, + {file = "pytest_mock-3.2.0-py3-none-any.whl", hash = "sha256:5564c7cd2569b603f8451ec77928083054d8896046830ca763ed68f4112d17c7"}, ] pytest-xdist = [ - {file = "pytest-xdist-1.32.0.tar.gz", hash = "sha256:1d4166dcac69adb38eeaedb88c8fada8588348258a3492ab49ba9161f2971129"}, - {file = "pytest_xdist-1.32.0-py2.py3-none-any.whl", hash = "sha256:ba5ec9fde3410bd9a116ff7e4f26c92e02fa3d27975ef3ad03f330b3d4b54e91"}, + {file = "pytest-xdist-1.34.0.tar.gz", hash = "sha256:340e8e83e2a4c0d861bdd8d05c5d7b7143f6eea0aba902997db15c2a86be04ee"}, + {file = "pytest_xdist-1.34.0-py2.py3-none-any.whl", hash = "sha256:ba5d10729372d65df3ac150872f9df5d2ed004a3b0d499cc0164aafedd8c7b66"}, ] python-dateutil = [ {file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"}, {file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"}, ] pywin32 = [ - {file = "pywin32-227-cp27-cp27m-win32.whl", hash = "sha256:371fcc39416d736401f0274dd64c2302728c9e034808e37381b5e1b22be4a6b0"}, - {file = "pywin32-227-cp27-cp27m-win_amd64.whl", hash = "sha256:4cdad3e84191194ea6d0dd1b1b9bdda574ff563177d2adf2b4efec2a244fa116"}, - {file = "pywin32-227-cp35-cp35m-win32.whl", hash = 
"sha256:f4c5be1a293bae0076d93c88f37ee8da68136744588bc5e2be2f299a34ceb7aa"}, - {file = "pywin32-227-cp35-cp35m-win_amd64.whl", hash = "sha256:a929a4af626e530383a579431b70e512e736e9588106715215bf685a3ea508d4"}, - {file = "pywin32-227-cp36-cp36m-win32.whl", hash = "sha256:300a2db938e98c3e7e2093e4491439e62287d0d493fe07cce110db070b54c0be"}, - {file = "pywin32-227-cp36-cp36m-win_amd64.whl", hash = "sha256:9b31e009564fb95db160f154e2aa195ed66bcc4c058ed72850d047141b36f3a2"}, - {file = "pywin32-227-cp37-cp37m-win32.whl", hash = "sha256:47a3c7551376a865dd8d095a98deba954a98f326c6fe3c72d8726ca6e6b15507"}, - {file = "pywin32-227-cp37-cp37m-win_amd64.whl", hash = "sha256:31f88a89139cb2adc40f8f0e65ee56a8c585f629974f9e07622ba80199057511"}, - {file = "pywin32-227-cp38-cp38-win32.whl", hash = "sha256:7f18199fbf29ca99dff10e1f09451582ae9e372a892ff03a28528a24d55875bc"}, - {file = "pywin32-227-cp38-cp38-win_amd64.whl", hash = "sha256:7c1ae32c489dc012930787f06244426f8356e129184a02c25aef163917ce158e"}, - {file = "pywin32-227-cp39-cp39-win32.whl", hash = "sha256:c054c52ba46e7eb6b7d7dfae4dbd987a1bb48ee86debe3f245a2884ece46e295"}, - {file = "pywin32-227-cp39-cp39-win_amd64.whl", hash = "sha256:f27cec5e7f588c3d1051651830ecc00294f90728d19c3bf6916e6dba93ea357c"}, + {file = "pywin32-228-cp27-cp27m-win32.whl", hash = "sha256:37dc9935f6a383cc744315ae0c2882ba1768d9b06700a70f35dc1ce73cd4ba9c"}, + {file = "pywin32-228-cp27-cp27m-win_amd64.whl", hash = "sha256:11cb6610efc2f078c9e6d8f5d0f957620c333f4b23466931a247fb945ed35e89"}, + {file = "pywin32-228-cp35-cp35m-win32.whl", hash = "sha256:1f45db18af5d36195447b2cffacd182fe2d296849ba0aecdab24d3852fbf3f80"}, + {file = "pywin32-228-cp35-cp35m-win_amd64.whl", hash = "sha256:6e38c44097a834a4707c1b63efa9c2435f5a42afabff634a17f563bc478dfcc8"}, + {file = "pywin32-228-cp36-cp36m-win32.whl", hash = "sha256:ec16d44b49b5f34e99eb97cf270806fdc560dff6f84d281eb2fcb89a014a56a9"}, + {file = "pywin32-228-cp36-cp36m-win_amd64.whl", hash = "sha256:a60d795c6590a5b6baeacd16c583d91cce8038f959bd80c53bd9a68f40130f2d"}, + {file = "pywin32-228-cp37-cp37m-win32.whl", hash = "sha256:af40887b6fc200eafe4d7742c48417529a8702dcc1a60bf89eee152d1d11209f"}, + {file = "pywin32-228-cp37-cp37m-win_amd64.whl", hash = "sha256:00eaf43dbd05ba6a9b0080c77e161e0b7a601f9a3f660727a952e40140537de7"}, + {file = "pywin32-228-cp38-cp38-win32.whl", hash = "sha256:fa6ba028909cfc64ce9e24bcf22f588b14871980d9787f1e2002c99af8f1850c"}, + {file = "pywin32-228-cp38-cp38-win_amd64.whl", hash = "sha256:9b3466083f8271e1a5eb0329f4e0d61925d46b40b195a33413e0905dccb285e8"}, + {file = "pywin32-228-cp39-cp39-win32.whl", hash = "sha256:ed74b72d8059a6606f64842e7917aeee99159ebd6b8d6261c518d002837be298"}, + {file = "pywin32-228-cp39-cp39-win_amd64.whl", hash = "sha256:8319bafdcd90b7202c50d6014efdfe4fde9311b3ff15fd6f893a45c0868de203"}, ] pyyaml = [ {file = "PyYAML-5.3.1-cp27-cp27m-win32.whl", hash = "sha256:74809a57b329d6cc0fdccee6318f44b9b8649961fa73144a98735b0aaf029f1f"}, @@ -1112,31 +1124,31 @@ redis = [ {file = "redis-3.3.10.tar.gz", hash = "sha256:bf027fdd92aead8e49ab9d29e72498eef6ca7a38a15b2f88c5d9146145e93049"}, ] regex = [ - {file = "regex-2020.4.4-cp27-cp27m-win32.whl", hash = "sha256:90742c6ff121a9c5b261b9b215cb476eea97df98ea82037ec8ac95d1be7a034f"}, - {file = "regex-2020.4.4-cp27-cp27m-win_amd64.whl", hash = "sha256:24f4f4062eb16c5bbfff6a22312e8eab92c2c99c51a02e39b4eae54ce8255cd1"}, - {file = "regex-2020.4.4-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:08119f707f0ebf2da60d2f24c2f39ca616277bb67ef6c92b72cbf90cbe3a556b"}, - {file = 
"regex-2020.4.4-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:c9423a150d3a4fc0f3f2aae897a59919acd293f4cb397429b120a5fcd96ea3db"}, - {file = "regex-2020.4.4-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:c087bff162158536387c53647411db09b6ee3f9603c334c90943e97b1052a156"}, - {file = "regex-2020.4.4-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:1cbe0fa0b7f673400eb29e9ef41d4f53638f65f9a2143854de6b1ce2899185c3"}, - {file = "regex-2020.4.4-cp36-cp36m-win32.whl", hash = "sha256:0ce9537396d8f556bcfc317c65b6a0705320701e5ce511f05fc04421ba05b8a8"}, - {file = "regex-2020.4.4-cp36-cp36m-win_amd64.whl", hash = "sha256:7e1037073b1b7053ee74c3c6c0ada80f3501ec29d5f46e42669378eae6d4405a"}, - {file = "regex-2020.4.4-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:4385f12aa289d79419fede43f979e372f527892ac44a541b5446617e4406c468"}, - {file = "regex-2020.4.4-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:a58dd45cb865be0ce1d5ecc4cfc85cd8c6867bea66733623e54bd95131f473b6"}, - {file = "regex-2020.4.4-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:ccccdd84912875e34c5ad2d06e1989d890d43af6c2242c6fcfa51556997af6cd"}, - {file = "regex-2020.4.4-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:ea4adf02d23b437684cd388d557bf76e3afa72f7fed5bbc013482cc00c816948"}, - {file = "regex-2020.4.4-cp37-cp37m-win32.whl", hash = "sha256:2294f8b70e058a2553cd009df003a20802ef75b3c629506be20687df0908177e"}, - {file = "regex-2020.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:e91ba11da11cf770f389e47c3f5c30473e6d85e06d7fd9dcba0017d2867aab4a"}, - {file = "regex-2020.4.4-cp38-cp38-manylinux1_i686.whl", hash = "sha256:5635cd1ed0a12b4c42cce18a8d2fb53ff13ff537f09de5fd791e97de27b6400e"}, - {file = "regex-2020.4.4-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:23069d9c07e115537f37270d1d5faea3e0bdded8279081c4d4d607a2ad393683"}, - {file = "regex-2020.4.4-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:c162a21e0da33eb3d31a3ac17a51db5e634fc347f650d271f0305d96601dc15b"}, - {file = "regex-2020.4.4-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:fb95debbd1a824b2c4376932f2216cc186912e389bdb0e27147778cf6acb3f89"}, - {file = "regex-2020.4.4-cp38-cp38-win32.whl", hash = "sha256:2a3bf8b48f8e37c3a40bb3f854bf0121c194e69a650b209628d951190b862de3"}, - {file = "regex-2020.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bfed051dbff32fd8945eccca70f5e22b55e4148d2a8a45141a3b053d6455ae3"}, - {file = "regex-2020.4.4.tar.gz", hash = "sha256:295badf61a51add2d428a46b8580309c520d8b26e769868b922750cf3ce67142"}, + {file = "regex-2020.7.14-cp27-cp27m-win32.whl", hash = "sha256:e46d13f38cfcbb79bfdb2964b0fe12561fe633caf964a77a5f8d4e45fe5d2ef7"}, + {file = "regex-2020.7.14-cp27-cp27m-win_amd64.whl", hash = "sha256:6961548bba529cac7c07af2fd4d527c5b91bb8fe18995fed6044ac22b3d14644"}, + {file = "regex-2020.7.14-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:c50a724d136ec10d920661f1442e4a8b010a4fe5aebd65e0c2241ea41dbe93dc"}, + {file = "regex-2020.7.14-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8a51f2c6d1f884e98846a0a9021ff6861bdb98457879f412fdc2b42d14494067"}, + {file = "regex-2020.7.14-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:9c568495e35599625f7b999774e29e8d6b01a6fb684d77dee1f56d41b11b40cd"}, + {file = "regex-2020.7.14-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:51178c738d559a2d1071ce0b0f56e57eb315bcf8f7d4cf127674b533e3101f88"}, + {file = "regex-2020.7.14-cp36-cp36m-win32.whl", hash = "sha256:9eddaafb3c48e0900690c1727fba226c4804b8e6127ea409689c3bb492d06de4"}, + {file = "regex-2020.7.14-cp36-cp36m-win_amd64.whl", hash = 
"sha256:14a53646369157baa0499513f96091eb70382eb50b2c82393d17d7ec81b7b85f"}, + {file = "regex-2020.7.14-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:1269fef3167bb52631ad4fa7dd27bf635d5a0790b8e6222065d42e91bede4162"}, + {file = "regex-2020.7.14-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:d0a5095d52b90ff38592bbdc2644f17c6d495762edf47d876049cfd2968fbccf"}, + {file = "regex-2020.7.14-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:4c037fd14c5f4e308b8370b447b469ca10e69427966527edcab07f52d88388f7"}, + {file = "regex-2020.7.14-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:bc3d98f621898b4a9bc7fecc00513eec8f40b5b83913d74ccb445f037d58cd89"}, + {file = "regex-2020.7.14-cp37-cp37m-win32.whl", hash = "sha256:46bac5ca10fb748d6c55843a931855e2727a7a22584f302dd9bb1506e69f83f6"}, + {file = "regex-2020.7.14-cp37-cp37m-win_amd64.whl", hash = "sha256:0dc64ee3f33cd7899f79a8d788abfbec168410be356ed9bd30bbd3f0a23a7204"}, + {file = "regex-2020.7.14-cp38-cp38-manylinux1_i686.whl", hash = "sha256:5ea81ea3dbd6767873c611687141ec7b06ed8bab43f68fad5b7be184a920dc99"}, + {file = "regex-2020.7.14-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:bbb332d45b32df41200380fff14712cb6093b61bd142272a10b16778c418e98e"}, + {file = "regex-2020.7.14-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:c11d6033115dc4887c456565303f540c44197f4fc1a2bfb192224a301534888e"}, + {file = "regex-2020.7.14-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:75aaa27aa521a182824d89e5ab0a1d16ca207318a6b65042b046053cfc8ed07a"}, + {file = "regex-2020.7.14-cp38-cp38-win32.whl", hash = "sha256:d6cff2276e502b86a25fd10c2a96973fdb45c7a977dca2138d661417f3728341"}, + {file = "regex-2020.7.14-cp38-cp38-win_amd64.whl", hash = "sha256:7a2dd66d2d4df34fa82c9dc85657c5e019b87932019947faece7983f2089a840"}, + {file = "regex-2020.7.14.tar.gz", hash = "sha256:3a3af27a8d23143c49a3420efe5b3f8cf1a48c6fc8bc6856b03f638abc1833bb"}, ] requests = [ - {file = "requests-2.23.0-py2.py3-none-any.whl", hash = "sha256:43999036bfa82904b6af1d99e4882b560e5e2c68e5c4b0aa03b655f3d7d73fee"}, - {file = "requests-2.23.0.tar.gz", hash = "sha256:b3f43d496c6daba4493e7c431722aeb7dbc6288f52a6e04e7b6023b0247817e6"}, + {file = "requests-2.24.0-py2.py3-none-any.whl", hash = "sha256:fe75cc94a9443b9246fc7049224f75604b113c36acb93f87b80ed42c44cbb898"}, + {file = "requests-2.24.0.tar.gz", hash = "sha256:b3559a131db72c33ee969480840fff4bb6dd111de7dd27c8ee1f820f4f00231b"}, ] requests-futures = [ {file = "requests-futures-1.0.0.tar.gz", hash = "sha256:35547502bf1958044716a03a2f47092a89efe8f9789ab0c4c528d9c9c30bc148"}, @@ -1146,13 +1158,12 @@ s3transfer = [ {file = "s3transfer-0.3.3.tar.gz", hash = "sha256:921a37e2aefc64145e7b73d50c71bb4f26f46e4c9f414dc648c6245ff92cf7db"}, ] six = [ - {file = "six-1.14.0-py2.py3-none-any.whl", hash = "sha256:8f3cd2e254d8f793e7f3d6d9df77b92252b52637291d0f0da013c76ea2724b6c"}, - {file = "six-1.14.0.tar.gz", hash = "sha256:236bdbdce46e6e6a3d61a337c0f8b763ca1e8717c03b369e87a7ec7ce1319c0a"}, + {file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"}, + {file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"}, ] toml = [ - {file = "toml-0.10.0-py2.7.egg", hash = "sha256:f1db651f9657708513243e61e6cc67d101a39bad662eaa9b5546f789338e07a3"}, - {file = "toml-0.10.0-py2.py3-none-any.whl", hash = "sha256:235682dd292d5899d361a811df37e04a8828a5b1da3115886b73cf81ebc9100e"}, - {file = "toml-0.10.0.tar.gz", hash = 
"sha256:229f81c57791a41d65e399fc06bf0848bab550a9dfd5ed66df18ce5f05e73d5c"}, + {file = "toml-0.10.1-py2.py3-none-any.whl", hash = "sha256:bda89d5935c2eac546d648028b9901107a595863cb36bae0c73ac804a9b4ce88"}, + {file = "toml-0.10.1.tar.gz", hash = "sha256:926b612be1e5ce0634a2ca03470f95169cf16f939018233a670519cb4ac58b0f"}, ] typed-ast = [ {file = "typed_ast-1.4.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:73d785a950fc82dd2a25897d525d003f6378d1cb23ab305578394694202a58c3"}, @@ -1177,6 +1188,11 @@ typed-ast = [ {file = "typed_ast-1.4.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:d43943ef777f9a1c42bf4e552ba23ac77a6351de620aa9acf64ad54933ad4d34"}, {file = "typed_ast-1.4.1.tar.gz", hash = "sha256:8c8aaad94455178e3187ab22c8b01a3837f8ee50e09cf31f1ba129eb293ec30b"}, ] +typing-extensions = [ + {file = "typing_extensions-3.7.4.3-py2-none-any.whl", hash = "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f"}, + {file = "typing_extensions-3.7.4.3-py3-none-any.whl", hash = "sha256:7cb407020f00f7bfc3cb3e7881628838e69d8f3fcab2f64742a5e76b2f841918"}, + {file = "typing_extensions-3.7.4.3.tar.gz", hash = "sha256:99d4073b617d30288f569d3f13d2bd7548c3a7e4c8de87db09a9d29bb3a4a60c"}, +] ujson = [ {file = "ujson-2.0.3-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:7ae13733d9467d16ccac2f38212cdee841b49ae927085c533425be9076b0bc9d"}, {file = "ujson-2.0.3-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:6217c63a36e9b26e9271e686d212397ce7fb04c07d85509dd4e2ed73493320f8"}, @@ -1190,12 +1206,9 @@ urllib3 = [ {file = "urllib3-1.24.2-py2.py3-none-any.whl", hash = "sha256:4c291ca23bbb55c76518905869ef34bdd5f0e46af7afe6861e8375643ffee1a0"}, {file = "urllib3-1.24.2.tar.gz", hash = "sha256:9a247273df709c4fedb38c711e44292304f73f39ab01beda9f6b9fc375669ac3"}, ] -uuid = [ - {file = "uuid-1.30.tar.gz", hash = "sha256:1f87cc004ac5120466f36c5beae48b4c48cc411968eed0eaecd3da82aa96193f"}, -] wcwidth = [ - {file = "wcwidth-0.1.9-py2.py3-none-any.whl", hash = "sha256:cafe2186b3c009a04067022ce1dcd79cb38d8d65ee4f4791b8888d6599d1bbe1"}, - {file = "wcwidth-0.1.9.tar.gz", hash = "sha256:ee73862862a156bf77ff92b09034fc4825dd3af9cf81bc5b360668d425f3c5f1"}, + {file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"}, + {file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"}, ] websocket-client = [ {file = "websocket_client-0.57.0-py2.py3-none-any.whl", hash = "sha256:0fc45c961324d79c781bab301359d5a1b00b13ad1b10415a4780229ef71a5549"}, diff --git a/pyproject.toml b/pyproject.toml index c41cc300..ad62596c 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api" [tool.poetry] name = "splunk_eventgen" -version = "7.1.1" +version = "7.2.0" description = "Splunk Event Generator to produce real-time, representative data" authors = [ "Brian Bingham ", "Tony Lee ", "Jack Meixensperger ",] license = "Apache-2.0" @@ -39,7 +39,7 @@ PyYAML = "^5.3.1" jinja2 = "2.10.3" urllib3 = "1.24.2" httplib2 = "^0.17.2" -uuid = "^1.30" +importlib-metadata = "^1.0.0" [tool.poetry.dev-dependencies] pytest = "4.6.4" @@ -48,6 +48,6 @@ mock = "^4.0.2" pytest-cov = "^2.8.1" coverage = "4.5.4" pytest-mock = "^3.1.0" -flake8 = "^3.7.9" -black = "19.10b0" -isort = "^4.3.15" +flake8 = "^3.8.3" +black = "20.8b1" +isort = "^5.4.2" diff --git a/splunk_eventgen/__main__.py b/splunk_eventgen/__main__.py index 934fc9f2..7d6d5186 100644 --- a/splunk_eventgen/__main__.py +++ 
b/splunk_eventgen/__main__.py @@ -15,9 +15,32 @@ def _get_version(): """ @return: Version Number """ - import pkg_resources + try: + from sys import version_info + + if version_info[0] < 3 or (version_info[0] and version_info[1] < 8): + from importlib_metadata import PackageNotFoundError, distribution - return pkg_resources.get_distribution("splunk_eventgen").version + try: + dist = distribution("splunk_eventgen") + return dist.version + except PackageNotFoundError: + return "dev" + else: + # module change in python 3.8 + from importlib.metadata import PackageNotFoundError, distribution + + try: + dist = distribution("splunk_eventgen") + return dist.version + except PackageNotFoundError: + return "dev" + except ImportError: + return "Unknown" + except ModuleNotFoundError: + return "Unknown" + except Exception: + return "Unknown" EVENTGEN_VERSION = _get_version() @@ -39,6 +62,7 @@ def parse_args(): version="%(prog)s " + EVENTGEN_VERSION, ) parser.add_argument("--modinput-mode", default=False) + parser.add_argument("--counter-output", action="store_true", default=False) subparsers = parser.add_subparsers( title="commands", help="valid subcommands", dest="subcommand" ) @@ -95,9 +119,6 @@ def parse_args(): generate_subparser.add_argument( "--profiler", action="store_true", help="Turn on cProfiler" ) - generate_subparser.add_argument( - "--log-path", type=str, default="{0}/logs".format(FILE_LOCATION) - ) generate_subparser.add_argument( "--generator-queue-size", type=int, @@ -278,9 +299,16 @@ def build_splunk_app(dest, source=os.getcwd(), remove=True): ) return_code = os.system(install_cmd) if return_code != 0: - print( - "Failed to install dependencies via pip. Please check whether pip is installed." + install_cmd = ( + "pip3 install --requirement splunk_eventgen/lib/requirements.txt --upgrade --no-compile " + + "--no-binary :all: --target " + + install_target ) + return_code = os.system(install_cmd) + if return_code != 0: + print( + "Failed to install dependencies via pip. Please check whether pip is installed." + ) os.system("rm -rf " + os.path.join(install_target, "*.egg-info")) make_tarfile(target_file, directory) @@ -291,12 +319,15 @@ def build_splunk_app(dest, source=os.getcwd(), remove=True): def convert_verbosity_count_to_logging_level(verbosity): - if verbosity == 0: - return logging.ERROR - elif verbosity == 1: - return logging.INFO - elif verbosity == 2: - return logging.DEBUG + if type(verbosity) == int: + if verbosity == 0: + return logging.ERROR + elif verbosity == 1: + return logging.INFO + elif verbosity >= 2: + return logging.DEBUG + else: + return logging.DEBUG else: return logging.ERROR diff --git a/splunk_eventgen/default/eventgen.conf b/splunk_eventgen/default/eventgen.conf index 4e85179b..a8a940cd 100644 --- a/splunk_eventgen/default/eventgen.conf +++ b/splunk_eventgen/default/eventgen.conf @@ -47,3 +47,4 @@ autotimestamps = [["\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}", "%Y-%m-%d %H:%M: autotimestamp = false httpeventWaitResponse = true disableLoggingQueue = true +splitSample = 0 diff --git a/splunk_eventgen/eventgen_core.py b/splunk_eventgen/eventgen_core.py index 719ca7ad..02378963 100644 --- a/splunk_eventgen/eventgen_core.py +++ b/splunk_eventgen/eventgen_core.py @@ -65,6 +65,9 @@ def _load_config(self, configfile, **kwargs): """ # TODO: The old eventgen had strange cli args. We should probably update the module args to match this usage. 
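For reference, a minimal standalone sketch of the version-lookup pattern the `_get_version` rewrite above uses, assuming only the distribution name `splunk_eventgen`; it is not the module's exact code. The import and the lookup sit in separate try blocks so a missing `importlib_metadata` backport (an `ImportError`) cannot be confused with `PackageNotFoundError`, which itself subclasses `ModuleNotFoundError`:

```python
import sys


def get_version(package="splunk_eventgen"):
    """Return the installed version of `package`, or 'dev' when not installed."""
    try:
        if sys.version_info >= (3, 8):
            # importlib.metadata joined the standard library in Python 3.8
            from importlib.metadata import PackageNotFoundError, version
        else:
            from importlib_metadata import PackageNotFoundError, version
    except ImportError:
        # the importlib_metadata backport is not available on this interpreter
        return "Unknown"
    try:
        return version(package)
    except PackageNotFoundError:
        # e.g. running from a source checkout instead of an installed package
        return "dev"
```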
        new_args = {}
+        # this variable can't exist in the config object inputs, due to how it's set with symbols and needs to be
+        # picklable. We only want to change it to True if it isn't already set and isn't linked to an egcounter.
+        update_counter = False
         if "args" in kwargs:
             args = kwargs["args"]
             outputer = [
@@ -94,17 +97,30 @@ def _load_config(self, configfile, **kwargs):
                 new_args["sample"] = args.sample
             if getattr(args, "verbosity"):
                 new_args["verbosity"] = args.verbosity
+            if getattr(args, "counter_output"):
+                update_counter = True
+
         self.config = Config(configfile, **new_args)
         self.config.parse()
+        if update_counter:
+            if hasattr(self.config, "outputCounter") and self.config.outputCounter is None:
+                self.config.outputCounter = True
         self.args.multiprocess = (
             True if self.config.threading == "process" else self.args.multiprocess
         )
         self._reload_plugins()
         if "args" in kwargs and getattr(kwargs["args"], "generators"):
             generator_worker_count = kwargs["args"].generators
+            # override the config's generatorWorkers to match what was specified on the CLI
+            self.config.generatorWorkers = generator_worker_count
         else:
             generator_worker_count = self.config.generatorWorkers
+        # TODO: Probably should destroy pools better so processes are cleaned.
+        if self.args.multiprocess:
+            self.kill_processes()
         self._setup_pools(generator_worker_count)

     def _reload_plugins(self):
@@ -171,10 +187,14 @@ def _create_timer_threadpool(self, threadcount=100):
         """
         self.sampleQueue = Queue(maxsize=0)
         num_threads = threadcount
+        # one worker per thread; every worker shares the same sample and logging queues
         for i in range(num_threads):
             worker = Thread(
                 target=self._worker_do_work,
-                args=(self.sampleQueue, self.loggingQueue,),
+                args=(
+                    self.sampleQueue,
+                    self.loggingQueue,
+                ),
                 name="TimeThread{0}".format(i),
             )
             worker.setDaemon(True)
@@ -198,7 +218,10 @@ def _create_output_threadpool(self, threadcount=1):
         for i in range(num_threads):
             worker = Thread(
                 target=self._worker_do_work,
-                args=(self.outputQueue, self.loggingQueue,),
+                args=(
+                    self.outputQueue,
+                    self.loggingQueue,
+                ),
                 name="OutputThread{0}".format(i),
             )
             worker.setDaemon(True)
@@ -242,11 +265,10 @@ def _create_generator_pool(self, workercount=20):
         for i in range(worker_threads):
             worker = Thread(
                 target=self._generator_do_work,
-                args=(
-                    self.workerQueue,
-                    self.loggingQueue,
-                    self.output_counters[i],
-                ),
+                args=(self.workerQueue, self.loggingQueue),
+                kwargs={
+                    "output_counter": self.output_counters[i],
+                },
             )
             worker.setDaemon(True)
             worker.start()
@@ -254,7 +276,10 @@ def _create_generator_pool(self, workercount=20):
         for i in range(worker_threads):
             worker = Thread(
                 target=self._generator_do_work,
-                args=(self.workerQueue, self.loggingQueue, None),
+                args=(self.workerQueue, self.loggingQueue),
+                kwargs={
+                    "output_counter": None,
+                },
             )
             worker.setDaemon(True)
             worker.start()
diff --git a/splunk_eventgen/lib/__init__.py b/splunk_eventgen/lib/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/splunk_eventgen/lib/eventgenconfig.py b/splunk_eventgen/lib/eventgenconfig.py
index 234724ff..ceb060ee 100644
--- a/splunk_eventgen/lib/eventgenconfig.py
+++ b/splunk_eventgen/lib/eventgenconfig.py
@@ -1,4 +1,3 @@
-import copy
 import datetime
 import json
 import logging.handlers
@@ -7,11 +6,12 @@
 import random
 import re
 import types
-import urllib.error
-import urllib.parse
-import urllib.request
 from configparser import RawConfigParser

+import six.moves.urllib.error
+import six.moves.urllib.parse
+import six.moves.urllib.request
+
 from
splunk_eventgen.lib.eventgenexceptions import FailedLoadingPlugin, PluginNotLoaded from splunk_eventgen.lib.eventgensamples import Sample from splunk_eventgen.lib.eventgentoken import Token @@ -149,6 +149,7 @@ class Config(object): "sequentialTimestamp", "extendIndexes", "disableLoggingQueue", + "splitSample", ] _validTokenTypes = {"token": 0, "replacementType": 1, "replacement": 2} _validHostTokens = {"token": 0, "replacement": 1} @@ -170,6 +171,7 @@ class Config(object): "generatorWorkers", "maxIntervalsBeforeFlush", "maxQueueLength", + "splitSample", "fileMaxBytes", ] _floatSettings = ["randomizeCount", "delay", "timeMultiple"] @@ -236,6 +238,7 @@ class Config(object): "maxQueueLength", "maxIntervalsBeforeFlush", "autotimestamp", + "splitSample", ] _complexSettings = { "sampletype": ["raw", "csv"], @@ -754,7 +757,8 @@ def parse(self): stateFile = open( os.path.join( s.sampleDir, - "state." + urllib.request.pathname2url(token.token), + "state." + + six.moves.urllib.request.pathname2url(token.token), ), "r", ) @@ -768,10 +772,15 @@ def parse(self): sampleFiles = os.listdir(s.sampleDir) for sample in sampleFiles: sample_name = s.name - # If we expect a .csv, append it to the file name - regex matching must include the extension - if s.sampletype == "csv" and not s.name.endswith(".csv"): - sample_name = s.name + "\.csv" results = re.match(sample_name, sample) + if ( + s.sampletype == "csv" + and not s.name.endswith(".csv") + and not results + ): + logger.warning( + "Could not find target csv, try adding .csv into stanza title and filename" + ) if results: # Make sure the stanza name/regex matches the entire file name match_start, match_end = results.regs[0] @@ -781,6 +790,8 @@ def parse(self): results.group(0), s.name ) ) + # Store original name for future regex matching + s._origName = s.name samplePath = os.path.join(s.sampleDir, sample) if os.path.isfile(samplePath): logger.debug( @@ -801,39 +812,33 @@ def parse(self): tempsamples2.append(s) for f in foundFiles: - if re.search(s.name, f): - news = copy.copy(s) - news.filePath = f + if re.search(s._origName, f): + s.filePath = f # 12/3/13 CS TODO These are hard coded but should be handled via the modular config system # Maybe a generic callback for all plugins which will modify sample based on the filename # found? # Override with real name if s.outputMode == "spool" and s.spoolFile == self.spoolFile: - news.spoolFile = f.split(os.sep)[-1] + s.spoolFile = f.split(os.sep)[-1] if s.outputMode == "file" and s.fileName is None: if self.fileName: - news.fileName = self.fileName + s.fileName = self.fileName logger.debug( "Found a global fileName {}. Setting the sample fileName.".format( self.fileName ) ) elif s.spoolFile == self.spoolFile: - news.fileName = os.path.join( - s.spoolDir, f.split(os.sep)[-1] - ) + s.fileName = os.path.join(s.spoolDir, f.split(os.sep)[-1]) elif s.spoolFile is not None: - news.fileName = os.path.join(s.spoolDir, s.spoolFile) - # Override s.name with file name. Usually they'll match unless we've been a regex - # 6/22/12 CS Save original name for later matching - news._origName = news.name - news.name = f.split(os.sep)[-1] - if not news.disabled: - tempsamples2.append(news) + s.fileName = os.path.join(s.spoolDir, s.spoolFile) + s.name = f.split(os.sep)[-1] + if not s.disabled: + tempsamples2.append(s) else: logger.info( "Sample '%s' for app '%s' is marked disabled." 
-                        % (news.name, news.app)
+                        % (s.name, s.app)
                     )

         # Clear tempsamples, we're going to reuse it
diff --git a/splunk_eventgen/lib/eventgensamples.py b/splunk_eventgen/lib/eventgensamples.py
index 096ebf74..f0ab74a2 100644
--- a/splunk_eventgen/lib/eventgensamples.py
+++ b/splunk_eventgen/lib/eventgensamples.py
@@ -5,9 +5,10 @@
 import pprint
 import re
 import sys
-import urllib.error
-import urllib.parse
-import urllib.request
+
+import six.moves.urllib.error
+import six.moves.urllib.parse
+import six.moves.urllib.request

 from splunk_eventgen.lib.logging_config import logger
 from splunk_eventgen.lib.timeparser import timeParser
@@ -246,7 +247,7 @@ def saveState(self):
             stateFile = open(
                 os.path.join(
                     self.sampleDir,
-                    "state." + urllib.request.pathname2url(token.token),
+                    "state." + six.moves.urllib.request.pathname2url(token.token),
                 ),
                 "w",
             )
@@ -363,6 +364,60 @@ def latestTime(self):
     def utcnow(self):
         return self.now(utcnow=True)

+    def processSampleLine(self, filehandler):
+        """
+        Python 3 decodes files as utf-8 by default when reading. Keeping this processing loop outside
+        of the open() call lets the caller reopen the file with a different encoding and retry.
+        :param filehandler: an already-opened handle to the sample file
+        :return: list of raw event strings
+        """
+        sampleLines = []
+        if self.breaker == self.config.breaker:
+            logger.debug("Reading raw sample '%s' in app '%s'" % (self.name, self.app))
+            sampleLines = filehandler.readlines()
+        # 1/5/14 CS Moving to using only sampleDict and doing the breaking up into events at load time
+        # instead of on every generation
+        else:
+            logger.debug(
+                "Non-default breaker '%s' detected for sample '%s' in app '%s'"
+                % (self.breaker, self.name, self.app)
+            )
+            sampleData = filehandler.read()
+            logger.debug(
+                "Filling array for sample '%s' in app '%s'; sampleData=%s, breaker=%s"
+                % (self.name, self.app, len(sampleData), self.breaker)
+            )
+            try:
+                breakerRE = re.compile(self.breaker, re.M)
+            except re.error:
+                logger.error(
+                    "Line breaker '%s' for sample '%s' in app '%s'"
+                    " could not be compiled; using default breaker",
+                    self.breaker,
+                    self.name,
+                    self.app,
+                )
+                self.breaker = self.config.breaker
+
+            # Loop through data, finding matches of the regular expression and breaking them up into
+            # "lines". Each match includes the breaker itself.
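The loop that follows implements this splitting. As a hypothetical standalone helper (not part of the plugin API), the same logic looks like this; matches at offset 0 are skipped so no empty leading event is produced, and every later event keeps its breaker prefix:

```python
import re


def split_on_breaker(sample_data, breaker=r"\n"):
    """Split raw sample text into events at each match of the breaker regex."""
    breaker_re = re.compile(breaker, re.M)
    events = []
    extract_pos = 0
    match = breaker_re.search(sample_data)
    while match:
        # a match at position 0 would otherwise emit an empty leading event
        if match.start() != 0:
            events.append(sample_data[extract_pos:match.start()])
            extract_pos = match.start()
        match = breaker_re.search(sample_data, match.end())
    events.append(sample_data[extract_pos:])
    return events


print(split_on_breaker("2020-01-01 a\n2020-01-02 b\n"))
# ['2020-01-01 a', '\n2020-01-02 b', '\n']
```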
+            extractpos = 0
+            searchpos = 0
+            breakerMatch = breakerRE.search(sampleData, searchpos)
+            while breakerMatch:
+                logger.debug(
+                    "Breaker found at: %d, %d"
+                    % (breakerMatch.span()[0], breakerMatch.span()[1])
+                )
+                # Ignore matches at the beginning of the file
+                if breakerMatch.span()[0] != 0:
+                    sampleLines.append(sampleData[extractpos : breakerMatch.span()[0]])
+                    extractpos = breakerMatch.span()[0]
+                searchpos = breakerMatch.span()[1]
+                breakerMatch = breakerRE.search(sampleData, searchpos)
+            sampleLines.append(sampleData[extractpos:])
+        return sampleLines
+
     def loadSample(self):
         """
         Load sample from disk into self._sample.sampleLines and self._sample.sampleDict, using cached copy if possible
@@ -370,61 +425,14 @@
         if self.sampletype == "raw":
             # 5/27/12 CS Added caching of the sample file
             if self.sampleDict is None:
-                with open(self.filePath, "r") as fh:
-                    if self.breaker == self.config.breaker:
-                        logger.debug(
-                            "Reading raw sample '%s' in app '%s'"
-                            % (self.name, self.app)
-                        )
-                        self.sampleLines = fh.readlines()
-                    # 1/5/14 CS Moving to using only sampleDict and doing the breaking up into events at load time
-                    # instead of on every generation
-                    else:
-                        logger.debug(
-                            "Non-default breaker '%s' detected for sample '%s' in app '%s'"
-                            % (self.breaker, self.name, self.app)
-                        )
-
-                        sampleData = fh.read()
-                        self.sampleLines = []
-
-                        logger.debug(
-                            "Filling array for sample '%s' in app '%s'; sampleData=%s, breaker=%s"
-                            % (self.name, self.app, len(sampleData), self.breaker)
-                        )
-
-                        try:
-                            breakerRE = re.compile(self.breaker, re.M)
-                        except:
-                            logger.error(
-                                "Line breaker '%s' for sample '%s' in app '%s'"
-                                " could not be compiled; using default breaker",
-                                self.breaker,
-                                self.name,
-                                self.app,
-                            )
-                            self.breaker = self.config.breaker
-
-                        # Loop through data, finding matches of the regular expression and breaking them up into
-                        # "lines". Each match includes the breaker itself.
-                        extractpos = 0
-                        searchpos = 0
-                        breakerMatch = breakerRE.search(sampleData, searchpos)
-                        while breakerMatch:
-                            logger.debug(
-                                "Breaker found at: %d, %d"
-                                % (breakerMatch.span()[0], breakerMatch.span()[1])
-                            )
-                            # Ignore matches at the beginning of the file
-                            if breakerMatch.span()[0] != 0:
-                                self.sampleLines.append(
-                                    sampleData[extractpos : breakerMatch.span()[0]]
-                                )
-                                extractpos = breakerMatch.span()[0]
-                            searchpos = breakerMatch.span()[1]
-                            breakerMatch = breakerRE.search(sampleData, searchpos)
-                        self.sampleLines.append(sampleData[extractpos:])
-
+                self.sampleLines = []
+                try:
+                    with open(self.filePath, "r") as fh:
+                        self.sampleLines = self.processSampleLine(fh)
+                except UnicodeDecodeError:
+                    # in case the file can't be read with the default encoding, fall back to latin-1
+                    with open(self.filePath, "r", encoding="latin-1") as fh:
+                        self.sampleLines = self.processSampleLine(fh)
                 self.sampleDict = []
                 for line in self.sampleLines:
                     if line == "\n":
diff --git a/splunk_eventgen/lib/eventgentimer.py b/splunk_eventgen/lib/eventgentimer.py
index 9a444e03..181eb730 100644
--- a/splunk_eventgen/lib/eventgentimer.py
+++ b/splunk_eventgen/lib/eventgentimer.py
@@ -1,9 +1,7 @@
-import copy
+import datetime
 import time
-from queue import Full

 from splunk_eventgen.lib.logging_config import logger
-from splunk_eventgen.lib.timeparser import timeParserTimeMath


 class Timer(object):
@@ -58,7 +56,11 @@ def __init__(
             rater_class = self.config.getPlugin(
                 "rater."
+ self.sample.rater, self.sample ) + backrater_class = self.config.getPlugin("rater.backfill", self.sample) + perdayrater_class = self.config.getPlugin("rater.perdayvolume", self.sample) self.rater = rater_class(self.sample) + self.backrater = backrater_class(self.sample) + self.perdayrater = perdayrater_class(self.sample) self.generatorPlugin = self.config.getPlugin( "generator." + self.sample.generator, self.sample ) @@ -97,7 +99,7 @@ def predict_event_size(self): else: return total_len / sample_count - def run(self): + def run(self, futures_pool=None): """ Simple wrapper method to determine whether we should be running inside python's profiler or not """ @@ -124,9 +126,8 @@ def real_run(self): time.sleep(self.sample.delay) logger.debug("Timer creating plugin for '%s'" % self.sample.name) - + local_time = datetime.datetime.now() end = False - previous_count_left = 0 raw_event_size = self.predict_event_size() if self.end: if int(self.end) == 0: @@ -141,174 +142,86 @@ def real_run(self): % self.sample.name ) while not end: - # Need to be able to stop threads by the main thread or this thread. self.config will stop all threads - # referenced in the config object, while, self.stopping will only stop this one. - if self.config.stopping or self.stopping: - end = True - continue - count = self.rater.rate() - # First run of the generator, see if we have any backfill work to do. - if self.countdown <= 0: - if self.sample.backfill and not self.sample.backfilldone: - realtime = self.sample.now(realnow=True) - if "-" in self.sample.backfill[0]: - mathsymbol = "-" - else: - mathsymbol = "+" - backfillnumber = "" - backfillletter = "" - for char in self.sample.backfill: - if char.isdigit(): - backfillnumber += char - elif char != "-": - backfillletter += char - backfillearliest = timeParserTimeMath( - plusminus=mathsymbol, - num=backfillnumber, - unit=backfillletter, - ret=realtime, - ) - while backfillearliest < realtime: - if self.end and self.executions == int(self.end): - logger.info( - "End executions %d reached, ending generation of sample '%s'" - % (int(self.end), self.sample.name) - ) - break - et = backfillearliest - lt = timeParserTimeMath( - plusminus="+", num=self.interval, unit="s", ret=et - ) - copy_sample = copy.copy(self.sample) - tokens = copy.deepcopy(self.sample.tokens) - copy_sample.tokens = tokens - genPlugin = self.generatorPlugin(sample=copy_sample) - # need to make sure we set the queue right if we're using multiprocessing or thread modes - genPlugin.updateConfig( - config=self.config, outqueue=self.outputQueue + try: + # Need to be able to stop threads by the main thread or this thread. self.config will stop all threads + # referenced in the config object, while, self.stopping will only stop this one. + if self.config.stopping or self.stopping: + end = True + self.rater.update_options( + config=self.config, + sample=self.sample, + generatorQueue=self.generatorQueue, + outputQueue=self.outputQueue, + outputPlugin=self.outputPlugin, + generatorPlugin=self.generatorPlugin, + ) + count = self.rater.rate() + # First run of the generator, see if we have any backfill work to do. 
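The rewritten loop below hands all queueing work to the rater plugins through an `update_options`/`rate`/`queue_it` contract. A minimal sketch of an object shaped to accept those calls follows; the attribute handling and the bodies of `rate` and `queue_it` are assumptions for illustration, the real raters live under `splunk_eventgen/lib/plugins/rater`:

```python
import queue


class SketchRater:
    """Bare-bones stand-in for a rater plugin driven by Timer.real_run."""

    def __init__(self, sample):
        self.sample = sample
        self.generatorQueue = None

    def update_options(self, **options):
        # the timer passes config, sample, generatorQueue, outputQueue,
        # outputPlugin, generatorPlugin, and sometimes samplerater/raweventsize
        for name, value in options.items():
            setattr(self, name, value)

    def rate(self):
        # placeholder: a real rater scales the sample's count by its own logic
        return getattr(self.sample, "count", 1)

    def queue_it(self, count):
        # placeholder: hand a generation task for `count` events to the queue
        self.generatorQueue.put((self.sample, count))


rater = SketchRater(sample=None)
rater.update_options(generatorQueue=queue.Queue())
rater.queue_it(rater.rate())
```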
+ if self.countdown <= 0: + if self.sample.backfill and not self.sample.backfilldone: + self.backrater.update_options( + config=self.config, + sample=self.sample, + generatorQueue=self.generatorQueue, + outputQueue=self.outputQueue, + outputPlugin=self.outputPlugin, + generatorPlugin=self.generatorPlugin, + samplerater=self.rater, ) - genPlugin.updateCounts(count=count, start_time=et, end_time=lt) - try: - self.generatorQueue.put(genPlugin, True, 3) - self.executions += 1 - backfillearliest = lt - except Full: - logger.warning( - "Generator Queue Full. Reput the backfill generator task later." - " %d backfill generators are dispatched.", - self.executions, - ) - backfillearliest = et - realtime = self.sample.now(realnow=True) - - self.sample.backfilldone = True - else: - # 12/15/13 CS Moving the rating to a separate plugin architecture - # Save previous interval count left to avoid perdayvolumegenerator drop small tasks - if self.sample.generator == "perdayvolumegenerator": - count = self.rater.rate() + previous_count_left - if 0 < count < raw_event_size: - logger.info( - "current interval size is {}, which is smaller than a raw event size {}.".format( - count, raw_event_size - ) - + "Wait for the next turn." - ) - previous_count_left = count - self.countdown = self.interval - self.executions += 1 - continue - else: - previous_count_left = 0 + self.backrater.queue_it(count) else: - count = self.rater.rate() - - et = self.sample.earliestTime() - lt = self.sample.latestTime() - - try: - if count < 1 and count != -1: - logger.info( - "There is no data to be generated in worker {0} because the count is {1}.".format( - self.sample.config.generatorWorkers, count - ) + if self.sample.generator == "perdayvolumegenerator": + self.perdayrater.update_options( + config=self.config, + sample=self.sample, + generatorQueue=self.generatorQueue, + outputQueue=self.outputQueue, + outputPlugin=self.outputPlugin, + generatorPlugin=self.generatorPlugin, + samplerater=self.rater, + raweventsize=raw_event_size, ) - else: - # Spawn workers at the beginning of job rather than wait for next interval - logger.info( - "Starting '%d' generatorWorkers for sample '%s'" - % ( - self.sample.config.generatorWorkers, - self.sample.name, - ) - ) - for worker_id in range(self.config.generatorWorkers): - copy_sample = copy.copy(self.sample) - tokens = copy.deepcopy(self.sample.tokens) - copy_sample.tokens = tokens - genPlugin = self.generatorPlugin(sample=copy_sample) - # Adjust queue for threading mode - genPlugin.updateConfig( - config=self.config, outqueue=self.outputQueue - ) - genPlugin.updateCounts( - count=count, start_time=et, end_time=lt - ) - - try: - self.generatorQueue.put(genPlugin) - logger.debug( - ( - "Worker# {0}: Put {1} MB of events in queue for sample '{2}'" - + "with et '{3}' and lt '{4}'" - ).format( - worker_id, - round((count / 1024.0 / 1024), 4), - self.sample.name, - et, - lt, - ) - ) - except Full: - logger.warning( - "Generator Queue Full. Skipping current generation." 
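`raw_event_size` in this loop comes from `predict_event_size`, which averages the raw length of the loaded sample so that the perdayvolume rater can postpone intervals whose byte budget is smaller than a single event. A hypothetical standalone version of that estimate:

```python
def predict_event_size(sample_lines):
    """Average raw event length of a loaded sample (standalone sketch of
    Timer.predict_event_size, which reads self.sample.sampleDict instead)."""
    if not sample_lines:
        return 0
    return sum(len(line) for line in sample_lines) / len(sample_lines)


print(predict_event_size(["short event\n", "a somewhat longer event\n"]))  # 18.0
```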
-                            )
-                            self.executions += 1
-                except Exception as e:
-                    logger.exception(str(e))
-                    if self.stopping:
-                        end = True
-                    pass
-
-            # Sleep until we're supposed to wake up and generate more events
+                        self.perdayrater.rate()
+                        self.perdayrater.queue_it(count)
+                    self.rater.queue_it(count)
+                self.countdown = self.interval
+                self.executions += 1
+
+            except Exception as e:
+                logger.exception(str(e))
+                if self.stopping:
+                    end = True
+                pass
+
+            # Sleep until we're supposed to wake up and generate more events
+            if self.countdown == 0:
                 self.countdown = self.interval
-            # 8/20/15 CS Adding support for ending generation at a certain time
-
-            if self.end:
-                if int(self.end) == -1:
-                    time.sleep(self.time)
-                    self.countdown -= self.time
-                    continue
-                # 3/16/16 CS Adding support for ending on a number of executions instead of time
-                # Should be fine with storing state in this sample object since each sample has it's own unique
-                # timer thread
-                if not self.endts:
-                    if self.executions >= int(self.end):
-                        logger.info(
-                            "End executions %d reached, ending generation of sample '%s'"
-                            % (int(self.end), self.sample.name)
-                        )
-                        self.stopping = True
-                        end = True
-                elif lt >= self.endts:
+            # 8/20/15 CS Adding support for ending generation at a certain time
+
+            if self.end:
+                if int(self.end) == -1:
+                    time.sleep(self.time)
+                    self.countdown -= self.time
+                    continue
+                # 3/16/16 CS Adding support for ending on a number of executions instead of time
+                # Should be fine with storing state in this sample object since each sample has its own unique
+                # timer thread
+                if not self.endts:
+                    if self.executions >= int(self.end):
                         logger.info(
-                            "End Time '%s' reached, ending generation of sample '%s'"
-                            % (self.sample.endts, self.sample.name)
+                            "End executions %d reached, ending generation of sample '%s'"
+                            % (int(self.end), self.sample.name)
                         )
                         self.stopping = True
                         end = True
+                elif datetime.datetime.now() >= self.endts:
+                    logger.info(
+                        "End Time '%s' reached, ending generation of sample '%s'"
+                        % (self.sample.endts, self.sample.name)
+                    )
+                    self.stopping = True
+                    end = True

-            else:
-                time.sleep(self.time)
-                self.countdown -= self.time
+            time.sleep(self.time)
+            self.countdown -= self.time
diff --git a/splunk_eventgen/lib/eventgentimestamp.py b/splunk_eventgen/lib/eventgentimestamp.py
index 19554a46..734c2e21 100644
--- a/splunk_eventgen/lib/eventgentimestamp.py
+++ b/splunk_eventgen/lib/eventgentimestamp.py
@@ -46,11 +46,11 @@ def get_random_timestamp_backfill(earliest, latest, sample_earliest, sample_late
     pivot_time = earliest_in_epoch
-    sample_earliest_in_seconds = EventgenTimestamp._convert_time_difference_to_seconds(
-        sample_earliest
+    sample_earliest_in_seconds = (
+        EventgenTimestamp._convert_time_difference_to_seconds(sample_earliest)
     )
-    sample_latest_in_seconds = EventgenTimestamp._convert_time_difference_to_seconds(
-        sample_latest
+    sample_latest_in_seconds = (
+        EventgenTimestamp._convert_time_difference_to_seconds(sample_latest)
     )
     earliest_pivot_time = pivot_time + sample_earliest_in_seconds
diff --git a/splunk_eventgen/lib/eventgentoken.py b/splunk_eventgen/lib/eventgentoken.py
index 5feb5110..49b27050 100644
--- a/splunk_eventgen/lib/eventgentoken.py
+++ b/splunk_eventgen/lib/eventgentoken.py
@@ -6,11 +6,12 @@
 import random
 import re
 import time
-import urllib.error
-import urllib.parse
-import urllib.request
 import uuid

+import six.moves.urllib.error
+import six.moves.urllib.parse
+import six.moves.urllib.request
+
 from splunk_eventgen.lib.logging_config import logger
 from splunk_eventgen.lib.timeparser import timeDelta2secs
@@ -380,7
+381,9 @@ def _getReplacement( replacement += chr(random.randint(33, 126)) # Practice safe strings replacement = re.sub( - "%[0-9a-fA-F]+", "", urllib.parse.quote(replacement) + "%[0-9a-fA-F]+", + "", + six.moves.urllib.parse.quote(replacement), ) return replacement diff --git a/splunk_eventgen/lib/generatorplugin.py b/splunk_eventgen/lib/generatorplugin.py index 195e72ea..2f72fa51 100644 --- a/splunk_eventgen/lib/generatorplugin.py +++ b/splunk_eventgen/lib/generatorplugin.py @@ -2,13 +2,13 @@ import pprint import random import time -import urllib.error -import urllib.parse -import urllib.request from xml.dom import minidom from xml.parsers.expat import ExpatError import httplib2 +import six.moves.urllib.error +import six.moves.urllib.parse +import six.moves.urllib.request from splunk_eventgen.lib.eventgenoutput import Output from splunk_eventgen.lib.eventgentimestamp import EventgenTimestamp @@ -153,7 +153,7 @@ def setupBackfill(self): s.backfillSearchUrl + "/services/search/jobs", "POST", headers={"Authorization": "Splunk %s" % s.sessionKey}, - body=urllib.parse.urlencode( + body=six.moves.urllib.parse.urlencode( { "search": s.backfillSearch, "earliest_time": s.backfill, diff --git a/splunk_eventgen/lib/logging_config/__init__.py b/splunk_eventgen/lib/logging_config/__init__.py index 2564a769..c58ad603 100644 --- a/splunk_eventgen/lib/logging_config/__init__.py +++ b/splunk_eventgen/lib/logging_config/__init__.py @@ -22,13 +22,11 @@ "filters": {}, "handlers": { "console": { - "level": DEFAULT_LOGGING_LEVEL, "class": "logging.StreamHandler", "formatter": "default", }, "eventgen_main": { "class": "logging.handlers.RotatingFileHandler", - "level": DEFAULT_LOGGING_LEVEL, "formatter": "default", "filters": [], "maxBytes": 1024 * 1024, @@ -36,7 +34,6 @@ }, "eventgen_controller": { "class": "logging.handlers.RotatingFileHandler", - "level": DEFAULT_LOGGING_LEVEL, "formatter": "default", "filters": [], "maxBytes": 1024 * 1024, @@ -44,7 +41,6 @@ }, "eventgen_httpevent": { "class": "logging.handlers.RotatingFileHandler", - "level": DEFAULT_LOGGING_LEVEL, "formatter": "default", "filters": [], "maxBytes": 1024 * 1024, @@ -60,7 +56,6 @@ }, "eventgen_metrics": { "class": "logging.handlers.RotatingFileHandler", - "level": DEFAULT_LOGGING_LEVEL, "formatter": "default", "filters": [], "maxBytes": 1024 * 1024, @@ -68,7 +63,6 @@ }, "eventgen_server": { "class": "logging.handlers.RotatingFileHandler", - "level": DEFAULT_LOGGING_LEVEL, "formatter": "default", "filters": [], "maxBytes": 1024 * 1024, @@ -77,13 +71,13 @@ }, "loggers": { "eventgen": { - "handlers": ["eventgen_main"], + "handlers": ["eventgen_main", "eventgen_error"], "level": DEFAULT_LOGGING_LEVEL, "propagate": False, }, "eventgen_metrics": { "handlers": ["eventgen_metrics"], - "level": DEFAULT_LOGGING_LEVEL, + "level": "INFO", "propagate": False, }, "eventgen_server": { diff --git a/splunk_eventgen/lib/outputplugin.py b/splunk_eventgen/lib/outputplugin.py index 4dd30cbb..049468d5 100644 --- a/splunk_eventgen/lib/outputplugin.py +++ b/splunk_eventgen/lib/outputplugin.py @@ -1,6 +1,6 @@ from collections import deque -from splunk_eventgen.lib.logging_config import logger +from splunk_eventgen.lib.logging_config import logger, metrics_logger class OutputPlugin(object): @@ -41,6 +41,9 @@ def run(self): self.output_counter.collect( len(self.events), sum([len(e["_raw"]) for e in self.events]) ) + metrics_logger.info( + "Current Counts: {0}".format(self.output_counter.__dict__) + ) self.events = None self._output_end() diff --git 
a/splunk_eventgen/lib/plugins/__init__.py b/splunk_eventgen/lib/plugins/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/splunk_eventgen/lib/plugins/generator/__init__.py b/splunk_eventgen/lib/plugins/generator/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/splunk_eventgen/lib/plugins/generator/counter.py b/splunk_eventgen/lib/plugins/generator/counter.py new file mode 100644 index 00000000..f3d47b32 --- /dev/null +++ b/splunk_eventgen/lib/plugins/generator/counter.py @@ -0,0 +1,157 @@ +import datetime +from datetime import timedelta + +from splunk_eventgen.lib.generatorplugin import GeneratorPlugin +from splunk_eventgen.lib.logging_config import logger + + +class CounterGenerator(GeneratorPlugin): + validSettings = ["count_template", "start_count", "end_count", "count_by"] + defaultableSettings = ["count_template", "start_count", "end_count", "count_by"] + + def __init__(self, sample): + GeneratorPlugin.__init__(self, sample) + self.start_count = 0.0 + self.end_count = 0.0 + self.count_by = 1.0 + self.count_template = ( + "{event_ts}-0700 Counter for sample:{samplename}, " + + "Now processing event counting {loop_count} of {max_loop} cycles. Counter Values:" + + " Start_Count: {start_count} Current_Counter:{current_count}" + + " End_Count:{end_count} Counting_By: {count_by}" + ) + + def update_start_count(self, target): + try: + if "." in target: + self.start_count = round(float(target), 5) + else: + self.start_count = int(target) + except Exception: + logger.warn( + "Failed setting start count to {0}. Make sure start_count is an int/float".format( + target + ) + ) + logger.warn("Setting start_count to 0") + self.start_count = 0 + + def update_end_count(self, target): + try: + if "." in target: + self.end_count = round(float(target), 5) + else: + self.end_count = int(target) + + except Exception: + logger.warn( + "Failed setting end count to {0}. Make sure end_count is an int/float".format( + target + ) + ) + logger.warn("Setting end_count to 0") + self.end_count = 0.0 + + def update_count_by(self, target): + try: + if "." in target: + self.count_by = round(float(target), 5) + else: + self.count_by = int(target) + except Exception: + logger.warn( + "Failed setting count_by to {0}. 
Make sure count_by is an int/float".format(
+                    target
+                )
+            )
+            logger.warn("Setting count_by to 1")
+            self.count_by = 1.0
+
+    def update_count_template(self, target):
+        self.count_template = str(target)
+
+    def gen(self, count, earliest, latest, samplename=None):
+        try:
+            if hasattr(self._sample, "start_count"):
+                self.update_start_count(self._sample.start_count)
+            if hasattr(self._sample, "end_count"):
+                self.update_end_count(self._sample.end_count)
+            if hasattr(self._sample, "count_by"):
+                self.update_count_by(self._sample.count_by)
+            if hasattr(self._sample, "count_template"):
+                self.update_count_template(self._sample.count_template)
+            # count if not supplied is set to -1
+            if count < 0:
+                # if the user didn't supply end_count and they didn't supply a count, just take a guess they want the
+                # default, assuming that start_count is larger than the end_count (counting backwards)
+                if not self.end_count and not self.start_count > self.end_count:
+                    logger.warn(
+                        "Sample size not found for count=-1 and generator=splitcounter, defaulting to count=60"
+                    )
+                    self.update_end_count(60)
+                    count = 1
+                else:
+                    count = 1
+            elif not self.end_count and count != 1:
+                self.update_end_count(count)
+                count = 1
+            # if the end_count is lower than start_count, check if they want to count backwards. Some people might not
+            # want to do math, so if end_count is lower, but they want to count by a positive number, instead assume
+            # they are trying to say "start at number x, count by y, and end after z cycles of y".
+            if self.end_count < self.start_count:
+                if self.count_by > 0:
+                    logger.warn(
+                        "end_count lower than start_count. Assuming you want start_count + end_count"
+                    )
+                    self.end_count = self.start_count + self.end_count
+                elif self.count_by == 0:
+                    logger.warn("Can't count by 0, assuming 1 instead.")
+                    self.count_by = 1
+            countdiff = abs(self.end_count - self.start_count)
+            time_interval = timedelta.total_seconds((latest - earliest)) / countdiff
+            for i in range(count):
+                current_count = self.start_count
+                while current_count != self.end_count:
+                    current_time_object = earliest + datetime.timedelta(
+                        0, time_interval * (current_count + 1)
+                    )
+                    msg = self.count_template.format(
+                        samplename=samplename,
+                        event_ts=current_time_object,
+                        loop_count=i + 1,
+                        max_loop=count,
+                        start_count=self.start_count,
+                        current_count=current_count,
+                        end_count=self.end_count,
+                        count_by=self.count_by,
+                    )
+                    self._out.send(msg)
+                    if isinstance(current_count, float) or isinstance(self.count_by, float):
+                        current_count = round(current_count + self.count_by, 5)
+                    else:
+                        current_count = current_count + self.count_by
+                # Since the while loop counts both directions, we end when they are equal;
+                # however, we need to make sure we don't forget to run the last iteration
+                else:
+                    current_time_object = earliest + datetime.timedelta(
+                        0, time_interval * (current_count + 1)
+                    )
+                    msg = self.count_template.format(
+                        samplename=samplename,
+                        event_ts=current_time_object,
+                        loop_count=i + 1,
+                        max_loop=count,
+                        start_count=self.start_count,
+                        current_count=current_count,
+                        end_count=self.end_count,
+                        count_by=self.count_by,
+                    )
+                    self._out.send(msg)
+            self._out.flush()
+            return 0
+        except Exception as e:
+            raise e
+
+
+def load():
+    return CounterGenerator
diff --git a/splunk_eventgen/lib/plugins/generator/jinja.py b/splunk_eventgen/lib/plugins/generator/jinja.py
index d9464bd5..d5d6e27e 100644
--- a/splunk_eventgen/lib/plugins/generator/jinja.py
+++ b/splunk_eventgen/lib/plugins/generator/jinja.py
@@ -192,7 +192,9 @@ def _increment_count(self,
diff --git a/splunk_eventgen/lib/plugins/generator/jinja.py b/splunk_eventgen/lib/plugins/generator/jinja.py index d9464bd5..d5d6e27e 100644 --- a/splunk_eventgen/lib/plugins/generator/jinja.py +++ b/splunk_eventgen/lib/plugins/generator/jinja.py @@ -192,7 +192,9 @@ def _increment_count(self, lines): self.current_count = self.current_count + 1 else: raise Exception( - f"Unable to process target count style: {self.jinja_count_type}" + "Unable to process target count style: {0}".format( + self.jinja_count_type + ) ) def gen(self, count, earliest, latest, samplename=None):
diff --git a/splunk_eventgen/lib/plugins/generator/replay.py b/splunk_eventgen/lib/plugins/generator/replay.py index 0743f7db..e4a773cc 100644 --- a/splunk_eventgen/lib/plugins/generator/replay.py +++ b/splunk_eventgen/lib/plugins/generator/replay.py @@ -2,7 +2,6 @@ import datetime import time -from splunk_eventgen.lib.eventgentimestamp import EventgenTimestamp from splunk_eventgen.lib.generatorplugin import GeneratorPlugin from splunk_eventgen.lib.logging_config import logger @@ -21,61 +20,58 @@ def __init__(self, sample): self._currentevent = 0 self._timeSinceSleep = datetime.timedelta() self._times = [] + self.replayLock = None - def set_time_and_send(self, rpevent, event_time, earliest, latest): - # temporary time append - rpevent["_raw"] = rpevent["_raw"][:-1] - rpevent["_time"] = (event_time - datetime.datetime(1970, 1, 1)).total_seconds() + def updateConfig(self, config, outqueue, replayLock=None): + super(ReplayGenerator, self).updateConfig(config, outqueue) + self.replayLock = replayLock - event = rpevent["_raw"] + def set_time_and_tokens(self, replayed_event, event_time, earliest, latest): + send_event = {} + # temporary time append + send_event["_raw"] = replayed_event["_raw"][:-1] + send_event["host"] = replayed_event["host"] + send_event["source"] = replayed_event["source"] + send_event["sourcetype"] = replayed_event["sourcetype"] + send_event["index"] = replayed_event["index"] + send_event["_time"] = ( + event_time - datetime.datetime(1970, 1, 1) + ).total_seconds() # Maintain state for every token in a given event # Hash contains keys for each file name which is assigned a list of values # picked from a random line in that file - mvhash = {} + mvhash = dict() # Iterate tokens + eventraw = replayed_event["_raw"] for token in self._sample.tokens: token.mvhash = mvhash if token.replacementType in ["timestamp", "replaytimestamp"]: - event = token.replace( - event, et=event_time, lt=event_time, s=self._sample + eventraw = token.replace( + eventraw, et=event_time, lt=event_time, s=self._sample ) else: - event = token.replace(event, s=self._sample) + eventraw = token.replace(eventraw, s=self._sample) if self._sample.hostToken: # clear the host mvhash every time, because we need to re-randomize it self._sample.hostToken.mvhash = {} - host = rpevent["host"] + host = replayed_event["host"] if self._sample.hostToken: - rpevent["host"] = self._sample.hostToken.replace(host, s=self._sample) - - rpevent["_raw"] = event - self._out.bulksend([rpevent]) - - def gen(self, count, earliest, latest, samplename=None): - # 9/8/15 CS Check to make sure we have events to replay - self._sample.loadSample() - previous_event = None - previous_event_timestamp = None - self.current_time = self._sample.now() - - # If backfill exists, calculate the start of the backfill time relative to the current time.
- # Otherwise, backfill time equals to the current time - self.backfill_time = self._sample.get_backfill_time(self.current_time) + send_event["host"] = self._sample.hostToken.replace(host, s=self._sample) - if not self._sample.backfill or self._sample.backfilldone: - self.backfill_time = EventgenTimestamp.get_random_timestamp_backfill( - earliest, latest, self._sample.earliest, self._sample.latest - ) + send_event["_raw"] = eventraw + return send_event + def load_sample_file(self): + line_list = [] for line in self._sample.get_loaded_sample(): # Add newline to a raw line if necessary try: if line["_raw"][-1] != "\n": line["_raw"] += "\n" - + current_event_timestamp = False index = line.get("index", self._sample.index) host = line.get("host", self._sample.host) hostRegex = line.get("hostRegex", self._sample.hostRegex) @@ -101,17 +97,17 @@ def gen(self, count, earliest, latest, samplename=None): "source": self._sample.source, "sourcetype": self._sample.sourcetype, } - - # If timestamp doesn't exist, the sample file should be fixed to include timestamp for every event. try: current_event_timestamp = self._sample.getTSFromEvent( rpevent[self._sample.timeField] ) + rpevent["base_time"] = current_event_timestamp except Exception: try: current_event_timestamp = self._sample.getTSFromEvent( line[self._sample.timeField] ) + rpevent["base_time"] = current_event_timestamp except Exception: try: logger.error( except Exception: logger.exception("Extracting timestamp from an event failed.") continue + line_list.append(rpevent) + # now iterate the list once and figure out the time delta of every event + current_event = None + previous_event = None + for index, line in enumerate(line_list): + current_event = line + # if it's the first event, there is no previous event. + if index == 0: + previous_event = current_event + else: + previous_event = line_list[index - 1] + # Refer to the last event to calculate the new backfill time + time_difference = ( + current_event["base_time"] - previous_event["base_time"] + ) * self._sample.timeMultiple + current_event["timediff"] = time_difference + return line_list - # Always flush the first event + def gen(self, count, earliest, latest, samplename=None): + # 9/8/15 CS Check to make sure we have events to replay + self._sample.loadSample() + self.current_time = self._sample.now() + line_list = self.load_sample_file() + # If backfill exists, calculate the start of the backfill time relative to the current time.
+ # Otherwise, backfill time equals to the current time + self.backfill_time = self._sample.get_backfill_time(self.current_time) + # if we have backfill, replay the events backwards until we hit the backfill + if self.backfill_time != self.current_time and not self._sample.backfilldone: + backfill_count_time = self.current_time + current_backfill_index = len(line_list) - 1 + backfill_events = [] + while backfill_count_time >= self.backfill_time: + rpevent = line_list[current_backfill_index] + backfill_count_time = backfill_count_time - rpevent["timediff"] + backfill_events.append( + self.set_time_and_tokens( + rpevent, backfill_count_time, earliest, latest + ) + ) + current_backfill_index -= 1 + if current_backfill_index < 0: + current_backfill_index = len(line_list) - 1 + backfill_events.reverse() + self._out.bulksend(backfill_events) + self._sample.backfilldone = True + previous_event = None + for index, rpevent in enumerate(line_list): if previous_event is None: - previous_event = rpevent - previous_event_timestamp = current_event_timestamp - self.set_time_and_send(rpevent, self.backfill_time, earliest, latest) + current_event = self.set_time_and_tokens( + rpevent, self.backfill_time, earliest, latest + ) + previous_event = current_event + previous_event_timediff = rpevent["timediff"] + self._out.bulksend([current_event]) continue - - # Refer to the last event to calculate the new backfill time - time_difference = datetime.timedelta( - seconds=( - current_event_timestamp - previous_event_timestamp - ).total_seconds() - * self._sample.timeMultiple - ) - - if self.backfill_time + time_difference >= self.current_time: - sleep_time = time_difference - (self.current_time - self.backfill_time) - if self._sample.backfill and not self._sample.backfilldone: - time.sleep(sleep_time.seconds) - self.current_time += sleep_time - self.backfill_time = self.current_time - else: - self.backfill_time += time_difference + try: + time.sleep(previous_event_timediff.total_seconds()) + except ValueError: + logger.error( + "Can't sleep for negative time, please make sure your events are in time order." 
+ "see line Number{0}".format(index) + ) + logger.error("Event: {0}".format(rpevent)) + pass + current_time = datetime.datetime.now() previous_event = rpevent - previous_event_timestamp = current_event_timestamp - self.set_time_and_send(rpevent, self.backfill_time, earliest, latest) - + previous_event_timediff = rpevent["timediff"] + send_event = self.set_time_and_tokens( + rpevent, current_time, earliest, latest + ) + self._out.bulksend([send_event]) self._out.flush(endOfInterval=True) return diff --git a/splunk_eventgen/lib/plugins/output/__init__.py b/splunk_eventgen/lib/plugins/output/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/splunk_eventgen/lib/plugins/output/awss3.py b/splunk_eventgen/lib/plugins/output/awss3.py index 820e0261..2672a864 100644 --- a/splunk_eventgen/lib/plugins/output/awss3.py +++ b/splunk_eventgen/lib/plugins/output/awss3.py @@ -182,8 +182,8 @@ def _transmitEvents(self, payloadstring): ) logger.debug("Uploading %d events into s3 key: %s " % (len(records), s3keyname)) if self.awsS3compressiontype == "gz": - import io import gzip + import io out = io.StringIO() with gzip.GzipFile(fileobj=out, mode="w") as f: diff --git a/splunk_eventgen/lib/plugins/output/counter.py b/splunk_eventgen/lib/plugins/output/counter.py index f91a2ad0..abe14400 100755 --- a/splunk_eventgen/lib/plugins/output/counter.py +++ b/splunk_eventgen/lib/plugins/output/counter.py @@ -1,3 +1,5 @@ +from __future__ import print_function + import datetime import pprint import sys diff --git a/splunk_eventgen/lib/plugins/output/httpevent_core.py b/splunk_eventgen/lib/plugins/output/httpevent_core.py index 95f22449..191565a2 100644 --- a/splunk_eventgen/lib/plugins/output/httpevent_core.py +++ b/splunk_eventgen/lib/plugins/output/httpevent_core.py @@ -1,17 +1,20 @@ import random -import urllib.error -import urllib.parse -import urllib.request + +import six.moves.urllib.error +import six.moves.urllib.parse +import six.moves.urllib.request from splunk_eventgen.lib.logging_config import logger from splunk_eventgen.lib.outputplugin import OutputPlugin try: + from concurrent.futures import ThreadPoolExecutor + import requests from requests import Session from requests_futures.sessions import FuturesSession - from concurrent.futures import ThreadPoolExecutor -except: + +except ImportError: pass try: import ujson as json @@ -68,7 +71,7 @@ def _urlencode(value): :param value: string :return: urlencoded string """ - return urllib.parse.quote(value) + return six.moves.urllib.parse.quote(value) @staticmethod def _bg_convert_json(sess, resp): diff --git a/splunk_eventgen/lib/plugins/output/scsout.py b/splunk_eventgen/lib/plugins/output/scsout.py index 2f4e2185..b0fc2a6c 100644 --- a/splunk_eventgen/lib/plugins/output/scsout.py +++ b/splunk_eventgen/lib/plugins/output/scsout.py @@ -93,7 +93,7 @@ def flush(self, events): self.scsRenewToken = False self.header = { - "Authorization": f"Bearer {self.scsAccessToken}", + "Authorization": "Bearer {0}".format(self.scsAccessToken), "Content-Type": "application/json", } @@ -106,7 +106,7 @@ def flush(self, events): } for i in range(self.scsRetryNum + 1): - logger.debug(f"Sending data to the scs endpoint. Num:{i}") + logger.debug("Sending data to the scs endpoint. 
Num:{0}".format(i)) self._sendHTTPEvents(events) if not self.checkResults(): @@ -128,7 +128,9 @@ def checkResults(self): return False elif response.status_code != 200: logger.error( - f"Data transmisison failed with {response.status_code} and {response.text}" + "Data transmisison failed with {0} and {1}".format( + response.status_code, response.text + ) ) return False logger.debug("Data transmission successful") diff --git a/splunk_eventgen/lib/plugins/output/splunkstream.py b/splunk_eventgen/lib/plugins/output/splunkstream.py index f1088b69..797595a3 100644 --- a/splunk_eventgen/lib/plugins/output/splunkstream.py +++ b/splunk_eventgen/lib/plugins/output/splunkstream.py @@ -1,11 +1,11 @@ import http.client -import urllib.error -import urllib.parse -import urllib.request from collections import deque from xml.dom import minidom import httplib2 +import six.moves.urllib.error +import six.moves.urllib.parse +import six.moves.urllib.request from splunk_eventgen.lib.logging_config import logger from splunk_eventgen.lib.outputplugin import OutputPlugin @@ -71,7 +71,7 @@ def __init__(self, sample, output_counter=None): self._splunkUrl + "/services/auth/login", "POST", headers={}, - body=urllib.parse.urlencode( + body=six.moves.urllib.parse.urlencode( {"username": self._splunkUser, "password": self._splunkPass} ), )[1] @@ -155,7 +155,7 @@ def flush(self, q): if host: urlparams.append(("host", host)) url = "/services/receivers/simple?%s" % ( - urllib.parse.urlencode(urlparams) + six.moves.urllib.parse.urlencode(urlparams) ) headers = { "Authorization": "Splunk %s" % self._sample.sessionKey diff --git a/splunk_eventgen/lib/plugins/rater/__init__.py b/splunk_eventgen/lib/plugins/rater/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/splunk_eventgen/lib/plugins/rater/backfill.py b/splunk_eventgen/lib/plugins/rater/backfill.py new file mode 100644 index 00000000..a3d7821e --- /dev/null +++ b/splunk_eventgen/lib/plugins/rater/backfill.py @@ -0,0 +1,74 @@ +from queue import Full + +from splunk_eventgen.lib.logging_config import logger +from splunk_eventgen.lib.plugins.rater.config import ConfigRater +from splunk_eventgen.lib.timeparser import timeParserTimeMath + + +class BackfillRater(ConfigRater): + name = "BackfillRater" + stopping = False + + def __init__(self, sample): + super(BackfillRater, self).__init__(sample) + logger.debug( + "Starting BackfillRater for %s" % sample.name + if sample is not None + else "None" + ) + self.sample = sample + self.samplerater = None + + def queue_it(self, count): + try: + realtime = self.sample.now(realnow=True) + if "-" in self.sample.backfill[0]: + mathsymbol = "-" + else: + mathsymbol = "+" + backfillnumber = "" + backfillletter = "" + for char in self.sample.backfill: + if char.isdigit(): + backfillnumber += char + elif char != "-": + backfillletter += char + backfillearliest = timeParserTimeMath( + plusminus=mathsymbol, + num=backfillnumber, + unit=backfillletter, + ret=realtime, + ) + while backfillearliest < realtime: + et = backfillearliest + lt = timeParserTimeMath( + plusminus="+", num=self.sample.interval, unit="s", ret=et + ) + genPlugin = self.generatorPlugin(sample=self.sample) + genPlugin.updateCounts(count=count, start_time=et, end_time=lt) + genPlugin.updateConfig(config=self.config, outqueue=self.outputQueue) + try: + # Need to lock on replay mode since event duration is dynamic. Interval starts counting + # after the replay has finished. 
diff --git a/splunk_eventgen/lib/plugins/rater/backfill.py b/splunk_eventgen/lib/plugins/rater/backfill.py new file mode 100644 index 00000000..a3d7821e --- /dev/null +++ b/splunk_eventgen/lib/plugins/rater/backfill.py @@ -0,0 +1,74 @@ +from queue import Full + +from splunk_eventgen.lib.logging_config import logger +from splunk_eventgen.lib.plugins.rater.config import ConfigRater +from splunk_eventgen.lib.timeparser import timeParserTimeMath + + +class BackfillRater(ConfigRater): + name = "BackfillRater" + stopping = False + + def __init__(self, sample): + super(BackfillRater, self).__init__(sample) + logger.debug( + "Starting BackfillRater for %s" % sample.name + if sample is not None + else "None" + ) + self.sample = sample + self.samplerater = None + + def queue_it(self, count): + try: + realtime = self.sample.now(realnow=True) + if "-" in self.sample.backfill[0]: + mathsymbol = "-" + else: + mathsymbol = "+" + backfillnumber = "" + backfillletter = "" + for char in self.sample.backfill: + if char.isdigit(): + backfillnumber += char + elif char != "-": + backfillletter += char + backfillearliest = timeParserTimeMath( + plusminus=mathsymbol, + num=backfillnumber, + unit=backfillletter, + ret=realtime, + ) + while backfillearliest < realtime: + et = backfillearliest + lt = timeParserTimeMath( + plusminus="+", num=self.sample.interval, unit="s", ret=et + ) + genPlugin = self.generatorPlugin(sample=self.sample) + genPlugin.updateCounts(count=count, start_time=et, end_time=lt) + genPlugin.updateConfig(config=self.config, outqueue=self.outputQueue) + try: + # Need to lock on replay mode since event duration is dynamic. Interval starts counting + # after the replay has finished. + if self.sample.generator == "replay": + genPlugin.run() + else: + self.generatorQueue.put(genPlugin) + except Full: + logger.warning("Generator Queue Full. Skipping current generation.") + # due to replays needing to iterate in reverse, it's more efficient to process backfill + # after the file has been parsed. This section is to allow replay mode to take + # care of all replays on its first run, and sets backfilldone + if self.sample.generator == "replay": + backfillearliest = realtime + else: + backfillearliest = lt + if self.sample.generator != "replay": + self.sample.backfilldone = True + + except Exception as e: + logger.error("Failed queuing backfill, exception: {0}".format(e))+ + +def load(): + return BackfillRater
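The backfill string handling in queue_it above reduces to splitting a value like "-120s" into a sign, a number, and a unit before handing it to timeParserTimeMath. A minimal sketch (parse_backfill is illustrative, not part of the plugin):

def parse_backfill(backfill):
    mathsymbol = "-" if "-" in backfill[0] else "+"
    number = "".join(char for char in backfill if char.isdigit())
    unit = "".join(char for char in backfill if not char.isdigit() and char != "-")
    return mathsymbol, number, unit

print(parse_backfill("-120s"))  # ('-', '120', 's')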
Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.hourOfDayRate) == dict: - try: - rate = self._sample.hourOfDayRate[str(self._sample.now().hour)] - logger.debug( - "hourOfDayRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Hour of day rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.dayOfWeekRate) == dict: - try: - weekday = datetime.date.weekday(self._sample.now()) - if weekday == 6: - weekday = 0 - else: - weekday += 1 - rate = self._sample.dayOfWeekRate[str(weekday)] - logger.debug( - "dayOfWeekRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Hour of day rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.minuteOfHourRate) == dict: - try: - rate = self._sample.minuteOfHourRate[str(self._sample.now().minute)] + super(ConfigRater, self).__init__(sample) + + def single_queue_it(self, count): + super(ConfigRater, self).single_queue_it(count) + + def multi_queue_it(self, count): + logger.info("Entering multi-processing division of sample") + numberOfWorkers = self.config.generatorWorkers + logger.debug("Number of Workers: {0}".format(numberOfWorkers)) + # this is a redundant check, but will prevent some missed call to multi_queue without a valid setting + if bool(self.sample.splitSample): + # if split = 1, then they want to divide by number of generator workers, else use the splitSample + if self.sample.splitSample == 1: + logger.debug("SplitSample = 1, using all availible workers") + targetWorkersToUse = numberOfWorkers + else: logger.debug( - "minuteOfHourRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Minute of hour rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.dayOfMonthRate) == dict: - try: - rate = self._sample.dayOfMonthRate[str(self._sample.now().day)] - logger.debug( - "dayOfMonthRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Day of Month rate for sample '%s' failed. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.monthOfYearRate) == dict: - try: - rate = self._sample.monthOfYearRate[str(self._sample.now().month)] - logger.debug( - "monthOfYearRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Month Of Year rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) + "SplitSample != 1, using {0} workers.".format( + self.sample.splitSample + ) ) - ret = int(round(count * rateFactor, 0)) - if rateFactor != 1.0: + targetWorkersToUse = self.sample.splitSample + else: logger.debug( - "Original count: %s Rated count: %s Rate factor: %s" - % (count, ret, rateFactor) + "SplitSample set to disable multithreading for just this sample." 
) - return ret + self.single_queue_it() + currentWorkerPrepCount = 0 + remainingCount = count + targetLoopCount = int(count) / targetWorkersToUse + while currentWorkerPrepCount < targetWorkersToUse: + currentWorkerPrepCount = currentWorkerPrepCount + 1 + # check if this is the last loop, if so, add in the remainder count + if currentWorkerPrepCount < targetWorkersToUse: + remainingCount = count - targetLoopCount + else: + targetLoopCount = remainingCount + self.single_queue_it(targetLoopCount) def load(): diff --git a/splunk_eventgen/lib/plugins/rater/counter.py b/splunk_eventgen/lib/plugins/rater/counter.py new file mode 100644 index 00000000..632b7fa8 --- /dev/null +++ b/splunk_eventgen/lib/plugins/rater/counter.py @@ -0,0 +1,72 @@ +from queue import Full + +from splunk_eventgen.lib.logging_config import logger +from splunk_eventgen.lib.raterplugin import RaterPlugin + + +class CountRater(RaterPlugin): + name = "CountRater" + stopping = False + + def __init__(self, sample): + super(CountRater, self).__init__(sample) + + def single_queue_it(self, count, remaining_count=None): + """ + This method is used for specifying how to queue your rater plugin based on single process + :param count: Used to count number of events in a bundle + :return: + """ + et = self.sample.earliestTime() + lt = self.sample.latestTime() + if count < 1 and count != -1: + logger.info( + "There is no data to be generated in worker {0} because the count is {1}.".format( + self.sample.config.generatorWorkers, count + ) + ) + else: + genPlugin = self.generatorPlugin(sample=self.sample) + # Adjust queue for threading mode + genPlugin.updateConfig(config=self.config, outqueue=self.outputQueue) + genPlugin.updateCounts(count=count, start_time=et, end_time=lt) + try: + self.generatorQueue.put(genPlugin) + logger.info( + ( + "Put {0} MB of events in queue for sample '{1}'" + + "with et '{2}' and lt '{3}'" + ).format( + round((count / 1024.0 / 1024), 4), self.sample.name, et, lt + ) + ) + except Full: + logger.warning("Generator Queue Full. 
Skipping current generation.") + + def multi_queue_it(self, count): + logger.info("Entering multi-processing division of sample") + numberOfWorkers = self.config.generatorWorkers + # this is a redundant check, but will prevent some missed call to multi_queue without a valid setting + if bool(self.sample.splitSample): + # if split = 1, then they want to divide by number of generator workers, else use the splitSample + if self.sample.splitSample == 1: + targetWorkersToUse = numberOfWorkers + else: + targetWorkersToUse = self.sample.splitSample + else: + self.single_queue_it() + currentWorkerPrepCount = 0 + remainingCount = count + targetLoopCount = int(count) / targetWorkersToUse + while currentWorkerPrepCount < targetWorkersToUse: + currentWorkerPrepCount = currentWorkerPrepCount + 1 + # check if this is the last loop, if so, add in the remainder count + if currentWorkerPrepCount < targetWorkersToUse: + remainingCount = count - targetLoopCount + else: + targetLoopCount = remainingCount + self.single_queue_it(targetLoopCount) + + +def load(): + return CountRater diff --git a/splunk_eventgen/lib/plugins/rater/perdayvolume.py b/splunk_eventgen/lib/plugins/rater/perdayvolume.py index f3a670ee..7094ec23 100644 --- a/splunk_eventgen/lib/plugins/rater/perdayvolume.py +++ b/splunk_eventgen/lib/plugins/rater/perdayvolume.py @@ -1,5 +1,4 @@ -import datetime -import random +from queue import Full from splunk_eventgen.lib.logging_config import logger from splunk_eventgen.lib.plugins.rater.config import ConfigRater @@ -10,20 +9,47 @@ class PerDayVolume(ConfigRater): stopping = False def __init__(self, sample): + super(PerDayVolume, self).__init__(sample) + # Logger already setup by config, just get an instance logger.debug( "Starting PerDayVolumeRater for %s" % sample.name if sample is not None else "None" ) - self._sample = sample - self._generatorWorkers = self._sample.config.generatorWorkers + self.previous_count_left = 0 + self.raweventsize = 0 + + def queue_it(self, count): + count = count + self.previous_count_left + if 0 < count < self.raweventsize: + logger.info( + "current interval size is {}, which is smaller than a raw event size {}.".format( + count, self.raweventsize + ) + + "Wait for the next turn." + ) + self.update_options(previous_count_left=count) + else: + self.update_options(previous_count_left=0) + et = self.sample.earliestTime() + lt = self.sample.latestTime() + # self.generatorPlugin is only an instance, now we need a real plugin. Make a copy of + # of the sample in case another generator corrupts it. + genPlugin = self.generatorPlugin(sample=self.sample) + # Adjust queue for threading mode + genPlugin.updateConfig(config=self.config, outqueue=self.outputQueue) + genPlugin.updateCounts(count=count, start_time=et, end_time=lt) + try: + self.generatorQueue.put(genPlugin) + except Full: + logger.warning("Generator Queue Full. 
Skipping current generation.") def rate(self): - perdayvolume = float(self._sample.perDayVolume) / self._generatorWorkers + perdayvolume = float(self.sample.perDayVolume) # Convert perdayvolume to bytes from GB perdayvolume = perdayvolume * 1024 * 1024 * 1024 - interval = self._sample.interval - if self._sample.interval == 0: + interval = self.sample.interval + if self.sample.interval == 0: logger.debug("Running perDayVolume as if for 24hr period.") interval = 86400 logger.debug( @@ -31,120 +57,8 @@ def rate(self): ) intervalsperday = 86400 / interval perintervalvolume = perdayvolume / intervalsperday - count = self._sample.count - - # 5/8/12 CS We've requested not the whole file, so we should adjust count based on - # hourOfDay, dayOfWeek and randomizeCount configs - rateFactor = 1.0 - if self._sample.randomizeCount != 0 and self._sample.randomizeCount is not None: - try: - logger.debug( - "randomizeCount for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, self._sample.randomizeCount) - ) - # If we say we're going to be 20% variable, then that means we - # can be .1% high or .1% low. Math below does that. - randBound = round(self._sample.randomizeCount * 1000, 0) - rand = random.randint(0, randBound) - randFactor = 1 + ((-((randBound / 2) - rand)) / 1000) - logger.debug( - "randFactor for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, randFactor) - ) - rateFactor *= randFactor - except: - import traceback - - stack = traceback.format_exc() - logger.error( - "Randomize count failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.hourOfDayRate) == dict: - try: - rate = self._sample.hourOfDayRate[str(self._sample.now().hour)] - logger.debug( - "hourOfDayRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Hour of day rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.dayOfWeekRate) == dict: - try: - weekday = datetime.date.weekday(self._sample.now()) - if weekday == 6: - weekday = 0 - else: - weekday += 1 - rate = self._sample.dayOfWeekRate[str(weekday)] - logger.debug( - "dayOfWeekRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Hour of day rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.minuteOfHourRate) == dict: - try: - rate = self._sample.minuteOfHourRate[str(self._sample.now().minute)] - logger.debug( - "minuteOfHourRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Minute of hour rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.dayOfMonthRate) == dict: - try: - rate = self._sample.dayOfMonthRate[str(self._sample.now().day)] - logger.debug( - "dayOfMonthRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Day of Month rate for sample '%s' failed. 
Stacktrace %s" - % (self._sample.name, stack) - ) - if type(self._sample.monthOfYearRate) == dict: - try: - rate = self._sample.monthOfYearRate[str(self._sample.now().month)] - logger.debug( - "monthOfYearRate for sample '%s' in app '%s' is %s" - % (self._sample.name, self._sample.app, rate) - ) - rateFactor *= rate - except KeyError: - import traceback - - stack = traceback.format_exc() - logger.error( - "Month Of Year rate failed for sample '%s'. Stacktrace %s" - % (self._sample.name, stack) - ) + count = self.sample.count + rateFactor = self.adjust_rate_factor() logger.debug( "Size per interval: %s, rate factor to adjust by: %s" % (perintervalvolume, rateFactor) diff --git a/splunk_eventgen/lib/raterplugin.py b/splunk_eventgen/lib/raterplugin.py new file mode 100644 index 00000000..c45474eb --- /dev/null +++ b/splunk_eventgen/lib/raterplugin.py @@ -0,0 +1,237 @@ +from __future__ import division + +import datetime +import random +from queue import Full + +from splunk_eventgen.lib.logging_config import logger + + +class RaterPlugin(object): + name = "RaterPlugin" + stopping = False + + def __init__(self, sample): + self.sample = sample + self.config = None + self.generatorQueue = None + self.outputQueue = None + self.outputPlugin = None + self.generatorPlugin = None + self.replayLock = None + self.executions = 0 + + def __str__(self): + """Only used for debugging, outputs a pretty printed representation of this output""" + # Eliminate recursive going back to parent + # temp = dict([(key, value) for (key, value) in self.__dict__.items() if key != '_c']) + # return pprint.pformat(temp) + return "" + + def __repr__(self): + return self.__str__() + + def update_options(self, **kwargs): + allowed_attrs = [attr for attr in dir(self) if not attr.startswith("__")] + for key in kwargs: + if kwargs[key] and key in allowed_attrs: + self.__dict__.update({key: kwargs[key]}) + + def adjust_rate_factor(self): + # 5/8/12 CS We've requested not the whole file, so we should adjust count based on + # hourOfDay, dayOfWeek and randomizeCount configs + rateFactor = 1.0 + if self.sample.randomizeCount: + try: + logger.debug( + "randomizeCount for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, self.sample.randomizeCount) + ) + # If we say we're going to be 20% variable, then that means we + # can be .1% high or .1% low. Math below does that. + randBound = round(self.sample.randomizeCount * 1000, 0) + rand = random.randint(0, randBound) + randFactor = 1 + ((-((randBound / 2) - rand)) / 1000) + logger.debug( + "randFactor for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, randFactor) + ) + rateFactor *= randFactor + except: + import traceback + + stack = traceback.format_exc() + logger.error( + "Randomize count failed for sample '%s'. Stacktrace %s" + % (self.sample.name, stack) + ) + if type(self.sample.hourOfDayRate) == dict: + try: + rate = self.sample.hourOfDayRate[str(self.sample.now().hour)] + logger.debug( + "hourOfDayRate for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, rate) + ) + rateFactor *= rate + except KeyError: + import traceback + + stack = traceback.format_exc() + logger.error( + "Hour of day rate failed for sample '%s'. 
Stacktrace %s" + % (self.sample.name, stack) + ) + if type(self.sample.dayOfWeekRate) == dict: + try: + weekday = datetime.date.weekday(self.sample.now()) + if weekday == 6: + weekday = 0 + else: + weekday += 1 + rate = self.sample.dayOfWeekRate[str(weekday)] + logger.debug( + "dayOfWeekRate for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, rate) + ) + rateFactor *= rate + except KeyError: + import traceback + + stack = traceback.format_exc() + logger.error( + "Hour of day rate failed for sample '%s'. Stacktrace %s" + % (self.sample.name, stack) + ) + if type(self.sample.minuteOfHourRate) == dict: + try: + rate = self.sample.minuteOfHourRate[str(self.sample.now().minute)] + logger.debug( + "minuteOfHourRate for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, rate) + ) + rateFactor *= rate + except KeyError: + import traceback + + stack = traceback.format_exc() + logger.error( + "Minute of hour rate failed for sample '%s'. Stacktrace %s" + % (self.sample.name, stack) + ) + if type(self.sample.dayOfMonthRate) == dict: + try: + rate = self.sample.dayOfMonthRate[str(self.sample.now().day)] + logger.debug( + "dayOfMonthRate for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, rate) + ) + rateFactor *= rate + except KeyError: + import traceback + + stack = traceback.format_exc() + logger.error( + "Day of Month rate for sample '%s' failed. Stacktrace %s" + % (self.sample.name, stack) + ) + if type(self.sample.monthOfYearRate) == dict: + try: + rate = self.sample.monthOfYearRate[str(self.sample.now().month)] + logger.debug( + "monthOfYearRate for sample '%s' in app '%s' is %s" + % (self.sample.name, self.sample.app, rate) + ) + rateFactor *= rate + except KeyError: + import traceback + + stack = traceback.format_exc() + logger.error( + "Month Of Year rate failed for sample '%s'. Stacktrace %s" + % (self.sample.name, stack) + ) + return rateFactor + + def single_queue_it(self, count): + """ + This method is used for specifying how to queue your rater plugin based on single process + :param count: + :return: + """ + et = self.sample.earliestTime() + lt = self.sample.latestTime() + if count < 1 and count != -1: + logger.info( + "There is no data to be generated in worker {0} because the count is {1}.".format( + self.sample.config.generatorWorkers, count + ) + ) + else: + genPlugin = self.generatorPlugin(sample=self.sample) + # Adjust queue for threading mode + genPlugin.updateCounts(count=count, start_time=et, end_time=lt) + genPlugin.updateConfig(config=self.config, outqueue=self.outputQueue) + try: + logger.info( + ( + "Put {0} MB of events in queue for sample '{1}'" + + "with et '{2}' and lt '{3}'" + ).format( + round((count / 1024.0 / 1024), 4), self.sample.name, et, lt + ) + ) + if self.sample.generator == "replay": + # lock on to replay mode, this will keep the timer knowing when to continue cycles since + # replay mode has a dynamic replay time and interval doesn't mean the same thing. + if ( + hasattr(self.config, "outputCounter") + and self.config.outputCounter + ): + from splunk_eventgen.lib.outputcounter import OutputCounter + + output_counter = OutputCounter() + elif hasattr(self.config, "outputCounter"): + output_counter = self.config.outputCounter + genPlugin.run(output_counter=output_counter) + else: + self.generatorQueue.put(genPlugin) + except Full: + logger.warning("Generator Queue Full. 
Skipping current generation.") + + def multi_queue_it(self, count): + """ + This method is used for specifying how to queue your rater plugin based on multi-process + by default this method will just call the single_queue_it. + :param count: + :return: + """ + self.single_queue_it(count) + + def queue_it(self, count): + if self.sample.splitSample > 0: + self.multi_queue_it(count) + else: + self.single_queue_it(count) + + def rate(self): + self.sample.count = int(self.sample.count) + # Let generators handle infinite count for themselves + if self.sample.count == -1 and self.sample.generator == "default": + if not self.sample.sampleDict: + logger.error( + "No sample found for default generator, cannot generate events" + ) + self.sample.count = len(self.sample.sampleDict) + count = self.sample.count + rateFactor = self.adjust_rate_factor() + ret = int(round(count * rateFactor, 0)) + if rateFactor != 1.0: + logger.debug( + "Original count: %s Rated count: %s Rate factor: %s" + % (count, ret, rateFactor) + ) + return ret + + +def load(): + return RaterPlugin diff --git a/splunk_eventgen/lib/requirements.txt b/splunk_eventgen/lib/requirements.txt index d9cd22f7..0e96757d 100644 --- a/splunk_eventgen/lib/requirements.txt +++ b/splunk_eventgen/lib/requirements.txt @@ -2,3 +2,4 @@ ujson==2.0.3 jinja2==2.10.3 requests-futures==1.0.0 urllib3==1.24.2 +six==1.15.0 diff --git a/splunk_eventgen/splunk_app/README/eventgen.conf.spec b/splunk_eventgen/splunk_app/README/eventgen.conf.spec index 78adf8a6..dc63d07e 100644 --- a/splunk_eventgen/splunk_app/README/eventgen.conf.spec +++ b/splunk_eventgen/splunk_app/README/eventgen.conf.spec @@ -265,6 +265,12 @@ mode = sample | replay * Default is sample, which will generate count (+/- rating) events every configured interval * Replay will instead read the file and leak out events, replacing timestamps, +splitSample = + * only works with mode sample + * default value set to 0 + * Value of 1 will default to number of threads / processes enabled + * some generators may not have the ability split threads and guarantee transaction order. + sampletype = raw | csv * Raw are raw events (default) * CSV are from an outputcsv or export from Splunk. @@ -274,7 +280,7 @@ sampletype = raw | csv OVERRIDES FOR DEFAULT FIELDS WILL ONLY WITH WITH outputMode SPLUNKSTREAM. interval = - * Only valid in mode = sample + * Delay between exections. This number in replay mode occurs after the replay has finished. * How often to generate sample (in seconds). * 0 means disabled. * Defaults to 60 seconds. diff --git a/splunk_eventgen/splunk_app/default/app.conf b/splunk_eventgen/splunk_app/default/app.conf index ec88db30..8943aba1 100644 --- a/splunk_eventgen/splunk_app/default/app.conf +++ b/splunk_eventgen/splunk_app/default/app.conf @@ -14,7 +14,7 @@ build = 1 [launcher] author = Splunk Inc. 
diff --git a/tests/large/conf/eventgen_replay_backfill.conf b/tests/large/conf/eventgen_replay_backfill.conf index 94759212..2bd9aa73 100644 --- a/tests/large/conf/eventgen_replay_backfill.conf +++ b/tests/large/conf/eventgen_replay_backfill.conf @@ -4,6 +4,7 @@ backfill = -5s sampletype = raw outputMode = stdout mode = replay +interval = 0 end = 2 token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
diff --git a/tests/large/conf/eventgen_replay_backfill_greater_interval.conf b/tests/large/conf/eventgen_replay_backfill_greater_interval.conf index 3f537c54..5d79b735 100644 --- a/tests/large/conf/eventgen_replay_backfill_greater_interval.conf +++ b/tests/large/conf/eventgen_replay_backfill_greater_interval.conf @@ -5,6 +5,7 @@ sampletype = raw outputMode = file fileName = tests/large/results/eventgen_replay_backfill.result mode = replay +interval = 0 end = 2 token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
diff --git a/tests/large/conf/eventgen_replay_csv.conf b/tests/large/conf/eventgen_replay_csv.conf index cda1a3a9..4c662fc6 100755 --- a/tests/large/conf/eventgen_replay_csv.conf +++ b/tests/large/conf/eventgen_replay_csv.conf @@ -1,9 +1,11 @@ -[timeorder] +[timeorder\.csv] sampleDir = ../sample mode = replay sampletype = csv timeField = _time outputMode = stdout +interval = 0 +end = 1 token.0.token = \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2} token.0.replacementType = replaytimestamp
diff --git a/tests/large/conf/eventgen_replay_csv_with_tz.conf b/tests/large/conf/eventgen_replay_csv_with_tz.conf index 21ac7878..6d4ba2b8 100755 --- a/tests/large/conf/eventgen_replay_csv_with_tz.conf +++ b/tests/large/conf/eventgen_replay_csv_with_tz.conf @@ -1,10 +1,11 @@ -[timezone] +[timezone\.csv] sampleDir = ../sample mode = replay sampletype = csv outputMode = stdout timezone = -0100 timeField = _raw +interval = 0 token.0.token = \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2},\d{3,6} token.0.replacementType = timestamp
diff --git a/tests/large/conf/eventgen_replay_end_1.conf b/tests/large/conf/eventgen_replay_end_1.conf index 9f312233..db787a04 100755 --- a/tests/large/conf/eventgen_replay_end_1.conf +++ b/tests/large/conf/eventgen_replay_end_1.conf @@ -4,6 +4,7 @@ mode = replay earliest = -5s sampletype = raw outputMode = stdout +interval = 0 end = 2 token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
diff --git a/tests/large/conf/eventgen_replay_end_2.conf b/tests/large/conf/eventgen_replay_end_2.conf index f273ad9a..5724f529 100755 --- a/tests/large/conf/eventgen_replay_end_2.conf +++ b/tests/large/conf/eventgen_replay_end_2.conf @@ -5,6 +5,7 @@ earliest = -5s sampletype = raw outputMode = stdout end = -1 +interval = 0 token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} token.0.replacementType = replaytimestamp
diff --git a/tests/large/conf/eventgen_replay_timeMultiple.conf b/tests/large/conf/eventgen_replay_timeMultiple.conf index d256cd8b..8300576b 100755 --- a/tests/large/conf/eventgen_replay_timeMultiple.conf +++ b/tests/large/conf/eventgen_replay_timeMultiple.conf @@ -4,6 +4,7 @@ mode = replay sampletype = raw outputMode = stdout timeMultiple = 0.5 +interval = 0 token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} token.0.replacementType = replaytimestamp
diff --git a/tests/large/conf/eventgen_sample_csv.conf b/tests/large/conf/eventgen_sample_csv.conf index 33cd335d..f56200bb 100755 --- a/tests/large/conf/eventgen_sample_csv.conf +++ b/tests/large/conf/eventgen_sample_csv.conf @@ -1,4 +1,4 @@ -[timeorder] +[timeorder\.csv] sampleDir = ../sample mode = sample sampletype = csv
diff --git a/tests/large/conf/eventgen_tutorial1.conf b/tests/large/conf/eventgen_tutorial1.conf index d66073dd..4f8c8ea6 100644 --- a/tests/large/conf/eventgen_tutorial1.conf +++ b/tests/large/conf/eventgen_tutorial1.conf @@ -1,28 +1,28 @@ -[tutorial1] +[tutorial1\.csv] sampleDir = ../sample mode = replay sampletype = csv -timeMultiple = 2 +timeMultiple = .1 outputMode = file fileName = tests/large/results/eventgen_tutorial1.result end = 1 token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3,6} -token.0.replacementType = timestamp +token.0.replacementType = replaytimestamp token.0.replacement = %Y-%m-%d %H:%M:%S,%f token.1.token = \d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}.\d{3,6} -token.1.replacementType = timestamp +token.1.replacementType = replaytimestamp token.1.replacement = %m-%d-%Y %H:%M:%S.%f token.2.token = \d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}.\d{3,6} -token.2.replacementType = timestamp +token.2.replacementType = replaytimestamp token.2.replacement = %d/%b/%Y:%H:%M:%S.%f token.3.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} -token.3.replacementType = timestamp +token.3.replacementType = replaytimestamp token.3.replacement = %Y-%m-%d %H:%M:%S token.4.token = \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2} -token.4.replacementType = timestamp +token.4.replacementType = replaytimestamp token.4.replacement = %Y-%m-%dT%H:%M:%S
diff --git a/tests/large/conftest.py b/tests/large/conftest.py index 6f85d9ff..3545c548 100644 --- a/tests/large/conftest.py +++ b/tests/large/conftest.py @@ -1,5 +1,4 @@ import pytest - from utils.eventgen_test_helper import EventgenTestHelper
diff --git a/tests/large/test_mode_replay.py b/tests/large/test_mode_replay.py index dca5bd39..dbb14ddb 100644 --- a/tests/large/test_mode_replay.py +++ b/tests/large/test_mode_replay.py @@ -31,21 +31,28 @@ def test_mode_replay_end_2(eventgen_test_helper): def test_mode_replay_backfill(eventgen_test_helper): - """Test normal replay mode with backfill = -5s which should be ignore since backfill < interval""" + """Test normal replay mode with backfill = -5s. Backfill will count backwards from 0 and play from the last event + to the start of the file. End 2 is set in the replay sample, and a backfill of -5s should add more lines than + just playing the file twice.""" events = eventgen_test_helper("eventgen_replay_backfill.conf").get_events() # assert the events length is twice of the events in the sample file - assert len(events) == 24 + assert len(events) == 27 def test_mode_replay_backfill_greater_interval(eventgen_test_helper): - """Test normal replay mode with backfill = -120s""" - current_datetime = datetime.now() + """Test normal replay mode with backfill = -120s. The replay file has 12 events spanning 21s, so + this should backfill. Since the backfill is 120s, it should replay the entire file 5 times, and then add in 15 more + seconds of backfill before replaying twice. Since the last 5 events in replay span 15s, there should be an output + of (60 (5 full backfills) + 5 (15s of the end of the file back) + 24 (2 full replays of the file))""" events = eventgen_test_helper( "eventgen_replay_backfill_greater_interval.conf" ).get_events() # assert the events length is twice of the events in the sample file - assert len(events) == 24 + assert len(events) == 89 pattern = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}") + # wait a second to make sure we're after the last microsecond cut off + time.sleep(1) + current_datetime = datetime.now() for event in events: result = pattern.match(event) assert result is not None
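A quick check of the expected event count asserted above, assuming (per the docstring) a 12-event sample spanning 21 seconds whose last 5 events cover the final 15 seconds:

events_per_replay = 12
backfill_seconds = 120
sample_span_seconds = 21
full_backfills, leftover = divmod(backfill_seconds, sample_span_seconds)  # 5, 15
partial_backfill_events = 5   # events covering the leftover 15s at the file end
end_replays = 2               # end = 2 -> two full replays after backfill
total = (full_backfills + end_replays) * events_per_replay + partial_backfill_events
assert total == 89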
diff --git a/tests/large/test_mode_sample.py b/tests/large/test_mode_sample.py index cc330a58..9b423b5d 100644 --- a/tests/large/test_mode_sample.py +++ b/tests/large/test_mode_sample.py @@ -122,6 +122,8 @@ def test_mode_sample_regex_wildcard(eventgen_test_helper): def test_mode_sample_regex_csv(eventgen_test_helper): - """tTest sample mode with a regex wildcard pattern in the stanza name ('sample*')""" + """Test sample mode with a regex wildcard pattern in the stanza name ('sample*') + This currently matches 3 files with 10 events in each file.
+ """ events = eventgen_test_helper("eventgen_sample_regex_csv.conf").get_events() - assert len(events) == 20 + assert len(events) == 30 diff --git a/tests/large/utils/eventgen_test_helper.py b/tests/large/utils/eventgen_test_helper.py index 5aee9394..74d28c3e 100644 --- a/tests/large/utils/eventgen_test_helper.py +++ b/tests/large/utils/eventgen_test_helper.py @@ -24,7 +24,7 @@ def __init__(self, conf, timeout=None, mode=None, env=None): self.output_mode = self._get_output_mode() self.file_name = self._get_file_name() self.breaker = self._get_breaker() - cmd = ["splunk_eventgen", "generate", self.conf] + cmd = ["python3", "-m", "splunk_eventgen", "generate", self.conf] if mode == "process": cmd.append("--multiprocess") env_var = os.environ.copy() diff --git a/tests/sample_eventgen_conf/backfill/eventgen.conf.backfillreplay b/tests/sample_eventgen_conf/backfill/eventgen.conf.backfillreplay index e4cd4adf..1730d999 100755 --- a/tests/sample_eventgen_conf/backfill/eventgen.conf.backfillreplay +++ b/tests/sample_eventgen_conf/backfill/eventgen.conf.backfillreplay @@ -4,6 +4,7 @@ generator = replay timeMultiple = 2 backfill = -15m end = 3 +interval=0 outputMode = stdout index = main @@ -16,5 +17,5 @@ splunkUser = admin splunkPass = changeme token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} -token.0.replacementType = timestamp +token.0.replacementType = replaytimestamp token.0.replacement = %Y-%m-%d %H:%M:%S diff --git a/tests/sample_eventgen_conf/counter/eventgen.conf.counter b/tests/sample_eventgen_conf/counter/eventgen.conf.counter new file mode 100644 index 00000000..934668e8 --- /dev/null +++ b/tests/sample_eventgen_conf/counter/eventgen.conf.counter @@ -0,0 +1,130 @@ +#[counter1] +# Should output 10 events, counting 1-10 +#generator = counter +#earliest = -3s +#latest = now +#count = 10 +#end = 1 +#outputMode = stdout + +#[counter2] +# Should output 3 events, counting 1-3 +#generator = counter +#earliest = -3s +#latest = now +#count = 3 +#end = 1 +#outputMode = stdout + +#[counter3] +# Should output 3 events, counting 1-3 +#generator = counter +#earliest = -3s +#latest = now +#end_count = 3 +#end = 1 +#outputMode = stdout + +#[counter4] +# Should output 40 events, counting 4(1-10) yes, can you not hear me? +#generator = counter +#earliest = -3s +#latest = now +#end_count = 10 +#count = 4 +#end = 1 +#outputMode = stdout + +#[counter5] +## outputs 4 events by counting 1-1 without cycles every 3 seconds +#generator = counter +#earliest = -3s +#latest = now +#end_count = 1 +#count = 1 +#end = 4 +#interval = 3 +#outputMode = stdout + +#[counter6] +## outputs 10 events by counting down by 1 +#generator = counter +#earliest = -3s +#latest = now +#start_count = 10 +#end_count = 0 +#count_by = -1 +#end = 1 +#outputMode = stdout + +#[counter7] +# outputs 10 events by counting down by .1 +#generator = counter +#earliest = -3s +#latest = now +#start_count = 1 +#end_count = 0 +#count_by = -.1 +#end = 1 +#outputMode = stdout + +#[counter8] +# outputs 10 events by counting up by .1 +#generator = counter +#earliest = -3s +#latest = now +#start_count = 0 +#end_count = 1 +#count_by = .1 +#end = 1 +#outputMode = stdout + +#[counter9] +# outputs 10 events by counting up by .1, but with a custom output line +#count_template = {event_ts}-0700 Counter for sample:{samplename}, I like loops! counting {loop_count} of {max_loop}. 
BLAH: Start_Count: {start_count} Current_Counter:{current_count} End_Count:{end_count} Counting_By: {count_by} +#generator = counter +#earliest = -3s +#latest = now +#start_count = 0 +#end_count = 1 +#count_by = .1 +#end = 1 +#outputMode = stdout + +#[counter0] +# outputs 10 events by counting up by .1, but with a custom output line, this time without some fields +#count_template = {event_ts}-0700 Printing event {current_count} of {end_count} +#generator = counter +#earliest = -3s +#latest = now +#start_count = 1 +#end_count = 10 +#count_by = 1 +#end = 1 +#outputMode = stdout + +#[billion1] +# outputs 1000000000 events by counting up by 1, but with a custom output line, this time without some fields +#count_template = {event_ts}-0700 Printing event {current_count} of {end_count} +#generator = counter +#earliest = -3s +#latest = now +#start_count = 1 +#end_count = 1000000000 +#count_by = 1 +#end = 1 +#outputMode = stdout + +[100multiproc] +# outputs 100 events by counting up by 1, but with a custom output line, and splitting into multiprocess +# Please note that in order to use multiprocessing, you must change from the default rater, not the generator. +# This rater will then correctly set the generator plugin. This will always divide into 4 processes regardless +# of generator worker count. +count_template = {event_ts}-0700 Printing event {current_count} of {end_count} +rater = counter +start_count = 1 +splitSample = 4 +end_count = 100 +count_by = 1 +end = 1 +outputMode = stdout
diff --git a/tests/sample_eventgen_conf/jinja/templates/CxlRejReason.template b/tests/sample_eventgen_conf/jinja/templates/CxlRejReason.template new file mode 100644 index 00000000..852fe8ef --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/CxlRejReason.template @@ -0,0 +1,11 @@ +{% set errors = [("0", "Too late to cancel", 1), ("1", "Unknown Order", 1), ("2", "Broker / Exchange Option", 5), ("99", "Other", 2)] -%} +{% set elist = [] -%} +{% for id, msg, pri in errors -%} + {% for _ in range(0, pri) %} + {% do elist.append((id, msg)) -%} + {% endfor -%} +{% endfor -%} + +{% set reason = elist | random %} + +{{ reason }} \ No newline at end of file
diff --git a/tests/sample_eventgen_conf/jinja/templates/OrdRejReason_103.template b/tests/sample_eventgen_conf/jinja/templates/OrdRejReason_103.template new file mode 100644 index 00000000..0c521bdb --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/OrdRejReason_103.template @@ -0,0 +1,9 @@ +{% set errors = [("0", "Broker option", 5), ("1", "Unknown Symbol", 1), ("3", "Order exceeds limit", 1), ("7", "Duplicate of verbally committed order", 1), ("8", "Stale order", 2)] -%} +{% set elist = [] -%} +{% for id, msg, pri in errors -%} + {% for _ in range(0, pri) %} + {% do elist.append((id, msg)) -%} + {% endfor -%} +{% endfor -%} + +{% set reason = elist | random %}
diff --git a/tests/sample_eventgen_conf/jinja/templates/count_test.template b/tests/sample_eventgen_conf/jinja/templates/count_test.template new file mode 100644 index 00000000..317cc29d --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/count_test.template @@ -0,0 +1,46 @@ +{% set events = 3 -%} <-- pass +{% set slices = 20 -%} <-- pass +{% set max = slices // events -%} + +{% set crange = [] -%} +{% for _ in range(0, events) %} + {% set minr = max*loop.index0+1 -%} + {% set maxr = max*loop.index -%} + {% do crange.append((minr, maxr)) -%} +{% endfor %} + +{% set cyc = cycler(crange) -%} + +{% for _ in range(0, events) %} + {% set newct = range(crange[1][0], crange[1][1], 1)
| random -%} + {"_time":"{{ time_target_epoch }}", "_raw":"{{ newct }}"} +{% endfor %} + + + +{# +{{ range(max*loop.index0+1) :: range(max*loop.index)}} + + +20 // 3 = 6 + +1, max (1) +7,12 (2) +13,18 (3) <- loop.index + +40 // 3 = 39 / 3 = 13 + +1 , 13 +14 , 26 +27, 39 + + +max * li0 (0) + 1 = 1 || max * li (1) = 6 +max * li0 (1) + 1 = 7 || max * li (2) = 12 +max * li0 (2) + 1 = 13 || max * li (3) = 18 +max * li0 (3) + 1 = 19|| max * li (4) =24 + + + +range(max*loop.index0+1) +#} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/examples/test_jinja_advanced.template b/tests/sample_eventgen_conf/jinja/templates/examples/test_jinja_advanced.template new file mode 100644 index 00000000..96fd3b35 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/examples/test_jinja_advanced.template @@ -0,0 +1,8 @@ +{% with %} + {% import 'fix_includes.template' as fixinc %} + {% include "order_over_max2.template" %} + {%- time_now -%} + {"_time":"{{ time_now_epoch }}", "_raw":"8=FIX.4.29=14735=D34=3949={{fixinc.ACCOUNTS[0]}}52=20120404-20:05:46.11356={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTYM}}40=144={{fixinc.PRICE}}47=A54={{fixinc.SIDE}}55={{fixinc.SYMBOLS[0]}}167=FUT200=201206204=0207=BTEX10=184", "source": "user", "sourcetype": "userEvent" } + {%- time_now -%} + {"_time":"{{ time_now_epoch }}", "_raw":"8=FIX.4.29=0041635=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0] }}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=6152=20120404-20:05:46.11555={{fixinc.SYMBOLS[0]}}48=BTRD062012167=FUT207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}58=From Gateway: BT0Q33 > BTEQ33 over max qty200=201206103=0151=014=054={{fixinc.SIDE}}40=177=O59=0150=820=039=8442=144={{fixinc.PRICE}}38={{fixinc.ORDQTYM}}6=060=20120404-20:05:46.115146=010=056", "source": "user", "sourcetype": "userEvent" } +{% endwith %} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/examples/test_jinja_advanced_extension.template b/tests/sample_eventgen_conf/jinja/templates/examples/test_jinja_advanced_extension.template new file mode 100644 index 00000000..bcbaeadc --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/examples/test_jinja_advanced_extension.template @@ -0,0 +1,3 @@ +{% block head -%} + {"_time": "MockTimeBlock", "_raw": "If you see this, you successfully imported another jinja template."} +{%- endblock -%} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/examples/trans_jinja.template b/tests/sample_eventgen_conf/jinja/templates/examples/trans_jinja.template new file mode 100644 index 00000000..c67bde06 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/examples/trans_jinja.template @@ -0,0 +1,7 @@ +{% for _ in range(0, large_number) %} +{%- time_now -%} + {"_time":"{{ time_now_epoch }}", "_raw":"{{ time_now_formatted }} {{ LOCAL|random }} {{ large_number }} " } + {"_time":"{{ time_now_epoch }}", "_raw":"{{ time_now_formatted }} {{ LOCAL|random }} {{ large_number }} " } + {"_time":"{{ time_now_epoch }}", "_raw":"{{ time_now_formatted }} {{ LOCAL|random }} {{ large_number }} " } + {"_time":"{{ time_now_epoch }}", "_raw":"{{ time_now_formatted }} {{ LOCAL|random }} {{ large_number }} " } +{% endfor %} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/filled_order_cancel.template b/tests/sample_eventgen_conf/jinja/templates/filled_order_cancel.template new file mode 100644 index 
00000000..834f5b18 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/filled_order_cancel.template @@ -0,0 +1,19 @@ +{% macro time_delta(diff) -%}{{ eventgen_earliest_epoch - diff}}{%- endmacro -%} +{% set earliest = time_delta(60) %} + +{% with -%} + {% set events = 5 -%} + {% set slicect = 21 -%} + {% import 'random_slice_count.template' as randomslice %} + {% import 'fix_includes.template' as fixinc %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 0), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"8=FIX.4.29=16935=D34=6249={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTY}}40=144={{fixinc.PRICE}}47=A54={{fixinc.SIDE}}55={{fixinc.SYMBOLS[0]}}60=20051205-09:11:59.134200=201206167=FUT204=0207=BTE10=007", "source": "filled_order_cancel", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 1), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"8=FIX.4.29=0036435=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=6252={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=00A0FM00ESZ167=FUT207=BTE15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-0198={{fixinc.SECORDID}}200=201206151=1014=054={{fixinc.SIDE}}40=177=O59=0150=020=039=0442=144={{fixinc.PRICE}}38={{fixinc.ORDQTY}}6=060=20120327-20:33:19.22410=056", "source": "filled_order_cancel", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 2), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"8=FIX.4.29=0048835=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=6352={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=00A0FM00ESZ167=FUT207=BTE15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.CLORDID}}375=BTE000A37={{fixinc.ORDID}}17={{fixinc.EXECID}}-158=Fill198={{fixinc.SECORDID}}200=20120632=10151=014=1075=2012032854={{fixinc.SIDE}}40=177=O59=0150=220=039=2442=144={{fixinc.PRICE}}38={{fixinc.ORDQTY}}31=1405006=14050060=20120327-20:33:19.22410=074", "source": "filled_order_cancel", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 3), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"8=FIX.4.29=11235=F34=6349={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}37={{fixinc.ORDID}}60=20051205-09:15:50.45610=132", "source": "filled_order_cancel", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 4), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"8=FIX.4.29=0020535=949={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}57=NONE50=NONE34=6452={{time_target_formatted}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}37={{fixinc.ORDID}}58=Order is not in the market198={{fixinc.SECORDID}}102=0434=139=260=20120327-20:33:55.74410=017", "source": "filled_order_cancel", "sourcetype": "fix" } +{%- endwith -%} \ No newline at end of file diff --git 
a/tests/sample_eventgen_conf/jinja/templates/fix_includes.template b/tests/sample_eventgen_conf/jinja/templates/fix_includes.template new file mode 100644 index 00000000..1c7ea31e --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/fix_includes.template @@ -0,0 +1,19 @@ +{% macro r_int() -%}{% for n in [0,1,2,3,4,5] %}{{ [0,1,2,3,4,5,6,7,8,9]|random }}{% endfor %}{%- endmacro -%} + +{% macro guid() -%}{{ [r_int(),r_int(),r_int(),r_int(),r_int()]|join('-')}}{%- endmacro -%} + +{% set LOCAL = ['BTSYS-ONE', 'BTSYS-TWO', 'BTSYS-THREE']|random %} +{% set CLORDID = guid() %} +{% set NEWCLORDID = guid() %} +{% set ORDID = guid() %} +{% set EXECID = guid() %} +{% set SECORDID = guid() %} +{% set ACCOUNTS = [("REMOTE-ONE", "RONE"), ("REMOTE-TWO", "RTWO"), ("REMOTE-THREE", "RTHREE")] | random | list %} +{% set SIDE = [1,2] | random %} +{% set SYMBOLS = [("NBCT", 128, 130), ("NAMZ", 100, 102), ("NAPL", 48, 49)] | random | list %} +{% set PRICE = [range(SYMBOLS[1], SYMBOLS[2], 1)| random, '%02d' % range(00,99,1) | random]|join('.') %} +{% set ORDQTYM = range(10000,12000,1) | random %} +{% set ORDQTY = range(10000,12000,1) | random %} +{% set SUBSEC0 = '%03d' % range(0,199,1) | random %} +{% set SUBSEC2 = '%03d' % range(0,199,1) | random %} + diff --git a/tests/sample_eventgen_conf/jinja/templates/fix_tags b/tests/sample_eventgen_conf/jinja/templates/fix_tags new file mode 100644 index 00000000..fa7f4929 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/fix_tags @@ -0,0 +1,250 @@ +1 +10 +11 +14 +146 +15 +150 +151 +167 +17 +198 +20 +204 +207 +308 +309 +31 +310 +311 +313 +318 +319 +32 +34 +35 +37 +38 +39 +40 +44 +442 +47 +48 +49 +50 +52 +54 +55 +56 +57 +58 +59 +6 +60 +77 +9 +1 +10 +103 +11 +14 +146 +15 +150 +151 +167 +17 +20 +200 +204 +207 +34 +35 +37 +38 +39 +40 +44 +442 +47 +48 +49 +50 +52 +54 +55 +56 +57 +58 +59 +6 +60 +77 +9 +1 +10 +102 +11 +14 +15 +150 +151 +167 +17 +198 +20 +200 +204 +207 +31 +32 +34 +35 +37 +375 +38 +39 +40 +434 +44 +442 +47 +48 +49 +50 +52 +54 +55 +56 +57 +58 +59 +6 +60 +75 +77 +9 +1 +10 +102 +11 +14 +15 +150 +151 +167 +17 +198 +20 +200 +204 +207 +31 +32 +34 +35 +37 +375 +38 +39 +40 +434 +44 +442 +47 +48 +49 +50 +52 +54 +55 +56 +57 +58 +59 +6 +60 +75 +77 +9 +1 +10 +102 +11 +14 +15 +150 +151 +167 +17 +198 +20 +200 +204 +207 +31 +32 +34 +35 +37 +375 +38 +39 +40 +434 +44 +442 +47 +48 +49 +50 +52 +54 +55 +56 +57 +58 +59 +6 +60 +75 +77 +9 +1 +10 +11 +14 +15 +150 +151 +167 +17 +198 +20 +200 +204 +207 +34 +35 +37 +38 +39 +40 +41 +44 +442 +47 +48 +49 +50 +52 +54 +55 +56 +57 +59 +6 +60 +77 +9 diff --git a/tests/sample_eventgen_conf/jinja/templates/import_test.template b/tests/sample_eventgen_conf/jinja/templates/import_test.template new file mode 100644 index 00000000..87df3769 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/import_test.template @@ -0,0 +1,5 @@ +{% from 'random_slice_count.template' import getcount %} +{% from 'fix_includes.template' import ACCOUNTS, LOCAL %} + +{%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=getcount(1, 20, 0), slices=20, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch }}", "_raw":"49={{ACCOUNTS[0]}}52={{time_target_formatted}}56={{LOCAL}}1={{ACCOUNTS[1]}}", "source": "new_order_with_fill", "sourcetype": "fix" } \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/new_order_with_fill.template b/tests/sample_eventgen_conf/jinja/templates/new_order_with_fill.template new file mode 100644 index 00000000..dcf2df4d --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/new_order_with_fill.template @@
-0,0 +1,15 @@ +{% macro time_delta(diff) -%}{{ eventgen_earliest_epoch - diff}}{%- endmacro -%} +{% set earliest = time_delta(3) %} + +{% with -%} + {% set events = 3 -%} + {% set slicect = 20 -%} + {% import 'random_slice_count.template' as randomslice %} + {% import 'fix_includes.template' as fixinc %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 0), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch }}", "_raw":"8=FIX.4.29=15735=D34=249={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTY}}40=144={{fixinc.PRICE}}47=A48=BTRD062012SPD09201254={{fixinc.SIDE}}77=O204=1207=BTEX55={{fixinc.SYMBOLS[0]}}10=030", "source": "new_order_with_fill", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 1), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=0052735=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=252={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=BTRD062012SPD092012167=MLEG207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=111={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-0198={{fixinc.SECORDID}}151=1214=054={{fixinc.SIDE}}40=177=O59=0150=020=039=0442=344={{fixinc.PRICE}}38={{fixinc.ORDQTY}}6=060={{time_target_formatted}}.{{fixinc.SUBSEC0}}146=2311=BTRD309=BTRD062012310=FUT308=BTEX318=USD313=201206319=1311=BTRD309=BTRD092012310=FUT308=BTEX318=USD313=201209319=110=181", "source": "new_order_with_fill", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 2), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=0062435=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=552={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=BTRD062012SPD092012167=MLEG207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=111={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-158=Summary Fill198={{fixinc.SECORDID}}32=12151=014=1254={{fixinc.SIDE}}40=177=O59=037={{fixinc.ORDID}}20=039=2442=344={{fixinc.PRICE}}38={{fixinc.ORDQTY}}31=3.56=360={{time_target_formatted}}.{{fixinc.SUBSEC2}}146=2311=BTRD309=BTRD062012310=FUT308=BTEX318=USD313=201206319=1311=BTRD309=BTRD092012310=FUT308=BTEX318=USD313=201209319=110=146", "source": "new_order_with_fill", "sourcetype": "fix" } +{%- endwith -%} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/new_order_with_fill_times.template b/tests/sample_eventgen_conf/jinja/templates/new_order_with_fill_times.template new file mode 100644 index 00000000..11fad596 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/new_order_with_fill_times.template @@ -0,0 +1,9 @@ +{% with %} + {% import 'fix_includes.template' as fixinc %} + {%- time_now -%} + {"_time":"{{ time_now_epoch }}", "_raw":"8=FIX.4.29=15735=D34=249={{fixinc.ACCOUNTS[0]}}52=20120329-20:36:44.53456={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTY}}40=144={{fixinc.PRICE}}47=A48=BTRD062012SPD09201254={{fixinc.SIDE}}77=O204=1207=BTEX55={{fixinc.SYMBOLS[0]}}10=030", "source": "new_order_with_fill", "sourcetype": "fix" } + {%- time_now -%} + {"_time":"{{ time_now_epoch }}", 
"_raw":"8=FIX.4.29=0052735=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=252=20120329-20:36:44.53655={{fixinc.SYMBOLS[0]}}48=BTRD062012SPD092012167=MLEG207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=111={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-0198={{fixinc.SECORDID}}151=1214=054={{fixinc.SIDE}}40=177=O59=0150=020=039=0442=344={{fixinc.PRICE}}38={{fixinc.ORDQTY}}6=060=20120329-20:36:44.380146=2311=BTRD309=BTRD062012310=FUT308=BTEX318=USD313=201206319=1311=BTRD309=BTRD092012310=FUT308=BTEX318=USD313=201209319=110=181", "source": "new_order_with_fill", "sourcetype": "fix" } + {%- time_now -%} + {"_time":"{{ time_now_epoch }}", "_raw":"8=FIX.4.29=0062435=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=552=20120329-20:38:11.06755={{fixinc.SYMBOLS[0]}}48=BTRD062012SPD092012167=MLEG207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=111={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-158=Summary Fill198={{fixinc.SECORDID}}32=12151=014=1254={{fixinc.SIDE}}40=177=O59=037={{fixinc.ORDID}}20=039=2442=344={{fixinc.PRICE}}38={{fixinc.ORDQTY}}31=3.56=360=20120329-20:38:10.970146=2311=BTRD309=BTRD062012310=FUT308=BTEX318=USD313=201206319=1311=BTRD309=BTRD092012310=FUT308=BTEX318=USD313=201209319=110=146", "source": "new_order_with_fill", "sourcetype": "fix" } +{% endwith %} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/order_cancel_request.template b/tests/sample_eventgen_conf/jinja/templates/order_cancel_request.template new file mode 100644 index 00000000..cd921a97 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/order_cancel_request.template @@ -0,0 +1,17 @@ +{% macro time_delta(diff) -%}{{ eventgen_earliest_epoch - diff}}{%- endmacro -%} +{% set earliest = time_delta(5) %} + +{% with -%} + {% set events = 4 -%} + {% set slicect = 25 -%} + {% import 'random_slice_count.template' as randomslice %} + {% import 'fix_includes.template' as fixinc %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 0), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", ="_raw":"8=FIX.4.29=16835=D34=3449={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTY}}40=144={{fixinc.PRICE}}47=A54={{fixinc.SIDE}}55={{fixinc.SYMBOLS[0]}}60=20051205-09:11:59.343200=201206167=FUT204=0207=BTE10=187", "source": "order_cancel_request", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 1), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=0036335=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=3452={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=00A0FM00ESZ167=FUT207=BTE15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-0198={{fixinc.SECORDID}}200=201206151=1014=054={{fixinc.SIDE}}40=177=O59=0150=020=039=0442=144={{fixinc.PRICE}}38={{fixinc.ORDQTY}}6=060=20120327-20:06:08.77610=253", "source": "order_cancel_request", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 2), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", 
"_raw":"8=FIX.4.29=11235=F34=3649={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}37={{fixinc.ORDID}}60=20051205-09:15:50.76310=146", "source": "order_cancel_request", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 3), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=0037535=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0]}}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=3652={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=00A0FM00ESZ167=FUT207=BTE15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.NEWCLORDID}}41={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}-1198={{fixinc.SECORDID}}200=201206151=014=054={{fixinc.SIDE}}40=177=O59=0150=420=039=4442=144={{fixinc.PRICE}}38={{fixinc.ORDQTY}}6=060=20120327-20:07:56.97710=090", "source": "order_cancel_request", "sourcetype": "fix" } +{%- endwith -%} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/order_errors.template b/tests/sample_eventgen_conf/jinja/templates/order_errors.template new file mode 100644 index 00000000..c2a9c13d --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/order_errors.template @@ -0,0 +1,14 @@ +{% macro time_delta(diff) -%}{{ eventgen_earliest_epoch - diff}}{%- endmacro -%} +{% set earliest = time_delta(5) %} + +{%- with -%} + {% set events = 2 -%} + {% set slicect = 13 -%} + {% import 'random_slice_count.template' as randomslice %} + {% import 'fix_includes.template' as fixinc %} + {% import 'OrdRejReason_103.template' as ordrejreason %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 0), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=14735=D34=3949={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTYM}}40=144={{fixinc.PRICE}}47=A54={{fixinc.SIDE}}55={{fixinc.SYMBOLS[0]}}167=FUT200=201206204=0207=BTEX10=184", "source": "order_over_max", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 1), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=0041635=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0] }}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=6152={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=BTRD062012167=FUT207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}58={{ordrejreason.reason[1]}}200=201206103={{ordrejreason.reason[0]}}151=014=054={{fixinc.SIDE}}40=177=O59=0150=820=039=8442=144={{fixinc.PRICE}}38={{fixinc.ORDQTYM}}6=060=20120404-20:05:46.115146=010=056", "source": "order_over_max", "sourcetype": "fix" } +{%- endwith -%} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/order_over_max.template b/tests/sample_eventgen_conf/jinja/templates/order_over_max.template new file mode 100644 index 00000000..d11ed3d8 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/order_over_max.template @@ -0,0 +1,13 @@ +{% macro time_delta(diff) -%}{{ eventgen_earliest_epoch - diff}}{%- endmacro -%} +{% set earliest = time_delta(5) %} + +{%- with -%} + {% set events = 2 -%} + {% set slicect = 13 -%} + {% import 'random_slice_count.template' 
as randomslice %} + {% import 'fix_includes.template' as fixinc %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 0), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=14735=D34=3949={{fixinc.ACCOUNTS[0]}}52={{time_target_formatted}}56={{fixinc.LOCAL}}1={{fixinc.ACCOUNTS[1]}}11={{fixinc.CLORDID}}38={{fixinc.ORDQTYM}}40=144={{fixinc.PRICE}}47=A54={{fixinc.SIDE}}55={{fixinc.SYMBOLS[0]}}167=FUT200=201206204=0207=BTEX10=184", "source": "order_over_max", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 1), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"8=FIX.4.29=0041635=849={{fixinc.LOCAL}}56={{fixinc.ACCOUNTS[0] }}50=BTORD{{fixinc.ACCOUNTS[1]}}57=NONE34=6152={{time_target_formatted}}55={{fixinc.SYMBOLS[0]}}48=BTRD062012167=FUT207=BTEX15=USD1={{fixinc.ACCOUNTS[1]}}47=A204=011={{fixinc.CLORDID}}37={{fixinc.ORDID}}17={{fixinc.EXECID}}58=From Gateway: BT0Q33 > BTEQ33 over max qty200=201206103=0151=014=054={{fixinc.SIDE}}40=177=O59=0150=820=039=8442=144={{fixinc.PRICE}}38={{fixinc.ORDQTYM}}6=060=20120404-20:05:46.115146=010=056", "source": "order_over_max", "sourcetype": "fix" } +{%- endwith -%} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/random_slice.template b/tests/sample_eventgen_conf/jinja/templates/random_slice.template new file mode 100644 index 00000000..0c4326f2 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/random_slice.template @@ -0,0 +1,17 @@ +{%- macro time_delta(diff) -%}{{ eventgen_earliest_epoch - diff}}{%- endmacro -%} +{%- set earliest = time_delta(1) -%} + + + +{%- with -%} + {% import 'fix_includes.template' as fixinc %} + {% set events = 3 -%} + {% set slicect = 20 -%} + {% import 'random_slice_count.template' as randomslice %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 0), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"1: {{randomslice.getcount(events, slicect, 0)}} > {{time_target_formatted}}"} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect,1), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"2: {{randomslice.getcount(events, slicect, 1)}} > {{time_target_formatted}}"} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=randomslice.getcount(events, slicect, 2), slices=slicect, date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{time_target_epoch}}", "_raw":"3: {{randomslice.getcount(events, slicect, 2)}} > {{time_target_formatted}}"} +{%- endwith -%} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/random_slice_count.template b/tests/sample_eventgen_conf/jinja/templates/random_slice_count.template new file mode 100644 index 00000000..3a551990 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/random_slice_count.template @@ -0,0 +1,8 @@ +{% macro getcount(pevent, pslice, ploop) -%} + {% set max = pslice //pevent -%} + {% set crange = [] -%} + {%- for _ in range(0, pevent) -%} + {% do crange.append((max*loop.index0+1, max*loop.index)) -%} + {%- endfor -%} + {{ range(crange[ploop][0], crange[ploop][1], 1) | random }} +{%- endmacro -%} diff --git 
a/tests/sample_eventgen_conf/jinja/templates/test_event.template b/tests/sample_eventgen_conf/jinja/templates/test_event.template new file mode 100644 index 00000000..582dd0e4 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/test_event.template @@ -0,0 +1,16 @@ +{#% set earliest_epoch = time_delta(5) %#} +{#% set latest_epoch = eventgen_earliest_epoch - eventgen_rcount %#} +{#% macro time_delta(diff) -%}{{latest_epoch - diff}}{%- endmacro -%#} + +{% macro time_delta(diff) -%}{{eventgen_earliest_epoch-diff}}{%- endmacro -%} +{% set earliest = time_delta(5) %} + +{% with %} + {% import 'fix_includes.template' as fixinc %} + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=1, slices="20", date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"{{yay}} Event {{rcount}}: Range 1: {{time_target_formatted}}", "source": "test_event_timeslice", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=2, slices="20", date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"Event {{rcount}}: Range 2: {{time_target_formatted}}", "source": "test_event_timeslice", "sourcetype": "fix" } + {%- time_slice earliest=earliest, latest=eventgen_earliest_epoch, count=3, slices="20", date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"Event {{rcount}}: Range 3: {{time_target_formatted}}", "source": "test_event_timeslice", "sourcetype": "fix" } +{% endwith %} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/test_event2.template b/tests/sample_eventgen_conf/jinja/templates/test_event2.template new file mode 100644 index 00000000..15d0812f --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/test_event2.template @@ -0,0 +1,12 @@ +{% set errors = [("0", "Too late to cancel", 1), ("1", "Unknown Order", 1), ("2", "Broker / Exchange Option", 5), ("99", "Other", 2)] -%} +{% set elist = [] -%} +{% for id, msg, pri in errors -%} + {% for _ in range(0, pri) %} + {% do elist.append((id, msg)) -%} + {% endfor -%} +{% endfor -%} + + +{%- time_now date_format="%Y%m%d-%H:%M:%S.%f" -%} + {"_time":"{{ time_now_epoch }}", "_raw":"{{time_now_formatted}} :: {{ elist | random }}", "source": "errors", "sourcetype": "fix" } + diff --git a/tests/sample_eventgen_conf/jinja/templates/test_event3.template b/tests/sample_eventgen_conf/jinja/templates/test_event3.template new file mode 100644 index 00000000..15799650 --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/test_event3.template @@ -0,0 +1,5 @@ +{% set host_ip = '345' %} +{% set port = '1234' %} + +{%- time_now -%} + {"_time":"{{time_now_epoch}}", "_raw": "{{host_ip}} :: {{port}}", "source": "errors", "sourcetype": "fix" } \ No newline at end of file diff --git a/tests/sample_eventgen_conf/jinja/templates/test_jinja_timeslice 2.template b/tests/sample_eventgen_conf/jinja/templates/test_jinja_timeslice 2.template new file mode 100644 index 00000000..78b90f6f --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/test_jinja_timeslice 2.template @@ -0,0 +1,6 @@ +{% for _ in range(0, large_number) %} + {%- time_slice earliest="1549314369", latest="1549400769", count=loop.index, slices="5" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"{{ time_slice_epoch }} I like little windbags + Im at: {{ loop.index }} out of: {{ large_number }} + I'm also hungry, can I have a pizza?"} +{% endfor %} \ No newline at end of file diff --git
a/tests/sample_eventgen_conf/jinja/templates/timeslice_examples.template b/tests/sample_eventgen_conf/jinja/templates/timeslice_examples.template new file mode 100644 index 00000000..6630013c --- /dev/null +++ b/tests/sample_eventgen_conf/jinja/templates/timeslice_examples.template @@ -0,0 +1,82 @@ +{# Jinja allows for some variables to be passed back and forth with every sample: +eventgen_count - The current count of the count type. Set via jinja_count_type with options: ["line_count", "cycles", "perDayVolume"] + Defaults to cycles. +eventgen_maxcount - The max count requested in the stanza. +eventgen_earliest - The earliest specified time item in ISO8601 - https://en.wikipedia.org/wiki/ISO_8601 +eventgen_earliest_epoch - earliest converted to epoch time based on the specified value and host time. +eventgen_latest - the latest specified time item in ISO8601 - https://en.wikipedia.org/wiki/ISO_8601 +eventgen_latest_epoch - latest converted to epoch time + +You can also pass in your own custom variables via the jinja_variables setting in a stanza. This setting must be valid json, but +can set several variables at once. + +jinja_variables = {"large_number":10} + +#} + +{%- time_now -%} + {"_time":"{{ time_now_epoch }}", "_raw":"{{ time_now_formatted }} Current Settings: + eventgen_count: {{ eventgen_count }} + eventgen_maxcount: {{ eventgen_maxcount }} + eventgen_earliest: {{ eventgen_earliest }} + eventgen_earliest_epoch: {{ eventgen_earliest_epoch }} + eventgen_latest: {{ eventgen_latest }} + eventgen_latest_epoch: {{ eventgen_latest_epoch }} + large_number: {{ large_number }} + "} + +{# As shown in the last example, you also have access to jinja modules. Through a custom module that ships with Eventgen, you have the +ability to access time functions. These time functions are as follows: + +time_now - Tells the time module to find the current spot in time based on the current host's machine time. +time_slice - Used to divide up time based on earliest and latest: given a start window + and an end window, divide that time slot into a set number of buckets, then, for a given event (say, my 3rd event), give me back a time + period that fits the expected slice. + +FUTURE_REQUEST: time_delta - Given a time period, let me specify + or - seconds and have it return a new time period +FUTURE_REQUEST: time_backfill - Spread my count out over a time period of (cycles / events / volume), spreading my events evenly from earliest until "now" + +Below are examples of most of those elements, and their corresponding options. Please note, future requests are not listed below. + +#} + + +{# Create a fake loop of 10 events (please note that our "count" is set to cycles, so this loop takes place internally), where the earliest time we want to see is 1234, and the latest time is 2345 (1111 seconds), +then take the loop.index to get the count of the event. We'll take our earliest time of 1234, and our latest time of 2345, and divide it into 5 blocks (222 seconds each). Take our events and select time pieces based on the count, + and spread them out between those sections. Events 1 and 2 are in the first block of 222 seconds, 3 and 4 in the next block of 222 (223-444) and so on. If you count past your slice count, eventgen will just assume + you wanted to keep the same slice width and continue that offset into the future.
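+ + As a rough worked example of that arithmetic (illustrative numbers, assuming the window divides evenly): earliest=1234 and latest=2345 give an 1111 second window, so slices=5 yields blocks of roughly 222 seconds each (1234-1456, 1457-1678, 1679-1900, and so on). A count that maps to the third block therefore draws its time_target_epoch from roughly 1679 through 1900, and a count past the slice count keeps stepping about 222 seconds further into the future.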
+ + By using the time_slice function, we can pass in the following: + earliest - earliest time, in epoch, to start the slice time periods + latest - latest time, in epoch, to end the slice time period + count - Which slice to use + slices - Total number of slices to divide time into + FUTURE_REQUEST: slice_type - [random, lower, upper, middle] grab either a random time within the slice bounds, the lowest time of the slice, the top of the slice, or the middle of the slice + #} + +{# Perfectly match the slice count #} +{% for _ in range(0, 5) %} + {%- time_slice earliest="1234", latest="2345", count=loop.index, slices="5" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"Formatted_Time: {{ time_target_formatted }} Epoch_Time: {{ time_target_epoch }} + I like little windbags + Im at: {{ loop.index }} out of: 5 + I'm also hungry, can I have a pizza?"} +{% endfor %} + +{# Ask for slices outside the latest timerange #} +{% for _ in range(0, 10) %} + {%- time_slice earliest="1234", latest="2345", count=loop.index, slices="5" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"Formatted_Time: {{ time_target_formatted }} Epoch_Time: {{ time_target_epoch }} + I like little windbags + Im at: {{ loop.index }} out of: 10 + I'm also hungry, can I have a pizza?"} +{% endfor %} + +{# Change the date format #} +{% for _ in range(0, 5) %} + {%- time_slice earliest="1234", latest="2345", count=loop.index, slices="5", date_format="%Y-%m-%d %H:%M:%S" -%} + {"_time":"{{ time_target_epoch }}", "_raw":"Formatted_Time: {{ time_target_formatted }} Epoch_Time: {{ time_target_epoch }} + I like little windbags + Im at: {{ loop.index }} out of: 5 + I'm also hungry, can I have a pizza?"} +{% endfor %} \ No newline at end of file diff --git a/tests/sample_eventgen_conf/sample/eventgen.conf.notoken b/tests/sample_eventgen_conf/sample/eventgen.conf.notoken new file mode 100755 index 00000000..0668de0d --- /dev/null +++ b/tests/sample_eventgen_conf/sample/eventgen.conf.notoken @@ -0,0 +1,8 @@ +[sample] +sampleDir = .
+outputMode = stdout +count = 3 +earliest = -3s +latest = now +interval = 3 +end = 1 diff --git a/tests/sample_eventgen_conf/splitsample/eventgen.conf.splitcounter b/tests/sample_eventgen_conf/splitsample/eventgen.conf.splitcounter new file mode 100644 index 00000000..090c7b9d --- /dev/null +++ b/tests/sample_eventgen_conf/splitsample/eventgen.conf.splitcounter @@ -0,0 +1,14 @@ +[sample] +sampleDir = ../sample +outputMode = stdout +earliest = now +latest = now +interval = 1 +randomizeEvents = true +end = 1 +count = 33 +splitSample = 2 + +token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} +token.0.replacementType = timestamp +token.0.replacement = %Y-%m-%d %H:%M:%S diff --git a/tests/sample_eventgen_conf/windbag/eventgen.conf.windbag.end b/tests/sample_eventgen_conf/windbag/eventgen.conf.windbag.end index a6d5becc..3df6e84e 100644 --- a/tests/sample_eventgen_conf/windbag/eventgen.conf.windbag.end +++ b/tests/sample_eventgen_conf/windbag/eventgen.conf.windbag.end @@ -10,6 +10,7 @@ outputMode = stdout [windbag2] generator = windbag earliest = -3s +backfill = 1m latest = now interval = 3 count = 5 diff --git a/tests/unit/test_timeparser.py b/tests/unit/test_timeparser.py index 1b6e6f20..b044786e 100644 --- a/tests/unit/test_timeparser.py +++ b/tests/unit/test_timeparser.py @@ -15,7 +15,7 @@ @pytest.mark.parametrize("delta,expect", time_delta_test_params) def test_time_delta_2_second(delta, expect): - """ Test timeDelta2secs function, convert time delta object to seconds + """Test timeDelta2secs function, convert time delta object to seconds Normal cases: case 1: time delta is 1 day, expect is 86400 case 2: time delta is 1 day 3 hour 15 minutes 32 seconds, expect is 98132
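As a quick sanity check of the expected values in those docstring cases, here is a minimal sketch (my own illustration, not part of the diff) assuming timeDelta2secs agrees with datetime.timedelta arithmetic:

    from datetime import timedelta

    # case 1: 1 day -> 86400 seconds
    assert int(timedelta(days=1).total_seconds()) == 86400

    # case 2: 1 day 3 hours 15 minutes 32 seconds
    # 86400 + 10800 + 900 + 32 = 98132
    assert int(timedelta(days=1, hours=3, minutes=15, seconds=32).total_seconds()) == 98132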