The toolkit includes an integration testing feature that allows you to test your deployments against simulated HTTP responses from the Azure platform. This enables you to quickly verify that your deployment templates and parameter files work as expected.
This integration testing feature is built using the Azure SDK for Python’s test functionality. This functionality allows you to make a baseline recording of responses from the Azure platform using a known good version of your deployment. These recordings can then be played back during simulated deployments to quickly test your latest code.
Before using the toolkit’s integration testing functionality, you will need to make sure several types of files are in place and properly configured.
Be sure that you have followed the setup instructions appropriate to your environment.
Each of the sample deployments contains a top-level `archetype.test.json` test file that is used during integration testing. This file contains placeholders for a number of important fields used during deployments, such as subscription IDs and user names. It is used as the basis for the actual `archetype.json` file you will have created to use in your deployments.
For integration testing to work, modify the existing test files only to add any new parameter fields you have introduced to the toolkit's default archetype configuration.
Test archetype configuration files are expected to be checked into your source control. They should never contain sensitive information such as real subscription IDs, tenant IDs, or user accounts. Instead, they should continue to use placeholder values consistent with the existing ones.
The integration testing functionality also depends on two files that are not included in the toolkit code repository:

- `tools/devtools_testutils/testsettings_local.cfg`
- `tools/devtools_testutils/vdc_settings_real.py`

Both need to exist in the toolkit's `tools/devtools_testutils` folder. These files are listed in the toolkit's `.gitignore` file to prevent their inclusion in your repository by default, so you will need to create and configure them before running any tests.
The `testsettings_local.cfg` file consists of a single `live-mode` parameter, which tells the testing functionality whether it should run in playback mode (offline testing) or recording mode (where an actual deployment is used to record data for offline testing). If this file is absent, the `live-mode` value defaults to false.

The content of this file should be a single line:

```
live-mode: false
```
The file `tools/devtools_testutils/fake_settings.py`, which is included in the toolkit, contains the placeholder values used for tests running in playback mode. The `vdc_settings_real.py` file contains the actual subscription, AAD tenant, and credentials you will use in recording mode. If you do not create this real file, you will only be able to run offline tests using pre-recorded data.

To set up this file, create a blank `vdc_settings_real.py` in the `devtools_testutils` folder, then copy the contents of `fake_settings.py` into it. Next, update the real file by modifying the following variables:
| Variable name | Description | Placeholder value |
|---|---|---|
| | Subscription ID of your simulated on-premises environment. | `00000000-0000-0000-0000-000000000000` |
| | Subscription ID of your shared services deployment. | `00000000-0000-0000-0000-000000000000` |
| | Subscription ID of your workload deployment. | `00000000-0000-0000-0000-000000000000` |
| | Domain used by your Azure AD tenant. | `myaddomain.onmicrosoft.com` |
| | ID of your Azure AD tenant. | `00000000-0000-0000-0000-000000000000` |
| | Object ID of the Azure AD user that will be assigned as the key vault service principal during your deployments. | `00000000-0000-0000-0000-000000000000` |
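As a rough sketch, the edited section of `vdc_settings_real.py` might look like the following. The variable names here are hypothetical stand-ins for illustration; use the actual names defined in `fake_settings.py`.

```python
# Hypothetical variable names for illustration only; use the actual
# names defined in fake_settings.py. Replace each value with your own.
ON_PREM_SUBSCRIPTION_ID = '11111111-1111-1111-1111-111111111111'
SHARED_SERVICES_SUBSCRIPTION_ID = '22222222-2222-2222-2222-222222222222'
WORKLOAD_SUBSCRIPTION_ID = '33333333-3333-3333-3333-333333333333'
AD_DOMAIN = 'contoso.onmicrosoft.com'  # your Azure AD tenant domain
TENANT_ID = '44444444-4444-4444-4444-444444444444'
KEY_VAULT_USER_OBJECT_ID = '55555555-5555-5555-5555-555555555555'
```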
In addition to these values, you will need to update the real file's `get_credentials` function to replace the fake basic authentication token, using either `ServicePrincipalCredentials` or `UserPassCredentials`. Both methods are included but commented out in the fake version of the file.
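For example, here is a minimal sketch of a `get_credentials` implementation using `ServicePrincipalCredentials`. The GUIDs and secret below are placeholders, and the exact method signature should match the one in `fake_settings.py`.

```python
from azure.common.credentials import ServicePrincipalCredentials

def get_credentials(self):
    # Placeholder values; substitute your service principal's
    # application (client) ID, client secret, and tenant ID.
    return ServicePrincipalCredentials(
        client_id='00000000-0000-0000-0000-000000000000',
        secret='your-client-secret',
        tenant='00000000-0000-0000-0000-000000000000'
    )
```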
For more information on how to set up these credentials see the Getting Azure Credentials section of the Azure SDK for Python documentation.
Each test should have a sub-folder in the `tests/integration_tests` folder. Each of these test sub-folders contains a `test_all_resources.py` file, which specifies what resources should be included as part of the test. Each test sub-folder also contains a `recordings` folder with the pre-recorded HTTP response data used for offline testing.
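As an illustration, a test sub-folder is laid out like this (the folder name in braces is a placeholder):

```
tests/integration_tests/
└── {deployment-test-folder}/
    ├── test_all_resources.py    # declares the resources deployed by the test
    └── recordings/              # pre-recorded HTTP responses for playback mode
```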
The toolkit includes pre-configured tests and recorded data for each of the sample deployments:

| Test folder | Sample deployment |
|---|---|
Before running offline (playback mode) tests, make sure your `testsettings_local.cfg` file has the `live-mode` parameter set to `false`.
Integration tests use the pytest test runner Python module. Start a test by navigating to the toolkit root folder in a terminal or command-line interface and running one of the following equivalent commands, depending on how Python is invoked in your environment:

```
python -m pytest tests/integration_tests/{deployment-test-folder}/{test-file-name}.py
python3 -m pytest tests/integration_tests/{deployment-test-folder}/{test-file-name}.py
py -m pytest tests/integration_tests/{deployment-test-folder}/{test-file-name}.py
```
An offline test should take less than 30 seconds to complete.
Running a test in online (recording) mode will deploy all resources defined in the relevant `test_all_resources.py` file. This deployment process uses the subscription, tenant, and user information stored in your `vdc_settings_real.py` file. Other settings are pulled from the `archetype.test.json` file.

The test will record all HTTP traffic to and from the Azure Resource Manager APIs during this deployment and update the data in the `recordings` folder for later use in offline testing. Make sure your online deployment succeeds completely before checking recording files into your code repository.
To set the integration testing to online mode, update your `testsettings_local.cfg` file's `live-mode` parameter to `true`. Then start the deployment by navigating to the toolkit root folder in a terminal or command-line interface and running the same command used for offline mode:
```
python -m pytest tests/integration_tests/{deployment-test-folder}/{test-file-name}.py
python3 -m pytest tests/integration_tests/{deployment-test-folder}/{test-file-name}.py
py -m pytest tests/integration_tests/{deployment-test-folder}/{test-file-name}.py
```
Online mode will take considerably longer than playback mode, as it provisions all of the resources for a deployment.
Using the existing sample tests as a base, you should be able to easily create your own custom tests for new archetypes.

To create a test for a new deployment, create a new folder in `tests/integration_tests/`, then copy one of the existing `test_all_resources.py` files into this new folder.
This file's `setUp` function has a `_workload_path` variable (or the equivalent shared services or on-premises path variable, depending on the environment type) that will need to point to the archetype configuration file in your archetype's root folder. This is the same path used when running the `vdc.py` script. You will also need to configure the `_environment_type` variable.
```python
def setUp(self):
    super(AllResourcesUsing, self).setUp()
    # Use the real archetype configuration when recording (live mode),
    # and the placeholder test configuration during offline playback.
    parameters_file = ''
    if self.is_live:
        parameters_file = 'archetype.json'
    else:
        parameters_file = 'archetype.test.json'
    # Point this path at your archetype's configuration file.
    self._workload_path = join(
        Path(__file__).parents[3],
        'archetypes',
        '{new deployment folder name}',
        parameters_file)
    self._environment_type = 'workload'
```
Inside the `test_all_resources.py` file, a module is included in a test by adding a function that in turn calls the `execute_deployment_test` function of `VDCBaseTestCase` for that module when the test is executed.

Each deployment test should always include these functions for the `ops`, `kv`, `nsg`, and `net` modules. Additional modules should be added using this standardized format:
```python
def test_x_workload_{module name}_creation(self):
    self.set_resource_to_deploy('{module name}', args)
    self.upload_scripts(args, False)
    self.create_vdc_storage(args, False)
    successful: bool = self.execute_deployment_test(
        args,
        self._workload_path,
        self._environment_type)
    self.assertEqual(successful, True)
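```

For example, a test function for the `net` module might look like the following sketch. The leading letter in the name (here `d`) is hypothetical but matters because unittest runs test methods in alphabetical order, and `args` is assumed to be the argument object built elsewhere in the test file.

```python
def test_d_workload_net_creation(self):
    # Deploy only the 'net' module for this test case.
    self.set_resource_to_deploy('net', args)
    self.upload_scripts(args, False)
    self.create_vdc_storage(args, False)
    successful: bool = self.execute_deployment_test(
        args,
        self._workload_path,
        self._environment_type)
    self.assertEqual(successful, True)
```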
Be sure to reach out to us with feedback. Open an issue on the GitHub repository with any questions.