test.sh fails when run with -p #54

Open
nam20485 opened this issue Apr 29, 2018 · 12 comments
Comments

@nam20485
Member

Running bin/test.sh -p causes the following command to be run:

docker-compose -p tests run --entrypoint /code/bin/test-entrypoint.sh -p 8000 --all -f

This command is malformed (no service is named, and options like --all and -f are not valid for docker-compose run), and results in the following error output:

$ bin/test.sh -p
Run a one-off command on a service.
For example:
    $ docker-compose run web python manage.py shell
By default, linked services will be started, unless they are already
running. If you do not want to start linked services, use
`docker-compose run --no-deps SERVICE COMMAND [ARGS...]`.
Usage: run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
Options:
    -d                    Detached mode: Run container in the background, print
                          new container name.
    --name NAME           Assign a name to the container
    --entrypoint CMD      Override the entrypoint of the image.
    -e KEY=VAL            Set an environment variable (can be used multiple times)
    -u, --user=""         Run as specified username or uid
    --no-deps             Don't start linked services.
    --rm                  Remove container after run. Ignored in detached mode.
    -p, --publish=[]      Publish a container's port(s) to the host
    --service-ports       Run command with the service's ports enabled and mapped
                          to the host.
    -v, --volume=[]       Bind mount a volume (default [])
    -T                    Disable pseudo-tty allocation. By default `docker-compose run`
                          allocates a TTY.
    -w, --workdir=""      Working directory inside the container
@BrianHGrant
Member

Yeah, this was an earlier command, and it was actually removed from the current setup:

echo "still needs building"

We still need to build this out.

@bhgrant8
Member

bhgrant8 commented May 1, 2018

So, the general scaffold we will need, following discussion in #51:

the -p option should be more along the lines of a build and run, like this:

docker-compose -f travis-docker-compose.yml build
docker-compose -f travis-docker-compose.yml run \
    --entrypoint /code/bin/test-entrypoint.sh $DOCKER_IMAGE

test-entrypoint.sh should roughly be:

#!/bin/bash
export PATH=$PATH:~/.local/bin

python manage.py test --no-input --keepdb

Since we are using the --keepdb flag, and assuming tests will be run against the prod database:

  • read-only creds assigned for the test connection
  • you'll want to use a test runner like py.test, nose, something else?
  • don't add or destroy data as part of tests
  • might be a better pattern to test the types of data returned vs. specific data
  • more important to test custom code
  • endpoint tests - want to ensure correct response codes (see the sketch below)
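
A rough sketch of the kind of read-only endpoint test described above (the URL, test case name, and field names here are hypothetical placeholders, not taken from the actual API):

# api/tests/tests.py - hypothetical sketch; adjust the URL and field names
# to match the real API. It only issues GET requests, so it stays compatible
# with read-only credentials and --keepdb.
from django.test import TestCase


class NeighborhoodUnitsEndpointTestCase(TestCase):
    def test_list_endpoint_returns_ok(self):
        # Placeholder route; replace with the real one from the API's urls.py.
        response = self.client.get("/api/neighborhood-units/")
        self.assertEqual(response.status_code, 200)

    def test_list_endpoint_returns_expected_types(self):
        # Assert on the types/shape of the data rather than specific values,
        # since the underlying data may change over time.
        response = self.client.get("/api/neighborhood-units/")
        payload = response.json()
        self.assertIsInstance(payload, list)
        for row in payload[:5]:
            self.assertIsInstance(row.get("name"), str)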

@nam20485
Member Author

nam20485 commented May 1, 2018

I made an attempt at the Travis configuration and have something working now. I basically just followed the patterns I found in the budget and transportation backend repos you referenced as example, using the Docker images and bin/*.sh scripts that were already in our repos. I know it may not be exactly what we want to go with, but I figured it would be worth getting the major components working.

If you look in the .travis.yml and the test-entrypoint.sh you will see that it is very similar to what you posted above.

.travis.yml:

...
script:
  - bin/build.sh -p
  - bin/test.sh -p

after_success:
  - bin/deploy.sh
...

test.sh:

docker-compose -f production-docker-compose.yml run --entrypoint /code/bin/test-entrypoint.sh api_production -p 8000 --rm

test-entrypoint.sh:

...
python manage.py test --nomigrations
...

tests.py:

class NeighborhoodUnitsTestCase(TestCase):
    def setUp(self):
        #preexisting_models.NeighborhoodUnits.objects.create()
        pass

    def test_example(self):
        self.assertTrue(True)

Travis builds the production API image and then runs any Django unit tests found in api/tests/tests.py (currently just the one placeholder unit test case example above). Additional unit tests that backend developers will want to create can be added easily to this file.

After that, bin/deploy.sh is run, but it is mostly commented out and blank at this point.

The only piece that is not accounted for is the database credentials, which I currently am resorting to hard-coding securely in the Travis build config settings.

@znmeb
Contributor

znmeb commented May 1, 2018

Is this something generalizable and worth adding to the exemplar? It seems like it would be.

@bhgrant8
Member

bhgrant8 commented May 1, 2018

awesome, going in the right direction.

A few things.

  • test.sh: after our review of last year's deployment, it appears that we built the Docker containers once, ran the tests, and then deployed these same containers to AWS. As such, it does not look like we will want to have the --rm flag here; we do want to preserve the containers created.

  • "currently am resorting to hard-coding securely in the Travis build config settings." - if you are saying you are setting these within the settings in the Travis UI, with the lock activated to secure them, this is the correct plan. From looking over your logs and GitHub briefly, this seems to be the case.

  • "the patterns I found in the budget and transportation backend repos you referenced as example, using the Docker images"

  • so, in the end it turned out transportation from last year did not write any actual tests, so it should not be used as a base pattern; we cannot verify what actually worked.
  • it appears that you are not using the --keepdb flag currently; where is the test DB being spun up and destroyed? In other words, are you connecting to an AWS-hosted database server? If so, the current one, last year's emergency response EC2 instance, or something else?
  • at some point we will need to replace the hardcoded 'api_production' name and start using the $DOCKER_IMAGE env var; it may be better, though, to make the env var updates their own PR and just note this for later.

@nam20485
Member Author

nam20485 commented May 1, 2018

Cool. Once we settle on the details and have it working how you would like, I was thinking I would open a PR to merge this into the exemplar repo. Then generated quickstart API apps would take advantage of the Travis functionality straight out of the box.

test.sh: after our review of last year's deployment, it appears that we built the Docker containers once, ran the tests, and then deployed these same containers to AWS. As such, it does not look like we will want to have the --rm flag here; we do want to preserve the containers created.

I will remove the --rm flag.

"currently am resorting to hard-coding securely in the Travis build config settings." - if you are saying you are setting these within the settings in the Travis UI, with the lock activated to secure them, this is correct plan. From looking over your logs and github briefly, this seems to be the case.

Correct, I have them in the Travis UI settings, secured with the privacy lock so they aren't advertised in the logs.

"the patterns I found in the budget and transportation backend repos you referenced as example, using the Docker images" so, in the end it turned out transportation from last year did not write any actual tests so should not be used as a base pattern, we cannot verify what actually worked.

I think it should still be okay. The only thing I really took from their pattern was how to structure the unit test directory. Everything else is either blank or laid out according to Django convention. What I have now is a tests/ directory in the Django application (api/) directory with a file called tests.py in it. Django's convention is to search for that file at that path and run any classes it finds in the file that extend the TestCase base class. I put one unit TestCase in the file to provide an example. Developers who have a backend repo with a generated quickstart app and a filled-in API can then easily add and build out their API and data unit test cases by adding more cases to that file. I think this process for adding and running unit tests is pretty efficient for newcomers. Also, the bin/test.sh -d command works to run the development images and their unit tests locally on the dev machine.
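
For example, a data-level case along these lines could be added to api/tests/tests.py as well (the NeighborhoodUnits import path and the assertions are assumed placeholders):

# api/tests/tests.py - hypothetical data-level sketch; the model and import
# path are placeholders. It only reads from the database, so it works with
# read-only credentials and --keepdb.
from django.test import TestCase

from api.models import NeighborhoodUnits  # assumed import path


class NeighborhoodUnitsDataTestCase(TestCase):
    def test_queryset_returns_model_instances(self):
        # Read-only: query a handful of rows, never create or delete any.
        for unit in NeighborhoodUnits.objects.all()[:10]:
            self.assertIsInstance(unit, NeighborhoodUnits)

    def test_row_count_is_non_negative(self):
        # Assert on the shape of the data rather than exact values.
        self.assertGreaterEqual(NeighborhoodUnits.objects.count(), 0)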

it appears that you are not using the --keepdb flag currently; where is the test DB being spun up and destroyed? In other words, are you connecting to an AWS-hosted database server? If so, the current one, last year's emergency response EC2 instance, or something else?

I have since added the --keepdb flag. Yes, currently it is pointed at the 2017 emergency-response Postgres on AWS EC2, which has a copy of the disaster database. Here is the most current form I am invoking:

python manage.py test --nomigrations --no-input --keepdb

at some point we will need to replace the hardcoded 'api_production' name and start using the $DOCKER_IMAGE env var; it may be better, though, to make the env var updates their own PR and just note this for later.

OK. @znmeb can probably handle changing the image name if necessary. Do you want me to create an issue so we can keep track of that requirement?

@znmeb
Contributor

znmeb commented May 1, 2018

Definitely create an issue - I read GitHub more than Slack!

@nam20485
Member Author

nam20485 commented May 1, 2018

Created, see #57

@znmeb You seem to be proficient with the Docker images; would this be a good one for you to tackle?

@znmeb
Contributor

znmeb commented May 2, 2018

Yeah ... If nobody else wants it I'll take it

@MikeTheCanuck
Contributor

MikeTheCanuck commented May 5, 2018

I believe the issues discussed are resolved with #60

@bhgrant8
Member

bhgrant8 commented May 5, 2018

We will need to confirm this pattern works for projects connecting to the current database server, with read-only credentials, and running an actual test.

I had run into issues, which is why I am currently using py.test on transportation-systems-backend.

@nam20485
Member Author

This issue will be resolved when related issue #82 is resolved and we settle on a read-only-access DB unit testing solution (probably pytest).
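
A minimal sketch of what a read-only test might look like under pytest-django, assuming pytest and pytest-django are added to the requirements (the endpoint URL is a placeholder):

# api/tests/test_readonly.py - hypothetical pytest-django sketch; requires
# pytest and pytest-django, and the URL below is a placeholder.
import pytest


@pytest.mark.django_db
def test_list_endpoint_is_readable(client):
    # Only issues a GET, so it stays compatible with read-only credentials.
    response = client.get("/api/neighborhood-units/")
    assert response.status_code == 200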
