Commit ch9-13

luanchang committed Feb 25, 2020
1 parent e880461 commit 7266768
Showing 14 changed files with 290 additions and 301 deletions.
12 changes: 6 additions & 6 deletions appendix_csvs.asciidoc
@@ -8,8 +8,8 @@ follow from <<chapter_06_uow>>.

Just as we finish building out our Flask API and getting it ready for release,
the business comes to us apologetically, saying they're not ready to use our API
-and could we build a thing that reads just batches and orders from a couple of
-CSVs and outputs a third with allocations.
+and asking if we could build a thing that reads just batches and orders from a couple of
+CSVs and outputs a third CSV with allocations.

Ordinarily this is the kind of thing that might have a team cursing and spitting
and making notes for their memoirs. But not us! Oh no, we've ensured that
@@ -169,7 +169,7 @@ def test_cli_app_also_reads_existing_allocations_and_can_append_to_them(


And we could keep hacking about and adding extra lines to that `load_batches` function,
-and some sort of way of tracking and saving new allocations—but we already have a model for doing that! It's called our Repository and our Unit of Work patterns.
+and some sort of way of tracking and saving new allocations—but we already have a model for doing that! It's called our Repository and Unit of Work patterns.

All we need to do ("all we need to do") is reimplement those same abstractions, but
with CSVs underlying them instead of a database. And as you'll see, it really is relatively straightforward.
@@ -180,8 +180,8 @@ with CSVs underlying them instead of a database. And as you'll see, it really is

Here's what a CSV-based repository could look like.((("repositories", "CSV-based repository"))) It abstracts away all the
logic for reading CSVs from disk, including the fact that it has to read _two
-different CSVs_, one for batches and one for allocations, and it just gives us
-the familiar `.list()` API, which gives us the illusion of an in-memory
+different CSVs_ (one for batches and one for allocations), and it gives us just
+the familiar `.list()` API, which provides the illusion of an in-memory
collection of domain objects:

[[csv_repository]]
@@ -266,7 +266,7 @@ class CsvUnitOfWork(unit_of_work.AbstractUnitOfWork):


And once we have that, our CLI app for reading and writing batches
-and allocations to CSV is pared down to what it should be: a bit
+and allocations to CSV is pared down to what it should be—a bit
of code for reading order lines, and a bit of code that invokes our
_existing_ service layer:

35 changes: 16 additions & 19 deletions appendix_django.asciidoc
@@ -49,8 +49,8 @@ package next to our main allocation code:

[TIP]
====
-You can find the code for this chapter is in the
-https://github.com/cosmicpython/code/tree/appendix_django[appendix_django] branch on GitHub.
+The code for this appendix is in the
+https://github.com/cosmicpython/code/tree/appendix_django[appendix_django] branch on GitHub:
----
git clone https://github.com/cosmicpython/code.git
@@ -62,11 +62,11 @@ git checkout appendix_django

=== Repository Pattern with Django

-We used a plug in called
+We used a plug-in called
https://github.com/pytest-dev/pytest-django[`pytest-django`] to help with test
database management.((("Repository pattern", "with Django", id="ix_RepoDjango")))((("Django", "Repository pattern with", id="ix_DjangoRepo")))

-Rewriting the first repository test was a minimal change, just rewriting
+Rewriting the first repository test was a minimal change—just rewriting
some raw SQL with a call to the Django ORM/QuerySet language:


@@ -209,7 +209,7 @@ class OrderLine(models.Model):

<1> For value objects, `objects.get_or_create` can work, but for entities,
you probably need an explicit try-get/except to handle the upsert.footnote:[
-`@mr-bo-jangles` suggested you might be able to use https://oreil.ly/HTq1r[`update_or_create`]
+`@mr-bo-jangles` suggested you might be able to use https://oreil.ly/HTq1r[`update_or_create`],
but that's beyond our Django-fu.]

<2> We've shown the most complex example here. If you do decide to do this,
@@ -220,7 +220,7 @@ class OrderLine(models.Model):


NOTE: As in <<chapter_02_repository>>, we use dependency inversion.
-The ORM (Django) depends on the model, and not the other way around.((("Repository pattern", "with Django", startref="ix_RepoDjango")))((("Django", "Repository pattern with", startref="ix_DjangoRepo")))
+The ORM (Django) depends on the model and not the other way around.((("Repository pattern", "with Django", startref="ix_RepoDjango")))((("Django", "Repository pattern with", startref="ix_DjangoRepo")))



@@ -309,7 +309,7 @@ class DjangoUnitOfWork(AbstractUnitOfWork):
====

<1> `set_autocommit(False)` was the best way to tell Django to stop
-automatically committing each ORM operation immediately, and
+automatically committing each ORM operation immediately, and to
begin a transaction.

<2> Then we use the explicit rollback and commits.
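The same begin/commit/rollback shape can be sketched without Django at all. This is a toy illustration of ours (the `FakeConnection` only mimics the calls that Django's `transaction` module exposes), not the book's listing:

```python
class FakeConnection:
    # Stand-in for a DB connection with explicit transaction control,
    # mimicking set_autocommit/commit/rollback from django.db.transaction.
    def __init__(self):
        self.autocommit = True
        self.log = []

    def set_autocommit(self, value):
        self.autocommit = value

    def commit(self):
        self.log.append("commit")

    def rollback(self):
        self.log.append("rollback")


class UnitOfWork:
    def __init__(self, connection):
        self.connection = connection

    def __enter__(self):
        # Like set_autocommit(False): stop committing every operation
        # immediately and begin an explicit transaction instead.
        self.connection.set_autocommit(False)
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            self.connection.rollback()
        # Restore the default autocommit behavior on the way out.
        self.connection.set_autocommit(True)

    def commit(self):
        self.connection.commit()
```

A successful `with` block commits explicitly; an exception inside the block triggers a rollback before autocommit is restored.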
@@ -401,13 +401,12 @@ to a Django app?((("Django", "applying patterns to Django app"))) We'd say the f

* The Repository and Unit of Work patterns are going to be quite a lot of work. The
main thing they will buy you in the short term is faster unit tests, so
-evaluate whether that feels worth it in your case. In the longer term, they
+evaluate whether that benefit feels worth it in your case. In the longer term, they
decouple your app from Django and the database, so if you anticipate wanting
to migrate away from either of those, Repository and UoW are a good idea.

* The Service Layer pattern might be of interest if you're seeing a lot of duplication in
-your _views.py_. It can be a good way of thinking about your use cases,
-separately from your web endpoints.
+your _views.py_. It can be a good way of thinking about your use cases separately from your web endpoints.

* You can still theoretically do DDD and domain modeling with Django models,
tightly coupled as they are to the database; you may be slowed by
@@ -422,23 +421,21 @@ https://oreil.ly/Nbpjj[word
in the Django community] is that people find that the fat models approach runs into
scalability problems of its own, particularly around managing interdependencies
between apps. In those cases, there's a lot to be said for extracting out a
-business logic or domain layer to sit between your views and forms, and
+business logic or domain layer to sit between your views and forms and
your _models.py_, which you can then keep as minimal as possible.

=== Steps Along the Way

Suppose you're working on a Django project that you're not sure is going
to get complex enough to warrant the patterns we recommend, but you still
want to put a few steps in place to make your life easier, both in the medium
-term, and if you want to migrate to some of our patterns later.((("Django", "applying patterns to Django app", "steps along the way"))) Consider the following:
+term and if you want to migrate to some of our patterns later.((("Django", "applying patterns to Django app", "steps along the way"))) Consider the following:

-* One piece of advice we've heard is to put a __logic.py__ into every Django app,
-from day one. This gives you a place to put business logic, and to keep your
+* One piece of advice we've heard is to put a __logic.py__ into every Django app from day one. This gives you a place to put business logic, and to keep your
forms, views, and models free of business logic. It can become a stepping-stone
for moving to a fully decoupled domain model and/or service layer later.

-* A business-logic layer might start out working with Django model objects,
-and only later become fully decoupled from the framework and work on
+* A business-logic layer might start out working with Django model objects and only later become fully decoupled from the framework and work on
plain Python data structures.

* For the read side, you can get some of the benefits of CQRS by putting reads
@@ -449,10 +446,10 @@ term, and if you want to migrate to some of our patterns later.((("Django", "app
concerns will cut across them.


-NOTE: We'd like to give a shout out to David Seddon and Ashia Zawaduk for
-talking through some of the ideas in this chapter. They did their best to
+NOTE: We'd like to give a shout-out to David Seddon and Ashia Zawaduk for
+talking through some of the ideas in this appendix. They did their best to
stop us from saying anything really stupid about a topic we don't really
have enough personal experience of, but they may have failed.

For more ((("Django", startref="ix_Django")))thoughts and actual lived experience dealing with existing
-applications, refer to the <<epilogue_1_how_to_get_there_from_here>>.
+applications, refer to the pass:[<a href="epilogue_1_how_to_get_there_from_here">epilogue</a>].
4 changes: 2 additions & 2 deletions appendix_ds1_table.asciidoc
@@ -20,7 +20,7 @@ image::images/apwp_aa01.png["diagram showing all components: flask+eventconsumer
__Defines the business logic.__


-| Entity | A domain object whose attributes may change, but that has a recognizable identity over time.
+| Entity | A domain object whose attributes may change but that has a recognizable identity over time.

| Value object | An immutable domain object whose attributes entirely define it. It is fungible with other identical objects.

@@ -34,7 +34,7 @@ __Defines the business logic.__

__Defines the jobs the system should perform and orchestrates different components.__

-| Handler | Receives a command or event and performs what needs to happen.
+| Handler | Receives a command or an event and performs what needs to happen.
| Unit of work | Abstraction around data integrity. Each unit of work represents an atomic update. Makes repositories available. Tracks new events on retrieved aggregates.
| Message bus (internal) | Handles commands and events by routing them to the appropriate handler.

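To make the "Handler" and "Message bus" rows of the table concrete, here is a minimal sketch of routing messages to handlers by type. This is our own illustration, not the book's implementation (the real bus also injects a unit of work and collects new events from aggregates):

```python
from dataclasses import dataclass


@dataclass
class Allocate:
    """A command: imperative mood, handled by exactly one handler."""
    orderid: str
    sku: str
    qty: int


@dataclass
class Allocated:
    """An event: past tense, handled by zero or more handlers."""
    orderid: str
    sku: str


def allocate(cmd, log):
    log.append(f"allocated {cmd.qty}x {cmd.sku} for {cmd.orderid}")


def notify(event, log):
    log.append(f"notified about {event.orderid}")


# Routing table: message type -> list of handlers for that type.
HANDLERS = {Allocate: [allocate], Allocated: [notify]}


def handle(message, log):
    # The bus dispatches each message to every handler registered
    # for its type.
    for handler in HANDLERS[type(message)]:
        handler(message, log)
```

The point of the indirection is that callers only ever say `handle(message)`; they never need to know which handler (or how many) will respond.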
31 changes: 15 additions & 16 deletions appendix_project_structure.asciidoc
@@ -8,8 +8,8 @@ be of interest to outline the moving parts.((("projects", "template project stru

[TIP]
====
-The code for this chapter is in the
-https://github.com/cosmicpython/code/tree/appendix_project_structure[appendix_project_structure] branch on GitHub.
+The code for this appendix is in the
+https://github.com/cosmicpython/code/tree/appendix_project_structure[appendix_project_structure] branch on GitHub:
----
git clone https://github.com/cosmicpython/code.git
@@ -69,7 +69,7 @@ The basic folder structure looks like this:
====

<1> Our _docker-compose.yml_ and our _Dockerfile_ are the main bits of configuration
-for the containers that run our app, and can also run the tests (for CI). A
+for the containers that run our app, and they can also run the tests (for CI). A
more complex project might have several Dockerfiles, although we've found that
minimizing the number of images is usually a good idea.footnote:[Splitting
out images for production and testing is sometimes a good idea, but we've tended
@@ -85,20 +85,20 @@ The basic folder structure looks like this:
team knows Python (or at least knows it better than Bash!).] This is optional. You could just use
`docker-compose` and `pytest` directly, but if nothing else, it's nice to
have all the "common commands" in a list somewhere, and unlike
-documentation, a Makefile is code so it has less tendency to become out-of-date.
+documentation, a Makefile is code so it has less tendency to become out of date.

<3> All the source code for our app, including the domain model, the
Flask app, and infrastructure code, lives in a Python package inside
_src_,footnote:[https://hynek.me/articles/testing-packaging["Testing and Packaging"] by Hynek Schlawack provides more information on _src_ folders.]
which we install using `pip install -e` and the _setup.py_ file. This makes
imports easy. Currently, the structure within this module is totally flat,
but for a more complex project, you'd expect to grow a folder hierarchy
-including _domain_model/_, _infrastructure/_, _services/_, and _api/_.
+that includes _domain_model/_, _infrastructure/_, _services/_, and _api/_.


<4> Tests live in their own folder. Subfolders distinguish different test
types and allow you to run them separately. We can keep shared fixtures
-(_conftest.py_) in the main tests folder, and nest more specific ones if we
+(_conftest.py_) in the main tests folder and nest more specific ones if we
wish. This is also the place to keep _pytest.ini_.


Expand All @@ -120,7 +120,7 @@ config settings for the following:

- Running on the containers themselves, with "real" ports and hostnames

-- Different container environments (dev, staging, prod, and so on).
+- Different container environments (dev, staging, prod, and so on)

Configuration through environment variables as suggested by the
https://12factor.net/config[12-factor] manifesto will solve this problem,
@@ -169,17 +169,16 @@ An elegant Python package called
https://github.com/hynek/environ-config[_environ-config_] is worth looking
at if you get tired of hand-rolling your own environment-based config functions.
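A hand-rolled environment-based config function of the sort being discussed might look like this. The `API_HOST` variable name and the port numbers are our assumptions for illustration, not the book's exact listing:

```python
import os


def get_api_url():
    # Default to localhost for local dev; docker-compose can set API_HOST
    # so the same code works inside containers with "real" hostnames.
    host = os.environ.get("API_HOST", "localhost")
    # Assumed convention: a non-default dev port locally, port 80 in
    # the containers.
    port = 5005 if host == "localhost" else 80
    return f"http://{host}:{port}"
```

The defaults mean everything works out of the box on a dev machine, and each environment only overrides what differs.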

-TIP: Don't let this config module become a dumping ground full of things that
-are only vaguely related to config, and is then imported all over the place.
+TIP: Don't let this config module become a dumping ground that is full of things only vaguely related to config and that is then imported all over the place.
Keep things immutable and modify them only via environment variables.
If you decide to use a <<chapter_13_dependency_injection,bootstrap script>>,
-you can make it the only place (other than tests) that config is imported.
+you can make it the only place (other than tests) that config is imported to.

=== Docker-Compose and Containers Config

We use a lightweight Docker container orchestration tool called _docker-compose_.
Its main configuration is via a YAML file (sigh):footnote:[Harry is a bit YAML-weary.
-It's _everywhere_ and yet he can never remember the syntax or how it's supposed
+It's _everywhere_, and yet he can never remember the syntax or how it's supposed
to indent.]


@@ -242,7 +241,7 @@ services:
local dev machine and the container, the `PYTHONDONTWRITEBYTECODE` environment variable
tells Python to not write _.pyc_ files, and that will save you from
having millions of root-owned files sprinkled all over your local filesystem,
-being all annoying to delete, and causing weird Python compiler errors besides.
+being all annoying to delete and causing weird Python compiler errors besides.

<6> Mounting our source and test code as `volumes` means we don't need to rebuild
our containers every time we make a code change.
@@ -300,7 +299,7 @@ setup(
That's all you need. `packages=` specifies the names of subfolders that you
want to install as top-level modules. The `name` entry is just cosmetic, but
it's required. For a package that's never actually going to hit PyPI, it'll
-do fine.footnote:[For more _setup.py_ tips see
+do fine.footnote:[For more _setup.py_ tips, see
https://hynek.me/articles/testing-packaging[this article on packaging] by Hynek.]
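A complete _setup.py_ along the lines described could be as small as this (the `allocation` package name is assumed for illustration; the collapsed listing above may differ):

```python
# Minimal setup.py: `name` is cosmetic but required; `packages` lists
# the top-level folders under src/ to install as importable modules.
from setuptools import setup

setup(
    name="allocation",   # assumed package name, for illustration
    version="0.1",
    packages=["allocation"],
)
```

With this in place, `pip install -e src/` makes the package importable from anywhere without fiddling with `PYTHONPATH`.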


@@ -344,13 +343,13 @@ CMD flask run --host=0.0.0.0 --port=80
prod dependencies; we haven't here, for simplicity)
<3> Copying and installing our source
<4> Optionally configuring a default startup command (you'll probably override
-this a lot from the command-line)
+this a lot from the command line)

TIP: One thing to note is that we install things in the order of how frequently they
are likely to change. This allows us to maximize Docker build cache reuse. I
-can't tell you how much pain and frustration underlies this lesson. For this,
+can't tell you how much pain and frustration underlies this lesson. For this
and many more Python Dockerfile improvement tips, check out
-https://pythonspeed.com/docker[Production-Ready Docker Packaging].
+https://pythonspeed.com/docker["Production-Ready Docker Packaging"].

=== Tests
