Note: Probably the best article on testing with sqlalchemy is sontek's Writing tests for pyramid and sqlalchemy
If you're anything like me then you've probably done some research and found that most examples of how to test sqlalchemy use one of two techniques: running the tests against an in-memory sqlite database, or mocking out the database entirely.
If your tests start exercising the db then you've left the realm of "unit tests" and are now writing functional or integration tests. And once you're writing integration tests, you should really be testing that the integration works against the real db.
So from here the obvious next step would seem to be to eschew the use of sqlite and simply drop and recreate the tables on each test run. One problem with this is that it's slow. The slower a test suite is, the less likely it is to be run, and while it's fair to say that the extra (milli)seconds added by dropping and recreating the tables will likely not be the main bottleneck, every little helps when keeping tests fast.

More importantly, dropping and recreating the tables prevents a very easy speedup: running the tests in parallel. A similar drawback of the drop/create approach, which I encountered while working with a client, is that if your CI server runs your tests against multiple branches (say develop and master) then you'll possibly end up with intermittently failing tests, because a test running against the develop branch dropped your tables at the same time as a test running against master tried to read from them.
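For concreteness, the drop/create-per-test approach argued against above looks something like this. This is a sketch, not code from this article: the `Widget` model is hypothetical, and an in-memory sqlite url stands in for whatever db url you'd really use.

```python
import pytest
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Widget(Base):
    # hypothetical model, just to have something to create/drop
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    name = Column(String)


# sqlite stands in for your real db url here
engine = create_engine('sqlite://')


@pytest.fixture
def db_session():
    # every single test pays for a full schema build...
    Base.metadata.create_all(engine)
    session = Session(engine)
    yield session
    session.close()
    # ...and a full teardown - and two test runs sharing the
    # same database will trample each other's tables
    Base.metadata.drop_all(engine)
```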
So what's the solution? Run the tests inside transactions and simply roll the db back after each test. Here's how to achieve this with py.test.
py.test has a feature called fixtures which provides “a fixed baseline upon
which tests can reliably and repeatedly execute”. Conceptually this is similar
to the classical xUnit setup and teardown methods but is a lot more flexible.
Writing a py.test fixture is pretty simple (copying, and slightly adapting, the example from the py.test documentation):

```python
import pytest


@pytest.fixture
def smtp():
    import smtplib
    return smtplib.SMTP("merlinux.eu")


def test_ehlo(smtp):
    response, msg = smtp.ehlo()
    assert response == 250
```
py.test uses dependency injection to allow passing arguments to a test function. Here the test_ehlo function takes an smtp argument and so the py.test runner knows to pass the corresponding smtp fixture to that function when executing it. As-is, this fixture will be executed once for each test function that uses it, but it is also possible to define fixtures to be either module, class or session scoped, in which case the fixture would be executed once (per scope level) and a memoised version of the result is passed on subsequent calls. We can use session-scoped fixtures to create our SqlAlchemy session for our tests:
```python
import pytest
from sqlalchemy import create_engine

from foo import models


@pytest.fixture(scope='session')
def connection(request):
    engine = create_engine('postgresql://bar@/test_bar')
    connection = engine.connect()
    models.Base.metadata.bind = engine
    models.Base.metadata.create_all(engine)

    def teardown():
        models.Base.metadata.drop_all(engine)

    request.addfinalizer(teardown)
    return connection


@pytest.fixture
def db_session(request, connection):
    from transaction import abort
    trans = connection.begin()

    from foo.models import DBSession
    # make the scoped session use our transaction-wrapped connection
    DBSession.configure(bind=connection)

    def teardown():
        trans.rollback()
        abort()

    request.addfinalizer(teardown)
    return DBSession
```
py.test fixtures can themselves have dependencies injected and we make use of this feature in two ways here.
Firstly we define a connection fixture which accepts request, a py.test builtin fixture which allows some interaction with the context of the current test session. We create and configure an sqlalchemy engine + connection, create all tables and use request.addfinalizer to add a post test session callback that will drop the created tables. The db_session fixture that our tests will be using makes use of two fixtures - the builtin request fixture and the connection fixture we just defined. db_session simply begins the implicit transaction associated with a db connection and adds another request.addfinalizer callback to rollback and abort the transaction. connection is scoped to a session and so the connection is created only once per test run, whereas db_session will be called for each test that uses it. To make use of this session we write a test as follows.
```python
def test_widget_is_saved(db_session):
    model_instance = models.Widget()
    db_session.add(model_instance)
    db_session.flush()
    assert 1 == db_session.query(models.Widget).count()


def test_transaction_was_rolled_back(db_session):
    # the widget added in the previous test was rolled back,
    # so this test sees an empty table
    assert 0 == db_session.query(models.Widget).count()
```