
Add rudimentary containerized integration tests #27

Closed
wants to merge 9 commits

Conversation

WillNilges
Collaborator

This is a rough setup, but something like this would be neat maybe. This also includes a change to Install Status to add an Enum. I added the tests so that I could test that 🙃
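A minimal sketch of what the install-status Enum change might look like; the member names here are illustrative, not the PR's actual values:

```python
# Hedged sketch of an install-status Enum; member names are
# illustrative, not this repo's actual values.
import enum


class InstallStatus(enum.Enum):
    REQUEST_RECEIVED = "request received"
    ACTIVE = "active"
    CLOSED = "closed"
```

An Enum like this lets the ORM and tests reject typo'd statuses at lookup time instead of silently storing bad strings.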

This was referenced Aug 26, 2023
No need to have a whole extra docker compose file for this. I think
having a script that runs a docker container once is probably fine.

Could un-containerize it by giving the host access to the container
network. Probably the logical conclusion, though that would require a venv
and dependency management... Details, I guess.
@WillNilges WillNilges force-pushed the willnilges/podman-tests branch 3 times, most recently from c63ebd3 to 3b7f446 (August 27, 2023 14:19)
@WillNilges WillNilges force-pushed the willnilges/podman-tests branch 2 times, most recently from 80b07e5 to 11d6b4a (August 27, 2023 14:53)
Not sure if there's a way to configure the compose to use the host
network. Would be nice if we could just mimic this script in the
configuration. There IS syntax for it.
Change the style of the environment variables to be more unique and
greppable
I moved the black runner to `lint.yaml` to be more descriptive
Collaborator

@quoncc quoncc left a comment


this is good - will not merge yet and leave for others

# Gross
entrypoint: ["sh", "-c", "sleep 5 && gunicorn api:app --bind=0.0.0.0:8080"]
entrypoint: ["sh", "-c", "sleep 3 && gunicorn api:app --bind=0.0.0.0:8080"]
Collaborator


Should we use something like https://docs.docker.com/engine/reference/builder/#healthcheck to make the container setup process more deterministic?
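A sketch of what that could look like, assuming a Postgres DB service (service names and images are illustrative, not this repo's actual compose file):

```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 15
  api:
    entrypoint: ["gunicorn", "api:app", "--bind=0.0.0.0:8080"]
    depends_on:
      db:
        condition: service_healthy  # replaces the sleep entirely
```

With `condition: service_healthy`, compose blocks the API container until the DB actually answers, so startup is deterministic rather than timed.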

Collaborator Author


Good point, that was a shitty hack I left in.

Collaborator Author


Fixed in #36

@@ -26,7 +27,7 @@ def test_data():
)
danielInstall = install(
Collaborator


Should we possibly explicitly check the status of these writes somehow, or will they throw an error if something is wrong?

Collaborator Author


I'm not totally sure. If an exception is raised (like the ones we got on the call a few days ago), the test will show as failed; I think as long as anything that goes wrong raises an exception, we're good.

@andybaumgar
Collaborator

Looks good! Added some comments.

Member

@Andrew-Dickinson Andrew-Dickinson left a comment


I will preface this with the caveat that I don't have a ton of docker development workflow experience. At work we use a totally separate dev-environment versioning system, and containers are basically only used for packaging & deployment.

However, overall this pattern seems a bit strange to me. Creating this new container separate from the application that connects directly to the database container in order to run tests spooks me a little bit. I get that this is a bit of a messy problem, and having the application container run with different entrypoints wouldn't work very well either; I'm just concerned that we'd now have two separate environments to keep in sync.

Is there a structure here that lets the testing container set the application container image as its upstream, and then its Dockerfile just installs test dependencies and sets the different entrypoint?

I also think there's a lot of value in unit tests that run directly in the developer's laptop (or Ubuntu CI host's) .venv, and in "API-level" integration tests constrained to the publicly exposed HTTP endpoints (which can therefore run in a completely isolated env that needs no special connection to the DB, just requests). We can go pretty far with those two kinds of tests, though I suppose the closer you get to the DB, the trickier it is to capture all possible behaviors with these two approaches.

from db.database import create_db_engine


def test_data():
Member


I've never worked with sqlalchemy, so pardon me if my ignorance is showing here, but what exactly is this test testing? It looks like we're just using the ORM to create some rows and then commit them to the DB? Is this just checking that our model's create interface doesn't throw an exception?

Collaborator


I think it's just a "hello world" test, but I agree.

Collaborator Author


Yeah basically this. It doesn't really test the API, only our ORM configuration, which is still a valuable thing to test, but this is absolutely not comprehensive.
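One way to make the "hello world" test assert something beyond "commit didn't raise" is to read the rows back. A hedged sketch, with a stand-in `Install` model and an in-memory SQLite engine rather than this repo's actual schema and `create_db_engine`:

```python
# Hedged sketch: extends the write-only test so it also reads rows back
# and asserts on them. The Install model is illustrative, not this
# repo's actual model.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Install(Base):
    __tablename__ = "installs"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    status = Column(String, nullable=False)


def test_install_roundtrip(engine=None):
    # In-memory SQLite keeps the sketch self-contained; the real test
    # would point at the DB container instead.
    engine = engine or create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(Install(name="daniel", status="ACTIVE"))
    session.commit()
    rows = session.query(Install).filter_by(status="ACTIVE").all()
    assert [row.name for row in rows] == ["daniel"]
    return rows
```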

@Andrew-Dickinson
Member

Also the .env changes here will conflict with mine in #33 so we'll need to sequence these

@andybaumgar
Collaborator

I wonder if there's a pattern where we can use the same Dockerfile for integration tests and production? If we had a problem with the Dockerfile itself, it would be nice for that to get included in the test.

For unit tests, you can use CMD vs ENTRYPOINT to run a test script like pytest. Maybe for integration testing, we have a separate conditionally run container that makes requests to the other API and DB containers?

@WillNilges
Collaborator Author

Thanks for the input, gang. To @Andrew-Dickinson 's questions about the containerized environment, I agree, something feels really weird about this. I haven't really ever had to build something like this before and I just took a crack at it to make something that works. The main thing that's weird to me is the issue of maintaining separate environments. To fix that, as you mentioned, we can definitely have a multi-stage build that depends on another container, and there are a few ways to do it. I can probably script it so that we build the API container, and then build another "testing" container on top of its layers. The reason I separated it, though, was because I kinda wanted the testing container to mimic a "client" connecting to the API (the current test doesn't really illustrate that right now, admittedly). I can play around with it more, maybe ask some friends if they have any ideas.
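A minimal sketch of that layered build, with hypothetical stage names, paths, and base image; the test stage inherits every layer of the app stage, so the two environments can't drift apart:

```dockerfile
# Hypothetical sketch of a multi-stage Dockerfile; names are illustrative.
FROM python:3.11-slim AS api
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
ENTRYPOINT ["gunicorn", "api:app", "--bind=0.0.0.0:8080"]

# Test stage: same layers as the app, plus test-only dependencies.
FROM api AS test
RUN pip install pytest
ENTRYPOINT ["pytest", "tests/"]
```

CI could then build only the test image with `docker build --target test .`, while production builds stop at the `api` stage.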

I also like the idea of just running pytest in a venv; the issue is how to easily configure the dev environment to connect to the container network when that happens. One possible solution is to always run the API on the host network within the compose file, which might be convenient for developing, say, changes to the API and a separate project simultaneously.

@WillNilges
Collaborator Author

And yup, this branch has a lot of conflicts :)

@WillNilges
Collaborator Author

After some contemplation, I think I want to do away with the testing container if possible, and just pytest from the host/workflow directly. I'd like to see if I can configure networking to work differently depending on if you're running it locally, or in production.

@WillNilges
Collaborator Author

Might close this PR, break out the important changes (the enums) then start over. Learned a lot lol

@quoncc
Collaborator

quoncc commented Aug 28, 2023

Doing some quick reading: if you start a container with "docker run" and pass a trailing command, it overrides the image's default CMD; to override the default ENTRYPOINT you pass "--entrypoint" instead.

What this would look like for us, I'd imagine, is building one image from one Dockerfile, with the unit tests shipped as a Python executable somewhere; the CI/CD pipeline would then deploy it to testing and use the override above to swap out the default entrypoint.

Sounds like a good way to do it imo.
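A sketch of that one-image, two-modes flow, with hypothetical image and path names:

```shell
# Build one image from one Dockerfile (image name illustrative)
docker build -t meshdb .

# Production: run with the image's default ENTRYPOINT (gunicorn)
docker run meshdb

# CI: reuse the exact same image, but swap the entrypoint for the tests
docker run --entrypoint pytest meshdb tests/
```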

@andybaumgar
Collaborator

andybaumgar commented Aug 28, 2023

@quoncc That's a good point and it's what we've done in the past for unit testing. I think the goal of this PR is integration testing though (testing the API + DB system as a whole).

4 participants