
.dockerignore, Dockerfile: add a zig cache #59

Merged

ee7 merged 5 commits into exercism:main from ee7:zig-cache on Aug 23, 2023

Conversation

@ee7 (Member) commented Aug 23, 2023

Running the tests for a Zig exercise was much slower than it should be, because every test run was the first time that Zig was run in the image.

Let's try a simple initial approach: run `zig test` once (for tests/example-success) and copy the resulting zig cache into the Docker image. This seems like a 2.2x speedup when running a subsequent exercise test, at the cost of adding a 41 MB zig cache to the image:

33 MB  /root/.cache/zig/z/
 8 MB  /root/.cache/zig/o/
22 kB  /root/.cache/zig/h/
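The approach above could be sketched as a multi-stage Dockerfile. This is a rough illustration only: the base image name, paths, and test file name here are assumptions, not necessarily what this PR actually uses.

```dockerfile
# Illustrative sketch of the caching idea; base image and paths are
# hypothetical.
FROM some-zig-base AS zig-cache
COPY tests/example-success /tmp/example-success
# Run the tests once so that zig populates its global cache. Using RUN
# (rather than copying a prebuilt cache) also fails the image build if
# the tests fail.
RUN zig test /tmp/example-success/test_example.zig

FROM some-zig-base
# Reuse the warmed cache so that the first real test run in production
# does not recompile the commonly used parts of std.testing.
COPY --from=zig-cache /root/.cache/zig /root/.cache/zig
```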

Edit: Later, I believe we can speed up further by caching the result of compiling more functions from std.testing. The speedup was better with the hello-world files because we cached testing.expectEqualStrings, but let's add a different test file in a follow-up PR that uses a bunch of common functions.

Closes: #28


This is sufficient to produce a significant speedup locally (from about 5 seconds to 2.3 seconds). And I think it's sufficient to produce the same significant speedup in production, but I'm not certain. I think it's easiest to just measure the approximate duration of a test run from the online editor before merging, merge, and test again after deploying.

Instead of adding the hello-world files to this repo, we could fetch them from the track repo at build time. But I think I'd rather avoid the extra network request.

To-do:

  • Rather than adding hello-world files, consider initializing the cache by testing an existing file in the tests directory

```zig
pub fn leap(year: u32) bool {
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0);
}
```

This seems like a 2.6x speedup when running a subsequent exercise test,
at the cost of adding a 45 MB zig cache to the image:

    33 MB  /root/.cache/zig/z/
    12 MB  /root/.cache/zig/o/
    22 kB  /root/.cache/zig/h/

Closes: exercism#28
@ee7 ee7 requested a review from a team as a code owner August 23, 2023 08:37
ee7 added 2 commits August 23, 2023 10:40
Initialize the cache by using the exact same process that runs an
exercise solution in production.
@ee7 ee7 requested a review from ErikSchierboom August 23, 2023 08:47
ee7 added 2 commits August 23, 2023 11:37
The previous approach doesn't produce an error at build time if the
tests fail.
@ee7 ee7 changed the title from "Dockerfile, init-zig-cache: add a zig cache" to ".dockerignore, Dockerfile: add a zig cache" on Aug 23, 2023
@ErikSchierboom (Member) left a comment

Very nice

@ee7 (Member, Author) commented Aug 23, 2023

I ran the tests for leap from the online editor 10 times, starting around 2023-08-23T10:20:00Z. The execution time mean and sample standard deviation: 11 ± 1 s.

Let's see if it improves after merging.
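For reference, "mean and sample standard deviation" over 10 runs can be computed with a short script. The timing values below are hypothetical placeholders; the individual measurements were not published in the PR.

```python
import statistics

# Hypothetical execution times (seconds) for 10 test runs of the leap
# exercise from the online editor; not the actual measured values.
times = [10.2, 11.5, 9.8, 12.1, 10.9, 11.3, 10.4, 12.6, 11.0, 10.7]

mean = statistics.mean(times)    # arithmetic mean
stdev = statistics.stdev(times)  # sample standard deviation (n - 1 denominator)

print(f"{mean:.0f} ± {stdev:.0f} s")  # prints "11 ± 1 s" for this sample
```

Note that `statistics.stdev` uses the n − 1 (sample) denominator, matching the "sample standard deviation" wording above; `statistics.pstdev` would be the population variant.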

@ee7 ee7 merged commit 82b56af into exercism:main Aug 23, 2023
@ee7 ee7 deleted the zig-cache branch August 23, 2023 10:51
@ee7 (Member, Author) commented Aug 23, 2023

Starting at 2023-08-23T10:55:00Z (after the deployed test runner updated to 82b56af) I repeated the above test for the same exercise, and I got 14 ± 3 s.

Edit: same thing at 2023-08-23T11:20:00Z. And it seems the same for other exercises: I don't think it's a special case of only leap being slow, as a result of producing the compilation cache by running the tests for leap.

@ErikSchierboom do we already have a process for comparing an exercise's test execution time across two different test runner images? If so, can you reproduce the above slowdown? I'm certain it's a large speedup for me locally, but if it seems to be legitimately worse in production, I'll revert the commit. I realize there's some noise in the measurement.

@ErikSchierboom (Member) commented:

> do we already have a process for comparing an exercise's test execution time across two different test runner images

I'm afraid we don't, sorry.



Development

Successfully merging this pull request may close these issues.

Dockerfile: consider adding zig cache

2 participants