Conversation
@freakboy3742 Looks like this is a bit unstable. Any ideas on ways to make this a bit more resilient? Not sure how common this is; it's just worth checking whether you see a quick way to make it better. :) You can look at https://github.com/pypa/cibuildwheel/actions/runs/15527960535/job/43711065385?pr=2455, but here's the Copilot summary:

The failure occurred in the test_ios_platforms test.
Analysis
The error seems to stem from either:

Suggested Fixes
Updated Code Example
Update the test function (imports shown for context; test_projects, utils, basic_project_files, and skip_if_ios_testing_not_supported are assumed to come from the existing test module):

import os
import shutil

import pytest

# Provided by the existing test suite:
# test_projects, utils, basic_project_files, skip_if_ios_testing_not_supported


@pytest.mark.serial
@pytest.mark.parametrize(
    "build_config",
    [
        {"CIBW_PLATFORM": "ios"},
        {"CIBW_PLATFORM": "ios", "CIBW_BUILD_FRONTEND": "build"},
    ],
)
def test_ios_platforms(tmp_path, build_config, monkeypatch, capfd):
    skip_if_ios_testing_not_supported()

    # Ensure the `true` command is available
    if shutil.which("true") is None:
        pytest.skip("`true` command not found on the system")

    tools_dir = tmp_path / "bin"
    tools_dir.mkdir()
    tools_dir.joinpath("does-exist").symlink_to(shutil.which("true"))
    monkeypatch.setenv("PATH", str(tools_dir), prepend=os.pathsep)

    project_dir = tmp_path / "project"
    setup_py_add = "import subprocess\nsubprocess.run('does-exist', check=True)\n"
    basic_project = test_projects.new_c_project(setup_py_add=setup_py_add)
    basic_project.files.update(basic_project_files)
    basic_project.generate(project_dir)

    actual_wheels = utils.cibuildwheel_run(
        project_dir,
        add_env={
            "CIBW_BEFORE_BUILD": "does-exist",
            "CIBW_BUILD": "cp313-*",
            "CIBW_XBUILD_TOOLS": "does-exist",
            "CIBW_TEST_SOURCES": "tests",
            "CIBW_TEST_COMMAND": "python -m this && python -m unittest discover tests test_platform.py",
            "CIBW_BUILD_VERBOSITY": "3",  # Increased verbosity
            **build_config,
        },
    )
    expected_wheels = utils.expected_wheels(
        "spam", "0.1.0", platform="ios", python_abi_tags=["cp313-cp313"]
    )
    assert set(actual_wheels) == set(expected_wheels)

    captured = capfd.readouterr()
    assert "'does-exist' will be included in the cross-build environment" in captured.out
    assert "Zen of Python" in captured.out
Unfortunately, I don't have any ideas on how to address the resiliency issue beyond the tweaks we've already made (single process, etc.). I see similar failures from time to time; it appears to be a "weather" thing - you'll get three failures in a row, and then everything starts working again. I can only presume there are certain machines in the CI pool that are either (a) under significant load, or (b) old/near EOL with poor performance; once the job gets allocated to a different machine, the problem resolves. The good news is that, in my experience, "bad weather" like this is fairly infrequent.
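If it would help to confirm the "overloaded machine" theory, one cheap diagnostic (a suggestion, not something the suite currently does) is to log the runner's load average when the test starts, so failing runs can later be correlated with busy machines:

```python
# Hypothetical diagnostic helper; os.getloadavg() is available on the macOS runners.
import os


def log_runner_load(prefix: str = "runner load") -> None:
    """Print the 1/5/15-minute load averages so they appear in the CI log."""
    try:
        one, five, fifteen = os.getloadavg()
        print(f"{prefix}: 1m={one:.2f} 5m={five:.2f} 15m={fifteen:.2f}")
    except (AttributeError, OSError):
        print(f"{prefix}: unavailable on this platform")
```

Calling this at the top of test_ios_platforms only adds a line of log output, but over a few weeks of runs it would show whether the "bad weather" failures cluster on heavily loaded machines.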
|
One theory - it might be related to the rollout of the updated macOS-13 CI image... I've just noticed that an updated image became available around the same time as these CI builds... there might be a "first-time cache warm" thing going on here. It's very difficult to confirm that this is actually the cause, though.
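If we wanted to test that theory, GitHub-hosted runners expose the image name and version as environment variables (ImageOS / ImageVersion, as far as I know), so the failing jobs could be correlated with the image they ran on. A minimal sketch:

```python
# Sketch: dump the runner image details into the log so failures can be
# matched against image rollouts. ImageOS/ImageVersion are set on
# GitHub-hosted runners; the .get() default covers self-hosted machines.
import os

image_os = os.environ.get("ImageOS", "unknown")
image_version = os.environ.get("ImageVersion", "unknown")
print(f"runner image: {image_os} {image_version}")
```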
Update the versions of our dependencies.
PR generated by "Update dependencies" workflow.