This documentation provides developers with guidelines on how to run tests for the mlscores Python package and measure code coverage using pytest and pytest-cov.
Before you can run the tests and check code coverage, ensure that you have the following installed:
- Python 3.x
- pytest
- pytest-cov
You can install the required packages using one of the following methods:
```bash
# Using pip with extras
pip install ".[dev]"

# Using requirements file
pip install -r requirements-dev.txt

# Or install directly
pip install pytest pytest-cov
```

To run all tests in the project, navigate to the root directory (where setup.py is located) and run:

```bash
pytest
```

This command will automatically discover and execute all test files that match the naming pattern `test_*.py` or `*_test.py` located in the `tests/` folder.
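Test discovery can be illustrated with a minimal, hypothetical test file (not part of mlscores): any function whose name starts with `test_` inside a file matching the pattern above is collected automatically.

```python
# tests/test_example.py -- hypothetical file, for illustration only

def add_percentages(values):
    """Toy helper so the test below has something to exercise."""
    return sum(values)

def test_add_percentages():
    # pytest collects this function because its name starts with test_
    assert add_percentages([40.0, 60.0]) == 100.0
```

Running `pytest` from the project root would pick this file up without any extra configuration.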
You can also run specific tests by specifying a test file, class, or function.

- To run a specific test file:

  ```bash
  pytest tests/test_scores.py
  ```

- To run a specific test function within a test file:

  ```bash
  pytest tests/test_scores.py::test_all_properties_in_language
  ```

- To run a specific test class:

  ```bash
  pytest tests/test_display.py::TestPrintLanguagePercentages
  ```
Use the `-v` option for verbose output, which provides more detailed information about the tests being executed:

```bash
pytest -v
```

The test suite includes the following test modules:
| Module | Description |
|---|---|
| `test_scores.py` | Tests for multilinguality score calculations |
| `test_query.py` | Tests for SPARQL query functions |
| `test_display.py` | Tests for display and output formatting |
| `test_main.py` | Tests for CLI and main module functions |
Run all tests with verbose output:

```bash
pytest -v tests/
```

To check code coverage while running your tests, you need to use the pytest-cov plugin. This tool will give you insights into which parts of your code are covered by tests.
- Run tests with coverage. Execute the following command to run tests and measure code coverage:

  ```bash
  pytest --cov=mlscores
  ```

- View the coverage report. After running the tests, pytest-cov will display a summary of the coverage in the terminal. You will see output similar to this:

  ```
  ---------- coverage: platform linux, python 3.10.12-final-0 ----------
  Name                      Stmts   Miss  Cover
  ----------------------------------------------
  mlscores/__init__.py          1      0   100%
  mlscores/__main__.py         85     20    76%
  mlscores/cache.py            45     10    78%
  mlscores/constants.py        12      0   100%
  mlscores/display.py          35      5    86%
  mlscores/endpoint.py         25      8    68%
  mlscores/formatters.py       60     15    75%
  mlscores/query.py            67     10    85%
  mlscores/scores.py           39      2    95%
  ----------------------------------------------
  TOTAL                       369     70    81%
  ```

- Generate a coverage report in HTML format. For a more detailed view, you can generate an HTML report:

  ```bash
  pytest --cov=mlscores --cov-report=html
  ```

  This command will create an `htmlcov/` directory in your project root containing the HTML coverage report. Open `htmlcov/index.html` in your browser to view the coverage details interactively.

- Coverage with missing lines. To see which specific lines are missing coverage:

  ```bash
  pytest --cov=mlscores --cov-report=term-missing
  ```
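The `Cover` column in the report is simply the share of statements that were executed. A quick sanity check against the TOTAL row of the sample report above (note this is a simplified calculation; coverage.py's exact rounding can differ at the edges, e.g. it avoids reporting 100% unless coverage is truly complete):

```python
def coverage_percent(stmts: int, miss: int) -> int:
    """Approximate the Cover column: executed statements as a percentage."""
    return round(100 * (stmts - miss) / stmts)

# TOTAL row from the sample report: 369 statements, 70 missed
print(coverage_percent(369, 70))  # -> 81
```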
The test suite uses pytest fixtures defined in `tests/conftest.py` to provide common test data and mock objects. Key fixtures include:
| Fixture | Description |
|---|---|
| `sample_properties` | Sample property data for testing score calculations |
| `sample_property_labels` | Sample property label data with language information |
| `sample_value_labels` | Sample value label data with language information |
| `mock_sparql_wrapper` | Mock for SPARQLWrapper to avoid network calls |
Example usage of fixtures in tests:

```python
def test_calculate_scores(sample_properties, sample_property_labels):
    # Use fixtures in your test
    result = calculate_language_percentages(sample_properties)
    assert result is not None
```

- Import errors: If you encounter import errors, ensure the package is installed in development mode:

  ```bash
  pip install -e ".[dev]"
  ```

- Missing dependencies: Install all required dependencies:

  ```bash
  pip install -r requirements-dev.txt
  ```

  Or separately:

  ```bash
  pip install -r requirements.txt
  pip install pytest pytest-cov
  ```

- Test discovery issues: Ensure test files follow the naming convention `test_*.py` and are located in the `tests/` directory.

- Mock-related failures: Tests use `unittest.mock` to avoid making actual network requests. If tests fail with network-related errors, check that mocks are properly configured.
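The mocking approach described above can be sketched with `unittest.mock` alone. The `query().convert()` attribute chain below mirrors the SPARQLWrapper calling convention, but the canned result payload is only an illustrative assumption, not the actual data used in the test suite:

```python
from unittest.mock import MagicMock

# Stand-in for a SPARQLWrapper instance; no network access happens.
mock_wrapper = MagicMock()
mock_wrapper.query.return_value.convert.return_value = {
    "results": {"bindings": [
        {"label": {"value": "house", "xml:lang": "en"}}
    ]}
}

# Code under test calls the mock exactly as it would call the real wrapper:
bindings = mock_wrapper.query().convert()["results"]["bindings"]
print(bindings[0]["label"]["value"])  # -> house
```

If a test still hits the network, the usual cause is that the real `SPARQLWrapper` was imported before the mock was put in place.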
Regular testing ensures code quality and prevents regressions. Run the test suite before submitting pull requests:
```bash
# Run all tests with coverage
pytest --cov=mlscores --cov-report=term-missing -v

# Generate HTML coverage report
pytest --cov=mlscores --cov-report=html
```

For questions or issues with testing, please open an issue on the GitHub repository.