Testing

This guide describes how to test the UBU Digital Finance Solution backend.

Testing Framework

The UBU Digital Finance Solution uses pytest as its testing framework. The testing setup includes the following packages (see the example install command after the list):

  • pytest: The main testing framework
  • pytest-asyncio: For testing asynchronous code
  • pytest-cov: For measuring test coverage
  • httpx: For testing HTTP endpoints
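
If these packages are not already available in your environment, they can typically be installed with pip. This is only a sketch assuming installation from PyPI; the project may pin exact versions in its own requirements or dependency files:

pip install pytest pytest-asyncio pytest-cov httpx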

Test Structure

Tests are organized in a structure that mirrors the application code:

tests/
├── conftest.py                # Shared fixtures and configuration
├── api/                       # API endpoint tests
│   └── routers/               # Tests for API routers
│       └── users/             # Tests for user-related endpoints
├── auth/                      # Tests for authentication logic
├── models/                    # Tests for database models
├── schemas/                   # Tests for Pydantic schemas
└── utils/                     # Tests for utility functions

Running Tests

Running All Tests

To run all tests:

pytest

Running Specific Tests

To run tests in a specific file:

pytest tests/api/routers/users/test_user_api.py

To run a specific test function:

pytest tests/api/routers/users/test_user_api.py::test_create_user

Running Tests with Coverage

To run tests with coverage reporting:

pytest --cov=app

To generate an HTML coverage report:

pytest --cov=app --cov-report=html

The HTML report will be available in the htmlcov directory.

Test Fixtures

Common test fixtures are defined in conftest.py (see the sketch after this list). These include:

  • Database fixtures: For setting up and tearing down test databases
  • Client fixtures: For making HTTP requests to the API
  • Authentication fixtures: For creating authenticated test clients
  • User fixtures: For creating test users with different roles
  • Permission fixtures: For creating test permissions
  • Role fixtures: For creating test roles
  • Organizational unit fixtures: For creating test organizational units
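
For illustration, a client fixture might look roughly like the following. This is a minimal sketch, assuming the application is an ASGI app (e.g., FastAPI) importable from app.main and that httpx's ASGITransport is used to call it in-process; the actual conftest.py may also set up a test database and dependency overrides.

import pytest_asyncio
from httpx import ASGITransport, AsyncClient

from app.main import app  # assumed application entry point


@pytest_asyncio.fixture
async def client():
    # Route requests directly to the ASGI app, without starting a server
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac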

Writing Tests

API Tests

API tests should verify that endpoints:

  1. Return the expected status code
  2. Return the expected response format
  3. Perform the expected database operations
  4. Enforce the expected permissions
  5. Handle error cases appropriately

Example API test:

@pytest.mark.asyncio
async def test_create_user(client, admin_token):
    # Prepare test data
    user_data = {
        "user_fullname": "Test User",
        "user_email": "test@example.com",
        "user_phonenumber": "+1234567890",
        "role_id": "some-uuid",
        "unit_id": "some-uuid"
    }

    # Make request with admin token
    response = await client.post(
        "/user/",
        json=user_data,
        headers={"Authorization": f"Bearer {admin_token}"}
    )

    # Assert response
    assert response.status_code == 201
    data = response.json()
    assert "detail" in data
    assert "user" in data
    assert data["user"]["user_email"] == user_data["user_email"]

    # Verify database state
    # ...

Model Tests

Model tests should verify that database models:

  1. Can be created with valid data
  2. Enforce constraints and validations
  3. Have the expected relationships with other models
  4. Perform any model-specific operations correctly

Example model test:

@pytest.mark.asyncio
async def test_user_model(db_session):
    # Create a user
    user = UsersModel(
        user_code="TEST001",
        user_fullname="Test User",
        user_email="test@example.com",
        user_phonenumber="+1234567890",
        user_password="hashed_password",
        is_active=True
    )
    db_session.add(user)
    await db_session.commit()

    # Retrieve the user
    result = await db_session.execute(
        select(UsersModel).where(UsersModel.user_email == "test@example.com")
    )
    retrieved_user = result.scalar_one()

    # Assert user properties
    assert retrieved_user.user_code == "TEST001"
    assert retrieved_user.user_fullname == "Test User"
    assert retrieved_user.is_active is True

Schema Tests

Schema tests should verify that Pydantic schemas:

  1. Validate input data correctly
  2. Reject invalid data with appropriate error messages
  3. Transform data as expected

Example schema test:

def test_user_schema():
    # Valid data
    valid_data = {
        "user_fullname": "Test User",
        "user_email": "test@example.com",
        "user_phonenumber": "+1234567890",
        "role_id": "123e4567-e89b-12d3-a456-426614174000",
        "unit_id": "123e4567-e89b-12d3-a456-426614174001"
    }
    user = userSchema(**valid_data)
    assert user.user_fullname == valid_data["user_fullname"]
    assert user.user_email == valid_data["user_email"]

    # Invalid email
    invalid_data = valid_data.copy()
    invalid_data["user_email"] = "not-an-email"
    with pytest.raises(ValidationError):
        userSchema(**invalid_data)

Utility Tests

Utility tests should verify that utility functions:

  1. Produce the expected output for valid input
  2. Handle edge cases appropriately
  3. Raise appropriate exceptions for invalid input

Example utility test:

def test_generate_user_code():
    # Generate a user code
    user_code = generate_user_code()

    # Assert format
    assert len(user_code) == 8
    assert user_code.isalnum()

    # Generate another code and ensure it's different
    another_code = generate_user_code()
    assert user_code != another_code

Mocking

For tests that involve external dependencies (e.g., email sending, Redis), use mocking to isolate the code being tested.

Example with mocking:

@pytest.mark.asyncio
async def test_send_email(monkeypatch):
    # Create a mock for the email sending function
    async def mock_send_email(*args, **kwargs):
        return {"status": "success"}

    # Apply the mock
    monkeypatch.setattr("app.utils.mail.mail.send_email_async", mock_send_email)

    # Call the function that uses email sending
    email_data = EmailSchema(
        email=["test@example.com"],
        subject="Test Subject",
        body="Test Body"
    )
    result = await send_email_util(email_data)

    # Assert result
    assert result["status"] == "success"

Integration Tests

Integration tests verify that different components of the system work together correctly. These tests typically involve:

  1. Setting up a test database
  2. Making HTTP requests to the API
  3. Verifying the database state after the requests
  4. Checking that the response is correct

Example integration test:

@pytest.mark.asyncio
async def test_user_creation_and_authentication(client, admin_token):
    # Create a user
    user_data = {
        "user_fullname": "Integration Test User",
        "user_email": "integration@example.com",
        "user_phonenumber": "+1234567890",
        "role_id": "some-uuid",
        "unit_id": "some-uuid"
    }

    # Create user with admin token
    create_response = await client.post(
        "/user/",
        json=user_data,
        headers={"Authorization": f"Bearer {admin_token}"}
    )
    assert create_response.status_code == 201
    user = create_response.json()["user"]

    # Get the user code from the response
    user_code = user["user_code"]

    # Request OTP with the user code and default password
    otp_request = {
        "username": user_code,
        "password": "default-password"  # This would be sent in the email
    }
    otp_response = await client.post("/authentication/request-otp", json=otp_request)

    # If 2FA is disabled, we should get tokens directly
    assert otp_response.status_code == 200
    tokens = otp_response.json()

    # Use the access token to get the user profile
    profile_response = await client.get(
        "/user/profile",
        headers={"Authorization": f"Bearer {tokens['access_token']}"}
    )
    assert profile_response.status_code == 200
    profile = profile_response.json()

    # Verify profile data
    assert profile["user_email"] == user_data["user_email"]
    assert profile["user_fullname"] == user_data["user_fullname"]

Continuous Integration

Tests are automatically run in the CI pipeline for every pull request and push to the main branches. The CI pipeline:

  1. Sets up a test environment
  2. Installs dependencies
  3. Runs the tests
  4. Reports test coverage
  5. Fails if tests fail or coverage is below the threshold (see the example command after this list)
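
The coverage threshold can be enforced with pytest-cov's --cov-fail-under option. The exact threshold used in CI is not documented here, so the value below is only an example:

pytest --cov=app --cov-fail-under=80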

Best Practices

  1. Write tests first: Follow test-driven development (TDD) principles when possible
  2. Keep tests focused: Each test should verify a specific behavior
  3. Use descriptive test names: Test names should describe what is being tested
  4. Isolate tests: Tests should not depend on each other
  5. Clean up after tests: Tests should clean up any resources they create
  6. Use fixtures: Use fixtures for common setup and teardown
  7. Mock external dependencies: Use mocking for external services
  8. Test edge cases: Include tests for edge cases and error conditions
  9. Maintain high coverage: Aim for high test coverage, especially for critical code paths
  10. Keep tests fast: Tests should run quickly to encourage frequent testing