# Testing
This guide provides information about testing the UBU Digital Finance Solution backend.
## Testing Framework
The UBU Digital Finance Solution uses pytest as its testing framework. The testing setup includes:
- pytest: The main testing framework
- pytest-asyncio: For testing asynchronous code
- pytest-cov: For measuring test coverage
- httpx: For testing HTTP endpoints
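These tools are typically installed as development dependencies. A minimal install, assuming a pip-based workflow (your project may instead pin versions in a requirements file or `pyproject.toml`):

```bash
pip install pytest pytest-asyncio pytest-cov httpx
```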
## Test Structure

Tests are organized in a structure that mirrors the application code:

```
tests/
├── conftest.py      # Shared fixtures and configuration
├── api/             # API endpoint tests
│   └── routers/     # Tests for API routers
│       └── users/   # Tests for user-related endpoints
├── auth/            # Tests for authentication logic
├── models/          # Tests for database models
├── schemas/         # Tests for Pydantic schemas
└── utils/           # Tests for utility functions
```
## Running Tests

### Running All Tests

To run all tests:
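```bash
# From the project root
pytest
```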
### Running Specific Tests

To run tests in a specific file (the path below is illustrative):
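```bash
pytest tests/api/routers/users/test_users.py
```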
To run a specific test function, use pytest's `::` node-ID syntax (names below are illustrative):
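```bash
pytest tests/api/routers/users/test_users.py::test_create_user
```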
### Running Tests with Coverage
To run tests with coverage reporting:
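```bash
# Assumes the application package is app/
pytest --cov=app
```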
To generate an HTML coverage report:
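```bash
pytest --cov=app --cov-report=html
```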
The HTML report will be available in the `htmlcov` directory.
## Test Fixtures

Common test fixtures are defined in `conftest.py`. These include:
- Database fixtures: For setting up and tearing down test databases
- Client fixtures: For making HTTP requests to the API (a sketch follows this list)
- Authentication fixtures: For creating authenticated test clients
- User fixtures: For creating test users with different roles
- Permission fixtures: For creating test permissions
- Role fixtures: For creating test roles
- Organizational unit fixtures: For creating test organizational units
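As a minimal sketch, a client fixture built on httpx might look like the following (the `app.main:app` import path, the FastAPI/ASGI setup, and the fixture name are assumptions; the actual `conftest.py` may differ):

```python
# conftest.py (sketch) -- assumes an ASGI app importable as app.main:app
import pytest_asyncio
from httpx import ASGITransport, AsyncClient

from app.main import app  # assumed import path


@pytest_asyncio.fixture
async def client():
    # Route requests directly to the ASGI app in-process; no running server needed
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as ac:
        yield ac
```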
## Writing Tests

### API Tests
API tests should verify that endpoints:
- Return the expected status code
- Return the expected response format
- Perform the expected database operations
- Enforce the expected permissions
- Handle error cases appropriately
Example API test:

```python
import pytest


@pytest.mark.asyncio
async def test_create_user(client, admin_token):
    # Prepare test data
    user_data = {
        "user_fullname": "Test User",
        "user_email": "test@example.com",
        "user_phonenumber": "+1234567890",
        "role_id": "some-uuid",
        "unit_id": "some-uuid",
    }

    # Make request with admin token
    response = await client.post(
        "/user/",
        json=user_data,
        headers={"Authorization": f"Bearer {admin_token}"},
    )

    # Assert response
    assert response.status_code == 201
    data = response.json()
    assert "detail" in data
    assert "user" in data
    assert data["user"]["user_email"] == user_data["user_email"]

    # Verify database state
    # ...
```
### Model Tests
Model tests should verify that database models:
- Can be created with valid data
- Enforce constraints and validations
- Have the expected relationships with other models
- Perform any model-specific operations correctly
Example model test (the `UsersModel` import path below is illustrative):

```python
import pytest
from sqlalchemy import select

from app.models.users import UsersModel  # assumed import path


@pytest.mark.asyncio
async def test_user_model(db_session):
    # Create a user
    user = UsersModel(
        user_code="TEST001",
        user_fullname="Test User",
        user_email="test@example.com",
        user_phonenumber="+1234567890",
        user_password="hashed_password",
        is_active=True,
    )
    db_session.add(user)
    await db_session.commit()

    # Retrieve the user
    result = await db_session.execute(
        select(UsersModel).where(UsersModel.user_email == "test@example.com")
    )
    retrieved_user = result.scalar_one()

    # Assert user properties
    assert retrieved_user.user_code == "TEST001"
    assert retrieved_user.user_fullname == "Test User"
    assert retrieved_user.is_active is True
```
### Schema Tests
Schema tests should verify that Pydantic schemas:
- Validate input data correctly
- Reject invalid data with appropriate error messages
- Transform data as expected
Example schema test (the `userSchema` import path below is illustrative):

```python
import pytest
from pydantic import ValidationError

from app.schemas.users import userSchema  # assumed import path


def test_user_schema():
    # Valid data
    valid_data = {
        "user_fullname": "Test User",
        "user_email": "test@example.com",
        "user_phonenumber": "+1234567890",
        "role_id": "123e4567-e89b-12d3-a456-426614174000",
        "unit_id": "123e4567-e89b-12d3-a456-426614174001",
    }
    user = userSchema(**valid_data)
    assert user.user_fullname == valid_data["user_fullname"]
    assert user.user_email == valid_data["user_email"]

    # Invalid email should raise a validation error
    invalid_data = valid_data.copy()
    invalid_data["user_email"] = "not-an-email"
    with pytest.raises(ValidationError):
        userSchema(**invalid_data)
```
### Utility Tests
Utility tests should verify that utility functions:
- Produce the expected output for valid input
- Handle edge cases appropriately
- Raise appropriate exceptions for invalid input
Example utility test (the `generate_user_code` import path below is illustrative):

```python
from app.utils.users import generate_user_code  # assumed import path


def test_generate_user_code():
    # Generate a user code
    user_code = generate_user_code()

    # Assert format: 8 alphanumeric characters
    assert len(user_code) == 8
    assert user_code.isalnum()

    # Generate another code and ensure it's different
    another_code = generate_user_code()
    assert user_code != another_code
```
## Mocking
For tests that involve external dependencies (e.g., email sending, Redis), use mocking to isolate the code being tested.
Example with mocking (the `EmailSchema` and `send_email_util` import paths below are illustrative):

```python
import pytest

from app.schemas.mail import EmailSchema  # assumed import paths
from app.utils.mail import send_email_util


@pytest.mark.asyncio
async def test_send_email(monkeypatch):
    # Create a mock for the email sending function
    async def mock_send_email(*args, **kwargs):
        return {"status": "success"}

    # Apply the mock
    monkeypatch.setattr("app.utils.mail.mail.send_email_async", mock_send_email)

    # Call the function that uses email sending
    email_data = EmailSchema(
        email=["test@example.com"],
        subject="Test Subject",
        body="Test Body",
    )
    result = await send_email_util(email_data)

    # Assert result
    assert result["status"] == "success"
```
## Integration Tests
Integration tests verify that different components of the system work together correctly. These tests typically involve:
- Setting up a test database
- Making HTTP requests to the API
- Verifying the database state after the requests
- Checking that the response is correct
Example integration test (note that the `admin_token` fixture must be requested in the test signature before it can be used):

```python
import pytest


@pytest.mark.asyncio
async def test_user_creation_and_authentication(client, admin_token):
    # Prepare test data
    user_data = {
        "user_fullname": "Integration Test User",
        "user_email": "integration@example.com",
        "user_phonenumber": "+1234567890",
        "role_id": "some-uuid",
        "unit_id": "some-uuid",
    }

    # Create user with admin token
    create_response = await client.post(
        "/user/",
        json=user_data,
        headers={"Authorization": f"Bearer {admin_token}"},
    )
    assert create_response.status_code == 201
    user = create_response.json()["user"]

    # Get the user code from the response
    user_code = user["user_code"]

    # Request OTP with the user code and default password
    otp_request = {
        "username": user_code,
        "password": "default-password",  # This would be sent in the email
    }
    otp_response = await client.post("/authentication/request-otp", json=otp_request)

    # If 2FA is disabled, we should get tokens directly
    assert otp_response.status_code == 200
    tokens = otp_response.json()

    # Use the access token to get the user profile
    profile_response = await client.get(
        "/user/profile",
        headers={"Authorization": f"Bearer {tokens['access_token']}"},
    )
    assert profile_response.status_code == 200
    profile = profile_response.json()

    # Verify profile data
    assert profile["user_email"] == user_data["user_email"]
    assert profile["user_fullname"] == user_data["user_fullname"]
```
## Continuous Integration
Tests are automatically run in the CI pipeline for every pull request and push to the main branches. The CI pipeline:
- Sets up a test environment
- Installs dependencies
- Runs the tests
- Reports test coverage
- Fails if tests fail or coverage is below the threshold
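As an illustration, the coverage gate can be enforced with pytest-cov's `--cov-fail-under` flag (the 80% threshold here is an assumption; the actual pipeline may use a different value):

```bash
# Fail the run if overall coverage drops below 80%
pytest --cov=app --cov-fail-under=80
```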
## Best Practices
- Write tests first: Follow test-driven development (TDD) principles when possible
- Keep tests focused: Each test should verify a specific behavior
- Use descriptive test names: Test names should describe what is being tested
- Isolate tests: Tests should not depend on each other
- Clean up after tests: Tests should clean up any resources they create
- Use fixtures: Use fixtures for common setup and teardown
- Mock external dependencies: Use mocking for external services
- Test edge cases: Include tests for edge cases and error conditions
- Maintain high coverage: Aim for high test coverage, especially for critical code paths
- Keep tests fast: Tests should run quickly to encourage frequent testing