Conversation
Contributor
Reviewer's Guide

Introduce a SQLite-based persistent storage layer (Port/Adapter) alongside the existing Valkey cache, refactor cache and controller flows to be fully async, and add exponential-backoff tracking for unknown players with richer 404 responses and Prometheus metrics, while wiring storage into app startup and tests.

ER diagram for SQLite persistent storage tables

erDiagram
static_data {
TEXT key PK
TEXT data_type
BLOB data_compressed
INTEGER created_at
INTEGER updated_at
INTEGER schema_version
}
player_profiles {
TEXT player_id PK
TEXT blizzard_id
BLOB html_compressed
TEXT summary_json
INTEGER last_updated_blizzard
INTEGER created_at
INTEGER updated_at
INTEGER schema_version
}
player_status {
TEXT player_id PK
INTEGER check_count
INTEGER last_checked_at
INTEGER retry_after
}
player_profiles ||--|| player_status : shares_player_id
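The player_status table above backs the exponential-backoff tracking mentioned in the reviewer's guide: each failed lookup increments check_count and pushes retry_after further out. Below is a minimal sketch of the kind of calculation `_calculate_retry_after` could perform; the base delay, cap, and use of absolute UNIX timestamps are illustrative assumptions, not values taken from the PR:

```python
# Hypothetical backoff helper -- base delay, cap and timestamp convention are
# assumptions for illustration, not the PR's actual implementation.
import time


def calculate_retry_after(
    check_count: int, base_seconds: int = 60, cap_seconds: int = 86_400
) -> int:
    """Return a UNIX timestamp before which the player should not be re-checked."""
    delay = min(base_seconds * (2 ** max(check_count - 1, 0)), cap_seconds)
    return int(time.time()) + delay
```

With these assumed values, the first miss defers re-checks by one minute, doubling on each subsequent miss up to a one-day ceiling.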
Class diagram for cache and storage ports and their adapters

classDiagram
class CachePort {
<<protocol>>
+async get(key: str) bytes|None
+async set(key: str, value: bytes, expire: int|None) None
+async delete(key: str) None
+async exists(key: str) bool
+async get_api_cache(cache_key: str) dict|list|None
+async update_api_cache(cache_key: str, value: dict|list, expire: int) None
+async get_player_cache(player_id: str) dict|list|None
+async update_player_cache(player_id: str, value: dict) None
+async is_being_rate_limited() bool
+async get_global_rate_limit_remaining_time() int
+async set_global_rate_limit() None
+async is_player_unknown(player_id: str) bool
+async set_player_as_unknown(player_id: str) None
}
class ValkeyCache {
+valkey_server
+static get_cache_key_from_request(request)
+static _compress_json_value(value: dict|list) bytes
+static _decompress_json_value(value: bytes) dict|list
+async get(key: str) bytes|None
+async set(key: str, value: bytes, expire: int|None) None
+async delete(key: str) None
+async exists(key: str) bool
+async get_api_cache(cache_key: str) dict|list|None
+async update_api_cache(cache_key: str, value: dict|list, expire: int) None
+async get_player_cache(player_id: str) dict|list|None
+async update_player_cache(player_id: str, value: dict) None
+async is_being_rate_limited() bool
+async get_global_rate_limit_remaining_time() int
+async set_global_rate_limit() None
+async is_player_unknown(player_id: str) bool
+async set_player_as_unknown(player_id: str) None
}
class StoragePort {
<<protocol>>
+async get_static_data(key: str) dict|None
+async set_static_data(key: str, data: str, data_type: str, schema_version: int) None
+async get_player_profile(player_id: str) dict|None
+async set_player_profile(player_id: str, html: str, summary: dict|None, blizzard_id: str|None, last_updated_blizzard: int|None, schema_version: int) None
+async get_player_status(player_id: str) dict|None
+async set_player_status(player_id: str, check_count: int, retry_after: int) None
+async delete_player_status(player_id: str) None
+async clear_player_data() None
+async close() None
}
class SQLiteStorage {
-db_path: str
-_initialized: bool
-_shared_connection
+__init__(db_path: str|None)
+async initialize() None
+async close() None
+async get_static_data(key: str) dict|None
+async set_static_data(key: str, data: str, data_type: str, schema_version: int) None
+async get_player_profile(player_id: str) dict|None
+async set_player_profile(player_id: str, html: str, summary: dict|None, blizzard_id: str|None, last_updated_blizzard: int|None, schema_version: int) None
+async get_player_status(player_id: str) dict|None
+async set_player_status(player_id: str, check_count: int, retry_after: int) None
+async delete_player_status(player_id: str) None
+async get_stats() dict
+async clear_player_data() None
}
class AbstractController {
<<abstract>>
+cache_manager: CachePort
+storage: StoragePort
+cache_key: str
+response
+timeout: int
+async update_static_cache(data: dict|list, storage_key: str, data_type: str) None
-static _track_storage_error(error_type: str) None
+async process_request(**kwargs) dict|list
}
class BasePlayerController {
+async get_player_profile_cache(player_id: str) dict[str, str|dict]|None
+async update_player_profile_cache(player_id: str, player_summary: dict, html: str) None
+_calculate_retry_after(check_count: int) int
+async check_unknown_player(player_id: str) None
+async mark_player_unknown_on_404(player_id: str, exception) None
+async process_request(**kwargs) dict
}
class GetPlayerCareerController {
+async process_request(**kwargs) dict
+async _fetch_player_html(client, player_id: str) str
}
class BlizzardClientPort {
<<protocol>>
+async get(url: str, headers, timeout) Response
+async aclose() None
}
CachePort <|.. ValkeyCache
StoragePort <|.. SQLiteStorage
AbstractController ..> CachePort
AbstractController ..> StoragePort
BasePlayerController --|> AbstractController
GetPlayerCareerController --|> BasePlayerController
GetPlayerCareerController ..> BlizzardClientPort
BasePlayerController ..> StoragePort
AbstractController ..> ValkeyCache
AbstractController ..> SQLiteStorage
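To make the Port/Adapter split above concrete, here is a minimal sketch of how the storage port and its adapter could be declared; only the method names come from the diagrams, while the typing.Protocol wiring and the default database path are assumptions about the implementation:

```python
# Structural sketch only -- method names mirror the class diagram above;
# Protocol usage and the default db_path are assumptions, not the PR's code.
from typing import Protocol, runtime_checkable


@runtime_checkable
class StoragePort(Protocol):
    async def get_static_data(self, key: str) -> dict | None: ...
    async def set_static_data(
        self, key: str, data: str, data_type: str, schema_version: int
    ) -> None: ...
    async def get_player_status(self, player_id: str) -> dict | None: ...
    async def set_player_status(
        self, player_id: str, check_count: int, retry_after: int
    ) -> None: ...
    async def close(self) -> None: ...


class SQLiteStorage:
    """Adapter that satisfies StoragePort structurally (no inheritance needed)."""

    def __init__(self, db_path: str | None = None) -> None:
        self.db_path = db_path or ":memory:"  # assumed default, see note above
        self._initialized = False

    # ... async methods implementing the protocol (get_static_data, etc.)
```

Because StoragePort is a Protocol, controllers can be typed against it while tests swap in an in-memory SQLiteStorage or any other adapter without touching controller code.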
Contributor
Hey - I've found 4 issues, and left some high level feedback:
- The global `AbstractController.storage` instance is never initialized in the main lifespan (you create and initialize a separate `storage` local variable), so any code using `AbstractController.storage` in the running app will hit an uninitialized DB schema; consider wiring the initialized storage from `lifespan` into `AbstractController.storage` (or a DI mechanism) instead of instantiating a separate adapter.
- The `with_unknown_player_guard` decorator relies on `player_id` being present in `kwargs` and raises `ValueError` otherwise; given this now decorates several `process_request` implementations, it would be safer to either accept `*args, **kwargs` and extract `player_id` from the positional arguments when present, or enforce/validate this contract at the controller base level to avoid subtle runtime failures if signatures change.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The global `AbstractController.storage` instance is never initialized in the main lifespan (you create and initialize a separate `storage` local variable), so any code using `AbstractController.storage` in the running app will hit an uninitialized DB schema; consider wiring the initialized storage from `lifespan` into `AbstractController.storage` (or a DI mechanism) instead of instantiating a separate adapter.
- The `with_unknown_player_guard` decorator relies on `player_id` being present in `kwargs` and raises `ValueError` otherwise; given this now decorates several `process_request` implementations, it would be safer to either accept `*args, **kwargs` and extract `player_id` from the positional arguments when present, or enforce/validate this contract at the controller base level to avoid subtle runtime failures if signatures change.
## Individual Comments
### Comment 1
<location> `app/players/controllers/base_player_controller.py:14` </location>
<code_context>
from app.overfast_logger import logger
+def with_unknown_player_guard(func):
+ """
+ Decorator to guard player endpoints against unknown players.
</code_context>
<issue_to_address>
**issue (bug_risk):** Missing `wraps` import will cause a NameError at runtime.
This module uses `@wraps(func)` but never imports it, so the decorator will raise `NameError` when defined. Add `from functools import wraps` at the top of this module (as in `valkey_cache.py`).
</issue_to_address>
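A hedged sketch of the fixed decorator once the import is added; the guard body is inferred from the overall comments (player_id must arrive as a keyword argument) and is not the PR's verbatim code:

```python
from functools import wraps  # the import flagged as missing above


def with_unknown_player_guard(func):
    """Guard player endpoints against players already marked as unknown."""

    @wraps(func)  # without the functools import, this line raises NameError
    async def wrapper(self, **kwargs):
        player_id = kwargs.get("player_id")
        if player_id is None:
            # current contract per the review: player_id must be a keyword argument
            raise ValueError("player_id must be passed as a keyword argument")
        await self.check_unknown_player(player_id)  # BasePlayerController helper
        return await func(self, **kwargs)

    return wrapper
```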
### Comment 2
<location> `app/controllers.py:31-32` </location>
<code_context>
+ # Cache manager for Valkey operations (Protocol type for dependency inversion)
+ cache_manager: CachePort = CacheManager()
+
+ # Storage adapter for persistent data (Protocol type for dependency inversion)
+ storage: StoragePort = SQLiteStorage()
def __init__(self, request: Request, response: Response):
</code_context>
<issue_to_address>
**issue (bug_risk):** Controller `storage` instance is never initialized with the schema, which may lead to missing-table errors.
The lifespan handler initializes its own `SQLiteStorage` and calls `initialize()`, but `AbstractController.storage` is a separate instance created at import time and never initialized. All controller methods (`get_player_profile`, `set_static_data`, etc.) use this uninitialized instance, so required tables may be missing. Either share a single `SQLiteStorage` instance between lifespan and controllers, or ensure `await AbstractController.storage.initialize()` is called once at startup and reuse that instance in lifespan instead of creating a new one.
</issue_to_address>
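One way to address this, sketched under the assumption that the app uses a standard FastAPI lifespan context manager and that `initialize()`/`close()` live on the SQLiteStorage adapter: reuse the class-level instance instead of creating a second one inside lifespan.

```python
# Sketch of the suggested wiring -- module paths and lifespan shape are
# assumptions based on this review, not the PR's actual startup code.
from contextlib import asynccontextmanager

from fastapi import FastAPI

from app.controllers import AbstractController


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Initialize the shared storage adapter once at startup so every
    # controller reading AbstractController.storage sees a ready schema.
    await AbstractController.storage.initialize()
    try:
        yield
    finally:
        await AbstractController.storage.close()


app = FastAPI(lifespan=lifespan)
```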
### Comment 3
<location> `app/adapters/cache/valkey_cache.py:39-48` </location>
<code_context>
from fastapi import Request
+def handle_valkey_error(
+ default_return: Any = None,
+) -> Callable[[Callable], Callable]:
+ """
+ Decorator to handle Valkey connection errors gracefully.
+
+ Args:
+ default_return: Value to return when ValkeyError is caught (default: None)
+
+ Returns:
+ Decorated async function that catches ValkeyError and returns default_return
+ """
+
+ def decorator(func: Callable) -> Callable:
+ @wraps(func)
+ async def wrapper(*args, **kwargs):
+ try:
+ return await func(*args, **kwargs)
+ except valkey.ValkeyError as err:
+ func_name = getattr(func, "__name__", "unknown")
+ logger.warning(f"Valkey server error in {func_name}: {err}")
</code_context>
<issue_to_address>
**issue (bug_risk):** Catching `valkey.ValkeyError` on the asyncio client may not match the actual exception type being raised.
The previous code caught `valkey.exceptions.ValkeyError`, but the decorator now catches `valkey.ValkeyError` on the asyncio client. Depending on the actual Valkey/redis-py API, this may not match the real base exception, so these errors might bypass the handler. Consider either catching `valkey.exceptions.ValkeyError` from the root package, or another verified base exception actually raised by the asyncio client, so transient Valkey issues are still handled here.
</issue_to_address>
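A minimal sketch of the decorator catching the base class this comment suggests verifying; the module-level logger is a stand-in for the project's `app.overfast_logger`:

```python
# Sketch assuming valkey.exceptions.ValkeyError is the base exception raised
# by the asyncio client as well; verify against the installed valkey package.
import logging
from collections.abc import Callable
from functools import wraps
from typing import Any

import valkey.exceptions

logger = logging.getLogger(__name__)  # stand-in for app.overfast_logger


def handle_valkey_error(default_return: Any = None) -> Callable[[Callable], Callable]:
    """Catch Valkey errors on async cache calls and fall back to a default value."""

    def decorator(func: Callable) -> Callable:
        @wraps(func)
        async def wrapper(*args, **kwargs):
            try:
                return await func(*args, **kwargs)
            except valkey.exceptions.ValkeyError as err:
                logger.warning(f"Valkey server error in {func.__name__}: {err}")
                return default_return

        return wrapper

    return decorator
```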
### Comment 4
<location> `tests/conftest.py:21-28` </location>
<code_context>
-@pytest.fixture(scope="session")
-def valkey_server():
- return fakeredis.FakeValkey(protocol=3) # ty: ignore[possibly-missing-attribute]
+@pytest_asyncio.fixture(scope="session")
+async def valkey_server():
+ """Provide async FakeValkey server for tests"""
+ return fakeredis.FakeAsyncRedis(protocol=3)
+
+
+@pytest_asyncio.fixture(scope="session")
+async def storage_db() -> AsyncIterator[SQLiteStorage]:
+ """Provide an in-memory SQLite storage for tests"""
</code_context>
<issue_to_address>
**suggestion (testing):** Add dedicated tests for the SQLiteStorage adapter to validate compression, schema, and player/unknown status flows
Right now the fixture only provides a `SQLiteStorage` instance; the adapter itself isn’t directly exercised. Given its central role (profiles, unknown tracking, static data), it would be valuable to add a `tests/test_sqlite_storage.py` that covers:
- `set_static_data` / `get_static_data`: round-trip, `data_type`, `schema_version`, correct JSON compression/decompression
- `set_player_profile` / `get_player_profile`: persistence and reconstruction of `html` and `summary`, including when `summary_json` is `None`
- `set_player_status` / `get_player_status` / `delete_player_status`: exponential-backoff metadata is stored as expected
- `clear_player_data`: tables are actually cleared
These tests will also implicitly verify `:memory:` connection handling and schema initialization.
Suggested implementation:
```python
from __future__ import annotations

from typing import TYPE_CHECKING

import fakeredis
import pytest_asyncio
from fastapi.testclient import TestClient

from app.adapters.storage import SQLiteStorage
from app.main import app

if TYPE_CHECKING:
    from collections.abc import AsyncIterator


@pytest_asyncio.fixture(scope="session")
async def valkey_server():
    """Provide async FakeValkey server for tests."""
    # Use async fake redis/valkey instance for tests that depend on Valkey
    return fakeredis.FakeAsyncRedis(protocol=3)


@pytest_asyncio.fixture(scope="session")
async def storage_db() -> AsyncIterator[SQLiteStorage]:
    """Provide an in-memory SQLite storage for tests.

    This fixture ensures that tests share a single in-memory SQLiteStorage
    instance for the session, implicitly exercising :memory: connection
    handling and schema initialization.
    """
    storage = SQLiteStorage(":memory:")
    try:
        yield storage
    finally:
        await storage.close()


@pytest_asyncio.fixture
async def client() -> AsyncIterator[TestClient]:
    """Provide a TestClient for the FastAPI app.

    Wrapped in an async fixture so it composes cleanly with other async
    fixtures (e.g. valkey_server, storage_db) if the app wiring depends on them.
    """
    with TestClient(app) as test_client:
        yield test_client
```
To fully implement your suggestion about dedicated SQLiteStorage adapter tests, add a new file `tests/test_sqlite_storage.py` that uses the `storage_db` fixture and includes tests covering at least:
1. `set_static_data` / `get_static_data`:
- Store static data with a given `data_type` and `schema_version`.
- Verify round-trip correctness, including JSON compression/decompression and versioning.
2. `set_player_profile` / `get_player_profile`:
- Store a profile with both `html` and `summary_json`.
- Store a profile with `summary_json=None`.
- Assert retrieved profiles reconstruct `html` and `summary` as expected in both cases.
3. `set_player_status` / `get_player_status` / `delete_player_status`:
- Store a status with exponential-backoff metadata (e.g. `retry_count`, `next_retry_at` or equivalent in your implementation).
- Verify the metadata is persisted and retrieved correctly.
- Verify `delete_player_status` removes the status.
4. `clear_player_data`:
- Seed profiles/status/static data for one or more players.
- Call `clear_player_data` and assert all related rows are removed from the underlying tables.
Tailor the tests to the exact method signatures and field names exposed by your `SQLiteStorage` implementation, and ensure they run against the `:memory:` DB provided by the `storage_db` fixture so that schema initialization and lifecycle are implicitly tested.
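For instance, the static-data round trip could look roughly like this; method names and parameters are taken from the class diagram above, while the shape of the returned value is an assumption to adjust against the real adapter:

```python
# tests/test_sqlite_storage.py -- sketch of the first case only.
import pytest

from app.adapters.storage import SQLiteStorage


@pytest.mark.asyncio
async def test_static_data_round_trip():
    storage = SQLiteStorage(":memory:")
    await storage.initialize()
    try:
        await storage.set_static_data(
            key="heroes",
            data='{"ana": {"role": "support"}}',
            data_type="json",
            schema_version=1,
        )
        result = await storage.get_static_data("heroes")
        # the payload is compressed on write, so a non-None result also proves
        # the decompression path (assert exact fields per the real schema)
        assert result is not None
    finally:
        await storage.close()
```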
</issue_to_address>
TeKrop added a commit that referenced this pull request on Feb 16, 2026
…/Adapter pattern (#361)

* feat: first part of phase 3
* feat: progress on phase 3
* feat: made valkey cache async
* feat: finished phase 3
* fix: fixes after self review
* fix: fixed tests and added sqlite ones
* fix: added metrics for storage
* fix: fixed nginx lua issue
* fix: fixed dashboards
* fix: updated down command to not remove volume by default
* fix: fixed again commands to not remove volumes
* fix: fixed code deduplication
* fix: fixed issue
* fix: adjusted SQLite usage
* fix: improved player search workflow
* fix: several bugfixes and introduced blizzard_id as main identifier
* fix: last fixes before E2E tests
* feat: added additional usage of blizzard ID & battletag mapping
* fix: added reverse player data enrichment
* fix: finished grafana boards
* fix: fixed code deduplication
* fix: added remaining sqlite metrics
Summary by Sourcery
Add SQLite-based persistent storage and integrate it with existing caching and player controllers, introducing async cache operations, exponential backoff for unknown players, and enhanced error-handling and metrics.
New Features:
Bug Fixes:
Enhancements:
Build:
Deployment:
Documentation:
Tests: