feat(backend): implement comprehensive caching layer for all GET endpoints (Part 2) #10979
base: dev
Conversation
…oints (Part 2)

- Created separate cache.py modules for better code organization
  - backend/server/routers/cache.py for V1 API endpoints
  - backend/server/v2/library/cache.py for library endpoints
  - backend/server/v2/store/cache.py (refactored from routes)
- Added caching to all major GET endpoints:
  - Graphs list/details with 15-30 min TTL
  - Graph executions with 5 min TTL
  - User preferences/timezone with 30-60 min TTL
  - Library agents/favorites/presets with 10-30 min TTL
  - Store listings/profiles with 5-60 min TTL
- Implemented intelligent cache invalidation:
  - Clears relevant caches on CREATE/UPDATE/DELETE operations
  - Uses positional arguments for cache_delete to match function calls
  - Selective caching only for default queries (bypasses cache for filtered/searched results)
- Added comprehensive test coverage:
  - 20 cache-specific tests, all passing
  - Validates cache hit/miss behavior
  - Verifies invalidation on mutations
- Performance improvements:
  - Reduces database load for frequently accessed data
  - Built-in thundering herd protection via the @cached decorator
  - Configurable TTLs based on data volatility
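The TTL caching with thundering herd protection described above could be sketched roughly as follows. This is a minimal in-memory illustration, not the PR's actual `@cached` implementation; the real decorator's storage backend and API may differ. It does reproduce two behaviors the commit message names: per-key locking so concurrent misses trigger only one database call, and a positional-argument `cache_delete` for targeted invalidation.

```python
import asyncio
import time
from functools import wraps


def cached(ttl: int = 300):
    """Minimal sketch of a TTL cache decorator for async functions.

    Thundering herd protection: concurrent callers for the same key
    share one asyncio.Lock, so only the first caller runs the wrapped
    function; the rest wait and then read the freshly cached value.
    """

    def decorator(func):
        store: dict = {}  # key -> (expires_at, value)
        locks: dict = {}  # key -> asyncio.Lock

        def make_key(args, kwargs):
            return (args, tuple(sorted(kwargs.items())))

        @wraps(func)
        async def wrapper(*args, **kwargs):
            key = make_key(args, kwargs)
            entry = store.get(key)
            if entry and entry[0] > time.monotonic():
                return entry[1]  # cache hit
            lock = locks.setdefault(key, asyncio.Lock())
            async with lock:
                # Re-check after acquiring the lock: another waiter
                # may have already populated the entry.
                entry = store.get(key)
                if entry and entry[0] > time.monotonic():
                    return entry[1]
                value = await func(*args, **kwargs)  # cache miss
                store[key] = (time.monotonic() + ttl, value)
                return value

        def cache_delete(*args, **kwargs):
            # Positional arguments mirror the original call signature,
            # matching how the PR invalidates entries from mutations.
            store.pop(make_key(args, kwargs), None)

        wrapper.cache_delete = cache_delete
        return wrapper

    return decorator
```

A mutation handler can then call `get_graph.cache_delete(user_id, graph_id)` with the same positional arguments as the cached read to evict exactly that entry.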
…stead of db

The test was failing because routes now use cached functions. Updated the mock to patch the cache function, which is what the route actually calls.
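The test fix described in this commit can be sketched as follows. The module layout and function names here are illustrative stand-ins, not the real AutoGPT paths; the point is that once a route calls the cache layer instead of the db module, the test must patch the cache function to intercept the call.

```python
import asyncio
import sys
from unittest import mock


# Illustrative three-layer stand-in: route -> cache -> db.
async def db_get_agents(user_id: str) -> list:
    return ["from-db"]


async def get_cached_agents(user_id: str) -> list:
    # In the real code this function is wrapped by @cached.
    return await db_get_agents(user_id)


async def route_list_agents(user_id: str) -> list:
    # The route now goes through the cache layer, not db directly.
    return await get_cached_agents(user_id)


def test_route_uses_cache_layer():
    # Patch the cache function, since that is what the route calls;
    # a cached function would not reach the db mock on a cache hit.
    fake = mock.AsyncMock(return_value=["mocked"])
    with mock.patch.object(sys.modules[__name__], "get_cached_agents", fake):
        result = asyncio.run(route_list_agents("user-1"))
    assert result == ["mocked"]
    fake.assert_awaited_once_with("user-1")
```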
Pull Request Overview
This PR implements comprehensive caching for all GET endpoints across the AutoGPT platform by separating cache logic into dedicated modules for better maintainability. The changes extend the caching implementation to cover graphs, executions, user data, library operations, and store listings with configurable TTLs based on data volatility.
Key changes:
- Extracts cache functions from route handlers into separate modules (cache.py files)
- Implements intelligent cache invalidation on all CREATE/UPDATE/DELETE operations
- Adds comprehensive test coverage for cache behavior and invalidation patterns
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 7 comments.
File | Description
---|---
autogpt_platform/backend/backend/server/v2/store/routes.py | Removes cache functions and imports them from the new cache module
autogpt_platform/backend/backend/server/v2/store/cache.py | New cache module for Store API with all caching functions
autogpt_platform/backend/backend/server/v2/library/routes_test.py | Updates test to mock cache functions instead of DB functions
autogpt_platform/backend/backend/server/v2/library/routes/presets.py | Integrates caching for preset operations with cache invalidation
autogpt_platform/backend/backend/server/v2/library/routes/agents.py | Integrates caching for agent operations with cache invalidation
autogpt_platform/backend/backend/server/v2/library/cache_test.py | New comprehensive test suite for library cache invalidation
autogpt_platform/backend/backend/server/v2/library/cache.py | New cache module for Library API with all caching functions
autogpt_platform/backend/backend/server/routers/v1.py | Integrates caching for V1 API endpoints with cache invalidation
autogpt_platform/backend/backend/server/routers/cache_test.py | New comprehensive test suite for V1 API cache invalidation
autogpt_platform/backend/backend/server/routers/cache.py | New cache module for V1 API with all caching functions
The cached graph function was missing the include_subgraphs=True parameter, which is needed to construct the full credentials input schema. This was causing test_access_store_listing_graph to fail.
… permissions

When a graph is not found or inaccessible, we now clear the cache entry rather than caching the None result. This prevents issues with store listing permissions, where a graph becomes accessible after approval but the cache still returns the old "not found" result.
Unfavouriting a library agent is not working.
This pull request has conflicts with the base branch; please resolve those so we can evaluate the pull request.
Note: Auto reviews are disabled on this repository, so the CodeRabbit review was skipped. CodeRabbit detected other AI code review bot(s) in this pull request and will avoid duplicating their findings, which may lead to a less comprehensive review.
```python
merged_node_input = preset.inputs | inputs
merged_credential_inputs = preset.credentials | credential_inputs
# ...
for page in range(1, 10):
```
I don't get it: here you remove 10 pages, but for others you remove 5. What's the factor?
```python
result = await db.create_preset_from_graph_execution(user_id, preset)

# Clear presets list cache after creating new preset
for page in range(1, 5):
```
At this point, you can make a helper out of this :D to remove pages.
```python
library_cache.get_cached_library_agent.cache_delete(
    library_agent_id=library_agent_id, user_id=user_id
)
for page in range(1, 20):
```
Here it's 20 :o
Summary
- Separate cache.py files for better code organization and maintainability

Changes

New Cache Modules
- backend/server/routers/cache.py - V1 API endpoint caching
- backend/server/v2/library/cache.py - Library API caching
- backend/server/v2/store/cache.py - Store API caching (refactored from routes)

Cached Endpoints

Key Features
- @cached decorator

Test Plan

Performance Impact

Related