Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main     #915      +/-   ##
==========================================
+ Coverage   73.54%   73.59%   +0.04%
==========================================
  Files         109      109
  Lines       10216    10216
  Branches      882      882
==========================================
+ Hits         7513     7518       +5
+ Misses       2515     2511       -4
+ Partials      188      187       -1
```
This pull request uses carry forward flags.
```python
mock_client_class.return_value = c
mock_agent = Mock()
mock_agent_class.return_value = mock_agent
mock_agent.check_offline.return_value = (
```
@rene-oromtz This is the sort of code that I was running into when I asked about the 79-character limit. With modern wide-screen monitors, strictly enforcing the old PEP 8 79-character limit is rather silly.
Yeah, for this case it is pointless, I agree. While I do think the limit helps most of the time, maybe we can discuss increasing it to 90, or adding `# noqa` wherever it makes sense.

I think for this case, with the patch decorator, we should be able to remove the nesting.
rene-oromtz
left a comment
Thanks for the added tests. I included some comments about using fixtures or patch decorators throughout, plus a clarification note. Please let me know what you think!
```python
@pytest.fixture
def config(self, tmp_path):
    """Create a minimal config for testing."""
    return {
        "agent_id": "test01",
        "identifier": "12345-123456",
        "polling_interval": 1,
        "server_address": "127.0.0.1:8000",
        "job_queues": ["test"],
        "location": "nowhere",
        "provision_type": "noprovision",
        "execution_basedir": str(tmp_path / "execution"),
        "logging_basedir": str(tmp_path / "logs"),
        "results_basedir": str(tmp_path / "results"),
    }
```
Since this is a fixture, maybe we should add it to its own conftest file (which, I just realized, we don't have yet; maybe later we can check what else could be turned into a fixture).
```python
with patch("testflinger_agent.load_config", return_value=config):
    with patch("testflinger_agent.configure_logging"):
        with patch(
            "testflinger_agent.TestflingerClient"
        ) as mock_client_class:
            with patch(
                "testflinger_agent.TestflingerAgent"
            ) as mock_agent_class:
                with patch(
```
Would you consider using `@patch` decorators instead, to prevent the multiple levels of nesting?
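For reference, a minimal standalone sketch of the stacked-decorator pattern (using stdlib targets as stand-ins, since the real targets live in `testflinger_agent`): decorators are applied bottom-up, so the mock arguments arrive in the same order as reading the decorators from the bottom.

```python
from unittest.mock import patch


# Stacked @patch decorators replace nested `with patch(...)` blocks.
# The bottom-most decorator supplies the first mock argument.
@patch("os.path.exists")  # second argument: mock_exists
@patch("os.getcwd")       # first argument: mock_getcwd
def demo(mock_getcwd, mock_exists):
    import os

    mock_getcwd.return_value = "/tmp/fake"
    mock_exists.return_value = True
    # Both patches are active for the whole function body.
    return os.getcwd(), os.path.exists("/anything")
```

In the real test this would be `@patch("testflinger_agent.TestflingerAgent")` and friends, with the five nesting levels collapsing into one flat function body.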
```python
# End test after prescribed number of loops.
raise KeyboardInterrupt()
```

```python
with patch("testflinger_agent.load_config", return_value=config):
```
Same comment about the use of `@patch`.

Update: I just realized that if both tests are patching the same setup, maybe it would be worth adding a fixture instead.
```python
def sleep_side_effect(interval):
    sleep_counter[0] += 1
    if sleep_counter[0] >= 2:
        raise KeyboardInterrupt()
```
Instead of the `sleep_side_effect` function, could you consider using a mock side-effect list instead?

`mock_sleep.side_effect = [None, KeyboardInterrupt()]`
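To illustrate the suggestion with a minimal standalone sketch: a list assigned to `side_effect` is consumed one element per call, and exception instances in the list are raised rather than returned, so the counter bookkeeping disappears.

```python
from unittest.mock import Mock

# A side_effect iterable yields one item per call; exception
# instances (or classes) in it are raised instead of returned.
mock_sleep = Mock(side_effect=[None, KeyboardInterrupt()])

assert mock_sleep(1) is None  # first poll cycle: sleep succeeds
interrupted = False
try:
    mock_sleep(1)             # second cycle: the loop is interrupted
except KeyboardInterrupt:
    interrupted = True
assert interrupted
```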
```python
sleep_counter = [0]

def sleep_side_effect(interval):
    sleep_counter[0] += 1
    if sleep_counter[0] >= 1:
        # End test after prescribed number of loops.
        raise KeyboardInterrupt()
```
Same comment on using a mock side effect instead
```python
    "results_basedir": str(tmp_path / "results"),
}

def test_main_loop_retries_after_401(self, config):
```
Not sure about this naming, since the test is not explicitly exercising a 401. It seems to just check that the loop can continue polling, which could also happen when there is simply no job in the queue.
Description
The agent main loop (`start_agent()`) lacked integration test coverage, making it difficult to verify end-to-end behavior during job processing, server connectivity issues, and offline state handling.

Resolved issues
Test coverage improvement.
Documentation
N/A
Web service API changes
N/A
Tests
`agent/tests/test_main_loop.py`:

- `test_main_loop_retries_after_401`: verifies the loop continues after a failed job check and calls `process_jobs` twice across poll cycles
- `test_main_loop_handles_offline`: verifies the loop skips job processing when the agent is marked offline

To run: from the `agent/` directory, run `uvx tox -e unit`