Wait VM becomes getable after triggering restore #1419
khushboo-rancher merged 1 commit into harvester:main from
Conversation
LGTM.
Thank you @albinsun for checking and bringing the solution to fix the test_restore_with_new_vm related test cases.
I added the vm_checker.wait_getable call to all related test cases in test_4_vm_backup_restore.py and also tested it along with the new class TestBackupRestoreWithSnapshot from PR #1384.
I then triggered a new test run inside the main Jenkins VM against the raven cluster.
The result: all test_restore_with_new_vm test cases run well, and most of the backup and restore test cases PASS, except for the flaky cases that already failed before for other reasons.
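For context, the fix described above can be sketched as a small polling helper. The names api_client, vm_checker, and wait_getable come from this PR's discussion; the stubbed client below and everything else are hypothetical stand-ins for the real harvester/tests fixtures, shown only to illustrate the pattern.

```python
import time

class FakeVMs:
    """Stub standing in for api_client.vms: the restored VM only becomes
    GET-able a few polls after the restore request is accepted."""
    def __init__(self):
        self.polls_until_created = 2

    def get(self, name):
        if self.polls_until_created > 0:
            self.polls_until_created -= 1
            return 404, {"message": f"{name} not found"}
        return 200, {"metadata": {"name": name}}

def wait_getable(vms, name, tries=10, interval=0):
    """The essence of the fix: poll GET until the restored VM object
    exists, instead of assuming it is available right after restore."""
    for _ in range(tries):
        code, data = vms.get(name)
        if code == 200:
            return True, (code, data)
        time.sleep(interval)
    return False, (code, data)

vms = FakeVMs()
ok, (code, data) = wait_getable(vms, "restored-vm")
print(ok, code)  # True 200 once the simulated restore completes
```

Any check that implicitly starts the VM (such as waiting for its IP addresses) would then run only after this wait succeeds.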

> I think the root cause would be: we introduced a new state while restoring the VM, so we should check that the restoring process is completed rather than that the VM is available.
Which issue(s) this PR fixes:

What this PR does / why we need it:

Add vm_checker.wait_getable between api_client.backups.restore and vm_checker.wait_ip_addresses.
vm_checker.wait_ip_addresses calls api_client.vms.start in its callback chain and will assert FAIL if the target VM does not exist.
api_client.backups.restore needs some time to communicate with the backup-target to restore the VM, so depending on the environment, the VM instance may not be created immediately.

Special notes for your reviewer:
This is a quick fix.
IMO, coders tend to expect a function named wait_XXX to do some polling GET, which is error-prone. To enhance, we may combine the creation and the wait into a single create_AAA_and_wait_XXX.

Additional documentation or context
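The suggested create_AAA_and_wait_XXX enhancement could look like the sketch below: bundle the action with its polling wait so callers cannot forget the wait step. None of these function names exist in harvester/tests; this is a hypothetical illustration of the naming idea, with toy callables in place of the real API client.

```python
import time

def restore_and_wait_getable(restore_fn, get_fn, tries=10, interval=0):
    """Hypothetical combined helper: trigger the restore, then poll GET
    until the restored resource exists, returning the final status."""
    code, data = restore_fn()
    assert code in (200, 201), data  # restore request accepted
    for _ in range(tries):
        code, data = get_fn()
        if code == 200:  # resource object now exists
            return True, (code, data)
        time.sleep(interval)
    return False, (code, data)

# Toy usage with stubbed callables simulating a delayed VM creation:
state = {"polls": 2}

def fake_restore():
    return 201, {}

def fake_get():
    if state["polls"] > 0:
        state["polls"] -= 1
        return 404, {}
    return 200, {"metadata": {"name": "restored-vm"}}

ok, (code, _) = restore_and_wait_getable(fake_restore, fake_get)
print(ok, code)  # True 200
```

The design trade-off is that a single helper hides the intermediate state, whereas separate calls (as in this quick fix) let each test decide how long to wait.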
Verification

Before (harvester-runtests#61)
After (harvester-runtests#62)