[WIP] Add a common utility function which makes debugging test failures #161
base: master
Conversation
Add a common utility function which makes debugging test failures easier, and update one of the test files as an example. This makes debugging failures easier since we now include stdout, stderr and the exit code for each bash command whose output we save to a variable. Previously we didn't do that, so if an assertion failed you had no clue what was going on.
Hey, we're doing something wrong here from the start.
Normal BATS commands, when invoked via the built-in run, have their output captured and formatted nicely.
First, can we remove this hack and use the native BATS run?
-RESULT=$(run_command_and_log_output st2 run examples.python_runner_print_python_version -j)
+run st2 run examples.python_runner_print_python_version -j

We can access the result of that command via the built-in $output variable populated by BATS, so RESULT=$() is not needed. What I'm saying is that this is all already implemented in BATS.
Next, because we're invoking run again later:
run eval "echo '$RESULT' | jq -r '.result.stdout'"
$output most probably gets re-populated with the new data, which is why we're seeing different output when the assert is called.
Instead of that, can we create a helper BATS function around jq to avoid eval echo | jq hacks and utilize BATS in a native/documented way?
Additionally, avoid where possible running commands or mutating output after initial built-in Bats run.
As we're integrating a new framework, it's important to start doing it right and thoughtfully; otherwise hacks and bad practices will hit us back in the future.
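A minimal plain-bash sketch of such a helper (the name `assert_output_jq` and its behavior are hypothetical here; in a real suite it would live in a helpers file loaded by the .bats tests, and it assumes `$output` was populated by a prior built-in run):

```shell
# Hypothetical helper: apply a jq filter to the $output already populated by
# BATS' built-in `run`, and compare against an expected value, without
# invoking `run` again (which would clobber $output).
assert_output_jq() {
  local filter="$1" expected="$2"
  local actual
  actual="$(printf '%s' "$output" | jq -r "$filter")" || {
    echo "jq filter '$filter' failed on: $output" >&2
    return 1
  }
  if [ "$actual" != "$expected" ]; then
    echo "expected '$expected', got '$actual'" >&2
    return 1
  fi
}
```

Usage in a test would then be a single built-in run followed by assertions, e.g. `run st2 run examples.python_runner_print_python_version -j` and then `assert_output_jq '.result.stdout' '<expected output>'` (the `.result.stdout` path is taken from the diff above; the expected value is illustrative).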
assert_output --partial "Python 3."

- RESULT=$(st2 run examples.python_runner_print_python_version -j)
+ RESULT=$(run_command_and_log_output st2 run examples.python_runner_print_python_version -j)
- RESULT=$(run_command_and_log_output st2 run examples.python_runner_print_python_version -j)
+ run st2 run examples.python_runner_print_python_version -j
run eval "echo '$SETUP_VENV_RESULTS' | jq -r '.result.result'"
assert_success

assert_output "Successfully set up virtualenv for the following packs: examples"
For example, in this scenario avoiding another run and not mutating output helps preserve the original command results, so we can assert against the original output. The jq filtering step here is an overcomplication:
- run eval "echo '$SETUP_VENV_RESULTS' | jq -r '.result.result'"
- assert_success
- assert_output "Successfully set up virtualenv for the following packs: examples"
+ assert_output --partial "Successfully set up virtualenv for the following packs: examples"
That change is fine, but in addition to checking for a specific string, we should also check that the output is valid JSON.
One of the commands in one of the BATS tests returned a JSON array concatenated to a JSON dict:
[
"example_element_1",
"example_element_2"
]
{
"example_key_1": "example_value_1",
"example_key_2": "example_value_2"
}
Both of those objects individually are valid JSON, but simply concatenating them together produces invalid JSON.
So we do still need some way to validate that the produced output is valid JSON. This might be good to do in a custom assert_ function, and could even simply wrap jq.
But however we check for JSON validity, this simplification is a good one.
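One hedged way to implement such a check, wrapping jq as suggested (the function name is hypothetical). Note that plain `jq .` accepts a stream of concatenated JSON documents without complaint, which is exactly the failure mode described above, so this sketch slurps the input with -s and requires exactly one document:

```shell
# Hypothetical assert_valid_json: succeed only if the argument is exactly one
# well-formed JSON document. Plain `jq .` would happily accept a stream of
# concatenated documents, so slurp them into an array (-s) and require exactly
# one element; -e makes jq's exit code reflect the boolean result.
assert_valid_json() {
  printf '%s' "$1" | jq -se 'length == 1' > /dev/null || {
    echo "not a single valid JSON document: $1" >&2
    return 1
  }
}
```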
- SETUP_VENV_RESULTS=$(st2 run packs.setup_virtualenv packs=examples python3=true -j)
+ SETUP_VENV_RESULTS=$(run_command_and_log_output st2 run packs.setup_virtualenv packs=examples python3=true -j)
run eval "echo '$SETUP_VENV_RESULTS' | jq -r '.result.result'"
assert_success
In this specific case, if we really need jq, this run | jq + assert construct should be replaced in favor of a custom assert_output_jq function that helps us avoid overriding the original test results/outputs.
I just read this comment after coming to the same conclusion. 👍
If we do the assert on the original CMD, bats will show everything nicely. Here is an example (screenshot omitted). Similar happens if the original CMD fails (screenshot omitted). I mean, all of this functionality is already present in BATS.
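The point above can be sketched in plain bash: BATS' built-in run captures both the combined output and the exit status of a command into variables that assertions can then inspect. The variable names `$output` and `$status` are real BATS conventions; the function body below is a simplified illustration, not the actual BATS implementation:

```shell
# Simplified illustration of what BATS' built-in `run` provides: it captures
# the command's combined stdout/stderr into $output and its exit code into
# $status, and itself always succeeds so assertions can run afterwards.
run() {
  output="$("$@" 2>&1)"
  status=$?
  return 0  # `run` never fails; assert_success etc. inspect $status/$output
}
```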
This pull request establishes a better pattern for handling tests where we need to store shell command output in a variable before processing it / asserting on it.
Without this change it's very hard or impossible to debug test failures that result from a shell command whose output is stored in a variable. The only way to debug those would be to re-run the tests with additional log / print statements or similar (and in a lot of scenarios that's very slow and expensive, meaning waiting 30 minutes or more).
This pull request adds a new helper function which prints the command output (stdout, stderr, exit code) to stderr. This is a similar pattern to the one we had in robot, and it makes debugging much easier and faster since you immediately have access to the command output on failure.
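The actual helper isn't shown in this thread; a plain-bash sketch of the behavior described above (log stdout, stderr and exit code to stderr, and pass stdout through so callers can still capture it with VAR=$(...)) might look like this:

```shell
# Sketch of the described helper (the real one in the PR may differ): run a
# command, log its stdout, stderr and exit code to stderr for debugging, then
# echo stdout on stdout so the caller can still capture it in a variable.
run_command_and_log_output() {
  local stderr_file stdout rc
  stderr_file="$(mktemp)"
  stdout="$("$@" 2> "$stderr_file")"
  rc=$?
  {
    echo "command: $*"
    echo "exit code: $rc"
    echo "stdout: $stdout"
    echo "stderr: $(cat "$stderr_file")"
  } >&2
  rm -f "$stderr_file"
  printf '%s' "$stdout"
  return "$rc"
}
```

A caller such as `RESULT=$(run_command_and_log_output st2 run ... -j)` then still gets stdout in `RESULT`, while the full diagnostic log goes to stderr and shows up when an assertion fails.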
Before:
Tests failed. Why? Who knows. There's no way to tell without access to that command's output, and at the very least that means re-running the tests, which is a huge waste of time.
With this change: the command's stdout, stderr and exit code are printed to stderr, so the cause of the failure is immediately visible.
Resolves #160.