Hi @justinkay, @timmh,
When training a YOLO-based model, the training process displays the precision and validation tables during the periodic evaluation iterations, as expected. However, after training completes and I evaluate the model on the test data, the test set loads successfully but no precision or evaluation metrics are displayed. The process seems to stop prematurely; the last section of the output is as follows:
[04/03 11:41:04 d2.evaluation.evaluator]: Total inference time: 0:23:58.325486 (0.038836 s / iter per device, on 1 devices)
[04/03 11:41:04 d2.evaluation.evaluator]: Total inference pure compute time: 0:16:02 (0.025975 s / iter per device, on 1 devices)
[04/03 11:41:25 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/03 11:41:25 d2.evaluation.coco_evaluation]: Saving results to /home/helia/ALDI/output/yolo/test/1-1/inference/coco_instances_results.json
[04/03 11:41:45 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=13.76s)
creating index...
index created!
[04/03 11:42:00 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
After loading the test data and running the evaluation, I expect to see precision metrics (e.g., mAP, AP50, AP75, etc.) displayed for the test dataset, similar to how they are shown during training validation.
The evaluation process runs, saves results to coco_instances_results.json, and starts evaluating predictions, but it never outputs any precision or validation metrics. The process reaches the "Evaluate annotation type *bbox*" step, but nothing is printed after it (a manual re-check of the saved predictions is sketched after the list below).
- The coco_instances_results.json file is generated successfully in the output directory.
- No explicit errors or exceptions are thrown in the logs; the process simply stops after the last line shown above.
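As a quick sanity check (not the ALDI entry point, just a sketch), the saved predictions can be scored directly with pycocotools. The ground-truth path below is a placeholder for the COCO-format test annotations; the detections path is the file from the log above. If this prints the usual AP/AP50/AP75 table, the predictions themselves are fine and the problem is likely in how the evaluator's results are collected or logged.

```python
# Sketch: re-run COCO bbox evaluation on the saved predictions.
# Assumption: the test-set annotations are available as a COCO-format JSON
# (gt_path below is a placeholder).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

gt_path = "/path/to/test_annotations.json"  # placeholder: COCO-format ground truth
dt_path = "/home/helia/ALDI/output/yolo/test/1-1/inference/coco_instances_results.json"

coco_gt = COCO(gt_path)              # load ground-truth annotations
coco_dt = coco_gt.loadRes(dt_path)   # load the saved detections
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()                 # match detections to ground truth
coco_eval.accumulate()               # accumulate precision/recall curves
coco_eval.summarize()                # print mAP, AP50, AP75, etc.
```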
Could you please investigate why the evaluation metrics are not being displayed during the test phase? It would be helpful to know if this is a bug, a configuration issue, or if additional steps are required to output the metrics.
These are the log files:
log_yolov5_test.zip