A library for software engineering task evaluation, supporting tasks such as:
- Code Generation
- Code Summarization
- Code Translation
- Clone Detection
- Code Repair
- Code Completion
- Code Search
- Code Classification
- Bug/Defect Prediction
- Bug/Vulnerability Type Prediction
- Fault/Bug Localization
- ...
Thank you for your interest in contributing! This document outlines the process for contributing to our project. Your contributions can make a real difference, and we appreciate every effort you make to help improve this project.
- Identify your target software engineering task
You can either integrate an existing evaluation technique or add a new one.
- Integrate the evaluation method
Ensure that you have a detailed readme that describes how to use the evaluation method.
An example of an evaluation method and appropriate readme can be found here.
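As a starting point, an evaluation method is typically a small module that compares model outputs against references and returns a score. The sketch below shows a minimal exact-match metric for code generation; the function name and signature are illustrative assumptions, not SEVAL's actual interface — consult the linked readme example for the real layout.

```python
# Hypothetical evaluation method sketch. The name `exact_match` and its
# signature are illustrative; SEVAL's actual interface may differ.

def exact_match(predictions, references):
    """Return the fraction of predictions that exactly match their reference.

    Both arguments are equal-length lists of strings. Leading/trailing
    whitespace is ignored when comparing.
    """
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    if not predictions:
        return 0.0
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(predictions)
```

A metric like this would be documented in the method's readme alongside its expected input format and score range.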
- Add a test script for your evaluation
In order to ensure the validity of the evaluation method, we require that you provide a test script as well.
There is a separate test folder that you must add your tests to. We also ask that you provide a 'how-to-test' section in your readme, detailing how to test the evaluation method.
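To illustrate, a test script might look like the following `unittest` sketch. The metric, file name (e.g., `tests/test_exact_match.py`), and folder layout here are assumptions for illustration, not SEVAL's actual structure.

```python
# Hypothetical test script sketch for an evaluation method.
# The metric below stands in for the method under test.
import unittest


def exact_match(predictions, references):
    """Illustrative metric: fraction of exact prediction/reference matches."""
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(predictions)


class TestExactMatch(unittest.TestCase):
    def test_all_match(self):
        self.assertEqual(exact_match(["x = 1"], ["x = 1"]), 1.0)

    def test_partial_match(self):
        self.assertEqual(exact_match(["a", "b"], ["a", "c"]), 0.5)


if __name__ == "__main__":
    unittest.main()
```

The 'how-to-test' section of your readme would then point contributors at this file, e.g. `python -m unittest tests/test_exact_match.py`.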
If you have any questions about SEVAL, please contact Mitchell Huggins at [email protected].