This app takes URLs and optional metadata from input.csv (one row per URL), runs one or more Lighthouse audits for each URL synchronously, and outputs median (or, optionally, average) scores to output.csv.
You can configure a number of options using the command-line flags described below.
For example:
- The number of times Lighthouse is run for each URL. The default is three.
- Whether to calculate the average or median scores for all the runs. The default is median.
- Which Lighthouse categories to audit. The default is all of them: Performance, Best Practices, PWA, Accessibility, SEO.
- Whether to include results for all individual audits or for Web Vitals.
By default the app only outputs category scores for each page: Performance, PWA,
Best practices, Accessibility and SEO. Lighthouse calculates these single scores
based on multiple individual audit scores. If you prefer, you can output results
for all individual audits by using the -t flag, or Web Vitals with the -w flag.
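For example, assuming the app is run from the `src` directory as described below, the default output and these two flags might be used like this:

```sh
# Category scores only (the default), then all individual audit scores,
# then Web Vitals audits included in the output.
node index.js
node index.js -t
node index.js -w
```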
Node 16.7.0 or above is required (to support `performance.getEntriesByName()`).
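If you're not sure which version you have installed, a quick check:

```sh
# Should print v16.7.0 or later.
node --version
```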
- Clone the code using git: `git clone git@github.com:samdutton/multihouse.git`, or download it as a ZIP file.
- From a terminal window, go to the `multihouse` directory you created and run `npm install` to install the required Node modules.
- Add URLs to be audited (and optional metadata) to input.csv, as described below.
- From a terminal, `cd` to the `src` directory and run `node index.js`, optionally setting the flags described below (a sample session follows this list).
- Progress updates and errors will be logged to the console.
- When all Lighthouse runs are complete, view the results in output.csv.
- Check for errors in error-log.txt.
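Putting the steps above together, a first run might look something like the session below. This is only a sketch: it assumes SSH access to GitHub for the clone, and that input.csv sits alongside index.js in the src directory (the default location).

```sh
# Clone the repository and install its Node dependencies.
git clone git@github.com:samdutton/multihouse.git
cd multihouse
npm install

# Add the URLs to audit to input.csv, then run the app from src.
cd src
node index.js

# Inspect the results and any logged errors.
cat output.csv
cat error-log.txt
```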
Each line in input.csv consists of a site name, a page type and a URL.
For example:
  My site,homepage,https://example.com
See sample-input.csv for an example input file.
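Because the app expects one row per URL, auditing several pages simply means adding one line for each. The rows below are purely illustrative:

  My site,homepage,https://example.com
  My site,search results,https://example.com/search
  My other site,homepage,https://example.org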
Audit results are written to output.csv with one line per URL.
For example:
  My site,homepage,https://example.com,0.50,0.38,0.78,0.87,1
See sample-output.csv for an example output file.
- Lighthouse runtime errors are logged in error-log.txt.
- Any audit that returns a zero score is disregarded, and a warning for the URL and score is logged in error-log.txt.
- Lighthouse results with errors are not included in output data.
-a, --append        Append output to existing data in output file
-c, --categories    Audits to run: one or more comma-separated values,
                    default is:
                    performance,pwa,best-practices,accessibility,seo
-f, --flags         One or more comma-separated Chrome flags without dashes,
                    default is headless
-h, --help          Show help
-i, --input         Input file, default is input.csv
-m, --metadata      Optional column headings to be used as the first row of
                    _output.csv_. See [_sample-output.csv_](src/sample-output.csv) 
                    for defaults.
-o, --output        Output file, default is output.csv
-r, --runs          Number of times Lighthouse is run for each page, 
                    default is 3
-s, --score-method  Method of score averaging over multiple runs, 
                    default is median
-t, --all-audits    Include all individual audit scores in output
-w, --web-vitals    Include Web Vitals audits in output
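As an illustration of how the flags combine (the file names here are hypothetical), you might run:

```sh
# Audit only the Performance and SEO categories, run Lighthouse five times
# per page, and append the results to the existing output file.
node index.js -c performance,seo -r 5 -a

# Read URLs from a custom input file and write results to a custom output file.
node index.js -i my-urls.csv -o my-results.csv
```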
- It's straightforward to log the complete Lighthouse report for each run.
By default only category scores are recorded, which are single, aggregate
scores calculated from individual audit scores. Look for the code
in `index.js` marked `***`.
- The data from `output.csv` can easily be used to automatically update a spreadsheet and produce charts using an application such as Google Sheets.
- See `TODO.md` for work in progress.
Please note that this is not an official Google product.