catchpoint/WebPageTest.crawler
Learn more about WebPageTest API Integrations in our docs

WebPageTest Crawler

The WebPageTest Crawler crawls through a website to fetch URLs and then runs WebPageTest tests on them. A crawl depth (level) and a URL limit can be configured.


Requires Node and npm.

1. Installing Packages

Once you have cloned the project, run npm install to install dependencies. Requires Node v22.13.0 minimum.

npm install

2. Updating config values

There are 3 main config values:

  1. wpt_api_key - your WebPageTest API key. Check the API documentation here.
  2. level - integer value; specifies the maximum depth the crawler should crawl.
  3. limit - integer value; specifies the maximum number of URLs to be tested. Note: crawling stops when either of these limits is reached.
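Putting those together, the config might look something like this (a sketch only — the names come from the list above, the values are illustrative, and the actual config file's name and shape may differ in the repo):

```javascript
// Hypothetical config shape -- keys from the README, values illustrative:
const config = {
  wpt_api_key: "YOUR_API_KEY", // WebPageTest API key
  level: 2,                    // maximum crawl depth
  limit: 50,                   // maximum number of URLs to test
};
```

Crawling stops as soon as either level or limit is hit, so whichever bound is tighter wins.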

3. Adding an initial URLs txt file

You can add your initial set of URLs to the startingUrls.txt file, separating them with commas.
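For example, a startingUrls.txt seeding the crawl with two URLs (hypothetical addresses) would look like:

```
https://example.com,https://example.com/blog
```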


4. Let's fire it up

npm run build && node build/index.js -k [YOUR_API_KEY] -f ./startingUrls.txt

Booyah! Once the crawl testing is complete, you'll have a report.csv file containing performance details for the crawled URLs.
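The level/limit behavior described above can be sketched as a bounded breadth-first crawl (a minimal illustration of the idea, not the repo's actual implementation — `getLinks` here is a stand-in for real link extraction):

```javascript
// Breadth-first crawl bounded by depth (`level`) and URL count (`limit`).
// `getLinks(url)` is a hypothetical link extractor supplied by the caller.
function crawl(startUrls, level, limit, getLinks) {
  const seen = new Set(startUrls);
  let frontier = [...startUrls];
  const toTest = [];
  for (let depth = 0; depth <= level && frontier.length > 0; depth++) {
    const next = [];
    for (const url of frontier) {
      if (toTest.length >= limit) return toTest; // URL limit reached
      toTest.push(url); // here each URL would be submitted to WebPageTest
      for (const link of getLinks(url)) {
        if (!seen.has(link)) {
          seen.add(link);
          next.push(link);
        }
      }
    }
    frontier = next; // descend one level deeper
  }
  return toTest;
}

// Example with a stubbed link graph:
const links = {
  "https://example.com": ["https://example.com/a", "https://example.com/b"],
  "https://example.com/a": ["https://example.com/c"],
};
const result = crawl(["https://example.com"], 1, 10, (u) => links[u] || []);
// With level = 1, the root and its direct links are tested, but not /c.
```

Whichever bound trips first — depth or URL count — ends the crawl, matching the note in the config section.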
