Example Usage
crawley [flags] url
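A typical invocation combines a few of the flags below. The commands here are a sketch: the flag names come from the list in this section, while the header file format for the '@' prefix (one "Name: value" pair per line) is an assumption.

```shell
# Headers file for the '@' prefix form of -header
# (assumption: one "Name: value" pair per line):
printf 'X-Api-Key: secret\nAccept: application/json\n' > headers.txt

# Crawl two levels deep, scan JS and CSS, wait 500ms between requests;
# guarded so this is a no-op on systems where crawley is not installed:
if command -v crawley >/dev/null 2>&1; then
    crawley -depth 2 -js -css -delay 500ms -header @headers.txt https://example.com
fi
```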

Possible flags with their default values:

-all
    scan all known sources (js/css/...)
-brute
    scan HTML comments
-cookie value
    extra cookies for the request; can be used multiple times, accepts files with the '@' prefix
-css
    scan CSS for URLs
-delay duration
    per-request delay (0 to disable) (default 150ms)
-depth int
    scan depth (set -1 for unlimited)
-dirs string
    policy for non-resource urls: show / hide / only (default "show")
-header value
    extra headers for the request; can be used multiple times, accepts files with the '@' prefix
-headless
    disable pre-flight HEAD requests
-ignore value
    patterns (in URLs) to ignore during the crawl
-js
    scan JS code for endpoints
-proxy-auth string
    credentials for proxy: user:password
-robots string
    policy for robots.txt: ignore / crawl / respect (default "ignore")
-silent
    suppress info and error messages in stderr
-skip-ssl
    skip SSL verification
-subdomains
    support subdomains (e.g. if www.domain.com is found, recurse into it)
-tag value
    tag filter: single tag name or comma-separated list of tag names
-timeout duration
    request timeout (min: 1 second, max: 10 minutes) (default 5s)
-user-agent string