# LinkedIn Scraper
This runbook finds LinkedIn job postings using the JobSpy Python library and a local `tools/jobspy_scraper.py` wrapper. It turns a requested role, location, company filter, recency window, and output target into a reproducible CSV export with posting metadata and direct URLs.

Seven steps, start to finish.
## Step 1: Environment Setup

Install the runtime dependency and create the output directory.

```shell
mkdir -p /app/results
python3.12 -m pip install -U python-jobspy --break-system-packages
```

Verify the scraper wrapper is available. If it is missing, copy it from the source skill assets before running.
```shell
test -s tools/jobspy_scraper.py || cp skills/linkedin-job-scraper/scripts/jobspy_scraper.py tools/jobspy_scraper.py
test -s tools/jobspy_scraper.py
```

## Step 2: Resolve Search Inputs

Identify the search term, location, result count, recency window, company IDs, description requirement, job type, and remote-only flag from the user request. If a request is underspecified, choose pragmatic defaults and record them in `/app/results/summary.md`.
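The resolved inputs can be sketched as a plain dictionary merged over defaults. The field names and default values below are illustrative assumptions, not a schema the runbook defines:

```python
# Pragmatic defaults for underspecified requests.
# All names and values here are illustrative assumptions.
DEFAULTS = {
    "search_term": "software engineer",
    "location": "United States",
    "results_wanted": 50,
    "hours_old": 72,            # recency window in hours
    "company_ids": None,        # LinkedIn numeric company IDs, if given
    "fetch_description": False,
    "job_type": None,           # e.g. "fulltime", "contract"
    "remote_only": False,
}

def resolve_inputs(request: dict) -> dict:
    """Merge user-supplied values over the defaults, ignoring unset fields."""
    resolved = dict(DEFAULTS)
    resolved.update({k: v for k, v in request.items() if v is not None})
    return resolved
```

Whatever defaults you choose, record them in `/app/results/summary.md` so the run stays reproducible.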
## Step 3: Construct the Scraper Command

Build the command with only the filters that apply. Always write the CSV to `/app/results/linkedin_jobs.csv` unless the user requested a different file under `/app/results`.
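The runbook does not document the wrapper's CLI flags, so the flag names below are assumptions for illustration; check them against the real `tools/jobspy_scraper.py` before use. A sketch of building the argv list with only the filters that apply:

```python
def build_command(inputs: dict, out_csv: str = "/app/results/linkedin_jobs.csv") -> list[str]:
    """Assemble the scraper argv; flag names are hypothetical."""
    cmd = ["python3.12", "tools/jobspy_scraper.py",
           "--search-term", inputs["search_term"],
           "--location", inputs["location"],
           "--results-wanted", str(inputs["results_wanted"]),
           "--output", out_csv]
    # Optional filters are appended only when set.
    if inputs.get("hours_old"):
        cmd += ["--hours-old", str(inputs["hours_old"])]
    if inputs.get("company_ids"):
        cmd += ["--company-ids", ",".join(map(str, inputs["company_ids"]))]
    if inputs.get("fetch_description"):
        cmd.append("--fetch-description")
    if inputs.get("job_type"):
        cmd += ["--job-type", inputs["job_type"]]
    if inputs.get("remote_only"):
        cmd.append("--remote-only")
    return cmd
```

Keeping unset filters out of the command avoids passing empty values to the wrapper.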
## Step 4: Run the Scraper

Execute the command and capture the terminal output for the summary. Example:
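One way to execute and capture the output, guarded so the sketch is safe to run before Step 1 has completed (the flag names are again illustrative assumptions, not the wrapper's documented interface):

```python
import subprocess
from pathlib import Path

# Hypothetical invocation; align the flags with the real wrapper.
cmd = ["python3.12", "tools/jobspy_scraper.py",
       "--search-term", "data engineer",
       "--location", "Berlin",
       "--results-wanted", "25",
       "--output", "/app/results/linkedin_jobs.csv"]

if Path("tools/jobspy_scraper.py").exists():
    proc = subprocess.run(cmd, capture_output=True, text=True)
    log = proc.stdout + proc.stderr   # keep the terminal output for the summary
else:
    log = "wrapper not found; run Step 1 first"
```

Save `log` alongside the CSV so failures in Step 6 can be diagnosed from the captured output.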
## Step 5: Interpret Results

Read the CSV and summarize the outcome in `/app/results/summary.md`:
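A sketch of the summary writer. The column names (`title`, `company`, `job_url`) follow JobSpy's usual CSV output; the layout of the summary itself is an assumption:

```python
import csv

def summarize(csv_path: str) -> str:
    """Render a short markdown summary of the scrape results."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    lines = ["# Scrape Summary", f"- rows: {len(rows)}"]
    for row in rows[:5]:  # preview the first few postings
        lines.append(f"- {row.get('title')} @ {row.get('company')} ({row.get('job_url')})")
    return "\n".join(lines)
```

Write the returned text to `/app/results/summary.md`, together with any defaults chosen in Step 2.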
## Step 6: Iterate on Errors (max 3 rounds)

If installation, scraper execution, or result validation fails, iterate for at most three rounds. After each fix, rerun the failed step and update `/app/results/validation_report.json`.
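The retry policy can be sketched as a small helper; `step` and `fix` are placeholders for the failing step and whatever corrective action applies:

```python
def run_with_retries(step, fix, max_rounds: int = 3):
    """Run `step`; on failure apply `fix` and retry, up to max_rounds attempts."""
    last_error = None
    for attempt in range(1, max_rounds + 1):
        try:
            return attempt, step()
        except Exception as exc:  # runbook-level catch-all
            last_error = exc
            fix(exc)              # e.g. reinstall the dependency, adjust flags
    raise last_error
```

After each attempt, record the round number and outcome in `/app/results/validation_report.json`.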
## Step 7: Write Validation Report

Write `/app/results/validation_report.json` with this shape:
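The runbook cuts off before showing the schema, so the following is a minimal sketch under assumed field names (`status`, `rounds`, `checks`), not the document's actual shape:

```python
import json
from pathlib import Path

def write_report(path: str, status: str, rounds: int, checks: list[dict]) -> None:
    """Write the validation report; the field names are an assumed schema."""
    report = {"status": status, "rounds": rounds, "checks": checks}
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(json.dumps(report, indent=2), encoding="utf-8")
```

For example: `write_report("/app/results/validation_report.json", "pass", 1, [{"name": "csv_exists", "ok": True}])`.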