gooseworks-ai / capabilities-pain-language-engagers

Pain-Language Engagers

Find warm LinkedIn leads by searching for pain-language posts, extracting the people who authored, reacted to, or commented on those posts, enriching their profiles, and filtering them against the user's ICP. The runbook starts with intake so the search terms reflect how operators actually describe their pain.

agent: codex · model: gpt-5.5 · snapshot: python312-uv · eval: programmatic · 8 steps · v1.0.0

Deploy Pain-Language Engagers to your jetty.io account

One-click installs this runbook into a collection on your Jetty account. You can run it from the Spot dashboard, schedule it, or pipe inputs in via the API.

The shape of the run

8 steps · start to finish.

  1. Step 1

    Environment Setup

    Verify the source pipeline and credentials before generating keywords or spending Apify credits.

    set -euo pipefail
    RESULTS_DIR="${RESULTS_DIR:-/app/results}"
    mkdir -p "$RESULTS_DIR"
    
    if [ -z "${APIFY_API_TOKEN:-}" ]; then
      echo "ERROR: APIFY_API_TOKEN is not set" >&2
      exit 1
    fi
    
    if [ ! -f skills/pain-language-engagers/scripts/pain_language_engagers.py ]; then
      echo "ERROR: source pipeline script not found at skills/pain-language-engagers/scripts/pain_language_engagers.py" >&2
      exit 1
    fi
    
    python3 --version
    python3 skills/pain-language-engagers/scripts/pain_language_engagers.py --help >/tmp/pain_language_engagers_help.txt || true
    

    Record setup status in /app/results/validation_report.json. If a required dependency or secret is missing, stop and write /app/results/summary.md with the blocker.
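
A minimal sketch of that status record. The field names are illustrative assumptions (the runbook only requires that setup status lands in `validation_report.json`), and the directory falls back to `/tmp/results` here so the sketch runs locally; the runbook itself writes to `/app/results`.

```shell
# Record setup status; field names are illustrative assumptions.
RESULTS_DIR="${RESULTS_DIR:-/tmp/results}"   # runbook path: /app/results
mkdir -p "$RESULTS_DIR"
cat > "$RESULTS_DIR/validation_report.json" <<EOF
{
  "stage": "environment_setup",
  "status": "pass",
  "python_version": "$(python3 --version 2>&1 | awk '{print $2}')",
  "apify_token_present": $([ -n "${APIFY_API_TOKEN:-}" ] && echo true || echo false),
  "checked_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
```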

  2. Step 2

    Intake and Pain Context

    Ask the user for product, ICP, and LinkedIn signal-source context before running any scraping. Present the questions as a numbered list and tell the user to answer what is relevant and skip what is not.

  3. Step 3

    Generate Pain-Language Keywords

    Generate roughly 15 to 25 LinkedIn boolean-search keywords from the intake. Organize them into staffing/resource pain, operational friction, margin/growth pain, and process complaints. Every keyword should read like something a frustrated operator would type or say.
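
One way to hold the generated keywords is a small JSON file grouped by the four buckets. The phrases below are placeholders; real keywords come from the intake answers, and the file path is an assumption.

```shell
# Illustrative keyword file; real phrases come from the intake, not this list.
mkdir -p /tmp/results
cat > /tmp/results/keywords.json <<'EOF'
{
  "staffing_resource_pain": ["can't hire fast enough", "team is stretched thin"],
  "operational_friction": ["drowning in spreadsheets", "copy-pasting between tools"],
  "margin_growth_pain": ["margins keep shrinking", "pipeline has stalled"],
  "process_complaints": ["why is onboarding this painful", "approvals take forever"]
}
EOF
```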

  4. Step 4

    Run Test Pipeline

    Always run a bounded test before a full run. Test mode limits keyword and company-page volume so the user can validate relevance without unnecessary spend.
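
The bound can be as simple as truncating the keyword list before invoking the pipeline. The cap of 3 and the file paths are assumptions for illustration; the pipeline script's own test mode may handle this internally.

```shell
# Cap test-run volume by truncating the keyword list (cap and paths are
# illustrative placeholders).
printf '%s\n' \
  "can't hire fast enough" \
  "drowning in spreadsheets" \
  "margins keep shrinking" \
  "approvals take forever" > /tmp/all_keywords.txt
head -n 3 /tmp/all_keywords.txt > /tmp/test_keywords.txt
```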

  5. Step 5

    Review and Refine

    Present the test results to the user and ask whether to adjust keywords, ICP terms, vendor exclusions, company pages, geography, or date range. Common fixes are listed below.

  6. Step 6

    Run Full Pipeline

    Run the full pipeline only after test results are relevant and the user approves. If `run_full_pipeline=false`, stop after writing the summary and validation report, noting that no full CSV was requested.
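
A sketch of that gate. Reading `run_full_pipeline` from an environment variable, and the local `/tmp/results` path, are assumptions; the runbook writes to `/app/results`.

```shell
# Gate the full run on run_full_pipeline; env-var plumbing is an assumption.
mkdir -p /tmp/results    # runbook path: /app/results
RUN_FULL_PIPELINE="${RUN_FULL_PIPELINE:-false}"
if [ "$RUN_FULL_PIPELINE" != "true" ]; then
  echo "No full CSV was requested (run_full_pipeline=false); stopping after reports." \
    >> /tmp/results/summary.md
fi
```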

  7. Step 7

    Iterate on Errors (max 3 rounds)

    If setup, config validation, the test run, the full run, or output verification fails, inspect the error, apply the smallest targeted fix, and rerun the failed stage. Stop after a maximum of 3 rounds and write the remaining blocker into `/app/results/summary.md` and `/app/results/validation_report.json`.
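
The retry loop can be sketched as below. `run_stage` is a hypothetical stand-in for whichever stage failed, and the marker files are illustrative; the real fix would edit config or code rather than touch a file.

```shell
# Bounded retry sketch; run_stage stands in for the failing stage.
rm -f /tmp/stage_fixed /tmp/blocker.txt
run_stage() { [ -f /tmp/stage_fixed ]; }   # passes once a "fix" has landed
attempt=1
until run_stage; do
  if [ "$attempt" -ge 3 ]; then
    echo "blocker: stage still failing after 3 rounds" > /tmp/blocker.txt
    break
  fi
  touch /tmp/stage_fixed   # placeholder for the smallest targeted fix
  attempt=$((attempt + 1))
done
```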

  8. Step 8

    Write Reports

    Write `/app/results/summary.md` with the run date, client name, approved keyword strategy, config path, test-run observations, full-run status, final lead count, and manual follow-up notes. Write `/app/results/validation_report.json` with stage-level pass/fail results and output file paths.
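
Example skeletons for both reports. Every value below is a placeholder, the leads CSV filename is an assumption (the runbook does not fix it), and `/tmp/results` stands in for the runbook's `/app/results`.

```shell
# Report skeletons; all values are placeholders, not real run output.
mkdir -p /tmp/results    # runbook path: /app/results
cat > /tmp/results/summary.md <<'EOF'
# Pain-Language Engagers run summary
- Run date: 2025-01-15 (placeholder)
- Client: Example Co (placeholder)
- Keyword strategy: approved after one refinement round
- Config: /tmp/results/config.json
- Test run: keywords surfaced relevant operator posts
- Full run: completed
- Final lead count: 0 (placeholder)
- Manual follow-up: review leads missing enrichment fields
EOF
cat > /tmp/results/validation_report.json <<'EOF'
{
  "environment_setup": "pass",
  "config_validation": "pass",
  "test_run": "pass",
  "full_run": "pass",
  "output_verification": "pass",
  "outputs": ["/tmp/results/summary.md", "/tmp/results/leads.csv"]
}
EOF
```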