KOL Content Monitor
Track what key opinion leaders in a target market are posting on LinkedIn and Twitter/X, then identify trending narratives, high-engagement posts, early signals, and content actions. This runbook converts the upstream `kol-content-monitor` skill into a Jetty-friendly workflow.
9 steps · start to finish.
Step 1: Environment Setup
Create the output directory, load the monitor config, and verify the tools and credentials needed for the selected sources.
```shell
mkdir -p /app/results
test -f "${CONFIG_PATH:-kol-monitor.json}" || { echo "ERROR: missing KOL monitor config"; exit 1; }
command -v python3 >/dev/null || { echo "ERROR: python3 is required"; exit 1; }
```

Validate the config before scraping. It must include at least one KOL with a LinkedIn URL or Twitter/X handle, `days_back`, `min_reactions`, and an output path. If LinkedIn profiles are present and the upstream scraper requires Apify, verify `APIFY_API_TOKEN` is set without printing the secret.

Step 2: Intake and Config Normalization
Read `config_path` and normalize each KOL entry into a consistent schema.
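A minimal normalization sketch follows. The field names (`kols`, `linkedin_url`, `twitter_handle`, `tags`) are assumptions about the config layout; match them to your actual schema.

```python
import json

def normalize_kol(raw):
    """Coerce one raw config entry into a consistent shape.
    Field names here are illustrative, not a fixed contract."""
    return {
        "name": raw.get("name", "").strip(),
        "linkedin_url": (raw.get("linkedin_url") or "").strip() or None,
        "twitter_handle": (raw.get("twitter_handle") or "").strip().lstrip("@") or None,
        "tags": [t.lower() for t in raw.get("tags", [])],
    }

def load_config(path):
    """Load the monitor config and drop entries with no usable source
    rather than failing the whole run."""
    with open(path) as f:
        cfg = json.load(f)
    kols = [normalize_kol(k) for k in cfg.get("kols", [])]
    cfg["kols"] = [k for k in kols if k["linkedin_url"] or k["twitter_handle"]]
    return cfg
```

Dropping source-less entries here keeps every later step free of per-entry guards.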
Step 3: Scrape LinkedIn Posts
Run `linkedin-profile-post-scraper` for all configured LinkedIn profiles.
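One way to orchestrate this step, assuming the scraper is exposed as a command emitting JSON; the command name's flags (`--profile-url`, `--days-back`) are placeholders for the tool's real interface. The `retain` helper applies the config thresholds (`days_back`, `min_reactions`) from Step 1.

```python
import json
import subprocess
from datetime import datetime, timedelta, timezone

def scrape_linkedin(cfg):
    """Invoke the scraper once per configured profile and tag each post
    with its KOL and source. Flags are illustrative placeholders."""
    posts = []
    for kol in cfg["kols"]:
        if not kol.get("linkedin_url"):
            continue
        out = subprocess.run(
            ["linkedin-profile-post-scraper",
             "--profile-url", kol["linkedin_url"],
             "--days-back", str(cfg.get("days_back", 7))],
            capture_output=True, text=True, check=True).stdout
        for post in json.loads(out):
            post.update(kol=kol["name"], source="linkedin")
            posts.append(post)
    return posts

def retain(posts, days_back, min_reactions, now=None):
    """Keep only posts inside the lookback window that meet the
    minimum engagement bar from the config."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days_back)
    return [p for p in posts
            if datetime.fromisoformat(p["posted_at"]) >= cutoff
            and p.get("reactions", 0) >= min_reactions]
```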
Step 4: Scrape Twitter/X Posts
For each configured handle, run `twitter-mention-tracker` over the same date window.
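A sketch of the Twitter/X pass, mirroring Step 3. The `--handle`/`--since`/`--until` flags are assumptions about how `twitter-mention-tracker` is invoked; the date-window helper guarantees both scrapers cover the same range.

```python
import json
import subprocess
from datetime import date, timedelta

def date_window(days_back, today=None):
    """Return (since, until) ISO dates covering the lookback window,
    so both scrapers use an identical range."""
    today = today or date.today()
    return ((today - timedelta(days=days_back)).isoformat(), today.isoformat())

def scrape_twitter(cfg):
    """Placeholder invocation of `twitter-mention-tracker`; adjust the
    command and flags to the tool's real interface."""
    since, until = date_window(cfg.get("days_back", 7))
    posts = []
    for kol in cfg["kols"]:
        if not kol.get("twitter_handle"):
            continue
        out = subprocess.run(
            ["twitter-mention-tracker", "--handle", kol["twitter_handle"],
             "--since", since, "--until", until],
            capture_output=True, text=True, check=True).stdout
        for post in json.loads(out):
            post.update(kol=kol["name"], source="twitter")
            posts.append(post)
    return posts
```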
Step 5: Topic Clustering
Group all retained posts by topic or theme.
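A first-pass clustering sketch using keyword buckets; the seed topics are invented examples, and a real run might substitute embeddings or an LLM pass, but keyword matching is cheap and auditable.

```python
from collections import defaultdict

# Illustrative seed topics -- tune these to the target market.
TOPIC_KEYWORDS = {
    "ai agents": ["agent", "autonomous", "copilot"],
    "pricing": ["pricing", "monetization", "seat"],
}

def cluster_by_topic(posts):
    """Assign each post to the first topic whose keyword appears in its
    text; everything unmatched lands in 'other'."""
    clusters = defaultdict(list)
    for p in posts:
        text = p.get("text", "").lower()
        topic = next((t for t, kws in TOPIC_KEYWORDS.items()
                      if any(k in text for k in kws)), "other")
        clusters[topic].append(p)
    return dict(clusters)
```

Keeping the unmatched posts in an explicit `other` bucket makes coverage gaps in the keyword map visible in the report.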
Step 6: Generate the Monitor Report
Write the findings to `/app/results/kol-monitor-report.md`.
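A minimal report writer, assuming the clustered posts from Step 5: one section per topic, posts sorted by engagement. The layout is a sketch to extend with trending narratives, early signals, and content actions.

```python
def render_report(clusters, path="/app/results/kol-monitor-report.md"):
    """Render clusters as markdown and write them to `path`.
    Returns the rendered text for validation."""
    lines = ["# KOL Monitor Report", ""]
    for topic, posts in sorted(clusters.items()):
        lines.append(f"## {topic} ({len(posts)} posts)")
        # Highest-engagement posts first within each topic.
        for p in sorted(posts, key=lambda p: p.get("reactions", 0), reverse=True):
            lines.append(f"- {p.get('kol', '?')}: {p.get('text', '')[:120]} "
                         f"({p.get('reactions', 0)} reactions)")
        lines.append("")
    content = "\n".join(lines)
    with open(path, "w") as f:
        f.write(content)
    return content
```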
Step 7: Build Trigger-Based Content Calendar
When `include_calendar=true`, write `/app/results/content-calendar.md` with one entry for each strong "Ride the Wave" opportunity.
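One way to turn opportunities into dated entries; the one-entry-per-weekday spacing and the entry fields (`topic`, `angle`, `trigger`) are illustrative assumptions, not part of the upstream skill.

```python
from datetime import date, timedelta

def calendar_entries(opportunities, start=None):
    """One calendar entry per 'Ride the Wave' opportunity, scheduled
    one per weekday starting from `start` (weekends skipped)."""
    d = start or date.today()
    entries = []
    for opp in opportunities:
        while d.weekday() >= 5:  # 5=Saturday, 6=Sunday
            d += timedelta(days=1)
        entries.append({
            "date": d.isoformat(),
            "topic": opp["topic"],
            "angle": opp.get("angle", ""),
            "trigger": opp.get("trigger", ""),
        })
        d += timedelta(days=1)
    return entries
```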
Step 8: Iterate on Errors (max 3 rounds)
If scraping, clustering, or report validation fails, perform at most 3 rounds of targeted fixes.
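The retry discipline can be factored into one wrapper: run a step, apply a targeted fix on failure, and give up after three rounds. The `fix_fn` hook is an assumed shape for whatever remediation each step needs.

```python
def run_with_retries(step_fn, fix_fn, max_rounds=3):
    """Run `step_fn`; on failure, call `fix_fn(error, attempt)` and
    retry, raising once `max_rounds` attempts are exhausted."""
    last_err = None
    for attempt in range(1, max_rounds + 1):
        try:
            return step_fn()
        except Exception as err:
            last_err = err
            fix_fn(err, attempt)
    raise RuntimeError(f"step still failing after {max_rounds} rounds") from last_err
```

Capping the rounds keeps a persistently broken scraper from looping forever while still surfacing the last underlying error.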
Step 9: Scheduling
For a weekly Friday afternoon monitor, schedule the equivalent command after validating the config.
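For cron, a Friday-afternoon line might look like the example below (paths and the 15:00 run time are assumptions). The helper computes the next run instant for environments using a one-shot scheduler such as `at` instead of cron.

```python
from datetime import datetime, timedelta

# Example crontab entry: 15:00 every Friday; adjust paths to your setup.
CRON_LINE = "0 15 * * 5  cd /app && python3 run_kol_monitor.py"

def next_friday_afternoon(now):
    """Return the next Friday 15:00 strictly after `now`."""
    days_ahead = (4 - now.weekday()) % 7  # Friday == weekday 4
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=15, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=7)
    return candidate
```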