gooseworks-ai / composites-competitor-content-tracker

Competitor Content Tracker

Monitor competitor content across blogs, LinkedIn, and Twitter/X on a recurring basis. Surfaces new posts, trending topics, and content gaps you can own. Chains blog-feed-monitor, linkedin-profile-post-scraper, and twitter-mention-tracker. Use when you want a weekly digest of what your competitors published.

agent: codex · model: gpt-5.5 · snapshot: python312-uv · eval: programmatic · 9 steps · v1.0.0

Deploy Competitor Content Tracker to your jetty.io account

One-click installs this runbook into a collection on your Jetty account. You can run it from the Spot dashboard, schedule it, or pipe inputs in via the API.

The shape of the run

9 steps · start to finish.

  1. Step 1

    Environment Setup

    Create the output directory, validate required inputs, and persist the resolved configuration.

    mkdir -p /app/results
    python3 - <<'PY'
    import json, pathlib
    config = {
      "client_name": "<client_name>",
      "competitors": ["<competitor_name>"],
      "blog_urls": ["<competitor_blog_url>"],
      "linkedin_profiles": [],
      "twitter_handles": [],
      "days_back": 7,
      "keywords": [],
      "output_mode": "highlights"
    }
    if not config["competitors"] or not config["blog_urls"]:
        raise SystemExit("competitors and blog_urls are required")
    pathlib.Path("/app/results/config.json").write_text(json.dumps(config, indent=2))
    PY
    
  2. Step 2

    Scrape Blog Content

    Run `blog-feed-monitor` for each competitor blog URL. Collect post title, publish date, URL, excerpt, and any keyword matches.
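`blog-feed-monitor` is its own runbook; as a rough sketch of the filtering it performs, here is a standard-library version that parses an RSS feed, keeps posts within the look-back window, and tags keyword matches. The feed is inlined as a sample string, and `parse_feed` and its field names are illustrative, not the tool's real interface.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

# Inline sample standing in for a fetched competitor feed.
SAMPLE_RSS = """<rss><channel>
  <item><title>Pricing teardown</title>
    <link>https://example.com/pricing</link>
    <pubDate>Mon, 06 Jan 2025 09:00:00 GMT</pubDate>
    <description>How we price our API tiers.</description></item>
</channel></rss>"""

def parse_feed(xml_text, days_back=7, keywords=()):
    """Return posts published within `days_back` days, tagging keyword hits."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days_back)
    posts = []
    for item in ET.fromstring(xml_text).iter("item"):
        published = parsedate_to_datetime(item.findtext("pubDate"))
        if published < cutoff:
            continue  # outside the look-back window
        text = (item.findtext("title", "") + " "
                + item.findtext("description", "")).lower()
        posts.append({
            "title": item.findtext("title"),
            "url": item.findtext("link"),
            "published": published.isoformat(),
            "excerpt": item.findtext("description"),
            "keyword_matches": [k for k in keywords if k.lower() in text],
        })
    return posts
```

In the real run, one such pass happens per configured blog URL and the results are merged into the channel output for Step 5.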

  3. Step 3

    Scrape LinkedIn Posts

    When LinkedIn profiles are provided, run `linkedin-profile-post-scraper` and collect post preview, date, reactions, comments, and URL. Skip this step with a clear note in `raw_findings.json` when no profiles are configured.

  4. Step 4

    Scrape Twitter/X

    When Twitter/X handles are provided, run `twitter-mention-tracker` for each handle. Collect tweet text, date, likes, reposts, and URL.

  5. Step 5

    Analyze and Synthesize

    Normalize the channel outputs into `/app/results/raw_findings.json`. For each competitor, identify new blog posts, the top LinkedIn post, the top tweet, recurring themes, and content format patterns. Across competitors, identify shared trending topics, coverage gaps, and topics the client already owns.
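A minimal sketch of the per-competitor synthesis, assuming each normalized post record carries engagement counts and extracted themes (the field names are illustrative):

```python
from collections import Counter

def synthesize(findings):
    """Per competitor: top post by engagement and its theme set.
    Across competitors: themes that recur for more than one competitor."""
    report = {}
    for competitor, posts in findings.items():
        report[competitor] = {
            "top_post": max(
                posts,
                key=lambda p: p.get("likes", 0) + p.get("reactions", 0),
                default=None,
            ),
            "themes": sorted({t for p in posts for t in p.get("themes", [])}),
        }
    counts = Counter(t for entry in report.values() for t in entry["themes"])
    shared = sorted(t for t, n in counts.items() if n > 1)
    return report, shared
```

The shared-theme list is what feeds the trending-topics and content-gap sections of the digest.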

  6. Step 6

    Evaluate Outputs

    Validate that every required file exists, that the digest contains a summary, competitor sections, content gap analysis, and recommended actions, and that `raw_findings.json` is valid JSON.

  7. Step 7

    Iterate on Errors (max 3 rounds)

    If validation fails, inspect `/app/results/validation_report.json`, fix the missing or malformed output, and rerun Step 6. Stop after max 3 rounds and report unresolved failures in `/app/results/summary.md`.
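The bounded retry loop can be sketched as a plain driver function; `validate` and `repair` stand in for Step 6 and whatever fix is applied in response to the report:

```python
def iterate_on_errors(validate, repair, max_rounds=3):
    """Re-run validation after each repair attempt, stopping after
    max_rounds so a persistent failure cannot loop forever."""
    report = validate()
    rounds = 0
    while not report["passed"] and rounds < max_rounds:
        repair(report["failures"])  # attempt a fix based on the report
        rounds += 1
        report = validate()
    return report, rounds
```

If the final report still fails, the unresolved failures are what get surfaced in `/app/results/summary.md`.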

  8. Step 8

    Scheduling

    For recurring use, run weekly. Mondays at 8am local time are recommended.
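If you schedule outside the Jetty dashboard, the Monday-at-8am cadence maps to this crontab line. The `jetty run …` command shown is hypothetical; substitute whatever invocation your setup actually uses.

```
# Every Monday at 08:00 local time (minute hour day-of-month month day-of-week)
0 8 * * 1  jetty run competitor-content-tracker
```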

  9. Step 9

    Final Checklist

    echo "=== FINAL OUTPUT VERIFICATION ==="
    RESULTS_DIR="/app/results"
    for f in \
      "$RESULTS_DIR/summary.md" \
      "$RESULTS_DIR/raw_findings.json" \
      "$RESULTS_DIR/config.json" \
      "$RESULTS_DIR/validation_report.json"
    do
      if [ -s "$f" ]; then echo "OK: $f"; else echo "MISSING: $f"; fi
    done