gooseworks-ai / composites-competitive-pricing-intel

Competitive Pricing Intel

Monitor competitor pricing pages using live page capture, historical Web Archive snapshots, and public pricing-change research. Track plan changes, tier restructuring, new pricing models, and feature gating.

agent: codex · model: gpt-5.5 · snapshot: python312-uv · eval: programmatic · 8 steps · v1.0.0

Deploy Competitive Pricing Intel to your jetty.io account

One-click installs this runbook into a collection on your Jetty account. You can run it from the Spot dashboard, schedule it, or pipe inputs in via the API.

The shape of the run

8 steps · start to finish.

  1. Step 1

    Environment Setup

    1. Create /app/results if it does not exist.
    2. Validate that product_pricing_url, competitors, pricing_model, comparison_dimensions, and run_mode are present.
    3. Initialize current-pricing-snapshots.json, historical-pricing-findings.json, summary.md, and validation_report.json paths.
    4. Record the run timestamp in UTC and the list of products being tracked.
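The runbook doesn't publish its setup code; a minimal Python sketch of the four actions above could look like this (the input names `product_pricing_url`, `competitors`, `pricing_model`, `comparison_dimensions`, and `run_mode` and the output filenames come from the step itself — everything else is an assumption):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Required input names, taken verbatim from the step description.
REQUIRED_INPUTS = [
    "product_pricing_url", "competitors", "pricing_model",
    "comparison_dimensions", "run_mode",
]

def setup_environment(inputs: dict, results_dir: str = "/app/results") -> dict:
    """Create the results directory, validate inputs, and record run metadata."""
    missing = [k for k in REQUIRED_INPUTS if not inputs.get(k)]
    if missing:
        raise ValueError(f"Missing required inputs: {missing}")

    results = Path(results_dir)
    results.mkdir(parents=True, exist_ok=True)  # create /app/results if absent

    # Initialize the four output paths named in the step.
    paths = {
        name: str(results / name)
        for name in (
            "current-pricing-snapshots.json",
            "historical-pricing-findings.json",
            "summary.md",
            "validation_report.json",
        )
    }
    return {
        "run_timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "tracked_products": inputs["competitors"],
        "output_paths": paths,
    }
```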
  2. Step 2

    Intake

    Collect and normalize the competitive set: the product pricing URL, each competitor's name and pricing page, and the comparison dimensions to evaluate.
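A hedged sketch of intake normalization, assuming competitors arrive as dicts with `name` and `pricing_url` keys (that shape is an assumption, not documented by the runbook):

```python
from urllib.parse import urlparse

def normalize_competitors(raw: list) -> list:
    """Trim whitespace, dedupe by name (case-insensitive), and repair URLs."""
    seen = set()
    out = []
    for entry in raw:
        name = (entry.get("name") or "").strip()
        url = (entry.get("pricing_url") or "").strip()
        if not name or not url:
            continue  # drop incomplete entries
        if not urlparse(url).scheme:
            url = "https://" + url  # assume https when the scheme is missing
        key = name.lower()
        if key in seen:
            continue
        seen.add(key)
        out.append({"name": name, "pricing_url": url})
    return out
```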

  3. Step 3

    Current Pricing Capture

    For each product and competitor pricing URL, fetch the current page and extract plan, price, packaging, and feature-gating details.
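Real pricing pages vary widely in markup, so any extractor is page-specific. As an illustration only, here is a regex-based sketch that pulls plan names and prices from a hypothetical card layout (`<div class="plan">` with `<h3>` and a price `<span>` are invented for this example):

```python
import re

# Hypothetical pricing-card markup; adapt the pattern per competitor page.
PLAN_RE = re.compile(
    r'<div class="plan">\s*<h3>(?P<name>[^<]+)</h3>\s*'
    r'<span class="price">\$(?P<price>[\d.]+)'
)

def extract_plans(html: str) -> list:
    """Return [{plan, price_usd}, ...] from a fetched pricing page."""
    return [
        {"plan": m.group("name").strip(), "price_usd": float(m.group("price"))}
        for m in PLAN_RE.finditer(html)
    ]
```

Packaging and feature-gating details usually need per-site selectors rather than one shared pattern.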

  4. Step 4

    Historical and Announcement Research

    Search Web Archive snapshots and public announcements for pricing changes. Review the most recent 2-3 archived snapshots per pricing URL when available, then search for pricing-change posts, launch announcements, and community reactions.
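The "most recent 2-3 snapshots" lookup maps naturally onto the Internet Archive's CDX API; a sketch of building the query and parsing its JSON response (list-of-rows, first row is the header) follows. The endpoint and parameters are the public CDX interface, but the runbook itself doesn't say which API it uses:

```python
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query_url(pricing_url: str, limit: int = 3) -> str:
    """Build a Wayback Machine CDX query for the newest snapshots of a URL."""
    params = {
        "url": pricing_url,
        "output": "json",
        "fl": "timestamp,original",
        "filter": "statuscode:200",
        "limit": f"-{limit}",  # negative limit asks for the newest N captures
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

def parse_cdx_rows(rows: list) -> list:
    """CDX JSON output is a list of rows; the first row holds field names."""
    if not rows:
        return []
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]
```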

  5. Step 5

    Pricing Analysis

    Normalize all products into a comparable matrix and evaluate pricing position for the buyer scenario described by the user.
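One common way to make plans comparable is to normalize everything to per-seat monthly USD before lining up the comparison dimensions. The runbook doesn't specify its normalization, so the field names (`vendor`, `billing`, `seats`) below are assumptions:

```python
def monthly_equivalent(price: float, billing: str, seats: int = 1) -> float:
    """Normalize a listed price to per-seat monthly USD."""
    per_period = price / 12 if billing == "annual" else price
    return round(per_period / max(seats, 1), 2)

def build_matrix(plans: list, dimensions: list) -> list:
    """Flatten vendor plans into rows sharing the same comparison columns."""
    rows = []
    for p in plans:
        row = {
            "vendor": p["vendor"],
            "plan": p["plan"],
            "monthly_per_seat_usd": monthly_equivalent(
                p["price"], p.get("billing", "monthly"), p.get("seats", 1)
            ),
        }
        for dim in dimensions:
            row[dim] = p.get(dim, "unknown")  # mark unpriced dimensions
        rows.append(row)
    return rows
```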

  6. Step 6

    Write the Pricing Report

    Write `/app/results/pricing-comparison-[YYYY-MM-DD].md` using this structure:
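The dated filename and path come from the step; the Markdown layout below is only an illustrative table rendering, not the runbook's actual report structure:

```python
from datetime import date
from pathlib import Path

def write_pricing_report(matrix: list, results_dir: str = "/app/results") -> str:
    """Render the comparison matrix as a dated Markdown report; return its path."""
    path = Path(results_dir) / f"pricing-comparison-{date.today():%Y-%m-%d}.md"
    lines = [
        "# Pricing Comparison", "",
        "| Vendor | Plan | Monthly/seat (USD) |",
        "| --- | --- | --- |",
    ]
    for row in matrix:
        lines.append(
            f"| {row['vendor']} | {row['plan']} | {row['monthly_per_seat_usd']} |"
        )
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return str(path)
```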

  7. Step 7

    Validate Outputs

    Run these checks and write `/app/results/validation_report.json`:
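The specific checks aren't listed on this page; a minimal sketch, assuming the check is "each required output file exists and is non-empty," could be:

```python
import json
from pathlib import Path

def validate_outputs(results_dir: str = "/app/results") -> dict:
    """Check required outputs exist and are non-empty; write validation_report.json."""
    required = [
        "current-pricing-snapshots.json",
        "historical-pricing-findings.json",
        "summary.md",
    ]
    base = Path(results_dir)
    checks = {
        name: base.joinpath(name).is_file()
        and base.joinpath(name).stat().st_size > 0
        for name in required
    }
    report = {"checks": checks, "overall_passed": all(checks.values())}
    base.joinpath("validation_report.json").write_text(json.dumps(report, indent=2))
    return report
```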

  8. Step 8

    Iterate on Errors (max 3 rounds)

    If validation fails, inspect `validation_report.json`, fix the missing or weak section, and rerun validation. Stop after at most three rounds, leaving `overall_passed=false` if required evidence or output files are still missing.
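The fix-and-revalidate loop above can be sketched generically; `produce` and `validate` are hypothetical callables standing in for the agent's fix step and the Step 7 checks:

```python
def run_with_retries(produce, validate, max_rounds: int = 3) -> dict:
    """Run fix/validate cycles, stopping on success or after max_rounds."""
    report = {"overall_passed": False}
    for round_no in range(1, max_rounds + 1):
        produce(report)          # fix whatever the last report flagged
        report = validate()      # e.g. the Step 7 validate_outputs() checks
        report["rounds_used"] = round_no
        if report["overall_passed"]:
            break
    return report                # may still carry overall_passed=False
```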