Processes

The pipelines and scripts that turn an idea into a published episode.

Validate script against the brain

validate-script

Cheap pre-flight check: runs the style-guide rules (banned phrases, replacements, fact-watch, narration style, asset guards) against a *.script.json BEFORE we burn TTS, image, or video credits.

Steps
  1. Load scripts/lib/style-guide.json + scripts/lib/pronunciation-base.json
  2. Walk every segment.text in the script
  3. Surface ERRORS (always block) and WARNINGS (allowed through with --no-strict)
  4. For talkover segments, additionally run findBeatByBeatNarration
Files
  • scripts/validate-script.ts
  • scripts/lib/style-guide.json
  • scripts/lib/pronunciation-base.json
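A minimal sketch of the banned-phrase pass in step 2–3 (the rule shape and helper names here are assumptions; the real rules and severities live in scripts/lib/style-guide.json):

```typescript
// Hypothetical rule shape; the real schema lives in scripts/lib/style-guide.json.
type Rule = { phrase: string; severity: "error" | "warning"; hint?: string };
type Finding = { segment: number; phrase: string; severity: Rule["severity"] };

// Walk every segment.text and collect findings.
function lintSegments(texts: string[], rules: Rule[]): Finding[] {
  const findings: Finding[] = [];
  texts.forEach((text, segment) => {
    for (const rule of rules) {
      if (text.toLowerCase().includes(rule.phrase.toLowerCase())) {
        findings.push({ segment, phrase: rule.phrase, severity: rule.severity });
      }
    }
  });
  return findings;
}

// ERRORS always block; WARNINGS block only in strict mode (no --no-strict).
function shouldBlock(findings: Finding[], noStrict: boolean): boolean {
  return findings.some(
    f => f.severity === "error" || (!noStrict && f.severity === "warning"),
  );
}
```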

Build a full episode (TTS + segments + mux)

build-racheteo

End-to-end pipeline that turns a *.script.json into an mp4 + caption file. Caches per segment so re-runs only redo what changed.

Steps
  1. Generate per-segment Polly TTS (cached by hash of text + voice + pronunciation map)
  2. Pick keyword imagery (00_pinned_* preferred per assetGuards) + B-roll
  3. Render per-segment video clips (Ken Burns + memes + sfx + sting)
  4. Whisper-align voice tracks for caption ASS file
  5. Concat segments, mux audio + captions, upload to S3
  6. Write output/manifest-<slug>.json
Files
  • scripts/build-racheteo.ts
  • scripts/lib/whisper-align.ts
  • scripts/lib/caption-helper.ts
  • scripts/lib/stock-fetcher.ts
  • output/state-<slug>.json
  • output/manifest-<slug>.json
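The per-segment TTS cache in step 1 keys on a hash of everything that changes the audio; one way to sketch it (function name and field set are illustrative, the real hashing lives in scripts/build-racheteo.ts):

```typescript
import { createHash } from "node:crypto";

// Stable cache key for a segment's Polly audio: if the text, voice, or
// pronunciation map change, the key changes and the segment is re-synthesized;
// otherwise a re-run hits the cache.
function ttsCacheKey(
  text: string,
  voice: string,
  pronunciation: Record<string, string>,
): string {
  // Sort map entries so insertion order never perturbs the hash.
  const pron = Object.entries(pronunciation).sort(([a], [b]) => a.localeCompare(b));
  return createHash("sha256")
    .update(JSON.stringify({ text, voice, pron }))
    .digest("hex")
    .slice(0, 16);
}
```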

Bake the fight-footage talkover (Ep3 06-fight-talkover)

bake-fight-talkover

Special segment: real footage played at 0.6x with a Pedro/Lupe voiceover layered with a drill bed and gunshot accents. Decoupled from build-racheteo so we can iterate on the talkover alone.

Steps
  1. Generate Polly chunks per CHUNKS array (cached)
  2. Concat voice chunks → talkover.mp3
  3. Mix talkover + drill-bed-32s.mp3 (looped @ -22dB) + gunshot accents @ -15dB
  4. Lay the mix over the slowed-down Facebook footage
  5. Write segments-mixed-<slug>/06-fight-talkover.mp4 for build-racheteo to pick up
Files
  • scripts/bake-fight-talkover.ts
  • assets/sfx/drill-bed-32s.mp3
  • assets/sfx/gunshot.mp3
  • assets/footage/gallo-vs-sammy-fb.mp4
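The mix in step 3 could be expressed as an ffmpeg filtergraph along these lines (the stream labels, aloop-based looping, and the builder itself are assumptions; the actual invocation lives in scripts/bake-fight-talkover.ts):

```typescript
// Build the filtergraph for the talkover mix: loop the drill bed at -22dB,
// drop the gunshot accents at -15dB, and mix both under the full-level voice.
// Input 0 = talkover.mp3, input 1 = drill-bed-32s.mp3, input 2 = gunshot.mp3.
function talkoverFilter(bedDb = -22, shotDb = -15): string {
  return [
    `[1:a]aloop=loop=-1:size=2e+09,volume=${bedDb}dB[bed]`, // looped drill bed
    `[2:a]volume=${shotDb}dB[shots]`,                       // gunshot accents
    `[0:a][bed][shots]amix=inputs=3:duration=first[mix]`,   // voice defines length
  ].join(";");
}
```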

Regenerate a pinned keyword photo via gpt-image-1

regen-keyword-pinned

When a stock fetch returns a watermarked / wrong-person image, generate a clean AI replacement and pin it so it wins the alphabetical sort in pickKeywordImages().

Steps
  1. Run with --slug <kw> --name <suffix> --prompt '<scene>'
  2. gpt-image-1 renders a 1024x1024 png
  3. Save to assets/keywords/<slug>/00_pinned_<name>.png
  4. Move the rejected stock into assets/keywords/_rejected/
  5. Add an assetGuards entry to style-guide.json so it never recurs
Files
  • scripts/regen-keyword-pinned.ts
  • assets/keywords/_rejected/
  • scripts/lib/style-guide.json
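Why the 00_ prefix wins: pickKeywordImages() sorts filenames alphabetically, and "0" sorts before any letter, so a 00_pinned_* name always lands first. A sketch of that ordering (the helper here is illustrative, not the real implementation):

```typescript
// Plain lexicographic sort: "00_pinned_*" sorts before stock names like
// "pexels-*.jpg", so the pinned AI replacement is picked first.
function orderKeywordImages(filenames: string[]): string[] {
  return [...filenames].sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));
}
```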

Live-stream archiver (lambda + launchctl)

youtube-archiver

Continuously records the Planeta Alofoke YouTube live stream and uploads chunks to S3. A local launchctl agent triggers it on a cron-like cadence; the cloud Lambda is the production version.

Steps
  1. The launchctl agent (~/Library/LaunchAgents/PlanetaAlofoke-YouTubeArchiver.plist) fires every N min
  2. yt-dlp --live-from-start --download-sections '*now-Xm-now'
  3. Upload mp4 to s3://ai-content-assets/planeta-alofoke/recordings/<date>/<HH-MM>.mp4
  4. (Stopped manually when the stream is just an animation card, to save spend.)
Files
  • scripts/record-live.sh
  • ~/Library/LaunchAgents/PlanetaAlofoke-YouTubeArchiver.plist
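The per-chunk S3 key in step 3 can be derived from the capture time like this (UTC and the exact date format are assumptions; the real logic lives in scripts/record-live.sh):

```typescript
// Build the recording's S3 object key, matching the
// planeta-alofoke/recordings/<date>/<HH-MM>.mp4 layout from step 3.
function recordingKey(when: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const date =
    `${when.getUTCFullYear()}-${pad(when.getUTCMonth() + 1)}-${pad(when.getUTCDate())}`;
  return `planeta-alofoke/recordings/${date}/` +
    `${pad(when.getUTCHours())}-${pad(when.getUTCMinutes())}.mp4`;
}
```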

Build + deploy this dashboard

dashboard-build

Static-export the Next.js app, then sync dashboard/out/ to the public S3 bucket.

Steps
  1. npm run dashboard:data (rebuild dashboard/data/*.json from source)
  2. npm run dashboard:build (next build → dashboard/out/)
  3. npm run dashboard:deploy (aws s3 sync dashboard/out/ s3://.../dashboard/ --delete)
Files
  • scripts/build-dashboard-data.ts
  • dashboard/
  • dashboard/out/
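The three steps chain in a fixed order (data, then build, then deploy); a dry-run sketch of that sequence (command names come from the steps above, the runner itself is hypothetical):

```typescript
// Ordered deploy pipeline for the dashboard; each entry mirrors a step above.
const pipeline: ReadonlyArray<[string, string[]]> = [
  ["npm", ["run", "dashboard:data"]],   // rebuild dashboard/data/*.json
  ["npm", ["run", "dashboard:build"]],  // next build -> dashboard/out/
  ["npm", ["run", "dashboard:deploy"]], // aws s3 sync dashboard/out/ ... --delete
];

// Dry-run renderer: the commands that would run, in order.
function renderPipeline(p: typeof pipeline): string[] {
  return p.map(([cmd, args]) => [cmd, ...args].join(" "));
}
```

The --delete flag on the final sync matters: it removes stale files from the bucket so the deployed dashboard exactly mirrors dashboard/out/.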