Processes
The pipelines and scripts that turn an idea into a published episode.
Validate script against the brain
validate-script
Cheap pre-flight check. Runs the style-guide rules (banned phrases, replacements, fact-watch, narration style, asset guards) against a *.script.json before we burn TTS / image / video credits.
- Load scripts/lib/style-guide.json + scripts/lib/pronunciation-base.json
- Walk every segment.text in the script
- Surface ERRORS (block) and WARNINGS (continue with --no-strict)
- For talkover segments, additionally run findBeatByBeatNarration
- scripts/validate-script.ts
- scripts/lib/style-guide.json
- scripts/lib/pronunciation-base.json
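The banned-phrase pass above can be sketched as a plain walk over every segment's text. The shapes below (`bannedPhrases` as a string array, `Segment`/`Finding` types) are assumptions for illustration, not the actual schema of style-guide.json:

```typescript
// Minimal sketch of the banned-phrase rule; real validate-script.ts also
// runs replacements, fact-watch, narration-style, and asset-guard checks.
interface Segment { id: string; text: string; }
interface Finding { level: "ERROR" | "WARNING"; segment: string; message: string; }

function checkBannedPhrases(
  segments: Segment[],
  bannedPhrases: string[],
): Finding[] {
  const findings: Finding[] = [];
  for (const seg of segments) {
    const lower = seg.text.toLowerCase();
    for (const phrase of bannedPhrases) {
      if (lower.includes(phrase.toLowerCase())) {
        findings.push({
          level: "ERROR",
          segment: seg.id,
          message: `banned phrase "${phrase}"`,
        });
      }
    }
  }
  return findings;
}
```

ERROR findings block the run outright; WARNING-level rules would let the run continue under --no-strict.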
Build a full episode (TTS + segments + mux)
build-racheteo
End-to-end pipeline that turns a *.script.json into an mp4 + caption file. Caches per segment so re-runs only redo what changed.
- Generate per-segment Polly TTS (cached by hash of text + voice + pronunciation map)
- Pick keyword imagery (00_pinned_* preferred per assetGuards) + B-roll
- Render per-segment video clips (Ken Burns + memes + sfx + sting)
- Whisper-align voice tracks for caption ASS file
- Concat segments, mux audio + captions, upload to S3
- Write output/manifest-<slug>.json
- scripts/build-racheteo.ts
- scripts/lib/whisper-align.ts
- scripts/lib/caption-helper.ts
- scripts/lib/stock-fetcher.ts
- output/state-<slug>.json
- output/manifest-<slug>.json
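The per-segment TTS caching can be sketched as a content hash over everything that affects the rendered audio. Which fields build-racheteo.ts actually hashes is an assumption here; the point is that an unchanged segment maps to the same key and is skipped on re-runs:

```typescript
import { createHash } from "crypto";

// Sketch of the TTS cache key: any change to the text, the Polly voice,
// or the pronunciation map produces a new hash, so only changed segments
// re-synthesize. NUL separators keep field boundaries unambiguous.
function ttsCacheKey(
  text: string,
  voice: string,
  pronunciationMap: Record<string, string>,
): string {
  return createHash("sha256")
    .update(text)
    .update("\0")
    .update(voice)
    .update("\0")
    .update(JSON.stringify(pronunciationMap))
    .digest("hex")
    .slice(0, 16);
}
```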
Bake the fight-footage talkover (Ep3 06-fight-talkover)
bake-fight-talkover
Special segment: real footage played at 0.6x with a Pedro/Lupe voiceover, layered with a drill bed and gunshot accents. Decoupled from build-racheteo so we can iterate on the talkover alone.
- Generate Polly chunks per CHUNKS array (cached)
- Concat voice chunks → talkover.mp3
- Mix talkover + drill-bed-32s.mp3 (looped @ -22dB) + gunshot accents @ -15dB
- Lay the mix over the slowed-down Facebook footage
- Write segments-mixed-<slug>/06-fight-talkover.mp4 for build-racheteo to pick up
- scripts/bake-fight-talkover.ts
- assets/sfx/drill-bed-32s.mp3
- assets/sfx/gunshot.mp3
- assets/footage/gallo-vs-sammy-fb.mp4
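The mix step maps naturally onto an ffmpeg filter graph. This builder is a sketch under the levels stated above (bed looped at -22 dB, accents at -15 dB) and an assumed input order; it is not the exact filter bake-fight-talkover.ts emits:

```typescript
// Sketch: assemble a filter_complex for voice + looped bed + accents.
// Assumed input order: 0 = talkover.mp3, 1 = drill-bed-32s.mp3,
// 2 = gunshot accents. aloop=-1 repeats the 32s bed indefinitely and
// amix duration=first trims everything to the voice track's length.
function talkoverMixFilter(bedDb = -22, accentDb = -15): string {
  return [
    `[1:a]aloop=loop=-1:size=2e9,volume=${bedDb}dB[bed]`,
    `[2:a]volume=${accentDb}dB[shots]`,
    `[0:a][bed][shots]amix=inputs=3:duration=first[mix]`,
  ].join(";");
}
```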
Regenerate a pinned keyword photo via gpt-image-1
regen-keyword-pinned
When a stock fetch returns a watermarked / wrong-person image, generate a clean AI replacement and pin it so it wins the alphabetical sort in pickKeywordImages().
- Run with --slug <kw> --name <suffix> --prompt '<scene>'
- gpt-image-1 renders a 1024x1024 png
- Save to assets/keywords/<slug>/00_pinned_<name>.png
- Move the rejected stock into assets/keywords/_rejected/
- Add an assetGuards entry to style-guide.json so it never recurs
- scripts/regen-keyword-pinned.ts
- assets/keywords/_rejected/
- scripts/lib/style-guide.json
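The pinning trick relies on nothing more than lexicographic ordering: a `00_pinned_` prefix sorts before any typical stock filename, so pickKeywordImages() encounters it first. A minimal illustration (the helper name `pinnedPath` is hypothetical):

```typescript
// "0" sorts before every ASCII letter, so the pinned file always wins
// a plain sort against stock filenames like "pexels-..." or "istock_...".
function pinnedPath(slug: string, name: string): string {
  return `assets/keywords/${slug}/00_pinned_${name}.png`;
}

const files = ["pexels-fight-123.jpg", "00_pinned_clean.png", "istock_9.jpg"];
const first = [...files].sort()[0]; // "00_pinned_clean.png"
```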
Live-stream archiver (lambda + launchctl)
youtube-archiver
Continuously records the Planeta Alofoke YouTube live stream and uploads chunks to S3. A local launchctl agent triggers it on a cron-like cadence; the cloud lambda is the production version.
- launchctl ~/Library/LaunchAgents/PlanetaAlofoke-YouTubeArchiver.plist runs every N min
- yt-dlp --live-from-start --download-sections '*now-Xm-now'
- Upload mp4 to s3://ai-content-assets/planeta-alofoke/recordings/<date>/<HH-MM>.mp4
- (Stopped manually when the stream is just an animation card, to save spend.)
- scripts/record-live.sh
- ~/Library/LaunchAgents/PlanetaAlofoke-YouTubeArchiver.plist
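The recorder invocation can be parameterized on the capture window. `--live-from-start` and `--download-sections` are real yt-dlp flags, but the wrappers below (`recorderArgs`, `s3Key`) are sketches of what record-live.sh assembles, not its actual code:

```typescript
// Sketch: build the yt-dlp argv for "grab the last N minutes of the live".
// --download-sections '*now-Xm-now' clips only the tail of the stream;
// --live-from-start makes that tail addressable in the first place.
function recorderArgs(url: string, minutes: number, outFile: string): string[] {
  return [
    "--live-from-start",
    "--download-sections", `*now-${minutes}m-now`,
    "-o", outFile,
    url,
  ];
}

// Sketch of the S3 key layout: recordings/<date>/<HH-MM>.mp4 (UTC).
function s3Key(d: Date): string {
  const p = (n: number) => String(n).padStart(2, "0");
  return `planeta-alofoke/recordings/${d.toISOString().slice(0, 10)}/` +
    `${p(d.getUTCHours())}-${p(d.getUTCMinutes())}.mp4`;
}
```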
Build + deploy this dashboard
dashboard-build
Static-exports the Next.js app, then syncs dashboard/out/ to S3 behind the public bucket.
- npm run dashboard:data (rebuild dashboard/data/*.json from source)
- npm run dashboard:build (next build → dashboard/out/)
- npm run dashboard:deploy (aws s3 sync dashboard/out/ s3://.../dashboard/ --delete)
- scripts/build-dashboard-data.ts
- dashboard/
- dashboard/out/
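The deploy step's one non-obvious flag is `--delete`, which removes objects that no longer exist locally so stale hashed chunks don't linger in the bucket. A sketch of the argv (the bucket path is a parameter because the real one is elided above; `deploySyncArgs` is a hypothetical helper):

```typescript
// Sketch of the aws-cli argv behind npm run dashboard:deploy.
// --delete mirrors dashboard/out/ exactly: files removed by a rebuild
// (e.g. old Next.js hashed chunks) are also removed from the bucket.
function deploySyncArgs(bucket: string): string[] {
  return ["s3", "sync", "dashboard/out/", `${bucket}/dashboard/`, "--delete"];
}
```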