Scaling a Live Camera Network to 14,400+ Cameras with AI
How we used Claude to build scrapers, generate SEO content, and scale a webcam streaming platform from a few hundred cameras to 14,400+ live cameras (24,500+ unique pages) in under two weeks.
The Challenge
Port of Cams started as a simple project — a handful of webcams from Hawaii and Alaska streamed via HLS. The streaming infrastructure (MediaMTX + Caddy) was solid, but scaling the camera catalog was a manual, tedious process.
Each camera needed an RTSP source, an HLS endpoint, a dedicated page with metadata, weather data, map coordinates, and SEO-optimized content. Adding one camera could take 30 minutes. The goal was to go from dozens of cameras to thousands — covering every scenic highway cam, ski resort, and DOT traffic feed in the US — without hiring a team.
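Every source ultimately has to produce the same per-camera record before a page can be generated. Here is a minimal sketch of that record in TypeScript; the field names are illustrative assumptions, not the platform's actual schema:

```ts
// Sketch of the normalized record each scraper emits.
// Field names are illustrative, not Port of Cams' actual schema.
export interface CameraRecord {
  id: string;           // stable slug, e.g. "wsdot-i90-snoqualmie" (hypothetical)
  name: string;
  source: string;       // which scraper produced it: "faa", "wsdot", "tripcheck", ...
  rtspUrl?: string;     // upstream RTSP feed, when one exists
  hlsUrl: string;       // public HLS endpoint served by MediaMTX
  imageUrl?: string;    // still-image cams (e.g. Oregon TripCheck JPGs)
  lat: number;          // map coordinates for the Leaflet widget
  lon: number;
  state: string;        // drives location-specific SEO copy
  description: string;
}
```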
The real bottleneck wasn't infrastructure. It was the sheer volume of data wrangling: scraping camera feeds from 10+ government and third-party APIs, each with different formats, auth methods, and data structures.
The Solution
Claude became the core engineering partner for the entire scaling effort. Here's what we built together:
10 Custom Scrapers in 3 Days
Each government DOT system has a completely different API. FAA WeatherCams uses a REST API with Referer header auth. WSDOT publishes open JSON. Oregon DOT serves direct JPGs from TripCheck. Utah, Idaho, Nevada, and 9 other states use a mix of Iteris GeoJSON, ArcGIS endpoints, and 511 DataTables platforms.
Claude analyzed each API's structure, wrote the scraper logic, handled pagination and rate limiting, and generated the Astro page templates — all in a single session per source. What would have been weeks of reverse-engineering became hours.
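As one concrete example of the pattern, here is a hedged sketch of what a WSDOT-style scraper looks like: fetch the open JSON, then normalize it into the shared record. The endpoint path and response field names are assumptions based on WSDOT's public traveler API, and `example-cams.com` is a placeholder, not the real HLS host:

```ts
import type { CameraRecord } from "./types"; // the record sketched earlier

// Assumed WSDOT endpoint; the production scraper's URL and fields may differ.
const WSDOT_URL =
  "https://wsdot.wa.gov/Traffic/api/HighwayCameras/HighwayCamerasREST.svc/GetCamerasAsJson";

export async function scrapeWsdot(accessCode: string): Promise<CameraRecord[]> {
  const res = await fetch(`${WSDOT_URL}?AccessCode=${accessCode}`);
  if (!res.ok) throw new Error(`WSDOT fetch failed: ${res.status}`);
  const cams: any[] = await res.json();

  // Normalize the vendor-specific payload into the shared CameraRecord shape.
  return cams.map((c) => ({
    id: `wsdot-${c.CameraID}`,
    name: c.Title,
    source: "wsdot",
    hlsUrl: `https://example-cams.com/hls/wsdot-${c.CameraID}/index.m3u8`, // placeholder host
    imageUrl: c.ImageURL,
    lat: c.CameraLocation?.Latitude,
    lon: c.CameraLocation?.Longitude,
    state: "WA",
    description: c.Description ?? c.Title,
  }));
}
```

Each of the other nine scrapers follows the same fetch-then-normalize shape; only the auth and parsing differ per source.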
Automated Page Generation
For each camera, Claude generated a complete Astro page with:
- HLS.js video player with auto-recovery and stall detection
- Weather widget integration
- Interactive Leaflet map with precise coordinates
- SEO metadata, JSON-LD structured data, and OpenGraph tags (sketched below)
- Amazon Associates affiliate links (context-aware by location)
- Multi-view grid layouts for DOT sites with multiple angles
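The structured-data item is the one most tied to the SEO results, so here is a minimal sketch of it: a camera record turned into schema.org VideoObject JSON-LD with a live-broadcast marker. This follows schema.org's published vocabulary; the markup the site actually ships may differ.

```ts
import type { CameraRecord } from "./types";

// Build the JSON-LD payload embedded in each generated page's <head>.
// Uses schema.org's VideoObject plus a BroadcastEvent to flag a live stream.
export function cameraJsonLd(cam: CameraRecord): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "VideoObject",
    name: cam.name,
    description: cam.description,
    contentUrl: cam.hlsUrl,
    thumbnailUrl: cam.imageUrl,
    publication: {
      "@type": "BroadcastEvent",
      isLiveBroadcast: true,
    },
    contentLocation: {
      "@type": "Place",
      geo: { "@type": "GeoCoordinates", latitude: cam.lat, longitude: cam.lon },
    },
  });
}
```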
Infrastructure Optimization
Claude diagnosed and fixed HLS streaming issues: tuned MediaMTX to 4-second segments with 4x queue depth, forced TCP transport to eliminate packet loss over VPN, and implemented HLS.js error recovery that automatically handles frozen feeds across all 14,400+ player instances.
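The player-side half of that recovery logic is the easiest to illustrate. Below is a minimal sketch using HLS.js's documented error events; the stall-check interval and buffering option are illustrative assumptions, not the production values:

```ts
import Hls from "hls.js";

// Sketch of the player recovery pattern: fatal-error handling plus a stall
// watchdog. Thresholds here are assumptions, not the production settings.
export function attachResilientPlayer(video: HTMLVideoElement, src: string): Hls | null {
  if (!Hls.isSupported()) {
    video.src = src; // Safari plays HLS natively
    return null;
  }

  const hls = new Hls({ liveSyncDurationCount: 3 }); // assumed buffering setting
  hls.loadSource(src);
  hls.attachMedia(video);

  // Standard HLS.js fatal-error recovery: retry network errors,
  // recover media errors, tear down anything else.
  hls.on(Hls.Events.ERROR, (_event, data) => {
    if (!data.fatal) return;
    if (data.type === Hls.ErrorTypes.NETWORK_ERROR) hls.startLoad();
    else if (data.type === Hls.ErrorTypes.MEDIA_ERROR) hls.recoverMediaError();
    else hls.destroy();
  });

  // Stall watchdog: if playback time stops advancing while the video
  // claims to be playing, nudge the media pipeline back to life.
  let lastTime = 0;
  setInterval(() => {
    if (!video.paused && video.currentTime === lastTime) {
      hls.recoverMediaError();
    }
    lastTime = video.currentTime;
  }, 8000); // hypothetical 8-second check interval

  return hls;
}
```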
The Results
The platform went from 413 cameras and 617 pages to over 14,400 live cameras and 24,500+ pages in under two weeks. Key outcomes:
- 10 scrapers covering FAA, WSDOT, Oregon DOT, 12 state 511 systems, Caltrans, Windy API, Alaska DOT, and ski resorts
- 24,500+ SEO-optimized pages with structured data, each targeting location-specific camera queries
- Zero-downtime scaling — the HLS infrastructure handled the 40x increase without architectural changes
- Monetization ready — every page includes Google AdSense, Amazon Associates affiliate links, and Meta Pixel tracking
- 11K+ additional cameras researched in the pipeline for future expansion
The entire scaling effort was done by a solo developer working with Claude. No additional engineers, no outsourcing, no content writers.
Claude's Role
Claude wrote all 10 scrapers, generated 24,500+ page templates, debugged HLS streaming issues, optimized MediaMTX configuration, and handled SEO markup generation. It served as the full engineering team — architect, backend developer, frontend developer, and DevOps — across a two-week sprint.
Tech Stack
MediaMTX and Caddy for streaming; Astro, HLS.js, and Leaflet on the frontend; Claude for engineering.
Want results like these?
Let's talk about what AI can do for your business.