How Our AI Actually Works
Every AI feature on this site is powered by real AWS infrastructure. Explore the architecture, services, and technical flows behind each one.
AI Playground
Describe it. Watch it build.
Tech Stack Advisor
Describe your project. Get your stack.
AI Agent Visualizer
Watch AI think step-by-step.
AI Chat Assistant
Ask anything about creative-it.
Website Remix
Restyle this site with AI.
Live Translation
Read this site in 25+ languages.
Agentic Coding Stats
Live GitHub activity from AI agents.
Architecture
Full AWS service map
Describe it. Watch it build.
Type a plain-English description of any UI component and watch AI generate production-ready HTML + Tailwind CSS in real time. Supports multi-turn conversations to refine and iterate on your component.
How It Works
Your prompt is sent to an API Gateway endpoint backed by a Lambda function. The Lambda calls Amazon Bedrock with Claude, streaming tokens back through a chunked HTTP response. The frontend renders each chunk into a live preview iframe as it arrives.
Technical Flow
User types a component description in the browser
Request hits API Gateway with rate limiting (10/day per IP)
Lambda constructs a system prompt optimized for HTML/Tailwind generation
Amazon Bedrock streams Claude's response token-by-token
Frontend renders each chunk into a sandboxed iframe in real time
Conversation history is maintained client-side for multi-turn refinement
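The browser side of that loop can be sketched in a few lines: a reader drains the chunked response while the accumulated HTML is pushed into the preview. This is a minimal sketch; `streamToPreview` and the callback wiring are illustrative names, not the site's actual code.

```typescript
// Sketch: consume a chunked streaming response and accumulate HTML.
async function streamToPreview(
  response: Response,
  onChunk: (html: string) => void,
): Promise<string> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let html = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    html += decoder.decode(value, { stream: true });
    onChunk(html); // e.g. update the sandboxed iframe's srcdoc
  }
  return html;
}
```

Because the callback fires on every chunk, the preview iframe repaints as tokens arrive rather than waiting for the full component.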
AWS Services
Describe your project. Get your stack.
Describe your project requirements and get AI-powered architecture recommendations with a complete tech stack, reasoning, and a Mermaid architecture diagram. Follow up with questions to refine the recommendations.
How It Works
The Lambda function sends your project description to Claude via Bedrock with a specialized system prompt that instructs the model to analyze requirements, recommend technologies, and generate a Mermaid diagram. Responses stream back with a special marker format for the diagram section.
Technical Flow
User describes their project requirements
Lambda sends the prompt to Bedrock with architecture-focused system instructions
Claude analyzes requirements and generates structured recommendations
Response includes a Mermaid diagram between ---DIAGRAM--- markers
Frontend renders markdown recommendations and loads Mermaid.js for the diagram
Follow-up questions maintain conversation context for deeper analysis
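The marker format in step 4 can be handled with a small splitter. A minimal sketch, assuming the diagram sits between two ---DIAGRAM--- lines; `splitRecommendation` is an illustrative name:

```typescript
// Sketch: separate the markdown recommendations from the Mermaid
// diagram embedded between ---DIAGRAM--- markers.
function splitRecommendation(raw: string): { markdown: string; diagram: string | null } {
  const parts = raw.split("---DIAGRAM---");
  if (parts.length < 3) return { markdown: raw.trim(), diagram: null };
  return {
    markdown: (parts[0] + parts.slice(2).join("")).trim(),
    diagram: parts[1].trim(), // handed to Mermaid.js for rendering
  };
}
```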
AWS Services
Watch AI think step-by-step.
Select a scenario (code review, data analysis, deployment, bug fix) and watch an AI agent break down the problem, plan a solution, execute steps, and verify results — all streamed live to a terminal-style interface.
How It Works
The Lambda receives a scenario ID, constructs a multi-step prompt chain, and calls Bedrock for each agent phase (think → plan → execute → verify → result). Each phase's output is streamed as newline-delimited JSON, with the frontend updating the pipeline visualization in real time.
Technical Flow
User selects a predefined scenario (e.g., 'Code Review')
Lambda receives the scenario and initiates a multi-phase prompt chain
Each phase (Think, Plan, Execute, Verify, Result) calls Bedrock independently
Responses stream as NDJSON with step metadata
Frontend highlights the active pipeline step and appends terminal output
Rate limiting tracks daily usage per IP
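The NDJSON stream in step 4 is straightforward to parse incrementally. A sketch, assuming each line carries `phase` and `text` fields; the actual step metadata shape may differ:

```typescript
type AgentEvent = {
  phase: "think" | "plan" | "execute" | "verify" | "result";
  text: string;
};

// Sketch: parse newline-delimited JSON from an accumulating buffer,
// returning complete events plus any trailing partial line to keep
// for the next chunk.
function parseNdjson(buffer: string): { events: AgentEvent[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // last line may be incomplete
  const events = lines
    .filter((l) => l.trim())
    .map((l) => JSON.parse(l) as AgentEvent);
  return { events, rest };
}
```

The frontend would feed each parsed event into the pipeline visualization and carry `rest` over to the next chunk.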
AWS Services
Ask anything about creative-it.
A floating chat widget that answers questions about creative-it's services, process, team, and capabilities. Maintains session context for natural follow-up conversations with streaming responses.
How It Works
The Lambda uses a Bedrock Knowledge Base backed by an S3 bucket of curated company documents. When a user asks a question, it performs RAG (Retrieval-Augmented Generation) — retrieving relevant chunks from the knowledge base, then generating a grounded answer with Claude. Session IDs enable multi-turn conversations.
Technical Flow
User types a question in the floating chat widget
Request includes a session ID for conversation continuity
Lambda queries the Bedrock Knowledge Base for relevant document chunks
Retrieved context is injected into Claude's prompt (RAG pattern)
Claude generates a grounded answer, streamed back to the widget
Session state persists across messages for follow-up questions
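The context-injection step of the RAG pattern can be illustrated with a toy prompt assembler. This is only a sketch of the pattern; the real prompt wording and the Bedrock Knowledge Base retrieval call are not shown, and `buildRagPrompt` is a hypothetical helper:

```typescript
// Sketch: inject retrieved document chunks ahead of the user's
// question so the model answers from grounded context.
function buildRagPrompt(chunks: string[], question: string): string {
  const context = chunks.map((c, i) => `[${i + 1}] ${c}`).join("\n");
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```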
AWS Services
Restyle this site with AI.
Type a visual theme (e.g., 'retro 80s neon' or 'minimalist monochrome') and AI generates custom CSS that transforms the entire site's look in real time. Reset anytime to return to the original.
How It Works
The Lambda sends your theme description to Claude with a system prompt containing the site's CSS custom properties and design token structure. Claude generates override CSS that targets the existing theme variables. The frontend injects the CSS as a <style> tag, instantly restyling the page.
Technical Flow
User types a theme description (e.g., 'warm earthy tones')
Lambda sends the prompt with the site's CSS variable schema
Claude generates CSS overrides targeting theme custom properties
Response streams with CSS between ---CSS--- markers
Frontend extracts the CSS and injects it as a <style> element
A banner appears with a reset button to restore the original theme
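Extracting the generated CSS from between the markers is a one-liner. A sketch, assuming the CSS sits between two ---CSS--- markers; the frontend would then assign the result to a `<style>` element's textContent:

```typescript
// Sketch: pull the generated override CSS out of the streamed
// response; returns null if the markers never arrived.
function extractThemeCss(raw: string): string | null {
  const match = raw.match(/---CSS---([\s\S]*?)---CSS---/);
  return match ? match[1].trim() : null;
}
```

Resetting the theme then amounts to removing that injected `<style>` element, which restores the original custom-property values.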
AWS Services
Read this site in 25+ languages.
Click any language flag and the entire page is translated in place — headlines, paragraphs, buttons, and all. Translations are context-aware and preserve formatting. Reset to return to English anytime.
How It Works
The frontend collects all translatable text nodes from the DOM, batches them (50 per request), and sends them to a Lambda that calls Claude with translation-specific prompts. Claude returns a JSON array of translated strings, which the frontend applies back to the corresponding DOM elements.
Technical Flow
User clicks a language flag (e.g., German, Japanese)
Frontend traverses the DOM and collects text from translatable elements
Original text is stored in a Map for later reset
Texts are batched (50 per request) and sent to the translation Lambda
Claude translates the batch while preserving formatting and context
Translated strings are applied to DOM elements; a banner shows active language
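The batching in step 4 can be sketched as a generic chunking helper (illustrative, not the site's actual code); keeping the order intact is what lets translated strings map back to the right DOM nodes by index:

```typescript
// Sketch: split collected text nodes into batches of 50 per request,
// preserving order so index i in the reply maps to node i.
function batch<T>(items: T[], size = 50): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```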
AWS Services
Live GitHub activity from AI agents.
Displays real-time GitHub statistics — commits, lines changed, and a 7-day activity chart — from our organization's repositories. An AI-generated narrative summarizes the day's development activity.
How It Works
Two Lambda functions power this feature. The stats Lambda queries the GitHub API for commit and diff data across all org repos, caching results in DynamoDB with TTL. The story Lambda takes the stats and sends them to Claude, which generates a creative narrative about the day's coding activity.
Technical Flow
Page loads and fetches /github-stats from the API
Stats Lambda checks DynamoDB cache (5-minute TTL)
On cache miss, Lambda queries GitHub API for org-wide commit data
Stats are aggregated (24h, 7d) and history points are stored
Frontend renders stats cards and draws a Canvas-based activity chart
Story Lambda sends stats to Claude for a narrative summary
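The 5-minute cache policy can be sketched as two small helpers. DynamoDB's TTL feature expects the expiry as epoch seconds; `ttlFor` and `isFresh` are illustrative names, not the actual Lambda's code:

```typescript
// Sketch: 5-minute cache policy for the GitHub stats Lambda.
const CACHE_SECONDS = 5 * 60;

// Expiry timestamp to store on the DynamoDB item (epoch seconds).
function ttlFor(nowMs: number): number {
  return Math.floor(nowMs / 1000) + CACHE_SECONDS;
}

// Cache hit only if the item exists and has not expired yet.
function isFresh(item: { ttl: number } | undefined, nowMs: number): boolean {
  return !!item && item.ttl > Math.floor(nowMs / 1000);
}
```

On a miss, the Lambda refetches from the GitHub API and writes the aggregated stats back with a fresh `ttl`.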
AWS Services
The Full Stack
All AI features run on a serverless AWS architecture. Here's every service involved, grouped by layer.
AI / ML Layer
Amazon Bedrock: Managed LLM inference with Claude — powers all AI features
Bedrock Knowledge Bases: RAG pipeline for the chat assistant's document retrieval
Compute Layer
AWS Lambda: Serverless functions for every API endpoint — zero idle cost
Amazon API Gateway: REST APIs with throttling, CORS, and custom domain mapping
Data Layer
Amazon DynamoDB: Low-latency caching for GitHub stats and rate limiting
Amazon S3: Document storage for the knowledge base and static assets
Networking & Orchestration
Amazon CloudFront: CDN for the static Astro site and asset delivery
AWS CDK: Infrastructure as code — the entire stack defined in TypeScript
Amazon CloudWatch: Centralized logging, metrics, and alerting across all Lambdas
AWS IAM: Fine-grained permissions between services
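As a rough illustration of what defining the stack in TypeScript looks like in practice, here is a minimal CDK sketch of one Lambda-backed endpoint with permission to call Bedrock. The construct names, asset path, and throttling limits are hypothetical, not the site's actual stack:

```typescript
import { Stack, StackProps, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigw from "aws-cdk-lib/aws-apigateway";
import * as iam from "aws-cdk-lib/aws-iam";

// Illustrative stack only — not the site's real infrastructure.
export class AiFeatureStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const fn = new lambda.Function(this, "GenerateFn", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/generate"),
      timeout: Duration.seconds(60),
    });

    // Allow the function to invoke Bedrock models (streaming included).
    fn.addToRolePolicy(
      new iam.PolicyStatement({
        actions: ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
        resources: ["*"],
      }),
    );

    // REST API with throttling in front of the function.
    new apigw.LambdaRestApi(this, "AiApi", {
      handler: fn,
      deployOptions: { throttlingRateLimit: 5, throttlingBurstLimit: 10 },
    });
  }
}
```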
Want This for Your Product?
Every feature on this site is built with the same tools and patterns we use for clients. Let's build something intelligent together.
Start a Conversation