Behind the Scenes

How Our AI Actually Works

Every AI feature on this site is powered by real AWS infrastructure. Explore the architecture, services, and technical flows behind each one.

AI Playground

Describe it. Watch it build.

Type a plain-English description of any UI component and watch AI generate production-ready HTML + Tailwind CSS in real time. Supports multi-turn conversations to refine and iterate on your component.

Try it

How It Works

Your prompt is sent to an API Gateway endpoint backed by a Lambda function. The Lambda calls Amazon Bedrock with Claude, streaming tokens back through a chunked HTTP response. The frontend renders each chunk into a live preview iframe as it arrives.

Technical Flow

01

User types a component description in the browser

02

Request hits API Gateway with rate limiting (10/day per IP)

03

Lambda constructs a system prompt optimized for HTML/Tailwind generation

04

Amazon Bedrock streams Claude's response token-by-token

05

Frontend renders each chunk into a sandboxed iframe in real time

06

Conversation history is maintained client-side for multi-turn refinement
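
The streaming step above can be sketched in TypeScript. This is a minimal illustration, not the production code: `streamComponent` is a hypothetical name, and the render callback stands in for the live preview iframe update.

```typescript
// Sketch: consume a chunked streaming HTTP response body and re-render
// the accumulated HTML after every chunk. In the real page the callback
// writes into the sandboxed preview iframe; here it just receives strings.
async function streamComponent(
  body: ReadableStream<Uint8Array>,
  onChunk: (htmlSoFar: string) => void,
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let html = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    html += decoder.decode(value, { stream: true }); // decode partial UTF-8 safely
    onChunk(html); // update the live preview with everything received so far
  }
  return html;
}
```

In the browser, `body` would come from `fetch(...).then((res) => res.body)`; re-rendering the full accumulated string each time keeps the preview valid even when a chunk splits an HTML tag.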

AWS Services

Amazon Bedrock
LLM inference with Claude
AWS Lambda
Serverless request handler
API Gateway
REST endpoint with throttling
CloudWatch
Logging and monitoring
Tech Stack Advisor

Describe your project. Get your stack.

Describe your project requirements and get AI-powered architecture recommendations with a complete tech stack, reasoning, and a Mermaid architecture diagram. Follow up with questions to refine the recommendations.

Try it

How It Works

The Lambda function sends your project description to Claude via Bedrock with a specialized system prompt that instructs the model to analyze requirements, recommend technologies, and generate a Mermaid diagram. Responses stream back with a special marker format for the diagram section.

Technical Flow

01

User describes their project requirements

02

Lambda sends the prompt to Bedrock with architecture-focused system instructions

03

Claude analyzes requirements and generates structured recommendations

04

Response includes a Mermaid diagram between ---DIAGRAM--- markers

05

Frontend renders markdown recommendations and loads Mermaid.js for the diagram

06

Follow-up questions maintain conversation context for deeper analysis
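
The marker handling in step 04 can be sketched as a small parser. `parseAdvisorResponse` is an illustrative name under the assumption that the diagram sits between a pair of ---DIAGRAM--- markers, as described above.

```typescript
// Sketch: split an advisor response into markdown prose and the Mermaid
// diagram delimited by ---DIAGRAM--- markers.
interface AdvisorResponse {
  markdown: string;
  diagram: string | null; // null while streaming, before the closing marker arrives
}

function parseAdvisorResponse(raw: string): AdvisorResponse {
  const parts = raw.split("---DIAGRAM---");
  if (parts.length < 3) {
    // No complete marker pair yet: treat everything as markdown for now.
    return { markdown: raw.trim(), diagram: null };
  }
  return {
    markdown: (parts[0] + parts.slice(2).join("")).trim(),
    diagram: parts[1].trim(), // handed to Mermaid.js for rendering
  };
}
```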

AWS Services

Amazon Bedrock
Architecture analysis with Claude
AWS Lambda
Prompt orchestration
API Gateway
REST endpoint with rate limiting
CloudWatch
Request logging
AI Agent Visualizer

Watch AI think step-by-step.

Select a scenario (code review, data analysis, deployment, bug fix) and watch an AI agent break down the problem, plan a solution, execute steps, and verify results — all streamed live to a terminal-style interface.

Try it

How It Works

The Lambda receives a scenario ID, constructs a multi-step prompt chain, and calls Bedrock for each agent phase (think → plan → execute → verify → result). Each phase's output is streamed as newline-delimited JSON, with the frontend updating the pipeline visualization in real time.

Technical Flow

01

User selects a predefined scenario (e.g., 'Code Review')

02

Lambda receives the scenario and initiates a multi-phase prompt chain

03

Each phase (Think, Plan, Execute, Verify, Result) calls Bedrock independently

04

Responses stream as NDJSON with step metadata

05

Frontend highlights the active pipeline step and appends terminal output

06

Rate limiting tracks daily usage per IP
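
The NDJSON handling in step 04 can be sketched as a buffer parser. The event shape and function name are assumptions for illustration; the key idea is that the last line of a streamed buffer may be an incomplete JSON object and must be carried over.

```typescript
// Sketch: parse newline-delimited JSON step events as chunks stream in.
interface StepEvent {
  phase: string; // e.g. "think", "plan", "execute", "verify", "result"
  text: string;
}

function parseNdjsonBuffer(buffer: string): { events: StepEvent[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // the final line may be a partial event
  const events = lines
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as StepEvent);
  return { events, rest }; // prepend `rest` to the next chunk before parsing again
}
```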

AWS Services

Amazon Bedrock
Multi-step agent reasoning
AWS Lambda
Agent orchestration
API Gateway
Streaming endpoint
CloudWatch
Step-level tracing
AI Chat Assistant

Ask anything about creative-it.

A floating chat widget that answers questions about creative-it's services, process, team, and capabilities. It maintains session context so follow-up questions flow naturally, with responses streamed as they're generated.

How It Works

The Lambda uses a Bedrock Knowledge Base backed by an S3 bucket of curated company documents. When a user asks a question, it performs RAG (Retrieval-Augmented Generation) — retrieving relevant chunks from the knowledge base, then generating a grounded answer with Claude. Session IDs enable multi-turn conversations.

Technical Flow

01

User types a question in the floating chat widget

02

Request includes a session ID for conversation continuity

03

Lambda queries the Bedrock Knowledge Base for relevant document chunks

04

Retrieved context is injected into Claude's prompt (RAG pattern)

05

Claude generates a grounded answer, streamed back to the widget

06

Session state persists across messages for follow-up questions
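
Step 04's context injection can be sketched as follows. Note that in the live feature Bedrock Knowledge Bases handles retrieval and prompt grounding as a managed service; this hypothetical `buildRagPrompt` only illustrates the underlying RAG pattern of stuffing retrieved chunks into the model's prompt.

```typescript
// Sketch: assemble a grounded prompt from retrieved document chunks
// (the core of the RAG pattern). Chunk numbering lets the model cite sources.
function buildRagPrompt(question: string, chunks: string[]): string {
  const context = chunks.map((c, i) => `[${i + 1}] ${c}`).join("\n");
  return [
    "Answer the question using only the context below.",
    "If the context does not contain the answer, say so.",
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```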

AWS Services

Amazon Bedrock
LLM inference + Knowledge Bases
Amazon S3
Document storage for RAG
AWS Lambda
Query orchestration
API Gateway
Chat endpoint with session tracking
Website Remix

Restyle this site with AI.

Type a visual theme (e.g., 'retro 80s neon' or 'minimalist monochrome') and AI generates custom CSS that transforms the entire site's look in real time. Reset anytime to return to the original.

How It Works

The Lambda sends your theme description to Claude with a system prompt containing the site's CSS custom properties and design token structure. Claude generates override CSS that targets the existing theme variables. The frontend injects the CSS as a <style> tag, instantly restyling the page.

Technical Flow

01

User types a theme description (e.g., 'warm earthy tones')

02

Lambda sends the prompt with the site's CSS variable schema

03

Claude generates CSS overrides targeting theme custom properties

04

Response streams with CSS between ---CSS--- markers

05

Frontend extracts the CSS and injects it as a <style> element

06

A banner appears with a reset button to restore the original theme
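
The extraction in step 05 can be sketched like this, assuming the ---CSS--- marker pair described above; the function name is illustrative.

```typescript
// Sketch: pull the generated CSS from between ---CSS--- markers.
function extractRemixCss(raw: string): string | null {
  const match = raw.match(/---CSS---([\s\S]*?)---CSS---/);
  return match ? match[1].trim() : null;
}

// In the browser, the result would then be injected roughly like:
//   const el = document.createElement("style");
//   el.id = "ai-remix"; // a known id lets the reset button remove it later
//   el.textContent = css;
//   document.head.appendChild(el);
```

Because the overrides target the site's existing CSS custom properties, removing that single `<style>` element restores the original theme in one step.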

AWS Services

Amazon Bedrock
CSS generation with Claude
AWS Lambda
Theme prompt handler
API Gateway
Remix endpoint
Live Translation

Read this site in 25+ languages.

Click any language flag and the entire page is translated in place — headlines, paragraphs, buttons, and all. Translations are context-aware and preserve formatting. Reset to return to English anytime.

How It Works

The frontend collects all translatable text nodes from the DOM, batches them (50 per request), and sends them to a Lambda that calls Claude with translation-specific prompts. Claude returns a JSON array of translated strings, which the frontend applies back to the corresponding DOM elements.

Technical Flow

01

User clicks a language flag (e.g., German, Japanese)

02

Frontend traverses the DOM and collects text from translatable elements

03

Original text is stored in a Map for later reset

04

Texts are batched (50 per request) and sent to the translation Lambda

05

Claude translates the batch while preserving formatting and context

06

Translated strings are applied to DOM elements; a banner shows active language
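
Steps 03 and 04 can be sketched as two small helpers. The names are illustrative; the batch size of 50 matches the flow above.

```typescript
// Sketch: split collected text strings into request-sized batches
// (50 per request), and remember the originals so Reset can restore English.
function batchTexts(texts: string[], size = 50): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < texts.length; i += size) {
    batches.push(texts.slice(i, i + size));
  }
  return batches;
}

// Map from a stable node index back to the original English string.
function rememberOriginals(texts: string[]): Map<number, string> {
  return new Map(texts.map((t, i): [number, string] => [i, t]));
}
```

Keying by index works because translated strings come back as a JSON array in the same order they were sent, so batch position maps each result to its DOM element.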

AWS Services

Amazon Bedrock
Context-aware translation with Claude
AWS Lambda
Batch translation handler
API Gateway
Translation endpoint
Agentic Coding Stats

Live GitHub activity from AI agents.

Displays real-time GitHub statistics — commits, lines changed, and a 7-day activity chart — from our organization's repositories. An AI-generated narrative summarizes the day's development activity.

Try it

How It Works

Two Lambda functions power this feature. The stats Lambda queries the GitHub API for commit and diff data across all org repos, caching results in DynamoDB with TTL. The story Lambda takes the stats and sends them to Claude, which generates a creative narrative about the day's coding activity.

Technical Flow

01

Page loads and fetches /github-stats from the API

02

Stats Lambda checks DynamoDB cache (5-minute TTL)

03

On cache miss, Lambda queries GitHub API for org-wide commit data

04

Stats are aggregated into 24-hour and 7-day windows, and history points are stored for the activity chart

05

Frontend renders stats cards and draws a Canvas-based activity chart

06

Story Lambda sends stats to Claude for a narrative summary
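
The cache logic in steps 02 and 03 can be sketched as a read-through cache with a 5-minute TTL. The real stack stores entries in DynamoDB; here the reader, writer, and GitHub fetch are injected functions, and the entry shape is an assumption for illustration.

```typescript
// Sketch: read-through cache with TTL, mirroring how the stats Lambda
// checks DynamoDB before calling the GitHub API.
interface CacheEntry<T> {
  value: T;
  expiresAt: number; // epoch seconds; maps to DynamoDB's TTL attribute
}

async function getCached<T>(
  read: () => Promise<CacheEntry<T> | undefined>,
  write: (entry: CacheEntry<T>) => Promise<void>,
  fetchFresh: () => Promise<T>,
  nowSec: number,
  ttlSec = 300, // 5-minute TTL from the flow above
): Promise<T> {
  const hit = await read();
  if (hit && hit.expiresAt > nowSec) return hit.value; // fresh: serve from cache
  const value = await fetchFresh(); // miss or stale: hit the GitHub API
  await write({ value, expiresAt: nowSec + ttlSec });
  return value;
}
```

Injecting the storage and fetch functions keeps the caching decision testable without AWS credentials, which is also a reasonable way to unit-test the real Lambda.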

AWS Services

Amazon Bedrock
AI story generation with Claude
Amazon DynamoDB
Stats caching with TTL
AWS Lambda
GitHub API integration + story generation
API Gateway
Stats and story endpoints
Architecture

The Full Stack

All AI features run on a serverless AWS architecture. Here's every service involved, grouped by layer.

AI / ML Layer

Amazon Bedrock

Managed LLM inference with Claude — powers all AI features

Bedrock Knowledge Bases

RAG pipeline for the chat assistant's document retrieval

Compute Layer

AWS Lambda

Serverless functions for every API endpoint — zero idle cost

API Gateway

REST APIs with throttling, CORS, and custom domain mapping

Data Layer

Amazon DynamoDB

Low-latency caching for GitHub stats and rate limiting

Amazon S3

Document storage for the knowledge base and static assets

Networking & Orchestration

Amazon CloudFront

CDN for the static Astro site and asset delivery

AWS CDK

Infrastructure as code — the entire stack defined in TypeScript

Amazon CloudWatch

Centralized logging, metrics, and alerting across all Lambdas

AWS IAM

Fine-grained permissions between services

Want This for Your Product?

Every feature on this site is built with the same tools and patterns we use for clients. Let's build something intelligent together.

Start a Conversation