Multi-Model AI Workspace
Lynk is a full-stack web app that lets you chat with multiple LLMs in one place, create live notes automatically, keep a durable project memory across sessions, and (soon) invite others into the same live conversation.
Built: August–September 2025
Role:
Solo designer/engineer (product, UX, frontend, backend, infra)
Status:
Working prototype; polishing for public beta
Live: lynk.website (alpha)
The idea:
Most AI chat tools silo context by model and session. Lynk unifies them. It provides a single workspace where OpenAI, Anthropic, Gemini, and other models can respond side-by-side, while a project-level memory system keeps track of what matters over time. The result: faster iteration, better comparisons, and reusable knowledge instead of disposable chats.
The problem:
- Context disappears between providers and sessions
- Switching models breaks continuity
- Collaboration is clunky (usually screenshots/exports)
- Token costs are unpredictable
The solution:
Lynk is a provider-agnostic workspace with persistent project memory. You can:
- Compare multiple LLMs in one place
- Carry insights forward with project memory
- Store links, files, and commands in a Project Registry
- Share or (soon) co-edit a session with one link
Key capabilities:
- Multi-model chat (OpenAI, Anthropic, Gemini, pluggable others)
- Inspector panel that surfaces the gist, decisions, and todos of each turn
- Snapshots that consolidate sessions into structured summaries
- Project Registry for files, prompts, links, commands
- Markdown-first interface for clean code/docs
- Guest vs. account tiers, with Pro tier planned
- Cost safety rails: message caps, summarization, provider throttles
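The Inspector and Snapshot pieces above could be modeled roughly like this. A minimal TypeScript sketch with illustrative type and function names, not Lynk's actual data model:

```typescript
// Hypothetical shape of per-turn Inspector output and the Snapshot
// that consolidates a session; names are illustrative.

interface InspectorNote {
  turnId: string;
  gist: string;        // one-line summary of the turn
  decisions: string[]; // decisions surfaced from the exchange
  todos: string[];     // action items extracted from the turn
}

interface Snapshot {
  sessionId: string;
  createdAt: string;
  gist: string;
  decisions: string[];
  todos: string[];
}

// Consolidate per-turn notes into one structured summary, so a long
// session collapses into something reusable across future sessions.
function snapshotSession(sessionId: string, notes: InspectorNote[]): Snapshot {
  return {
    sessionId,
    createdAt: new Date().toISOString(),
    gist: notes.map((n) => n.gist).join(" "),
    decisions: notes.flatMap((n) => n.decisions),
    todos: notes.flatMap((n) => n.todos),
  };
}
```

The point of the structure is that a Snapshot is strictly smaller than the raw transcript, which is also what makes the summarization-based cost controls possible.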
Tech stack:
- Frontend: React + Next.js, streaming responses, Markdown rendering
- Backend: Node/TypeScript API routes with provider adapters
- Database & Auth: Supabase (Postgres + Row Level Security, Auth)
- Providers: OpenAI, Anthropic, Google AI Studio
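The provider-adapter layer can be sketched as a small interface that every model plugs into. This is an assumption-laden illustration (the `ProviderAdapter` name and the echo stub are invented for this sketch; real adapters would wrap each provider's SDK):

```typescript
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface ProviderAdapter {
  name: string;
  // Stream the reply chunk-by-chunk so the UI can render it live.
  chat(messages: ChatMessage[]): AsyncIterable<string>;
}

// A stub adapter showing the shape; a real one would call OpenAI,
// Anthropic, or Google AI Studio server-side, keeping keys private.
const echoAdapter: ProviderAdapter = {
  name: "echo",
  async *chat(messages) {
    const last = messages[messages.length - 1];
    for (const word of last.content.split(" ")) {
      yield word + " ";
    }
  },
};

// A registry keyed by name lets the workspace fan one prompt out to
// several models and render the responses side-by-side.
const adapters = new Map<string, ProviderAdapter>([
  [echoAdapter.name, echoAdapter],
]);
```

Because every provider satisfies the same interface, comparing models side-by-side is just iterating the registry with the same message history.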
Security & cost controls:
- Row-Level Security keeps data private per user
- Guest sessions are sandboxed and never carried into accounts
- Provider keys stay server-side
- Tiered limits, summarization, and rate limits prevent runaway costs
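The tiered-limit rail can be as simple as a server-side gate that runs before any provider call. A minimal sketch, assuming hypothetical tier names and caps (Lynk's actual thresholds are not shown here):

```typescript
// Illustrative per-tier daily message caps; numbers are invented.
type Tier = "guest" | "account" | "pro";

const MESSAGE_CAPS: Record<Tier, number> = {
  guest: 20,    // sandboxed guests get the lowest cap
  account: 200,
  pro: 2000,
};

interface Usage {
  messagesToday: number;
}

// Gate a request before it reaches any provider. Running this check
// server-side, next to the API keys, is what makes it a hard stop.
function canSendMessage(tier: Tier, usage: Usage): boolean {
  return usage.messagesToday < MESSAGE_CAPS[tier];
}
```

Summarization and provider throttles layer on top of this same gate: summarize when context grows past a budget, and throttle per-provider request rates independently of the per-user cap.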
Roadmap:
- Real-time shared sessions (“Invite to Chat”)
- Quick diagramming in-chat
- Team workspaces with roles/permissions
- Per-project provider settings
- Export to Markdown/PDF and shareable views
My role:
I designed the product and UX, built the frontend and backend, implemented Supabase auth + database, created the memory pipeline (Inspector + Snapshots), and set up provider adapters with cost controls.
Why it matters:
Lynk turns one-off chats into a reusable knowledge base while keeping flexibility to use the right model for each task. It’s both a personal productivity tool and a demonstration of my approach to AI product design: human-centered UX, clear memory semantics, strong data boundaries, and pragmatic cost control.