AI-Powered Drawing Coach
DrawEvolve is an iOS-first drawing app that helps artists improve through real-time AI feedback and personalized practice plans. Instead of generating art, it acts like a coach that critiques, guides, and tracks your progress as you draw.
Built:
August–October 2025
Role:
Founder, designer, and engineer (product, UX, iOS, AI systems)
Status:
In active development; core drawing and AI feedback are complete and undergoing user testing
Live: drawevolve.com (dev preview)
The idea:
Most AI art tools create images for you. DrawEvolve helps you create better ones yourself. It analyzes your drawing process, identifies strengths and weaknesses, and builds a skill plan that evolves with you.
The problem:
- AI tools often replace creativity instead of improving it
- Artists lack personalized, structured feedback
- Practice feels unfocused and repetitive
- Most drawing apps stop at tools, not teaching
The solution:
DrawEvolve combines a clean sketch interface with intelligent feedback. You can:
- Draw with a responsive, natural-feeling canvas
- Get instant critique or post-draw analysis
- Receive skill-based exercises to improve weak areas
- Track growth through evolving performance data
Key capabilities:
- Real-time AI feedback and post-draw critique
- Personalized skill plans and progress tracking
- Offline-first design for local drawing
- Freemium model with Pro tier for live coaching
- Privacy-focused data handling
Tech stack:
App: Swift, SwiftUI, PencilKit, Core Graphics, optional Metal, Combine
AI: Server endpoints in Node.js and TypeScript with streaming critique
Database and Auth: Supabase with Postgres and Row Level Security
Models: OpenAI for critique and analysis, lightweight on-device heuristics
Deployment: Vercel for APIs with Cloudflare caching
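To make the streaming critique concrete, here is a simplified sketch of a Vercel-style route handler that relays OpenAI tokens to the app as they arrive. The route path, request fields, prompt, and model name are illustrative placeholders, not the production endpoint.

```ts
// Illustrative sketch of a streaming critique endpoint (POST /api/critique is hypothetical).
// Assumes the official `openai` package; request fields and prompt are placeholders.
import OpenAI from "openai";

const openai = new OpenAI(); // API key stays server-side (OPENAI_API_KEY)

export async function POST(req: Request): Promise<Response> {
  // The app uploads the sketch as a data URL plus an optional focus area.
  const { imageDataUrl, focusArea } = await req.json();

  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    stream: true,
    messages: [
      {
        role: "system",
        content:
          "You are a drawing coach. Critique proportion, line quality, and values, then suggest one focused exercise.",
      },
      {
        role: "user",
        content: [
          { type: "text", text: `Critique this drawing. Focus: ${focusArea ?? "general"}` },
          { type: "image_url", image_url: { url: imageDataUrl } },
        ],
      },
    ],
  });

  // Relay tokens as they arrive so the critique reads in real time on the canvas.
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        const token = chunk.choices[0]?.delta?.content ?? "";
        if (token) controller.enqueue(encoder.encode(token));
      }
      controller.close();
    },
  });

  return new Response(body, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
}
```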
Roadmap:
- Expanded brush tools and layers
- Full Pro critique system
- Visual skill dashboard
- PSD and Procreate import/export
- Optional cloud sync
My role:
I designed the product, UX, and brand identity, built the Swift app, implemented backend AI endpoints, and developed the skill-mapping engine. I also handle marketing and identity under RIG Tech LLC.
Multi-Model AI Workspace
Lynk is a full-stack web app that lets you chat with multiple LLMs in one place, create live notes automatically, keep a durable project memory across sessions, and (soon) invite others into the same live conversation.
Built: August–September 2025
Role:
Solo designer/engineer (product, UX, frontend, backend, infra)
Status:
Working prototype; polishing for public beta
Live: lynk.website (alpha)
The idea:
Most AI chat tools silo context by model and session. Lynk unifies them. It provides a single workspace where OpenAI, Anthropic, Gemini, and other models can respond side-by-side, while a project-level memory system keeps track of what matters over time. The result: faster iteration, better comparisons, and reusable knowledge instead of disposable chats.
The problem:
- Context disappears between providers and sessions
- Switching models breaks continuity
- Collaboration is clunky (usually screenshots/exports)
- Token costs are unpredictable
The solution:
Lynk is a provider-agnostic workspace with persistent project memory. You can:
- Compare multiple LLMs in one place
- Carry insights forward with project memory
- Store links, files, and commands in a Project Registry
- Share or (soon) co-edit a session with one link
Key capabilities:
- Multi-model chat (OpenAI, Anthropic, Gemini, pluggable others)
- Inspector panel that surfaces the gist, decisions, and to-dos of each turn
- Snapshots that consolidate sessions into structured summaries
- Project Registry for files, prompts, links, commands
- Markdown-first interface for clean code/docs
- Guest vs. account tiers, with Pro tier planned
- Cost safety rails: message caps, summarization, provider throttles
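To give a feel for how the Inspector and Snapshots fit together, the sketch below shows plausible data shapes and a naive consolidation step. The types, field names, and logic are simplified illustrations, not the shipped memory pipeline.

```ts
// Hypothetical shapes: the Inspector annotates each turn; a Snapshot consolidates a session.
interface TurnInsight {
  turnId: string;
  gist: string;        // one-line summary of the exchange
  decisions: string[]; // choices settled in this turn
  todos: string[];     // follow-up actions surfaced by the model
}

interface Snapshot {
  sessionId: string;
  createdAt: string;   // ISO timestamp
  summary: string;     // structured recap of the whole session
  decisions: string[]; // deduplicated decisions across turns
  openTodos: string[]; // anything still unresolved
}

// A naive consolidation pass; the real pipeline would also summarize with an LLM.
function consolidate(sessionId: string, turns: TurnInsight[]): Snapshot {
  return {
    sessionId,
    createdAt: new Date().toISOString(),
    summary: turns.map((t) => t.gist).join(" "),
    decisions: [...new Set(turns.flatMap((t) => t.decisions))],
    openTodos: [...new Set(turns.flatMap((t) => t.todos))],
  };
}
```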
Tech stack:
- Frontend: React + Next.js, streaming responses, Markdown rendering
- Backend: Node/TypeScript API routes with provider adapters
- Database & Auth: Supabase (Postgres + Row Level Security, Auth)
- Providers: OpenAI, Anthropic, Google AI Studio
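The provider adapters boil down to a small shared interface. Here's a simplified sketch assuming the official `openai` and `@anthropic-ai/sdk` packages; the interface, function names, and model IDs are illustrative rather than the production adapters.

```ts
// Minimal provider-adapter sketch; names are illustrative.
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

interface ChatAdapter {
  id: string;
  complete(prompt: string): Promise<string>;
}

const openaiAdapter: ChatAdapter = {
  id: "openai",
  async complete(prompt) {
    const client = new OpenAI(); // keys stay server-side
    const res = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0]?.message?.content ?? "";
  },
};

const anthropicAdapter: ChatAdapter = {
  id: "anthropic",
  async complete(prompt) {
    const client = new Anthropic();
    const res = await client.messages.create({
      model: "claude-3-5-sonnet-latest",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    });
    const block = res.content[0];
    return block.type === "text" ? block.text : "";
  },
};

// Fan the same prompt out to every registered provider for side-by-side comparison.
export async function compareModels(prompt: string, adapters: ChatAdapter[]) {
  return Promise.all(
    adapters.map(async (a) => ({ provider: a.id, reply: await a.complete(prompt) }))
  );
}
```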
Security & cost controls:
- Row-Level Security keeps data private per user
- Guest sessions sandboxed, never carried into accounts
- Provider keys stay server-side
- Tiered limits, summarization, and rate limits to prevent runaway costs
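As a rough illustration of the tiered limits, the check below caps daily messages per tier before a request ever reaches a provider. The tier names, limits, and storage interface are assumptions for the sketch, not the real thresholds.

```ts
// Illustrative cost-rail check; tiers and caps are placeholder values.
type Tier = "guest" | "free" | "pro";

const DAILY_MESSAGE_CAP: Record<Tier, number> = {
  guest: 20,
  free: 100,
  pro: 1000,
};

interface UsageStore {
  // e.g. backed by a Supabase table keyed by user and day (hypothetical)
  countToday(userId: string): Promise<number>;
}

export async function assertUnderCap(userId: string, tier: Tier, usage: UsageStore): Promise<void> {
  const used = await usage.countToday(userId);
  if (used >= DAILY_MESSAGE_CAP[tier]) {
    // The API route turns this into a 429 so the client can show an upgrade prompt.
    throw new Error("Daily message cap reached for this tier");
  }
}
```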
Roadmap:
- Real-time shared sessions (“Invite to Chat”)
- Quick diagramming in-chat
- Team workspaces with roles/permissions
- Per-project provider settings
- Export to Markdown/PDF and shareable views
My role:
I designed the product and UX, built the frontend and backend, implemented Supabase auth + database, created the memory pipeline (Inspector + Snapshots), and set up provider adapters with cost controls.
Why it matters:
Lynk turns one-off chats into a reusable knowledge base while keeping flexibility to use the right model for each task. It’s both a personal productivity tool and a demonstration of my approach to AI product design: human-centered UX, clear memory semantics, strong data boundaries, and pragmatic cost control.