# LLM Playground
Industry-ready LLM playground demonstrating real-world LLM integration with a FastAPI backend, Google Gemini, and a clean React + Tailwind UI, deployed with production-grade architecture.
- Designed an API-first FastAPI backend with versioned endpoints (`/api/v1/ask`) for LLM inference and structured responses.
- Integrated Google Gemini securely on the server side with environment-based configuration, token usage tracking, and centralized error handling.
- Built a minimal, industry-style React UI with animated response reveals, Markdown rendering, copy-to-clipboard support, and prompt history.
- Implemented advanced UX features including a token slider (up to 100k), dynamic prompt examples per session, and fully responsive layouts.
- Deployed backend on Render and frontend on Vercel with production-ready CORS configuration, structured logging, and safe secrets management.