Two months ago I published a post about the 71 free browser-based AI and DevOps tools I built. It got way more attention than I expected. Thousands of developers started using them daily.
The most common feedback: "Can you build a tool for X?"
So I did. Four new tools, each filling a gap with no good free solution anywhere online. Same rules as before: 100% client-side, no backend, no sign-up, no tracking, and your data never leaves your machine.
Open the full toolkit — now 75 tools
Tool 72: Weather Widget
A clean, Apple-style weather widget. Search any city and get current conditions with a visual weather icon, an hourly forecast strip, feels-like temperature, humidity, and wind speed and direction.
Toggle between Fahrenheit and Celsius. Powered by the free Open-Meteo API.
Why I built it: I wanted a weather widget that looks good, runs fast, and doesn't require creating an account or getting an API key. Every other free weather tool either looks terrible, is covered in ads, or requires registration. This one is just clean information.
Real use case: I embed this in my morning dashboard routine. Quick check without opening a weather app or getting bombarded with news headlines.
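If you're curious how a widget like this works without an API key, here's a minimal sketch of the two Open-Meteo calls involved: a geocoding lookup to resolve the city, then a forecast request. The exact query parameters the widget requests are my assumption; the endpoints are Open-Meteo's public ones.

```javascript
// Build Open-Meteo request URLs. No API key needed; the parameter
// selection below is illustrative, not the widget's exact request.
function geocodeUrl(city) {
  return `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`;
}

function forecastUrl(lat, lon, unit = "celsius") {
  const params = new URLSearchParams({
    latitude: lat,
    longitude: lon,
    current: "temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_direction_10m",
    hourly: "temperature_2m",
    temperature_unit: unit, // "celsius" or "fahrenheit" toggle
  });
  return `https://api.open-meteo.com/v1/forecast?${params}`;
}

// Resolve the city first, then fetch current conditions.
async function currentWeather(city, unit) {
  const geo = await (await fetch(geocodeUrl(city))).json();
  const { latitude, longitude } = geo.results[0];
  const data = await (await fetch(forecastUrl(latitude, longitude, unit))).json();
  return data.current;
}
```

Because everything is a plain `fetch` from the browser, there's nothing to deploy and nothing to register for.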
Tool 73: AI System Prompt Analyzer
Paste any AI system prompt — leaked, open-source, or your own — and get an instant analysis:
- Token count with cost estimate across models
- Safety rail detection — counts restrictions, prohibitions, and refusals
- Capability detection — finds tool definitions, function calls, and permission grants
- Persona extraction — identifies the role and character definitions
- Complexity score (0-100) based on length, rule density, and section count
- Optimization tips — actionable suggestions to improve the prompt
Why I built it: System prompts are the most important part of any AI application, but there's no good tool to analyze them. After reading through dozens of leaked system prompts from Claude, ChatGPT, and Gemini, I realized engineers need a structured way to evaluate prompt quality.
Real use case: Before deploying a new AI agent, I paste the system prompt into this tool. If the complexity score is above 80, I know I need to simplify. If the safety rail count is above 50, I know I might be causing over-refusal. It caught three redundant instructions in my last production prompt.
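To make the categories above concrete, here's a toy version of the kind of heuristics such an analyzer can run. The regexes, the chars/4 token approximation, and the score weights are my assumptions for illustration, not the tool's actual rules.

```javascript
// Toy prompt analyzer: rough token count, safety-rail and tool-call
// detection, and a capped 0-100 complexity score. All thresholds and
// patterns here are illustrative assumptions.
function analyzePrompt(prompt) {
  const tokens = Math.ceil(prompt.length / 4); // crude ~4 chars/token estimate
  const safetyRails =
    (prompt.match(/\b(never|must not|do not|refuse|prohibited)\b/gi) || []).length;
  const toolDefs =
    (prompt.match(/\b(tool|function|call|invoke)\b/gi) || []).length;
  const sections = prompt.split(/\n#{1,3} /).length; // markdown-style headings
  // Complexity grows with length, rule density, and section count.
  const complexity = Math.min(
    100,
    Math.round(tokens / 100 + safetyRails * 2 + sections * 3)
  );
  return { tokens, safetyRails, toolDefs, sections, complexity };
}
```

A real analyzer needs a proper tokenizer and far richer pattern lists, but the shape — count signals, weight them, cap the score — is the same.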
Tool 74: Vibe Coding Prompt Generator
Select your AI coding tool (Claude Code, Cursor, Windsurf, Replit, Lovable, Bolt), pick your project type, choose a tech stack, describe what you want, toggle optional features (tests, Docker, auth, dark mode) — and get a production-ready prompt optimized for your specific tool.
Each generated prompt follows the SCOPE structure: Situation, Constraints, Outcome, Pattern, Edge cases. It's the structure I've found to produce the best results across vibe coding tools.
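Assembling a SCOPE prompt is mostly templating. Here's a sketch; the per-tool hints are illustrative assumptions, not the generator's real rules.

```javascript
// Sketch of SCOPE prompt assembly: Situation, Constraints, Outcome,
// Pattern, Edge cases. The tool-specific hints below are examples I
// made up, not the generator's actual knowledge base.
const TOOL_HINTS = {
  "Claude Code": "List the exact file structure you expect before any code.",
  "Cursor": "Reference open files inline so the model has local context.",
  "Windsurf": "Break the work into explicit, ordered steps.",
};

function scopePrompt({ tool, situation, constraints, outcome, pattern, edgeCases }) {
  return [
    `# Situation\n${situation}`,
    `# Constraints\n${constraints.map((c) => `- ${c}`).join("\n")}`,
    `# Outcome\n${outcome}`,
    `# Pattern\n${pattern}`,
    `# Edge cases\n${edgeCases.map((e) => `- ${e}`).join("\n")}`,
    TOOL_HINTS[tool] ? `# Tool note\n${TOOL_HINTS[tool]}` : "",
  ].filter(Boolean).join("\n\n");
}
```

The point of forcing every field is that the model never has to guess your stack, your constraints, or your edge cases.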
Why I built it: Vibe coding is the biggest trend in software development right now. But most people write terrible prompts — vague one-liners that produce vague code. This tool forces structure. It knows that Claude Code works best with explicit file structure requirements, that Cursor needs inline context hints, and that Windsurf prefers step-by-step flows.
Real use case: I used this to generate a prompt for building a SaaS dashboard with Next.js. The generated prompt included tech stack constraints, file structure expectations, authentication requirements, and testing patterns. Claude Code produced a working dashboard with auth, billing, and a data table — on the first try. Without the structured prompt, it would have taken 3-4 iterations.
Check out the full vibe coding prompt library for 50 battle-tested prompts.
Tool 75: AI Agent Cost Calculator
Building AI agents is expensive if you don't plan ahead. This tool compares costs across 10 popular models.
Enter your expected agent calls per day, average input/output tokens per call, and tool calls per run. It calculates:
- Cost per call, daily, monthly, and yearly for each model
- Visual bar chart comparing monthly costs
- Recommendations for prototyping, production, and complex reasoning
Models compared: Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5, GPT-4o, GPT-4o-mini, GPT-o3, Gemini 3.1 Pro, Gemini 3.1 Flash, Llama 4 Maverick, DeepSeek R1.
Why I built it: Every team I talk to underestimates AI agent costs. They prototype with the cheapest model, then switch to a more capable one in production and get sticker shock. This tool lets you model costs before writing a single line of code.
Real use case: We were planning to use Claude Opus 4.6 for our infrastructure agent (500 calls/day, 3000 input tokens, 800 output tokens). This calculator showed it would cost $42K/year. Switching to Sonnet 4.6 dropped it to $8.4K/year with negligible quality loss for our use case. That's a $33K/year saving discovered in 30 seconds.
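The underlying arithmetic is simple enough to sketch. The per-million-token prices and model names below are illustrative placeholders, not the calculator's actual rate table, so the dollar figures won't match the numbers above.

```javascript
// Yearly agent cost from per-million-token prices. Prices and model
// names here are made-up placeholders for illustration only.
function yearlyCost({ callsPerDay, inputTokens, outputTokens }, { inPerM, outPerM }) {
  const perCall = (inputTokens / 1e6) * inPerM + (outputTokens / 1e6) * outPerM;
  return perCall * callsPerDay * 365;
}

const workload = { callsPerDay: 500, inputTokens: 3000, outputTokens: 800 };

// Hypothetical price tiers (USD per million tokens):
const models = {
  "big-model":   { inPerM: 15,   outPerM: 75 },
  "mid-model":   { inPerM: 3,    outPerM: 15 },
  "small-model": { inPerM: 0.25, outPerM: 1.25 },
};

for (const [name, price] of Object.entries(models)) {
  console.log(name, `$${yearlyCost(workload, price).toFixed(0)}/yr`);
}
```

Output tokens usually dominate the bill at these price ratios, which is why trimming verbose agent responses often saves more than switching input formats.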
What's Next
I'm continuing to build tools based on what developers actually need. The toolkit now covers:
- 32 AI & LLM tools — prompt engineering, model comparison, token counting, code review
- 8 DevOps & CI/CD tools — Docker, Terraform, Kubernetes, monitoring
- 10 converter tools — JSON, YAML, CSV, Base64, URL encoding
- 5 SEO & web tools — meta tags, social cards, robots.txt, color contrast, weather
- 4 CSS & design tools — gradients, shadows, palettes, flexbox
- 4 security tools — JWT, hashing, passwords, UUIDs
- 3 image & media tools — compression, QR codes, code screenshots
- 6 text & data tools — markdown, mock data, JSON visualization, string utilities
- 3 new viral tools — system prompt analyzer, vibe coding prompts, agent cost calculator
If you have a tool idea, reach out on X or LinkedIn.
Read the original post: I Built 71 Free AI and DevOps Tools That Run Entirely in Your Browser