Erik's blog podcast
Blog posts turned into two-host conversations. .NET, TypeScript, Azure, event sourcing, developer experience.
Stop alt-tabbing to DevTools: WithBrowserLogs makes Chromium an Aspire resource
Aspire 13.3 introduces WithBrowserLogs, an extension that attaches a tracked Chromium browser to an endpoint-capable resource. Console logs and network activity stream back into the Aspire dashboard alongside your backend traces. For anyone debugging a frontend that talks to a distributed backend, this removes a hop you've been making manually for years.
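For context, a minimal AppHost sketch of how this could be wired up. The resource names are hypothetical, WithBrowserLogs follows the shape the post describes, and AddNpmApp/WithHttpEndpoint are the existing Aspire Node.js hosting APIs:

```csharp
// AppHost/Program.cs - hypothetical resource names; WithBrowserLogs per the post.
var builder = DistributedApplication.CreateBuilder(args);

var api = builder.AddProject<Projects.Api>("api");

// Assumption: WithBrowserLogs attaches a tracked Chromium instance to this
// endpoint-capable resource and streams its console and network activity
// into the Aspire dashboard next to the backend traces.
builder.AddNpmApp("frontend", "../frontend")
       .WithReference(api)
       .WithHttpEndpoint(env: "PORT")
       .WithBrowserLogs();

builder.Build().Run();
```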
Azure Functions Consumption on Linux is going away: what are the migration paths?
Linux Consumption is going away, and Microsoft's migration guidance points at Flex Consumption. But is Flex actually the right destination for your apps, or are the other paths (Azure Functions on Azure Container Apps, Container Apps Jobs, AKS with KEDA, or plain ACA) worth a closer look?
Run Gemma 4 with Ollama locally, and keep the Aspire LLM Insights (sparkles and all)
Can't use Microsoft Foundry because of compliance or an Azure bill that doubles during an AI development spike, but still want the best AI debugging experience in Aspire? Here's how to keep the full GenAI chat-log sparkles while Ollama and Gemma 4 run locally.
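As a rough sketch of the AppHost side, assuming the CommunityToolkit Ollama hosting integration (CommunityToolkit.Aspire.Hosting.Ollama); the model tag and resource names are placeholders, not taken from the post:

```csharp
// AppHost/Program.cs - a sketch; swap the model tag for the Gemma build you use.
var builder = DistributedApplication.CreateBuilder(args);

// Run Ollama locally as a container and pull a Gemma model into it.
var ollama = builder.AddOllama("ollama")
                    .WithDataVolume(); // keep pulled models across runs

var gemma = ollama.AddModel("gemma", "gemma3"); // placeholder tag

// The app talks to the local model instead of a cloud endpoint, so the
// GenAI telemetry driving the dashboard's chat-log view keeps working.
builder.AddProject<Projects.Api>("api")
       .WithReference(gemma);

builder.Build().Run();
```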
C# scripting in .NET 10: stop context-switching to your AI agent's scripts
You're debugging an issue and ask your AI agent to write a quick script that checks your database state. It hands you Python or JavaScript. You can read it, sure, but you can't review it at a glance the way you can with C#. With .NET 10's dotnet run file.cs, there's no reason to leave your main language anymore for the utility scripts your agent writes during development.
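A sketch of what one of those agent-written utility scripts could look like as a single C# file; the #:package directive is .NET 10's file-based-app syntax, while the database, table, and connection string are made-up stand-ins:

```csharp
#:package Microsoft.Data.Sqlite@9.0.0
// check-orders.cs - run with: dotnet run check-orders.cs
// Hypothetical agent-written check: how many orders are stuck in 'Pending'?

using Microsoft.Data.Sqlite;

// Placeholder connection string; point it at the database you're debugging.
using var connection = new SqliteConnection("Data Source=app.db");
connection.Open();

using var command = connection.CreateCommand();
command.CommandText = "SELECT COUNT(*) FROM Orders WHERE Status = 'Pending'";

var pending = (long)command.ExecuteScalar()!;
Console.WriteLine($"Pending orders: {pending}");
```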
Don't let your AI agent delegate the debug work to you: manage, monitor, and test your app with Aspire 13.2's CLI overhaul and new agent skills
You've discussed the feature with your AI agent, and it wrote the code, but then what? You start the app, open the browser, click around, look for bugs, find one, and describe it back to the agent. You're doing all the boring manual labor of verifying that what was built actually works. With Aspire's CLI overhaul in 13.2 and its new agent skills, combined with the Playwright CLI and skills, the agent can manage and monitor your distributed app, open the browser, test the feature, and debug it. The tedious verify-and-fix loop becomes the agent's job, not yours.
Building and evolving your own AI development skills
Skills are the most powerful part of any agentic workflow, and they're also the easiest to get wrong. This post covers the full lifecycle: writing a skill from scratch, finding and adopting skills from the community, and closing the loop so your skills improve over time.
AI-driven usability testing: a think-aloud study with a team of AI testers
Manual usability testing is slow, expensive, and easy to skip when a deadline looms. The /tool-ux-study skill spawns a coordinated team of AI tester agents that each log in as a different persona, test the application under different themes and viewports, and report back — while a lead agent acts as UX research facilitator, observing sessions, probing for clarity, and synthesizing findings into a research-grade report.
Documentation as a first-class concern in your agentic workflow
Most teams write documentation after the feature ships. By then the context is stale, the pressure to move on is high, and the ADR nobody wrote is already forgotten. The agentic dev workflow treats docs as something you generate alongside the code, not something you backfill when someone complains the wiki is out of date.
Quality gates that actually run: verification and security in the agentic workflow
Most code quality checks exist in three places: a CI pipeline that runs after you push, a mental checklist you may or may not remember, and pre-commit hooks that hit you with a wall of failures right when you thought you were done. The agentic workflow collapses all of these into a single command that runs before the PR, covers both .NET and Angular, and pairs automated scanning with reasoning about what the results mean.
Teaching your AI how to write tests with you
Everyone has opinions about how tests should look. Naming conventions, structure patterns, which mocking library to use. The problem with AI coding assistants is that they've picked up opinions from their training data too, and those opinions are usually not yours.
Dependency updates that understand your code
We've all been there. You open your repository on Monday morning and there are a dozen dependency update PRs waiting. Some are patch updates, some are minor, one is a major version bump buried in the middle. CI is green on all of them. You merge them. What could go wrong?
Turning your AI tool into your pair programming companion
AI models are trained on billions of lines of "everyone's" code: good and bad code, generic best practices, the Stack Overflow answers, the textbook approaches. But your project probably isn't that generic; why else would you have created it?