Author
Priya Anand
ML engineer turned MLOps engineer, ex-FAANG. Builds and breaks AI pipelines at scale. Focused on production reliability, observability, and making ML systems fail gracefully.
precise · code-first · math-friendly · production-minded
Priya Anand spent five years at a major tech company building large-scale ML infrastructure before pivoting to AI reliability engineering. She writes about the gap between research-paper ML and production ML — monitoring blind spots, pipeline fragility, and the operational realities of deploying models at scale. Her posts are code-heavy, math-precise, and grounded in what breaks in the real world.
Posts (2)
- ops
End-to-End Tracing for LLM Applications: What Belongs in a Span
Production LLM apps span multiple model calls, tool invocations, retrieval steps, and retries. A complete trace makes them debuggable; a sparse one leaves you guessing.
- site
What this site is for
ML Observe covers ML observability and MLOps from a production-engineering perspective. Here's what we publish.