<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Grafana on Abhishek Walia</title><link>https://awalia.dev/tags/grafana/</link><description>Recent content in Grafana on Abhishek Walia</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>Abhishek Walia</copyright><lastBuildDate>Sun, 01 May 2022 00:00:00 +0000</lastBuildDate><atom:link href="https://awalia.dev/tags/grafana/index.xml" rel="self" type="application/rss+xml"/><item><title>JMX Monitoring Stacks</title><link>https://awalia.dev/projects/jmx-monitoring-stacks/</link><pubDate>Sun, 01 May 2022 00:00:00 +0000</pubDate><guid>https://awalia.dev/projects/jmx-monitoring-stacks/</guid><description>&lt;p&gt;Most monitoring setups for Kafka require lengthy parsing configurations to get JMX metrics into your telemetry system. This project standardizes that pipeline across Prometheus/Grafana, New Relic, Elastic/Kibana, Datadog, and OpenTelemetry.&lt;/p&gt;</description></item><item><title>Monitor Kafka Clusters with Prometheus, Grafana, and Confluent</title><link>https://awalia.dev/talks/confluent-blog-prometheus-grafana/</link><pubDate>Mon, 29 Mar 2021 00:00:00 +0000</pubDate><guid>https://awalia.dev/talks/confluent-blog-prometheus-grafana/</guid><description>&lt;p&gt;Published on the &lt;strong&gt;Confluent Blog&lt;/strong&gt;, March 2021.&lt;/p&gt;
&lt;p&gt;Self-managing a Kafka cluster means wiring up your own monitoring. This post walks through how to export JMX data from Confluent clusters into Prometheus and Grafana with minimal setup. It became the reference blog series for connecting Confluent ecosystems to Prometheus-based monitoring stacks.&lt;/p&gt;</description></item></channel></rss>