
An Axis To Grind?


Looking Into The Magic Quadrant

The 2024 Gartner Magic Quadrant for Observability has now been released and, naturally, companies who are included will be sharing it for marketing purposes. After all, who would not want to capitalise on the prestige of being named a 'Leader' by an authority such as Gartner? At the same time, though, we think it is worth bearing in mind that the Magic Quadrant is not a league table. It is more of a conceptual mapping across two very carefully defined axes, which are themselves delimited by criteria particular to Gartner. Below is our own set of invocations for enjoying the magic of the quadrant and avoiding any vendor-driven hocus-pocus.

1. The format of the MQ diagram invites comparisons which may not be particularly meaningful. Is there really much value, for example, in comparing Chronosphere and Datadog? They are both great products, but Datadog is a full-spectrum platform that includes LLM observability, incident management, CI/CD monitoring, Cloud SIEM and more.

2. It is quite a stretch to conceptualise 'Microsoft' or 'AWS' as coherent observability products. Azure Monitor has quite a few strands, but it is more a series of loose ends than a fabric. It ingests telemetry from all sorts of sources and provides high-level analytics. However, dashboarding and reporting are highly fragmented across multiple different UIs. It is hard to see any sense in which Azure Monitor is a 'Challenger' to the likes of New Relic or Datadog - much the same can be said of CloudWatch.

3. Concepts such as Completeness of Vision may be interesting for the purposes of theoretical debate, but they are of limited use if you are looking for a product that fits a clearly defined observability strategy. Moreover, the Gartner definition of 'Completeness of Vision' is probably quite different from the image that the phrase might conjure up for many of us. Much of the definition actually concerns itself with sales and marketing capabilities. Naturally, you do need to sell your vision, but many of us would not intuitively bundle the two together into a single dimension.

4. The survey includes only SaaS platforms - this excludes a number of excellent products such as Kloudfuse and Groundcover that would be of particular value for companies with specialist data governance requirements.

5. The selection criteria mean that products such as Observe, SigNoz, SkyWalking and many others are not included. Arguably, therefore, the MQ paints only a partial picture of the market.

6. The Niche sector seems somewhat threadbare and does not reflect the genuinely vibrant state of this particular space within the observability market. From our point of view, this sector is not just interesting because it is dynamic and diverse. It is also of strategic interest, as products such as Kerno, Digma, Causely, Lightrun, Tracetest and many others are capable of sitting alongside core systems and providing very significant value.

7. Oracle! The name of Oracle was definitely one of the more surprising inclusions in the MQ. Was it a mistake? Perhaps an escapee from the RDBMS Quadrant? It turns out that there actually is such a thing as an Oracle SaaS observability offering, and that it does stand alone as a product - it is not just a rebranding of the plumbing for the Oracle Cloud Platform. At the same time, though, it would seem to be a product that is pretty much unknown outside of the Oracle client base, and it would be interesting to know the extent of its reach beyond that space.

8. Costs. Any discussion of the current observability market that doesn't mention costs becomes a largely academic exercise. Whilst it is true that some vendors rather dramatically overplay the cost card, it is nevertheless one of the most salient issues in observability. A system may have a compelling vision or a formidable organisational structure behind it but, if a customer can't afford to ship their logs to it, those capabilities are reduced to mere hypotheticals.

The purpose of this article is not to knock Gartner - they produce really high-quality analyses. The full report includes a lot of detail on the criteria for inclusion and exclusion, as well as definitions of the dimensions. The problem is that these details tend to be overlooked as vendors make a great fanfare about their position in a particular quadrant. The position of a vendor along an axis is highlighted, and the subtleties of the inclusion criteria are lost in the small print.

A point we have made many times is that there is no value in buying an observability system because it is "the best on the market" according to a particular ranking. If we are buying a car, not many of us will simply decide to buy the 'best car on the market' - whatever that may mean. Some of us might have a need for speed; some of us might just need a family car with loads of storage space and good fuel economy. One is not inherently better than the other. If you are going to procure an observability system, then the first thing you need is an observability strategy. Once you have figured out your strategy, you will have a solid framework for evaluating different offerings and determining which is the best fit for your organisation. Ultimately, the "best" product is the one that best fits your needs.
