Why businesses are investing in observability
IT infrastructure has become markedly more complex in recent years. Organisations increasingly run systems built from many microservices and hosted across distributed cloud environments.
This complexity is compounded by the integration of a wide array of technologies, such as artificial intelligence and machine learning, into modern systems.
Given this level of complexity in modern IT estates, maintaining a clear picture of the performance and health of IT infrastructure has become a challenge. Until recently, most applications were built on a monolithic architecture, with all components tightly integrated and deployed as a single unit, so traditional software monitoring tools could provide adequate visibility.
However, with the distributed nature of modern-day applications, there is a growing concern that traditional monitoring tools will leave businesses with many blind spots as to what goes on in their IT infrastructures.
More blind spots mean a higher chance of operational inefficiencies: system downtime, poor user experience, a lack of predictive insight and poor resource utilisation.
The technical issue that grounded the UK’s air traffic control system for more than three hours in August is a typical example of what poor visibility can cause in a business running on complex networks and applications.
To address these challenges, more businesses are adopting observability platforms, which offer end-to-end visibility into how vast IT components interact to improve system uptime, enhance performance and optimise resource utilisation.
As a result, many observability platforms have launched in recent years, with Splunk, Dynatrace, Datadog and Honeycomb emerging as market leaders.
By 2028, the observability tools and platforms market is projected to reach $4.1 billion, up sharply from its 2023 valuation of $2.4 billion, according to MarketsandMarkets.
Observability platforms provide businesses of all sizes and industries with a comprehensive set of tools to assess, oversee, and manage their cloud services, applications, and infrastructure effectively through a single pane of glass.
They help IT teams understand the internal state of complex systems by analysing the data they generate, such as logs, traces, and metrics. Many observability platforms are driven by AI and ML capability, which furnishes IT teams with actionable intelligence that can expedite the remediation of issues in complex IT operations.
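To make the three signal types concrete, here is a minimal, self-contained Python sketch of a service emitting correlated logs, metrics and traces. All names (the "checkout-service" logger, the `span` helper, the metric fields) are invented for illustration; real systems would typically use an instrumentation library such as OpenTelemetry rather than hand-rolled code like this.

```python
import json
import logging
import time
import uuid
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout-service")  # hypothetical service name

# A toy metrics store: a real agent would export these to a backend.
metrics = {"requests_total": 0, "request_latency_ms": []}

@contextmanager
def span(name, trace_id):
    """A toy trace span: times a unit of work and emits a correlated log line."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed_ms = (time.monotonic() - start) * 1000
        metrics["requests_total"] += 1              # metric
        metrics["request_latency_ms"].append(elapsed_ms)
        # Structured log, correlated with the trace via trace_id.
        logger.info(json.dumps({"span": name, "trace_id": trace_id,
                                "duration_ms": round(elapsed_ms, 2)}))

trace_id = uuid.uuid4().hex
with span("handle_checkout", trace_id):
    time.sleep(0.01)  # stand-in for real work
```

The point of the sketch is the correlation: because the log line carries the same `trace_id` as the span, a platform ingesting all three signals can pivot from a latency spike in the metrics to the exact traces and log lines that explain it.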
Observability solutions also provide real-time analytics of IT operations, revealing what needs attention or change and leading to better resource utilisation.
The need for observability is driven by both business and technical concerns, which, of course, are intertwined. From a technical standpoint, businesses are adopting observability because it helps IT operations, development and networking teams identify and analyse anomalies within their IT ecosystem faster than they could by relying solely on traditional monitoring tools.
In a statement made available to TechInformed, Chris Baily, Observability and AIOps engineer at IBM, said: “The need for observability tools is fueled by the limitations found in traditional software monitoring tools which are often component or technology-based and aligned to teams that managed different technologies.
“However, observability solutions are not siloed; hence, they can automatically discover different components of a system, track their relationships, and provide a holistic rather than siloed view of the overall application and its dependencies.”
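One way to picture the component discovery and relationship tracking Bailey describes is as a dependency graph. The sketch below is purely illustrative (the service names and edges are invented): it inverts a service-to-dependency map so that, when one component fails, the set of affected upstream services can be found with a breadth-first walk.

```python
from collections import defaultdict, deque

# Hypothetical discovered topology: each service mapped to what it calls.
dependencies = {
    "frontend": ["checkout", "catalogue"],
    "checkout": ["payments", "orders-db"],
    "catalogue": ["catalogue-db"],
    "payments": [],
    "orders-db": [],
    "catalogue-db": [],
}

# Invert the edges so we can ask: if X fails, who depends on it?
dependents = defaultdict(list)
for svc, deps in dependencies.items():
    for dep in deps:
        dependents[dep].append(svc)

def impacted_by(failed):
    """Breadth-first walk over transitive dependents of a failed component."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for parent in dependents[node]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return sorted(seen)

print(impacted_by("orders-db"))  # ['checkout', 'frontend']
```

This is the holistic view the quote contrasts with siloed monitoring: a team watching only the database would see one failing component, while the dependency graph shows the checkout flow and the frontend are affected too.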
Observability also allows businesses to build better applications, says Roman Spitzbart, VP EMEA Solutions Engineering at Dynatrace.
“With observability, development teams can have a better understanding of what is going on in their systems and make necessary adjustments to issues that may lead to potential glitches in the application. This results in building applications faster and with fewer resources,” Spitzbart added.
All of this translates into business gains: detecting and resolving system anomalies promptly reduces the chance of downtime, delivers a better user experience and, by extension, a better return on investment.
Speaking to TechInformed on the business gains of observability, Stephane Estevez, EMEA director of product marketing at Splunk, stated:
“There is a direct relationship between performance, speed and revenue of a business, and what observability does is to ensure that businesses have all the data they need in the right cadence so they can achieve their goals of availability, functionality, performance and better return on investment.”
Some reports have shown that businesses that invest in observability are less likely to record operations shutdowns or downtimes, leading to improved user experience and better ROI.
Splunk’s State of Observability 2023 report, which surveyed about 1,750 observability practitioners, managers and experts from organisations with 500 or more employees, puts this in context.
According to the findings, 76% of the respondents reported that downtime can cost up to $150,000 per hour. In that same survey, observability leaders reported 33% fewer outages per year than new adopters and were about eight times more likely to exceed expectations in ROI on observability tools.
Despite the rewards of observability, many businesses find it complex to predict the cost of observability. This complexity arises from several factors, including the unpredictable nature of observability needs, evolving infrastructure, and the pricing models employed by observability platforms.
“One major challenge of using observability is the unpredictability of costs,” says Scott Kramer, director of information technology at Clio, a cloud-based legal technology company.
“The expense of observability is proportional to the level of visibility a company desires. So, based on parameters like business needs, threat vectors, and operational systems in place, organisations often deal with variability in data volume, making it difficult to have an accurate observability budget,” Kramer added.
Despite this unpredictability in cost and budget estimation, Kramer adds that organisations can find value in their investment by taking a quantifiable risk management approach.
“There can be cost variances,” says Kramer, “so planning and communication is important to minimise unpredictability.”
Generally, most observability solutions apply various pricing models ranging from one-time license costs to monthly and yearly subscriptions. What businesses are charged is usually based on their type of workload, the amount of data running on their infrastructure, and the number of activities being monitored.
Despite these complexities, businesses can manage their observability expenses better if they understand the criteria on which each platform bases its pricing. Regardless of the scale of the business, best practice is usually to choose a plan that monitors only what you need.
This approach aims to minimise data collection and retain data for the shortest possible period. It does, however, raise questions about how organisations can determine their exact needs in advance, and whether minimal retention undermines correlation with the historical data needed for troubleshooting or analysis.
Coming soon: Observability in action – two case studies