Updated April 2026

Log Management Pricing 2026: Splunk vs Datadog vs Elastic vs Everything Else

TL;DR

Log management costs $0.10 to $150+ per GB depending on vendor, features, and whether you are ingesting, indexing, or storing. At 100GB/day, expect roughly $360-8,000/mo on commercial platforms depending on vendor. Self-hosted ELK: $2,000-5,000/mo in infrastructure, plus engineering time. Logs are the single largest cost component of observability, typically accounting for 50-70% of total monitoring spend. The vendor you choose for log management has the biggest impact on your overall observability budget.

Log management is the most expensive component of observability for most organisations. While infrastructure metrics and APM traces get more attention in vendor marketing, logs consume the majority of both storage and processing resources in any monitoring deployment. A typical mid-market company generates 50-500GB of logs per day from application servers, Kubernetes clusters, load balancers, databases, and security systems. At these volumes, the choice of log management vendor is not a minor detail: it can create a $3,000-10,000/month cost difference for identical log volumes.

The complexity of log pricing stems from the fact that log management involves three distinct operations, each with different costs: ingestion (receiving logs from your infrastructure), indexing (making logs searchable and queryable), and archiving (long-term storage for compliance). Some vendors charge for all three separately, some bundle them, and some use entirely different pricing models. This page normalises all vendor pricing to a common per-GB basis so you can make an apples-to-apples comparison.

Every existing log pricing comparison is published by a vendor with a product to sell. Splunk articles compare themselves favourably to Elastic. Datadog articles emphasise their low ingestion rate without mentioning the separate indexing charge. Grafana articles promote Loki's low cost without disclosing the query performance trade-offs. This is the independent comparison that evaluates all vendors on the same terms, with transparent methodology and no preferred outcome.

Normalised Per-GB Pricing Comparison

The table below normalises all vendor pricing to a per-GB basis for fair comparison. Where vendors charge separately for ingestion, indexing, and archiving, all three costs are shown independently. The "effective per-GB" column estimates the total cost per GB for a typical use case where 100% of logs are ingested, 50% are indexed (made searchable), and all are archived for 30 days. This reflects real-world usage where teams filter out noise from indexing while maintaining full ingestion for compliance and archiving.

| Vendor | Ingest/GB | Index Cost | Archive/GB | Notes |
| --- | --- | --- | --- | --- |
| Datadog | $0.10 | $1.70/M events | $0.025 | Dual-charge: ingest + index separately |
| Splunk Cloud | $2.00 | Included | $0.100 | Workload-based alternative available |
| Elastic Cloud | $0.12 | Included | $0.030 | Resource-based pricing, GB cost varies |
| Grafana Loki (Cloud) | $0.50 | Included | $0.030 | Lower cost, label-based indexing only |
| Sumo Logic | $2.50 | Included | $0.050 | Per-account tiers with volume discounts |
| New Relic | $0.30 | Included | N/A | Part of unified per-GB pricing, 100GB/mo free |

All prices as of April 2026. Datadog indexing at $1.70/M events assumes ~3M events/GB. Splunk Cloud pricing varies significantly by contract.
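The "effective per-GB" methodology described above can be sketched as a small helper (the 100%-ingested / 50%-indexed / fully-archived assumption is the one stated in this section; the `effective_per_gb` name is ours):

```python
def effective_per_gb(ingest, index_per_gb, archive, indexed_fraction=0.5):
    """Blend ingest, index, and archive charges into one effective rate.

    Assumes 100% of logs are ingested and archived, and `indexed_fraction`
    of them are also indexed, matching the table's methodology.
    """
    return ingest + indexed_fraction * index_per_gb + archive

# Datadog: $0.10 ingest + $1.70/M events (~3M events/GB => $5.10/GB to index)
print(round(effective_per_gb(0.10, 5.10, 0.025), 3))  # effective $/GB
# Splunk Cloud: indexing is included in the $2.00/GB rate
print(round(effective_per_gb(2.00, 0.0, 0.10), 3))
```

On these assumptions Datadog's headline $0.10/GB becomes an effective $2.675/GB, which is why the dual-charge model matters for comparison.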

Understanding Ingestion vs Indexing vs Archiving

The critical distinction most teams miss when evaluating log management pricing is that log management involves three separate operations, each with different cost profiles and different vendor pricing approaches. Understanding this distinction is essential for accurate cost comparison and for optimising your log management spend.

Ingestion

Typical cost: $0.10 - $2.50/GB

Ingestion is the process of receiving logs from your infrastructure into the logging platform. This involves parsing, enriching (adding metadata), and routing logs to their destination (index, archive, or discard). Ingestion costs are typically the lowest component because receiving data is computationally cheaper than indexing it. However, you pay for ingestion on ALL logs, even those you subsequently discard or archive without indexing. Datadog charges $0.10/GB for ingestion, which appears cheap until you add their separate indexing charge. Splunk's $2.00/GB includes ingestion and indexing together.

Indexing

Typical cost: included with ingestion on most vendors; $1.70/M events on Datadog

Indexing is the process of making logs searchable and queryable. This is the most computationally expensive operation and the largest cost component on vendors that charge separately (like Datadog). Indexing involves full-text indexing, field extraction, pattern recognition, and storing the data in a query-optimised format. On Datadog, indexing costs $1.70 per million log events, which at approximately 3 million events per GB translates to $5.10/GB, 51x the ingestion cost. Most other vendors include indexing in their per-GB price, making their headline rates appear higher but their total cost often lower. The key cost optimisation lever is to index only the logs you actually need to search, routing the rest to cheap archive storage.
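The event-to-GB conversion used above can be made explicit (the ~3M events/GB figure is this article's assumption; actual event counts depend on your average log line size):

```python
def index_cost_per_gb(price_per_million_events, events_per_gb_millions):
    """Convert event-priced indexing into a per-GB rate."""
    return price_per_million_events * events_per_gb_millions

cost = index_cost_per_gb(1.70, 3.0)  # Datadog rate at ~3M events/GB
print(f"${cost:.2f}/GB to index, {cost / 0.10:.0f}x the $0.10/GB ingest rate")
```

If your logs are shorter than ~350 bytes per event, events-per-GB rises and the effective per-GB index cost climbs further.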

Archiving

Typical cost: $0.02 - $0.10/GB

Archiving stores logs in compressed format on cheap object storage (S3, GCS, Azure Blob) for long-term retention and compliance. Archived logs are not directly searchable without reindexing, but they are available for forensic investigation and audit compliance. Archive costs are 10-100x cheaper than indexed storage because object storage is dramatically less expensive than the compute and SSD storage required for search indexes. Most vendors offer archive features: Datadog Log Archives, Splunk SmartStore, Elastic Snapshot Lifecycle Management. Direct-to-S3 archiving at $0.023/GB/month is the cheapest option for compliance-only retention.

Scenario Pricing: What Log Management Actually Costs

Per-GB rates are useful for comparison but do not reflect actual monthly costs. The following three scenarios show what you will actually pay at different log volumes, assuming 50% of logs are indexed (the remainder ingested and archived only) and 15-day indexed retention. These assumptions reflect typical real-world usage patterns where teams configure log filters to exclude health checks, debug logs, and other noise from indexing while maintaining full ingestion for compliance.

Startup: 10GB/day

10GB/day from 10-20 application servers

| Vendor | Monthly Cost | Annual Cost |
| --- | --- | --- |
| Elastic Cloud (cheapest) | $36 | $432 |
| New Relic | $60 | $720 |
| Grafana Loki | $150 | $1,800 |
| Splunk Cloud | $600 | $7,200 |
| Sumo Logic | $750 | $9,000 |
| Datadog | $795 | $9,540 |

Difference between cheapest and most expensive: $759/month

Mid-Market: 100GB/day

100GB/day from 100 hosts, microservices, K8s cluster

| Vendor | Monthly Cost | Annual Cost |
| --- | --- | --- |
| Elastic Cloud (cheapest) | $360 | $4,320 |
| New Relic | $870 | $10,440 |
| Grafana Loki | $1,500 | $18,000 |
| Splunk Cloud | $6,000 | $72,000 |
| Sumo Logic | $7,500 | $90,000 |
| Datadog | $7,950 | $95,400 |

Difference between cheapest and most expensive: $7,590/month

Enterprise: 1000GB/day

1TB/day from 500+ hosts, multiple K8s clusters, security logs

| Vendor | Monthly Cost | Annual Cost |
| --- | --- | --- |
| Elastic Cloud (cheapest) | $3,600 | $43,200 |
| New Relic | $8,970 | $107,640 |
| Grafana Loki | $15,000 | $180,000 |
| Splunk Cloud | $60,000 | $720,000 |
| Sumo Logic | $75,000 | $900,000 |
| Datadog | $79,500 | $954,000 |

Difference between cheapest and most expensive: $75,900/month
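The scenario figures above can be reproduced with a small model (30-day months, 50% of logs indexed, archive costs omitted because they are negligible at these rates; per-GB rates from the comparison table):

```python
def monthly_cost(gb_per_day, ingest_per_gb, index_per_gb=0.0,
                 free_gb_per_month=0.0, indexed_fraction=0.5):
    """Monthly bill under the scenario assumptions: every GB is ingested,
    `indexed_fraction` of them are also indexed, 30-day month.
    `free_gb_per_month` models allowances like New Relic's 100GB free ingest.
    """
    gb = gb_per_day * 30
    billable = max(gb - free_gb_per_month, 0)
    return billable * ingest_per_gb + gb * indexed_fraction * index_per_gb

# Mid-market scenario: 100GB/day
print(round(monthly_cost(100, 0.12)))                         # Elastic Cloud
print(round(monthly_cost(100, 0.30, free_gb_per_month=100)))  # New Relic
print(round(monthly_cost(100, 0.10, index_per_gb=5.10)))      # Datadog
```

Plugging in other volumes reproduces the startup and enterprise rows; the model makes it easy to test how changing the indexed fraction moves each vendor's bill.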

Log Cost Reduction Strategies

Because logs represent 50-70% of total monitoring spend, optimising log costs has the largest impact on overall observability budgets. The following strategies apply to all vendors and can reduce log costs by 40-70% without losing the ability to investigate incidents or meet compliance requirements.

Source-Side Filtering

Typical savings: 30-50%

Filter logs at the application level before they reach your logging platform. Drop health check endpoint logs, debug-level logs in production, verbose framework logs (Spring Boot actuator, Express.js middleware), and repetitive cron job outputs. This eliminates ingestion, indexing, AND storage costs for filtered logs. Most applications can reduce log volume by 30-50% through source-side filtering without losing any operational visibility. Configure your logging framework (log4j, winston, serilog) to use INFO level in production and implement structured logging with severity levels that enable efficient filtering.
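A minimal sketch of source-side filtering with Python's stdlib `logging` (the `/healthz` path is a hypothetical health-check endpoint; adapt the rules to your own noise):

```python
import logging

class DropNoise(logging.Filter):
    """Drop records before they ever reach the log shipper: no ingest,
    no index, no storage cost for what never leaves the host."""
    def filter(self, record):
        if record.levelno < logging.INFO:  # no DEBUG logs in production
            return False
        # Drop health-check chatter (hypothetical endpoint path)
        return "/healthz" not in record.getMessage()

handler = logging.StreamHandler()
handler.addFilter(DropNoise())
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

logging.info("GET /healthz 200")   # filtered out
logging.debug("cache miss")        # filtered out
logging.error("GET /orders 500")   # shipped
```

The same rules can live in the shipper (Fluent Bit, Vector) instead of the application; putting them at the source is what eliminates all three cost components at once.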

Smart Sampling for High-Volume Sources

Typical savings: 20-40%

Some log sources generate massive volume but each individual log event has low diagnostic value. HTTP access logs from a load balancer processing 10,000 requests per second generate approximately 100GB/day of logs. Sampling these at 10% still provides statistically valid traffic patterns, error rate calculations, and latency distributions while reducing volume by 90%. Implement sampling at the log agent level (Fluentd, Fluent Bit, Vector) using consistent hashing to ensure all logs from a single request are either captured or dropped together, preserving trace-to-log correlation for sampled requests.
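A deterministic sampler along these lines, as a sketch (hash-based, so the keep/drop decision for a given request id is identical on every agent and restart; the `keep` helper and 10% default rate are ours):

```python
import hashlib

def keep(request_id: str, rate: float = 0.10) -> bool:
    """Hash the request id into [0, 1) and keep it if it falls under the
    sample rate. Every log line carrying the same request_id gets the
    same decision, preserving trace-to-log correlation."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate

# Stable across processes, hosts, and restarts:
assert keep("req-42") == keep("req-42")
```

Because the hash is uniform, the observed sample rate converges on `rate` over many requests, which is what keeps error rates and latency distributions statistically valid.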

Index Only What You Search

Typical savings: 40-70%

The most expensive operation in log management is indexing (making logs searchable). Most teams index 100% of their logs by default, but only query 10-30% of them during normal operations. Configure your logging platform to ingest all logs but index only the logs you actually need to search: application errors, security events, transaction logs, and deployment events. Route the rest to cheap archive storage (S3 at $0.023/GB/month) where they remain available for compliance and forensic investigation but do not incur indexing costs. On Datadog, this is the single most impactful cost reduction because indexing costs 51x more than ingestion.
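A routing sketch of the index-only-what-you-search idea (the severity levels and service names here are hypothetical examples of "logs you actually search"):

```python
# Hypothetical routing rules: index errors plus a few critical services,
# send everything else to cheap archive storage.
INDEXED_SEVERITIES = {"ERROR", "FATAL"}
INDEXED_SERVICES = {"payments", "auth"}

def route(record: dict) -> str:
    """Return 'index' for searchable logs, 'archive' for the rest."""
    if record.get("severity") in INDEXED_SEVERITIES:
        return "index"
    if record.get("service") in INDEXED_SERVICES:
        return "index"
    return "archive"

print(route({"severity": "ERROR", "service": "cart"}))  # index
print(route({"severity": "INFO", "service": "cart"}))   # archive
```

Most platforms express the same rules declaratively (Datadog exclusion filters, Loki per-stream config); the logic is the same either way.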

Structured Logging for Storage Efficiency

Typical savings: 10-20%

Structured logs (JSON format) compress 40-60% better than unstructured text logs because repeated keys and predictable value formats enable more efficient compression algorithms. Converting from unstructured to structured logging reduces storage costs directly and enables more efficient field-level indexing instead of full-text indexing. Additionally, structured logs enable precise field-level queries that are faster and cheaper to execute than full-text searches across unstructured log data. The investment in structured logging pays off in both reduced storage costs and faster incident investigation.
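A minimal JSON formatter using the stdlib, as a sketch (the field names are our choice, not a standard schema):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line: repeated keys and predictable
    value shapes give compressors and field-level indexes something to grip."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger("orders").addHandler(handler)
logging.getLogger("orders").error("payment declined")
```

Once every line is valid JSON, platform-side field extraction becomes trivial and queries can target `level` or `logger` directly instead of full-text matching.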

Retention Tiering

Typical savings: 15-30%

Implement three retention tiers: hot (7-15 days indexed, fully searchable, most expensive), warm (30-90 days archived with on-demand reindexing, moderate cost), and cold (12+ months on S3/GCS, cheapest possible storage, reindexable for compliance investigations). Most teams set a single retention period for all logs, which is both too long for data they never query and not long enough for compliance. Tiering reduces the amount of data in expensive indexed storage while maintaining longer retention on cheap object storage for compliance requirements.
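The three tiers as a simple lookup (the 15/90-day boundaries follow the ranges above; the exact cut-offs are yours to set):

```python
from datetime import timedelta

def tier(age: timedelta) -> str:
    """Map a log's age to its storage tier."""
    if age <= timedelta(days=15):
        return "hot"    # indexed, fully searchable, most expensive
    if age <= timedelta(days=90):
        return "warm"   # archived, reindex on demand
    return "cold"       # object storage, compliance retention only

print(tier(timedelta(days=3)), tier(timedelta(days=40)), tier(timedelta(days=400)))
```

In practice the transitions are handled by platform lifecycle policies (Elastic ILM, S3 lifecycle rules) rather than application code, but the policy they encode is exactly this function.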

Self-Hosted ELK vs Managed Log Platforms

The Elastic Stack (Elasticsearch, Logstash, Kibana) has been the default self-hosted log management solution for over a decade. Running ELK yourself eliminates vendor licensing costs but requires significant infrastructure and engineering investment. At 100GB/day log ingest, a production-ready ELK cluster requires 6-12 Elasticsearch nodes (8 vCPU, 32GB RAM, 1TB SSD each), 2-3 Logstash/Ingest nodes, and a Kibana instance. Total infrastructure cost: $2,000-5,000/month on AWS or GCP.

Engineering time for ELK maintenance is substantial: index management, shard balancing, capacity planning, upgrade cycles, and query performance tuning require 0.25-0.5 FTE of a senior platform engineer. At $12,500/month loaded cost per engineer, that is $3,125-6,250/month in engineering time. Total self-hosted ELK TCO at 100GB/day: $5,125-11,250/month, which is competitive with but not necessarily cheaper than managed alternatives like Grafana Cloud Loki ($1,500/month) or Elastic Cloud ($360/month for ingestion).
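The TCO arithmetic above, made explicit (the infrastructure and loaded-cost figures are this article's estimates, not universal constants):

```python
def self_hosted_tco(infra_per_month, engineer_fte, loaded_cost=12_500):
    """Self-hosted monthly TCO = infrastructure + engineering time."""
    return infra_per_month + engineer_fte * loaded_cost

print(self_hosted_tco(2_000, 0.25))  # low end:  $5,125/month
print(self_hosted_tco(5_000, 0.50))  # high end: $11,250/month
```

The engineering term dominates at the low end, which is why self-hosting rarely pays off for small teams even when the hardware looks cheap.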

The self-hosted option makes financial sense primarily for organisations with very high log volumes (500GB+ per day) where per-GB vendor pricing becomes prohibitive, and for organisations with dedicated platform engineering teams that have the capacity and expertise to operate Elasticsearch at scale. For teams without Elasticsearch expertise, the learning curve is steep and the operational complexity is significant. Managed alternatives like Grafana Cloud Loki offer a compelling middle ground: lower cost than Datadog or Splunk, no infrastructure to manage, and query performance that is acceptable for most use cases even if not as fast as Elasticsearch for complex aggregations.

Related Resources

Full Vendor Comparison: all monitoring products compared

Datadog Pricing: dual-charge log model explained

Hidden Costs Guide: log indexing, the biggest hidden cost

Cost Reduction Strategies: log sampling and filtering tactics

Cost Calculator: model your log costs per vendor

Open Source vs Paid: self-hosted ELK TCO analysis

Frequently Asked Questions

How much does log management cost?

Log management costs range from $0.10 to $150+ per GB depending on vendor and whether you account for ingestion, indexing, and archiving separately. At 100GB/day with 50% of logs indexed, typical monthly costs are: Elastic Cloud $360, New Relic $870 (after the 100GB/mo free allowance), Grafana Loki $1,500, Splunk Cloud $6,000, Sumo Logic $7,500, and Datadog $7,950 (ingestion plus indexing for 50% of logs). Self-hosted ELK costs $2,000-5,000/month in infrastructure plus engineering time. The cheapest managed option for pure log management is typically Elastic Cloud or Grafana Cloud Loki, while the cheapest option overall is self-hosted Loki or ELK if you have the engineering capacity.

Why is Splunk so expensive?

Splunk is expensive because its pricing model was designed for an era when log volumes were much lower and Splunk was primarily a security and compliance tool where the cost was justified by regulatory requirements. Splunk Cloud charges approximately $2.00 per GB of daily indexed volume, which at 100GB/day translates to $6,000/month. Splunk has introduced workload-based pricing and various cost optimisation features (SmartStore, Dynamic Data Self-Storage) to address cost concerns, but its base pricing remains 4-20x higher than alternatives like Elastic or Grafana Loki for the same log volume. Many Splunk customers stay because of switching costs: years of saved searches, dashboards, alerts, and team expertise that would need to be rebuilt on a new platform. If you are evaluating a new log management deployment, Splunk is rarely the cost-optimal choice unless you have specific compliance requirements that mandate its features.

What is the cheapest log management solution?

The cheapest managed log management solution is either Grafana Cloud Loki ($0.50/GB) or Elastic Cloud (~$0.12/GB) depending on your query patterns and feature requirements. Loki uses a label-based indexing approach that is cheaper to operate but provides less granular full-text search than Elasticsearch. If you need powerful full-text search with complex aggregations, Elastic is the better value. If you primarily query logs by known label values (service name, severity level, timestamp range), Loki is significantly cheaper. The cheapest overall option is self-hosted Loki, which eliminates per-GB vendor charges entirely, costing only the infrastructure to run it (approximately $500-1,500/month for 100GB/day). New Relic is also extremely cost-effective for low to moderate log volumes because of the 100GB/month free ingest allowance.