I wonder how much that no-expense-spared, money-is-no-object attitude to buying SaaS impacts an engineer's ability to make sensible decisions around infra and architecture. Coinbase might have been fine blowing 65 mil, but take that approach to a new startup and you could trivially eat up a significant amount of runway with it.
I won’t single out Datadog on this because the exact same thing happens with cloud spend, and it’s very literally burning money.
viccis 7 hours ago [-]
>I wonder how much that no-expense-spared, money-is-no-object attitude to buying SaaS impacts an engineer's ability to make sensible decisions around infra and architecture
I saw this a lot at a previous company. Being able to just "have more Lambdas scale up to handle it" got some very mediocre engineers past challenges they encountered. But it did so at the cost of wasting VAST amounts of money and saddling themselves with tech debt that completely hobbled the company's ability to scale.
It was very frustrating to be too junior to be able to change minds. Even basic things like "I know it worked for you with old on-prem NFS designs, but we shouldn't be storing our data in 100KB files in S3 and firing off thousands of Lambda invocations to process workloads; we should be storing it in 100MB files and using industry-leading ETL frameworks on it." They were old-school guys who hadn't adjusted to best practices for object storage and modern large-scale data loads (this was a 1M-events-per-second system), and so the company never really succeeded despite thousands of customers and loads of revenue.
I consider cost awareness and profiling to be essential skills for any engineer working in cloud-style environments, but it's especially important that a staff engineer, or someone in a similar position, have this skill set and be ready to grill people who come up with wasteful solutions.
swyx 8 hours ago [-]
the visible cost of burning runway on a bill is very often far less than the invisible cost of burning engineer time rebuilding undifferentiated heavy lifting rather than working on product/customer needs
wavemode 41 minutes ago [-]
I wouldn't really say "very often". Occasionally, perhaps.
Even from a pure zero-sum mathematical perspective, it can make sense to invest as much as 2 or 3 months of engineer time on cloud cost-saving measures. If the engineer is making $200K, that's a $30,000-$50,000 investment. When you see the eye-watering cloud bills many startups have, you realize that investment is peanuts compared to the potential savings over the next several years.
And then you also have to keep in mind that these things are usually not actually zero-sum. The engineer could be new, and working on the efficiency project helps them onboard to your stack. It could be the case that customers are complaining (or could start complaining in the future) about how slow your product is, so you actually improve the product by improving the infrastructure. Or it could just be the very common case that there isn't actually a higher-value thing for that engineer to be working on at that time.
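To make the payback arithmetic concrete, here is the back-of-envelope version in code (the salary figure is from above; the bill size and savings rate are purely illustrative assumptions):

    # Back-of-envelope payback math. Salary comes from the comment above;
    # the cloud bill and savings rate are illustrative assumptions only.
    engineer_salary = 200_000
    months_invested = 3
    cost = engineer_salary / 12 * months_invested           # ~$50,000
    annual_cloud_bill = 500_000                             # assumed "eye-watering" bill
    savings_rate = 0.20                                     # assumed 20% reduction
    annual_savings = annual_cloud_bill * savings_rate       # $100,000/year
    print(f"payback in {cost / annual_savings:.1f} years")  # ~0.5 years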
QuinnyPig 2 hours ago [-]
This is very well stated.
pphysch 7 hours ago [-]
Most of the complexity in observability is client-side.
It is not hard to spin up Grafana and VictoriaMetrics (and now VictoriaLogs) and keep them running. It is not hard to build a Grafana dashboard that correlates data across both metrics and logs sources, and the alerting functionality is pretty good now.
The "heavy lift" is instrumenting your applications and infrastructure to provide valuable metrics and logs without exceeding a performance budget. I'm skeptical that Datadog actually does much of that heavy lifting, or that it's actually worth the money. You can probably cut costs 10x, with the same or better outcomes, by paying for managed Grafana + managed DBs and a couple of FTEs as observability experts.
lerchmo 7 hours ago [-]
You could hire 100 people to manage your timeseries data and save 70%
9283409232 8 hours ago [-]
People say this, but I wonder about it from time to time. I don't think anyone is asking you to rebuild Datadog from scratch for your company, but surely it's worth migrating to something less expensive, even if it takes a bit of elbow grease.
closeparen 6 hours ago [-]
Assuming there's nothing else you could do with that elbow grease that would create more value than the SaaS bill costs.
9283409232 3 hours ago [-]
Value is not a hard science. I've seen people shelve tech debt in favor of working on a feature that no one ends up using.
nemothekid 3 hours ago [-]
1. Leadership doesn’t want to burn engineer cycles on undifferentiated features.
2. Management doesn’t get recognized for working on undifferentiated features.
3. Engineers working on undifferentiated features aren’t recognized when looking for new jobs.
Saving money “makes” sense but getting people to actually prioritize it is hard.
closeparen 6 hours ago [-]
That's the point of usage-based pricing: it's cheap to adopt when you're small.
JohnMakin 6 hours ago [-]
> Coinbase might have been fine blowing 65 mil but take that approach to a new startup and you could trivially eat up a significant amount of runway with it.
Most startups are not going to have anywhere near the scale to generate anything approaching this bill.
> I won’t single out Datadog on this because the exact same thing happens with cloud spend, and it’s very literally burning money.
Unless you're in the business of deploying and maintaining production-ready datacenters at scale, it very literally isn't.
willejs 6 hours ago [-]
I have run ELK, Grafana + Prometheus, Grafana + Thanos/Cortex, New Relic, and all of the more traditional products for monitoring/observability. More recently, in the last few years, I have been running full observability stacks via either the Grafana LGTM stack or Datadog, at reasonable scale and complexity. Ultimately you want one tool that can alert you off a metric, present you some traces, and drill down into logs, all the way down the stack.
I have found Datadog to be by far the best developer experience from the get-go; the way it glues its mostly decent products together is unparalleled compared to the alternatives (Grafana Cloud/LGTM). I usually say that if you're at a small-to-medium-scale business it just makes sense, IF you understand the product and configure it correctly, which is reasonably easy.
The seamless integration between tracing, logging, and metrics in the platform, which you can then easily combine with alerts, is great. However, it's easy to misconfigure it and spend a lot of money on seemingly nothing. If you do not implement tracing and structured logs (at the right volume and level) with trace/span IDs etc. all the way through your services, it's hard to see the value, and it seems expensive. It requires some good knowledge and configuration of the product to make it pay off.
The rest of the product features are generally good; for example, their security suite is a good entry point to cloud security monitoring and SIEM too.
However, when you get to a certain scale, the cost of APM and infrastructure hosts in Datadog can become somewhat prohibitive. Also, Datadog's custom metrics pricing is somewhat expensive, and its query language capabilities do not quite match the power of PromQL, which you start to find yourself needing to debug issues. At that point, the self-hosted LGTM stack starts to make sense; however, it involves a lot more education for end users, in both integration (a little less now that OTel is popular) and querying/building dashboards etc., but also running it yourself. The Grafana Cloud platform is more attractive on that front, though.
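For what it's worth, the "trace/span IDs all the way through" pattern is roughly this (a minimal sketch with opentelemetry-sdk; the console exporter and log field names are just for illustration):

    # Minimal sketch of stamping trace/span IDs onto structured logs so the
    # backend can join logs to traces. Exporter and field names are illustrative.
    import json

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        SimpleSpanProcessor(ConsoleSpanExporter())
    )
    tracer = trace.get_tracer(__name__)

    def log_with_trace(message: str) -> None:
        ctx = trace.get_current_span().get_span_context()
        print(json.dumps({
            "message": message,
            "trace_id": format(ctx.trace_id, "032x"),  # hex IDs match what the
            "span_id": format(ctx.span_id, "016x"),    # tracing backend displays
        }))

    with tracer.start_as_current_span("checkout"):
        log_with_trace("charging card")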
SOLAR_FIELDS 5 hours ago [-]
My experience mirrors yours wrt Datadog. It's incredible value at low scale, you get a full robust system with great devex for pennies. Once you hit that tipping point though, you are locked in pretty hardcore. Datadog snakes its way far into your codebase, with all the custom tracing and stuff like that. Migrating off of it is a very expensive endeavor, which is probably one of the reasons why they are such a money printing operation.
mbesto 5 hours ago [-]
I think "medium scale" is probably more appropriate. For a $3M~$5M revenue SaaS you're still paying $50k+/year. That's not nothing for a small owner or PE backed SaaS company that is focused on profits/EBITDA.
willejs 5 hours ago [-]
Yeah, the secret sauce of the DD libs was/is addictive for sure! I think it's perhaps better now that you can just use OTel for custom traces and OTel contrib libs for auto-instrumentation, and send that to the DD agent? I have not yet tried it because I suspected labels and other things might be named differently than in the DD auto-instrumentation/contrib packages, but I don't think the gap is as big now?
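Something like this is the setup I mean (a hedged sketch: it assumes the Datadog agent has OTLP ingest enabled, with 4317 as the conventional OTLP/gRPC port):

    # Hedged sketch: exporting OTel spans over OTLP to a locally running Datadog
    # agent. Assumes the agent's OTLP ingest is enabled; the port is conventional.
    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
    )
    trace.set_tracer_provider(provider)

    with trace.get_tracer(__name__).start_as_current_span("otel-to-dd-agent"):
        pass  # spans are batched and shipped to the agent in the background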
decimalenough 7 hours ago [-]
> Assume that Datadog cuts the number of outages by half, by preventing them with early monitoring. That would mean that without Datadog, we’d look at 24 hours’ worth of downtime, not 12. Let’s also assume that using Datadog results in mitigating outages 50% faster than without - thanks to being able to connect health metrics with logs, debug faster, pinpoint the root cause and mitigate faster. In that case, without Datadog, we could be looking at 36 hours worth of total downtime, versus the 12 hours with Datadog. To put it in numbers: the company would make around $9M in revenue it would otherwise lose. Now that $10M/year fee practically pays for itself!
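Running the quoted figures, just to make the implied rate explicit (pure arithmetic on the article's own numbers):

    # Arithmetic check on the quoted scenario; every input comes from the quote.
    hours_with_dd = 12
    hours_without = 36
    revenue_protected = 9_000_000
    hours_avoided = hours_without - hours_with_dd   # 24 hours
    per_hour = revenue_protected / hours_avoided    # $375,000 per hour of downtime
    print(f"implied revenue at risk: ${per_hour:,.0f}/hour")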
Those are some pretty heroic assumptions. In particular, they assume the only options are Datadog or nothing, when there are far cheaper alternatives like the Prometheus/Grafana/Clickhouse stack mentioned in the article itself.
passivepinetree 7 hours ago [-]
Another assumption that bothers me here is that the $9M in revenue would be completely lost during an outage. I imagine many customers would simply wait until the outage was resolved before performing their intended transactions, meaning far less than $9M would be lost.
calt 6 hours ago [-]
On the other hand, customers can become frustrated at being unable to trade when they need to during an outage, and go to a competitor.
vjvjvjvjghv 5 hours ago [-]
I bet they would get much better results if they spent a fraction of the money on better understanding their systems and designing them better, rather than spending millions on Datadog.
secondcoming 6 hours ago [-]
We are moving from Datadog to Prometheus/Grafana and it's really not all a bed of roses. You'll need monitoring on your monitoring.
asnyder 8 hours ago [-]
There's also https://openobserve.ai, which, while not as stable as Grafana/Prometheus/Clickhouse, feels a bit easier to set up and manage. Though it has a ways to go, it does the basics and more without issue.
Crazy that they spent so much on observability. Even with DataDog they could've optimized that spend. DataDog does lots of bad things with billing where, by default (especially with on-demand instances), you get charged significantly more than you should, as they have (had?) pretty deficient counting of instance hours and instances.
For example, rather than running the agent (which counts as an instance even if it's only on for a minute), you can send the logs, metrics, etc. directly to their ingestion endpoints and not have those instances counted towards usage beyond log and metric volume.
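Roughly like this (a hedged sketch: the v2 logs intake endpoint and headers are my understanding of the API, and the field values are illustrative; check the current docs before relying on it):

    # Hedged sketch: shipping logs straight to Datadog's HTTP intake instead of
    # running the agent. Endpoint/headers per my understanding of the v2 logs API.
    import os

    import requests

    def ship_logs(events: list[dict]) -> None:
        resp = requests.post(
            "https://http-intake.logs.datadoghq.com/api/v2/logs",
            headers={
                "DD-API-KEY": os.environ["DD_API_KEY"],
                "Content-Type": "application/json",
            },
            json=events,  # the v2 intake accepts a JSON array of log events
        )
        resp.raise_for_status()

    ship_logs([{
        "ddsource": "batch-job",         # illustrative values
        "service": "example-service",
        "message": "job finished in 42s",
    }])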
Maybe at that level they don't even get into actual by-usage billing anymore, and they just negotiate arbitrary amounts for some absurd quota of use.
wenbin 5 hours ago [-]
Earlier this year, we at Listen Notes switched to Better Stack [0], replacing both Datadog and PagerDuty, and we couldn’t be happier :) Datadog offers a rich set of features, and as a public company, it makes sense for them to keep expanding their product and pushing larger contracts. But as a small team, we don't have a strong demand for constant new features. By switching to Better Stack, we were able to cut our monitoring and alerting costs by 90%, with basically the same things that we used from Datadog previously.
[0] https://www.listennotes.com/blog/use-betterstack-to-replace-...
> we really work with customers to restructure their contracts
Does anyone have such an experience with Datadog? A few million wasn't enough to get them to talk about anything; we always paid list price, and there was no negotiating either when they restructured their pricing.
evulhotdog 1 hours ago [-]
They were completely unwilling to negotiate with us at all, and it forced our hand to go the open source route so we don't get locked in again.
arccy 5 hours ago [-]
you've got bad negotiators... getting at least 10% off list price should be the baseline, even on less than $1m/year
everfrustrated 8 hours ago [-]
>Originally published on 11 May 2023
cybice 8 hours ago [-]
An article that's basically an ad for Datadog: Pay us a ton of money - it’s still cheaper in the long run.
delichon 8 hours ago [-]
> For observability, Coinbase spun up a dedicated team with the goal of moving off of Datadog, and onto a Grafana/Prometheus/Clickhouse stack.
We recently did the same, and our Datadog bill was only five figures. We're finding the new stack to be not a poor man's anything, but more flexible, complete, and manageable than yet another SaaS. With just a little extra learning curve, observability is a domain where open source trounces proprietary, and not just if you don't have money to set on fire.
am i misunderstanding, or is the author saying it's better to spend $10m than $9m?