The Production-First Mindset

Slack's Suman Karumuri - The Journey To Being A Founder Of Modern Tracing


Rookout CTO Liran Haimovitch sits down with Suman Karumuri, Senior Staff Software Engineer at Slack. They discuss the tools and techniques that make up modern observability, living through a major incident at Slack and what was done to nurse Slack back to health, Suman's personal impact on the realm of observability, and following the questions to get answers, fast.


Episode 15. November 07, 2021 • 32:49

SPEAKERS
Liran Haimovitch, Suman Karumuri


Liran Haimovitch  00:02

Welcome to The Production-First Mindset, a podcast where we discuss the world of building code from the lab all the way to production. We explore the tactics, methodologies, and metrics used to drive real customer value by the engineering leaders actually doing it. I'm your host, Liran Haimovitch, CTO and Co-Founder of Rookout.


Liran Haimovitch  00:31

Today, we're going to be discussing the tools and techniques that make up modern Observability. We are joined today by our guest Suman Karumuri, a Senior Staff Software Engineer at Slack. Suman is one of the most accomplished engineers in Observability, and has worked on some of the space's most iconic projects, such as Zipkin and Pintrace. Thank you for joining us, and welcome to the show.


Suman Karumuri  00:51

Thanks for having me on the show Liran.


Liran Haimovitch  00:53

Suman, can you tell me a little bit about yourself?


Suman Karumuri  00:56

My name is Suman Karumuri. Currently, I work as a Senior Staff Software Engineer at Slack. I'm also the Observability lead at Slack. Recently, we have been working on a project called Slack Trace, which is a new end-to-end tracing platform. Previously, I worked at quite a few companies on Observability. I worked at Pinterest on an end-to-end tracing system called Pintrace, and I also worked on an in-memory metric store called Yuvi at Pinterest. At Twitter, I worked on an in-house log search platform called LogLens, and I was also a tech lead for Zipkin.


Liran Haimovitch  01:32

Speaking of Observability, we all know that Slack recently got acquired by Salesforce. But a while back, just after the IPO, Slack had a pretty big incident. What was it like?


Suman Karumuri  01:46

So yeah, that was a scary time. We had just had our IPO, and we had a major incident where all of Slack was down. It was on the front page of TechCrunch. It was a pretty scary time. The system went down because Slack relies on an internal system called Job Queue, an async job processing system, which in turn relies on Kafka, and Kafka went down. As a result, Job Queue went down and Slack went down. So this is a classic case of a dependency failure triggering a major failure upstream, taking all of Slack down. It took about three days, with 20 to 30 engineers working on the incident, to nurse Slack back to health. It was one of the major incidents of my career, and it is also one of the incidents that had a material impact on the stock price of Slack afterwards. So from this, you can see how important reliability is for companies these days.


Liran Haimovitch  02:47

Exactly. And your role in that area is mostly about Observability, right? Helping the company understand how things are working and where they could be better.


Suman Karumuri  02:56

Yeah, so at Slack we are in this unique position where we provide Observability, and, for historical reasons, the Observability team also maintains Kafka, because we happen to be the largest consumer of Kafka at Slack. So we were related to that incident in both ways: we were helping with the incident management side of things, and at the same time we were also, in some sense, the owners of the incident, because we own Kafka.


Liran Haimovitch  03:27

And are you mostly working on the Observability side, or kind of both sides of things?


Suman Karumuri  03:33

I mostly work on the Observability side these days. We recently spun out the Kafka team into its own team.


Liran Haimovitch  03:39

Sounds like a good decision. It's an important part of Slack.


Suman Karumuri  03:43

I agree. 


Liran Haimovitch  03:46

So, kind of, I'm not sure everybody knows your name, because you've been a bit behind the scenes. But you've actually made a big impact on the space of Observability over the past few years. Can you share with us a bit about what you've been doing?


Suman Karumuri  03:59

Yeah, most of the things I've been working on - for one reason or the other - either ended up in open source and I didn't end up continuing to contribute to them, or they didn't end up in open source at all. But I've been working on large-scale Observability systems for a while now. At this point, I think I have built a log search system, a metric system, and a distributed tracing system, often multiple times from scratch. And I also have experience operating a lot of these systems at scale. So yeah, I think I was early into this space. I'm actually thankful to have been in the right place most of the time.


Liran Haimovitch  04:36

Yeah, definitely. Everybody today is talking about the pillars of Observability. I would love to hear your take on that - what the pillars are, and how they tie into real-world use cases from your experience.


Suman Karumuri  04:50

Yeah, that's a great question. I think the three pillars of Observability are three different ways of looking at what's happening inside a system. So for example, metrics provide you an aggregated view, and logs provide you a very fine-grained view of what's happening in your systems. And then traces provide you a request- or flow-centric view of your system. Depending on the questions you want to ask, you look at different metrics and different systems to understand what's going on in your system. I think today we have kind of tightly coupled the questions we ask with the data we gather. But overall, I think the pillars of Observability still make sense, because they give you three different perspectives on the system. In the end, all three pillars are about capturing what's happening in the systems, and then understanding what's going on in different ways.


Liran Haimovitch 05:47

I recently ran a poll on LinkedIn asking which of the three pillars is most important - or, the way I phrased it, which one would people hate to do without? Almost 80% of the people said that logging is the most important, and the one that they would have the most trouble doing without. And you've mentioned you've built more than one logging system. What's your experience? Do you agree that logging is the most foundational of the pillars, and what do you recommend about that?


Suman Karumuri  06:18

Yeah, actually, I think that's very accurate in my experience, too. I think logging is one of the most fundamental ways of observing systems, because historically, every piece of software that we have ever used, used logs to convey what it's doing. So I think that's one reason why logging is one of the fundamental ways. The other reason is that printf debugging is what every programmer is taught, right? Nobody is taught, hey, think in traces, think in metrics, or anything like that. So printf debugging is what everyone is taught, and I think that means logging is here to stay. The other reason I think logging is here to stay is that the other ways of observing systems are, in some sense, limiting. For example, metrics let you just count, but if you have high cardinality metrics, or something like that, then metrics are not the best solution, and the most obvious solution to fall back on is logs. Similarly, traces are good, but they have limitations too. The simplest example where traces won't work is, let's say, you are starting a server when there are no requests, right? You need to say, hey, I'm starting the server, I'm loading this config value, or something like that. That functionality is not part of any trace, because there is no request yet, and you need some way to output that information. If you're doing deploys and stuff like that, that information is extremely valuable. Those are cases where you can't use traces - or, I mean, you can use traces, but you're bending over backwards to use traces at that point. In cases like that, logs are still valuable. So your survey is right: logs are still needed, because the other two approaches are limiting, in some sense.


Liran Haimovitch  08:10

Now, if logs are so important, and you have so much experience, what would you recommend to people when they're thinking about logging? What best practices should they follow?


Suman Karumuri  08:21

When it comes to handling logs, I think trying to impart some structure to your logs is important. But at the end of the day, most of the time, the way you access logs is when everything else fails - you need that needle-in-a-haystack search over your logs. So I think understanding how these logs will be used, and in what context they will be used, is actually helpful. The other thing to keep in mind with logging is, when you're actually producing these logs for your systems, understanding their use also helps reduce how much you log, because logging systems, at the end of the day, are very expensive, and they're hard to search. Especially during incidents, if you're thoughtful about how you are logging, it'll actually help you understand and resolve incidents faster.
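
As a rough illustration of imparting structure and context to logs, here is a minimal sketch using Python's standard logging module; the logger name and the request_id/user_id fields are made-up examples, not anything Slack-specific:

    import json
    import logging

    class JsonFormatter(logging.Formatter):
        """Render each record as a single JSON line so fields stay searchable."""
        def format(self, record):
            payload = {
                "ts": self.formatTime(record),
                "level": record.levelname,
                "msg": record.getMessage(),
                # Context attached via the `extra=` argument below.
                "request_id": getattr(record, "request_id", None),
                "user_id": getattr(record, "user_id", None),
            }
            return json.dumps(payload)

    logger = logging.getLogger("checkout")
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Hypothetical request handler: log once, with context, instead of many
    # free-form lines that are expensive to store and hard to search.
    logger.info("request completed", extra={"request_id": "req-123", "user_id": 42})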


Liran Haimovitch  09:12

Yeah, I mean, logging smart is so important. And making sure the context is there is going to make logs so much more valuable.


Suman Karumuri  09:19

Yeah, I agree. That's a better way to put it. Having the right context is also extremely important.


Liran Haimovitch  09:25

Yeah. Actually, I think the need for more context was kind of how tracing was born, right? Because you're one of the co-authors of Zipkin, and that was literally one of the first distributed tracing systems. So how did that happen? How did you end up writing it?


Suman Karumuri  09:40

So when I joined Twitter, I was working in ... initially, and then over a period of time I got attracted to Observability. Initially we started working on log search, and when we were using these logs, one of the things that frequently came up is that we always wanted to understand who called my service - like, which request is causing this log, or something like that. That's how I was exposed to distributed tracing. Then I heard about this cool project at Twitter called Zipkin, which was already around by the time I started working on it. It was kind of underused at the time, partly because Zipkin back then used Cassandra as its back-end, and that system did not scale very well. So the way I got involved in Zipkin is that we were doing log search, there were these interesting questions about log search, and we built a new log search back-end. I got involved in Zipkin because we wanted to move Zipkin's back-end from Cassandra to the log search back-end, and that's how I got started with Zipkin.


Liran Haimovitch  10:49

So how did that end up? I mean, Zipkin, at some point went open source, how did that happen?


Suman Karumuri  10:55

Around the same time, there was a large push to open source as many things as possible, and Zipkin went open source. That was an exciting time - there was no other distributed tracing system out there, so this was the first one. And unlike now, where people kind of know what Zipkin is, back then we had to explain what distributed tracing is and what it's for. Yeah, those were definitely interesting times.


Liran Haimovitch  11:19

And then you got to take a big part in the forming of OpenTracing. Do you want to tell us a little bit about that?


Suman Karumuri  11:25

Yeah. So, OpenTracing and Zipkin - I was talking about how you had to tell everyone what distributed tracing was, right? OpenTracing was actually born out of that need. When we open sourced Zipkin, there were primarily two issues: none of the open-source frameworks actually had any tracing in them, and whatever tracing was there was pretty ad hoc. So OpenTracing was born out of the effort to standardize the APIs, so the frameworks themselves could use a standardized API to trace a program. And I think overall, that was a big success. Because I was involved with Zipkin and the back-end at Twitter, that's how I got involved in the OpenTracing effort. I co-authored some of the spec, and I think I am the first person to actually put OpenTracing-compatible tracing instrumentation in production, at Pinterest. While I was working on the spec, I was basically implementing Pintrace, which was Pinterest's distributed tracing system. Some of the lessons learned there made it back into the spec, and vice versa.
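
For context, the kind of vendor-neutral instrumentation API that OpenTracing standardized looks roughly like this - a minimal sketch using the (now deprecated) opentracing Python package, with a made-up operation name and tag:

    import opentracing  # reference API package; superseded by OpenTelemetry

    # Whatever tracer implementation (Zipkin, Jaeger, ...) is registered behind
    # the standard API; instrumentation code does not need to know which one.
    tracer = opentracing.global_tracer()

    def handle_request(user_id):
        # Start a span for this unit of work; the framework only talks to the
        # standardized API, never to a specific tracing back-end.
        with tracer.start_active_span("handle_request") as scope:
            scope.span.set_tag("user.id", user_id)
            # ... application logic here ...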


Liran Haimovitch  12:36

Makes sense. We're seeing that this field is moving super fast. I mean, OpenTracing has also already been deprecated somehow, and now we're in the realm of OpenTelemetry. I'm kind of wondering, where do you see the project going? What would you like to see added to OpenTelemetry?


Suman Karumuri  12:53

Yeah, OpenTracing to OpenTelemetry was a pretty interesting transition. I think one of the biggest challenges with OpenTracing is that it only defined the API for how an application should be traced. It did not define the end product, which is how a span should be laid out - there was no spec for it. As a result, over a period of time, we ended up with essentially two specs: one from Zipkin, which has its own Thrift-based span format, and one from Jaeger, which has its own Thrift-based span format. Both of those span formats were internal implementation details, which eventually became sort of pseudo specs. So OpenTelemetry was born to standardize that spec, and I think that is really good. However, I think the OpenTelemetry standard has evolved to become more and more complex and more and more specific over time. And it's also, I think, more geared towards vendor-centric solutions at this point, with not too many open-source implementations. One of the cool things I want to see is more support in open source, not just from the instrumentation perspective, but also from the back-end perspective - like a very good trace back-end for OpenTelemetry data, showing off its full potential. Not just the ability to query it using simple queries, but also the ability to do complex queries and ask complex questions about systems. For example, there is no system out there today that can answer a question like: hey, give me all the traces where the MySQL query executing as part of this trace took over 70% of the request time, or something like that. A trace has enough data to actually answer that question, but I don't think the tools exist for us to formulate those questions and get answers fast.


Liran Haimovitch  14:56

I actually heard a talk by you a couple of years ago back in San Diego at KubeCon. Back at a time when we still had physical conferences, and you actually talked a bit about that, about the unique queries you're running at Slack to kind of understand performance at scale.


Suman Karumuri  15:11

Yeah, I miss those days. I guess we all do. So, when I worked on distributed tracing at Twitter, it was a pretty mature system. But then I went and built an end-to-end tracing system at Pinterest called Pintrace, and one of the things I learned once I built that new system - it's kind of common knowledge in the industry today - is that tracing has very little ROI. Part of that has to do with the fact that trace data cannot be accessed as raw data. For example, if I give you metrics or log data, you're writing your queries directly on the raw data. Whereas when you're dealing with trace data, you're not querying the raw trace data: you have a Zipkin UI or a Jaeger UI or an X-Ray UI, or some UI on top, which is basically telling you what kind of queries you can run. I found that pretty limiting, both from a usability perspective and because it limited the number of things you could do with it. So when I went to Slack and we had the same problem, we built this new system called Slack Trace. We wanted to solve this problem by simplifying the trace span format, so you can actually query the raw span data, get insights from it, and answer questions like the ones I was talking about earlier - like, hey, give me all the traces where the MySQL query took 70% of the overall execution time, or something like that.
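
As a rough sketch of what querying raw span data enables, here is what such a filter might look like in Python over a flat list of span records; the field names (trace_id, parent_id, duration_ms, db.type) are hypothetical, not Slack Trace's actual schema:

    from collections import defaultdict

    def slow_mysql_traces(spans, threshold=0.7):
        """Return trace IDs where MySQL spans took more than `threshold`
        of the root span's duration. `spans` is a list of dicts."""
        by_trace = defaultdict(list)
        for span in spans:
            by_trace[span["trace_id"]].append(span)

        matching = []
        for trace_id, trace_spans in by_trace.items():
            root = next((s for s in trace_spans if s["parent_id"] is None), None)
            if root is None or root["duration_ms"] == 0:
                continue
            mysql_time = sum(s["duration_ms"] for s in trace_spans
                             if s.get("db.type") == "mysql")
            if mysql_time / root["duration_ms"] > threshold:
                matching.append(trace_id)
        return matching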


Liran Haimovitch  16:39

How are you using those queries at Slack?


Suman Karumuri  16:42

So, one of the interesting things is, let's say you have an incident, right? What happens during an incident is that our users are asking these questions to test out various hypotheses. During an incident, you're trying to say, hey, the overall request rate dropped - is it because of MySQL? Is it because of memcache? Is it because of Redis? Or is it because of Job Queue, our async job queue system? Or is it because of some other thing in my Hack code? You are testing these hypotheses, and a lot of time during incidents goes towards triage and identifying the root cause. Part of that process is actually formulating these complex questions and getting answers to them. Today, we ask and answer those questions using a combination of logs and metrics and dashboards, and by doing interactive queries on the data. I think by providing a more powerful API on top of spans, you can formulate these questions more eloquently, get the answers faster, and do this hypothesis testing faster. I think that's how these things are very valuable.


Liran Haimovitch  17:52

I mean, that sounds super promising. I know we've spoken about it in the past, but you have some other unique ideas about what you would like to see added to Observability - kind of rethinking the pillars, combining them.


Suman Karumuri  18:05

Yeah, so one of the things I was thinking about, actually from our experience at Slack, is that we have started merging logs and traces into the same system. Because what we have seen is that most of the logs and spans have pretty much redundant data. So it didn't make sense for us to produce the same data twice, store it, and process it, because you're not only doubling the back-end costs - you're also producing the same data twice in the application, spending CPU time in the application, and slowing down user requests, right? So I think one of the things that has been successful at Slack is that we are merging logs and traces now, because in the long term, spans per se are nothing but logs with some context, right, as you said. Merging logs and traces has provided a lot of unique capabilities for us, in addition to saving costs and things like that. The other advantage of traces we found is that typically, with logs, you can't link events across the system, and in a trace, you can only link events that are actually part of a single request flow. Whereas in our system, you can link events across different flows. For example, let's say you send someone a message in Slack and they get a notification. Today, we have regular logging and traces as part of that flow. But the act of sending a notification itself has multiple steps: we send a notification, the user sees the notification, and then they open the app to read the notification. And let's say they do something with the notification, like interact with the message. All these flows are also captured as traces. Even though they happen in different requests across the system, we still capture them as traces. This has kind of been a superpower for us, in some sense.
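
A minimal sketch of the two ideas above - spans as logs with context, and linking events across separate request flows; every field name here is illustrative, not Slack's actual format:

    import time
    import uuid

    def make_event(message, trace_id, span_id=None, parent_id=None, links=None, **fields):
        """A single record that works both as a log line and as a span:
        it carries a message plus trace context, and optional links to
        spans from other flows (e.g. the original 'message sent' trace)."""
        return {
            "ts": time.time(),
            "msg": message,
            "trace_id": trace_id,
            "span_id": span_id or uuid.uuid4().hex[:16],
            "parent_id": parent_id,
            # Links let a 'notification opened' event point back at the
            # 'message sent' trace, even though they are different requests.
            "links": links or [],
            **fields,
        }

    send_trace = uuid.uuid4().hex
    sent = make_event("message sent", trace_id=send_trace, channel="C123")

    notif_trace = uuid.uuid4().hex
    opened = make_event("notification opened", trace_id=notif_trace,
                        links=[{"trace_id": send_trace, "span_id": sent["span_id"]}])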


Liran Haimovitch  20:10

So we've covered logs, we've covered traces. But as you mentioned, you've also built some metric systems and in-memory databases. So can you tell us a little bit about the in-memory metrics storage engine you've built at Pinterest? 


Suman Karumuri  20:23

Sure. So when I was working at Pinterest, my first project was Pintrace, which was doing end-to-end distributed tracing. After that, I was focused on improving our metrics infrastructure. Pinterest at that time used OpenTSDB, which is backed by HBase, and it did not scale well - it had a lot of hotspots in our system. As a result, we were struggling operationally to keep that system up, especially when we were doing new deploys and things like that, because around new deploys we would get a lot of new time series as new instances came up. And these time series would be high cardinality from HBase's perspective, even at Pinterest's scale or Slack's scale. During these auto scaling events, you have a huge influx of metrics that look like high cardinality metrics. So we built an in-memory metrics store called Yuvi, which used a ... bitmap and Gorilla encoding to serve metrics data from memory. And this system was about 100 times faster than OpenTSDB. I think in the end, that was a success, because it showed us how we can serve metrics faster at scale.
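
For reference, the Gorilla encoding mentioned here comes from Facebook's Gorilla in-memory time series database paper; a simplified sketch of its delta-of-delta timestamp idea (not the actual Yuvi implementation) looks like this:

    def delta_of_delta(timestamps):
        """Gorilla-style timestamp compression idea: regular, evenly-spaced
        timestamps produce long runs of zero delta-of-deltas, which compress
        to almost nothing with a variable-length bit encoding."""
        if len(timestamps) < 2:
            return list(timestamps)
        out = [timestamps[0], timestamps[1] - timestamps[0]]  # first value + first delta
        prev_delta = timestamps[1] - timestamps[0]
        for prev, cur in zip(timestamps[1:], timestamps[2:]):
            delta = cur - prev
            out.append(delta - prev_delta)  # usually 0 for evenly spaced points
            prev_delta = delta
        return out

    # A metric scraped every 60 seconds compresses to mostly zeros:
    print(delta_of_delta([0, 60, 120, 180, 240, 302]))  # [0, 60, 0, 0, 0, 2]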


Liran Haimovitch  21:32

Now, so many of your projects have ended up as open source. And you've become a bit of an advocate for open source yourself. Why is that? Why are you such a big believer in open source?


Suman Karumuri  21:44

I think pretty much everything I've done in the past eight years or so ended up in open source, and I am glad that it did. Part of that is that when I was in college in India, a lot of my learning about software and technology came from open source. Working in open source, and contributing back to the community, has been a very satisfying thing for me. It's also that, no matter how much you get paid, I think the most important satisfaction in your job is when someone takes a piece of code you wrote and says, hey, I used it, and it works for me. The joy that comes out of that is what keeps me going in open source, in some sense. It's a little bit selfish, but that's one of the motivations for me.


Liran Haimovitch  22:31

Recognition is an important motivator. 


Suman Karumuri  22:33

Yeah, recognition too. 


Liran Haimovitch  22:35

We've been through your last few places of work. I mean, you've worked at Twitter, you've worked at Pinterest, you've worked at Slack. Those are some of the largest-scale companies on the internet right now. I'm wondering, what's unique about engineering at that scale? When you're working on Observability, when you're working on reliability for services of that size, how is it different than just, you know, writing a small piece of code for a small website?


Suman Karumuri  23:00

Yeah, I think running services at scale is often an underrated skill. When you are running services at scale, you need to have an operational mindset towards your services, which is a slightly different skill than just writing and shipping code. Having this operational mindset means you need to think about what the core flows of your application are, how to instrument them, how to tell when the core flows work, and what the exceptional cases are when those core flows don't work - instrumenting them, putting up a dashboard, and setting up alerts to keep an eye on them. So basically, with an operational mindset, you are classifying the known knowns versus the known unknowns versus the unknown unknowns, right? You have that spectrum, and to run and operate systems at scale, you need to instrument your systems for all three of these. You also need to keep a very close eye on your SLIs, understand when you miss them, and understand when you have a lot of on-call toil. And the most important part, I think, is that once you see on-call toil, prioritizing reducing that toil over building more features is an important part of running systems reliably.


Liran Haimovitch  24:22

Definitely. You've been in a central position, you've been running Observability or taking a key role in Observability, across the company. So you've obviously got a good picture of the scale of this stuff. I'm wondering how do you think individual teams are affected by the scale as well?


Suman Karumuri  24:41

I think individual teams are affected in a couple of ways. One is, as I said - going back to our incident at the start of this podcast - when you have a downstream issue, even though your system is not in the path of the incident, you invariably become part of the incident. That's one way you get tied into the reliability of your services. And one thing most people don't appreciate at large scale is that when some other team depends on you, even if your reliability issues are caused by your downstream dependencies, most developers will hold you accountable. While they understand, they still hold you accountable for the reliability of their service. So I think at large scale, this is how teams become responsible: as long as you're in the core flow, you become part of the reliability of the overall company.


Liran Haimovitch  25:37

Now, I know that at Slack, in order to improve reliability and to ensure everything is going smoothly, you've become big fans of rolling deployments. Can you share with us a little bit about that practice?


Suman Karumuri  25:48

Yeah. So at Slack, initially we used to do these big giant deployments where we just deployed to 100% at once. That was really cool while it lasted, but it also resulted in a lot of downtime for the service. So we moved to a gradual rollout of our software services, using a rolling deployment methodology. But going to rolling deployments actually changes how you view your system. Every time you do a rolling deployment, you have to change your operational posture quite significantly: you need to go from monitoring the entire service to doing these comparisons of the new deployment versus the old deployment. You need to tweak your alerts to make sure the new deployments are not making things significantly worse for users, and to make sure they can distinguish between old data and new data. You need to separate the logs, you need to separate your on-call rotations, and you need to change your priorities. For example, if rolling deployments are becoming harder, or if you see some errors during a rolling deployment, you want to prioritize fixing them, or at least understanding them, as part of your regular software development lifecycle. It's a significant change in mindset and in how you operate as a team, I think.
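
A minimal sketch of the new-versus-old comparison described above, flagging a rollout when the new version's error rate is significantly worse; the version labels, status codes, and threshold are made up:

    def compare_deploys(requests, old_version, new_version, max_ratio=1.5):
        """Given request records tagged with a deploy version, flag the new
        version if its error rate is significantly worse than the old one."""
        def error_rate(version):
            matched = [r for r in requests if r["version"] == version]
            if not matched:
                return 0.0
            return sum(1 for r in matched if r["status"] >= 500) / len(matched)

        old_rate = error_rate(old_version)
        new_rate = error_rate(new_version)
        # Guard against dividing by zero when the old version had no errors.
        baseline = max(old_rate, 0.001)
        return {"old": old_rate, "new": new_rate,
                "halt_rollout": new_rate / baseline > max_ratio}

    requests = [
        {"version": "v41", "status": 200}, {"version": "v41", "status": 200},
        {"version": "v42", "status": 500}, {"version": "v42", "status": 200},
    ]
    print(compare_deploys(requests, "v41", "v42"))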


Liran Haimovitch  27:09

When doing rolling deployments, how often do you decide to roll back? How big does an issue have to be for you to decide to roll back, versus how do you know that it's a small bug, or something that you can live with and continue rolling forward, while working on the fix later on?


Suman Karumuri  27:26

Yeah, that's a great question. I think there is always a fine line between rolling back and rolling forward. For us, and everywhere I've worked, most of the time we fix issues by rolling forward. Because of rolling deployments, what ends up happening is, let's say you deploy a piece of code and it's returning some small errors or exceptions. Typically, if you can fix it really quickly and roll out a patch, you would stop the deployment and roll out a new deployment with the release that fixes it. But if the issues are major, and they're contributing to ongoing SLO misses or something like that, in those cases you would actually roll back. I don't think the size of the bug is what matters; I think whether the SLA is being met or not is what should decide whether you roll back or fix forward. Because even if fixing forward takes time, you need to account for that too. And I think the SLA guarantees are what guide you towards the right decision, I think.


Liran Haimovitch  28:25

Speaking of SLAs - in the past, SLAs were, you know, those legal documents nobody would ever read. And while Google did mention error budgets in the SRE handbook, it was still far off. Today, over the past couple of years, we're seeing more and more talk about SLAs and error budgets. How do you treat that at Slack?


Suman Karumuri  28:44

So, I should probably have been more careful and used the word SLOs instead of SLAs, because SLAs have a legal connotation to them. But I think as a service owner, you're responsible for signing up for an SLO for your service and providing that. And the reason I use the word SLOs is that there is a lot of mixed context around these terms - to different people, they mean different things. But to me, SLOs basically mean you have a user metric which somehow captures your user experience, and that's the metric you're focused on, because you have to be focused on providing a good user experience for your users. If your SLOs capture that, great; if not, you need to modify your SLOs so that they do.


Liran Haimovitch  29:31

You've had a long, fruitful career, with lots of projects under your belt. What's the single bug from your career that you remember the most?


Suman Karumuri  29:40

One of the bugs I remember is a Pintrace bug. At Pinterest, we had built Pintrace and it was deployed, and then we were making a small change in the instrumentation: we changed the Python library we used from one library to another. When we made that change, the threading semantics of the Python library changed. What happened was, we were marking a request as traced in the thread context, and the request would be traced. But what the new library didn't do was clear the thread context after the request was traced - it was keeping that thread context around. So what ended up happening is that when the next request came through, even though it was not sampled, it got sampled by this Python library. And because we were only sampling 1% of requests, not all thread contexts got marked at the same time, but over a period of 40 minutes, all the thread contexts in the thread pool got marked as sampled. After 40 minutes, we were tracing 100% of the requests at Pinterest. And that resulted in massive downtime - well, I won't call it massive, but it resulted in some downtime for the systems, and it basically answered the question: what happens if you trace 100% of your requests?
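
A toy reproduction of this class of bug - a sampling decision stored in thread-local state that is never cleared, so a recycled worker thread keeps tracing every later request; this is purely illustrative, not the actual Pinterest code:

    import threading

    _ctx = threading.local()

    def handle_request(request_id, sampled):
        # Buggy version: the flag is only ever set, never cleared, so once a
        # pooled worker thread sees a sampled request it traces everything after.
        if sampled:
            _ctx.sampled = True
        if getattr(_ctx, "sampled", False):
            print(f"tracing request {request_id}")
        # The fix: reset the flag at the end of every request.
        # _ctx.sampled = False

    # Same worker thread handling three requests; only the first is sampled,
    # but the stale thread-local flag causes all three to be traced.
    for rid, sampled in [(1, True), (2, False), (3, False)]:
        handle_request(rid, sampled)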


Liran Haimovitch  31:09

I think sampling is one of the biggest challenges today in tracing, because if you're capturing 100%, as you mentioned, it's too much - we can't afford it. But if you're not sampling 100%, then you can't rely on tracing as much as you can on logs, because you're not going to be sure that the transaction you care about is going to be there.


Suman Karumuri  31:28

Yeah, I agree. I think sampling is one of the most challenging aspects of tracing. And I think this is where the other thing I was talking about, merging logs and traces, is helpful, because in the logging world there is no sampling - you're effectively capturing 100% of the requests, right? And in the tracing world, you are going to great lengths to think about what you need to sample. By merging tracing and logs, you get 100% sampling, but also more context. That's how I think about it.


Liran Haimovitch  32:00

Makes sense. Thank you Suman. It was great having you on the show. It's been a pleasure learning about Observability and everything that has happened in that space over the past few years.


Suman Karumuri  32:10

Thanks, Liran. Thanks for having me on the show. It was great chatting with you.


Liran Haimovitch  32:19

So that's a wrap on another episode of The Production-First Mindset. Please remember to like, subscribe, and share this podcast. Let us know what you think of the show and reach out to me on LinkedIn or Twitter at @productionfirst. Thanks again for joining us.