Cesar Lugo, Software Engineer and founding member of the Engineering Intelligence team at Typeform, joins us to talk about the power of engineering metrics and how data can help influence how organizations behave.
You can have all the data in the world, but if you don't create a shared context in the organization of what's going on, where, and in what direction everybody needs to move, then you have nothing. It's about changing human behavior.
Engineering metrics cannot be a tool only for managers or directors. Every engineer needs to have access to them. They need to understand how things are calculated and why. They need to see the delta. Because this really gives credibility to what you're measuring and why.
You have so much capacity to solve problems that you would like to solve all of the problems. But if you start with the actual problems that need solving right now, that can lead to asking yourself the right questions and providing the right metrics to solve these problems, versus just trying to compare yourself with some industry benchmarks, which are useful, but might not be the actual next pressing thing that you need to solve.
DORA Metrics are standard. But I think the industry's slowly coming to realize that that's not enough to guide an engineering organization, because they tell us about common characteristics of highly performing organizations, but not how they made their journey to get there.
I've called DORA the BMI of engineering metrics. It's great at a census level, but it's not great at the individual level. You can't just tell someone, "be less fat," you actually have to show them a way to do that.
Pull request size, time to review, cycle time… those are our bread and butter, just because of the frequency at which they happen. If something happens in your organization tens, hundreds, or thousands of times a day, you need to look at it.
Tech debt seems to be one of Dev Twitter’s favorite topics these days. It’s definitely a recurring one on the podcast. Shouldn’t developers be focusing on building new things? Well, yes, but also, no.
Here are two things we recommend reading:
1st 👉 This Twitter exchange between Charity Majors and Sophie Weston, where Charity talks about rewarding devs with the Tiara of Tech Debt (love it 👑) for their work on fixing, refactoring, automating, etc.
2nd 👉 The Medium article “How To Explain Technical Debt To Executives” by Sam McAffee
Here’s the bottom line:
“‘Technical Debt’ is an emergent condition present in any digitally-enabled business enterprise that has now discovered its technology is too rigid to accommodate new business objectives that have come to light. It is too rigid because of past system design and architecture decisions that were made, usually with the best intentions, based on the engineer’s knowledge of business objectives that were available at the time.”
Got it? Ok.
When speaking about using data to change human behavior inside organizations, César Lugo mentions Typeform’s Process for Continuous Improvement. It goes as follows:
1. Start with a question. Example: “Is our cross-team collaboration slower than same-team collaboration in GitHub interactions?”
2. Gather the data. Example: harvest data from the git provider and compare how different teams are performing and where the bottlenecks are.
3. Share the findings. Example: present a report with different graphs that clearly explains how things were calculated.
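The harvest-and-compare step can be sketched in a few lines of Python. Everything below is hypothetical (the record shape, team names, and review times are made up); a real version would pull pull-request records from the GitHub API:

```python
from statistics import median

# Hypothetical pull-request records; in practice these would come from
# the GitHub API (author's team, repo-owning team, review turnaround).
pull_requests = [
    {"author_team": "checkout", "repo_team": "checkout", "review_hours": 4.0},
    {"author_team": "checkout", "repo_team": "forms",    "review_hours": 30.0},
    {"author_team": "forms",    "repo_team": "forms",    "review_hours": 6.5},
    {"author_team": "forms",    "repo_team": "checkout", "review_hours": 22.0},
    {"author_team": "growth",   "repo_team": "growth",   "review_hours": 3.0},
]

def median_review_hours(prs, cross_team):
    """Median review turnaround, split by same-team vs cross-team PRs."""
    hours = [
        pr["review_hours"]
        for pr in prs
        if (pr["author_team"] != pr["repo_team"]) == cross_team
    ]
    return median(hours) if hours else None

same = median_review_hours(pull_requests, cross_team=False)
cross = median_review_hours(pull_requests, cross_team=True)
print(f"same-team median:  {same:.1f}h")
print(f"cross-team median: {cross:.1f}h")
```

Medians rather than means keep one pathological pull request from dominating the comparison.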
This process is not too different from Athenian’s Process for Continuous Improvement, which you can read more about here, and see in action here as we use it to identify bottlenecks.
This isn’t the first (or second) time we’ve talked about the Stripe Developer Coefficient Report on the podcast.
Here is the full report, in case you need a refresher on the material.
Did you raise an eyebrow when Jason mentioned he’s written “various” fitness books?
Well, he wasn’t kidding, because it’s actually more than a couple.
Looking for a new jump rope routine? Look no further.
“The Gym Membership Challenge” is a parallel Eiso likes to draw between the motivation to use engineering metrics tools and the motivation when you sign up for a gym membership.
The problem with gyms is that up to 67% of gym memberships go unused, which means that despite high motivation, there is not enough ongoing support to help people achieve their desired results.
The same happens with engineering metrics.
Engineering leaders start using data with high motivation to improve but don't build the organizational habits that lead to continuous results.
Jason agrees with this analogy, and goes even further:
Another set of gyms and fitness centers emerged in the sub-tribes: the bodybuilders, the powerlifters, the CrossFitters, the Barry's Boot Camps. And there's a bunch of commonality in why they became successful. Here’s why this works, and how we could apply it in our domain:
Cesar: You can have all the data in the world, but if you don't create a shared context in the organization of what's going on, where, and in what direction everybody needs to move, then you have nothing, right? It's about changing human behavior. So it's not only about correctly assessing what's going on; it's actually getting everybody on the same boat, moving in the same direction. I think one of the first key elements is the democratization of data. Everybody needs to have access to the data.
Narrator: Welcome to Developing Leadership, the podcast for engineering leaders where Eiso Kant and Jason Warner share their lessons on the ins and outs of managing software teams. Today we have César Lugo on the podcast. César is a software engineer best known for being one of the founding members of the engineering intelligence team at Typeform. He joins us to talk about the power of engineering metrics, and how data can influence the way organizations behave. Keep listening to learn about continuous improvement frameworks that actually work, and why industry benchmarks fall short of their goal to help engineering organizations thrive. As always, this episode comes with accompanying show notes, with a deep dive into the main topics, mental models, and key moments from the episode. Find them at developingleadership.co and linked in the episode description.
Eiso: Hi everyone, we're back again with another episode of Developing Leadership. Jason and I have a special guest with us today, César Lugo. César was one of the founding members of the engineering intelligence team over at Typeform. Typeform is an incredible company that currently has over 150 engineers and is one of the big SaaS success stories out of Europe. And we have you with us today, César, to really talk a little bit about what an engineering intelligence team does, when it gets started, and why. So maybe let me just throw a softball at you. Why did you decide to be part of this team in the first place when it was being created?
Cesar: Hi Eiso, and thanks for having me. Definitely one of the reasons I joined... It wasn't actually the engineering intelligence team when I joined; it was called tools and infrastructure. It was a meta-team: it's about helping other teams improve the way they deliver software. And that kind of meta perspective, an eagle-eye view of the whole engineering function, was very appealing to me.
But also this intersection between technology and the actual software development function of the engineers. It's a very sociotechnical intersection of systems and organization, and that was very appealing to me.
Jason: César, I'm curious, how large was the engineering team when the very first incarnation of that team came into existence at Typeform?
Cesar: I believe we were just under 100 engineers at the time we decided it was time for a team like ours. And I think that's about the size at which you might consider something like this. Below that threshold, it might be a bit too much to have a team dedicated to analyzing.
Jason: And I know you weren't there when the very first incarnation of this team was formed, but were you privy to, or do you recall, any of the conversations from others about why, and what first started it? Because there are lots of stories about these, and I've run a couple myself, so we can all talk about it. But the listeners on the podcast are going through this on a daily basis. So what were some of the conversations being had, or the problems people saw that they wanted to fix?
Cesar: I was actually there, and it was less a problem than successes the team had. We were actually a different team with a broader domain, and we had some successes in delivering insights and influencing, through data, the way the organization behaves. We did that as a couple of experiments, and we had such success and impact in influencing the day-to-day work of a number of engineering teams that upper management decided: let's just focus on that and try to replicate this success in different areas of the engineering function. So that's how we started, and we called ourselves engineering intelligence. But we're just a team dedicated to data and insights, enabling other teams to know themselves and assisting them in their journey of continuous improvement.
Eiso: I love that, César. But knowing my background, that's pretty obvious. I'm curious, those experiments you mentioned in the early days, what were they?
Cesar: At first, we decided we wanted to measure the adoption of engineering standards across our whole organization. That was something very specific to our organization: we had a very successful framework of bottom-up change, where anyone in the engineering organization could propose a standard to be adopted by the whole organization.
So say, "We want this type of logging to be used," or, "We want to migrate from one CI/CD provider to another," or, "We want to use this common library as a standard."
And what we did (at the time it wasn't called engineering intelligence yet) was devise a tool that measured the adoption of the different types of standards across all of our repositories: services, web apps, and libraries. We could visualize it and provide it back to teams, so that, with a daily measurement, the whole organization had visibility into how they were adopting the different engineering standards.
At the time, I think we had approved around 15 different engineering standards. When we rolled out this measurement system, some standards were already in place, but adoption of the different engineering standards was around 30%. After the tool was rolled out and a communication campaign was done, in the span of two or three quarters this went up to 90% adoption.
And I'm talking about tens of teams, almost 100 engineers, a growing organization. More standards have been approved since then: security standards, infrastructure, all across the board. And that really helped cement us as a team that uses data to influence the organization.
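The core of an adoption-measurement tool like the one César describes can be reduced to a small aggregation: check each standard against each repository, then roll the results up per standard and org-wide. The repositories, standard names, and compliance values below are made up for illustration; a real tool would compute them by inspecting each repo's contents daily:

```python
# Hypothetical adoption data: which approved standards each repo meets.
# A real tool would refresh this daily by inspecting repo contents and CI config.
repo_compliance = {
    "forms-api":    {"structured-logging": True,  "shared-ci": True,  "common-http-lib": False},
    "checkout-web": {"structured-logging": True,  "shared-ci": False, "common-http-lib": False},
    "growth-jobs":  {"structured-logging": False, "shared-ci": True,  "common-http-lib": True},
}

def adoption_by_standard(compliance):
    """Percentage of repos adopting each standard."""
    standards = {s for checks in compliance.values() for s in checks}
    return {
        s: 100 * sum(checks.get(s, False) for checks in compliance.values()) / len(compliance)
        for s in sorted(standards)
    }

def overall_adoption(compliance):
    """Single org-wide score: share of all (repo, standard) checks that pass."""
    checks = [v for c in compliance.values() for v in c.values()]
    return 100 * sum(checks) / len(checks)

print(adoption_by_standard(repo_compliance))
print(f"overall adoption: {overall_adoption(repo_compliance):.0f}%")
```

The single org-wide score is what makes the 30%-to-90% story tellable to non-technical stakeholders; the per-standard breakdown is what tells each team where to act.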
Eiso: So you said something interesting, César: you said tool and communication campaign. Can you dig a little further into the communication campaign, and how you think about communicating the insights and data you're gathering? Because data alone doesn't lead to change.
Cesar: That's definitely an important part of it. You can have all the data in the world, but if you don't create a shared context in the organization of what's going on, where, and in what direction everybody needs to move, then you have nothing, right? It's about changing human behavior. So it's not only about correctly assessing what's going on; it's actually getting everybody on the same boat, moving in the same direction. And I think one of the first key elements is the democratization of data. Everybody needs to have access to the data.
Engineering metrics cannot be a tool only for managers or directors; every engineer needs to have access to them. They need to understand how things are calculated and why. They need to see the delta, to see how it moves from one to two. Because this really gives credibility to what you're measuring and why.
And they need to have the perspective as well, because we're operating at different levels of abstraction. Engineers are working deep down in their terminals and their IDEs, at a level of tens or hundreds of very short feedback loops a day. And then you're asking them to sit in a meeting once a month and understand some lagging indicators that span weeks and months. It's difficult to make that connection and incorporate it into their cognitive load.
So when they go back into their workflow, there's just no room to make those decisions there. If I had my way, with no material constraints, I think having the insights and the data at the moment the engineers are making their day-to-day decisions would be the golden state this sector should aim at.
So I'm coding, and I'm being given insights on what I need to do to make the right decisions at the right moment. We are a long way from that, but we can do better than consuming data through a third party that has already analyzed it and tries to convince me how I need to do my work to be a more performant team.
Eiso: So you talked about the democratization of data, and went deep into the engineers having access and why. Talk to me a little bit about what you've seen with leadership as you started rolling out these experiments and later became a full team. What were some of the main things you learned along the way that were really crucial to get teams to actually start trusting the data and acting upon it?
Cesar: First, without leadership buy-in, you're not going anywhere. You need to have sponsorship in an organization to first get some traction in the teams, because all the teams are in their own domain, handling their own problems. If there is no reserved capacity, or at least a way that I as a team can say, "I reserve this capacity to change my way of working because of this data," then there's no pre-alignment.
But I think management is usually keen to have data to understand the engineering function. First, they need it, no? They cannot keep everything in their heads, and they are not as close to the systems as the engineers are. So this really works for management; it's more of a problem getting engineers on board and earning their faith that this isn't just vanity metrics, that it can actually make their work better, right?
Sponsorship from management is crucial, and the relationship between upper management and teams needs to be informed. It needs to be less about trying to get on the same page and more about what to do about it.
The state of the team needs to be obvious; that's what data is there to provide. The state of the team in terms of delivering its software function.
Jason: I'd love to dig in a little on your experience of, let's call it, selling upper management on the need for this and the value in this, with data. We're gonna post this a little bit later, but there was a minor Twitter exchange yesterday in some subsector about how much time you allocate for technical debt, as an example. And in my experience, it's always a really tough conversation to tell non-technical CEOs or non-technical CFOs about the need for this. Everyone kinda understands, but they don't really get it. I'm curious about your experience here with selling, essentially, or convincing, with data.
Cesar: That's a great point. In our experience, our first experiment has been great leverage for that, because engineering standards become technical debt when you're not up to them, right? So it's key to quantify and visualize it.
Sometimes when you try to explain something in very technical terms, a non-technical stakeholder might just nod, but they might not be completely convinced by what you're saying. But say you have a whole organization aligned on what a score means, whether you're at 100% or 50% adoption of standards. You're talking about 15 or 20 different engineering standards that have the backing of security teams, infrastructure teams, and architecture teams. With all that consensus, and an unquestionable score of how well you're adopting your standards, even non-technical stakeholders fall into line.
They might come and ask, "Hey, what's this standard about?" But they won't flat-out challenge the need to address technical debt.
Jason: In my experience, what worked well was to quantify it in a domain the other person might know. For example, SOC 2, HIPAA, FedRAMP, something along those lines, some sort of compliance regime. And I started talking about risk profiles. I started saying, "Hey, our goal is not to get to 100% adoption or 100% technical debt reduction." 100% is never the goal in any of those scenarios.
We wanna target 70, 80, 85% compliance. And I use the word compliance because, again, it puts it into a domain they tend to understand. And then I talk about risk profiles: "Okay, so here's where we are and here's our overall risk profile." That allowed me, at least, to have a non-technical conversation about something deeply technical. There's zero chance a non-technical CEO or CFO is ever gonna understand database sharding, the network construction, all the subnet stuff. I had to put it in language that looked like that.
And, again, that allowed me at least to have that conversation and to get most of the buy-in I needed. And if I didn't get buy-in, what I was gonna have to do was hide it. Hide the work that I needed to get done here, because we can't go to war, you know, go to business with something where the entire infrastructure's gonna fail just because the CEO or the CFO doesn't get it. So I had to keep working at it and do the work anyway.
Cesar: Definitely, I 100% subscribe to what you're saying, and that's why, in our framework, all of our standards need to be linked to a business reasoning: it's either to make things faster, more secure, or to enable scale. This connection needs to come at the get-go of making a proposal. I haven't been privy to the actual conversations within our upper management on a particular basis, but I'm sure these elements need to surface to get that point across.
Jason: And I think, going back to one of the other things I said, and it ties into you: there was an instance in my history where upper management wanted to say, "Hey, the only acceptable downtime is zero downtime, like zero seconds, zero minutes of downtime for the entire year."
And you can imagine what happens in that scenario when the executive says that and the engineering or infrastructure team hears it. So what we had to do was walk away from that and explain, again, what the risk profile of doing that work looked like, and the business trade-offs we would have to make.
We'd have to invest this many dollars, this much time. We'd have to shut things down. It's fascinating, because I think that as Typeform grows, you're gonna be sitting at the nexus of a lot of this information. You're gonna be getting it in and out, and you're gonna sit at this critical juncture to be able to articulate a lot of this to folks.
Cesar: That's definitely true. And one framework that I believe works is focusing on the problem you want to solve. Starting with the problem actually gives you a better chance of arriving at something realistic. Because you have so much capacity to solve problems that you would like to solve all of the problems. But if you start with the actual problems that need solving right now, that can lead to asking yourself the right questions and providing the right metrics to solve those problems, versus just trying to compare yourself with some industry benchmark, which is useful, definitely, but might not be the next pressing thing you need to solve. So start with the problem, start with your problems, and start with the problems that are closest to the business.
Eiso: I love this, César. And I'm gonna come back to the small jab you made at DORA Metrics and industry benchmarks. But before we do, I want to dig a little deeper into this process, from identifying the problems that need to be solved, to discussing them, deciding, aligning, acting, and measuring, which I know is the process of continuous improvement you follow at Typeform. So walk me through it, starting from the engineering intelligence team getting the data, through the problem being identified, all the way until it's actually acted upon and the impact measured. Who's involved? How do you communicate?
Because you sit at the nexus of a 150-person engineering org, but you don't own anything. Right? You're not responsible for the teams. So how do you deal with that challenge to then make that change happen?
Cesar: That's the nugget of the issue, right? You're really getting into the core of what we're trying to do.
In the year and change that we've existed, we haven't had that many chances to go through many iterations of this, but I can come up with two examples of issues. One was: we were asked if our cross-team collaboration was, let's say, slower than same-team collaboration in GitHub interactions. So it starts with a question. It's a conversation, right?
You start with a question posed by stakeholders, you come up with some insights and answers, then there's a follow-up, and then you come back, and that leads to a better outcome.
So our first core question was this: is collaboration between different teams better, or faster, than collaboration within a team?
This is because at Typeform we have an open source model internally, so as a team you can actually contribute to any repository, even if you don't own it. We wanted to see if contributions across different teams moved at the same speed as contributions within the same team.
So we went out to harvest data from our Git provider and tried to come up with different indicators that could tell us, and compare statistically, how teams were performing, what our bottlenecks were, and whether the hypothesis we were asked about was true or not. You need to start with a hypothesis.
Once we did this and got some insights, we made a storytelling effort to make the data consumable and to create alignment and consensus within the organization.
So we created a report, an analysis that contained different graphs and was very detailed about how we calculated things and what our assumptions were, and we presented it to the whole organization. We got some interesting results, and a lot more questions from different teams. We got engagement, and saw a need to act on some of it more than others.
But we got the attention of different stakeholders, and some changes were introduced. The same goes for a couple more subjects that we have also analyzed and researched within the organization.
That's definitely one of the components of what our team does: researching a problem and trying to come up with insights at the level we can, with the tools we have. Having data that is not completely off can inform the conversation and decision-making at the higher level, in a way that maybe later you can support with more granular data.
Jason: So, César, I'm curious. Eiso started this with a little poke at DORA. What was important to Typeform internally? What are the metrics you look for and track? Which ones do the engineering teams care about, and which ones do the managers care about? What does your dashboard look like, essentially?
Cesar: DORA Metrics are the standard; you cannot start this conversation without talking about DORA Metrics. But I think the industry's slowly coming to realize that that's not enough to guide an engineering organization, because they tell us about common characteristics of highly performing organizations, but not how they made their journey to get there. So we also need to provide teams with more leading indicators on how they can get there. Otherwise you're just telling them, "Be better," right? You're asking them to be faster. You need to kind of digest-
Jason: I mean, I've called DORA in the past the BMI of engineering metrics. And if anyone doesn't know what BMI is, it's the Body Mass Index, in the US. It's great at a census level, but it's not great at an individual level. You can't just tell someone, "Be less fat"; you actually have to show them a way to do that. So I agree with you entirely.
So what are the ones you think Typeform has gravitated towards? I'm really curious, because I think many organizations have become bespoke, too, in what they actually care about, though I think a lot of this can be generalized, because the BMI has its place. But so does a set of other, individual metrics.
Cesar: I think a lot of it comes down to Git interactions, for example. Pull request size, time to review, pull request cycle time: those are our bread and butter, just because of the frequency at which they happen. If something happens in your organization tens, hundreds, or thousands of times a day, you need to look at it. You need to ask, "Okay, are we making this the best that we can?" So this is definitely something we need to look at. Then your CI/CD: your pipeline times, the success rate, the time it takes you to go through all of your automation.
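For high-frequency metrics like these, the distribution matters more than the average; a handful of enormous pull requests can hide inside a mean. A minimal sketch of the kind of summary a team might look at (all sample values below are made up):

```python
from statistics import quantiles

# Made-up samples: PR cycle times in hours and PR sizes in changed lines.
cycle_time_hours = [2, 3, 3, 4, 5, 6, 8, 12, 30, 72]
pr_sizes = [40, 55, 60, 80, 120, 150, 200, 400, 900, 1500]

def p50_p90(values):
    """Return the median and 90th percentile of a sample."""
    qs = quantiles(values, n=10, method="inclusive")  # deciles p10..p90
    return qs[4], qs[8]  # 50th and 90th percentiles

for name, values in [("cycle time (h)", cycle_time_hours),
                     ("PR size (lines)", pr_sizes)]:
    p50, p90 = p50_p90(values)
    print(f"{name}: p50={p50}, p90={p90}")
```

Tracking the p50/p90 spread week over week surfaces exactly the long-tail reviews and oversized PRs that an average would smooth away.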
Your incidents: analyze your incidents. What's impacting what? Which features have more or less impact? Are you meeting your incident SLAs, your availability? And not only quantitative metrics, but also a qualitative perspective: asking your engineers about their experience. What's the dev experience? What's their on-call experience? Team health checks.
This is a very important component of assessing the state of your engineering function, and sometimes it's overlooked, because it seems a bit more difficult for management to act upon. So sometimes it's treated as second-class.
Jason: Yeah, two things you said here I absolutely love in particular. One is that the thematics coming through about the metrics you cared about were effectively all about speed. Speed, cycle time, feedback loops, all of those sorts of things. I get into this quite a bit because of DORA, or maybe the Stripe Developer Coefficient report, which says the number of releases per day for an engineering team is a high indicator of engineering soundness and how good they are.
I have argued that it's not the metric "number of releases per day"; it's the work that it takes to have a high-velocity team that can do thousands of releases per day. Which I feel is effectively what you just said.
The other is developer happiness, which I think is always overlooked. And it's hard, because it's all over the place. At one point (I'm not sure GitHub ever actually rolled this out) we were talking about doing something that would allow internal developers at GitHub to give quick feedback to everybody: "My day today was a seven out of 10, and here's why it was not a 10." "My day today was a four out of 10, and here's why," and it was one thing, literally one thing. We were supposed to see the thematics that popped out of this: CI was slow, you know. It was all about improving that. I don't think we ever followed through; I'm not sure where it ended up. I left before it got completed. But it was really interesting, because it started down a path I've never really seen anyone talk about.
Cesar: I think the challenge is not only to do it, but to make a ceremony out of it and get insights out of it. You can do it once, but if you start to do it regularly, a monthly survey, or a quarterly retrospective where the team asks themselves a set of questions, you can even color-code it, try to draw some conclusions, and see some patterns that can help inform your decisions.
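The quick daily check-in Jason describes (a score plus one reason) is trivial to aggregate, and the interesting signal is which reasons recur on the low-score days. A toy sketch with made-up responses:

```python
from collections import Counter
from statistics import mean

# Made-up daily check-ins: a 1-10 score plus a single one-line reason.
responses = [
    (7, "slow CI"),
    (4, "slow CI"),
    (9, "good pairing session"),
    (5, "flaky tests"),
    (3, "slow CI"),
    (8, "shipped a feature"),
]

avg = mean(score for score, _ in responses)

# Thematics: which reasons keep coming up on days that score below 7?
low_day_reasons = Counter(reason for score, reason in responses if score < 7)

print(f"average day rating: {avg:.1f}")
for reason, count in low_day_reasons.most_common():
    print(f"{count}x {reason}")
```

Run monthly or quarterly, as César suggests, the recurring low-day reasons (here, "slow CI") are the patterns that turn a sentiment survey into an actionable backlog.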
Other things I think are important are focus, for example, or effort allocation. It doesn't matter if your team is a killer machine of efficiency, pushing code really fast through your pipeline and whatnot; if you're working on 10 different things at a time, you're just not going anywhere.
And I like this analogy of the car and the different instruments you can use to measure. You have your RPM gauge to measure revolutions per minute, then your speedometer to see how fast you're going, and your GPS to know where you're going. I think this is the same kind of context your engineers need to have.
They need to know, for example, their pull request size, as in your RPM, but they need to know that this is because you really care about them delivering value faster: moving the car faster. Because you could definitely push your RPMs in a way that doesn't make your car move faster. In the same way, you can lower your average pull request size without gaining anything, just by not being aware of the context, without actually trying to move the car faster.
And what we really care about is that you move your business needles, that you're actually solving the problems you wanted to solve, that your GPS is set to the right location. So you need a way to share this context, and for people to be aware of all of this nuance, so they can make the best decisions from their post.
Jason: I'll harp even more on that for the folks listening. It's incredibly important, that last piece: the GPS tying back to the speedometer, to the RPM gauge. And it's as easy as saying, "Hey, two different types of races." You've got F1 and you've got drag racing. They're both racing, but you tune the cars, you build the engines, you think about everything completely differently. But if we lose sight of the type and just think, "Hey, grab the funny car, er, the F1 car, and throw it on the drag strip," it's gonna be very different, right? So that's something for engineering leaders and business leaders to constantly reinforce: what type of race are we running? Why are these metrics important?
Cesar: Right. There are some things you can measure equally for all engineering teams, but every engineering team has its own domain. We also need to enable teams to measure themselves, to understand what their own goals are and how they can create metrics that lead them to accomplish those goals. There are plenty of metrics you can abstract and say, "Okay, every software development team, or most of them, should be good at this benchmark." But there also needs to be a sort of inversion of control, where teams use data to let themselves be guided by their own goals. I think that's something we're still lacking a bit as an industry.
Eiso: So César, we're very early in this as an industry. Talk to me about what the future looks like. If in 10 years organizations are using the data the way you hope they would be, or let's just say Typeform in the future, what do we look like in terms of tooling, in terms of process? As Jason and I have talked about in the past, what does the Iron Man suit look like for engineering teams?
Cesar: Well, in terms of process, I think engineering teams become more accustomed to using data to measure their own goals, their own KPIs, their own OKRs, and to making the bridge that connects what they're doing in the day-to-day job to the goals of their team. As for tools, one thing we still need to solve is the fragmentation of interfaces we have for gathering different data. We need to get the data closer to the engineer, right? It's not only about saying, "Come here and drink from the well"; it's about how we can bring the bottle of water to you.
The insights and the data need to get closer to the actual decision-making point, and I think we have a lot to learn from other industries.
Eiso: And what have been your biggest challenges, César, in getting the organization there?
Cesar: Well, the biggest challenge, in my opinion, is how fast we think about the grand scheme of things versus how slow we need to go to implement technical and process change in our organization.
There's one thing, which is seeing patterns and visualizing the system and how you can affect it, and another thing, which is coming back to reality, because you still need to do some software engineering work. You need to harvest data, model it, clean it, drive it through your pipelines, then agree on what the metrics are, and then understand and research whether what you're seeing in the data actually maps to the reality of the engineering function.
And then there's getting stakeholders on board and communicating all of this. It's just a tough challenge to create shared context about anything. Even within teams, you sometimes have people working on a project for weeks and weeks who are not completely aligned; imagine a whole organization trying to create shared context through data. So the actual changing of human behavior is the most challenging part, I think.
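The pipeline steps Cesar describes, harvest the raw data, clean it, model it, then compute the agreed metric, can be sketched end to end. Everything here is an assumption for illustration: the event shape, the field names, and the choice of median cycle time (open to merge) as the agreed metric.

```python
from datetime import datetime
from statistics import median

# Harvest step (simulated): raw PR events as they might arrive from a source system.
raw_events = [
    {"id": 1, "opened": "2023-04-01T09:00:00", "merged": "2023-04-02T17:00:00"},
    {"id": 2, "opened": "2023-04-03T10:00:00", "merged": None},  # never merged
    {"id": 3, "opened": "2023-04-04T08:00:00", "merged": "2023-04-04T12:00:00"},
]

def clean(events):
    # Cleaning step: keep only records that can produce the metric.
    return [e for e in events if e["merged"] is not None]

def model(events):
    # Modeling step: derive cycle time (open -> merge) in hours per PR.
    modeled = []
    for e in events:
        opened = datetime.fromisoformat(e["opened"])
        merged = datetime.fromisoformat(e["merged"])
        hours = (merged - opened).total_seconds() / 3600
        modeled.append({"id": e["id"], "cycle_time_hours": hours})
    return modeled

def agreed_metric(modeled):
    # The metric the team agreed on: median cycle time,
    # less skewed by outlier PRs than the mean.
    return median(m["cycle_time_hours"] for m in modeled)

print(f"median cycle time: {agreed_metric(model(clean(raw_events))):.1f}h")
# → median cycle time: 18.0h
```

The hard part Cesar points at comes after this code runs: researching whether that 18-hour number actually maps to how the engineering function works, and building shared context around it.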
Eiso: And where have you seen the most success there, and what are the indications that make you hopeful we can get there?
Cesar: Well, one thing is engagement. You see engineers who are interested in it, who want to know more, who want to use data to improve their teams. You see managers who are genuinely interested in using metrics to change the way they work. And when you see that kind of engagement, and teams come back and say, "You know what, I acted on that feedback you gave us through the data, and now we're working better and we feel better about the way we're working," then you say, "Okay."
It's not only abstract thought; people are taking some of our conclusions, finding that they really map to reality, and coming back to say, "Hey, I acted upon it and it produced a positive result for us." And I see momentum shifting in the industry. I think it's related to the whole DevOps movement. Before, if you were doing a release once every three or six months, it really didn't make sense to measure as much as we do now. So it's a fairly new stage, and I think this is a natural next step. It didn't exist before because it just wasn't technically feasible or useful.
Eiso: You know this about me, and we've spent quite a bit of time together as well: I spend my days, my weeks, with dozens and dozens of engineering orgs going through this process of bringing data in. And we like to use an analogy, the gym membership challenge. There's often very high motivation in an engineering organization to go and improve, and the first thing we do when we want to improve is say, "Let's go get the data."
And that's new; it wasn't the case a couple of years ago, but it's become the case now. There's been a strong shift in our industry. But evaluating the gyms out there, looking at all the equipment and which are the best metrics, and going through that whole process doesn't mean you've signed your entire engineering org up for the gym and by the end of the year everyone has a six-pack and can lift, I don't know, 200 pounds. You spoke a lot about this shared context and trying to get there, and I'm curious to push even further. If you could wave a magic wand and, at the end of the year, have everybody with rock-hard abs, meaning everybody was able to interpret the data, act upon it, set goals, and meet them, what would be the one characteristic or change you would wish for, in the organization or in the people, to make that happen?
Cesar: It's having the conversations with the data. You know? Sitting down together, analyzing what the data is telling you about your team and about the organization, and having an open conversation about it. Maybe the data just isn't there yet, but if it is, nothing is going to happen if you don't have the conversation.
So it's having open, honest, blame-free conversations about what we can improve. One of the problems I have with the wording of "measuring engineering performance" is this impulse to compare: are we fast enough? Are we shipping with enough quality? Instead, make people think in terms of what's blocking us from being faster. What's the blockage? How can I find the problem, if that's the problem I want to solve? Thinking problem-first is something that would definitely add value to any team: start with the actual problems and then move on to the next one.
Jason: Eiso, first of all, thanks for introducing sports and fitness to this for me today, so I didn't have to be the one. But I love what you said. Let's take this gym metaphor a little further. Planet Fitness and the globo-gym-style places are measured on dollars. They actually don't care if people show up on a regular basis. They just need the sign-ups and the recurring payments. They don't care if members show up; they don't care if outcomes are achieved.
But another set of gyms and fitness centers emerged with their sub-tribes: the bodybuilders, the powerlifters, the CrossFitters, the Barry's Boot Camps. There's a bunch of commonality in why they became successful, and I think we can apply the same kind of metaphor to this domain. Give me 30 seconds to explain. One, they create a sense of belonging. Two, there's a community element. Three, there are experts waiting to help the novices and intermediates along. Four, it's never just "sign up and come"; there's very encouraging, almost loudly motivating, "you can do this" behavior happening there. And five, there's a constant feedback loop of praise, reward, continue, with people all the way up and down the ladder. The best powerlifters are giving encouragement to the young, new members, who see what's possible.
Now, to beat the metaphor to death, I think we can apply some of that to this domain. There are a lot of different ways we can do it, but I don't think we can do it quickly. I think we'll have to spend some real time on it over the next couple of years as an industry.
Eiso: It's interesting you say this, Jason, because I'm always very cautious on this podcast about ever using it as an advertisement for Athenian and what we do. But since this is so on topic: one of the main things we've learned since we started the company, and it was a very big shift for us, is that it's never enough to just provide a tool or product.
So what we started doing, and it had a huge impact, is we built an engineering success team. Its leader is an engineering leader who started as an IC when his company was 10 engineers, became a director at 30, and then scaled it to 750 people. So we took a very experienced engineering leader, who would actually make a great guest one day, and said to him, "Okay, the number one OKR in our business is that every single one of our customers actually continuously improves."
And that's what you're saying, just like Barry's Boot Camp and the rest. Once you adopt that mindset and start working on it, what you realize is that to make it happen internally, successfully, you need those internal communities. It's very interesting what you're saying with the gym analogy, because you need the internal champions.
You need the experts, like César on the engineering intelligence team. You need to actually educate the rest of your organization, take the success stories from your experiments and communicate them well to the rest of the organization, and only then does that change start to happen.
And when we think through all of this, the exact same thing happened with Agile. The exact same thing happened with DevOps. That seems to be how we do things in engineering: these big movements, like engineering metrics and data, which I like to just call continuous improvement, always seem to follow this pattern, kind of like the gyms.
Jason: Let me take it one step further with my own personal analogy. Some people know this, but a long time ago I wrote a bunch of fitness books. I'm majorly into fitness, and I'm a certified personal trainer at several levels. And yet I hire several trainers to work with me on a daily and weekly basis. The reason why is the same reason internal tooling teams and internal performance teams, exactly what César has been talking about, are so needed inside companies: at the end of the day, I could get myself to a certain degree of progression on my own.
But if I want to take it to a different level, or if I don't want to have to think about every single aspect of what's going on because someone else is responsible for that, that's how I unlock my own ability to scale. Scale isn't quite the right word, but you get the idea: "I need to progress, and I need to progress faster." If I'm the only person accountable for my own achievement, it's almost too much, even with some expertise in the area. So I find that inside organizations we do need these platform teams, these internal tooling teams. This is a wave of the future. Internal development platforms will become real things in the next couple of years, and it's precisely for this reason.
Cesar: Agree 100%. Without enabling other teams to do their work faster and with more quality, and relieving some of the cognitive load, it's just going to be a callback hell of complexity for product development teams. As software grows and becomes more complex, if this isn't enabled by platform teams, DevOps, and the frameworks and tooling we can provide from the enablement side of the equation, then it's just going to be growing pains of scaling, I think.
Eiso: So César, for engineering leaders listening to this who might be at an organization of, say, 100-plus engineers and don't have this team yet, what would be some final words of advice for today's episode?
Cesar: Well, I would say start thinking data-first. Start gathering a team, but your leadership needs to start thinking about that domain first. What are your needs? Think about the decisions you're trying to make and the information you need to make the best decisions, and see if there are answers already out there. It won't make sense for every company to have an internal team that handles this; you could easily go out and find a prepackaged solution as a quick way of solving your problem.
But maybe you even need a team to make that happen, to onboard all of your other teams onto a third-party solution or to build something on top of it. It comes back to my earlier reflection about starting with the problem first: think about what problem you want to solve as an engineering leader, and see if you have the tools to make it happen.
So if the way you're making software is not optimal in your opinion, or you don't even know whether it's optimal or not, then you might need some visibility into it. In that case, visibility is the problem you're trying to solve, and how you solve it is up to you and your own context.
Eiso: I appreciate it, César. Thank you so much for being with us today, this was fantastic.
Cesar: Thank you guys for having me.