Nima Gardideh
President of Pearmill, ex-Head of Product at Taplytics, ex-Head of Mobile at Frank & Oak. YC fellow.
Karim El Rabiey
Co-founder and CEO, Pearmill
The rules of performance marketing change more often than our host Nima's hair scrunchies. How the channels price their auctions, which creative and visual formats work, and how people behave are all in constant flux. New channels pop up every few months, and new playbooks need to be derived from them to stay competitive.
So how can a paid growth studio like ours (Pearmill) help startups build a sustainable growth strategy with this constant ambiguity and volatility? With a bulletproof growth process that focuses not solely on the results but more on the underlying universe that growth and marketing exist in.
There's no one better to unravel the process that we've spent years mastering than Karim El Rabiey, our Co-Founder and CEO. In this episode, we dive into our experiment cycle and the steps involved, plus how we take those learnings to increase efficiency and nail predictions over time.
[00:00:00] Karim El Rabiey: This is kind of like a hard ass thing to say, but like, if an experiment is really, really successful, but we initially thought it was gonna be like, slightly successful and just like a small incremental thing.
[00:00:11] Of course, celebrate, let's be happy and all that. But that also means that we didn't really understand to what extent this experiment could have an impact, which means we either didn't understand the channel that well, or we didn't understand the audience, or we didn't understand how people were going to react to it. Yeah, there's just another level of analysis to be like, why was our prediction off?
[MUSIC STARTS]
[00:00:34] Nima Gardideh: Hello and welcome, listeners and viewers of The Hypergrowth Experience. I am your host, Nima Gardideh. This episode is a little bit different. I'm having one of my co-founders come on and talk to you all about the process that we run at Pearmill for growing companies. And if you don't know, I help co-run a company called Pearmill.
[00:00:57] We are a growth studio helping startups scale through paid acquisition, and we spend tens of millions of dollars a month. It's a process that we've been working on for quite a few years now, and I hope you find it interesting. We operate in a world where you cannot run perfect experiments, but regardless you need to have some
[00:01:21] understanding of where you should put your next dollar. Are you doing the right things? Is this the best way you can allocate your capital across all these different channels that are at your disposal? And Karim's been the one designing that process. We had a pretty good conversation about it, and I hope you enjoy it.
[00:01:41] And if you're watching the video version, he was working out of our office that day and he's in our artist-in-residence room. The paintings he has behind him are from one of our artists in residence. Her name is Ida Mora. She's one of our favorite folks who's been able to use our office to produce, and she's quite talented. So we'll leave her site in the description as well if you'd like to check out her work. Anyway, here's the episode on the growth process at Pearmill.
[MUSIC FADES]
[00:02:16] The first question for you Karim is, at a very high level, how does the overall process work? Like how do we think about experimentation and learning from the work that we do?
[00:02:33] Karim El Rabiey: I'll start off by talking about how this process isn't necessarily inherently a growth process. I don't think that we're necessarily the first people to apply this type of process to the growth problem. The process is much more like a scientific process. And the reason we're thinking about this from a scientific standpoint is because this model has already found a way to define some rules of our universe.
[00:03:08] So when scientists run experiments, and I'm gonna talk in super broad and vague terms, they're trying to understand more about the rules of our universe, in whatever capacity that is. And not to sound self-important.
[00:03:33] Scientists are way more important than us, I'd say. And much less paid. But in a similar parallel, the rules of performance marketing, or the universe of performance marketing, aren't really that defined. There are channels that have their own motivations, their own tensions, and their own tech behind them that's kind of a black box.
[00:04:05] And people even at the channels don't really understand the channels themselves as much anymore. There are people who are seeing our ads, who have behaviors and patterns that oftentimes can be seen as irrational and changing. And so our job as performance marketers is to operate within this space of ambiguity.
[00:04:35] And one thing that helps us operate in this space of ambiguity is to try to define it more. And so with our process, the reason that we run experiments really isn't about the results. It's about defining our universe more so that we can then understand the space more and then get better results. So ultimately, yes, it is about results in the end. And something that happens a bit differently from how scientists approach this problem is that the rules of the universe, generally speaking, don't change.
[00:05:11] But the rules of performance marketing change quite frequently. The channels change quite frequently. People's behavior changes. People get used to seeing certain types of ads that used to be really good, but now people understand that these ads are trying to do certain things, so people change.
[00:05:29] So what's cool about our space is that we continuously have to run these experiments to try to define the universe. It's not just one moment in time where we have something defined and that's it; we continuously have to run them, which is part of what's so cool about this space in general, about being a performance marketer. It's also part of what's really annoying and frustrating about it. [Laughs]
[00:05:51] Nima Gardideh: I was gonna say, I was just on another podcast and they were asking me, how do you keep up with everything that changes all the time? Because it seems very annoying and frustrating. And yeah, I think it depends on what your perspective is. I think that's interesting; it's like a constant source of learning. But it can also be annoying, cuz then what you had going, and what was working for a whole company for a whole year, all of a sudden doesn't work anymore.
[00:06:15] Karim El Rabiey: Totally. It's the duality of performance marketing. So yeah, the long monologue aside, and just to answer the question more directly: the reason that we run this process is so that we get a better understanding of what we're trying to do, of what we are capable of doing in the channels, and learn, for our accounts, what is best in order to get them the returns that they want, whether it's improved CPA or improved ROAS.
[00:06:57] Nima Gardideh: And before we go through the overall process, there are two layers to this, right? We're trying to learn what works for a specific brand that we're helping, but we're also trying to learn what works at one level of abstraction removed, which is what works at the channel level, let's say. Okay, I guess walk us through how we think about the process overall, and we can get into how we think about these two levels of abstraction of experiments.
[00:07:27] Karim El Rabiey: I'll touch on it briefly and then we can revisit it a bit later, but there are experiments that we run as an agency, experiments where we try to understand the general space. So we'll use one account as the account that lets us experiment with some initiative whose learnings we may want to apply to all other accounts. And then there are some experiments that are just purely account-specific and can't really be applied to other things. So the process, the steps of the process, the breakdown of it, is quite straightforward. I think I've seen so many people talk about this process, like Andy Johns and Brian Balfour, and we've taken it and adapted it to our systems, but the steps are basic.
[00:08:20] The first step is ideating. Actually, I don't wanna call it the first step, because it's a cycle and there's no first in a cycle, but a starting point you can consider is ideating on what initiatives or experiments to run. This is where we hold either brainstorms or breakout discussions where we try to focus on a really specific problem.
[00:08:51] Rather than, how do we just get more users, or how do we lower our CPA, it's much more specific. Like, how do we decrease the drop-off between marketing qualified leads and sales qualified leads? How do we lower our CPMs for this period? It's usually about a very specific metric or set of metrics. And from this ideation step we have a backlog that
[00:09:22] goes into step two, which is our prioritization, where we go through and prioritize what we want to start off with first of the experiments and initiatives that we talked about. What do we actually want to spend the time and resources doing? I think this is the hardest step of it all. You could go into the breakdown of it, but predicting what's gonna happen is basically what we're trying to do when we're prioritizing. And it's a skill that's really hard to hone. If you're really good at predicting, then you don't have to really worry about prioritization at all.
[00:10:05] And so as we're in that, we have to try to get much better at prediction, and part of our process is about that, just generally building the prediction muscle.
[00:10:15] Nima Gardideh: If you were very, very good at predicting, then you wouldn't even need to run the experiments if you were a hundred percent correct every time. Right?
[00:10:21] Karim El Rabiey: Yeah exactly.
[00:10:24] Nima Gardideh: It depends on your hit rate. If you're a hundred percent, why are you experimenting? Just go for it. [Laughs]
[00:10:28] Karim El Rabiey: We need the precogs from Minority Report.
[00:10:33] Nima Gardideh: Yeah, exactly.
[00:10:35] Karim El Rabiey: That's what we need. And then the step after that is supposed to be to just do it. After you prioritize, just do the experiment, run it, execute. I think in our docs, when we write it, that step is just two sentences: just do what you said you would do.
[00:10:53] Then it goes into the fourth step, which is, after it runs its course, analyze what's happened. Look at the performance that came out of it. How close was it to your predictions? Was it successful or not? Why do we think that whatever happened happened? If it did well, why did that happen? If it did poorly, why did that happen? And then the last part is systemizing this. And by systemization we essentially mean that if an experiment worked well, how does it no longer become an experiment and just become part of our account structure or part of our playbook? And if it doesn't work, how do we use that to feed into the ideation phase?
[00:11:51] So whatever learnings we got from it, what are the next steps that we would take along with this experiment? And the reason that last step, the systemization, is really important is that if you're onboarding people into your growth process, if there's a new hire or somebody who hasn't been a marketer coming onto the team, you can get them to read through this systems book, this history of experiments that have run and their track record, to really understand the process up to this point and understand what they need to do next in either the account or the campaign.
[00:12:30] Nima Gardideh: Yeah, and I really want to talk a bunch about that, cuz I think that's probably one of the hardest areas, how to convey the learnings, right? But let's get into the ideation. Is it hard to come up with new ideas over time? Do we run out of ideas a lot? There are many vectors to this, I assume, right?
[00:12:51] There is the creative part, there is the structural part, there's the audience part. There are all these. I guess maybe start with: what are the vectors you guys ideate on and think through running experiments in?
[00:13:04] Karim El Rabiey: Yeah, you already named a few of them: audience tests that we could run, creative tests, landing page tests, off-channel tests. And this goes more into when we work directly with our clients, for them to have certain lifecycle flows that work with certain groups of audiences that we bring through.
[00:13:27] There are tracking tests that we could run, like event optimization tests: what events are we trying to optimize towards? It never really feels like there's a shortage of ideas, especially when you have multiple people on a growth team. And our brainstorms don't always involve just the people on that pod or on that team.
[00:13:57] We invite creative folks, we invite engineering folks, and we'd suggest that if you're running any brainstorm, you invite people who haven't been as close to the problem. The times that a shortage of ideas can happen is when it's around one specific problem. And there are times where it's just like, I don't know what else we need to do in order to get this problem solved.
[00:14:24] And that's where it reaches a point where we feel like we can't solve this problem. And then the mentality shifts into, we just have this problem forever now, so how do we accommodate for it? Like, if you have some handicap, you're not trying to beat the handicap, you're trying to live a life or run a process with that in mind, accommodating for it.
[00:15:01] Nima Gardideh: Can you give an example of this, just to clarify it? Like a form of a handicap that you'll effectively have to live with at an account level, that will probably be there for a long period of time, if not forever?
[00:15:15] Karim El Rabiey: One that we're dealing with, and it actually comes up quite a lot, is increased competition in a space. I think this comes both seasonally and just depending on the industry at certain points in time. So if you think about competition from a seasonal standpoint, especially if you're in the e-commerce space: Q4, and the end of Q4.
[00:15:41] Everything gets way more expensive because all the retailers, all the e-commerce shops, all the big brands, they're all pushing out their holiday messaging, their holiday deals, and what happens is that CPMs go up as a result cuz people are bidding for the same inventory and are trying to go through massive budgets.
[00:16:00] There's nothing that you can really do about that. You can't find a way to decrease the cost. No one's found a way to be like, Oh, I got it, I got the cheap CPMs and got to my own inventory that no one else is targeting. So you have to start thinking about other ways to accommodate for higher CPMs.
[00:16:22] Things that a lot of retailers and a lot of advertisers do are deals and discounts, so they can get more conversions. I'll use really simple numbers in this case, but if I only have a thousand impressions that I can get to with my budget, and typically that thousand impressions is worth $20 and gets me four purchases, it's now worth $40. And the mentality should be, rather than, how do I get that $40 back down to $20, it should be, how do I get eight purchases out of the thousand impressions I'm buying? So deals, improving conversion rates, adding other touchpoints, maybe other cheaper channels. You have to switch so that you're not trying to solve an unsolvable problem.
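To make the arithmetic in that example concrete, here's a minimal sketch in Python using the same illustrative numbers from Karim's example (a $20 CPM doubling to $40 in Q4). The figures and the helper function are purely hypothetical, not drawn from any real account.

```python
def cpa(cpm: float, impressions: int, purchases: int) -> float:
    """Cost per acquisition, given the spend implied by CPM and impression volume."""
    spend = cpm * impressions / 1000
    return spend / purchases

# Before the holiday auction heats up: $20 per 1,000 impressions, 4 purchases.
print(cpa(cpm=20, impressions=1000, purchases=4))   # 5.0 -> $5 CPA

# Q4: CPM doubles to $40. Holding conversion constant, CPA doubles too.
print(cpa(cpm=40, impressions=1000, purchases=4))   # 10.0 -> $10 CPA

# The accommodation described above: get 8 purchases out of the same
# 1,000 impressions (deals, better conversion rates, extra touchpoints).
print(cpa(cpm=40, impressions=1000, purchases=8))   # 5.0 -> back to $5 CPA
```

The point of framing it this way is that the CPM is outside your control, so the only lever left in the equation is purchases per impression.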
[00:17:19] It happens a lot with Google too, where we see all of a sudden new competitors have come in to bid on terms because it's a really hot space, like the behavioral health or mental health space. A lot of competitors are starting in that space.
[00:17:33] Nima Gardideh: Mm. There's new VC funding being pumped into the space, so you just have to deal with the new folks coming in.
[00:17:39] Karim El Rabiey: Yeah. Unless you find some machiavellian way to get other people to stop bidding on your terms.
[00:17:48] Nima Gardideh: We've talked about, I don't know if we ever ran this experiment, but didn't we say, what if we overbid heavily for a while to scare the marketers away from a keyword, or something like that? I don't know if we ever ran that experiment, but I just thought it was a funny thought experiment to go through, cuz there are humans on the other side who are looking at the data.
[00:18:07] So you can scare them away. Yeah, that's an interesting one. So there are systems-level things that you can't really control sometimes, and you have to live with them, and so you have to find ways around them. But otherwise there's plenty of room for new ideas all the time to go through.
[00:18:24] The process goes: we have all these ideas, and the vectors are wildly different, right? So some of them are on-channel, it seems like, right? There are new pieces of creative we could be running on, let's say, Facebook, new sets of keywords that you could add to Google.
There are all these channel ones, and then there are a couple you talked about that are not even on-channel, right? So there are landing pages or lifecycle flows. And then you somehow have to prioritize between them. Do we run them all at the same time sometimes? How do you prioritize when they're wildly different on different platforms? How do you think about that?
[00:19:05] Karim El Rabiey: Yeah, like I said, that really is the hardest part. And we've used a few different scoring methodologies and some intuition methodologies, and each one is maybe a bit better than the one before, but it still feels like there hasn't been something that's really nailed down how we prioritize. The main thing that we want to watch out for is experiments that would affect each other. So like you said, if we're running a landing page test and a creative test at the same time, it's a bit risky, because if we drive the same audience through those same two tests, whatever the results are, we're not really understanding what caused the change in behavior, whether for better or worse. And so we either have to stagger the tests or totally isolate the audiences and have different groups of people go through them.
[00:20:06] The only caveat that I would say to that is if you're pretty early on, I wouldn't worry as much about having really clean experiments that don't bleed into each other or overlap with each other. I would just do more, and even if the results are confusing, you have some results, you have some momentum, you have some action.
[00:20:33] I think one of my old bosses told me a while ago this slick metaphor that really stuck with me: in the beginning you're building a statue, and you're not at the stage where you have a granite block and you're going to use a tiny little chisel to piece away at things.
[00:20:52] You wanna take a massive fucking hammer to it and take out big blocks, and not really understand how the physics of the hammer hitting the block broke off certain pieces. You just want to make some big impact. And then later you can do some fine-tuning. So at a larger scale we have to be just more careful at that step.
[00:21:17] There are some things that we don't run together, like I said, creative tests and landing page tests. Any creative tests we run against our existing best-performing audiences, and actually most tests that we run, if they aren't audience tests, run against our best-performing audiences. Yeah, I'm not sure if that got to the premise of the question.
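Karim mentions scoring methodologies without naming one. A common stand-in used across the growth world is ICE scoring (impact, confidence, ease), so here's a minimal sketch of that idea purely as an illustration; the experiments, the scores, and the equal-weight average are all hypothetical, not Pearmill's actual method.

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    impact: int      # 1-10: predicted effect on the target metric
    confidence: int  # 1-10: how sure we are the prediction holds
    ease: int        # 1-10: how cheap/fast it is to run cleanly

    @property
    def ice_score(self) -> float:
        # Simple equal-weight average; any weighting scheme would do.
        return (self.impact + self.confidence + self.ease) / 3

backlog = [
    ExperimentIdea("New UGC creative batch", impact=7, confidence=6, ease=8),
    ExperimentIdea("Landing page headline test", impact=5, confidence=7, ease=9),
    ExperimentIdea("Lifetime-budget campaign structure", impact=8, confidence=4, ease=5),
]

for idea in sorted(backlog, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.ice_score:.1f}  {idea.name}")
```

Whatever the scoring scheme, the output is just a ranked starting point for the prioritization discussion, not a substitute for the prediction muscle Karim describes.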
[00:21:48] Nima Gardideh: I think it's hard, right? There are all these different areas that you could test, but it seems like the most important thing at some scale is to effectively have clean signals of information, where after the test is run, you just know whether it actually succeeded or failed.
[00:22:11] And in earlier stages, that doesn't even matter. You kind of just go through the motions of this process, and enough learnings will emerge that you will get to the scale at which you can run these sort of isolated, controlled experiments. I do think, we've been doing this for a few years now, and that seems to be the hardest part.
[00:22:30] The good news is, I don't know a single marketer that does it any differently. And most marketers are even bad at that isolation part, in our experience, right? Like, how often are we talking to clients, saying, no, we cannot possibly run these three tests at the same time? I don't know if that's gotten better or worse, but it's certainly always there to some extent, right?
[00:22:55] Karim El Rabiey: Yeah. It's a pretty frequent conversation, and I get the reasons. There's urgency; every company has urgency. I don't think we've ever come across an account that is like, Oh, if we get this learning at some point in the next three or four months, it'll be a good learning to have.
[00:23:16] I think everybody's trying to get as many learnings as possible. And there's not just the learning risk associated with this, the risk of not having clean signals. There's also the risk of the channels not really understanding what's happening and what to do.
[00:23:41] So, I think we talked a bit last time about how Facebook's machine learning is quite sensitive and even Google's machine learning to an extent also is getting to be more sensitive. All they're trying to do, all they're trying to do is learn. And so if the majority of what's happening on an account is
[00:24:06] all tests, and there's no semblance of successful progress or successful signals happening, then those channels are just going to get confused. So if we take our accounts, like our evergreen accounts, and start to test a ton with them, then Facebook's going to lose its ability to say, Okay, this is the thing that was steady,
[00:24:33] and I know, as Facebook, that I can put more budget into this ad set or into this set of creative and know that I'm going to get purchases or conversions or whatever we're optimizing towards. So doing too much at once doesn't just make it confusing for us as individuals; it also makes it confusing for the channels.
[00:24:56] And that's even worse, because it's really hard to get those channels back on track.
[00:25:04] Nima Gardideh: And this is an interesting one, cuz I think last time you talked about how there's some level of humanization we're doing in just talking to the channels, or thinking about the channels. But on the technical layer, this is just a fit problem, right? The machine learning models are trying to fit on what success looks like, and if you don't give them enough time plus data to fit anywhere, they're just gonna be constantly in search of a fit, and that results in volatility. So it all makes so much sense to me, right? So we run these tests.
[00:25:37] Let's talk about the analysis part, because I think this process was popularized by marketers that were given a gift, which was the gift of scale, where they could run experiments and have true statistical significance after they run these experiments. They were working at these massive-scale companies with lots of traffic and lots of users. How do we think about that? We work with all sorts of companies, but quite often, even if they're spending half a million or a million dollars a month, you're not gonna get significance from that. Is there a framework we use? How do we think about it when we run one of these experiments? Just to use an example: okay, we run an audience test. How long do we wait? How much money do we put behind it? How do we think about designing the experiment such that after it's done we can trust its results to some extent?
[00:26:48] Karim El Rabiey: Yeah, that's a really great question. Before we launch, to what you just said, we will actually document how long we want a test to run. So very rarely do we launch a test where we're like, All right, we'll just wait until we get some idea or some signal of what's gonna happen.
[00:27:12] We will say, we want 500 sessions split between these two landing pages, or we want to spend $700 on each. And the prediction, or, to your point, the appreciation of the time that it will take, is less about statistical significance.
[00:27:39] Because I don't think we can afford that in a lot of cases. We have to balance significance with speed, and so instead we'll rely a bit more on direction and on confidence. And in this case, sometimes what we will do as a group is, rather than use statistical confidence, we'll use group confidence. It's like, okay, we are people who are on this account and have a really good understanding of it, and this includes the client as well, and this test has generally been showing that variant B is more likely to be successful. As smart people, as marketers and intelligent people who have a close idea of what happens in this account,
[00:28:30] should we go for it? And I think that, in the absence of the luxury of statistical significance, your intuition muscle, or your intuition, becomes really important. And I think that comes with time and familiarity with an account, familiarity with a channel, and sometimes just a willingness to make a call and make a decision,
[00:29:03] and trust that at some point you may get opposing information, that this test you had confidence in, that you went all in on before statistical significance, went wrong, and create a backup plan. Like, alright, we're gonna put everything into this right now, but we know that if after a week it actually collapses, it's really easy for us to revert, because we implemented the test in a certain way where we didn't override a past experiment.
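One way to picture the "decide thresholds before launch, then read direction rather than strict significance" idea is a small helper like the sketch below. The 500-session threshold echoes the example above, but the function name, the numbers, and the rough z-score sanity check are my own assumptions, not Pearmill's tooling; the final call still goes to the group review Karim describes.

```python
from math import sqrt

def directional_readout(sessions_a, conversions_a, sessions_b, conversions_b,
                        min_sessions_per_variant=500):
    """Directional read on an A/B landing page test with predefined thresholds."""
    if min(sessions_a, sessions_b) < min_sessions_per_variant:
        return "keep running: predefined sample threshold not met"

    rate_a = conversions_a / sessions_a
    rate_b = conversions_b / sessions_b

    # Rough two-proportion z-score, used as a sanity check on the direction,
    # not as a hard significance gate.
    pooled = (conversions_a + conversions_b) / (sessions_a + sessions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    z = (rate_b - rate_a) / se if se else 0.0

    direction = "B ahead" if rate_b > rate_a else "A ahead"
    return f"{direction}: {rate_a:.2%} vs {rate_b:.2%} (z ~ {z:.2f}) -> take to group review"

# Hypothetical numbers for illustration only.
print(directional_readout(520, 21, 540, 33))
```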
[00:29:29] Nima Gardideh: So it feels like some form of group human intelligence going on behind trying to make the decision. And then there's a reversal path always planned out for when the test is so significant that it would mess up the whole account, effectively.
[00:29:47] Karim El Rabiey: Yeah, generally, because again, the channels are pretty sensitive, and I think this part is specific to channels and maybe not as applicable to, say, testing out email subject lines: if you change something that is working, or if you make an impact on something that is working, it's really hard to go back to that thing.
[00:30:13] So if we have an evergreen account, and we have a set of new creatives that we wanna test, and we put them in the evergreen account and they tank, it's really hard to get that evergreen account back to the state that it was in before it started tanking. So we wanna have separation for new creatives.
[00:30:31] We're gonna launch them in some other campaign, and if they do well, we'll put them into evergreen, but we're not gonna pause the old ones, so that we have, like, a track-it-back option. There's a term for it, I forgot what that term is. It's... back? No, no. There's a saying about two people whose first language is not English. [Laughs]
[00:30:54] Nima Gardideh: Yeah, I'm trying to figure this out. This is an interesting thing, especially with the understanding that the inputs you put into the channels are non-reversible, like they're immutable in some sense. Is that what you were looking for? And so the state can never go back to the original state that you were in.
[00:31:17] So after you change it, you've mutated it to a new state and you cannot go back. And so that's an interesting thing to talk about very briefly: is creative the only one, or what are the other things we try to separate out from evergreen campaigns, to protect essentially the majority of the budget, so that we continue having some stable state before we hopefully unlock new levels of performance?
[00:31:44] Karim El Rabiey: Yeah. I'll just use the context of Facebook because it's a lot more sensitive. If we're not using something like Google Optimize where we can split traffic at the URL level, then we will create separate campaigns for landing page testing. If we're doing structure tests, we want to see the difference between
[00:32:10] a campaign that runs on a daily budget versus a campaign that runs on a lifetime budget and gets turned off during weekends. We won't do that within our evergreen campaign. We'll actually start separate campaigns, and sometimes we'll have our evergreen and then another campaign that is essentially the exact same as our evergreen, but that's running against the test variant. I feel like those are the predominant ones, plus bidding tests. So account structure tests, bidding tests, creative tests, and landing page tests: keep those separate.
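As a rough picture of the separation Karim describes, here's a minimal sketch of an account layout with an evergreen campaign and a mirrored test campaign that differs only in the variant under test. The naming and fields are invented for illustration; they are not Facebook's API objects or Pearmill's actual structure.

```python
# Hypothetical illustration of "evergreen vs. mirrored test campaign".
# Nothing here talks to an ad platform; it just encodes the separation idea.

evergreen = {
    "name": "EG | Prospecting | Best audiences",
    "budget_type": "daily",
    "audience": "best_performing_lookalike",   # proven audience stays put
    "creatives": ["ugc_v3", "static_v7"],      # proven creatives stay live
}

# Structure, bidding, creative, and landing page tests run here instead,
# so a tanking variant never disturbs the evergreen campaign's learning.
test_mirror = {
    **evergreen,
    "name": "TEST | Lifetime budget, weekends off",
    "budget_type": "lifetime",                 # the one thing being tested
}

# If the test wins, its setting is promoted into evergreen; the old setup
# is left running (not paused) so there's a path back.
```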
[00:32:56] Nima Gardideh: Okay, so we analyze, and the analysis part is hard because we're not gonna have significance here. But it sounds like basically building conviction over time is the name of the game: after we've run enough tests around an area, we're gonna start building some intuition that, hey, we think this is gonna work. We might then, as you named it, systematize.
[00:33:24] And the systematization process, it sounds like it has two parts. One is just creating the documentation such that new team members or other people coming on can be at the same level of understanding of the accounts. And the other part is just implementing the thing that was just learned, across maybe the account itself or across clients. Right? Am I missing anything from the systematization?
[00:33:50] Karim El Rabiey: No, that's essentially it. It's the process of documentation and then writing, essentially, just the next step, whether that next step is applying it to everything and having it become part of our playbook, or it feeds into the next set of ideation.
[00:34:07] Nima Gardideh: So, yeah, let's talk about, I guess, how we track all of this and how it ends up becoming documented at the final stages. What do we use? How do we think through documentation? What tools do we use? How do we write about them?
[00:34:21] Karim El Rabiey: We predominantly use ClickUp, and that's where most of the process runs. But once something has graduated, we also use Notion, and we'll talk about the relationship there. ClickUp, like any product management or test management tool, just allows you to create the different stages that something is in.
[00:34:48] So when we are submitting things to our backlog, there is a backlog column, and all that somebody has to do is just write out what that idea is and maybe a couple of sentences on why they think it might work or why it's valuable. They don't have to go and fully bake it out. We really want to remove the difficulty of adding things to the backlog, the barrier of adding things to a backlog, because it's super important to keep that full. Once something's added, if it gets chosen in the prioritization, that card becomes the place where everything lives. That ClickUp card, or that ClickUp task, is where the experiment design lives: the hypothesis, the experiment design, the prediction, and then week-to-week or day-to-day, or whatever the frequency is, updates on how that experiment is running. And so when we prioritize it, we'll bake it out a bit more. We'll put in one part of our ClickUp card why we're running it, what the hypothesis is, and then how it's actually going to be set up. And when it's actually running, it moves. I don't know if I'm boring people with the column nomenclature of our ClickUp space [laughs], but it goes from backlog to on deck to running. And at whatever frequency
[00:36:21] we think it's important to look at that test, whether that's after three days or on a weekly basis or however long, the test owner will update how it's looking so far, any findings that they have that we may want to apply immediately, and what the next step is for this test.
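For readers who like the workflow spelled out, the column flow Karim mentions can be summarized as a simple state progression. The names just mirror his nomenclature; the final "complete" stage is my assumption based on the closing-out step he describes below.

```python
from enum import Enum

class ExperimentStage(Enum):
    BACKLOG = "backlog"    # one-liner idea plus a sentence or two of rationale
    ON_DECK = "on deck"    # prioritized: hypothesis, design, and prediction baked out
    RUNNING = "running"    # updated at whatever cadence the test owner set
    COMPLETE = "complete"  # assumed final column: results, why, and prediction gap written up
```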
[00:36:42] Sometimes the next step is just inaction: let it run longer. We slotted two weeks for it; in our first week we're seeing that this looks like it's going to perform really well, but we knew that it was a less competitive week, and so we don't yet feel the confidence to say that it will do better.
[00:37:04] Let's run it for this week, which seems to be a bit more competitive, and we can see if it still holds true. And this is a part that's maybe more interesting from a client-agency relationship standpoint: our clients have access to our ClickUp. So whatever we use for ourselves is what we also include our clients in.
[00:37:27] We're not creating these slides and presentations afterwards that, admittedly, could be polished and look nice. We're just putting it all in there. So at any point in time they can see what's running, what the status of each initiative is, what the learnings are, and when it's actually completed. There are three things that should be getting written down. One is obviously what the final results were. Another is why we got those results, or why we think we got those results, so whatever set of analysis we're going to do that actually explains it, not just, Okay, this was the result.
[00:38:11] We're happy with it. And then also how close we were to our prediction, like, did it turn out how we thought? This is kind of a hard-ass thing to say, but if an experiment is really successful, and we initially thought it was gonna be slightly successful, just a small incremental thing.
[00:38:38] Of course, celebrate, let's be happy and all that. But that also means that we didn't really understand to what extent this experiment could have an impact, which means we either didn't understand the channel that well, or we didn't understand the audience, or we didn't understand how people were going to react to it, right? So there's just another level of analysis to be like, why was our prediction off?
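As a rough illustration of the prediction-gap check Karim describes, here's a minimal sketch that mirrors the kind of fields he says live on the ClickUp card (hypothesis, prediction, result). The field names, the CPA numbers, and the 25% threshold are hypothetical; the point is just that a large gap in either direction triggers a write-up of why the prediction was off.

```python
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    name: str
    hypothesis: str
    predicted_cpa: float
    actual_cpa: float

    @property
    def prediction_gap(self) -> float:
        """Relative gap between prediction and outcome (sign shows direction)."""
        return (self.actual_cpa - self.predicted_cpa) / self.predicted_cpa

exp = ExperimentResult(
    name="UGC creative vs. static",
    hypothesis="UGC creative lowers CPA slightly against the evergreen audience",
    predicted_cpa=48.0,
    actual_cpa=31.0,
)

gap = exp.prediction_gap
# A big miss, even a pleasant one, means the channel or audience model was
# wrong somewhere, so it prompts a second round of analysis.
if abs(gap) > 0.25:
    print(f"{exp.name}: prediction off by {gap:+.0%}: write up why the model was wrong")
```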
[00:39:01] Nima Gardideh: So that gap of prediction is important. If we're close, but in the other direction, let's say, that's not too bad. But if we're very far off, even if it's in the positive direction, like we thought it was gonna be good and it was great, then there are learnings that could be had by just analyzing what was wrong about your intuition around this, so that next time you do a better job predicting, effectively. Gotcha.
[00:39:29] Karim El Rabiey: And like I said, the reason that we have to build this intuition, this level of closeness, is because we can't always, like you said, rely on statistical significance, or on things happening the way they rationally or logically should. And I think our paid social lead, of the people I've seen who run paid social accounts or Facebook accounts, I've seen them use their intuition much more than I've previously and historically seen, and that's gotten to a pretty high level of success. So that's why this intuitive muscle is really important to
[00:40:21] us. But sometimes, I will admittedly say, it is hard to get buy-in on. And I don't know if we've done a good enough job of explaining why this is really important.
[00:40:40] Our other co-founder, Mary, has used this analogy really well, where she's like: sometimes somebody tells you a gift idea, like, Hey, I'm thinking of getting this for this friend, and you know why it's not a good gift idea.
[00:41:03] And you can use your intuition to know that they shouldn't get it for them, but you can't logically point out all the things that would make this a bad gift for this person. You can do that to some extent, but there's a part of it that's just your familiarity and your understanding of them. And maybe we just don't have the vocabulary for it yet, but you feel something there.
[00:41:30] Nima Gardideh: It's hard. It sounds very similar to, if you listen to investment folks, people that are wielding a lot of money. They do all this analysis, they do all this logic work, right? And then at the end of the day, they make a call, and then you ask them, how did you make that decision at that time? And they just tell you there was a vibe, effectively, right? [Laughs]
[00:41:53] It was just, my intuition was telling me that's what makes sense. And your intuition basically has all that information in it, right? You fed it with all this information. If you're just making purely intuitive decisions, I think that's wrong, but it sounds like what you're saying is: hey, do all the work, where you've looked at all the data, fit it all into your brain and your body, and at that point your intuition says this is the right move.
[00:42:26] There's some level of intelligence that's built beyond being able to write out in a flowchart why you think this is the right move, and we've gotta be able to maybe explain that to a client. But it just seems like it's working.
[00:42:42] Nima Gardideh: Yeah, I guess the last question is, how do you organize this? This is the part that's harder. I understand it at an atomic level: hey, here is this one campaign, here's an experiment that we ran, here's what we learned from that one experiment. But over the arc of time,
[00:42:58] how do you make it so that it's easily readable and legible, and someone else coming in can catch up quickly without having to go through every single document and read it? Maybe the answer is to read every single document in chronological order so you can rebuild the intuition we just talked about, or maybe there's an abstracted way of pulling all these learnings out. Okay, how do we do it better?
[00:43:26] Karim El Rabiey: Yeah. Part of it is leveling it up into larger themes. If you give somebody just a list of experiments to run through, each experiment may have its individual context, but usually when we're thinking of three initiatives or three experiments to run, they are around a similar theme.
[00:43:45] And there's nothing in ClickUp that says, this is the theme. So it has to be supported with other docs. Part of the way that we group our initiatives is around sprint planning. So we run it in sprints. The first full week of every month, we will dedicate to closing out all the experiments from the past month, or all the ones that we have data on.
[00:44:16] We make sure that they're sealed, written up, and we know what the next steps are. We align on what the biggest problems are for the coming month, the biggest challenges that we have. And then the next set of experiments are grouped around that.
[00:44:33] So usually what's running in the first week is just the carryover from the weeks before, but that week is dedicated to knowing what we're gonna do in week two, week three, week four, and sometimes week five, or the start of a fifth week, since it's sometimes five weeks in a month.
[00:44:52] Nima Gardideh: Yeah, or a portion of the week. We promise we can count. We can count the number of weeks, or days, in a month. [Laughs]
[00:45:00] Karim El Rabiey: But yeah, so there's that monthly doc that gets written up that has all the context from what has happened before, the challenges that we're having, and then the next set of experiments that we want to focus on, and so that's a bit easier to get into.
[00:45:24] And we usually encourage people, if they're joining the team, to read through those docs; it makes it a bit more thematic. The other problem that exists, though, is that sometimes these are done in the spur of the moment, where we document because we want to document, but people have varying degrees of skill in written communication and written documentation. And we haven't really perfected this yet.
[00:46:00] But what we're finding is going to be important is to have, like, a re-edit of all the things that we documented, where we're like, Okay, we're now two or three months removed from this. Does what's written still make sense? Would it make sense to somebody that's coming into this for the first time? And then going through the process of rewriting it, with the intention of it being read versus the intention of it being documented.
[00:46:29] So there are all the things that we can do along the way to make sure that it's there, written, and documented. But I think we're gonna have to have this other lens on top, which is just a periodic review.
[00:46:43] Nima Gardideh: Yeah, that seems like an interesting challenge. Having now lost all the context, does it still make sense? Because when you're in it, you have all this added context of, hey, what's been happening all month? Or even the year, right? Oh, this was March of 2020, there was this thing called the pandemic back then, right?
[00:47:02] Forgetting all the context is probably a big part of the problem, which makes so much sense. And systematizing that part of going back in time and rereading sounds like a lot of extra work, but it probably makes sense, so you can really support new people that come on.
[00:47:24] Karim El Rabiey: The benefit of that, even though it's extra work, is that you probably don't need the people who have worked on it to be able to say whether something makes sense or not. You can have, like, a writer that just reads through it, maybe a more technical writer, just go through and clean it up. So I think you can parallelize that work.
[00:47:59] Nima Gardideh: Well, I think we can cut it off there. This process is obviously something we've gone really deep into over the past few years, and by the end of it, it's very clear that there are still areas of improvement, and we're gonna keep working on it. Thank you for telling the story. Appreciate you coming on.
[00:48:19] Karim El Rabiey: Thanks for having me again. It's always super fun. Have a good one.
[00:48:23] Nima Gardideh: And that's a wrap. Thanks for listening to another episode of The Hypergrowth Experience. I certainly enjoyed having this conversation with Karim. I run a totally different part of the company; I work on the technology, sales, and acquisition side. So I am so grateful to have spent the time with him learning about the process that the growth team runs so smoothly, and it made me feel a lot better knowing that we're spending all this money with this thought process and this level of rigor.
[00:48:55] So I hope you enjoyed it as well. If you wanna be on the podcast or if you wanna work with us, please feel free to reach out to me. I'm nima@pearmill.com. Next episode I have one of my favorite entrepreneurs on; his name is Ian L. Patterson. He runs a company called Plurilock in the cybersecurity space, and he's one of the most individual thinkers that I know, who's been able to scale a company through a very different mechanism than the average sort of venture-backed company that you may hear about in the press. So I'm pretty excited for that episode to come out. Anyway, thanks for listening.
[MUSIC FADES OUT]