Why Creative Diversity Is the Only Way to Win With Meta's Andromeda Algorithm

If you've been running Meta ads for more than a year or two, you've probably noticed something shift. The campaigns that used to work, the ones built on tight audience targeting and a handful of polished video ads, just don't produce the way they used to. And the response from a lot of brands and agencies has been to blame the algorithm, or to blame costs, or to blame iOS changes from years ago that somehow still get brought up in every strategy meeting.
The actual problem is usually simpler than any of that. The algorithm changed how it finds people and serves ads. Most advertisers haven't changed how they feed it.
At Y'all, we spend a lot of time thinking about what Meta's delivery system actually needs from us in order to do its job well. And the answer, more than anything else, comes down to creative diversity. Real, meaningful diversity across formats, messages, and styles. Not five versions of the same video with different thumbnails.
What Andromeda Actually Changed
Meta's Andromeda update fundamentally restructured how the ad delivery system works. Before Andromeda, the system operated within the audience constraints you set. You picked an interest group, uploaded some ads, and Meta showed those ads to people inside that box.
Now the system operates more like a prediction engine. For every impression opportunity, it evaluates the likelihood a specific person will take the action you care about, the expected value of that action, and the predicted quality of the ad experience. The ads get distributed based on probability, not based on the audience bucket you selected.
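To make the shift concrete, here's a toy sketch of prediction-based delivery. Meta's actual models and weights are private; the `score` function, bids, and probabilities below are invented purely to illustrate the idea that each impression is scored per person, per ad, rather than filled from an audience bucket.

```python
# Toy version of prediction-based delivery. All numbers are hypothetical;
# Meta's real ranking models are proprietary.

def score(bid, p_action, ad_quality):
    """Simplified 'total value' for one impression opportunity:
    advertiser bid x predicted action probability, plus a quality term."""
    return bid * p_action + ad_quality

# Two ads competing for the same person. The audience "box" no longer
# decides who wins; the predicted outcome does.
ads = {
    "ugc_testimonial_video": score(bid=8.0, p_action=0.030, ad_quality=0.05),
    "static_comparison":     score(bid=8.0, p_action=0.012, ad_quality=0.08),
}

winner = max(ads, key=ads.get)
print(winner)  # the ad with the higher predicted value takes the impression
```

The point of the sketch: two ads with the same bid can reach completely different people depending on what the system predicts about each person-ad pairing, which is why the creative itself ends up doing the targeting.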
That means creative plays a larger role in shaping who the algorithm finds within the audience you allow. You still choose your objective, conversion event, geography, exclusions, and budget allocation. But within those constraints, the creative itself is the primary signal that determines which people actually see your ads. I wrote about this at length in why creative is the real targeting mechanism, and I keep coming back to it because it changes the entire game for performance creative agencies and the DTC brands they serve.
When we audit new accounts, one of the first things we look at is how much genuine variety exists in the ad creative. Not how many ads are running. How many fundamentally different messages are being tested. The distinction matters a lot.
What Counts as "Different" (and What Doesn't)
Here's where most brands and even some agencies get this wrong. They hear "creative diversity" and think it means producing more stuff. So they take one winning video and make five versions of it. Swap the first three seconds. Change the text overlay color. Test the same concept with and without a logo at the end.
Those aren't meaningfully different ads. The algorithm processes them as variations on the same signal. And when you give the system a bunch of ads that it can't distinguish from each other, it doesn't know what to do with them. The way we think about it: the algorithm needs you to put a bunch of ads in front of it that it genuinely cannot tell are the same ad.
That means your "variants" need to differ at the structural level. Different hooks, different storytelling arcs, different visual styles, different value propositions leading the message. A static ad testing a "saves you time" message and a video ad testing a "people are switching from [competitor]" message are genuinely different. A 15-second cut and a 30-second cut of the same interview are not.
We've adjusted how we think about variants at Y'all because of this. We're not going to force a variant if the only difference is surface-level. If we can't articulate why the algorithm would treat version B as a meaningfully different signal from version A, we don't run it. That discipline has actually reduced our total output slightly but improved our testing efficiency, because every piece of creative that goes live is genuinely teaching us something new.
Why Broad Targeting Paired With Diverse Creative Wins
There's a compounding effect that a lot of advertisers miss. When you pair broad targeting with diverse creative, you get the benefit of both the algorithm's scale and its learning speed.
Here's why. Broad targeting gives Andromeda the largest possible pool to search for buyers. It removes the artificial constraints that fragmenting audiences creates. And diverse creative gives the system multiple distinct signals to test against that pool. The algorithm can discover that middle-aged women in the Midwest respond to your static product comparison ad, while younger men on the coasts respond to your UGC testimonial video, all within the same campaign.
Over-structured accounts with hyper-segmented audiences and limited creative accomplish the opposite. They fragment data across too many small pools, slow down learning, and create situations where the system doesn't have enough information to optimize effectively. We've audited accounts where the structure was so fragmented that each campaign was essentially competing with itself for the same audience segments. That's a media buying problem dressed up as a performance problem.
Simpler structures, fewer campaigns, broader audiences, more creative variety. That formula sounds almost too basic to work. But the accounts that run this way consistently learn faster and scale more efficiently than the ones built on complex segmentation.
The Testing Problem Nobody Talks About
One thing I wish more DTC brands understood about creative testing: your hit rate is going to be low. That's normal.
We've worked with brands that came in expecting 80 or 90 percent of their ad concepts to perform well. That expectation creates a cycle where every "losing" ad feels like a failure, the team gets demoralized, and eventually someone decides to just keep running the winners instead of testing new ideas. Which works for a while, until those winners fatigue and performance falls off a cliff.
A realistic hit rate for new creative concepts is closer to 10 percent. One out of every ten ideas will really work. Another two or three might be salvageable with iteration. The rest won't hit, and that's fine because each one taught you something about what your audience responds to.
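The math behind that hit rate is worth spelling out, because it shows why volume of genuinely distinct concepts matters. Treating each concept as an independent 10-percent shot (a simplification, but a useful one):

```python
# With a ~10% hit rate per concept, the odds of finding at least one winner
# depend heavily on how many genuinely distinct concepts you test.
# Assumes each concept is an independent draw, which is a simplification.

def p_at_least_one_winner(n_concepts, hit_rate=0.10):
    """Probability that at least one of n concepts is a winner."""
    return 1 - (1 - hit_rate) ** n_concepts

for n in (3, 10, 20):
    print(n, round(p_at_least_one_winner(n), 2))
# 3 concepts  -> ~27% chance of a winner
# 10 concepts -> ~65%
# 20 concepts -> ~88%
```

Test three concepts and you'll probably strike out through no fault of the creative. Test twenty distinct ones and a winner becomes the expected outcome, which is why a low per-concept hit rate is compatible with a reliable pipeline.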
The key is protecting new creative with enough budget to get a fair read. If you throw a brand new ad into a campaign that's already dominated by established winners, the algorithm will give the proven performers the lion's share of impressions every time. The new ad never gets enough data to prove itself. It's like making a freshman compete with a senior. There are several ways to do this: dedicated creative testing campaigns, A/B tests, CBO with minimum spend constraints, or ASC with staged budget ramps. The specific mechanism matters less than the principle. Give new concepts enough protected spend and enough time to generate real signal before you make a call.
This is one area where understanding how to A/B test ad creative variations efficiently really matters. You need a testing framework that isolates variables, gives new concepts protected budget, and evaluates performance across the right time window. Judging a new ad after 24 hours of spend almost never tells you anything useful.
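To put a rough number on "enough data," here's a standard two-proportion sample-size estimate (normal approximation, 95% confidence, 80% power). The baseline and lift figures are hypothetical, but the shape of the answer is the point: a fair read takes far more traffic than a day of spend usually delivers.

```python
# Rough sketch: how much traffic a new creative needs before you can judge it.
# Standard two-proportion sample-size formula (normal approximation).
# z values: 1.96 for 95% confidence, 0.84 for 80% power.

def sample_size_per_variant(p_base, p_new, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per ad to reliably detect the
    difference between two conversion rates."""
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return (z_alpha + z_beta) ** 2 * variance / (p_base - p_new) ** 2

# Detecting a lift from a 2% to a 3% conversion rate:
n = sample_size_per_variant(0.02, 0.03)
print(round(n))  # roughly 3,800 visitors per variant
```

That's thousands of clicks per concept before the comparison means anything, which is exactly why a 24-hour verdict on a new ad is mostly noise.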
Static Ads Are More Useful Than You Think
There's a bias in the DTC world toward video. And video is extremely powerful. But statics serve a purpose that video can't easily replicate: they isolate the message.
When you run a video ad and it underperforms, what failed? Was it the hook? The pacing? The audio? The creator? The product shot? The call to action? You're dealing with a dozen variables in a single piece of content. Diagnosing the failure requires guesswork.
A static ad with a single image and a clear message tests one thing at a time. If the "saves you 20 minutes every morning" static outperforms the "clinically proven results" static, you've learned something clean about which value proposition resonates with your audience. You can then take that winning message and produce a video around it, knowing the core concept already works.
We use this approach regularly at Y'all. Test the message in statics first. Produce the winning concept in richer formats second. It's more disciplined than going straight to expensive video production, and it reduces waste because you're not spending $5,000 on a video built around a message nobody cares about.
The Evergreen Problem
Another pattern we see constantly in accounts we audit: seasonal creative killing scaling campaigns.
Here's how it plays out. A brand launches a new collection or a limited-time product. They produce creative around it, that creative performs well for two or three weeks, and then the product sells out or the season changes and the ad becomes irrelevant. The campaign loses its best performer, costs spike, and the team scrambles to produce something new.
Evergreen concepts that work year-round are the backbone of any scaling strategy. If your best ad only works during a holiday promotion or references a specific product that rotates out of stock, you're rebuilding your creative foundation every few weeks. That makes scaling nearly impossible because the algorithm never gets to compound its learning across a long enough time horizon.
The brands that scale most consistently maintain a library of evergreen creative that can run for months. They still produce seasonal and promotional content on top of that, but the base layer stays stable. Think about it like a portfolio. Your evergreen creative is the index fund. Seasonal pieces are the individual stock picks. You want both, but the index fund is what keeps the account healthy.
Partnership Ads: The Most Underused Lever in DTC
One specific format worth calling out: partnership ads, sometimes called collaborative ads on Meta. These are ads that run under a creator's or retailer's handle rather than the brand's handle.
Results vary by vertical and creator fit, but the directional trend is consistent: partnership ads tend to outperform brand-handle ads on cost efficiency.
We see this consistently in practice. Partnership ads feel different in the feed because they carry the social proof of the creator's handle. They blend in with organic content in a way that standard brand ads can't. And the reduction in acquisition cost makes them one of the highest-ROI creative formats available to DTC brands right now.
Despite all of that, most accounts we audit have never tested them. They're leaving one of the most effective levers for improving ad campaign ROI completely untouched.
What This Means for How You Evaluate Creative
If you take one thing from this article, let it be this: when an ad underperforms, the worst thing you can do is label the entire ad a failure and move on.
Performance creative needs to be evaluated systematically. Look at distinct dimensions. Did the opening stop the scroll? Was the product and benefit immediately clear? Did the message map to a real pain point or desire? Was there credible proof? Was there a clear path to action? When you can identify which specific dimension failed, you can fix that dimension and test again without throwing away everything that worked.
This approach turns every piece of creative, winners and losers, into usable data. And over time, that data compounds. You build a clearer picture of what your audience responds to, what objections they have, what proof they need, and what format delivers each message most effectively. That compounding intelligence is what separates agencies and brands that get better over time from the ones that stay stuck cycling through random ideas hoping something hits.
The right creative volume depends on your spend level and goals, but the principle holds across the board: more genuinely distinct concepts give the algorithm more signal. The quality and variety of that volume matters even more than the raw number. Twenty versions of the same concept teach the algorithm nothing new. Twenty genuinely different concepts, distributed across statics, video, and creator content, give it the raw material to find buyers you didn't know existed.
The Scroll Stopper vs. The Closer
One last thing worth mentioning, because it trips up a lot of brands. The creative that catches someone's attention and the product that actually sells are often different.
We've seen this play out clearly with premium products. Bright, eye-catching colors work as scroll stoppers. They get the thumb to pause. But when you look at purchase data, people overwhelmingly buy the black version, or the basic version, or whatever the "safe" option is. The bright product gets attention. The standard product gets the sale.
This means your creative strategy might need to show the exciting version in the ad to stop the scroll, while making sure the landing page features the version people actually buy. Boosting conversion rates through optimized ad visuals sometimes means accepting that the ad's job is attention and the page's job is the sale, and optimizing each for its respective role rather than trying to make one piece of content do everything.
Frequently Asked Questions
How many different ad creative formats should I be running on Meta?
We've found that accounts running at least three distinct formats (static images, video, and creator or animated content) tend to surface more performance pockets. This gives Meta's Andromeda algorithm enough variety to learn which audience segments respond to which format. Some accounts can scale well with a dominant format, but diversifying generally expands who the algorithm can find for you.
What changed with Meta's Andromeda update for DTC advertisers?
Andromeda shifted Meta from audience-based targeting to prediction-based delivery. The algorithm now evaluates every impression opportunity based on the likelihood a person will convert, rather than just showing ads to the interest groups you selected. You still control objectives, conversion events, geography, and exclusions, but within those constraints, creative diversity matters more because the creative is the primary signal shaping who sees and responds to your ads.
How often should I refresh my ad creative on Meta?
Rather than refreshing on a fixed schedule, monitor performance signals. When frequency rises above 2 to 3 on new audiences and costs start climbing, that's the signal to introduce new creative. Maintaining a library of evergreen concepts means you're not constantly starting from scratch when individual ads fatigue.
What's a realistic success rate for new ad concepts?
About 10 percent. One out of ten new concepts will be a strong performer. Two or three more might work with iteration. The rest provide learning even if they don't produce direct returns. Expecting a higher hit rate leads to under-testing and over-reliance on a small number of winning ads.
Should I test messages in static ads before producing video?
In many cases, yes. Static ads isolate the message from the production variables present in video (hook, pacing, audio, creator). Testing the core value proposition in statics first, then producing video around the winning message, reduces production waste and improves the odds your video investment pays off.
What are partnership ads and why should DTC brands use them?
Partnership ads (or collaborative ads) run under a creator's or retailer's handle rather than the brand's. Meta's 2025 case studies cite acquisition cost reductions as high as 19 percent, though results vary by vertical and creator fit. They blend into the feed more naturally and carry built-in social proof. Most DTC accounts haven't tested them yet.
How do I know if my Meta account structure is hurting performance?
The most common structural problem is not separating new customer and existing customer campaigns. If your prospecting campaigns aren't excluding existing buyers, your reported ROAS is inflated because repeat purchasers are easier to convert. Check your frequency metrics too. For cold prospecting campaigns, 2 to 3 frequency is a reasonable benchmark. When rising frequency pairs with rising CPM or CPA, that combination signals fatigue. Remarketing and high-intent audiences can sustain higher frequency before efficiency drops off.
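If you want to turn those heuristics into a quick health check across ad sets, a sketch might look like this. The thresholds come straight from the benchmarks above (frequency near 3, costs drifting upward); the 20-percent CPA drift and the function itself are our rough rule of thumb, not a Meta-published rule.

```python
# Illustrative fatigue check for cold prospecting: rising frequency
# paired with rising CPA. Thresholds are rough benchmarks, not rules.

def looks_fatigued(frequency, cpa_now, cpa_baseline,
                   freq_limit=3.0, cpa_drift=1.2):
    """Flag an ad set when frequency exceeds ~3 AND CPA has drifted
    20%+ above its baseline. Either signal alone is ambiguous."""
    return frequency > freq_limit and cpa_now > cpa_baseline * cpa_drift

print(looks_fatigued(frequency=3.6, cpa_now=48.0, cpa_baseline=35.0))  # True
print(looks_fatigued(frequency=3.6, cpa_now=36.0, cpa_baseline=35.0))  # False
```

Note that the second case doesn't flag: high frequency with stable CPA often just means a remarketing-heavy or high-intent audience, which is why the check requires both signals together.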
How can DTC brands improve their ad campaign ROI with creative?
Focus on genuine variety over volume. Each creative concept should test a meaningfully different message, format, or angle. Use protected budgets for new concepts so they get fair evaluation. Evaluate underperformers by diagnosing which specific dimension failed rather than discarding the whole ad. And test partnership ads if you haven't already.
If you're running Meta ads for a DTC brand and want to talk through your creative strategy, I'm always happy to have that conversation. Reach out and we can take a look at what you're working with.