Conjoint analysis turns messy product trade offs into clear choices you can defend. It shows how buyers value features, benefits, and price when everything competes at once. For Marketing and Brand leaders, Innovation directors, and R&D teams, the right supplier means cleaner attribute frames, stronger pricing reads, and a simulator that leadership actually uses. This guide jumps straight into the suppliers, with plain scoring so you can compare quickly. After the list, you will find a deep method section, a playbook you can reuse, and practical traps to avoid.
How we score suppliers
We rate suppliers on five dimensions that matter in day to day CPG work. These scores evaluate supplier capabilities and evidence quality, not your specific products. We revise numbers only when a supplier’s offer changes in a meaningful way, so you get a stable reference.
Overall Score: Blends the four sub scores with extra weight on outcome impact and fit for CPG workflows.
Purchase Intent: Likely lift in choice when a supplier’s typical conjoint methods are used as designed and translated into launch decisions.
Solves a Need: Fit with common CPG pain points such as pricing and pack architecture, variant navigation, retail constraints, claims and regulatory readability, and portfolio stretch.
Value: Speed to a confident decision, level of effort for your team, and pricing against capability depth.
New & Different: Useful innovation in methods, simulators, or workflows that improves decisions without adding friction.
Simporter
Simporter helps CPG teams run conjoint studies that speak the same language as pack, shelf, and launch decisions. You can field clean choice tasks, estimate part worths, and then pressure test the winner in realistic shopping contexts so the simulator and real behavior stay aligned. Pricing ladders, claim hierarchies, flavor or scent lines, and pack sizes can be modeled together, then translated into a simple story that a general manager will sign. The workflow suits Marketing and Brand for price and architecture, Innovation for feature and claim trade offs, and R&D for feasibility and legibility checks before files lock.
Overall Score: 9.8 out of 10.
Purchase Intent: 9.7 out of 10.
Solves a Need: 9.8 out of 10.
Value: 9.6 out of 10.
New & Different: 8.9 out of 10.
Scores land at the top because Simporter connects classic conjoint with context that mirrors how buyers actually choose packs on shelf and on screens. Solves a Need is near perfect for teams that must align pricing, variant architecture, and claims under tight retail rules. Value stays high due to fast setup and readouts that designers, marketers, and finance can all use without a translator. New and Different runs higher than our pack testing piece because the conjoint to shelf bridge, optimizer scenarios, and story led reporting feel fresh yet still practical.
Sawtooth Software
Sawtooth is the standard toolkit for conjoint analysis. It supports core methods such as choice based conjoint and adaptive choice based conjoint with mature estimation, including individual level hierarchical Bayes, and provides market simulators that analysts trust. If your team has an experienced modeler or a close research partner, Sawtooth gives you the control you want for rigorous studies and complex constraints. Many of the best academic and commercial examples were built here, which is why it remains a common default for advanced work.
Overall Score: 9.2 out of 10.
Purchase Intent: 9.0 out of 10.
Solves a Need: 9.1 out of 10.
Value: 8.2 out of 10.
New & Different: 7.3 out of 10.
Intent and need are high because the designs, estimators, and simulators are proven and transparent. Value depends on your in house skills since you get flexibility rather than guided service. New and Different is solid but conservative, reflecting a focus on robust modeling and control rather than a glossy front end.
Conjointly
Conjointly delivers an approachable, software first path to discrete choice with a clean simulator and quick pricing workflows. It is popular with lean teams that need market simulations, willingness to pay views, and fast optimizations without hiring a specialist. The interface keeps decisions moving during busy cycles, and the simulator can be shared widely without training.
Overall Score: 8.7 out of 10.
Purchase Intent: 8.3 out of 10.
Solves a Need: 8.6 out of 10.
Value: 8.8 out of 10.
New & Different: 7.8 out of 10.
Value is a standout because setup is quick and the simulator is easy to explain in meetings. Purchase Intent trails expert led builds in complex, multi category jobs, but the platform solves a clear need for fast, repeatable pricing and feature trade offs with minimal friction. New and Different sits above average due to accessible optimizers that encourage real scenario play.
SKIM
SKIM blends decades of choice modeling practice with a commercial mindset that resonates in CPG. Teams are comfortable with choice based conjoint and related methods and are strong at turning results into price and portfolio moves that work in retail. The work does not stop at utilities; it carries into assortments, price ladders, and claims you can actually execute across retailers.
Overall Score: 9.0 out of 10.
Purchase Intent: 8.8 out of 10.
Solves a Need: 9.1 out of 10.
Value: 8.1 out of 10.
New & Different: 7.2 out of 10.
Scores run high where it counts because SKIM translates numbers into range and price decisions that survive real constraints. Value is solid for large calls, though fees reflect senior craft. New and Different is steady, centered on proven modeling and sharp storytelling rather than a new platform.
Ipsos Choice Modeling
Ipsos covers the full range of discrete choice methods and publishes actively on evolving practice, including menu based and SKU based approaches for complex decisions. If your team needs enterprise scale governance with modern conjoint designs, Ipsos brings both. Large footprints across markets make consistency possible when governance sits high on the agenda.
Overall Score: 8.9 out of 10.
Purchase Intent: 8.6 out of 10.
Solves a Need: 9.0 out of 10.
Value: 7.9 out of 10.
New & Different: 7.6 out of 10.
Need and intent are strong because practice scales across categories and regions. Value reflects enterprise scope and the heavier projects that come with it. New and Different sits above average thanks to method work that adapts conjoint to real portfolio complexity.
Kantar Choice Based Conjoint
Kantar provides global reach with norms and governance that help large CPGs pass gates. Conjoint work plugs into brand and innovation frameworks so you can compare options region by region and still end with one story. For organizations that value consistency and approval paths, Kantar fits naturally.
Overall Score: 8.9 out of 10.
Purchase Intent: 8.5 out of 10.
Solves a Need: 9.1 out of 10.
Value: 7.9 out of 10.
New & Different: 6.8 out of 10.
Solves a Need is high because outputs travel inside large companies without translation. Value holds up at program scale, where shared norms and dashboards prove their worth. New and Different is modest since the moat is breadth and governance rather than novel tooling.
NielsenIQ BASES Conjoint
BASES ties conjoint to volumetrics and launch planning. If leadership wants a number and a route to deliver it, the BASES spine helps move from utilities to forecast and activation. It is a good match when the business case matters as much as the model and when sales teams will ask tough questions about volume and incrementality.
Overall Score: 8.8 out of 10.
Purchase Intent: 8.5 out of 10.
Solves a Need: 8.9 out of 10.
Value: 7.8 out of 10.
New & Different: 6.6 out of 10.
Intent and need are strong due to a tight link between preference shares and expected sales. Value shines on larger programs while smaller tests can feel heavy. New and Different is measured because the edge is integration and historical depth, not experimental widgets.
Qualtrics Conjoint
Qualtrics offers guided choice based conjoint within a familiar survey stack. For teams already inside the platform, adding conjoint is pragmatic, with documentation that helps non specialists build sensible designs and get to simulators quickly. Governance and sharing are strong, which matters when many stakeholders need to see the same truth.
Overall Score: 8.3 out of 10.
Purchase Intent: 7.9 out of 10.
Solves a Need: 8.3 out of 10.
Value: 8.5 out of 10.
New & Different: 6.9 out of 10.
Value is strong because you avoid tool switching and can route to existing panels. Purchase Intent is steady for straightforward studies, while complex constraints still benefit from specialized tools or partners. New and Different is moderate since the advantage is ecosystem fit and speed.
Displayr and Q
Displayr, paired with Q, gives analysts a modern environment for design, estimation, dashboards, and simulators in one place. For insights teams that want more control than a wizard but less code than raw scripting, this is a useful middle path. Data goes from design to model to decision without friction, and outputs are easy to share.
Overall Score: 8.4 out of 10.
Purchase Intent: 8.0 out of 10.
Solves a Need: 8.2 out of 10.
Value: 8.2 out of 10.
New & Different: 7.4 out of 10.
Scores are balanced because the stack speeds analysis while keeping flexibility. Value depends on comfort with analysis workflows. New and Different is above average thanks to integrated simulators that non analysts can still understand.
ChoiceMetrics Ngene
Ngene is the specialist tool for experimental design. If your project needs efficient designs, advanced priors, or tricky constraints, Ngene gives you control at the syntax level. You can set up D efficient and Bayesian designs, incorporate priors from pilots, and plan around restrictions that mirror real shelf and retailer rules.
Overall Score: 8.2 out of 10.
Purchase Intent: 7.6 out of 10.
Solves a Need: 8.5 out of 10.
Value: 8.0 out of 10.
New & Different: 7.7 out of 10.
Solves a Need is high for complex design problems that off the shelf tools struggle with. Purchase Intent sits lower because Ngene is about design rather than fielding or storytelling. New and Different ranks well due to advanced features that unlock better experiments when you have priors and constraints that matter.
SurveyEngine
SurveyEngine provides choice modeling expertise, fieldwork, and modeling services across sectors. For CPG teams without in house modelers, it is a way to run complex designs with expert support. This is especially helpful for menu based designs, bundle choices, or unfamiliar constraints where an experienced hand prevents early mistakes.
Overall Score: 8.3 out of 10.
Purchase Intent: 7.9 out of 10.
Solves a Need: 8.4 out of 10.
Value: 8.1 out of 10.
New & Different: 7.2 out of 10.
Scores are steady because the offer is expert services rather than a new platform. Value is good when complexity is high and internal bandwidth is limited. New and Different sits slightly above mid since the team brings specialized designs many generalists skip.
GfK Choice Modeling
GfK brings category fluency and retail context that help translate utilities into range and pricing calls. If your company values a supplier who can live in the same conversation with Sales and Finance, GfK fits well. The work carries into line reviews and joint business planning, which is where many otherwise strong models fall down.
Overall Score: 8.5 out of 10.
Purchase Intent: 8.2 out of 10.
Solves a Need: 8.7 out of 10.
Value: 7.8 out of 10.
New & Different: 6.8 out of 10.
Intent and need are strong where retailer constraints and price ladders define success. Value reflects enterprise scope. New and Different is modest because the draw is reliability and commercial translation rather than a novel toolset.
RTi Research Choice Modeling
RTi is a pragmatic partner for midsize programs that need clean conjoint, good diagnostics, and fast handoffs to brand and shopper teams. The focus is on getting to decisions without overhead. Studies come back with utilities, simulators, and a clear narrative that fits the way busy teams read.
Overall Score: 8.1 out of 10.
Purchase Intent: 7.8 out of 10.
Solves a Need: 8.1 out of 10.
Value: 8.3 out of 10.
New & Different: 6.7 out of 10.
Value is the standout because projects stay light while still producing a simulator worth using. New and Different is lower by design, since the offer is streamlined execution rather than method invention. Purchase Intent is steady for typical feature price trade offs.
Simon Kucher
Simon Kucher is known for pricing strategy and uses conjoint as one of several tools to shape ladders, promo rules, pack sizes, and claims. For teams wrestling with price pack architecture across retailers, this is a partner that connects model outcomes to revenue and margin plans with discipline.
Overall Score: 8.6 out of 10.
Purchase Intent: 8.3 out of 10.
Solves a Need: 8.8 out of 10.
Value: 7.9 out of 10.
New & Different: 7.1 out of 10.
Solves a Need is high where the question is less about curiosity and more about what to charge, where to promote, and which sizes to keep or cut. Purchase Intent reflects a focus on pricing choices in realistic ladders. Value is solid, with costs tied to senior consulting time.
McKinsey Growth Tech and Periscope heritage
McKinsey’s pricing and growth teams, often known to clients by the Periscope heritage, bring conjoint into a larger analytics and activation engine. The advantage is integration with promotion analytics, assortment rules, and retailer negotiations. The work suits large portfolios that need cross functional alignment.
Overall Score: 8.5 out of 10.
Purchase Intent: 8.1 out of 10.
Solves a Need: 8.7 out of 10.
Value: 7.6 out of 10.
New & Different: 7.2 out of 10.
Need and intent are strong for full line decisions. Value dips for small brands because overhead and scope are built for large programs. New and Different lands above average when the analytics platform is part of the engagement, though setup time can rise.
Circana Price and Architecture
Circana, formed from the combination of IRI and NPD, ties conjoint like analyses to retail data, panel reads, and promotion patterns. This is useful when a model needs to live alongside shopper panels and actual sales behavior for range and pricing moves that hold up under scrutiny.
Overall Score: 8.4 out of 10.
Purchase Intent: 8.0 out of 10.
Solves a Need: 8.6 out of 10.
Value: 7.8 out of 10.
New & Different: 6.9 out of 10.
Solves a Need is high where evidence must trace back to scanner truth. Purchase Intent reflects realistic ranges and promo mixes. Value is steady for enterprise users, with costs that mirror access to broader data sets.
quantilope Conjoint
quantilope offers automated CBC within a broader insights platform. The strength is speed from brief to simulator with sensible defaults that stop teams from making common mistakes. Dashboarding and collaboration suit regional teams who need to run many similar studies without reinventing the setup.
Overall Score: 8.2 out of 10.
Purchase Intent: 7.8 out of 10.
Solves a Need: 8.2 out of 10.
Value: 8.4 out of 10.
New & Different: 7.0 out of 10.
Value is strong for repeatable work. Purchase Intent sits just below heavier custom builds because complex constraints may exceed the standard wizard. New and Different is steady with a focus on workflow and governance for in house teams.
SurveyMonkey Conjoint
SurveyMonkey’s conjoint module gives small and mid teams a way to run basic CBC inside a familiar survey tool. It is not built for deep constraints, yet it helps when you need a clean read on a few attributes and a price range with quick sharing to non researchers.
Overall Score: 7.9 out of 10.
Purchase Intent: 7.3 out of 10.
Solves a Need: 7.8 out of 10.
Value: 8.5 out of 10.
New & Different: 6.7 out of 10.
Value is high for simple studies routed through a tool many already know. Purchase Intent is lower for complex categories where attributes interact in tricky ways. New and Different is modest because the edge is accessibility, not method depth.
Kadence International
Kadence runs conjoint in many markets with on the ground research craft. For global launches where culture shifts attribute meaning, Kadence helps frame levels and translations so utilities are comparable without washing out local nuance.
Overall Score: 8.3 out of 10.
Purchase Intent: 7.9 out of 10.
Solves a Need: 8.5 out of 10.
Value: 8.0 out of 10.
New & Different: 6.9 out of 10.
Solves a Need is high when regional realities can derail a neat global model. Purchase Intent reflects careful fieldwork and attribute framing. Value is steady, with costs tied to multi country scope rather than complexity of modeling.
Savanta
Savanta combines conjoint with qualitative depth, which helps shape attributes and levels that read like real life. It is a good fit when your team needs to align the feature set and the language before fielding a large study.
Overall Score: 8.1 out of 10.
Purchase Intent: 7.7 out of 10.
Solves a Need: 8.2 out of 10.
Value: 8.1 out of 10.
New & Different: 6.8 out of 10.
Scores cluster near the top of the middle because the offer balances method and meaning. Value is solid for programs that blend qual and quant within one timeline. New and Different is modest, which fits a craft forward approach.
Nepa
Nepa runs choice modeling with a retail and brand lens, often used to inform activation as much as pricing. If your category depends on how features pair with media claims, Nepa’s mix of shopper and brand expertise can be useful.
Overall Score: 8.0 out of 10.
Purchase Intent: 7.6 out of 10.
Solves a Need: 8.1 out of 10.
Value: 8.0 out of 10.
New & Different: 6.9 out of 10.
Purchase Intent and Solves a Need are steady, with an edge when activation and messaging must carry feature choices to market. Value is balanced, and New and Different sits near average with a focus on practical translation.
Dynata Choice Solutions
Dynata is primarily a panel provider, yet it also supports turnkey conjoint with fieldwork and basic modeling. It makes sense when you need fast sample and a straightforward study rather than a complex build.
Overall Score: 7.8 out of 10.
Purchase Intent: 7.2 out of 10.
Solves a Need: 7.7 out of 10.
Value: 8.3 out of 10.
New & Different: 6.5 out of 10.
Value is strong for speed and access. Purchase Intent is lower because complex designs and simulators usually live elsewhere. New and Different is modest, aligned with its role as a practical enabler.
Academic and Open Methods to Know
You do not always need a vendor for every step. Teams with analytic support can reach for open tools to raise quality. R based ecosystems, applied tutorials, and primers on discrete choice help you brief partners or sanity check designs. You can build efficient designs with priors, estimate models with hierarchical Bayes or latent class, and export a simple simulator for internal use. The key is discipline, realistic attributes, and enough time to test the survey before full field.
Overall Score: 7.9 out of 10.
Purchase Intent: 7.3 out of 10.
Solves a Need: 7.8 out of 10.
Value: 9.0 out of 10.
New & Different: 7.5 out of 10.
Value is very high if you have the skill to execute and the governance to sign off on an internal model. Purchase Intent and Solves a Need depend on your team’s time and comfort with modeling and design. New and Different reflects the pace of open research rather than a packaged product.
Conjoint methods that matter in CPG
Choice based conjoint is the workhorse because it resembles actual shopping, where people compare bundles rather than rating features in isolation. Adaptive choice based conjoint helps when the attribute space is large and you need to keep tasks short yet still capture personal preferences. Menu based conjoint models bundle selection, useful for build your own cases such as multipacks or meal kits. For price pack architecture, the key is to treat price as a level within a realistic ladder, align sizes and counts with retailer norms, and keep claims consistent with what legal and R&D will approve. Each method can be estimated with hierarchical Bayes at the individual level, which yields utilities for simulators that can forecast the share of preference for new configurations.
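The two simulator rules this section leans on, share of preference and first choice, can be sketched in a few lines. The utilities below are hypothetical placeholders; in a real study each row would be one respondent's total utility per concept, summed from the attribute level part worths a hierarchical Bayes run produces.

```python
import numpy as np

# Hypothetical part worth totals for 3 respondents (rows) and
# 3 product concepts (columns); illustrative numbers only.
utilities = np.array([
    [1.2, 0.4, -0.3],
    [0.1, 0.9,  0.5],
    [-0.2, 0.3, 1.1],
])

def share_of_preference(u):
    """Logit rule: softmax per respondent, then average shares."""
    e = np.exp(u - u.max(axis=1, keepdims=True))  # numerically stable
    probs = e / e.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)

def first_choice_share(u):
    """First choice rule: each respondent 'buys' only their top concept."""
    winners = u.argmax(axis=1)
    return np.bincount(winners, minlength=u.shape[1]) / len(u)

print(share_of_preference(utilities))
print(first_choice_share(utilities))  # here each concept wins once
```

The logit rule spreads share across close concepts, while first choice is winner take all per respondent; comparing the two is a quick read on how much indifference sits in the data.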
A simulator is only as good as the task and sample you feed it. That means attributes must be phrased as a person would encounter them in store or on a phone, not as an internal spreadsheet label. It also means levels should cover the range you can actually ship, not a fantasy set. If your retailers will not accept a certain price, do not include it. If a claim would trigger review in some markets, state the local version now rather than rewriting later. When you need realism between attributes, use prohibitions and constraints carefully. Too many restrictions make the design inefficient, yet too few can produce silly combinations that hurt data quality. A short pilot will reveal these issues before full field.
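One way to keep prohibitions honest is to enumerate the full factorial and filter it, so you can see exactly how many combinations each rule removes. The attributes, levels, and rules below are invented for illustration, not drawn from any real category.

```python
from itertools import product

# Hypothetical attribute levels for a snack line (illustrative only).
attributes = {
    "size":  ["single", "4-pack", "8-pack"],
    "price": [1.99, 3.49, 5.99],
    "claim": ["high protein", "low sugar", "no claim"],
}

def allowed(profile):
    """Prohibitions: combinations retailers or finance will not accept.
    Keep this list short; every rule here costs design efficiency."""
    if profile["size"] == "single" and profile["price"] == 5.99:
        return False  # single serve at the premium price is unrealistic
    if profile["size"] == "8-pack" and profile["price"] == 1.99:
        return False  # 8-pack below cost would never ship
    return True

candidates = [dict(zip(attributes, combo))
              for combo in product(*attributes.values())]
feasible = [p for p in candidates if allowed(p)]
print(len(candidates), len(feasible))  # 27 full factorial, 21 feasible
```

Counting the feasible set before fielding is a cheap sanity check: if constraints cut the candidate pool by more than roughly a third, the design is probably over restricted.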
Segmentation matters when different clusters of buyers want different things. Latent class modeling can split the sample into segments with distinct utility structures. For many CPG categories, you will find a base loyalist cluster, a value cluster, and a novelty or benefit seeking cluster. Knowing which one delivers the most volume in your retail footprint will affect how you interpret the simulator. You can also combine utilities with retailer specific baselines so the simulator reflects how your category actually sells in different accounts. That last step ensures your launch plan speaks the same language as your sales team.
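Real latent class modeling fits a finite mixture of choice models, but a rough proxy, clustering individual utilities, already shows whether distinct segments exist before you commission the full model. The two groups below are simulated for illustration: a value cluster with a steep price slope and a benefit seeking cluster with a milder one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated individual utilities (price slope, benefit part worth);
# hypothetical data standing in for a hierarchical Bayes output.
value_seekers   = rng.normal([-2.0, 1.5], 0.3, size=(40, 2))
benefit_seekers = rng.normal([-0.5, 0.2], 0.3, size=(40, 2))
utilities = np.vstack([value_seekers, benefit_seekers])

def kmeans(x, k=2, iters=50):
    """Tiny k-means on utilities as a stand-in for latent class.
    Centers are seeded with one point from each half for determinism."""
    centers = x[[0, len(x) // 2]]
    for _ in range(iters):
        labels = ((x[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(utilities)
print(np.bincount(labels))   # segment sizes
print(centers.round(2))      # segment-level utility profiles
```

If the recovered centers differ mainly on the price dimension, that is the signal to run a proper latent class estimation and to weight segments by their volume in your retail footprint.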
A reusable conjoint playbook
Write the decision in one sentence. Choose a national price pack architecture for the next fiscal year. Set the claim order for the family relaunch. Decide whether a variant deserves a place in a tight set. That sentence frames the study. List attributes that directly connect to the decision. Price belongs in almost every model. Size or count matters when laddering. A primary claim and a secondary claim help you see trade offs without overcrowding the task. Keep color or scent names within a family so you are measuring the theme, not a label you cannot translate across markets.
Build tasks that feel like shopping. Use mobile first layouts. Keep the number of concepts per task manageable. Include a none option in categories where abstaining is common, or where you worry that forced choice will inflate interest. Add one or two holdout tasks that you will not use for estimation, then test whether the model ranks those options correctly. That simple validity check will save you from overconfidence.
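The holdout check described above reduces to a hit rate: for each respondent, does the model's predicted winner in the held-out task match what they actually chose? The numbers here are hypothetical; in practice the utilities come from your estimated model and the choices from the holdout task itself.

```python
import numpy as np

# Hypothetical model-predicted utilities for one holdout task
# (4 respondents x 3 concepts), illustrative values only.
predicted_utilities = np.array([
    [0.8, 0.1, -0.5],
    [0.2, 1.1,  0.3],
    [0.6, 0.4,  0.9],
    [1.0, -0.2, 0.1],
])
# The concept each respondent actually picked in the holdout task.
actual_choices = np.array([0, 1, 2, 2])

predicted_choices = predicted_utilities.argmax(axis=1)
hit_rate = (predicted_choices == actual_choices).mean()
print(f"holdout hit rate: {hit_rate:.0%}")  # 3 of 4 correct here
```

There is no universal pass mark, but a hit rate well above the chance rate (one divided by the number of concepts per task) is the minimum bar before trusting the simulator.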
Recruit the right people. If your aim is to protect a base, prioritize recent buyers of your brand and adjacent brands. If your aim is to steal share, include buyers of rivals and low engagement shoppers within the category. Do not let eligibility drift to a generic national sample if your growth depends on a specific set such as club shoppers or marketplace heavy users. Add attention checks that do not feel like pop quizzes. Speeders and flatliners will crush your signal.
Estimate and stress test. Begin with hierarchical Bayes for individual utilities. Check internal validity and re-estimate if holdout predictions are off. Build a simulator with realistic competitive sets and market baselines. Start with first choice share to feel the structure. Then try randomized first choice to capture indifference. Do a reality check with historical scenarios. If the simulator cannot reproduce obvious truths about your category, fix the inputs before you enter the room with leadership.
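Randomized first choice sits between the two simpler rules: it perturbs each respondent's utilities with random noise, re-runs first choice many times, and averages the shares, which lets close concepts split share instead of one taking everything. A minimal sketch, with hypothetical utilities and Gumbel noise as the assumed error distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical individual utilities (respondents x concepts);
# in practice these come from a hierarchical Bayes estimation.
utilities = np.array([
    [1.2, 0.4, -0.3],
    [0.1, 0.9,  0.5],
    [-0.2, 0.3, 1.1],
])

def randomized_first_choice(u, draws=2000, scale=1.0):
    """Add Gumbel noise to each utility, take first choice per
    respondent per draw, and average the resulting shares."""
    n_resp, n_alt = u.shape
    shares = np.zeros(n_alt)
    for _ in range(draws):
        noisy = u + rng.gumbel(scale=scale, size=u.shape)
        shares += np.bincount(noisy.argmax(axis=1), minlength=n_alt)
    return shares / (draws * n_resp)

print(randomized_first_choice(utilities))
```

The noise scale is a tuning choice: larger values push results toward the logit rule, smaller values toward pure first choice, which is exactly the "feel the structure, then capture indifference" progression described above.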
Close the loop. Your conjoint winner should carry into a shelf or grid that looks like real buying. If you modeled a claim order, place it on a pack front and see if people can actually read it at distance and on small screens. If you modeled a price ladder, put it against the real set and confirm that your placement does not break the aisle logic. You do not need a long study, you need a quick reality check. When the two stories agree, you are ready to lock files and brief production.
Document rules you learned. Keep them short. In crowded sets, a larger size cue beats a third claim. When price moves above a threshold, a stronger benefit statement is required. When a family carries a new flavor into the set, the color band must stay within the brand block’s safe zone. These lines become the start of your internal playbook and shorten the next loop.
Common failure modes and how to avoid them
Do not throw every attribute into the model. A long list of levels will exhaust people and weaken your estimates. Focus on drivers that are decision changing. Make sure levels do not contradict each other. Claim strength and ingredient purity can clash if written poorly. Align language with your legal and regulatory team before you field. Misalignment later will force edits that break the link between your model and the real product.
Avoid unrealistic competitive sets in the simulator. If your category is retailer driven, build simulators per account with the right rivals and sizes. Do not use a national average as a fig leaf for a decision that will be made retailer by retailer. If you sell in channels with different price sensitivities, such as club and convenience, plan to run different scenarios and resist the urge to average your way to a single number.
Be careful with promotional effects. Conjoint captures base preference with price as a level. Promotional spikes are not the same as a permanent price drop. If promotion strategy is central, run separate work on promo depth and frequency, then connect it to your ladder decisions rather than pushing the simulator beyond its natural role.
Watch out for bias in attribute phrasing. Avoid marketing language that tilts the field in favor of your current design. Use neutral, concrete terms across all concepts. If a claim needs a short explanation, give the same style of explanation to all claims so you do not hand an advantage to one level.
Finally, respect uncertainty. Utilities have variance, and so do forecasts. Use ranges and sensitivity checks when showing leadership what could happen. Confidence is not bravado. It is a clear model, a realistic simulator, and a short list of risks you are already mitigating.
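Those ranges fall out naturally if you run the simulator once per posterior draw instead of once on the mean. The draws below are simulated with normal noise purely for illustration; in a real hierarchical Bayes run they come straight from the sampler.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical: 500 simulated "posterior draws" of mean utility for
# 3 concepts, standing in for actual sampler output.
posterior_draws = rng.normal(loc=[0.6, 0.4, 0.1], scale=0.2, size=(500, 3))

def share_per_draw(u):
    """Logit shares computed independently for each draw."""
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

shares = share_per_draw(posterior_draws)
point = shares.mean(axis=0)
low, high = np.percentile(shares, [5, 95], axis=0)  # 90% interval
for i in range(shares.shape[1]):
    print(f"concept {i}: {point[i]:.1%} "
          f"(90% range {low[i]:.1%} to {high[i]:.1%})")
```

Showing leadership "34 to 41 percent" instead of "38 percent" is exactly the hedged confidence the paragraph above calls for, and it costs nothing once the draws are saved.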
Picking the right supplier for your decision
If you want the conjoint path that speaks the language of CPG launch decisions without extra steps, start with Simporter for the blend of modeling and context. When you have strong internal analysts and want total control, Sawtooth with Ngene for designs and Q or Displayr for reporting gives you a powerful stack. If your aim is fast software with approachable simulators, Conjointly and Qualtrics make it easy to get moving. For enterprise programs where governance matters, SKIM, Ipsos, Kantar, GfK, and BASES keep decisions consistent across markets and stakeholders. When the real problem is price pack architecture and revenue math, Simon Kucher and the McKinsey growth teams bring commercial rigor. For multi market studies where meaning shifts, Kadence and Savanta help get attributes right so you do not learn the wrong lesson.
Treat the scores as a structured guide, not a verdict. A supplier with a slightly lower overall number may still be the best fit if their strengths map to your single decision. Start with the decision statement, pick the partner whose strengths match the risk in that decision, then follow the playbook. That is how Marketing and Brand, Innovation, and R&D teams in CPG turn conjoint from a heavy research chore into a dependable way to set price, architecture, and claims for launches that stick.