Meghan (Meggie) Brody
Most content designers and strategists know that when content operates as a service rather than a strategic partner, its impact is cut off at the knees: writing becomes a matter of plugging words into a space instead of letting content guide the experience. This lack of content-first design is not a new phenomenon, but one of its far-reaching effects that's rarely discussed is the resulting inability to effectively measure content value.
Content value—its impact on users and businesses—cannot be meaningfully measured without a content-first approach that involves content professionals helping to frame the problem, define success, and ultimately plan for evaluation.

Why is a content-first approach necessary to measure content value?
Content-first design means understanding and mapping what users need to know before any other design work begins, which puts users, rather than prematurely determined branding or visuals, at the center of the experience.
Content shoehorned into a design a week before launch could be measured, but should it be? It's little more than placeholder copy: it isn't derived from strategy and likely isn't meaningfully informed by customer research. Why expend resources evaluating content that was never set up to be effective? It's like cutting the leaves off a weed instead of pulling it out at the root.
When involved at the start of a project, content designers can ask the right questions and plan how to evaluate content that's informed by existing research, best practices, and strategy. When content professionals work cross-functionally to frame the problem and define success, they can effectively test content and determine its value and impact.

Why is content measurement important?
- Having content-specific data allows content designers to shape content strategies and frameworks around hard evidence for scalable, consistent execution, which means better content for specific user groups.
- Data-informed content decisions are stronger than subjective ones. When paired with best practices, they also streamline the design process: feedback can be contextualized with the why behind each decision, which reduces unnecessary, subjective critique.
- Content value can often be translated into UX- and business-specific key metrics that show content’s value quantitatively, which is often easier to evaluate and communicate cross-functionally. (With the caveat that content design is both an art and a science; being data-informed is different from data-reliant, and doing this well requires a content designer’s expert judgement.)
All of these reasons ladder up to how stakeholders understand content as a discipline, helping content professionals frame their work as something measurable with intention rather than plug-and-go “wordsmithing.”
How do we measure content value?
Approach it the same way you would any other test: define the problem that the content is designed to solve, record your hypothesis around why and how the content solves it, identify the appropriate test (ideally with a UX researcher), and evaluate the results, which hopefully include user and business impacts.
The appropriate type of test depends on your question. An A/B test with content-only changes isolates the content from other variables, like visual design. (It's also an easy one to get engineering collaboration on, since content string changes are often much less time-consuming than design changes.) Resonance testing is useful when there's uncertainty around specific terms, and if usability testing is already in the budget, it's easy to add quantitative questions targeting content. In a perfect world, a content designer works with a UX researcher to design the test.
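To make that evaluation step concrete, here's a minimal sketch in Python of how a content-only A/B test's results might be compared. It assumes the statsmodels library, and the variant names, sample sizes, and conversion counts are hypothetical placeholders for your own experiment data.

```python
# A minimal sketch of evaluating a content-only A/B test.
# Hypothesis (recorded up front): the revised copy increases task
# completion. All numbers below are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

# Users who completed the task (e.g., clicked the CTA) per variant.
conversions = [412, 468]   # [control copy, revised copy]
# Users exposed to each variant.
exposures = [5000, 5000]

# Two-proportion z-test: did the revised copy change the completion rate?
z_stat, p_value = proportions_ztest(conversions, exposures)

for name, conv, n in zip(["control", "revised"], conversions, exposures):
    print(f"{name}: {conv / n:.1%} completion ({conv}/{n})")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A small p-value (commonly < 0.05) suggests the difference is unlikely
# to be noise, which is evidence you can attach to the content decision.
```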
It's not correct to say that every piece of content should be tested, but every piece of content can be tied to findings from testing. Quality testing up front negates the need for exhaustive, repetitive testing down the line. Build a strong, foundational research repository you can pull core truths from when making informed content choices. Tie your content principles and strategies to your findings, and share the wealth and the credit: bring these findings to product, design, and engineering stakeholders, and acknowledge the work of your UX research partner.
If you're looking for a way to start, a great example is how a previous, forward-thinking manager of mine coordinated one baseline A/B test. For this test, we wrote three Goldilocks-esque variations of the same landing page, changing only the readability level: below our target level, at the recommended level, and above it. The outcome allowed us to attach data to our style guide's recommendation of writing for a specific reading grade level, and gave content designers a proof point to refer to when stakeholders provided subjective feedback about content being too simple or too complex.
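For illustration, here's a rough sketch of how you might verify that each variation actually lands at its intended grade level before the test runs. The textstat library, the sample copy, and the target grade are my assumptions, not the original test's setup.

```python
# A rough sketch of checking test variations against a target reading
# grade level. The textstat library, sample copy, and target grade are
# hypothetical; any readability scorer works.
import textstat

variants = {
    "below target": "Pick a plan. Pay each month. Cancel any time.",
    "at target": "Choose the plan that fits you, pay monthly, and cancel whenever you like.",
    "above target": "Select the subscription tier commensurate with your requirements; remittance occurs monthly.",
}

TARGET_GRADE = 8  # hypothetical style-guide recommendation

for name, copy in variants.items():
    grade = textstat.flesch_kincaid_grade(copy)
    flag = "on target" if abs(grade - TARGET_GRADE) <= 1 else "off target"
    print(f"{name}: grade {grade:.1f} ({flag})")
```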
And if getting one test off the ground is a challenge, start crunching numbers to get a dollar-amount estimate of time and resources spent fixing (or not fixing) poor content in one experience, page, or feature. This can be based on time spent on rewrites and code changes, poor CSAT, order returns, etc. A financial estimate of poorly used funds is a powerful unit of measurement when making the case for content value testing.
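If it helps to see the arithmetic, here's a minimal sketch of that back-of-the-envelope estimate. Every figure in it is a hypothetical placeholder; swap in your own team's hours, rates, and ticket counts.

```python
# A back-of-the-envelope estimate of what poor content costs for a single
# feature. Every figure here is a hypothetical placeholder.

HOURLY_RATE = 95         # blended designer/engineer rate, USD
rewrite_hours = 24       # content design time spent on reactive rewrites
code_change_hours = 16   # engineering time to ship the string changes
support_tickets = 180    # tickets traced to confusing copy
cost_per_ticket = 12     # average support cost per ticket, USD
returns = 40             # order returns attributed to unclear content
cost_per_return = 35     # processing cost per return, USD

labor_cost = (rewrite_hours + code_change_hours) * HOURLY_RATE
support_cost = support_tickets * cost_per_ticket
returns_cost = returns * cost_per_return
total = labor_cost + support_cost + returns_cost

print(f"Labor (rework):  ${labor_cost:,}")
print(f"Support tickets: ${support_cost:,}")
print(f"Order returns:   ${returns_cost:,}")
print(f"Estimated cost of poor content: ${total:,}")
```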
Conclusion
Measuring content value isn't a nice-to-have. It's necessary for content professionals to write and design effectively, to evaluate whether content is enabling UX and business outcomes, and to identify areas of opportunity. That can only happen meaningfully and economically with a content-first design approach. So if your organization or team wants meaningful, data-informed content that is user-centric and steeped in strategy, look for opportunities to take a content-first design approach and allocate resources for testing.

