Roundtable: What to measure and what not to measure
An Ideaspace podcast and special roundtable event
The Ideaspace publishes interviews with authors, artists, designers, economists, researchers, venture capitalists, political activists, and others working on the frontiers of what’s valuable and in our self-interest.
Background: An all-star panel explores measurement
Listen: On the web, Apple, Spotify, RSS
Watch: YouTube
“How do we connect the values that we inherently have to what gets measured?” — Silka Sietsma, Head of Emerging Design at Adobe
Sup y’all. Welcome to the Ideaspace.
At the end of the original Data Is Fire essay, I posted an invitation: anyone who worked in data could join an experimental roundtable to discuss “What to measure and what not to measure.”
A month later an unexpectedly all-star panel convened, including people from the ESG community, tech CEOs, heads of nonprofits, designers, an artist, and other data practitioners. In our discussion we explored five questions and found many areas of overlap: how to know if what you’re measuring is real, how to create a high-quality metric, and where the frontiers of data might take us. You can listen to our conversation on the web, on Apple, on Spotify, or watch a video here. A greatly condensed transcript is below.
If you work in data or related fields, the next Bento Society roundtable will explore “What gets measured and what doesn’t get measured,” focusing on bias in measurement, and things we don’t yet measure but might in the future. If you have experience or insight to add on this topic, sign up to join the roundtable here.
Roundtable: What to measure and what not to measure
The panel:
Esther Dyson: journalist, tech investor, and the Executive Founder of Wellville, a nonprofit project devoted to helping communities sustain health for the long term
Kay Makishi: entrepreneur, advisor, and mentor who serves as Principal of KEM Growth, a growth marketing consultancy
Mario Vasilescu: a robotics engineer turned humane technologist and the CEO of Readocracy
Silka Sietsma: Head of Emerging Design at Adobe
Angeline Gragasin: writer, filmmaker, artist, and the Co-Founder and Director of Happy Family Night Market
Claudia Gonella: Marketing & Communications Director at GRESB, an investor-led initiative that assesses and benchmarks the ESG performance of real assets
Lakshmi C: software engineer, data analyst, and admissions manager for Teach for India, based in Tamil Nadu, India
Seth Killian: game designer best known for his work in fighting games, including Street Fighter
Zach First: Executive Director of the Drucker Institute, founded by the legendary management writer Peter Drucker
Brandon Silverman: CEO of Crowdtangle, a social media analysis company
YANCEY: Welcome. I’d like to start with some context for why we’re here. Tracking and measuring human activity has never been easier. Digital sensors let us track behavior, identify specific values, and even shift people's decisions like never before. At the same time, people are fearful and distrustful of data like never before. Yet any path towards a fundamentally better world requires more data and more measurement. The most important metrics of the 21st century might not have been invented yet. Where do we go from here? That's what we're going to explore. I shared five questions ahead of time. Here’s the first: What are some experiences where measurement was particularly useful?
CLAUDIA GONELLA (GRESB): If I look back on what measurements work well, there are two criteria. One, there's an objective reality being measured and a kind of consensus around it: a process through which different stakeholders come to agree on what they're measuring and why it matters. So, for example, carbon emissions. If carbon emissions get out of control they will have a detrimental effect on the climate, so it's worth measuring carbon. That's criterion number one. The second is that good measurements drive action. They're things that enable some kind of possibility to happen. It's the kind of action that breathes life into the measurement. In the carbon case, there's now huge innovation in collecting energy and GHG emissions data at the meter level in buildings, which gets aggregated through increasingly integrated technology chains up to the portfolio level and then to the national and global levels. It provides a comparable benchmark so that an investor can compare in a standardized way.
YANCEY: What are examples of failed attempts at measurement that you’ve experienced?
ANGELINE GRAGASIN (Filmmaker): I'm an artist, so I'm not tasked with measuring large groups of people or large demographics. But I have measured myself. I’m obsessed with measuring my own productivity. How I spend my time, did I get enough exercise, did I get more deep or REM sleep today than last night, etc. And I don't know that there is such a thing as a failure of measurement. Every time I decide to measure something that process reveals what my values are. Why am I measuring this thing? What do I anticipate the outcome of this measurement will be? Measurement is just the first step in a process of understanding. What’s really valuable is our analysis of that data and how we derive meaning from it. For me as a filmmaker and a writer, the narrative that I construct from that measurement is always the most insightful. I don't think the act of measuring could ever be considered a failure because it always leads to knowledge.
SILKA SIETSMA (Adobe): There's a typical way to measure a product. In typical product design you think about the year ahead and then you break it down into quarters. At the end of the day what gets measured as success is your growth, and it's usually either user growth or financial ROI. Those figures are set at the beginning of the year, and at the beginning of each quarter they get adjusted. That's what you aim to achieve, and if you achieve it you’re considered the hero. But there's also a constant discussion: “What are we doing it for?” I work with lots of designers who inherently really care about people… but at the end of the day there's a disconnect. We have more information that we can use to measure success, but it's not being thought of like that. It's still being thought of as how much revenue and growth we can have. How do we connect these values that we inherently have to what gets measured?
SETH KILLIAN (Game designer): It's a question of how we understand things that we actually care about in terms of human behavior. We've seen bad and unhealthy tethers between human behavior and spending, which is obviously something that's important to the people who are doing the tracking. That’s where we get sugar highs and these other unhealthy unsustainable behaviors. What does long-term healthy player engagement look like? The answers are remarkably unsurprising. Spending time with your friends is really good. Some variety of experience is really good. These very obvious trends are now becoming clear. Still, it's a controversial concept. Because every plank you propose about what constitutes game health is controversial apart from money, which is pretty easy to align upon. The other stuff is more complicated.
YANCEY: How do we know that the things we’re measuring are real?
BRANDON SILVERMAN (Crowdtangle): The more transparency the better, including on the development of the metrics themselves. A lot of big questions in the social media space are only going to be answered through collaboration between private companies, the outside world, experts, civic organizations, government, etc. The only way you can do that is through transparency. We have to acknowledge the metrics are never going to be perfect. They're going to be biased in a million different ways. They're going to be fundamentally based on values. They're going to be subject to good and bad analysis. Creating a marketplace of debate around both the metrics and the analysis feels like one of the more concrete things that will help.
SETH KILLIAN (Game designer): The greater the multiplicity of viewpoints you bring to an attempt to measure right, the higher the quality of data you get. How do you do that? By spending more time and energy doing those kinds of evaluations. If we're talking about wanting to get a really high-quality metric, it’s going to take us a long time to do that. We have to coordinate with a lot of different people and potential viewpoints. There’s a cost to getting a high-quality metric. You're never out of the balancing act. It requires a two-sided view: what is the right question, and what is the right measurement to get quickly to the heart of it.
CLAUDIA GONELLA (GRESB): My concern generally is that measurement overreaches a little bit. We've gotten so good at certain types of measurement, at optimization, efficiency, metrics, the things we can measure, that sometimes we're just a little bit obsessed and we don't match it with the territory. It's a kind of map and territory mismatch. In my world people often talk about ESG metrics as also measuring sustainability. They don't, because sustainability is a much more complicated thing. You can't just take this map of static ESG and apply it to a territory of complex, evolving, interconnected systems. And even sustainability isn't measuring everything as we understand it. So we're sitting here optimizing for the map when we're not really dealing with the territory.
YANCEY: What are the limits of measurement? Are there things we cannot or should not measure?
ZACH FIRST (Drucker Institute): We have a desire to measure things in ever-shorter timespans, but the important phenomena we’re trying to measure are often inherently slow-moving. We update our scores at the Drucker Institute annually. That's about as fast as we think it makes sense to update scores for large corporations and how they manage themselves. One of the most frequent questions we get is: have you considered doing a quarterly update, something that conforms a bit more with the financial reporting schedule? We refuse to do it. Some of the least valid data sources we've tested are the ones that are based on real-time sentiment analysis. They don't really relate to anything. There's such a hunger and a desire to measure in these infinitesimally small time slices that it generates a tremendous amount of bullshit. We end up actually measuring the wrong things in an effort to be fast.
YANCEY: Where is there unexplored potential for data and measurement?
ESTHER DYSON (Wellville): The more we measure biases induced by AI the more we realize we're measuring our own biases. If AI produces this it must be because we produced it ourselves first.
MARIO VASILESCU (Readocracy): Obviously I’m biased in terms of what we're doing, but I'd love to see a lot more measurement around information pollution. We measure a lot for advertising, but when it comes to the information ecosystem and the amount of noise, I'd love to see more measurement.
BRANDON SILVERMAN (Crowdtangle): Thinking about the maturity levels of metrics and whether there could even be a metric of metrics. Like, this is a metric that we feel very confident in versus one that’s in development or beta, which are very separate things. Being transparent about that. Trying to limit the number of metrics at the top, while then adding way more at the bottom and seeing which ones graduate over time.
SILKA SIETSMA (Adobe): One area we haven’t explored is digital wellbeing. Right now it means limiting time on screen, but that’s just one metric. There are so many other ways we can think about our wellbeing. That's an area that needs to grow. I have two kids and we talk about their love of video games, and I love them too. They understand all the mechanics that get them addicted and they love that, but they also hate the fact that they can't focus on other things because of it. We think they're not aware, but kids know exactly what’s going on. So if they do understand, why can't they have the choice to tweak those things? “For this period I'm just going to learn a skill, or I want to slow down this addictive mechanic.” I think there will be a shift. Instead of product owners saying “this is what we should be measuring,” end users will say we want agency in that.
Listen to the complete conversation on the web, Apple, Spotify, or subscribe via RSS. Watch on YouTube.
MY TAKEAWAYS
What stood out to me from the conversation:
Multi-stakeholder metrics are critical to creating widespread adoption (and take time to get buy-in and get right)
Important to be clear on what’s being measured and what’s not — don’t think you’re measuring more than you are
Always measure on longer timelines
Metrics that give people the ability to manipulate themselves could be powerful (Esther Dyson shared a similar idea earlier this week)
A metric of metrics that tracks emerging forms of measurement could accelerate adoption and encourage multi-stakeholder collaboration
This quote from Angeline: “Measurement is just the first step in a process of understanding”
Thanks to all the panelists for their wisdom and to you for reading and listening. To join the Bento Society’s next roundtable exploring “What gets measured and what doesn’t get measured,” apply here. To support more projects like these, become a member of the Bento Society.
Peace and love my friends,
Yancey
The Bento Society
The Ideaspace is published by The Bento Society, which hosts weekly events and supports projects exploring the frontiers of what's valuable and in our self-interest.