Quantity and Quality

by Todd

While planning Vancouver Design Week, somebody on the team installed a nifty Slack integration that summarizes web analytics. Every day we got a nice little update in the web channel letting us know what was happening with visitor traffic, and as the PR effort ramped up it was handy to have that information coming in.

Since the festival wrapped a couple of weeks ago, the website has gone quiet and the numbers have dropped off. No cause for concern; that's exactly what we would expect.

The analytics tool, however, went beyond reporting numbers and weighed in with an opinion about how the VDW website was doing.

Arc is not happy with the numbers

Oh, the website is doing poorly, and that’s in bold so you notice.

Of course, this little bot is way off the mark. The big drop-off means we had a lot of people before and during the festival. If there were no substantial change afterwards, we'd be concerned. The correct qualitative assessment is the exact opposite of what the bot says.

Numbers are never a full story

There’s a pervasive myth in technology that if you have data, you have answers. In reality, when you have data you have something to inform answers. And that’s assuming you have the right data, but that’s a whole other story.

Seeing quantity and quality in the right light means understanding outputs as ingredients of an outcome, or as signals that verify you're on the right track. Numbers only acquire meaning within the frame of what you want to see.

Computers make it easy to turn out numbers, but turning out meaning is where the real value lies. That happens to be hard for computers because they know nothing about the real world or people’s goals. For software to reliably say whether any given data is good or bad, the people making it have to consciously figure out how the data is to be evaluated. Or, if you only want to be in the business of delivering numbers, don’t pretend you’re delivering meaning with an if-statement.
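
For illustration, the bot's bold verdict likely boils down to something like the following; a minimal sketch in Python, where the threshold and the wording are my guesses, not anything Arc actually does:

    def assess(current_visits: int, previous_visits: int) -> str:
        # A naive quality verdict derived purely from the numbers.
        # The code has no idea a festival just ended, so a healthy
        # post-event drop looks identical to a real problem.
        change = (current_visits - previous_visits) / max(previous_visits, 1)
        if change <= -0.5:
            return "Your website is doing poorly."
        if change >= 0.5:
            return "Your website is doing great!"
        return "Your website is doing fine."

Everything the if-statement needs to get this right, namely the context of a festival that just ended, lives outside the program.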

Imagine it better

Let’s do a little drive-by design to imagine how it could be better. Suppose that, on seeing a sharp change in metrics, the bot prompts you with a question like:

There was a drop in activity last week. Was this expected?

You click yes or no accordingly. If yes, the system can record that sharp changes in activity are expected and take that into account before raising future flags. Maybe you make a note to be saved with that event, and the next time there’s a big drop the bot presents your pearl of wisdom.

There was a drop in activity last week. Last time there was a similar drop it was noted that etc., etc.

If no, the system can note that as well, and maybe offer some good starting points for figuring out what’s going wrong.
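
To make that loop concrete, here's a minimal sketch of how it might hang together, with invented names and a made-up storage model (none of this reflects how the real tool works):

    from dataclasses import dataclass, field

    @dataclass
    class ChangeEvent:
        # A sharp swing in a metric, plus what the humans said about it.
        period: str
        delta_pct: float
        expected: bool | None = None  # None until someone answers the prompt
        note: str = ""

    @dataclass
    class ActivityBot:
        # Hypothetical bot that asks before it judges.
        history: list = field(default_factory=list)

        def on_sharp_change(self, event: ChangeEvent) -> str:
            # If the team explained a similar event before, surface
            # that note instead of passing judgment.
            for past in reversed(self.history):
                if past.expected and past.note:
                    return ("There was a drop in activity last week. "
                            f"Last time there was a similar drop it was noted: {past.note}")
            # No prior context: ask, don't assert.
            return "There was a drop in activity last week. Was this expected? [yes / no]"

        def record_answer(self, event: ChangeEvent, expected: bool, note: str = "") -> None:
            # The yes/no click (and optional note) becomes context for next time.
            event.expected = expected
            event.note = note
            self.history.append(event)

    bot = ActivityBot()
    drop = ChangeEvent(period="last week", delta_pct=-72.0)
    print(bot.on_sharp_change(drop))  # no history yet, so the bot asks
    bot.record_answer(drop, expected=True, note="festival just ended; traffic always falls after")
    print(bot.on_sharp_change(ChangeEvent("this week", -15.0)))  # replays the team's note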

That one change transforms the interaction substantially. Instead of equating output with outcome, it uses the output to shape the outcome through a simple dialogue. The tool becomes the assistant rather than masquerading as the expert.

When software works with you, it typically works better. Software that asks people to bring real-world knowledge and goals to bear on the numbers can work really well, and produce better outcomes. Asking questions, learning, not making assumptions: the things that make people nice to work with can apply to machines as well.

Bigger implications

Stories about the difference between quantitative and qualitative are nothing new. But we have to keep telling them because their lessons seem so quickly forgotten, and the stakes are getting higher with the rise of algorithmic decision-making (aka AI).

The more significant the decisions machines make, the more care we'll need to take in judging the quality of something from its underlying numbers. What is today a gaffe from a modest web analytics bot could tomorrow concern someone's finances, medical prognosis, or employment prospects, and the list goes on. Today these misjudgments cost us a little convenience; tomorrow that difference will be critical.