2022 W52: Killing Errors
Every week I post about one thing that happened at Rows! We're building in public!
---
I am just coming out of the Xmas weekend.
Winter roasts haven't made us any slower. Actually the opposite is true.
Just last week a manager approached me to review the results of a project he created to reduce the errors users make when executing functions.
The outcomes were much better than expected.
The project
Rows lets users consume data from other products via Integrations or generic web application interfaces (APIs). We let users do it via Functions inside cell formulas (try GET_COMPANY("microsoft.com","linkedin_size")) or via our Actions Wizard.
However, Integrations are hard and can result in errors. That's even more challenging when you're giving this power inside a spreadsheet, as I'm sure it is in coding environments. There's authentication, tokens, picking IDs, request setup, etc. In fact, we noticed that Integration errors were between 5% and 30% of all executions, depending on the Integration. That's way too much for us, even knowing that up to 89% of spreadsheets have errors; our job is to improve spreadsheets, not to go along with that.
This challenge was defined with a success metric and with a list of to-dos. The to-dos were aimed at providing clear messages and a direct path to resolving those issues.
Success metric: drop the percentage of errors for Marketing users by 25%.
Product:
Give context about HTTP auth errors and offer a new connection.
Add a new error message on authentication problems.
Fix the long-lived token issue.
Standardize integration errors.
Remove #FAIL! errors.
Add a user-friendly message on configuration errors.
Allow users to interact with integration errors on Live.
Help users to solve social media errors.
No valid Google Analytics IDs.
No valid Facebook or LinkedIn pages.
No Google Ads account.
No Facebook page connected to Instagram.
No valid metrics or accounts.
Invalid LinkedIn Campaign name.
Invalid Facebook Campaign name.
Findings and Review
For the baseline period considered, we had:
29.82% error rate within all Integrations executions (532k).
29.78% error rate within Marketing Integration executions (163k).
After we did these fixes, we got:
9.45% error rate within all Integrations executions (2.9M).
18.18% error rate within Marketing Integration executions (365k).
So, for all executions, we got a 3.2x smaller error rate even as executions grew immensely, 5.5x. Or, perhaps, the executions grew because there were fewer errors. It's now 68% less likely that a user makes an error, and that makes users want to use us more!
For Marketing, we got a 1.6x smaller error rate (users are 39% less likely to hit an error) as executions grew a lot too, 2.2x!
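The ratios above fall straight out of the reported rates. A quick sketch, using only the baseline and post-fix numbers stated above:

```python
# Error rates reported above: (baseline, after fixes), as fractions.
rates = {
    "all integrations": (0.2982, 0.0945),
    "marketing": (0.2978, 0.1818),
}

for name, (before, after) in rates.items():
    ratio = before / after          # how many times smaller the error rate is
    reduction = 1 - after / before  # relative drop in the error rate
    print(f"{name}: {ratio:.1f}x smaller, {reduction:.0%} less likely to error")
    # all integrations: 3.2x smaller, 68% less likely to error
    # marketing: 1.6x smaller, 39% less likely to error
```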
Eval
Our eval is centered around 3 outcomes:
0: failed to deliver impact;
1: delivered the expected impact;
2: exceeded the impact.
Our bar for exceeding impact is an outcome that is 2x bigger than predicted. Now, because in this case we're on a reduction scale, twice as good as "reducing 25%" is not "reducing 50%" (2*25%), but rather achieving two successive reductions of 25%, or ~44% (1-(1-25%)^2).
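To make the compounding explicit, here is the same arithmetic as a tiny sketch:

```python
# On a reduction scale, "2x the impact" means compounding the target
# reduction twice, not doubling the percentage.
target = 0.25

naive_double = 2 * target           # 50% -- not the bar we use
compounded = 1 - (1 - target) ** 2  # two 25% reductions back to back

print(f"doubled: {naive_double:.0%}, compounded: {compounded:.2%}")
# doubled: 50%, compounded: 43.75%
```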
We achieved that for all Integrations, but not for the Marketing ones; we came close, 39% vs. 44%. Still, the team beat the objective to a pulp. So, should we eval this as a 1 or as a 2? 🧐
-
We won't stop here. This team taking care of Data in the spreadsheet has *a lot* planned. Very soon they will once and for all transform how you work with Data in a spreadsheet. For real. I expect many 2s are coming.
-
See you next week H