Published at Mon Sep 18 2023 in Rows HQ

3 Matrices for building products with AI

Henrique Cruz

This is the seventh post in our PLG Series, where we write deep dives on the experiments, strategies and tactics that we used at Rows to accelerate our product-led growth motion.

Make sure to check out the previous articles on Using Calculators to Drive Traffic with SEO: An In-Depth Look and Building a Loginless Experience for 1B people.

The year is 1995. Netscape has just IPO’d in what is now culturally viewed as ‘the IPO that changed the Internet’. CNET is touring the Netscape offices, interviewing a 24-year-old Marc Andreessen. The reporter asks Marc for Netscape’s secret. His reply is one of the most important sentences of the last 30 years of tech:

“Netscape's secret is that we’re in the middle of an exploding market.”

Over the last 30 years, we have witnessed a series of exploding markets: the web in the early 00s, social media post-‘04, the shift to mobile in ‘06-’09, and the sharing economy in the early ‘10s. Many of us thought that crypto would be the next one, and it now seems that AI is a truly exploding market.

One interesting aspect is that as markets continue to expand, people are adopting new technology more quickly than ever before. It took 50 years for refrigerators to become widely adopted by US households, while social media took only 15 years. The same trend can be observed for individual applications: the most popular apps we use are growing at an increasing rate. While Instagram took 2.5 months to reach 1 million users, ChatGPT took only 5 days, and Threads just 1 day.

People are adopting technology faster than ever, and this will also be true in the AI age.


If you were in 1995, should you have been working on the web? Yes. In 2023, should you be working on AI? Probably.

Matrix 1: The Problem-AI fit

‘What problem should I apply AI to?’ That’s the question a lot of us have asked ourselves in the last few months. The first matrix has helped me think through that exact question.

It comes from spending many hours with AI over the past year - many of them on ChatGPT, others working with our team on the AI Analyst and the OpenAI integration, and others on a series of small side projects: training an AI model to make illustrations in our brand style, fine-tuning GPT-3.5 to write book reviews, or building a tool to summarize blog posts into Twitter threads.

What I found was this: today - and this is a big caveat, as things change fast - I believe that AI is best suited for tasks that meet the following criteria:

  1. Easily verifiable: Quality can be quickly assessed by a human. This means tasks that don't require a lot of human judgement to assess, or whose accuracy can be quickly evaluated. A good example is using AI to write an SQL query for a particular job. A bad one is asking it to write an essay on the economics of 17th-century Netherlands: it’s pretty hard to judge the accuracy of the essay unless you’re an expert in the field.

  2. Low iterative cost: AI output can be iterated on easily, cheaply, and quickly. For example, AI is perfect for summarizing Amazon product reviews, where you can quickly iterate on the prompt - e.g. asking it to turn the summary into bullet points. Contrast that with using AI as a travel agent and asking it to buy you a plane ticket to Hawaii: the cost (money and time) of an error is just too high.
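To make the low-iterative-cost loop concrete, here is a minimal Python sketch. `call_model` is a hypothetical stub standing in for a real LLM API call; the point is that each iteration changes only a cheap prompt tweak, not the underlying data:

```python
def call_model(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API here.
    if "bullet points" in prompt:
        return "- Battery lasts two days\n- Screen scratches easily"
    return "Reviewers praise the battery life but note the screen scratches."

def summarize(reviews: list[str], style: str = "paragraph") -> str:
    # Concatenate the user's data with an instruction; iterating means
    # re-running with a slightly different instruction.
    instruction = "Summarize these product reviews"
    if style == "bullets":
        instruction += " as bullet points"
    prompt = f"{instruction}:\n" + "\n".join(reviews)
    return call_model(prompt)

reviews = ["Battery lasts two days!", "Screen scratched in a week."]
draft = summarize(reviews)               # first cheap attempt
revised = summarize(reviews, "bullets")  # iterate: same data, tweaked prompt
```

Each retry costs one prompt, which is exactly why review summarization sits well inside this quadrant while a one-shot ticket purchase does not.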

Matrix 1: The Problem-AI fit

Low iterative cost is one of the reasons why ChatGPT’s chat interface became so popular compared to other UI layers built on GPT-3. A chat interface encourages us to interact with the AI iteratively, easily discard low-quality output, and guide the conversation towards something useful. You’re OK with an AI that is wrong 50% of the time, as long as you can iterate cheaply and quickly to get to the 50% that is right.

Matrix 2: The AI Solution Classifier

The second matrix serves as a simple framework to classify AI solutions. If you spend any time on the tech Twitterverse (Xverse?) you’ve come across the daily torrent of threads showing the new crop of AI products (we’ve been large beneficiaries of it). 

I find that a good way to classify any AI solution is to categorize it into 4 quadrants:

Matrix 2: The AI Solution Classifier

Why it exists

  • Derivative:

    • When the product only exists because of AI. AI is the driving force behind the creation and functionality of the product. No AI, no product.

  • Augmentative:

    • AI improves core use cases in existing products. This is where most of us are. Old folks playing the new game.

How it works

  • AI Wrapper:

    • Let’s call it WYSIWAIG - What You See Is What AI Gets. The solution is essentially a wrapper around the AI model. The user input is passed to the model almost untouched, and what the user sees is the model’s output.

    • The prompt is often augmented with additional context (concatenated with the user input), and the output constrained to a certain data format (e.g. text in bullet points, a JSON file).

    • This is the most common type of AI solution. For derivative solutions, the canonical examples are avatar image generator apps. For augmentative solutions, this is your Canva Magic Eraser, Notion AI, Loom AI, or the OpenAI integration in Rows.

  • Shared intelligence:

    • What the user sees is significantly different from the output of the AI model. The output from the AI model is processed, analyzed and computed to give something unique to the user.

    • A good example is the Wolfram Plugin for ChatGPT. The plugin takes the input from the user - a question that requires mathematical computation - and uses a combination of ChatGPT and its own intelligence to give you an answer.

It is also the approach behind our AI Analyst✨. We send metadata about your table (e.g. headers, size) and use AI to construct a base analytical model, which includes research questions, placeholder answers and spreadsheet formulas. We then take that output and correct it, compute it, transform it, format it and deliver it to your spreadsheet.
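The difference between the two ‘how it works’ quadrants can be sketched in a few lines of Python. `model` is a hypothetical stub for an AI model call, and the parsing logic is purely illustrative - it is not how Rows actually implements the AI Analyst:

```python
def model(prompt: str) -> str:
    # Hypothetical stub for an LLM call.
    return "AVG(revenue) | SUM(revenue) | COUNT(customers)"

def wrapper(user_input: str) -> str:
    # AI Wrapper: concatenate some context, return the model output
    # nearly as-is - what you see is what AI gets.
    return model(f"Context: spreadsheet analysis.\nTask: {user_input}")

def shared_intelligence(user_input: str) -> list[dict]:
    # Shared intelligence: the raw model output is parsed, corrected and
    # computed before the user ever sees it.
    raw = model(f"Suggest analyses for: {user_input}")
    formulas = [f.strip() for f in raw.split("|")]
    return [{"question": f"What is {f}?", "formula": f"={f}"}
            for f in formulas]
```

In the wrapper, the model's text is the product; in shared intelligence, it is only an intermediate artifact that the product's own logic turns into something the model alone could not deliver.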

Matrix 3: The AI-Pricing fit

Finally, there is the AI-Pricing fit matrix. 

There comes a time during product development when you need to decide how to price the ‘thing’. Will you bundle it in the free plan (if you have one), will AI be a paid add-on, or is it exclusively in the paid plan?

This decision can have a big impact on your business. It did for us. In the 2 months since releasing the (free) AI Analyst, more than 400k new people tried Rows for the first time, most driven to the product by the virality generated by the idea that a free AI spreadsheet can rival Microsoft Excel.

As most current AI solutions involve API calls to third-party models that monetize by usage, the cost to serve is often the biggest driver of this decision. You can think of the cost to serve, and consequently how to price the AI solution, as a combination of two factors:

  • Frequency of use: The number of times a customer uses the AI solution per billing cycle (e.g. monthly).

  • Depth of use: The number of monetary units (tokens) spent per individual use.
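A back-of-the-envelope sketch shows how the two factors multiply into a per-user cost to serve. The usage numbers and token price below are illustrative assumptions, not Rows’ actual figures:

```python
def cost_to_serve(uses_per_month: int, tokens_per_use: int,
                  price_per_1k_tokens: float) -> float:
    # Cost per user per billing cycle: frequency x depth x unit price.
    return uses_per_month * tokens_per_use * price_per_1k_tokens / 1000

# Low frequency, low depth (AI Analyst-like): cheap enough to give away.
analyst = cost_to_serve(uses_per_month=40, tokens_per_use=500,
                        price_per_1k_tokens=0.002)   # $0.04/user/month

# High frequency, high depth (integration-like): hard to bundle for free.
integration = cost_to_serve(uses_per_month=2000, tokens_per_use=1500,
                            price_per_1k_tokens=0.002)  # $6.00/user/month
```

Even with identical token prices, the two hypothetical products land two orders of magnitude apart, which is the whole argument for pricing them differently.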

Matrix 3: The AI-Pricing fit, or the Frequency-depth scale

As the frequency of use and the depth of use increase, your cost to serve also increases. This means that your ability to price the AI solution for free decreases. We’re experiencing this first-hand at Rows:

  • Our AI Analyst✨ fits in the bottom left of the scale. A power user of the AI Analyst will get a lot of value from analyzing a few dozen tables and asking dozens of questions per week, but they are unlikely to need more. And since we only send metadata to the AI model, we keep the depth of use relatively small.

  • The OpenAI integration is on the top right. Teams use it for high-frequency cases like tagging hundreds of customer tickets per week, translating customer reviews or generating hundreds of keyword ideas. This, paired with strict rate limits from OpenAI, limits our ability to offer a seamless OpenAI integration in the free plan. 

As the number and quality of open-source models continues to grow, and the cost of enterprise models declines, this will likely become less important with time. It’s not unrealistic to imagine a future where 95% of enterprise AI use cases are handled by a group of open-source, local, high-performance models. Until then, the impact of an AI solution’s UX on its frequency and depth of use will continue to determine how it is priced.

Wrap up

Let’s hop onto our time machine to wrap up this post. 

If AI’s ‘Netscape moment’ was the release of ChatGPT last November, we might be getting close to the top of the Hype cycle. That means a lot of inflated expectations, low-quality products and a lot of X threads. Luckily for anyone ‘building product’ the focus is the same: build something people want, preferably in an exploding market.