Nate Haynam: AI Architect
Josh Haynam: CEO, Co-founder at Interact
Can we use a machine to recommend products? How does it understand what I want to buy? The short answer: it doesn’t. ChatGPT, built on GPT-3 (Generative Pre-trained Transformer 3), was designed as a zero-shot large language model to act as an AI assistant. Let’s start with some intuition on what ChatGPT is and how it works, and then jump into how AI will change recommendation systems.
A little intuition. GPT-3, like any machine learning model, is trained on large datasets to learn prediction tasks. To train the language model, OpenAI used large datasets from the internet, including a filtered Common Crawl, WebText2, books, and Wikipedia. However, models cannot learn directly from English text. For natural language processing models, text must first be translated into integers through tokenization. OpenAI uses the tiktoken library, which encodes tokens via byte pair encoding (BPE). After holding out a test dataset, the model is trained with a batch size of 3.2M tokens, a learning rate of 0.6×10⁻⁴, and a total training set of roughly 300 billion tokens.
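To make byte pair encoding concrete, here is a toy sketch of the core BPE operation: find the most frequent adjacent pair of tokens and merge it into one. Real tokenizers like tiktoken operate on bytes with a large learned merge table; this miniature version performs a single merge step on characters, purely for intuition.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent pair of tokens."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("low lower lowest")
pair = most_frequent_pair(tokens)  # ('l', 'o') occurs three times
tokens = merge_pair(tokens, pair)
print(tokens)  # 'l' and 'o' fused into 'lo' everywhere
```

A trained BPE tokenizer simply repeats this merge step thousands of times on a huge corpus, then maps each resulting token to an integer ID.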
Standard model procedure, above Standard Model. GPT-3 is a transformer model, an architecture released in 2017 by Google Brain. Transformers diverge from previous NLP models in their use of attention. While earlier recurrent models (RNNs and LSTMs) struggled to carry information from earlier in a sentence forward to construct the following words, transformers retain prior context to accurately predict future words.
The flexibility of transformers allows modified initialization, pre-normalization, and reversible tokenization. Essentially, GPT-3 is an autoregressive language model: it predicts the next word using prior context. This contextualized learning is what allows it to generalize as a zero-shot model.
Zero-shot and few-shot models are out-of-the-box models that can perform tasks with little or no task-specific training. Since GPT-3 is trained on more or less the entire internet, a few-shot GPT-3 model (given 10-100 prior examples) outperforms even pretrained open-domain models such as RAG on trivia questions out of the box. ChatGPT’s primary feature is the zero-shot mode, where you simply enter Task Description + Prompt = Direct Model Test Results, rather than fine-tuning the model on a dataset for your given task.
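The Task Description + Prompt formula can be sketched as plain string assembly. This is an illustrative helper, not an OpenAI API; the question/answer format is an assumption for demonstration.

```python
def build_prompt(task_description, query, examples=()):
    """Assemble a zero-shot (no examples) or few-shot prompt string."""
    lines = [task_description]
    for q, a in examples:  # few-shot: prepend already-solved examples
        lines += [f"Q: {q}", f"A: {a}"]
    lines += [f"Q: {query}", "A:"]
    return "\n".join(lines)

# Zero-shot: task description + prompt, nothing else.
print(build_prompt("Answer the trivia question.", "Who wrote Hamlet?"))

# Few-shot: 10-100 examples in practice; one shown here.
print(build_prompt("Answer the trivia question.",
                   "Who wrote Hamlet?",
                   examples=[("What is 2+2?", "4")]))
```

The model completes the final "A:" line; the only difference between zero-shot and few-shot is whether solved examples are included in the text.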
However, GPT-3 is not a general AI. It will not take over the planet (at least not in the Terminator sense). At its core, the model’s transformer layers store patterns from the training data for reference when answering prompts; the model itself is distilled internet. Prompts dictate which subset of that training data is drawn on to determine the prompt’s completion probability, like descending a probability tree, where the prior words in a sentence dictate the path down GPT-3’s layers to find the next word.
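The probability-tree intuition can be shown with a toy autoregressive model. The hand-made word distributions below are a stand-in for what GPT-3 computes through billions of parameters; only the shape of the process (prior word selects the branch for the next word) is the point.

```python
import random

# Toy "probability tree": given the previous word, a conditional
# distribution over next words (a stand-in for GPT-3's layers).
NEXT_WORD = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
}

def generate(start, steps, seed=0):
    """Walk the tree: each prior word picks the branch to the next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        dist = NEXT_WORD.get(words[-1])
        if not dist:  # leaf of the tree: no continuation known
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 3))  # a short sentence starting with "the"
```

GPT-3 does the same walk, except the distribution at each step is computed fresh from the entire prior context rather than looked up from a fixed table.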
Recommendations from cold-hearted AI? Rather than generating text from thin air, we now know that ChatGPT behaves more like crowdsourcing the world to answer each given prompt. Applied to recommendation systems, GPT-3 means providing human recommendations with AI as an intermediary. While current product recommendations use data harvested through often dubious means, a zero-shot model requires no training data from your users. At Interact we have been applying this core architecture to our infinitely customizable quizzes. Our fine-tuned model generates questions, answers, color palettes, and images from a simple quiz title (e.g. "What Taylor Swift song best fits your personality?"), your website URL, or a product name. Rather than using stolen user data or arbitrarily constructing a quiz from scratch, you can rely on our fine-tuned model, built on top of ChatGPT using Interact’s custom templates as weighted training data, to provide quality, success-verified quizzes.
Learning recommendations. Reinforcement learning is at the heart of Interact’s model. Rather than the stereotypical approach of mining user data to curate ads that push purchases, AI recommendation systems essentially crowdsource a quiz pipeline that funnels users into category types whose attributes match our clients’ products. Successful recommendations are then fed back into the Interact model to improve customer interaction. As a client’s quiz attracts traffic, it becomes increasingly curated to their clientele, which drives more traffic and begins a feedback loop. Increased traffic, improved user experience, reinforcement-learning quizzes, no Silicon Valley user-data BS.
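The feedback loop just described can be sketched in a few lines. This is a hypothetical illustration, not Interact's actual implementation: the weight structure, learning rate, and example strings are all assumptions.

```python
# Hypothetical sketch: each piece of feedback nudges the weight
# linking an answer choice to an outcome up (success) or down (miss).
def update_weights(weights, answer, outcome, success, lr=0.1):
    """Reinforce or dampen the answer->outcome link based on feedback."""
    key = (answer, outcome)
    weights[key] = weights.get(key, 1.0) * (1 + lr if success else 1 - lr)
    return weights

weights = {}
update_weights(weights, "prefers upbeat songs", "Shake It Off", success=True)
update_weights(weights, "prefers ballads", "Shake It Off", success=False)
print(weights)  # the successful link grows, the failed link shrinks
```

Over many quiz takers, links that lead to recommendations people act on accumulate weight, which is the feedback loop in miniature.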
Cautionary Tale. ChatGPT is taking the world by storm, but the model has been around for years. “Attention Is All You Need,” the paper introducing the core architecture behind GPT-3, was released in 2017. “Language Models are Few-Shot Learners,” the paper explaining the functionality and training process of GPT-3, was released in 2020. Why are we in 2023 obsessed over a three-month-old ChatGPT? Interaction. Humans are not AI; we need feedback loops, we need imagery, we need an interface.
ChatGPT’s interface took the world by storm, not its model. A prime example is Nvidia and Microsoft’s Megatron-Turing NLG, which has 530B parameters. Seemingly more powerful, and yet few of you have heard of this 2021 model. Human-AI interaction is now, for the first time, limited by humans. For years our sci-fi obsessions were out of reach of our abilities as engineers. Now we are limited by interfaces, by interactions.
-  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention Is All You Need, 2017.
-  Tom B. Brown, Benjamin Mann, Nick Ryder, and Melanie Subbiah. Language Models are Few-Shot Learners, 2020.
The business case for recommendation systems
Recommendation systems are highly effective for encouraging customer action. A Northwestern University study found that 59.32% of people will always or very often follow suggestions from a recommendation system. This is in line with the statistics we see from quizzes at Interact.
Explicit vs. Implicit Recommendation systems
Implicit recommendation systems: Use data from customer behavior on websites, email, social media, etc. to infer a customer profile and make recommendations.
Explicit recommendation systems: Use data actively given by the customer from reviews, answering questions, etc. to form a customer profile and recommend products, services, or content. In this case only explicit (zero party) data is used.
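The distinction above is really about where the data comes from. A minimal sketch of an explicit (zero-party) profile, with hypothetical question and answer strings:

```python
# Sketch: an explicit profile is built only from answers the customer
# volunteers -- nothing is inferred from tracked browsing behavior.
def build_explicit_profile(answers):
    """answers: {question: chosen_answer}, all given by the customer."""
    return {"source": "zero-party", "traits": dict(answers)}

profile = build_explicit_profile({
    "What's your skin type?": "dry",
    "Scent preference?": "unscented",
})
print(profile["traits"]["What's your skin type?"])  # "dry"
```

An implicit system would instead populate `traits` by mining clickstreams, purchase history, and email opens, which is exactly the data that privacy regulation targets.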
For this article we will be focusing on explicit recommendation systems for two main reasons.
- Explicit recommendation systems are not susceptible to data privacy laws. Because they work off of explicit (zero-party) data, they will not be rendered irrelevant as data privacy laws make it ever harder to use customer data that was not willingly given.
- Explicit recommendation systems are dramatically cheaper than implicit ones, by a factor of 20x or more, because implicit recommendation systems require a much deeper integration into existing web properties, and their data flows require complex API connections that are very expensive to maintain.
Barriers to entry for explicit recommendation systems
At Interact we’ve been working with businesses since 2012 to create and implement explicit recommendation systems in the form of product recommendation quizzes and needs assessments. There are two primary barriers to entry for creating these systems.
- Time to write the recommendation system: On average it takes 15-20 hours to write an explicit recommendation system, which includes the questions a customer is prompted with, the outcomes they will receive, and the scoring system that connects the questions to the outcomes to power the recommendations given.
- Accuracy of the recommendation system: A poor recommendation system loses its effectiveness, and this is a primary reason customers do not implement explicit recommendation systems.
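The scoring system mentioned above, the piece connecting questions to outcomes, can be sketched as a points table. The products, answers, and point values here are hypothetical, chosen only to show the mechanism.

```python
from collections import defaultdict

# Hypothetical scoring table: each answer choice contributes points
# toward one or more outcomes; the top-scoring outcome wins.
SCORING = {
    "I want deep hydration": {"Moisture Cream": 2, "Day Serum": 1},
    "I have oily skin":      {"Day Serum": 2},
}

def recommend(chosen_answers):
    """Sum points per outcome and return the highest-scoring one."""
    scores = defaultdict(int)
    for answer in chosen_answers:
        for outcome, points in SCORING.get(answer, {}).items():
            scores[outcome] += points
    return max(scores, key=scores.get)

print(recommend(["I want deep hydration", "I have oily skin"]))
# "Day Serum": it scores 1 + 2 = 3 vs "Moisture Cream" at 2
```

Writing the questions, outcomes, and this table by hand is precisely where the 15-20 hours go.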
Solution: AI Generated Explicit Recommendation Systems with Machine Learning Self Improvement
At Interact we have developed a solution for the two primary barriers to entry for implementing explicit recommendation systems.
First, we are synthesizing 10 years of learning about what makes the most effective explicit recommendation systems and feeding it into an AI recommendation generator that is fine-tuned on our proprietary data sets.
Second, as customers implement AI-generated recommendation systems, the end users who engage with those systems are able to rate their effectiveness. We feed that data back into the AI model so it self-improves over time.
The AI recommendation system generator works in 30 seconds, as opposed to the 15-20 hours it takes a human to create the same system. It is of course editable once generated, for finishing touches and the addition of links and integrations for lead generation.
Recommendation systems generated by the AI are also highly accurate, because we are able to analyze the correlation coefficient between each answer choice within a recommendation system and each of the outcomes, then layer on a scoring system calibrated to those correlations.
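The correlation check described above can be sketched with Pearson correlation in pure Python. The 0/1 data below is hypothetical: one column marks whether a past quiz taker chose a given answer, the other whether a given outcome fit them.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

chose_answer = [1, 1, 0, 0, 1, 0]  # did the user pick this answer?
outcome_fit  = [1, 1, 0, 0, 1, 1]  # did this outcome match them?
print(round(pearson(chose_answer, outcome_fit), 2))
```

Answer-outcome pairs with high correlation earn large weights in the scoring table; pairs near zero are pruned.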
We will have two models available.
- General model: This model will take in data from all recommendation systems generated and be available at a lower price point for small and medium businesses.
- Company-specific model: This model will be enclosed within one company and will self-improve specifically for the people who engage with the explicit recommendation systems that company generates, so all generated systems are tailored to that company’s customer base. Each company-specific model will be a separate instance and therefore available at an enterprise price point.
Get on the waitlist:
If you’d like to use the InteractAI explicit recommendation system generator, you can join the waitlist here for early access.