Exclusive Interview: Danny McMillan On Amazon’s Project Nile

JL: What do you think about the announcement of project “Nile”? Were there any surprises for you?

DM: A9 already uses Machine Learning (ML), Deep Learning (DL), and Natural Language Processing (NLP)—I have numerous papers covering this. These technologies help the algorithm to understand and interpret user queries, rank products based on relevance, and continuously learn and adapt to changes in user behavior and preferences.

The biggest challenges Nile will need to overcome are bias, accuracy, and data privacy.

JL: Do you think it’s going to happen in January?

DM: Anything is possible, but…

· Before diving in, Amazon will conduct a feasibility study, allocate resources, and prepare data.

· The development phase involves model training, testing, iterative improvements, and parallel running with A9.

· The rollout phase may include A/B testing, performance monitoring, scaling up, final transition, and post-launch monitoring.

· They may incur infrastructural issues, scale issues, and compatibility challenges.

· Stakeholder buy-in, privacy, and legal reviews will be fundamental in this transition.

· An unoptimized rollout could lead to reduced conversion rates, increased bounce rates, customer churn, relevancy issues, and increased costs.

· A well-planned, tested, and optimized rollout is key to balancing these potential outcomes.

Other than that, why not? 🙂

JL: What do you think are the main implications for sellers in the future with AI changing the way customers search and find products on the platform?

DM: I think people see this as a zero-sum game, where an update such as BERT would completely remove lexical matching. People may think that when the update comes, it's a switch-over to a completely different way of working. You have to account for the impact of decades of data, infrastructure, and historic compatibility. Try getting that past the shareholders; they'll have a heart attack. I am working my way through 1,234 papers with my team, which I think will be whittled down to about 150–200 (my A9 bot has just over 50 loaded in and gets more powerful by the day). The key thing to remember is that a published paper does not always mean the technique was executed in production, and when new papers arrive, it does not mean prior parts of the algorithm become redundant. You have to look at the context here and try to piece it all together from a science perspective.

Here’s a breakdown of how BERT, Lexical Matching, and Semantic Matching could work together (there are papers on all these):

· Lexical Matching: This is the first step, where the system checks if the words in your search query, like “beard oil”, appear in product titles, etc. It’s a straightforward process, akin to looking up a word in the dictionary. However, it can sometimes fall short, especially with spelling errors or synonyms.

· Semantic Matching: To overcome the limitations of lexical matching, the system also uses semantic matching. This process understands the meaning behind your search query. For example, it knows that “beard oil” is a type of beard product. It uses a technique called Word2Vec to represent words as dense vectors, so that two semantically similar words would have similar representations. This helps the system find relevant products even if the exact words in your query don’t appear in the indexed sections of the product detail page.

· BERT: BERT (Bidirectional Encoder Representations from Transformers) is a more advanced model that understands the context of words in your search query. For example, it knows that “beard” and “oil” together refer to a specific product type. BERT is trained on a large amount of data and can understand complex language nuances. It helps refine the search results and make them more accurate.
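To make the semantic-matching idea concrete, here is a minimal sketch of the dense-vector similarity it relies on. The vectors below are hand-made toys standing in for learned Word2Vec embeddings (real ones are trained on a corpus and have hundreds of dimensions):

```python
import math

# Toy dense vectors standing in for Word2Vec-style embeddings.
# These numbers are invented for illustration only.
embeddings = {
    "beard oil":  [0.90, 0.80, 0.10],
    "beard balm": [0.85, 0.75, 0.20],
    "dog food":   [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means similar direction/meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Semantically similar phrases get similar vectors, so a "beard oil" query
# can surface "beard balm" even with no lexical overlap check at all.
print(cosine(embeddings["beard oil"], embeddings["beard balm"]))  # high
print(cosine(embeddings["beard oil"], embeddings["dog food"]))    # low
```

This is why semantic matching survives spelling variants and synonyms: the comparison happens in vector space, not on the literal tokens.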

So, you have these stacked to improve the search experience for the customer. Amazon is trying to move away from people gaming the system with mindless keyword stuffing, so it can avoid serving up a load of irrelevant search results.
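The stacking described above could be sketched roughly as follows. Everything here is an illustrative stand-in (the catalog, the synonym map, and the rerank scoring are invented for the example, not Amazon's actual implementation):

```python
# Illustrative three-stage ranking: lexical filter, semantic fallback,
# then a contextual rerank. All data and scoring are made-up stand-ins.
catalog = ["premium beard oil", "beard balm for men", "dog food 5kg"]

def lexical_match(query, items):
    # Stage 1: keep items sharing at least one query token.
    q = set(query.lower().split())
    return [it for it in items if q & set(it.lower().split())]

def semantic_match(query, items):
    # Stage 2 stand-in: a tiny synonym map plays the role of Word2Vec
    # similarity, recovering items a pure lexical check would miss.
    synonyms = {"moustache": "beard", "wax": "balm"}  # assumed mapping
    expanded = " ".join(synonyms.get(w, w) for w in query.lower().split())
    return lexical_match(expanded, items)

def bert_rerank(query, items):
    # Stage 3 stand-in: a real system would score (query, item) pairs with
    # a model like BERT; here we simply prefer greater token overlap.
    q = set(query.lower().split())
    return sorted(items, key=lambda it: -len(q & set(it.lower().split())))

def search(query):
    candidates = lexical_match(query, catalog) or semantic_match(query, catalog)
    return bert_rerank(query, candidates)

print(search("moustache wax"))  # beard products surface despite zero shared tokens
```

The design point is the fallback: lexical matching handles the easy exact-token cases cheaply, and the semantic layer only has to rescue the queries the dictionary-lookup step fails on.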

JL: Do you think there will be changes in the A9 algorithm to accommodate this? What do you envisage?

DM: Their end goal is relevance; serving the best search experience to the customer.

Sellers should focus on everything around the product (and market fit) and not hacks (see these as a bit of fun on the side). No amount of hacks can save a subpar product. Invest 95% of your time in getting the product right; if you get that bit right, the rest is much easier to execute.

JL: Do you think Amazon will replace seller support with AI eventually?

DM: Amazon has been using AI for at least three years now. It’s called Cuatro AI and will get better, though some of the responses from support tickets leave a lot of room for improvement. However, support is bad enough already—imagine if you removed all the humans too?

JL: Do you think voice search and Alexa will play a bigger role in the shopping experience?

DM: The future of voice search is likely to be significantly influenced by advancements in Natural Language Processing (NLP) models like BERT. BERT, or Bidirectional Encoder Representations from Transformers, has shown state-of-the-art performance on various NLP tasks, including two-sentence classification with cross-encoders. This makes it a powerful tool for understanding and processing voice search queries, which often involve complex, conversational language.

In terms of specific improvements, BERT’s ability to understand the context of words in a sentence by looking at them in both directions (left and right) could lead to more accurate voice search results. For instance, it could better understand the intent behind a user’s voice query, leading to more relevant search results.
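As a toy illustration of why looking in both directions matters: the cue word that disambiguates a term can sit to its left or its right. The word lists and rule below are invented for the example; BERT learns this behaviour from data via attention rather than hand-written cues:

```python
# Toy word-sense disambiguation: decide whether "oil" in a voice query means
# a grooming product or a cooking ingredient by looking at words on BOTH
# sides of it. The cue sets are assumptions made up for this sketch.
GROOMING_CUES = {"beard", "skin", "hair"}
COOKING_CUES = {"olive", "frying", "cook", "pan"}

def sense_of_oil(tokens):
    i = tokens.index("oil")
    left, right = set(tokens[:i]), set(tokens[i + 1:])
    context = left | right  # bidirectional: cues may be on either side
    if context & GROOMING_CUES:
        return "grooming"
    if context & COOKING_CUES:
        return "cooking"
    return "unknown"

print(sense_of_oil("order beard oil for me".split()))     # cue is to the LEFT
print(sense_of_oil("oil for frying vegetables".split()))  # cue is to the RIGHT
```

In the second query, a strictly left-to-right model would reach "oil" before seeing any cue at all; a bidirectional model conditions on the whole utterance, which is what makes it attractive for conversational voice queries.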
