
You Do Not Need a Bigger Boat: Recommendations at Reasonable Scale in a (Mostly) Serverless and Open Stack

by Jacopo Tagliabue, et al.

We argue that immature data pipelines are preventing a large portion of industry practitioners from leveraging the latest research on recommender systems. We propose our template data stack for machine learning at "reasonable scale", and show how many challenges are solved by embracing a serverless paradigm. Leveraging our experience, we detail how modern open source can provide a pipeline processing terabytes of data with limited infrastructure work.




1. Introduction

With almost 4 trillion dollars spent yearly in online retail (Cramer-Flood, 2020), research in the eCommerce space has gained considerable traction in the last few years, with important insights for recommendation systems (Hansen et al., 2020), IR/NLP (Tagliabue et al., 2020; Bi et al., 2020; Tagliabue and Yu, 2020) and more (Tsagkias et al., 2020). A quick look at eCommerce workshops at major ML venues reveals an unsettling pattern (see Appendix A for details): contributions (and the implied benefits) are all but evenly distributed in the market, as the majority of innovation is concentrated in a few large players.

The barrier to entry for cutting-edge recommendation systems in eCommerce is indeed high and multi-faceted: lack of open, representative datasets (as highlighted for example in (Tagliabue et al., 2021)), non-relevant benchmarks in the literature (see for example the arguments in (Requena et al., 2020) when replicating (Toth et al., 2017)), and expensive computational resources (Strubell et al., 2019; Bender et al., 2021). Even when things are smooth on the modelling side, bringing a recommender into production remains a formidable challenge for shops in the mid-long tail, which lack best practices and a tool-chain that works at “reasonable scale”. In this contribution, we tackle this problem directly; in particular:

  • we highlight the peculiar constraints (and the opportunities) that lie at “reasonable scale” – that is, mid-to-large shops, with dozens (not hundreds) of ML engineers, making between 10 and 500 million (not billion) USD per year, and producing terabytes (not petabytes) per year of data in behavioral signals;

  • we present a deep dive into an end-to-end stack (mostly) built with open-source tools, and show how to productionize a recommender system with (almost) no explicit infrastructure work; we motivate our choices with insights gained by deploying models for dozens of digital shops at all scales.

With the growing number of providers in the MLOps space and an ever-changing landscape, a major obstacle to the democratization of machine learning is knowing how the tools in the ecosystem play together: the sheer number of choices to be made may feel overwhelming, and the fear of making mistakes may further slow down the adoption of the most appropriate tools. By providing a worked-out example for a recommender pipeline, we hope to provide both a review of important concepts for “reasonable scale” recommenders, and actionable insights for all the practitioners outside of a few retail giants who need to make adoption choices with limited resources (we will also provide a full implementation as part of the open source project started with Metaflow).

2. Principles for models at reasonable scale

Practitioners building models for shops in the mid-long tail face many challenges, as companies which are late adopters of machine learning tools tend to be less mature across the entire stack – from data collection to model serving. A guiding principle to produce impact quickly and reliably is independence: the more data scientists need to rely on other teams (to get data, provision GPUs, serve models, etc.), the more likely something will get “lost in translation”, and the longer the time-to-ROI. On the other hand, we should not assume that data scientists come with an unreasonably complete skill-set: if it is now their job to provision GPUs, we have merely shifted the burden, not increased velocity. The following principles condense what we learned by working with dozens of organizations, and provide a framework to make strategic decisions and prioritize resources in a constrained environment:

  1. data is king: the biggest marginal gain is always in the data – making clean and standardized data accessible is significantly more important than small modelling adjustments. Indeed, as the market is quickly recognizing, modelling per se is becoming increasingly commoditized, making proprietary data flows even more important from a strategic point of view;

  2. ELT is better than ETL: a clear separation between data ingestion and processing produces reliable, reproducible data pipelines. In particular, great care needs to be taken to ingest and store data as an immutable raw record of the state of the system at a given point in time;

  3. PaaS/FaaS is better than IaaS: maintaining and scaling infrastructure with dedicated people is costly – and unnecessary. At reasonable scale, many providers offer fully-managed services that run our computation without worrying about downtime (Jiang et al., 2021), replication, or auto-scaling. When resources are constrained, we should invest our time and effort in our core problem – providing good recommendations – and buy everything else: we keep the team small and our costs more predictable (predicting the cost of scaling a PaaS service to more users is significantly easier than predicting the impact of new hires maintaining a Kubernetes cluster). The key observation here is that high-quality engineering is the scarcest resource, so we should do everything in our power to keep that resource focused on ML.

  4. distributed computing is the root of all evil: distributed systems like Spark played a pivotal role in the Big Data revolution. However, even as a managed service, distributed computing is slow, hard to debug, and forces programming patterns unfamiliar to many scientists. At reasonable scale there are better tools to do the heavy lifting for our computations, freeing us completely from distributed computing and all its overhead.

The key take-away is that working at “reasonable scale” comes with some advantages: the scale makes a lot of tools affordable, and streamlines many of the complexities needed with more sophisticated systems. As we shall see, by selecting the right tools we can empower relatively small teams to produce a great impact.
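As a concrete illustration of principle 2 (ELT over ETL), the following self-contained Python sketch uses an in-memory SQLite database as a stand-in for the warehouse; all event names and fields are invented for the example. Events are loaded verbatim into an append-only raw table, and only afterwards is a typed table derived from them, so the downstream transformation can always be replayed from the raw record:

```python
import json
import sqlite3

# Hypothetical clickstream events: names and fields are illustrative only.
RAW_EVENTS = [
    {"session_id": "s1", "event": "product_view", "payload": {"sku": "A1", "price": 10.0}},
    {"session_id": "s1", "event": "product_view", "payload": {"sku": "B2", "price": 25.0}},
    {"session_id": "s2", "event": "page_view", "payload": {"url": "/home"}},
]

db = sqlite3.connect(":memory:")

# Extract + Load: store each event verbatim as an immutable raw record.
# No parsing happens at ingestion time, so the pipeline stays replayable.
db.execute("CREATE TABLE raw_events (id INTEGER PRIMARY KEY, body TEXT)")
db.executemany(
    "INSERT INTO raw_events (body) VALUES (?)",
    [(json.dumps(e),) for e in RAW_EVENTS],
)

# Transform: downstream, explode the properties we care about into a typed
# table. Re-running this step is cheap and side-effect free.
db.execute(
    """
    CREATE TABLE product_views AS
    SELECT
        json_extract(body, '$.session_id') AS session_id,
        json_extract(body, '$.payload.sku') AS sku,
        json_extract(body, '$.payload.price') AS price
    FROM raw_events
    WHERE json_extract(body, '$.event') = 'product_view'
    """
)

rows = db.execute("SELECT session_id, sku, price FROM product_views").fetchall()
print(rows)  # [('s1', 'A1', 10.0), ('s1', 'B2', 25.0)]
```

The same pattern scales to a warehouse simply by swapping the SQLite connection for the warehouse client: ingestion stays dumb and append-only, and all the intelligence lives in re-runnable SQL transformations.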

3. Desiderata for In-Session Recommendations

We use in-session recommendation as an example, and outline what is needed at a functional level for a system to work, from data ingestion to model serving; we then dive deep on how to build a tool chain satisfying these desiderata. We chose in-session recommendation as it is a well-studied research topic and a prominent use case for digital shops (Bianchi et al., 2020):

  • raw data ingestion, which includes collecting shoppers’ data in a scalable way, storing it safely, and guaranteeing re-playability from raw events;

  • data preparation, which includes data visualizations and BI dashboards, data quality checks, data wrangling and feature preparation;

  • model training, which includes model training, hyperparameter search and behavioral checklists;

  • model serving, which includes serving predictions at scale;

  • orchestration, which includes a monitoring UI, automated retries, and a notification system.

A useful exercise is to visualize the process, and follow the journey of a shopping event from the browser (collected with a standard JavaScript SDK, for example Google Analytics) up to the training loop in our machine learning model (see Fig. 1).

Figure 1. Data processing, from events collected in the browser (a shopper clicking on a product, 1) to a carousel of recommendations served on the PDP (6). Raw data (2) is sent to a table (3) and stored raw in an append-only fashion. From there, data is transformed into a usable format for training (for example, by exploding important properties into a devoted table, 4), and a model is then trained (5) to serve recommendations (6).
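To make the use case concrete, a deliberately naive in-session recommender can be sketched in a few lines of Python. Production systems would of course use learned models (e.g. the sequence-based approaches cited above); the session-level co-occurrence counter below, with invented SKUs, only illustrates the functional shape of the problem – given what the shopper is viewing now, rank candidate products:

```python
from collections import Counter, defaultdict
from itertools import permutations

# Toy sessions of product views (SKUs are invented for the example).
sessions = [
    ["shoes", "socks", "laces"],
    ["shoes", "socks"],
    ["shirt", "tie"],
]

# Count how often two products co-occur within the same session.
co_counts = defaultdict(Counter)
for session in sessions:
    for a, b in permutations(set(session), 2):
        co_counts[a][b] += 1

def recommend(current_product, k=2):
    """Return up to k products most often seen with current_product."""
    return [p for p, _ in co_counts[current_product].most_common(k)]

print(recommend("shoes"))  # ['socks', 'laces']
```

Everything in the stack described next exists to feed a (much better) version of `recommend` with clean data, train it on GPUs, and serve it behind an endpoint.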

4. An End-to-End Stack

Fig. 2 depicts a modern data stack that combines the principles at “reasonable scale” (Section 2) with the functional components from Section 3:

  • raw data ingestion is achieved in PaaS (with auto-scaling) through AWS Lambda (Tagliabue, 2021) – note that the examples feature AWS components, but equivalent modules are available in all major cloud providers; storage is again achieved in a PaaS-like manner through Snowflake (Dageville et al., 2016);

  • data preparation starts with dbt (open source), which builds a SQL-based DAG of transformations to prepare normalized tables for data visualization (through Preset, a PaaS version of the open-source Superset) and QA (through Great Expectations, an open-source tool for data validation);

  • model training happens with Metaflow (open source), which allows the definition of ML tasks as a DAG, and abstracts away cloud execution (including GPU provisioning) through simple decorators;

  • model serving happens through AWS SageMaker, our hosted tool of choice for serving models with auto-scaling and a variety of hardware options: note that since Metaflow comes with artifacts and versioning, deployment options are plentiful and easy to change;

  • orchestration happens with Prefect (open source), which also offers a hosted version for job monitoring and admin purposes.

There are three crucial observations on how it all fits together. First, there are no resources directly maintained by ML engineers (a Prefect agent would be the exception, but it could be avoided by running on AWS Step Functions directly), as all tools are maintained and scaled automatically (models inside Metaflow may still need to be manually updated, of course, but that is core ML engineering). Second, the distributed nature of “reasonable size” computing is abstracted away in Snowflake through plain SQL: everything downstream of data aggregation/preparation can happen comfortably locally. Third, warehouse aside, most tools are either already open source, or substitutable with open ones (serverless computing, for example, is available as open source as well).
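As a rough illustration of the “training pipeline as a DAG of steps” pattern that Metaflow provides, the following dependency-free Python sketch mimics the idea with a toy runner. The real library also versions artifacts between steps and dispatches individual steps to the cloud (e.g. requesting a GPU through a decorator); all class and step names here are illustrative:

```python
# Toy runner: each step method returns the name of the next step,
# and instance attributes play the role of versioned artifacts.
class Flow:
    def run(self, start="start"):
        step_name = start
        while step_name is not None:
            step = getattr(self, step_name)
            step_name = step()

class TrainFlow(Flow):
    def start(self):
        # In a real flow this would pull prepared tables from the warehouse.
        self.data = [("s1", "shoes"), ("s1", "socks")]
        return "train"

    def train(self):
        # Stand-in for model fitting; in Metaflow this step could run
        # remotely on a GPU instead of locally, with no code change.
        self.model = {"n_events": len(self.data)}
        return "end"

    def end(self):
        print("trained:", self.model)
        return None

flow = TrainFlow()
flow.run()
```

The value of the pattern is that the scientist writes plain Python methods, while execution, retries, and resource provisioning are delegated to the framework and orchestrator.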

Figure 2. An end-to-end data stack for companies at “reasonable scale”, from data ingestion and storage, to visualization, QA, training and serving.

5. Conclusion

We argued that the infrastructure and architectural barriers preventing practitioners from leveraging the latest ML research can be surmounted by embracing a serverless paradigm. We know from experience that the stack we proposed (or a similar one) can process terabytes of data (from raw events to GPU-powered recommendations) with limited-to-no DevOps work, mostly relying on a thriving community of open-source solutions. Of course, data and model work (Sambasivan et al., 2021) still needs to happen, but that is why we built everything in the first place: we should be happy that catching our prey in this growing ecosystem won’t require a bigger boat.


  • E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell (2021) On the dangers of stochastic parrots: can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, New York, NY, USA, pp. 610–623. Cited by: §1.
  • K. Bi, Q. Ai, and W. Croft (2020) A transformer-based embedding model for personalized product search. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. Cited by: §1.
  • F. Bianchi, J. Tagliabue, B. Yu, L. Bigon, and C. Greco (2020) Fantastic embeddings and how to align them: zero-shot inference in a multi-shop scenario. In Proceedings of the SIGIR 2020 eCom workshop. Cited by: §3.
  • E. Cramer-Flood (2020) Global ecommerce 2020: ecommerce decelerates amid global retail contraction but remains a bright spot. Cited by: §1.
  • B. Dageville, T. Cruanes, M. Zukowski, V. Antonov, A. Avanes, J. Bock, J. Claybaugh, D. Engovatov, M. Hentschel, J. Huang, A. W. Lee, A. Motivala, A. Q. Munir, S. Pelley, P. Povinec, G. Rahn, S. Triantafyllis, and P. Unterbrunner (2016) The Snowflake elastic data warehouse. In Proceedings of the 2016 International Conference on Management of Data, SIGMOD ’16, New York, NY, USA, pp. 215–226. Cited by: 1st item.
  • C. Hansen, C. Hansen, L. Maystre, R. Mehrotra, B. Brost, F. Tomasi, and M. Lalmas (2020) Contextual and sequential user embeddings for large-scale music recommendation. In Fourteenth ACM Conference on Recommender Systems, RecSys ’20, New York, NY, USA, pp. 53–62. Cited by: §1.
  • J. Jiang, S. Gan, Y. Liu, F. Wang, G. Alonso, A. Klimovic, A. Singla, W. Wu, and C. Zhang (2021) Towards demystifying serverless machine learning training. In ACM SIGMOD International Conference on Management of Data (SIGMOD 2021). Cited by: item 3.
  • B. Requena, G. Cassani, J. Tagliabue, C. Greco, and L. Lacasa (2020) Shopper intent prediction from clickstream e-commerce data with minimal browsing information. Scientific Reports 10. Cited by: §1.
  • N. Sambasivan, S. Kapania, H. Highfill, D. Akrong, P. Paritosh, and L. M. Aroyo (2021) “Everyone wants to do the model work, not the data work”: data cascades in high-stakes AI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA. Cited by: §5.
  • E. Strubell, A. Ganesh, and A. McCallum (2019) Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 3645–3650. Cited by: §1.
  • J. Tagliabue, C. Greco, J. Roy, F. Bianchi, G. Cassani, B. Yu, and P. J. Chia (2021) SIGIR 2021 e-commerce workshop data challenge. In SIGIR eCom 2021. Cited by: §1.
  • J. Tagliabue, B. Yu, and M. Beaulieu (2020) How to grow a (product) tree: personalized category suggestions for eCommerce type-ahead. In Proceedings of The 3rd Workshop on e-Commerce and NLP, Seattle, WA, USA, pp. 7–18. Cited by: §1.
  • J. Tagliabue and B. Yu (2020) Shopping in the multiverse: a counterfactual approach to in-session attribution. In Proceedings of the SIGIR 2020 Workshop on eCommerce (ECOM 20). Cited by: §1.
  • J. Tagliabue (2021) Serving 1x1 pixels from AWS Lambda endpoints. Note: accessed 2021-06-01. Cited by: 1st item.
  • A. Toth, L. Tan, G. Di Fabbrizio, and A. Datta (2017) Predicting shopping behavior with mixture of RNNs. In Proceedings of the SIGIR 2017 Workshop on eCommerce (ECOM 17). Cited by: §1.
  • M. Tsagkias, T. H. King, S. Kallumadi, V. Murdock, and M. de Rijke (2020) Challenges and research opportunities in ecommerce search and recommendations. SIGIR Forum, Vol. 54. Cited by: §1.

Appendix A Appendix: Research Distribution

Figure 3 shows the number of papers per company at eCommerce-themed events at major conferences in 2020 (KDD, SIGIR, ACL, WWW, RecSys). A total of 28 industry players took part in those events; out of these 28, only 2 are not large public companies, and only one contributed multiple times (Coveo, with 6 research papers).

Figure 3. Number of research papers in eCommerce events at top tier conferences in 2020: almost all contributions are from public companies with a B2C business model.

Appendix B Appendix: Bio

Jacopo Tagliabue was co-founder and CTO of Tooso, an A.I. company in San Francisco acquired by Coveo in 2019. Jacopo is currently the Lead A.I. Scientist at Coveo, where he ships machine learning models to hundreds of companies and millions of shoppers. When not busy building A.I. products, he is exploring research topics at the intersection of language, reasoning and learning: he is a committee member for international NLP/IR workshops, and his work is often featured in the general press and A.I. venues (including SIGIR, RecSys, ACL and best industry paper at NAACL). In previous lives, he managed to get a Ph.D., do scienc-y things for a pro basketball team, and simulate a pre-Columbian civilization.