Lightning talk (5 minutes)
Deployment
Inference
Ensemble models

Deploying large ensemble models can impose substantial memory and disk-space overhead, with energy consumption rising as a by-product. One workaround is to reduce the quality of the input data or the rate of inference, but that trades away accuracy or responsiveness. In this session, we will cover a set of tools for making trained ensemble networks smaller on disk and in memory, enabling them to run on cheaper hardware or on edge devices.
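The abstract does not name the specific tools; as one illustrative sketch (not the speaker's method), post-training quantization is a common way to shrink a trained network: float32 weights are stored as int8 plus a scale factor, cutting their memory footprint roughly 4x. The weight matrix below is synthetic, standing in for one layer of an ensemble member.

```python
import numpy as np

# Synthetic stand-in for one layer's float32 weights (64x16).
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 16)).astype(np.float32)

# Symmetric per-tensor quantization: map floats to int8 with one scale.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# At inference time the int8 weights are dequantized (or used directly
# by int8 kernels); reconstruction error is bounded by half the scale.
dequant = q.astype(np.float32) * scale

print(weights.nbytes, q.nbytes)  # int8 storage is 4x smaller
print(float(np.abs(weights - dequant).max()))
```

Applied to every member of an ensemble, this kind of compression lowers the memory, disk, and energy overhead the abstract describes, at the cost of a small, bounded loss in weight precision.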

Shahar Gigi