While many people still think of academic research when it comes to deep learning, Snap Inc. has been applying deep learning models to improve its recommendation engines on a daily basis. Using Google's Cloud Tensor Processing Units (TPUs), Snap has accelerated its pace of innovation and model improvement to enhance the user experience.

Snap's blog Training Large-Scale Recommendation Models with TPUs tells the story of how the Snap ad ranking team leveraged Google's leading-edge TPUs to train deep learning models quickly and efficiently. But there's a lot more to the story than the how, and that's what we're sharing here.

Snap's ad ranking team is charged with training the models that make sure the right ad is served to the right Snapchatter at the right time. With 300+ million daily users and millions of ads to rank, training models quickly and efficiently is a large part of a Snap ML engineer's daily workload. It's simple, really: the more models Snap's engineers can train, the more likely they are to find the models that perform better, and the less it costs to do so. Better ad recommendation models translate to more relevant ads for users, driving greater engagement and improving conversion rates for advertisers.

Over the past decade, there has been tremendous evolution in the hardware accelerators used to train large ML models like those Snap uses for ad ranking, from general-purpose multicore central processing units (CPUs) to graphics processing units (GPUs) to TPUs. TPUs are Google's custom-developed application-specific integrated circuits (ASICs) used to accelerate ML workloads, designed from the ground up to minimize time to accuracy when training large models.