According to Search Engine Journal, a new Keras-based release of TF-Ranking coincides with the recent pace of Google updates and enables rapid development of more powerful ranking and spam-fighting algorithms.
Google has announced improved technology that makes it easier and faster to research, develop, and deploy new algorithms. This gives Google the ability to rapidly create new anti-spam algorithms, improved natural language processing, and ranking-related algorithms, and to get them into production faster than ever.
Improved TF-Ranking Coincides with Dates of Recent Google Updates

This is of interest because Google rolled out several spam-fighting algorithms and two core algorithm updates in June and July 2021, directly following the May 2021 publication of this new technology. The timing could be coincidental, but considering everything the new version of Keras-based TF-Ranking does, it may be worth familiarizing oneself with it to understand why Google has increased the pace of releasing ranking-related algorithm updates.
Google announced a new version of TF-Ranking that can be used to improve neural learning-to-rank algorithms as well as natural language processing algorithms like BERT. It is a powerful way to create new algorithms, amplify existing ones, and do so incredibly fast. TensorFlow is Google's machine learning platform, and Google describes TF-Ranking as:

The first open-source deep learning library for learning to rank (LTR) at scale.
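As background, "learning to rank" trains a model to order documents by relevance rather than to classify them in isolation. A minimal, framework-free sketch of the classic pairwise LTR objective illustrates the idea (the function and data below are illustrative examples, not TF-Ranking API):

```python
import math

def pairwise_logistic_loss(scores, labels):
    """Pairwise logistic loss, a classic learning-to-rank objective:
    for every document pair where item i is more relevant than item j,
    the model is penalized when score(i) does not exceed score(j)."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:  # item i should rank above item j
                loss += math.log1p(math.exp(scores[j] - scores[i]))
                pairs += 1
    return loss / pairs if pairs else 0.0

# One query with three candidate documents: model scores and
# ground-truth graded relevance labels (higher = more relevant).
scores = [2.0, 0.5, 1.0]
labels = [2, 0, 1]
print(round(pairwise_logistic_loss(scores, labels), 4))  # → 0.3296
```

The loss is low here because the score ordering already agrees with the label ordering; swapping two scores would raise it.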
Google's article on its AI Blog says the new TF-Ranking is a major release that makes it easier than ever to set up learning-to-rank (LTR) models and get them into live production. That means Google can create new algorithms and add them to search faster than ever.
Our native Keras ranking model has a brand-new workflow design, including a flexible ModelBuilder, a DatasetBuilder to set up training data, and a Pipeline to train the model with the provided dataset. These components make building a customized LTR model easier than ever and facilitate rapid exploration of new model structures for production and research.
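The quoted workflow separates three concerns: building the model, building the training dataset, and wiring the two together in a pipeline. A plain-Python analogy of that pattern can make the division of labor concrete (the class names mirror the quote, but this is a sketch of the builder pattern, not the tensorflow_ranking API):

```python
class ModelBuilder:
    """Builds the scoring function. Here: a fixed linear scorer."""
    def __init__(self, weights):
        self.weights = weights

    def build(self):
        w = self.weights
        return lambda features: sum(wi * fi for wi, fi in zip(w, features))

class DatasetBuilder:
    """Builds the training data as (features, relevance_label) rows."""
    def __init__(self, rows):
        self.rows = rows

    def build(self):
        return list(self.rows)

class Pipeline:
    """Wires a built model and a built dataset together."""
    def __init__(self, model_builder, dataset_builder):
        self.model = model_builder.build()
        self.data = dataset_builder.build()

    def evaluate(self):
        # Fraction of positions where ranking by model score matches
        # ranking by ground-truth label.
        by_score = sorted(self.data, key=lambda r: self.model(r[0]), reverse=True)
        by_label = sorted(self.data, key=lambda r: r[1], reverse=True)
        return sum(a[1] == b[1] for a, b in zip(by_score, by_label)) / len(self.data)

rows = [([1.0, 0.0], 2), ([0.0, 1.0], 0), ([0.5, 0.5], 1)]
pipeline = Pipeline(ModelBuilder([1.0, -1.0]), DatasetBuilder(rows))
print(pipeline.evaluate())  # → 1.0 (model ordering matches label ordering)
```

The point of the pattern, in TF-Ranking as in this toy version, is that swapping in a new model architecture or a new data source changes only one builder, which is what makes rapid exploration of new model structures practical.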