Built on the 14nm process node, Xeon ‘Cooper Lake’ server-grade processors will feature bfloat16, a 16-bit floating-point format particularly useful for deep learning. Through some clever bit management, the format condenses a dynamic range equal to that of a 32-bit floating-point value into half the bits, and it’s particularly conducive to improving image classification, speech recognition, recommendation engines, and machine translation. Intel also plans to roll out bfloat16 across its entire Xeon, FPGA, and AI processor lineup.
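To see how bfloat16 keeps float32's full range in half the bits, here's a minimal sketch in Python: it simply truncates a float32 to its top 16 bits, keeping the sign and the full 8-bit exponent while dropping mantissa precision. (Hardware typically uses round-to-nearest-even rather than plain truncation; this is an illustration, not Intel's implementation.)

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to its top 16 bits (bfloat16)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16  # keep sign (1) + exponent (8) + top 7 mantissa bits

def bfloat16_bits_to_float(b: int) -> float:
    """Expand 16 bfloat16 bits back into a float32 value."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

value = 3.14159
bf = float_to_bfloat16_bits(value)
approx = bfloat16_bits_to_float(bf)
# The exponent survives intact, so the dynamic range matches float32,
# but only about 2-3 decimal digits of precision remain -- a trade-off
# deep learning workloads tolerate well.
```

Because the exponent field is untouched, very large and very small values round-trip without overflow or underflow, which is the property that makes bfloat16 friendlier for training than the older fp16 format.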
Facebook’s latest tech venture has seen it dipping its toes into the AI and machine learning silicon game alongside seasoned veteran Intel. The social media giant’s brief extends far beyond a place to post dog pictures nowadays, and the last twelve months have been particularly newsworthy for the company following congressional grillings and the Cambridge Analytica scandal.