Over at the Open Compute Project Global Summit 2019, Intel’s Jason Waxman announced a collaborative effort with social media giant Facebook to develop the upcoming Cooper Lake server CPUs. Available with up to 112 cores across a four-socket design and supporting the bfloat16 format, these Xeon chips are built to accelerate machine learning and AI workloads.

Built on the 14nm process node, Xeon ‘Cooper Lake’ server-grade processors will feature bfloat16, a 16-bit floating-point format particularly useful for deep learning. The format keeps the full 8-bit exponent of a 32-bit float but truncates the mantissa to 7 bits, so it covers the same numeric range at half the storage cost, a trade-off well suited to AI training, and it’s particularly conducive to improving image classification, speech recognition, recommendation engines, and machine translation. Intel also plans to roll out bfloat16 across its entire Xeon, FPGA, and AI processor lineup.
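The range-versus-precision trade-off above can be illustrated with a minimal sketch: bfloat16 is simply the top 16 bits of an IEEE-754 float32. The snippet below uses plain truncation for clarity; actual hardware implementations typically round to nearest even, and the helper names are illustrative, not any official API.

```python
import struct


def float_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to its top 16 bits (bfloat16).

    bfloat16 keeps float32's sign bit and full 8-bit exponent, so it
    spans the same numeric range, but shortens the mantissa from 23
    bits to 7. Real hardware usually rounds to nearest even; simple
    truncation is used here to keep the sketch short.
    """
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16  # keep sign(1) + exponent(8) + mantissa(7)


def bfloat16_bits_to_float(b: int) -> float:
    """Expand 16 bfloat16 bits back to float32 by zero-padding the mantissa."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]


# Round-tripping shows the reduced precision but preserved range:
approx_pi = bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159))
# approx_pi is 3.140625: close, but only ~3 decimal digits of precision
```

The upshot for deep learning is that gradients and activations rarely need 23 bits of mantissa, while the wide exponent range avoids the overflow/underflow headaches of IEEE half precision (fp16).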

Facebook’s latest tech venture sees it dipping its toes into the AI and machine learning silicon game alongside seasoned veteran Intel. The social media giant is far more than just a place to post dog pictures nowadays, and the last twelve months have been particularly newsworthy for the company following congressional grillings and the Cambridge Analytica scandal.