The open-source TensorFlow machine learning library is getting faster, thanks to a collaboration between Google and Intel.
Intel's open-source oneAPI Deep Neural Network Library (oneDNN) now runs by default in TensorFlow, a Google-led project. oneDNN is an open-source, cross-platform performance library of deep learning building blocks, designed for developers of both deep learning applications and frameworks such as TensorFlow.
According to Intel, oneDNN's promise for enterprises and data scientists is a significant acceleration, up to 3x, for AI operations in TensorFlow, one of the most widely used open-source machine learning technologies today.
"Intel has been collaborating with the TensorFlow team on oneDNN feature integration for many years," A.G. Ramesh, chief engineer of Intel's AI frameworks, told VentureBeat.
The oneDNN library was first made available as an opt-in preview feature in TensorFlow 2.5, released in May 2021. After a year of community testing and positive feedback, Ramesh said, oneDNN was enabled by default in the recent TensorFlow 2.9 update.
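The switch is exposed through an environment variable. As a minimal sketch: in TensorFlow 2.5 through 2.8, setting `TF_ENABLE_ONEDNN_OPTS=1` opted in to the oneDNN kernels; from 2.9 onward, oneDNN is the default and setting the same variable to `0` opts back out. The flag must be set before TensorFlow is imported.

```python
import os

# TF_ENABLE_ONEDNN_OPTS controls the oneDNN code path in TensorFlow.
# It must be set BEFORE `import tensorflow` to take effect.
#   TensorFlow 2.5-2.8: "1" opts IN to the oneDNN preview.
#   TensorFlow 2.9+:    oneDNN is on by default; "0" opts back OUT,
#                       e.g. to compare against the stock kernels.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```

Because oneDNN can change floating-point operation order, opting out this way is a simple sanity check when results differ slightly between machines.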
oneDNN brings AI performance improvements for model execution
Ramesh explained that with oneDNN, data scientists will see performance improvements in model execution times.
The oneDNN updates apply to all Linux x86 packages and to CPUs with neural-network-focused hardware features, found on 2nd Gen Intel Xeon Scalable processors and newer CPUs. Intel calls this kind of performance optimization "software AI acceleration" and says it can have a measurable impact in certain cases.
Ramesh added that business users and data scientists will also be able to use low-precision data types, int8 for inference and bfloat16 for both inference and training, to gain additional performance benefits from AI acceleration features like Intel Deep Learning Boost.
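To see why bfloat16 suits training, note that it is essentially a float32 with the low 16 mantissa bits dropped: it keeps float32's full 8-bit exponent, so the dynamic range is unchanged, but retains only 7 mantissa bits, roughly 2-3 decimal digits of precision. A plain-Python illustration of that truncation (the helper `to_bfloat16` below is hypothetical, written for this sketch; frameworks perform the conversion internally):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Truncate a float32 to bfloat16 (round to nearest even),
    returned as the equivalent Python float for inspection."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # bfloat16 keeps the sign bit, the full 8-bit exponent, and the
    # top 7 mantissa bits of float32; round to nearest even on the cut.
    rounding = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + rounding) & 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_bfloat16(3.141592653589793))  # -> 3.140625 (precision is reduced)
print(to_bfloat16(1e38))               # within ~1% of 1e38: range preserved
```

This range-over-precision trade-off is why bfloat16 works for training gradients where int8, with its far smaller range, is mostly confined to inference.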
Accelerating deep learning with oneDNN
According to Slintel, TensorFlow has a market share of 37%. TensorFlow usage stood at 53% in Kaggle's 2021 State of Data Science and Machine Learning Survey.
However, while TensorFlow is a popular technology, the oneDNN library and Intel's approach to machine learning optimization are not just about TensorFlow. Ramesh said that Intel software optimization through oneDNN and other oneAPI libraries has measured benefits for many popular open-source deep learning frameworks, such as TensorFlow, PyTorch and Apache MXNet, as well as machine learning libraries such as scikit-learn and XGBoost.
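For the scikit-learn case, Intel distributes these optimizations as the Intel Extension for Scikit-learn (the `scikit-learn-intelex` package), which patches stock estimators with oneAPI-accelerated implementations. A minimal usage sketch, assuming the extension has been installed with `pip install scikit-learn-intelex`:

```python
# Sketch of Intel Extension for Scikit-learn usage; falls back to
# stock scikit-learn if the extension is not installed.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # swap in oneAPI-accelerated estimators globally
    accelerated = True
except ImportError:
    accelerated = False  # stock scikit-learn continues to work unchanged

# Subsequent imports such as `from sklearn.cluster import KMeans`
# resolve to the accelerated versions when the patch is active.
```

The design point is that the patch is drop-in: existing scikit-learn code runs unmodified, which matches the article's theme of optimizations being delivered inside the frameworks users already have.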
He added that most of these optimizations have already been upstreamed into the frameworks' default distributions.
Intel's strategy for creating AI optimizations like oneDNN
The oneDNN library is part of Intel's broader strategy to help developers, data scientists, researchers and data engineers adopt AI.
Wei Li, vice president and general manager of AI and analytics at Intel, told VentureBeat that Intel's goal is to make it as easy as possible for any type of user to accelerate their end-to-end AI workflows, from edge to cloud, on Intel hardware, no matter what software they use. Li said that maintaining an open ecosystem across its software offerings helps enable innovation.

He noted that Intel contributes to and optimizes industry frameworks such as PyTorch and TensorFlow, from language-level contributions to Python through to the release of Intel-developed productivity tools such as the OpenVINO toolkit. "Intel recently announced Project Apollo at Vision, which brings even more new, open-source AI reference offerings that will accelerate the adoption of AI everywhere, across all industries," Li said.