When people first start out with machine learning, AI, and robotics, they often expect to find obscure and difficult-to-use programming languages. In reality, Python is one of the most common languages for all of those tasks. But leveraging Python in that way isn’t always easy. Machine learning sits much closer to a computer’s underlying hardware than most other tasks in Python, and performing those functions often raises unexpected errors like “Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2”. This can be especially frustrating because people often use Python specifically to avoid dealing with compiler issues. But as you’ll soon see, this is fairly easy to solve.
What Does the Error Mean?
When you see the error message you might assume that there’s some incompatibility between your CPU and TensorFlow. But it’s actually the exact opposite. The TensorFlow binary is telling you that it’ll work on your computer but that it could potentially leverage more advanced functionality. Specifically, TensorFlow can use a CPU’s Advanced Vector Extensions (AVX) to improve its overall speed and efficiency. But TensorFlow needs to be specifically compiled with that optimization enabled in order to make use of it.
The error is essentially just telling you that the TensorFlow binary has noted that your CPU supports AVX but that the library wasn’t compiled to make use of it. You don’t really need to fix this error as it won’t cause any harm. But at the same time, leaving the error does mean that you’ll continue to be spammed by it and that you’ll be forgoing some potentially significant improvements to the operating speed of your Python code. In short, while you don’t absolutely need to fix the error it’s still something that’s strongly suggested. Especially given the fact that fixing the problem with the TensorFlow library is fairly straightforward.
A More In-depth Look at TensorFlow and AVX
You might wonder why AVX isn’t enabled by default. The answer comes down to the nature of x86/64 processors. Many elements of this processor type are standardized. That’s why you can largely just download programs without needing to check which company designed your processor. One x64 processor can largely run software built for any other x64 processor. There’s only one major exception to this rule – CPU extensions.
The CPU’s underlying architecture is extremely old. The current x86/64 framework dates back to the 8086 chips of the late 1970s. Code written for the 8086 has a good chance of running on a modern x86 processor, though the reverse isn’t true. This is because CPU manufacturers have figured out more efficient ways to accomplish various tasks. These new methodologies are implemented as extensions to the base CPU. But targeting an extension means breaking compatibility with processors that predate those methods. In this case, the AVX family (together with its companion FMA extension) provides fused multiply-add instructions that essentially let multiply-and-add arithmetic complete in one step rather than several.
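To make the idea concrete, a fused multiply-add collapses the separate multiply and add steps of an expression like a*b + c into a single operation. The following is a purely conceptual Python sketch of the difference; the real instruction operates on packed vector registers in hardware and performs a single rounding step.

```python
# Conceptual sketch of fused multiply-add (FMA).
# Hardware FMA computes a*b + c as one instruction; the unfused
# version needs two instructions and rounds the result twice.

def multiply_then_add(a, b, c):
    product = a * b      # step 1: a separate multiply instruction
    return product + c   # step 2: a separate add instruction

def fused_multiply_add(a, b, c):
    # Stand-in for the single hardware op; Python itself has no FMA
    # primitive here (math.fma only arrived in Python 3.13).
    return a * b + c

print(multiply_then_add(2.0, 3.0, 4.0))   # 10.0
print(fused_multiply_add(2.0, 3.0, 4.0))  # 10.0
```

Applied across wide vector registers, collapsing two instructions into one is where AVX gets much of its speedup for the matrix math at the heart of machine learning.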
The benefits of faster mathematical processing for machine learning almost go without saying. But at the same time, people usually don’t want to risk breaking compatibility with older processors. One of the best ways of compromising is to have a program or library check for CPU extensions and then alert the user if they’re found. That way it’s up to you to choose whether you want to make use of that capability or not. Furthermore, because of Python’s structure, you generally don’t need to worry about breaking compatibility with other processors in your Python code. Just because you’re using an AVX extension in your TensorFlow backend doesn’t mean other people running your code will. At least if you’re shipping the Python code as a standard plain-text script.
It might seem obvious that you’d want to use AVX support. But there’s one catch to doing so. TensorFlow can use more than just your CPU. It can also use your GPU. And your GPU will almost always be faster than your CPU with AVX support. Just as your CPU with AVX support will be faster than the same chip without AVX support. If you have a good GPU in your computer then there’s not much benefit to using the AVX extension with TensorFlow. But if you don’t, and your computer supports AVX, then you’ll want to enable support for AVX in TensorFlow.
Two Methods To Fix the Error
At this point, you’ve seen that you have a lot to gain by fixing the error and little to lose. There are two main methods to get around the Python error. The first is easy, and the one you’ll want to use if your computer has a good GPU device. You simply need to add the following to the top of your primary Python script.
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
This code sets TensorFlow’s C++ logging threshold to 2. At level 2 TensorFlow suppresses INFO and WARNING messages; raising the value to 3 would suppress ERROR messages as well. But if you do want to use AVX instructions for CPU computations, you’ll need to compile TensorFlow from source. Unfortunately, this is considerably more involved than just running “pip install tensorflow”.
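One caveat worth noting: the environment variable has to be set before TensorFlow is imported anywhere in the process, or the C++ backend will have already emitted the message. A minimal sketch of the correct ordering:

```python
import os

# Must run BEFORE "import tensorflow" anywhere in the process.
# 0 = show all messages, 1 = filter INFO, 2 = filter INFO + WARNING,
# 3 = filter INFO + WARNING + ERROR.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# import tensorflow as tf  # the AVX message is now suppressed

print(os.environ['TF_CPP_MIN_LOG_LEVEL'])  # 2
```

Note that the value is a string, not an integer; the variable is read by TensorFlow’s C++ layer, which is why a plain Python logging call can’t silence this particular message.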
The compilation steps will differ a little depending on your operating system. But for Linux and most Unix-compatible systems you could use a variation of the following commands. You’ll of course want to modify this for the unique elements of your system. For example, using yum instead of apt on a system that doesn’t use dpkg. But the basic elements are the same everywhere. You download the TensorFlow source code and dependencies. Then you compile the code with specific flags. And, finally, you install the new build. For an Ubuntu Linux system, you’d want to use the following commands. But you should be able to translate this to your own operating system fairly easily.
sudo apt install python3-dev python3-pip
sudo apt install apt-transport-https curl gnupg
curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel-archive-keyring.gpg
sudo mv bazel-archive-keyring.gpg /usr/share/keyrings
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/bazel-archive-keyring.gpg] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
sudo apt update && sudo apt install bazel
sudo apt update && sudo apt full-upgrade
# Optional: pin a specific Bazel release instead of the latest
# (check the .bazelversion file in the TensorFlow checkout for the version it expects)
sudo apt install bazel-1.0.0
pip install -U --user pip numpy wheel packaging requests opt_einsum
pip install -U --user keras_preprocessing --no-deps
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure
# Drop --config=cuda if you're building for CPU only
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-version-tags.whl
The configure stage automatically uses an equivalent of the -march=native flag. This will, in turn, detect CPU extensions and enable AVX support along with any similar optimizations your CPU offers. At this point, you should have an AVX-enabled version of TensorFlow installed on your system.