Webinar: Improving Application Performance on the Intel Xeon Phi Processor
This series of two-hour webinars, combining theory and hands-on practice and delivered over seven days, will teach the fundamental skills needed to achieve optimum performance on the Intel® Xeon Phi™ Processor.
NAG HPC Engineers will show how to improve application performance on the Intel Xeon Phi Processor through the use of OpenMP; this entails fully utilizing all cores as well as making efficient use of the processor's SIMD vectorization capabilities.
The webinars feature practical ‘hands-on’ sessions to fully illustrate each key topic. Logins to remotely access the Intel Xeon Phi Processor will be provided.
The first three webinars introduce the Intel Xeon Phi Processor and teach how to use OpenMP to introduce parallelism. OpenMP also provides vectorization directives, and participants will learn to use them effectively to target the new AVX-512 instructions available in the Intel Xeon Phi Processor. This part ends with a practical exercise: taking a real-world code that is neither parallelized nor vectorized and using OpenMP to achieve both, for significant performance gains.
The final webinars focus on optimizing resource utilization on the Intel Xeon Phi Processor for codes that are already parallel.
Here we take an existing parallel code and further optimize data locality, data layout, and thread affinity. We demonstrate the use of Intel VTune to identify when an application can benefit from these optimizations.
By the end of this course, attendees will understand the Intel Xeon Phi Processor and which applications can best leverage it. They will also know how to use OpenMP to exploit both multicore parallelism and vectorization, and how to further optimize already-parallel applications to utilize the Intel Xeon Phi Processor more efficiently and maximize performance.
Access to Intel Xeon Phi Processors for training material development and lab exercises is provided by the Texas Advanced Computing Center at the University of Texas at Austin.