IntelMKL¶
Intel® oneAPI Math Kernel Library (oneMKL) is a computing math library of highly optimized, extensively threaded routines for applications that require maximum performance. The library provides Fortran and C programming language interfaces. oneMKL C language interfaces can be called from applications written in either C or C++, as well as in any other language that can reference a C interface.
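As a minimal sketch of what calling the C interface from C++ looks like (cblas_ddot computes a dot product; the complete, buildable matrix-multiplication example is shown in the "How to use IntelMKL" section below):
#include <iostream>
#include <mkl.h>

int main() {
    // Two small vectors whose dot product we compute with oneMKL's CBLAS interface
    const int n = 3;
    double x[n] = {1.0, 2.0, 3.0};
    double y[n] = {4.0, 5.0, 6.0};

    // cblas_ddot(n, x, incx, y, incy) returns the dot product of x and y
    double result = cblas_ddot(n, x, 1, y, 1);

    std::cout << "Dot product: " << result << std::endl; // prints 32
    return 0;
}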
Usage¶
IntelMKL module¶
IntelMKL has to be loaded using Lmod before compiling or running code that uses it.
$ module load imkl
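Once loaded, the module defines the usual oneMKL environment variables; in particular, MKLROOT should point to the library installation, which is useful if you need to pass include or library paths by hand (a quick sanity check):
$ echo $MKLROOT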
To list all the available versions of the module, use spider:
$ module spider imkl
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
imkl:
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Description:
Intel oneAPI Math Kernel Library
Versions:
imkl/2021.2.0-iimpi-2021a
imkl/2021.2.0-iompi-2021a
imkl/2021.4.0
imkl/2022.1.0
imkl/2022.2.1
imkl/2023.1.0
Other possible modules matches:
imkl-FFTW
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
To find other possible module matches execute:
$ module -r spider '.*imkl.*'
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
For detailed information about a specific "imkl" package (including how to load the modules) use the module's full name.
Note that names that have a trailing (E) are extensions provided by other modules.
For example:
$ module spider imkl/2023.1.0
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Software version¶
Here you can check the versions of IntelMKL available on the different clusters:
imkl/11.3.3.210-iimpi-2016b
imkl/2017.1.132-gimpi-2017a
imkl/2017.1.132-iimpi-2017a
imkl/2017.3.196-iimpi-2017b
imkl/2018.1.163-iimpi-2018a
imkl/2018.3.222-iimpi-2018b
imkl/2019.1.144-iimpi-2019a
imkl/2019.5.281-iimpi-2019b
imkl/2020.1.217-iimpi-2020a
imkl/2020.4.304-iimpi-2020b
imkl/2021.2.0-iimpi-2021a
imkl/2021.4.0
imkl/2022.1.0
imkl/2019.5.281-iimpi-2019b
imkl/2019.5.281-iimpic-2019b
imkl/2020.1.217-iimpi-2020a
imkl/2020.4.304-iimpi-2020b
imkl/2021.2.0-gompi-2021a
imkl/2021.2.0-iimpi-2021a
imkl/2021.4.0
imkl/2022.1.0
imkl/2021.2.0-iimpi-2021a
imkl/2021.2.0-iompi-2021a
imkl/2021.4.0
imkl/2022.1.0
imkl/2022.2.1
imkl/2023.1.0
How to use IntelMKL¶
To use it, you need to compile your code against the library.
Let's see how this works with an example, using the following code:
Example code using IntelMKL
#include <iostream>
#include <mkl.h>

int main() {
    // Define the dimensions of the matrices
    const int m = 2; // Number of rows in matrix A and C
    const int n = 3; // Number of columns in matrix B and C
    const int k = 2; // Number of columns in matrix A and rows in matrix B

    // Define the input matrices A and B
    double A[m * k] = {1.0, 2.0,
                       3.0, 4.0};
    double B[k * n] = {5.0, 6.0, 7.0,
                       8.0, 9.0, 10.0};

    // Define the output matrix C and initialize it to zero
    double C[m * n] = {0.0};

    // Define the leading dimensions
    const int lda = k; // Leading dimension of A
    const int ldb = n; // Leading dimension of B
    const int ldc = n; // Leading dimension of C

    // Define the scalars alpha and beta
    double alpha = 1.0;
    double beta = 0.0;

    // Perform matrix multiplication C = alpha * A * B + beta * C
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, alpha, A, lda, B, ldb, beta, C, ldc);

    // Print the result matrix C
    std::cout << "Result matrix C:" << std::endl;
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            std::cout << C[i * n + j] << " ";
        }
        std::cout << std::endl;
    }

    return 0;
}
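For the matrices above, A is 2×2 and B is 2×3, so C = A·B is a 2×3 matrix. Working out the products by hand gives the output you should see when running the program:
Result matrix C:
21 24 27
47 54 61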
First, we load the module:
$ module load imkl/2023.1.0
To compile our code (saved here as example.cpp), we will use the following line:
$ icpc -o example example.cpp -qmkl
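By default -qmkl links the multi-threaded oneMKL libraries. If you prefer the single-threaded variant, which matches the single CPU requested in the batch script below, the Intel compilers also accept an explicit selection, for example:
$ icpc -o example example.cpp -qmkl=sequential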
Once compiled, we can execute it using the following submission script:
Batch script for an execution using IntelMKL
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=IntelMKL_JOB
#SBATCH --mem=20gb
#SBATCH --cpus-per-task=1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --output=%x-%j.out
#SBATCH --error=%x-%j.err
module load imkl/2023.1.0
srun ./example
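Save the script (for example as imkl_job.sh, the name is illustrative) and submit it with sbatch. Given the directives above, standard output and error will be written in the submission directory to IntelMKL_JOB-<jobid>.out and IntelMKL_JOB-<jobid>.err:
$ sbatch imkl_job.sh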