1. Basic concepts

1.1. Vector types

An implementation of the RISC-V V-extension features 32 vector registers of VLEN bits each. Each vector register holds a number of elements. The widest element, in bits, that an implementation supports is called ELEN.

A vector register can thus hold VLEN/ELEN elements of the widest implemented element type. It can also hold twice that number of elements if the element is half the size. That is, a vector of floats always holds twice as many elements as a vector of doubles.

Vector registers in the V-extension can be grouped. The grouping factor can be 1 (i.e. no grouping), 2, 4 or 8. Grouping yields larger vectors but fewer of them (e.g. there are only 16 register groups with grouping 2). The grouping factor is part of the state of the extension and is called LMUL (length multiplier). An LMUL of 1 means no grouping.

In EPI, ELEN=64, so the following types are available to operate on vectors under the different LMUL configurations.

Table 1. Available vector types

Vector of   LMUL=1        LMUL=2        LMUL=4        LMUL=8
double      __epi_1xf64   __epi_2xf64   __epi_4xf64   __epi_8xf64
float       __epi_2xf32   __epi_4xf32   __epi_8xf32   __epi_16xf32
int64_t     __epi_1xi64   __epi_2xi64   __epi_4xi64   __epi_8xi64
int32_t     __epi_2xi32   __epi_4xi32   __epi_8xi32   __epi_16xi32
int16_t     __epi_4xi16   __epi_8xi16   __epi_16xi16  __epi_32xi16
int8_t      __epi_8xi8    __epi_16xi8   __epi_32xi8   __epi_64xi8

The syntax of vector types is __epi_<factor>x<ty>.

  • factor is the relative number of elements of the vector with respect to VLEN/ELEN. This way __epi_2xf32 and __epi_2xf64 have the same number of elements but different element types.

  • ty is the element type. This way __epi_2xf32 and __epi_4xf32 have a different number of elements but the same element type.

1.2. Mask types

Mask types are unrelated to LMUL in that they always use a single vector register. However, the <factor> value is still useful. The element type of a mask is i1.

  • __epi_1xi1

  • __epi_2xi1

  • __epi_4xi1

  • __epi_8xi1

  • __epi_16xi1

  • __epi_32xi1

  • __epi_64xi1

For example, a relational operation between two __epi_2x<ty> vectors will compute a mask of type __epi_2xi1.
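The shape of this computation can be modeled in scalar C. The sketch below is illustrative only: the function name is hypothetical and the actual comparison builtins belong to the EPI API, which is not shown in this section.

```c
#include <stdbool.h>

/* Scalar model (illustration only): an elementwise "less than" between two
   vectors of gvl active double elements produces one i1 (here: bool) per
   element, analogous to two __epi_2xf64 operands yielding an __epi_2xi1. */
static void model_compare_lt(const double *a, const double *b, bool *mask,
                             unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        mask[i] = a[i] < b[i];
}
```

The resulting mask is then typically passed to the masked variants of other builtins, described later in this document.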

1.3. Tuple types

Tuple types represent a group of two to eight vectors. Currently, only tuples of LMUL=1 vectors are implemented.

2 elements     3 elements     4 elements     5 elements     6 elements     7 elements     8 elements
__epi_1xf64x2  __epi_1xf64x3  __epi_1xf64x4  __epi_1xf64x5  __epi_1xf64x6  __epi_1xf64x7  __epi_1xf64x8
__epi_2xf32x2  __epi_2xf32x3  __epi_2xf32x4  __epi_2xf32x5  __epi_2xf32x6  __epi_2xf32x7  __epi_2xf32x8
__epi_1xi64x2  __epi_1xi64x3  __epi_1xi64x4  __epi_1xi64x5  __epi_1xi64x6  __epi_1xi64x7  __epi_1xi64x8
__epi_2xi32x2  __epi_2xi32x3  __epi_2xi32x4  __epi_2xi32x5  __epi_2xi32x6  __epi_2xi32x7  __epi_2xi32x8
__epi_4xi16x2  __epi_4xi16x3  __epi_4xi16x4  __epi_4xi16x5  __epi_4xi16x6  __epi_4xi16x7  __epi_4xi16x8
__epi_8xi8x2   __epi_8xi8x3   __epi_8xi8x4   __epi_8xi8x5   __epi_8xi8x6   __epi_8xi8x7   __epi_8xi8x8

Some EPI builtins return two vectors and use tuple types of 2 elements.

To access the elements of a tuple, use the fields v0, v1, …, v7, depending on the number of elements of the tuple type.

__epi_1xf64x2 mytuple;

... = mytuple.v0; // __epi_1xf64
... = mytuple.v1; // __epi_1xf64

1.4. Mixed types

If your code is mixing widths (e.g. vectors of float and double at the same time) there are two possible approaches:

  • Underusing the registers that hold the narrower elements. For instance, using __epi_1xf64 and __epi_2xf32 but treating the latter as if it had only half of its elements (as if it were the nonexistent type __epi_1xf32). This can be achieved using a granted vector length obtained with element width 64 (i.e. the wider element). This approach is complicated if we need to convert the lower elements of an __epi_2xf32 into an __epi_1xf64 because of the SLEN parameter (which need not be VLEN).

  • Grouping registers. For instance, using __epi_2xf64 and __epi_2xf32. The former type must be operated under LMUL=2 while the latter can be operated under LMUL=1. The granted vector length can be requested using the wider (with __epi_m2) or the narrower (with __epi_m1) type.

1.5. Cache flags

Some load and store instructions allow an extra flags operand. This operand does not need to be a constant, but its value at runtime must be either 0 or __epi_nt.

Flags      Meaning

0          Temporal operation: the load or store will allocate the loaded/stored data in case of a cache miss.

__epi_nt   Non-temporal operation: the load or store will not allocate the loaded/stored data in case of a cache miss.

2. Function reference

2.1. Vector configuration

2.1.1. Change the granted vector length

Description

Use this builtin to set the granted vector length given a requested vector length (rvl), a single element width (sew), and a length multiplier (lmul).

This builtin returns the granted vector length, which is suitable for use in other builtins that require it.

Valid values for the sew parameter are:

  • __epi_e8, for elements of 8 bits (like char, signed char, unsigned char)

  • __epi_e16, for elements of 16 bits (like short, unsigned short or the unsupported __float16)

  • __epi_e32, for elements of 32 bits (like int, unsigned int or float)

  • __epi_e64, for elements of 64 bits (like long, unsigned long or double)

Valid values for the lmul parameter are:

  • __epi_m1, for LMUL=1

  • __epi_m2, for LMUL=2

  • __epi_m4, for LMUL=4

  • __epi_m8, for LMUL=8

Instruction
vsetvli
Prototypes
unsigned long int __builtin_epi_vsetvl(unsigned long int rvl,
                                       /* constant */ unsigned long int sew,
                                       /* constant */ unsigned long int lmul);
Operation
gvl = compute_vector_length(rvl, sew, lmul)
result = gvl
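The typical use of this builtin is a strip-mined loop: request the number of elements left to process, obtain a granted vector length, operate on that many elements, and advance. The scalar C model below illustrates the pattern. The VLEN value, the min-based grant rule, and the helper names are assumptions for illustration only, not the EPI implementation; real hardware may grant other lengths within the constraints of the specification.

```c
/* Scalar model (assumption: VLEN = 256 bits, and the simple grant rule
   gvl = min(rvl, VLMAX) where VLMAX = (VLEN / sew) * lmul). */
enum { MODEL_VLEN = 256 };

static unsigned long model_vsetvl(unsigned long rvl, unsigned long sew,
                                  unsigned long lmul) {
    unsigned long vlmax = (MODEL_VLEN / sew) * lmul;
    return rvl < vlmax ? rvl : vlmax;
}

/* Strip-mining: process n doubles in chunks of at most gvl elements.
   The inner loop stands in for a single vector builtin operating on
   gvl elements. */
static void strip_mined_add(double *dst, const double *src, unsigned long n) {
    for (unsigned long i = 0; i < n; ) {
        unsigned long gvl = model_vsetvl(n - i, 64, 1); /* sew=64, LMUL=1 */
        for (unsigned long j = 0; j < gvl; ++j)
            dst[i + j] += src[i + j];
        i += gvl;
    }
}
```

Note that the last iteration naturally receives a shorter granted vector length when the remaining element count is below the maximum, so no scalar epilogue is needed.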

2.1.2. Set the granted vector length to the maximum length

Description

Use this builtin to set the granted vector length to the maximum value allowed given a single element width (sew) and a length multiplier (lmul).

This builtin returns the granted vector length, which is suitable for use in other builtins that require it.

Instruction
vsetvli
Prototypes
unsigned long int
__builtin_epi_vsetvlmax(/* constant */ unsigned long int sew,
                        /* constant */ unsigned long int lmul);
Operation
gvl = compute_vlmax(sew, lmul)
result = gvl

2.2. Floating-point arithmetic operations

2.2.1. Elementwise floating-point addition

Description

Use these builtins to do an elementwise addition of two floating-point vectors.

Instruction
vfadd.vv
Prototypes
__epi_2xf32 __builtin_epi_vfadd_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfadd_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfadd_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfadd_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfadd_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfadd_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfadd_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfadd_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] + b[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfadd_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                           __epi_2xf32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfadd_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                           __epi_1xf64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfadd_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                           __epi_4xf32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfadd_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                           __epi_2xf64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfadd_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                           __epi_8xf32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfadd_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                           __epi_4xf64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfadd_16xf32_mask(__epi_16xf32 merge, __epi_16xf32 a,
                                             __epi_16xf32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfadd_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                           __epi_8xf64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] + b[element]
   else
     result[element] = merge[element]
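The merge parameter supplies the values of the inactive elements: where the mask is set, the result is a + b; elsewhere it is taken from merge. The scalar C model below restates the masked-operation pseudocode above; the function name is hypothetical and the model is for illustration only.

```c
#include <stdbool.h>

/* Scalar model of the masked elementwise addition: active elements
   (mask[i] set) take a[i] + b[i]; inactive elements take merge[i]. */
static void model_vfadd_mask(double *result, const double *merge,
                             const double *a, const double *b,
                             const bool *mask, unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        result[i] = mask[i] ? a[i] + b[i] : merge[i];
}
```

The same merge semantics apply to the masked variants of the other elementwise builtins in this chapter.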

2.2.2. Elementwise floating-point division

Description

Use these builtins to do an elementwise division of two floating-point vectors.

Instruction
vfdiv.vv
Prototypes
__epi_2xf32 __builtin_epi_vfdiv_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfdiv_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfdiv_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfdiv_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfdiv_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfdiv_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfdiv_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfdiv_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] / b[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfdiv_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                           __epi_2xf32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfdiv_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                           __epi_1xf64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfdiv_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                           __epi_4xf32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfdiv_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                           __epi_2xf64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfdiv_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                           __epi_8xf32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfdiv_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                           __epi_4xf64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfdiv_16xf32_mask(__epi_16xf32 merge, __epi_16xf32 a,
                                             __epi_16xf32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfdiv_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                           __epi_8xf64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] / b[element]
   else
     result[element] = merge[element]

2.2.3. Floating-point multiply and add (overwrite addend)

Description

Use these builtins to do an elementwise floating-point multiply and add.

At a low level, the parameter c will be located in a vector register that will be overwritten by the vfmacc instruction.

Instruction
vfmacc.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmacc_2xf32(__epi_2xf32 c, __epi_2xf32 a,
                                       __epi_2xf32 b, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmacc_1xf64(__epi_1xf64 c, __epi_1xf64 a,
                                       __epi_1xf64 b, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmacc_4xf32(__epi_4xf32 c, __epi_4xf32 a,
                                       __epi_4xf32 b, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmacc_2xf64(__epi_2xf64 c, __epi_2xf64 a,
                                       __epi_2xf64 b, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmacc_8xf32(__epi_8xf32 c, __epi_8xf32 a,
                                       __epi_8xf32 b, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmacc_4xf64(__epi_4xf64 c, __epi_4xf64 a,
                                       __epi_4xf64 b, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmacc_16xf32(__epi_16xf32 c, __epi_16xf32 a,
                                         __epi_16xf32 b, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmacc_8xf64(__epi_8xf64 c, __epi_8xf64 a,
                                       __epi_8xf64 b, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] * b[element] + c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfmacc_2xf32_mask(__epi_2xf32 c, __epi_2xf32 a,
                                            __epi_2xf32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmacc_1xf64_mask(__epi_1xf64 c, __epi_1xf64 a,
                                            __epi_1xf64 b, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmacc_4xf32_mask(__epi_4xf32 c, __epi_4xf32 a,
                                            __epi_4xf32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmacc_2xf64_mask(__epi_2xf64 c, __epi_2xf64 a,
                                            __epi_2xf64 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmacc_8xf32_mask(__epi_8xf32 c, __epi_8xf32 a,
                                            __epi_8xf32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmacc_4xf64_mask(__epi_4xf64 c, __epi_4xf64 a,
                                            __epi_4xf64 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmacc_16xf32_mask(__epi_16xf32 c, __epi_16xf32 a,
                                              __epi_16xf32 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmacc_8xf64_mask(__epi_8xf64 c, __epi_8xf64 a,
                                            __epi_8xf64 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] * b[element] + c[element]
   else
     result[element] = c[element]

2.2.4. Floating-point multiply and add (overwrite multiplicand)

Description

Use these builtins to do an elementwise floating-point multiply and add.

At a low level, the parameter a will be located in a vector register that will be overwritten by the vfmadd instruction.

Instruction
vfmadd.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmadd_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                       __epi_2xf32 c, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmadd_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                       __epi_1xf64 c, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmadd_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                       __epi_4xf32 c, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmadd_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                       __epi_2xf64 c, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmadd_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                       __epi_8xf32 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmadd_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                       __epi_4xf64 c, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmadd_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                         __epi_16xf32 c, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmadd_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                       __epi_8xf64 c, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] * b[element] + c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfmadd_2xf32_mask(__epi_2xf32 a, __epi_2xf32 b,
                                            __epi_2xf32 c, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmadd_1xf64_mask(__epi_1xf64 a, __epi_1xf64 b,
                                            __epi_1xf64 c, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmadd_4xf32_mask(__epi_4xf32 a, __epi_4xf32 b,
                                            __epi_4xf32 c, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmadd_2xf64_mask(__epi_2xf64 a, __epi_2xf64 b,
                                            __epi_2xf64 c, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmadd_8xf32_mask(__epi_8xf32 a, __epi_8xf32 b,
                                            __epi_8xf32 c, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmadd_4xf64_mask(__epi_4xf64 a, __epi_4xf64 b,
                                            __epi_4xf64 c, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmadd_16xf32_mask(__epi_16xf32 a, __epi_16xf32 b,
                                              __epi_16xf32 c, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmadd_8xf64_mask(__epi_8xf64 a, __epi_8xf64 b,
                                            __epi_8xf64 c, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] * b[element] + c[element]
   else
     result[element] = a[element]
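For active elements, vfmacc and vfmadd compute the same value, a * b + c; they differ in which operand is overwritten at the instruction level and therefore in which operand supplies the inactive elements of the masked variants: vfmacc keeps the addend c, while vfmadd keeps the multiplicand a. The per-element scalar models below restate this difference; the function names are hypothetical and for illustration only.

```c
#include <stdbool.h>

/* Scalar models of one element of the masked operations: both compute
   a*b + c when active; when inactive, vfmacc yields c (the overwritten
   addend) while vfmadd yields a (the overwritten multiplicand). */
static double model_vfmacc_elem(double c, double a, double b, bool active) {
    return active ? a * b + c : c;
}

static double model_vfmadd_elem(double a, double b, double c, bool active) {
    return active ? a * b + c : a;
}
```

Choosing between the two builtins is thus mostly a matter of which operand's register you want to reuse for the result.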

2.2.5. Elementwise floating-point maximum

Description

Use these builtins to compute the elementwise maximum of two floating-point vectors.

Instruction
vfmax.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmax_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmax_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmax_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmax_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmax_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmax_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmax_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmax_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = max(a[element], b[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfmax_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                           __epi_2xf32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmax_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                           __epi_1xf64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmax_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                           __epi_4xf32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmax_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                           __epi_2xf64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmax_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                           __epi_8xf32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmax_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                           __epi_4xf64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmax_16xf32_mask(__epi_16xf32 merge, __epi_16xf32 a,
                                             __epi_16xf32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmax_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                           __epi_8xf64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = max(a[element], b[element])
   else
     result[element] = merge[element]

2.2.6. Elementwise floating-point minimum

Description

Use these builtins to compute the elementwise minimum of two floating-point vectors.

Instruction
vfmin.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmin_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmin_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmin_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmin_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmin_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmin_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmin_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmin_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = min(a[element], b[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfmin_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                           __epi_2xf32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmin_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                           __epi_1xf64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmin_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                           __epi_4xf32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmin_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                           __epi_2xf64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmin_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                           __epi_8xf32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmin_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                           __epi_4xf64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmin_16xf32_mask(__epi_16xf32 merge, __epi_16xf32 a,
                                             __epi_16xf32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmin_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                           __epi_8xf64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = min(a[element], b[element])
   else
     result[element] = merge[element]

2.2.7. Floating-point multiply and subtract (overwrite subtrahend)

Description

Use these builtins to do an elementwise floating-point multiply and subtract.

At a low level, the parameter c will be located in a vector register that will be overwritten by the vfmsac instruction.

Instruction
vfmsac.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmsac_2xf32(__epi_2xf32 c, __epi_2xf32 a,
                                       __epi_2xf32 b, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmsac_1xf64(__epi_1xf64 c, __epi_1xf64 a,
                                       __epi_1xf64 b, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmsac_4xf32(__epi_4xf32 c, __epi_4xf32 a,
                                       __epi_4xf32 b, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmsac_2xf64(__epi_2xf64 c, __epi_2xf64 a,
                                       __epi_2xf64 b, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmsac_8xf32(__epi_8xf32 c, __epi_8xf32 a,
                                       __epi_8xf32 b, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmsac_4xf64(__epi_4xf64 c, __epi_4xf64 a,
                                       __epi_4xf64 b, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmsac_16xf32(__epi_16xf32 c, __epi_16xf32 a,
                                         __epi_16xf32 b, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmsac_8xf64(__epi_8xf64 c, __epi_8xf64 a,
                                       __epi_8xf64 b, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] * b[element] - c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfmsac_2xf32_mask(__epi_2xf32 c, __epi_2xf32 a,
                                            __epi_2xf32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmsac_1xf64_mask(__epi_1xf64 c, __epi_1xf64 a,
                                            __epi_1xf64 b, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmsac_4xf32_mask(__epi_4xf32 c, __epi_4xf32 a,
                                            __epi_4xf32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmsac_2xf64_mask(__epi_2xf64 c, __epi_2xf64 a,
                                            __epi_2xf64 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmsac_8xf32_mask(__epi_8xf32 c, __epi_8xf32 a,
                                            __epi_8xf32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmsac_4xf64_mask(__epi_4xf64 c, __epi_4xf64 a,
                                            __epi_4xf64 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmsac_16xf32_mask(__epi_16xf32 c, __epi_16xf32 a,
                                              __epi_16xf32 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmsac_8xf64_mask(__epi_8xf64 c, __epi_8xf64 a,
                                            __epi_8xf64 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] * b[element] - c[element]
   else
     result[element] = c[element]

2.2.8. Floating-point multiply and subtract (overwrite multiplicand)

Description

Use these builtins to do an elementwise floating-point multiply and subtract.

At a low level, the parameter a will be located in a vector register that will be overwritten by the vfmsub instruction.

Instruction
vfmsub.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmsub_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                       __epi_2xf32 c, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmsub_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                       __epi_1xf64 c, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmsub_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                       __epi_4xf32 c, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmsub_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                       __epi_2xf64 c, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmsub_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                       __epi_8xf32 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmsub_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                       __epi_4xf64 c, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmsub_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                         __epi_16xf32 c, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmsub_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                       __epi_8xf64 c, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] * b[element] - c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfmsub_2xf32_mask(__epi_2xf32 a, __epi_2xf32 b,
                                            __epi_2xf32 c, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmsub_1xf64_mask(__epi_1xf64 a, __epi_1xf64 b,
                                            __epi_1xf64 c, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmsub_4xf32_mask(__epi_4xf32 a, __epi_4xf32 b,
                                            __epi_4xf32 c, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmsub_2xf64_mask(__epi_2xf64 a, __epi_2xf64 b,
                                            __epi_2xf64 c, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmsub_8xf32_mask(__epi_8xf32 a, __epi_8xf32 b,
                                            __epi_8xf32 c, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmsub_4xf64_mask(__epi_4xf64 a, __epi_4xf64 b,
                                            __epi_4xf64 c, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmsub_16xf32_mask(__epi_16xf32 a, __epi_16xf32 b,
                                              __epi_16xf32 c, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmsub_8xf64_mask(__epi_8xf64 a, __epi_8xf64 b,
                                            __epi_8xf64 c, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] * b[element] - c[element]
   else
     result[element] = a[element]
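
As a sketch, the masked semantics above can be modelled with a scalar loop in plain C; this is useful for unit-testing vectorized code. The function name and signature are illustrative only and are not part of the EPI API.

```c
#include <stddef.h>

/* Scalar model of the masked vfmsub builtin: active elements compute
   a*b - c; inactive elements keep a, the overwritten multiplicand. */
static void vfmsub_f64_mask(double *result, const double *a, const double *b,
                            const double *c, const int *mask, size_t gvl) {
    for (size_t i = 0; i < gvl; ++i)
        result[i] = mask[i] ? a[i] * b[i] - c[i] : a[i];
}
```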

2.2.9. Elementwise floating-point multiplication

Description

Use these builtins to perform an elementwise multiplication of two floating-point vectors.

Instruction
vfmul.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmul_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmul_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmul_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmul_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmul_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmul_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmul_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmul_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] * b[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfmul_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                           __epi_2xf32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmul_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                           __epi_1xf64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmul_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                           __epi_4xf32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmul_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                           __epi_2xf64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmul_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                           __epi_8xf32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmul_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                           __epi_4xf64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmul_16xf32_mask(__epi_16xf32 merge, __epi_16xf32 a,
                                             __epi_16xf32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmul_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                           __epi_8xf64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] * b[element]
   else
     result[element] = merge[element]
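
Note that, unlike the FMA-style builtins above, the masked variant takes a separate merge operand for inactive elements. A scalar sketch in plain C (illustrative name, not part of the EPI API):

```c
#include <stddef.h>

/* Scalar model of the masked vfmul builtin: inactive elements take
   their value from the separate merge operand rather than from an
   overwritten input. */
static void vfmul_f64_mask(double *result, const double *merge,
                           const double *a, const double *b,
                           const int *mask, size_t gvl) {
    for (size_t i = 0; i < gvl; ++i)
        result[i] = mask[i] ? a[i] * b[i] : merge[i];
}
```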

2.2.10. Floating-point negate multiply and add (overwrite addend)

Description

Use these builtins to perform an elementwise floating-point negate multiply and add.

At the low level, the parameter c is located in a vector register that is overwritten by the vfnmacc instruction.

Instruction
vfnmacc.vv
Prototypes
__epi_2xf32 __builtin_epi_vfnmacc_2xf32(__epi_2xf32 c, __epi_2xf32 a,
                                        __epi_2xf32 b, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmacc_1xf64(__epi_1xf64 c, __epi_1xf64 a,
                                        __epi_1xf64 b, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmacc_4xf32(__epi_4xf32 c, __epi_4xf32 a,
                                        __epi_4xf32 b, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmacc_2xf64(__epi_2xf64 c, __epi_2xf64 a,
                                        __epi_2xf64 b, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmacc_8xf32(__epi_8xf32 c, __epi_8xf32 a,
                                        __epi_8xf32 b, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmacc_4xf64(__epi_4xf64 c, __epi_4xf64 a,
                                        __epi_4xf64 b, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmacc_16xf32(__epi_16xf32 c, __epi_16xf32 a,
                                          __epi_16xf32 b,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmacc_8xf64(__epi_8xf64 c, __epi_8xf64 a,
                                        __epi_8xf64 b, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = -( a[element] * b[element] ) - c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfnmacc_2xf32_mask(__epi_2xf32 c, __epi_2xf32 a,
                                             __epi_2xf32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmacc_1xf64_mask(__epi_1xf64 c, __epi_1xf64 a,
                                             __epi_1xf64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmacc_4xf32_mask(__epi_4xf32 c, __epi_4xf32 a,
                                             __epi_4xf32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmacc_2xf64_mask(__epi_2xf64 c, __epi_2xf64 a,
                                             __epi_2xf64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmacc_8xf32_mask(__epi_8xf32 c, __epi_8xf32 a,
                                             __epi_8xf32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmacc_4xf64_mask(__epi_4xf64 c, __epi_4xf64 a,
                                             __epi_4xf64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmacc_16xf32_mask(__epi_16xf32 c, __epi_16xf32 a,
                                               __epi_16xf32 b, __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmacc_8xf64_mask(__epi_8xf64 c, __epi_8xf64 a,
                                             __epi_8xf64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = -( a[element] * b[element] ) - c[element]
   else
     result[element] = c[element]

2.2.11. Floating-point negate multiply and add (overwrite multiplicand)

Description

Use these builtins to perform an elementwise floating-point negate multiply and add.

At the low level, the parameter a is located in a vector register that is overwritten by the vfnmadd instruction.

Instruction
vfnmadd.vv
Prototypes
__epi_2xf32 __builtin_epi_vfnmadd_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                        __epi_2xf32 c, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmadd_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                        __epi_1xf64 c, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmadd_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                        __epi_4xf32 c, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmadd_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                        __epi_2xf64 c, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmadd_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                        __epi_8xf32 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmadd_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                        __epi_4xf64 c, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmadd_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                          __epi_16xf32 c,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmadd_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                        __epi_8xf64 c, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = -( a[element] * b[element] ) - c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfnmadd_2xf32_mask(__epi_2xf32 a, __epi_2xf32 b,
                                             __epi_2xf32 c, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmadd_1xf64_mask(__epi_1xf64 a, __epi_1xf64 b,
                                             __epi_1xf64 c, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmadd_4xf32_mask(__epi_4xf32 a, __epi_4xf32 b,
                                             __epi_4xf32 c, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmadd_2xf64_mask(__epi_2xf64 a, __epi_2xf64 b,
                                             __epi_2xf64 c, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmadd_8xf32_mask(__epi_8xf32 a, __epi_8xf32 b,
                                             __epi_8xf32 c, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmadd_4xf64_mask(__epi_4xf64 a, __epi_4xf64 b,
                                             __epi_4xf64 c, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmadd_16xf32_mask(__epi_16xf32 a, __epi_16xf32 b,
                                               __epi_16xf32 c, __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmadd_8xf64_mask(__epi_8xf64 a, __epi_8xf64 b,
                                             __epi_8xf64 c, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = -( a[element] * b[element] ) - c[element]
   else
     result[element] = a[element]

2.2.12. Floating-point negate multiply and subtract (overwrite subtrahend)

Description

Use these builtins to perform an elementwise floating-point negate multiply and subtract.

At the low level, the parameter c is located in a vector register that is overwritten by the vfnmsac instruction.

Instruction
vfnmsac.vv
Prototypes
__epi_2xf32 __builtin_epi_vfnmsac_2xf32(__epi_2xf32 c, __epi_2xf32 a,
                                        __epi_2xf32 b, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmsac_1xf64(__epi_1xf64 c, __epi_1xf64 a,
                                        __epi_1xf64 b, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmsac_4xf32(__epi_4xf32 c, __epi_4xf32 a,
                                        __epi_4xf32 b, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmsac_2xf64(__epi_2xf64 c, __epi_2xf64 a,
                                        __epi_2xf64 b, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmsac_8xf32(__epi_8xf32 c, __epi_8xf32 a,
                                        __epi_8xf32 b, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmsac_4xf64(__epi_4xf64 c, __epi_4xf64 a,
                                        __epi_4xf64 b, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmsac_16xf32(__epi_16xf32 c, __epi_16xf32 a,
                                          __epi_16xf32 b,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmsac_8xf64(__epi_8xf64 c, __epi_8xf64 a,
                                        __epi_8xf64 b, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = -( a[element] * b[element] ) + c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfnmsac_2xf32_mask(__epi_2xf32 c, __epi_2xf32 a,
                                             __epi_2xf32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmsac_1xf64_mask(__epi_1xf64 c, __epi_1xf64 a,
                                             __epi_1xf64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmsac_4xf32_mask(__epi_4xf32 c, __epi_4xf32 a,
                                             __epi_4xf32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmsac_2xf64_mask(__epi_2xf64 c, __epi_2xf64 a,
                                             __epi_2xf64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmsac_8xf32_mask(__epi_8xf32 c, __epi_8xf32 a,
                                             __epi_8xf32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmsac_4xf64_mask(__epi_4xf64 c, __epi_4xf64 a,
                                             __epi_4xf64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmsac_16xf32_mask(__epi_16xf32 c, __epi_16xf32 a,
                                               __epi_16xf32 b, __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmsac_8xf64_mask(__epi_8xf64 c, __epi_8xf64 a,
                                             __epi_8xf64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = -( a[element] * b[element] ) + c[element]
   else
     result[element] = c[element]

2.2.13. Floating-point negate multiply and subtract (overwrite multiplicand)

Description

Use these builtins to perform an elementwise floating-point negate multiply and subtract.

At the low level, the parameter a is located in a vector register that is overwritten by the vfnmsub instruction.

Instruction
vfnmsub.vv
Prototypes
__epi_2xf32 __builtin_epi_vfnmsub_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                        __epi_2xf32 c, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmsub_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                        __epi_1xf64 c, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmsub_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                        __epi_4xf32 c, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmsub_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                        __epi_2xf64 c, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmsub_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                        __epi_8xf32 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmsub_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                        __epi_4xf64 c, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmsub_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                          __epi_16xf32 c,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmsub_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                        __epi_8xf64 c, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = -( a[element] * b[element] ) + c[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfnmsub_2xf32_mask(__epi_2xf32 a, __epi_2xf32 b,
                                             __epi_2xf32 c, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfnmsub_1xf64_mask(__epi_1xf64 a, __epi_1xf64 b,
                                             __epi_1xf64 c, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfnmsub_4xf32_mask(__epi_4xf32 a, __epi_4xf32 b,
                                             __epi_4xf32 c, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfnmsub_2xf64_mask(__epi_2xf64 a, __epi_2xf64 b,
                                             __epi_2xf64 c, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfnmsub_8xf32_mask(__epi_8xf32 a, __epi_8xf32 b,
                                             __epi_8xf32 c, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfnmsub_4xf64_mask(__epi_4xf64 a, __epi_4xf64 b,
                                             __epi_4xf64 c, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfnmsub_16xf32_mask(__epi_16xf32 a, __epi_16xf32 b,
                                               __epi_16xf32 c, __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfnmsub_8xf64_mask(__epi_8xf64 a, __epi_8xf64 b,
                                             __epi_8xf64 c, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = -( a[element] * b[element] ) + c[element]
   else
     result[element] = a[element]
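
The four negated-FMA builtins above reduce to two arithmetic forms; the members of each pair differ only in which input register the instruction overwrites. A scalar sketch in plain C (illustrative names, not part of the EPI API):

```c
/* vfnmacc and vfnmadd both compute -(a*b) - c; they differ only in
   whether the addend (vfnmacc) or the multiplicand (vfnmadd) is the
   overwritten register. Likewise, vfnmsac and vfnmsub both compute
   -(a*b) + c. */
static double nm_add_form(double a, double b, double c) { return -(a * b) - c; }
static double nm_sub_form(double a, double b, double c) { return -(a * b) + c; }
```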

2.2.14. Floating-point compute maximum of vector

Description

Use these builtins to compute the maximum element of a floating-point vector. The initial maximum is taken from the first element of the vector b.

Instruction
vfredmax.vs
Prototypes
__epi_2xf32 __builtin_epi_vfredmax_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredmax_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredmax_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredmax_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredmax_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredmax_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredmax_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredmax_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                         unsigned long int gvl);
Operation
if gvl > 0:
  current_max = b[0]
  for element = 0 to gvl - 1
     current_max = max(current_max, a[element])

  result[0] = current_max
Masked prototypes
__epi_2xf32 __builtin_epi_vfredmax_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                              __epi_2xf32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredmax_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                              __epi_1xf64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredmax_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                              __epi_4xf32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredmax_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              __epi_2xf64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredmax_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                              __epi_8xf32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredmax_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              __epi_4xf64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredmax_16xf32_mask(__epi_16xf32 merge,
                                                __epi_16xf32 a, __epi_16xf32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredmax_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              __epi_8xf64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
if gvl > 0:
  current_max = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_max = max(current_max, a[element])
     else
       result[element] = merge[element]

  result[0] = current_max
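
A scalar model of this reduction in plain C (illustrative name, not part of the EPI API) makes the seeding explicit: the running maximum starts from b[0], and only element 0 of the result carries the reduction.

```c
#include <stddef.h>

/* Scalar model of the vfredmax reduction: the running maximum is
   seeded from b0 (the first element of vector b) and the return
   value corresponds to result[0]. */
static double vfredmax_f64(const double *a, double b0, size_t gvl) {
    double current_max = b0;
    for (size_t i = 0; i < gvl; ++i)
        if (a[i] > current_max)
            current_max = a[i];
    return current_max;
}
```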

2.2.15. Floating-point compute minimum of vector

Description

Use these builtins to compute the minimum element of a floating-point vector. The initial minimum is taken from the first element of the vector b.

Instruction
vfredmin.vs
Prototypes
__epi_2xf32 __builtin_epi_vfredmin_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredmin_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredmin_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredmin_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredmin_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredmin_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredmin_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredmin_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                         unsigned long int gvl);
Operation
if gvl > 0:
  current_min = b[0]
  for element = 0 to gvl - 1
     current_min = min(current_min, a[element])

  result[0] = current_min
Masked prototypes
__epi_2xf32 __builtin_epi_vfredmin_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                              __epi_2xf32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredmin_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                              __epi_1xf64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredmin_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                              __epi_4xf32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredmin_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              __epi_2xf64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredmin_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                              __epi_8xf32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredmin_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              __epi_4xf64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredmin_16xf32_mask(__epi_16xf32 merge,
                                                __epi_16xf32 a, __epi_16xf32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredmin_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              __epi_8xf64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
if gvl > 0:
  current_min = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_min = min(current_min, a[element])
     else
       result[element] = merge[element]

  result[0] = current_min

2.2.16. Floating-point ordered sum of vector

Description

Use these builtins to compute the sum of all the elements of a floating-point vector. The initial result of the sum is taken from the first element of the vector b.

This operation performs the floating-point additions in exactly the order described, without reassociation.

Instruction
vfredosum.vs
Prototypes
__epi_2xf32 __builtin_epi_vfredosum_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                          unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredosum_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                          unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredosum_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                          unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredosum_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                          unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredosum_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                          unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredosum_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                          unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredosum_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredosum_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                          unsigned long int gvl);
Operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     current_sum = current_sum + a[element]

  result[0] = current_sum
Masked prototypes
__epi_2xf32 __builtin_epi_vfredosum_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                               __epi_2xf32 b, __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredosum_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                               __epi_1xf64 b, __epi_1xi1 mask,
                                               unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredosum_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                               __epi_4xf32 b, __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredosum_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                               __epi_2xf64 b, __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredosum_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                               __epi_8xf32 b, __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredosum_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                               __epi_4xf64 b, __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredosum_16xf32_mask(__epi_16xf32 merge,
                                                 __epi_16xf32 a, __epi_16xf32 b,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredosum_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                               __epi_8xf64 b, __epi_8xi1 mask,
                                               unsigned long int gvl);
Masked operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_sum = current_sum + a[element]
     else
       result[element] = merge[element]

  result[0] = current_sum
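
Because floating-point addition is not associative, this ordering guarantee is observable. A small plain-C sketch showing how a left-to-right sum can differ from a reassociated one:

```c
#include <stddef.h>

/* Left-to-right sum seeded from b0, mirroring the ordered reduction.
   With doubles, 1e16 + 1.0 rounds back to 1e16 (1.0 is below one ulp
   at that magnitude), so ordering changes the result. */
static double sum_ordered(const double *a, double b0, size_t n) {
    double s = b0;
    for (size_t i = 0; i < n; ++i)
        s = s + a[i];
    return s;
}
```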

2.2.17. Floating-point unordered sum of vector

Description

Use these builtins to compute the sum of all the elements of a floating-point vector. The initial result of the sum is taken from the first element of the vector b.

These builtins compute the sum numerically as described, but the floating-point additions may be reassociated for efficiency, so the result can differ from that of the ordered sum.

Instruction
vfredsum.vs
Prototypes
__epi_2xf32 __builtin_epi_vfredsum_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredsum_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredsum_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredsum_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredsum_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredsum_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredsum_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredsum_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                         unsigned long int gvl);
Operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     current_sum = current_sum + a[element]

  result[0] = current_sum
Masked prototypes
__epi_2xf32 __builtin_epi_vfredsum_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                              __epi_2xf32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfredsum_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                              __epi_1xf64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfredsum_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                              __epi_4xf32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfredsum_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              __epi_2xf64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfredsum_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                              __epi_8xf32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfredsum_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              __epi_4xf64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfredsum_16xf32_mask(__epi_16xf32 merge,
                                                __epi_16xf32 a, __epi_16xf32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfredsum_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              __epi_8xf64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_sum = current_sum + a[element]
     else
       result[element] = merge[element]

  result[0] = current_sum

2.2.18. Elementwise floating-point sign copy

Description

Use these builtins to generate a vector of floating-point elements whose magnitudes are those of the elements of the first operand and whose signs are taken from the corresponding elements of the second operand.

Instruction
vfsgnj.vv
Prototypes
__epi_2xf32 __builtin_epi_vfsgnj_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                       unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsgnj_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                       unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsgnj_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                       unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsgnj_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                       unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsgnj_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                       unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsgnj_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                       unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsgnj_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                         unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsgnj_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                       unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fsgnj(a[element], b[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfsgnj_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                            __epi_2xf32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsgnj_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                            __epi_1xf64 b, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsgnj_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                            __epi_4xf32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsgnj_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                            __epi_2xf64 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsgnj_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                            __epi_8xf32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsgnj_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                            __epi_4xf64 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsgnj_16xf32_mask(__epi_16xf32 merge,
                                              __epi_16xf32 a, __epi_16xf32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsgnj_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                            __epi_8xf64 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fsgnj(a[element], b[element])
   else
     result[element] = merge[element]

2.2.19. Elementwise floating-point inverted sign copy

Description

Use these builtins to generate a vector of floating-point elements whose magnitudes are those of the elements of the first operand and whose signs are the opposite of the signs of the corresponding elements of the second operand.

This is useful to negate a vector of floating-point elements: pass the same vector as the two operands.

Instruction
vfsgnjn.vv
Prototypes
__epi_2xf32 __builtin_epi_vfsgnjn_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                        unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsgnjn_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                        unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsgnjn_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                        unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsgnjn_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                        unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsgnjn_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                        unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsgnjn_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                        unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsgnjn_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsgnjn_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fsgnjn(a[element], b[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfsgnjn_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                             __epi_2xf32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsgnjn_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                             __epi_1xf64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsgnjn_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                             __epi_4xf32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsgnjn_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                             __epi_2xf64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsgnjn_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                             __epi_8xf32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsgnjn_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                             __epi_4xf64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsgnjn_16xf32_mask(__epi_16xf32 merge,
                                               __epi_16xf32 a, __epi_16xf32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsgnjn_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                             __epi_8xf64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fsgnjn(a[element], b[element])
   else
     result[element] = merge[element]

2.2.20. Elementwise floating-point XOR sign

Description

Use these builtins to generate a vector of floating-point elements whose magnitudes are those of the elements of the first operand and whose signs are the exclusive or of their original signs and the signs of the corresponding elements of the second operand.

A positive floating-point element has a sign bit of 0. A negative floating-point element has a sign bit of 1.

This is useful to compute the absolute value of a vector of floating-point elements: pass the same vector as the two operands.

Instruction
vfsgnjx.vv
Prototypes
__epi_2xf32 __builtin_epi_vfsgnjx_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                        unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsgnjx_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                        unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsgnjx_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                        unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsgnjx_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                        unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsgnjx_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                        unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsgnjx_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                        unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsgnjx_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsgnjx_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fsgnjx(a[element], b[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfsgnjx_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                             __epi_2xf32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsgnjx_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                             __epi_1xf64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsgnjx_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                             __epi_4xf32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsgnjx_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                             __epi_2xf64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsgnjx_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                             __epi_8xf32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsgnjx_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                             __epi_4xf64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsgnjx_16xf32_mask(__epi_16xf32 merge,
                                               __epi_16xf32 a, __epi_16xf32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsgnjx_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                             __epi_8xf64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fsgnjx(a[element], b[element])
   else
     result[element] = merge[element]

2.2.21. Elementwise floating-point square-root

Description

Use these builtins to compute the elementwise square root of a given floating-point vector.

Instruction
vfsqrt.v
Prototypes
__epi_2xf32 __builtin_epi_vfsqrt_2xf32(__epi_2xf32 a, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsqrt_1xf64(__epi_1xf64 a, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsqrt_4xf32(__epi_4xf32 a, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsqrt_2xf64(__epi_2xf64 a, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsqrt_8xf32(__epi_8xf32 a, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsqrt_4xf64(__epi_4xf64 a, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsqrt_16xf32(__epi_16xf32 a, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsqrt_8xf64(__epi_8xf64 a, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = sqrt(a[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfsqrt_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                            __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsqrt_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                            __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsqrt_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsqrt_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                            __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsqrt_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsqrt_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsqrt_16xf32_mask(__epi_16xf32 merge,
                                              __epi_16xf32 a, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsqrt_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = sqrt(a[element])
   else
     result[element] = merge[element]

2.2.22. Elementwise floating-point subtraction

Description

Use these builtins to do an elementwise subtraction of two floating-point vectors.

Instruction
vfsub.vv
Prototypes
__epi_2xf32 __builtin_epi_vfsub_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsub_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsub_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsub_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsub_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsub_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsub_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsub_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] - b[element]
Masked prototypes
__epi_2xf32 __builtin_epi_vfsub_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                           __epi_2xf32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfsub_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                           __epi_1xf64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfsub_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                           __epi_4xf32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfsub_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                           __epi_2xf64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfsub_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                           __epi_8xf32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfsub_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                           __epi_4xf64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfsub_16xf32_mask(__epi_16xf32 merge, __epi_16xf32 a,
                                             __epi_16xf32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfsub_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                           __epi_8xf64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] - b[element]
   else
     result[element] = merge[element]

2.2.23. Elementwise widening floating-point addition

Description

Use these builtins to do an elementwise addition of two floating-point vectors.

Before doing the addition, the elements of the two vectors are widened to floating-point values with twice the number of bits as the original elements.

Instruction
vfwadd.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwadd_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                       unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwadd_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                       unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwadd_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                       unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwadd_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = widen_fp(a[element]) + widen_fp(b[element])
Masked prototypes
__epi_2xf64 __builtin_epi_vfwadd_2xf64_mask(__epi_2xf64 merge, __epi_2xf32 a,
                                            __epi_2xf32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwadd_4xf64_mask(__epi_4xf64 merge, __epi_4xf32 a,
                                            __epi_4xf32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwadd_8xf64_mask(__epi_8xf64 merge, __epi_8xf32 a,
                                            __epi_8xf32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwadd_16xf64_mask(__epi_16xf64 merge,
                                              __epi_16xf32 a, __epi_16xf32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = widen_fp(a[element]) + widen_fp(b[element])
   else
     result[element] = merge[element]

2.2.24. Elementwise widening floating-point addition (second operand)

Description

Use these builtins to do an elementwise addition of two floating-point vectors.

Before doing the addition, the elements of the second vector operand are widened to floating-point values with twice the number of bits as the original elements.

Instruction
vfwadd.wv
Prototypes
__epi_2xf64 __builtin_epi_vfwadd_w_2xf64(__epi_2xf64 a, __epi_2xf32 b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwadd_w_4xf64(__epi_4xf64 a, __epi_4xf32 b,
                                         unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwadd_w_8xf64(__epi_8xf64 a, __epi_8xf32 b,
                                         unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwadd_w_16xf64(__epi_16xf64 a, __epi_16xf32 b,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] + widen_fp(b[element])
Masked prototypes
__epi_2xf64 __builtin_epi_vfwadd_w_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              __epi_2xf32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwadd_w_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              __epi_4xf32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwadd_w_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              __epi_8xf32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwadd_w_16xf64_mask(__epi_16xf64 merge,
                                                __epi_16xf64 a, __epi_16xf32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] + widen_fp(b[element])
   else
     result[element] = merge[element]

2.2.25. Floating-point widening multiply and add

Description

Use these builtins to do an elementwise floating-point multiply and add.

Before the operation, the elements of the first two vector operands are widened to floating-point values with twice the number of bits as the original elements. The third operand already has the wider element type.

Instruction
vfwmacc.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwmacc_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                        __epi_2xf64 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwmacc_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                        __epi_4xf64 c, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwmacc_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                        __epi_8xf64 c, unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwmacc_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                          __epi_16xf64 c,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = widen_fp(a[element]) * widen_fp(b[element]) + c[element]
Masked prototypes
__epi_2xf64 __builtin_epi_vfwmacc_2xf64_mask(__epi_2xf32 a, __epi_2xf32 b,
                                             __epi_2xf64 c, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwmacc_4xf64_mask(__epi_4xf32 a, __epi_4xf32 b,
                                             __epi_4xf64 c, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwmacc_8xf64_mask(__epi_8xf32 a, __epi_8xf32 b,
                                             __epi_8xf64 c, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwmacc_16xf64_mask(__epi_16xf32 a, __epi_16xf32 b,
                                               __epi_16xf64 c, __epi_16xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = widen_fp(a[element]) * widen_fp(b[element]) + c[element]
   else
     result[element] = c[element]

2.2.26. Floating-point widening multiply and subtract

Description

Use these builtins to do an elementwise floating-point multiply and subtract.

Before the operation, the elements of the first two vector operands are widened to floating-point values with twice the number of bits as the original elements. The third operand already has the wider element type.

Instruction
vfwmsac.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwmsac_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                        __epi_2xf64 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwmsac_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                        __epi_4xf64 c, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwmsac_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                        __epi_8xf64 c, unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwmsac_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                          __epi_16xf64 c,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = widen_fp(a[element]) * widen_fp(b[element]) - c[element]
Masked prototypes
__epi_2xf64 __builtin_epi_vfwmsac_2xf64_mask(__epi_2xf32 a, __epi_2xf32 b,
                                             __epi_2xf64 c, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwmsac_4xf64_mask(__epi_4xf32 a, __epi_4xf32 b,
                                             __epi_4xf64 c, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwmsac_8xf64_mask(__epi_8xf32 a, __epi_8xf32 b,
                                             __epi_8xf64 c, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwmsac_16xf64_mask(__epi_16xf32 a, __epi_16xf32 b,
                                               __epi_16xf64 c, __epi_16xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = widen_fp(a[element]) * widen_fp(b[element]) - c[element]
   else
     result[element] = c[element]

2.2.27. Elementwise widening floating-point multiplication

Description

Use these builtins to do an elementwise multiplication of two floating-point vectors. Before doing the multiplication, the elements of the two vector operands are widened to floating-point values with twice the number of bits as the original elements.

Instruction
vfwmul.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwmul_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                       unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwmul_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                       unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwmul_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                       unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwmul_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = widen_fp(a[element]) * widen_fp(b[element])
Masked prototypes
__epi_2xf64 __builtin_epi_vfwmul_2xf64_mask(__epi_2xf64 merge, __epi_2xf32 a,
                                            __epi_2xf32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwmul_4xf64_mask(__epi_4xf64 merge, __epi_4xf32 a,
                                            __epi_4xf32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwmul_8xf64_mask(__epi_8xf64 merge, __epi_8xf32 a,
                                            __epi_8xf32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwmul_16xf64_mask(__epi_16xf64 merge,
                                              __epi_16xf32 a, __epi_16xf32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = widen_fp(a[element]) * widen_fp(b[element])
   else
     result[element] = merge[element]

2.2.28. Floating-point widening negate multiply and add

Description

Use these builtins to do an elementwise floating-point negate multiply and add.

Before the operation, the elements of the first two vector operands are widened to floating-point values with twice the number of bits as the original elements. The third operand already has the wider element type.

Instruction
vfwnmacc.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwnmacc_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                         __epi_2xf64 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwnmacc_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                         __epi_4xf64 c, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwnmacc_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                         __epi_8xf64 c, unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwnmacc_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                           __epi_16xf64 c,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = -( widen_fp(a[element]) * widen_fp(b[element]) ) - c[element]
Masked prototypes
__epi_2xf64 __builtin_epi_vfwnmacc_2xf64_mask(__epi_2xf32 a, __epi_2xf32 b,
                                              __epi_2xf64 c, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwnmacc_4xf64_mask(__epi_4xf32 a, __epi_4xf32 b,
                                              __epi_4xf64 c, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwnmacc_8xf64_mask(__epi_8xf32 a, __epi_8xf32 b,
                                              __epi_8xf64 c, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwnmacc_16xf64_mask(__epi_16xf32 a, __epi_16xf32 b,
                                                __epi_16xf64 c,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = -( widen_fp(a[element]) * widen_fp(b[element]) ) - c[element]
   else
     result[element] = c[element]

2.2.29. Floating-point widening negate multiply and subtract

Description

Use these builtins to do an elementwise widening floating-point negate multiply and subtract.

The elements of the two multiplicand vector operands are widened to floating-point values with twice the number of bits as the original elements. The third operand already has the widened element type.

Instruction
vfwnmsac.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwnmsac_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                         __epi_2xf64 c, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwnmsac_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                         __epi_4xf64 c, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwnmsac_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                         __epi_8xf64 c, unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwnmsac_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                           __epi_16xf64 c,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = -( fp_widen(a[element]) * fp_widen(b[element]) ) + c[element]
Masked prototypes
__epi_2xf64 __builtin_epi_vfwnmsac_2xf64_mask(__epi_2xf32 a, __epi_2xf32 b,
                                              __epi_2xf64 c, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwnmsac_4xf64_mask(__epi_4xf32 a, __epi_4xf32 b,
                                              __epi_4xf64 c, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwnmsac_8xf64_mask(__epi_8xf32 a, __epi_8xf32 b,
                                              __epi_8xf64 c, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwnmsac_16xf64_mask(__epi_16xf32 a, __epi_16xf32 b,
                                                __epi_16xf64 c,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = -( fp_widen(a[element]) * fp_widen(b[element]) ) + c[element]
    else
      result[element] = c[element]
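The masked loop can also be modelled in scalar C. The helper name is illustrative, not part of the EPI API; inactive elements keep the value of the wide accumulator.

```c
#include <assert.h>

/* Scalar model of the masked vfwnmsac loop: active elements become
   -(widen(a) * widen(b)) + c, inactive elements keep the value of the
   wide accumulator c. */
static void fwnmsac_masked(const float *a, const float *b, double *c,
                           const int *mask, unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        if (mask[i])
            c[i] = -((double)a[i] * (double)b[i]) + c[i];
}
```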

2.2.30. Floating-point ordered sum of vector

Description

Use these builtins to compute the sum of all the elements of a floating-point vector. The initial result of the sum is taken from the first element of the vector b.

This operation preserves the order of the floating-point addition as described.

The elements of the first vector operand are widened to floating-point values with twice the number of bits as the original elements. The second operand already has the widened element type.

Instruction
vfwredosum.vs
Prototypes
__epi_2xf64 __builtin_epi_vfwredosum_2xf64(__epi_2xf32 a, __epi_2xf64 b,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwredosum_4xf64(__epi_4xf32 a, __epi_4xf64 b,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwredosum_8xf64(__epi_8xf32 a, __epi_8xf64 b,
                                           unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwredosum_16xf64(__epi_16xf32 a, __epi_16xf64 b,
                                             unsigned long int gvl);
Operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     current_sum = current_sum + fp_widen(a[element])

  result[0] = current_sum
Masked prototypes
__epi_2xf64 __builtin_epi_vfwredosum_2xf64_mask(__epi_2xf64 merge,
                                                __epi_2xf32 a, __epi_2xf64 b,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwredosum_4xf64_mask(__epi_4xf64 merge,
                                                __epi_4xf32 a, __epi_4xf64 b,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwredosum_8xf64_mask(__epi_8xf64 merge,
                                                __epi_8xf32 a, __epi_8xf64 b,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwredosum_16xf64_mask(__epi_16xf64 merge,
                                                  __epi_16xf32 a,
                                                  __epi_16xf64 b,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
Masked operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_sum = current_sum + fp_widen(a[element])

  result[0] = current_sum
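A scalar sketch of the unmasked ordered reduction (the helper name is illustrative, not part of the EPI API):

```c
#include <assert.h>

/* Scalar model of vfwredosum: start from the wide element b[0] and add
   the widened elements of a strictly in increasing element order. */
static double fwredosum(const float *a, double b0, unsigned long gvl) {
    double sum = b0;
    for (unsigned long i = 0; i < gvl; ++i)
        sum += (double)a[i];
    return sum;
}
```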

2.2.31. Floating-point unordered sum of vector

Description

Use these builtins to compute the sum of all the elements of a floating-point vector. The initial result of the sum is taken from the first element of the vector b.

This builtin may compute the sum in any valid sequential order.

The elements of the first vector operand are widened to floating-point values with twice the number of bits as the original elements. The second operand already has the widened element type.

Instruction
vfwredsum.vs
Prototypes
__epi_2xf64 __builtin_epi_vfwredsum_2xf64(__epi_2xf32 a, __epi_2xf64 b,
                                          unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwredsum_4xf64(__epi_4xf32 a, __epi_4xf64 b,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwredsum_8xf64(__epi_8xf32 a, __epi_8xf64 b,
                                          unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwredsum_16xf64(__epi_16xf32 a, __epi_16xf64 b,
                                            unsigned long int gvl);
Operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     current_sum = current_sum + fp_widen(a[element])

  result[0] = current_sum
Masked prototypes
__epi_2xf64 __builtin_epi_vfwredsum_2xf64_mask(__epi_2xf64 merge, __epi_2xf32 a,
                                               __epi_2xf64 b, __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwredsum_4xf64_mask(__epi_4xf64 merge, __epi_4xf32 a,
                                               __epi_4xf64 b, __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwredsum_8xf64_mask(__epi_8xf64 merge, __epi_8xf32 a,
                                               __epi_8xf64 b, __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwredsum_16xf64_mask(__epi_16xf64 merge,
                                                 __epi_16xf32 a, __epi_16xf64 b,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
Masked operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_sum = current_sum + fp_widen(a[element])

  result[0] = current_sum
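Because floating-point addition is not associative, a sum computed in a different sequential order may round differently from the ordered reduction. A self-contained illustration in plain C, independent of the builtins:

```c
#include <assert.h>

/* With double precision, ulp(1e16) == 2.0, so adding 1.0 to 1e16 is
   lost to rounding, while adding the two 1.0s together first is exact. */
static double sum_left_to_right(const double *x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += x[i];
    return s;
}
```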

2.2.32. Elementwise widening floating-point subtraction

Description

Use these builtins to do an elementwise subtraction of two floating-point vectors.

Before doing the subtraction, the elements of the two vectors are widened to floating-point values with twice the number of bits as the original elements.

Instruction
vfwsub.vv
Prototypes
__epi_2xf64 __builtin_epi_vfwsub_2xf64(__epi_2xf32 a, __epi_2xf32 b,
                                       unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwsub_4xf64(__epi_4xf32 a, __epi_4xf32 b,
                                       unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwsub_8xf64(__epi_8xf32 a, __epi_8xf32 b,
                                       unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwsub_16xf64(__epi_16xf32 a, __epi_16xf32 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_widen(a[element]) - fp_widen(b[element])
Masked prototypes
__epi_2xf64 __builtin_epi_vfwsub_2xf64_mask(__epi_2xf64 merge, __epi_2xf32 a,
                                            __epi_2xf32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwsub_4xf64_mask(__epi_4xf64 merge, __epi_4xf32 a,
                                            __epi_4xf32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwsub_8xf64_mask(__epi_8xf64 merge, __epi_8xf32 a,
                                            __epi_8xf32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwsub_16xf64_mask(__epi_16xf64 merge,
                                              __epi_16xf32 a, __epi_16xf32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_widen(a[element]) - fp_widen(b[element])
   else
     result[element] = merge[element]
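One element of the unmasked operation modelled in scalar C (the helper name is illustrative). Widening before the subtraction keeps low-order bits that a single-precision subtraction could round away:

```c
#include <assert.h>

/* Scalar model of one element of vfwsub.vv: both float operands are
   widened to double before subtracting. The second case is exact in
   double, whereas the float subtraction 1.0f - 0x1p-30f would round
   back to 1.0f. */
static double fwsub_element(float a, float b) {
    return (double)a - (double)b;
}
```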

2.2.33. Elementwise widening floating-point subtraction (second operand)

Description

Use these builtins to do an elementwise subtraction of two floating-point vectors.

Before doing the subtraction, the elements of the second vector operand are widened to floating-point values with twice the number of bits as the original elements.

Instruction
vfwsub.wv
Prototypes
__epi_2xf64 __builtin_epi_vfwsub_w_2xf64(__epi_2xf64 a, __epi_2xf32 b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwsub_w_4xf64(__epi_4xf64 a, __epi_4xf32 b,
                                         unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwsub_w_8xf64(__epi_8xf64 a, __epi_8xf32 b,
                                         unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwsub_w_16xf64(__epi_16xf64 a, __epi_16xf32 b,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] - fp_widen(b[element])
Masked prototypes
__epi_2xf64 __builtin_epi_vfwsub_w_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              __epi_2xf32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwsub_w_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              __epi_4xf32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwsub_w_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              __epi_8xf32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwsub_w_16xf64_mask(__epi_16xf64 merge,
                                                __epi_16xf64 a, __epi_16xf32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] - fp_widen(b[element])
   else
     result[element] = merge[element]
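The only difference from vfwsub.vv is that the first operand already has the wide element type. A scalar sketch (the helper name is illustrative, not part of the EPI API):

```c
#include <assert.h>

/* Scalar model of one element of vfwsub.wv: a is already double-width,
   so only b is widened before the subtraction. */
static double fwsub_w_element(double a, float b) {
    return a - (double)b;
}
```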

2.3. Floating-point relational operations

2.3.1. Compare elementwise two floating-point vectors for equality

Description

Use these builtins to compare two floating-point vectors for equality.

The result is a mask that enables the element if the floating-point comparison holds for that element.

Instruction
vmfeq.vv
Prototypes
__epi_2xi1 __builtin_epi_vmfeq_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfeq_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfeq_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfeq_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                     unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfeq_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfeq_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfeq_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfeq_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] == b[element]
Masked prototypes
__epi_2xi1 __builtin_epi_vmfeq_2xf32_mask(__epi_2xi1 merge, __epi_2xf32 a,
                                          __epi_2xf32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfeq_1xf64_mask(__epi_1xi1 merge, __epi_1xf64 a,
                                          __epi_1xf64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfeq_4xf32_mask(__epi_4xi1 merge, __epi_4xf32 a,
                                          __epi_4xf32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfeq_2xf64_mask(__epi_2xi1 merge, __epi_2xf64 a,
                                          __epi_2xf64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfeq_8xf32_mask(__epi_8xi1 merge, __epi_8xf32 a,
                                          __epi_8xf32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfeq_4xf64_mask(__epi_4xi1 merge, __epi_4xf64 a,
                                          __epi_4xf64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfeq_16xf32_mask(__epi_16xi1 merge, __epi_16xf32 a,
                                            __epi_16xf32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfeq_8xf64_mask(__epi_8xi1 merge, __epi_8xf64 a,
                                          __epi_8xf64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] == b[element]
   else
     result[element] = merge[element]
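The comparison follows IEEE 754 semantics, so elements that are NaN never compare equal, not even to themselves. A plain C check of that rule:

```c
#include <assert.h>
#include <math.h>

/* IEEE 754 equality: any comparison involving a NaN is unordered, so
   == yields false and the corresponding mask bit is 0. */
static int feq_element(double a, double b) {
    return a == b;
}
```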

2.3.2. Compare elementwise two floating-point vectors for greater-or-equal

Description

Use these builtins to evaluate if the elements of the first floating-point vector are greater than or equal to the corresponding elements of the second floating-point vector.

The result is a mask that enables the element if the floating-point comparison holds for that element.

Instruction
vmfge.vf
Prototypes
__epi_2xi1 __builtin_epi_vmfge_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfge_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfge_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfge_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                     unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfge_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfge_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfge_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfge_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] >= b[element]
Masked prototypes
__epi_2xi1 __builtin_epi_vmfge_2xf32_mask(__epi_2xi1 merge, __epi_2xf32 a,
                                          __epi_2xf32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfge_1xf64_mask(__epi_1xi1 merge, __epi_1xf64 a,
                                          __epi_1xf64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfge_4xf32_mask(__epi_4xi1 merge, __epi_4xf32 a,
                                          __epi_4xf32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfge_2xf64_mask(__epi_2xi1 merge, __epi_2xf64 a,
                                          __epi_2xf64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfge_8xf32_mask(__epi_8xi1 merge, __epi_8xf32 a,
                                          __epi_8xf32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfge_4xf64_mask(__epi_4xi1 merge, __epi_4xf64 a,
                                          __epi_4xf64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfge_16xf32_mask(__epi_16xi1 merge, __epi_16xf32 a,
                                            __epi_16xf32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfge_8xf64_mask(__epi_8xi1 merge, __epi_8xf64 a,
                                          __epi_8xf64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] >= b[element]
   else
     result[element] = merge[element]
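Like the other relational compares, the result for an element involving NaN is false under IEEE 754, so its mask bit is cleared. A scalar check:

```c
#include <assert.h>
#include <math.h>

/* Ordered comparisons are false whenever an operand is NaN, so the
   mask bit for a NaN element is always 0. */
static int fge_element(double a, double b) {
    return a >= b;
}
```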

2.3.3. Compare elementwise two floating-point vectors for greater-than

Description

Use these builtins to evaluate if the elements of the first floating-point vector are greater than, but not equal to, the corresponding elements of the second floating-point vector.

The result is a mask that enables the element if the floating-point comparison holds for that element.

Instruction
vmfgt.vf
Prototypes
__epi_2xi1 __builtin_epi_vmfgt_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfgt_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfgt_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfgt_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                     unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfgt_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfgt_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfgt_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfgt_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] > b[element]
Masked prototypes
__epi_2xi1 __builtin_epi_vmfgt_2xf32_mask(__epi_2xi1 merge, __epi_2xf32 a,
                                          __epi_2xf32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfgt_1xf64_mask(__epi_1xi1 merge, __epi_1xf64 a,
                                          __epi_1xf64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfgt_4xf32_mask(__epi_4xi1 merge, __epi_4xf32 a,
                                          __epi_4xf32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfgt_2xf64_mask(__epi_2xi1 merge, __epi_2xf64 a,
                                          __epi_2xf64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfgt_8xf32_mask(__epi_8xi1 merge, __epi_8xf32 a,
                                          __epi_8xf32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfgt_4xf64_mask(__epi_4xi1 merge, __epi_4xf64 a,
                                          __epi_4xf64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfgt_16xf32_mask(__epi_16xi1 merge, __epi_16xf32 a,
                                            __epi_16xf32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfgt_8xf64_mask(__epi_8xi1 merge, __epi_8xf64 a,
                                          __epi_8xf64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] > b[element]
   else
     result[element] = merge[element]

2.3.4. Compare elementwise two floating-point vectors for lower-or-equal

Description

Use these builtins to evaluate if the elements of the first floating-point vector are lower than or equal to the corresponding elements of the second floating-point vector.

The result is a mask that enables the element if the floating-point comparison holds for that element.

Instruction
vmfle.vv
Prototypes
__epi_2xi1 __builtin_epi_vmfle_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfle_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfle_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfle_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                     unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfle_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfle_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfle_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfle_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] <= b[element]
Masked prototypes
__epi_2xi1 __builtin_epi_vmfle_2xf32_mask(__epi_2xi1 merge, __epi_2xf32 a,
                                          __epi_2xf32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfle_1xf64_mask(__epi_1xi1 merge, __epi_1xf64 a,
                                          __epi_1xf64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfle_4xf32_mask(__epi_4xi1 merge, __epi_4xf32 a,
                                          __epi_4xf32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfle_2xf64_mask(__epi_2xi1 merge, __epi_2xf64 a,
                                          __epi_2xf64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfle_8xf32_mask(__epi_8xi1 merge, __epi_8xf32 a,
                                          __epi_8xf32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfle_4xf64_mask(__epi_4xi1 merge, __epi_4xf64 a,
                                          __epi_4xf64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfle_16xf32_mask(__epi_16xi1 merge, __epi_16xf32 a,
                                            __epi_16xf32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfle_8xf64_mask(__epi_8xi1 merge, __epi_8xf64 a,
                                          __epi_8xf64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] <= b[element]
   else
     result[element] = merge[element]

2.3.5. Compare elementwise two floating-point vectors for lower-than

Description

Use these builtins to evaluate if the elements of the first floating-point vector are lower than, but not equal to, the corresponding elements of the second floating-point vector.

The result is a mask that enables the element if the floating-point comparison holds for that element.

Instruction
vmflt.vv
Prototypes
__epi_2xi1 __builtin_epi_vmflt_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmflt_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmflt_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmflt_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                     unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmflt_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmflt_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmflt_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmflt_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] < b[element]
Masked prototypes
__epi_2xi1 __builtin_epi_vmflt_2xf32_mask(__epi_2xi1 merge, __epi_2xf32 a,
                                          __epi_2xf32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmflt_1xf64_mask(__epi_1xi1 merge, __epi_1xf64 a,
                                          __epi_1xf64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmflt_4xf32_mask(__epi_4xi1 merge, __epi_4xf32 a,
                                          __epi_4xf32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmflt_2xf64_mask(__epi_2xi1 merge, __epi_2xf64 a,
                                          __epi_2xf64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmflt_8xf32_mask(__epi_8xi1 merge, __epi_8xf32 a,
                                          __epi_8xf32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmflt_4xf64_mask(__epi_4xi1 merge, __epi_4xf64 a,
                                          __epi_4xf64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmflt_16xf32_mask(__epi_16xi1 merge, __epi_16xf32 a,
                                            __epi_16xf32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmflt_8xf64_mask(__epi_8xi1 merge, __epi_8xf64 a,
                                          __epi_8xf64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] < b[element]
   else
     result[element] = merge[element]

2.3.6. Compare elementwise two floating-point vectors for inequality

Description

Use these builtins to compare two floating-point vectors for inequality.

The result is a mask that enables the element if the floating-point comparison holds for that element.

Instruction
vmfne.vv
Prototypes
__epi_2xi1 __builtin_epi_vmfne_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfne_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfne_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfne_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                     unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfne_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfne_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfne_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfne_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] != b[element]
Masked prototypes
__epi_2xi1 __builtin_epi_vmfne_2xf32_mask(__epi_2xi1 merge, __epi_2xf32 a,
                                          __epi_2xf32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmfne_1xf64_mask(__epi_1xi1 merge, __epi_1xf64 a,
                                          __epi_1xf64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfne_4xf32_mask(__epi_4xi1 merge, __epi_4xf32 a,
                                          __epi_4xf32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmfne_2xf64_mask(__epi_2xi1 merge, __epi_2xf64 a,
                                          __epi_2xf64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfne_8xf32_mask(__epi_8xi1 merge, __epi_8xf32 a,
                                          __epi_8xf32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmfne_4xf64_mask(__epi_4xi1 merge, __epi_4xf64 a,
                                          __epi_4xf64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmfne_16xf32_mask(__epi_16xi1 merge, __epi_16xf32 a,
                                            __epi_16xf32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmfne_8xf64_mask(__epi_8xi1 merge, __epi_8xf64 a,
                                          __epi_8xf64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] != b[element]
   else
     result[element] = merge[element]
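Since != is the negation of ==, NaN elements always satisfy this comparison:

```c
#include <assert.h>
#include <math.h>

/* IEEE 754 inequality is true for unordered operands, so a NaN element
   sets its mask bit: NaN != NaN holds. */
static int fne_element(double a, double b) {
    return a != b;
}
```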

2.4. Integer arithmetic operations

2.4.1. Elementwise addition with carry-in

Description

Use these builtins to compute the elementwise addition of two integer vectors and a carry-in.

Instruction
vadc.vvm
Prototypes
__epi_8xi8 __builtin_epi_vadc_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   __epi_8xi1 carry_in, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vadc_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     __epi_4xi1 carry_in,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vadc_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     __epi_2xi1 carry_in,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vadc_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     __epi_1xi1 carry_in,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vadc_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     __epi_16xi1 carry_in,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vadc_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     __epi_8xi1 carry_in,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vadc_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     __epi_4xi1 carry_in,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vadc_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     __epi_2xi1 carry_in,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vadc_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     __epi_32xi1 carry_in,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vadc_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       __epi_16xi1 carry_in,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vadc_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     __epi_8xi1 carry_in,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vadc_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     __epi_4xi1 carry_in,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vadc_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     __epi_64xi1 carry_in,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vadc_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       __epi_32xi1 carry_in,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vadc_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       __epi_16xi1 carry_in,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vadc_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     __epi_8xi1 carry_in,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] + b[element] + carry_in[element]
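
As a scalar sketch of the operation above on 8-bit elements (uint8_t models an i8 element; the function name is illustrative, not part of the EPI API), note that the sum wraps modulo 2^8 as usual for fixed-width integer arithmetic:

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of vadc on 8-bit elements: elementwise a + b + carry_in,
   with the modulo-256 wraparound of 8-bit arithmetic. */
static void vadc_8_ref(uint8_t *result, const uint8_t *a, const uint8_t *b,
                       const uint8_t *carry_in, size_t gvl) {
    for (size_t i = 0; i < gvl; ++i)
        result[i] = (uint8_t)(a[i] + b[i] + carry_in[i]);
}
```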

2.4.2. Elementwise integer addition

Description

Use these builtins to compute the elementwise addition of two integer vectors.

Instruction
vadd.vv
Prototypes
__epi_8xi8 __builtin_epi_vadd_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vadd_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vadd_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vadd_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vadd_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vadd_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vadd_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vadd_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vadd_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vadd_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vadd_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vadd_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vadd_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vadd_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vadd_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vadd_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] + b[element]
Masked prototypes
__epi_8xi8 __builtin_epi_vadd_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vadd_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vadd_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vadd_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vadd_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vadd_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vadd_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vadd_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vadd_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vadd_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vadd_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vadd_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vadd_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vadd_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vadd_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vadd_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] + b[element]
   else
     result[element] = merge[element]
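
A typical use of the masked form is a branchless "add y to x only where a condition holds", with the destination itself as the merge operand. A scalar sketch of that pattern (names are illustrative, not part of the EPI API):

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of a masked vadd with merge = x: elements where cond[i] is
   set are updated to x[i] + y[i]; the rest keep their old value. */
static void masked_add_ref(int32_t *x, const int32_t *y, const int *cond,
                           size_t gvl) {
    for (size_t i = 0; i < gvl; ++i)
        x[i] = cond[i] ? (int32_t)(x[i] + y[i]) : x[i];
}
```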

2.4.3. Elementwise integer division

Description

Use these builtins to compute the elementwise signed division of two integer vectors.

Instruction
vdiv.vv
Prototypes
__epi_8xi8 __builtin_epi_vdiv_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vdiv_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vdiv_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vdiv_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vdiv_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vdiv_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vdiv_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vdiv_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vdiv_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vdiv_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vdiv_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vdiv_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vdiv_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vdiv_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vdiv_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vdiv_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] / b[element]
Masked prototypes
__epi_8xi8 __builtin_epi_vdiv_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vdiv_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vdiv_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vdiv_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vdiv_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vdiv_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vdiv_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vdiv_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vdiv_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vdiv_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vdiv_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vdiv_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vdiv_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vdiv_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vdiv_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vdiv_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] / b[element]
   else
     result[element] = merge[element]
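
Unlike the C `/` operator, RISC-V integer division never traps: division by zero yields -1 (all bits set) and the overflowing INT_MIN / -1 yields the dividend. Assuming the vector instruction inherits these scalar M-extension rules, the per-element semantics can be modelled as (the function name is illustrative):

```c
#include <stdint.h>

/* Scalar model of one signed vdiv element, with the RISC-V fixed results
   for the two corner cases that are undefined behaviour in plain C. */
static int64_t sdiv_ref(int64_t a, int64_t b) {
    if (b == 0)
        return -1;            /* division by zero -> all ones */
    if (a == INT64_MIN && b == -1)
        return INT64_MIN;     /* signed overflow -> dividend */
    return a / b;             /* otherwise truncate toward zero, as in C */
}
```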

2.4.4. Elementwise unsigned integer division

Description

Use these builtins to compute the elementwise unsigned division of two integer vectors.

Instruction
vdivu.vv
Prototypes
__epi_8xi8 __builtin_epi_vdivu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vdivu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vdivu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vdivu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vdivu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vdivu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vdivu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vdivu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vdivu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vdivu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vdivu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vdivu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vdivu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vdivu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vdivu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vdivu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = divu(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vdivu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vdivu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vdivu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vdivu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vdivu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vdivu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vdivu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vdivu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vdivu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vdivu_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vdivu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vdivu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vdivu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vdivu_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vdivu_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vdivu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = divu(a[element], b[element])
   else
     result[element] = merge[element]
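
The `divu` in the pseudocode above is an unsigned quotient. As with the signed form, RISC-V division does not trap: an unsigned division by zero yields an all-ones result. A scalar model of one element, assuming the vector instruction follows the scalar M-extension rule (the function name is illustrative):

```c
#include <stdint.h>

/* Scalar model of one vdivu element: unsigned quotient, with the RISC-V
   fixed all-ones result for division by zero. */
static uint64_t udiv_ref(uint64_t a, uint64_t b) {
    return b == 0 ? UINT64_MAX : a / b;
}
```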

2.4.5. Elementwise carry-out of addition

Description

Use these builtins to compute the carry-out of an elementwise addition of two integer vectors.

Instruction
vmadc.vv
Prototypes
__epi_8xi1 __builtin_epi_vmadc_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmadc_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmadc_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmadc_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmadc_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmadc_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmadc_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmadc_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmadc_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmadc_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmadc_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmadc_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmadc_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmadc_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmadc_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmadc_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = carry_out(a[element] + b[element])
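
The `carry_out` in the pseudocode is the bit carried out of the top of the element: it is 1 exactly when the unsigned sum wraps around, which makes these builtins a vector unsigned-overflow test. A scalar model of one 8-bit element (illustrative name):

```c
#include <stdint.h>

/* Scalar model of one vmadc element on 8-bit data: the carry out of a + b
   is 1 exactly when the wrapped sum is smaller than an operand. */
static int carry_out_8(uint8_t a, uint8_t b) {
    return (uint8_t)(a + b) < a;
}
```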

2.4.6. Elementwise carry-out of addition with given carry-in

Description

Use these builtins to compute the carry-out of the elementwise addition of two integer vectors and a carry-in.

This operation is useful to compute wider-than-ELEN integer vector addition.

Instruction
vmadc.vvm
Prototypes
__epi_8xi1 __builtin_epi_vmadc_carry_in_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                             __epi_8xi1 carry_in,
                                             unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmadc_carry_in_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                              __epi_4xi1 carry_in,
                                              unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmadc_carry_in_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                              __epi_2xi1 carry_in,
                                              unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmadc_carry_in_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                              __epi_1xi1 carry_in,
                                              unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmadc_carry_in_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                               __epi_16xi1 carry_in,
                                               unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmadc_carry_in_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                              __epi_8xi1 carry_in,
                                              unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmadc_carry_in_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                              __epi_4xi1 carry_in,
                                              unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmadc_carry_in_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                              __epi_2xi1 carry_in,
                                              unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmadc_carry_in_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                               __epi_32xi1 carry_in,
                                               unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmadc_carry_in_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                                __epi_16xi1 carry_in,
                                                unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmadc_carry_in_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                              __epi_8xi1 carry_in,
                                              unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmadc_carry_in_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                              __epi_4xi1 carry_in,
                                              unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmadc_carry_in_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                               __epi_64xi1 carry_in,
                                               unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmadc_carry_in_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                                __epi_32xi1 carry_in,
                                                unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmadc_carry_in_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                                __epi_16xi1 carry_in,
                                                unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmadc_carry_in_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                              __epi_8xi1 carry_in,
                                              unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = carry_out(a[element] + b[element] + carry_in[element])
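
Pairing vadc with vmadc.vvm gives multi-word addition: the carry-out of the low limbs feeds the carry-in of the high limbs. A scalar sketch of one 128-bit element built from two 64-bit limbs (the limb layout and names are illustrative):

```c
#include <stdint.h>

/* One 128-bit addition from 64-bit limbs, mirroring a vadc/vmadc pair:
   lo     = a_lo + b_lo              (vadc with zero carry-in)
   carry  = carry_out(a_lo + b_lo)   (vmadc of the low limbs)
   hi     = a_hi + b_hi + carry      (vadc with that carry-in)          */
static void add128(uint64_t a_lo, uint64_t a_hi, uint64_t b_lo, uint64_t b_hi,
                   uint64_t *r_lo, uint64_t *r_hi) {
    uint64_t lo = a_lo + b_lo;
    uint64_t carry = lo < a_lo;   /* wrapped sum is smaller than an operand */
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;
}
```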

2.4.7. Elementwise integer maximum

Description

Use these builtins to compute the elementwise maximum of two signed integer vectors.

Instruction
vmax.vv
Prototypes
__epi_8xi8 __builtin_epi_vmax_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmax_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmax_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmax_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmax_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmax_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmax_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmax_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmax_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmax_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmax_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmax_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmax_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmax_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmax_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmax_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = max(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vmax_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmax_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmax_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmax_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmax_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmax_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmax_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmax_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmax_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmax_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmax_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmax_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmax_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmax_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmax_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmax_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = max(a[element], b[element])
   else
     result[element] = merge[element]
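The merge semantics shared by all masked builtins in this section can be sketched in portable scalar C. This is an illustration of the pseudocode above, not EPI code, and the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar sketch of the masked vmax semantics: where the mask is set the
   result is max(a, b); elsewhere the element is taken from 'merge'. */
static void vmax_mask_ref(int64_t *result, const int64_t *merge,
                          const int64_t *a, const int64_t *b,
                          const _Bool *mask, unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; ++i)
    result[i] = mask[i] ? (a[i] > b[i] ? a[i] : b[i]) : merge[i];
}
```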

2.4.8. Elementwise unsigned integer maximum

Description

Use these builtins to compute the elementwise maximum of two unsigned integer vectors.

Instruction
vmaxu.vv
Prototypes
__epi_8xi8 __builtin_epi_vmaxu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmaxu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmaxu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmaxu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmaxu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmaxu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmaxu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmaxu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmaxu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmaxu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmaxu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmaxu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmaxu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmaxu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmaxu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmaxu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = maxu(a[element], b[element])
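The difference between max and maxu is only the interpretation of the element bits. A scalar C sketch of this distinction (illustrative, not the builtin):

```c
#include <assert.h>
#include <stdint.h>

/* maxu compares the element bit patterns as unsigned integers: for 8-bit
   elements, 0xFF is -1 under signed max but 255 under maxu, so the two
   operations can select different operands. */
static int8_t  max8(int8_t a, int8_t b)    { return a > b ? a : b; }
static uint8_t maxu8(uint8_t a, uint8_t b) { return a > b ? a : b; }
```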
Masked prototypes
__epi_8xi8 __builtin_epi_vmaxu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmaxu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmaxu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmaxu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmaxu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmaxu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmaxu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmaxu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmaxu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmaxu_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmaxu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmaxu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmaxu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmaxu_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmaxu_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmaxu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = maxu(a[element], b[element])
   else
     result[element] = merge[element]

2.4.9. Elementwise integer minimum

Description

Use these builtins to compute the elementwise minimum of two integer vectors.

Instruction
vmin.vv
Prototypes
__epi_8xi8 __builtin_epi_vmin_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmin_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmin_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmin_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmin_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmin_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmin_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmin_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmin_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmin_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmin_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmin_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmin_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmin_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmin_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmin_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = min(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vmin_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmin_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmin_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmin_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmin_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmin_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmin_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmin_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmin_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmin_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmin_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmin_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmin_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmin_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmin_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmin_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = min(a[element], b[element])
   else
     result[element] = merge[element]

2.4.10. Elementwise unsigned integer minimum

Description

Use these builtins to compute the elementwise minimum of two unsigned integer vectors.

Instruction
vminu.vv
Prototypes
__epi_8xi8 __builtin_epi_vminu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vminu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vminu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vminu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vminu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vminu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vminu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vminu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vminu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vminu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vminu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vminu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vminu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vminu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vminu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vminu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = minu(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vminu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vminu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vminu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vminu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vminu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vminu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vminu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vminu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vminu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vminu_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vminu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vminu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vminu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vminu_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vminu_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vminu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = minu(a[element], b[element])
   else
     result[element] = merge[element]

2.4.11. Elementwise borrow-out of subtraction

Description

Use these builtins to compute the borrow-out of an elementwise subtraction of two integer vectors.

Instruction
vmsbc.vv
Prototypes
__epi_8xi1 __builtin_epi_vmsbc_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbc_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsbc_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsbc_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbc_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsbc_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbc_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsbc_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsbc_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbc_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsbc_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbc_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsbc_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsbc_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbc_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsbc_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = borrow_out(a[element] - b[element])
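For unsigned element subtraction, borrow_out(a - b) reduces to the comparison a < b, i.e. the cases where the subtraction would wrap around. A scalar C sketch (illustrative, not the builtin):

```c
#include <assert.h>
#include <stdint.h>

/* The borrow-out of a - b is 1 exactly when a < b as unsigned values,
   i.e. when the subtraction wraps around modulo 2^64. */
static _Bool borrow_out64(uint64_t a, uint64_t b) { return a < b; }
```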

2.4.12. Elementwise borrow-out of subtraction with given borrow-in

Description

Use these builtins to compute the elementwise borrow-out of the subtraction of two integer vectors with a given borrow-in.

This operation is useful for implementing integer vector subtractions wider than ELEN bits.

Instruction
vmsbc.vvm
Prototypes
__epi_8xi1 __builtin_epi_vmsbc_borrow_in_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                              __epi_8xi1 borrow_in,
                                              unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbc_borrow_in_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                               __epi_4xi1 borrow_in,
                                               unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsbc_borrow_in_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                               __epi_2xi1 borrow_in,
                                               unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsbc_borrow_in_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                               __epi_1xi1 borrow_in,
                                               unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbc_borrow_in_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                                __epi_16xi1 borrow_in,
                                                unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsbc_borrow_in_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                               __epi_8xi1 borrow_in,
                                               unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbc_borrow_in_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                               __epi_4xi1 borrow_in,
                                               unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsbc_borrow_in_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                               __epi_2xi1 borrow_in,
                                               unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsbc_borrow_in_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                                __epi_32xi1 borrow_in,
                                                unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbc_borrow_in_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                                 __epi_16xi1 borrow_in,
                                                 unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsbc_borrow_in_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                               __epi_8xi1 borrow_in,
                                               unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbc_borrow_in_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                               __epi_4xi1 borrow_in,
                                               unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsbc_borrow_in_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                                __epi_64xi1 borrow_in,
                                                unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsbc_borrow_in_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                                 __epi_32xi1 borrow_in,
                                                 unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbc_borrow_in_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                                 __epi_16xi1 borrow_in,
                                                 unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsbc_borrow_in_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                               __epi_8xi1 borrow_in,
                                               unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = borrow_out(a[element] - b[element] - borrow_in[element])
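As a sketch of the wider-than-ELEN use case, the following scalar C subtracts 128-bit values stored as two 64-bit limbs: the low limbs produce a borrow-out (the role of vmsbc.vv) and the high limbs consume it as a borrow-in (the role of vmsbc.vvm, which would also chain further limbs). This is a portable illustration of the technique, not EPI code:

```c
#include <assert.h>
#include <stdint.h>

/* 128-bit subtraction over {lo, hi} 64-bit limbs, chaining the borrow
   from the low limb into the high limb. */
static void sub128(uint64_t a_lo, uint64_t a_hi,
                   uint64_t b_lo, uint64_t b_hi,
                   uint64_t *r_lo, uint64_t *r_hi) {
  _Bool borrow = a_lo < b_lo;   /* borrow_out(a_lo - b_lo) */
  *r_lo = a_lo - b_lo;
  *r_hi = a_hi - b_hi - borrow; /* consume the borrow-in */
}
```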

2.4.13. Elementwise integer multiplication

Description

Use these builtins to compute the elementwise multiplication of two integer vectors.

This operation returns the lower bits of the product.

Instruction
vmul.vv
Prototypes
__epi_8xi8 __builtin_epi_vmul_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmul_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmul_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmul_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmul_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmul_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmul_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmul_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmul_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmul_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmul_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmul_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmul_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmul_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmul_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmul_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] * b[element]
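Because only the lower element-width bits are kept, the result wraps modulo 2 to the element width. A scalar C sketch for 8-bit elements (illustrative, not the builtin):

```c
#include <assert.h>
#include <stdint.h>

/* vmul-style low half: for 8-bit elements, 16 * 17 = 272 = 0x110, and
   only the low 8 bits (0x10) are kept. */
static uint8_t mul_lo8(uint8_t a, uint8_t b) { return (uint8_t)(a * b); }
```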
Masked prototypes
__epi_8xi8 __builtin_epi_vmul_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmul_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmul_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmul_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmul_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmul_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmul_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmul_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmul_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmul_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmul_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmul_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmul_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmul_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmul_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmul_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] * b[element]
   else
     result[element] = merge[element]

2.4.14. Elementwise integer multiplication (higher bits)

Description

Use these builtins to compute the elementwise multiplication of two integer vectors.

This operation returns the higher bits of the product.

Instruction
vmulh.vv
Prototypes
__epi_8xi8 __builtin_epi_vmulh_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmulh_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmulh_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmulh_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmulh_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmulh_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmulh_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmulh_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmulh_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmulh_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmulh_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmulh_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmulh_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmulh_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmulh_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmulh_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = mulh(a[element], b[element])
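
For 64-bit elements mulh needs 128-bit arithmetic, but for 32-bit elements its semantics can be sketched in plain C by widening to 64 bits (the helper name is ours, not an EPI builtin):

```c
#include <stdint.h>

/* High 32 bits of the full 64-bit signed product: a sketch of the
   vmulh semantics for 32-bit elements. */
static int32_t mulh_i32(int32_t a, int32_t b) {
  return (int32_t)(((int64_t)a * (int64_t)b) >> 32);
}
```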
Masked prototypes
__epi_8xi8 __builtin_epi_vmulh_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmulh_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmulh_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmulh_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmulh_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmulh_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmulh_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmulh_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmulh_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmulh_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmulh_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmulh_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmulh_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmulh_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmulh_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmulh_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = mulh(a[element], b[element])
   else
     result[element] = merge[element]

2.4.15. Elementwise mixed-sign integer multiplication (higher bits)

Description

Use these builtins to do an elementwise multiplication of a signed integer vector by an unsigned integer vector.

The first operand is interpreted as a signed integer vector; the second operand is interpreted as an unsigned integer vector.

This operation returns the upper bits (high half) of the full-width product.

Instruction
vmulhsu.vv
Prototypes
__epi_8xi8 __builtin_epi_vmulhsu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmulhsu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmulhsu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmulhsu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmulhsu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmulhsu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmulhsu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmulhsu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmulhsu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmulhsu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmulhsu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmulhsu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmulhsu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmulhsu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmulhsu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmulhsu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = mulhsu(a[element], b[element])
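For 32-bit elements, the mixed-sign semantics can be sketched in plain C (helper name ours; note the lossless widening of both operands to int64_t):

```c
#include <stdint.h>

/* High 32 bits of the signed-by-unsigned 64-bit product: a sketch of
   the vmulhsu semantics for 32-bit elements. Both operands fit
   losslessly in int64_t; >> on a negative int64_t is an arithmetic
   shift on all mainstream compilers. */
static int32_t mulhsu_i32(int32_t a, uint32_t b) {
  return (int32_t)(((int64_t)a * (int64_t)b) >> 32);
}
```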
Masked prototypes
__epi_8xi8 __builtin_epi_vmulhsu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmulhsu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmulhsu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmulhsu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                             __epi_1xi64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmulhsu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmulhsu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmulhsu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmulhsu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmulhsu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmulhsu_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmulhsu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmulhsu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmulhsu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmulhsu_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmulhsu_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmulhsu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = mulhsu(a[element], b[element])
   else
     result[element] = merge[element]

2.4.16. Elementwise unsigned integer multiplication (higher bits)

Description

Use these builtins to do an elementwise multiplication of two unsigned integer vectors.

This operation returns the upper bits (high half) of the full-width product.

Instruction
vmulhu.vv
Prototypes
__epi_8xi8 __builtin_epi_vmulhu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                     unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmulhu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                       unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmulhu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                       unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmulhu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                       unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmulhu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                       unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmulhu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmulhu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                       unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmulhu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                       unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmulhu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmulhu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmulhu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmulhu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                       unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmulhu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmulhu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmulhu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmulhu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                       unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = mulhu(a[element], b[element])
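As with the signed variant, the 32-bit case can be sketched in plain C by widening (helper name ours, not an EPI builtin):

```c
#include <stdint.h>

/* High 32 bits of the full 64-bit unsigned product: a sketch of the
   vmulhu semantics for 32-bit elements. */
static uint32_t mulhu_u32(uint32_t a, uint32_t b) {
  return (uint32_t)(((uint64_t)a * (uint64_t)b) >> 32);
}
```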
Masked prototypes
__epi_8xi8 __builtin_epi_vmulhu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                          __epi_8xi8 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmulhu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                            __epi_4xi16 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmulhu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                            __epi_2xi32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmulhu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                            __epi_1xi64 b, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmulhu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                            __epi_16xi8 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmulhu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                            __epi_8xi16 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmulhu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                            __epi_4xi32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmulhu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                            __epi_2xi64 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmulhu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                            __epi_32xi8 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmulhu_16xi16_mask(__epi_16xi16 merge,
                                              __epi_16xi16 a, __epi_16xi16 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmulhu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                            __epi_8xi32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmulhu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                            __epi_4xi64 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmulhu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                            __epi_64xi8 b, __epi_64xi1 mask,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmulhu_32xi16_mask(__epi_32xi16 merge,
                                              __epi_32xi16 a, __epi_32xi16 b,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmulhu_16xi32_mask(__epi_16xi32 merge,
                                              __epi_16xi32 a, __epi_16xi32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmulhu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                            __epi_8xi64 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = mulhu(a[element], b[element])
   else
     result[element] = merge[element]

2.4.17. Narrowing elementwise arithmetic shift-right

Description

Use these builtins to do an arithmetic shift-right of the elements of the first operand, by the shift amounts given in the corresponding elements of the second operand.

The result of the operation is a vector of integer elements whose bitwidth is half that of the elements of the first (wide) operand.

With a shift amount of zero, this builtin simply narrows (truncates) an integer vector.

Instruction
vnsra.vv
Prototypes
__epi_8xi8 __builtin_epi_vnsra_8xi8(__epi_8xi16 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vnsra_4xi16(__epi_4xi32 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vnsra_2xi32(__epi_2xi64 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vnsra_16xi8(__epi_16xi16 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vnsra_8xi16(__epi_8xi32 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vnsra_4xi32(__epi_4xi64 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vnsra_32xi8(__epi_32xi16 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vnsra_16xi16(__epi_16xi32 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vnsra_8xi32(__epi_8xi64 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vnsra_64xi8(__epi_64xi16 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vnsra_32xi16(__epi_32xi32 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vnsra_16xi32(__epi_16xi64 a, __epi_16xi32 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = narrow_int(sra(a[element], b[element]))
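Per element, the i64-to-i32 case can be sketched in plain C (helper name ours; we assume, per the V-extension, that the shift amount is taken modulo the source element width):

```c
#include <stdint.h>

/* Sketch of vnsra for a 64-bit source element narrowed to 32 bits:
   arithmetic shift-right, then truncate to the lower half. */
static int32_t nsra64_to_i32(int64_t a, uint32_t shamt) {
  return (int32_t)(a >> (shamt & 63));
}
```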
Masked prototypes
__epi_8xi8 __builtin_epi_vnsra_8xi8_mask(__epi_8xi8 merge, __epi_8xi16 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vnsra_4xi16_mask(__epi_4xi16 merge, __epi_4xi32 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vnsra_2xi32_mask(__epi_2xi32 merge, __epi_2xi64 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vnsra_16xi8_mask(__epi_16xi8 merge, __epi_16xi16 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vnsra_8xi16_mask(__epi_8xi16 merge, __epi_8xi32 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vnsra_4xi32_mask(__epi_4xi32 merge, __epi_4xi64 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vnsra_32xi8_mask(__epi_32xi8 merge, __epi_32xi16 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vnsra_16xi16_mask(__epi_16xi16 merge, __epi_16xi32 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vnsra_8xi32_mask(__epi_8xi32 merge, __epi_8xi64 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vnsra_64xi8_mask(__epi_64xi8 merge, __epi_64xi16 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vnsra_32xi16_mask(__epi_32xi16 merge, __epi_32xi32 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vnsra_16xi32_mask(__epi_16xi32 merge, __epi_16xi64 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = narrow_int(sra(a[element], b[element]))
   else
     result[element] = merge[element]

2.4.18. Narrowing elementwise logical shift-right

Description

Use these builtins to do a logical shift-right of the elements of the first operand, by the shift amounts given in the corresponding elements of the second operand.

The result of the operation is a vector of integer elements whose bitwidth is half that of the elements of the first (wide) operand.

With a shift amount of zero, this builtin simply narrows (truncates) an integer vector.

Instruction
vnsrl.vv
Prototypes
__epi_8xi8 __builtin_epi_vnsrl_8xi8(__epi_8xi16 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vnsrl_4xi16(__epi_4xi32 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vnsrl_2xi32(__epi_2xi64 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vnsrl_16xi8(__epi_16xi16 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vnsrl_8xi16(__epi_8xi32 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vnsrl_4xi32(__epi_4xi64 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vnsrl_32xi8(__epi_32xi16 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vnsrl_16xi16(__epi_16xi32 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vnsrl_8xi32(__epi_8xi64 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vnsrl_64xi8(__epi_64xi16 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vnsrl_32xi16(__epi_32xi32 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vnsrl_16xi32(__epi_16xi64 a, __epi_16xi32 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = narrow_int(srl(a[element], b[element]))
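The i64-to-i32 case can be sketched in plain C as below (helper name ours); the only difference from the arithmetic variant is that vacated bits are zero-filled rather than sign-filled:

```c
#include <stdint.h>

/* Sketch of vnsrl for a 64-bit source element narrowed to 32 bits:
   logical shift-right (zero fill), then truncate to the lower half. */
static uint32_t nsrl64_to_u32(uint64_t a, uint32_t shamt) {
  return (uint32_t)(a >> (shamt & 63));
}
```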
Masked prototypes
__epi_8xi8 __builtin_epi_vnsrl_8xi8_mask(__epi_8xi8 merge, __epi_8xi16 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vnsrl_4xi16_mask(__epi_4xi16 merge, __epi_4xi32 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vnsrl_2xi32_mask(__epi_2xi32 merge, __epi_2xi64 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vnsrl_16xi8_mask(__epi_16xi8 merge, __epi_16xi16 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vnsrl_8xi16_mask(__epi_8xi16 merge, __epi_8xi32 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vnsrl_4xi32_mask(__epi_4xi32 merge, __epi_4xi64 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vnsrl_32xi8_mask(__epi_32xi8 merge, __epi_32xi16 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vnsrl_16xi16_mask(__epi_16xi16 merge, __epi_16xi32 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vnsrl_8xi32_mask(__epi_8xi32 merge, __epi_8xi64 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vnsrl_64xi8_mask(__epi_64xi8 merge, __epi_64xi16 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vnsrl_32xi16_mask(__epi_32xi16 merge, __epi_32xi32 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vnsrl_16xi32_mask(__epi_16xi32 merge, __epi_16xi64 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = narrow_int(srl(a[element], b[element]))
   else
     result[element] = merge[element]

2.4.19. Integer vector bitwise-and reduction

Description

Use these builtins to compute the bitwise-and of all the elements of an integer vector. The initial value of the reduction is taken from the first element of vector b.

Instruction
vredand.vs
Prototypes
__epi_8xi8 __builtin_epi_vredand_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredand_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredand_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredand_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredand_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredand_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredand_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredand_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredand_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredand_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredand_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredand_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredand_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredand_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredand_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredand_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                        unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = bitwise_and(current_red, a[element])

  result[0] = current_red
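As a scalar sketch of the pseudocode above in plain C (helper name and signature ours; only element 0 of the result vector is defined):

```c
#include <stdint.h>

/* Sketch of vredand.vs for 32-bit elements:
   result = b0 AND a[0] AND ... AND a[gvl-1]. */
static int32_t redand_i32(const int32_t *a, int32_t b0, unsigned long gvl) {
  int32_t acc = b0;
  for (unsigned long i = 0; i < gvl; ++i)
    acc &= a[i];
  return acc;
}
```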
Masked prototypes
__epi_8xi8 __builtin_epi_vredand_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredand_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredand_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredand_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                             __epi_1xi64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredand_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredand_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredand_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredand_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredand_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredand_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredand_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredand_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredand_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredand_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredand_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredand_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = bitwise_and(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
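
The masked and unmasked Operation pseudocode above can be modelled in plain C. The following is an illustrative scalar reference only (the function name is invented for this sketch; it is not an EPI builtin and no EPI toolchain is assumed), mirroring the bitwise-and reduction on 64-bit elements:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar reference model of the vredand.vs semantics:
 * result = b[0] AND a[0] AND a[1] AND ... AND a[gvl-1].
 * Only element 0 of the destination vector is defined by the
 * reduction; this model returns that single value. */
int64_t ref_vredand_i64(const int64_t *a, int64_t b0, size_t gvl) {
    int64_t current_red = b0;
    for (size_t element = 0; element < gvl; element++)
        current_red &= a[element];
    return current_red;
}
```

With `gvl == 0` the pseudocode performs no reduction at all, so this model's return of `b0` in that case is one plausible convention, not a guarantee about the builtin.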

2.4.20. Integer vector maximum reduction

Description

Use these builtins to compute the signed maximum of all the elements of an integer vector and a given scalar value. The scalar value is taken from the first element of vector b.

Instruction
vredmax.vs
Prototypes
__epi_8xi8 __builtin_epi_vredmax_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredmax_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredmax_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredmax_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredmax_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredmax_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredmax_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredmax_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredmax_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredmax_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredmax_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredmax_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredmax_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredmax_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredmax_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredmax_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                        unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = max(current_red, a[element])

  result[0] = current_red
Masked prototypes
__epi_8xi8 __builtin_epi_vredmax_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredmax_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredmax_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredmax_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                             __epi_1xi64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredmax_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredmax_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredmax_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredmax_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredmax_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredmax_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredmax_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredmax_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredmax_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredmax_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredmax_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredmax_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = max(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
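
The Operation pseudocode can be sketched as a scalar reference model in plain C (the function name is illustrative, not an EPI builtin). Elements are compared as signed values:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar reference model of vredmax.vs (signed maximum reduction):
 * result = max(b[0], a[0], ..., a[gvl-1]). */
int32_t ref_vredmax_i32(const int32_t *a, int32_t b0, size_t gvl) {
    int32_t current_red = b0;
    for (size_t element = 0; element < gvl; element++)
        if (a[element] > current_red)
            current_red = a[element];
    return current_red;
}
```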

2.4.21. Unsigned integer vector maximum reduction

Description

Use these builtins to compute the unsigned maximum of all the elements of an integer vector and a given scalar value. The scalar value is taken from the first element of vector b.

Instruction
vredmaxu.vs
Prototypes
__epi_8xi8 __builtin_epi_vredmaxu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredmaxu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredmaxu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredmaxu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredmaxu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredmaxu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredmaxu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredmaxu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredmaxu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredmaxu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredmaxu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredmaxu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredmaxu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredmaxu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredmaxu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredmaxu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                         unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = maxu(current_red, a[element])

  result[0] = current_red
Masked prototypes
__epi_8xi8 __builtin_epi_vredmaxu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                            __epi_8xi8 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredmaxu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                              __epi_4xi16 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredmaxu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                              __epi_2xi32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredmaxu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                              __epi_1xi64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredmaxu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                              __epi_16xi8 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredmaxu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                              __epi_8xi16 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredmaxu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                              __epi_4xi32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredmaxu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                              __epi_2xi64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredmaxu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                              __epi_32xi8 b, __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredmaxu_16xi16_mask(__epi_16xi16 merge,
                                                __epi_16xi16 a, __epi_16xi16 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredmaxu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                              __epi_8xi32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredmaxu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                              __epi_4xi64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredmaxu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                              __epi_64xi8 b, __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredmaxu_32xi16_mask(__epi_32xi16 merge,
                                                __epi_32xi16 a, __epi_32xi16 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredmaxu_16xi32_mask(__epi_16xi32 merge,
                                                __epi_16xi32 a, __epi_16xi32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredmaxu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                              __epi_8xi64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = maxu(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
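
The only difference from vredmax is that elements are compared as unsigned values, which matters when the high bit is set. A scalar sketch on 8-bit elements (illustrative function name, not an EPI builtin):

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar reference model of vredmaxu.vs (unsigned maximum reduction).
 * The bit pattern 0xFF is 255 as an unsigned byte but -1 as a signed
 * one, so the signed and unsigned reductions can pick different
 * elements from the same vector. */
uint8_t ref_vredmaxu_u8(const uint8_t *a, uint8_t b0, size_t gvl) {
    uint8_t current_red = b0;
    for (size_t element = 0; element < gvl; element++)
        if (a[element] > current_red)
            current_red = a[element];
    return current_red;
}
```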

2.4.22. Integer vector minimum reduction

Description

Use these builtins to compute the signed minimum of all the elements of an integer vector and a given scalar value. The scalar value is taken from the first element of vector b.

Instruction
vredmin.vs
Prototypes
__epi_8xi8 __builtin_epi_vredmin_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredmin_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredmin_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredmin_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredmin_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredmin_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredmin_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredmin_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredmin_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredmin_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredmin_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredmin_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredmin_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredmin_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredmin_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredmin_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                        unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = min(current_red, a[element])

  result[0] = current_red
Masked prototypes
__epi_8xi8 __builtin_epi_vredmin_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredmin_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredmin_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredmin_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                             __epi_1xi64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredmin_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredmin_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredmin_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredmin_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredmin_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredmin_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredmin_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredmin_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredmin_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredmin_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredmin_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredmin_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = min(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
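
A scalar reference model of the Operation pseudocode, using signed comparison (the function name is invented for this sketch; it is not part of the EPI API):

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar reference model of vredmin.vs (signed minimum reduction):
 * result = min(b[0], a[0], ..., a[gvl-1]). When gvl == 0 the
 * pseudocode defines no result; returning b0 here is merely a
 * convention of this model. */
int64_t ref_vredmin_i64(const int64_t *a, int64_t b0, size_t gvl) {
    int64_t current_red = b0;
    for (size_t element = 0; element < gvl; element++)
        if (a[element] < current_red)
            current_red = a[element];
    return current_red;
}
```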

2.4.23. Unsigned integer vector minimum reduction

Description

Use these builtins to compute the unsigned minimum of all the elements of an integer vector and a given scalar value. The scalar value is taken from the first element of vector b.

Instruction
vredminu.vs
Prototypes
__epi_8xi8 __builtin_epi_vredminu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredminu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredminu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredminu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredminu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredminu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredminu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredminu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredminu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredminu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredminu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredminu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredminu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredminu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredminu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredminu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                         unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = minu(current_red, a[element])

  result[0] = current_red
Masked prototypes
__epi_8xi8 __builtin_epi_vredminu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                            __epi_8xi8 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredminu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                              __epi_4xi16 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredminu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                              __epi_2xi32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredminu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                              __epi_1xi64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredminu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                              __epi_16xi8 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredminu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                              __epi_8xi16 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredminu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                              __epi_4xi32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredminu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                              __epi_2xi64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredminu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                              __epi_32xi8 b, __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredminu_16xi16_mask(__epi_16xi16 merge,
                                                __epi_16xi16 a, __epi_16xi16 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredminu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                              __epi_8xi32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredminu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                              __epi_4xi64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredminu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                              __epi_64xi8 b, __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredminu_32xi16_mask(__epi_32xi16 merge,
                                                __epi_32xi16 a, __epi_32xi16 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredminu_16xi32_mask(__epi_16xi32 merge,
                                                __epi_16xi32 a, __epi_16xi32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredminu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                              __epi_8xi64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = minu(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
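
As with the maximum reductions, vredminu compares element bits as unsigned. The masked pseudocode above can be modelled in plain C (illustrative function name, not an EPI builtin); the key point is that masked-off elements simply do not participate in the reduction:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Scalar reference model of the masked vredminu.vs operation:
 * only elements whose mask bit is set take part in the unsigned
 * minimum; the reduction still starts from b[0]. */
uint32_t ref_vredminu_u32_mask(const uint32_t *a, uint32_t b0,
                               const bool *mask, size_t gvl) {
    uint32_t current_red = b0;
    for (size_t element = 0; element < gvl; element++)
        if (mask[element] && a[element] < current_red)
            current_red = a[element];
    return current_red;
}
```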

2.4.24. Integer vector bitwise-or reduction

Description

Use these builtins to compute the bitwise-or of all the elements of an integer vector. The initial value of the bitwise-or is taken from the first element of vector b.

Instruction
vredor.vs
Prototypes
__epi_8xi8 __builtin_epi_vredor_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                     unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredor_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                       unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredor_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                       unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredor_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                       unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredor_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                       unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredor_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredor_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                       unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredor_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                       unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredor_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredor_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredor_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredor_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                       unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredor_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredor_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredor_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredor_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                       unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = bitwise_or(current_red, a[element])

  result[0] = current_red
Masked prototypes
__epi_8xi8 __builtin_epi_vredor_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                          __epi_8xi8 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredor_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                            __epi_4xi16 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredor_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                            __epi_2xi32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredor_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                            __epi_1xi64 b, __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredor_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                            __epi_16xi8 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredor_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                            __epi_8xi16 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredor_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                            __epi_4xi32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredor_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                            __epi_2xi64 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredor_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                            __epi_32xi8 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredor_16xi16_mask(__epi_16xi16 merge,
                                              __epi_16xi16 a, __epi_16xi16 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredor_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                            __epi_8xi32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredor_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                            __epi_4xi64 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredor_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                            __epi_64xi8 b, __epi_64xi1 mask,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredor_32xi16_mask(__epi_32xi16 merge,
                                              __epi_32xi16 a, __epi_32xi16 b,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredor_16xi32_mask(__epi_16xi32 merge,
                                              __epi_16xi32 a, __epi_16xi32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredor_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                            __epi_8xi64 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = bitwise_or(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
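
The semantics of the unmasked and masked operations above can be modelled in plain scalar C. This is an illustrative reference sketch, not the intrinsic itself; the helper names `red_or_i64_ref` and `red_or_i64_mask_ref` are hypothetical and not part of the EPI API.

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of vredor.vs: OR-reduce a[0..gvl-1], seeded with b[0].
   Returns the value the builtin writes into element 0 of its result. */
static inline int64_t red_or_i64_ref(const int64_t *a, int64_t b0, size_t gvl) {
    int64_t acc = b0;
    for (size_t i = 0; i < gvl; ++i)
        acc |= a[i];
    return acc;
}

/* Masked variant: elements whose mask bit is clear do not participate
   in the reduction. */
static inline int64_t red_or_i64_mask_ref(const int64_t *a, int64_t b0,
                                          const int *mask, size_t gvl) {
    int64_t acc = b0;
    for (size_t i = 0; i < gvl; ++i)
        if (mask[i])
            acc |= a[i];
    return acc;
}
```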

2.4.25. Sum of integer vector

Description

Use these builtins to compute the sum of all the elements of an integer vector. The initial result of the sum is taken from the first element of the vector b.

Instruction
vredsum.vs
Prototypes
__epi_8xi8 __builtin_epi_vredsum_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredsum_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredsum_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredsum_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredsum_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredsum_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredsum_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredsum_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredsum_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredsum_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredsum_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredsum_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredsum_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredsum_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredsum_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredsum_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                        unsigned long int gvl);
Operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     current_sum = current_sum + a[element]

  result[0] = current_sum
Masked prototypes
__epi_8xi8 __builtin_epi_vredsum_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredsum_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredsum_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredsum_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                             __epi_1xi64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredsum_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredsum_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredsum_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredsum_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredsum_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredsum_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredsum_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredsum_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredsum_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredsum_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredsum_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredsum_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
if gvl > 0:
  current_sum = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_sum = current_sum + a[element]
     else
       result[element] = merge[element]

  result[0] = current_sum
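
The sum reduction above can be modelled in plain scalar C. This is an illustrative reference sketch, not the intrinsic itself; the helper name `red_sum_i64_ref` is hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of vredsum.vs: sum a[0..gvl-1], seeded with b[0].
   The accumulation is done in unsigned arithmetic so that overflow
   wraps modulo 2^64, matching two's-complement vector behaviour. */
static inline int64_t red_sum_i64_ref(const int64_t *a, int64_t b0,
                                      size_t gvl) {
    uint64_t acc = (uint64_t)b0;
    for (size_t i = 0; i < gvl; ++i)
        acc += (uint64_t)a[i];
    return (int64_t)acc;
}
```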

2.4.26. Integer vector bitwise-xor reduction

Description

Use these builtins to compute the bitwise-xor of all the elements of an integer vector. The initial result of the bitwise-xor is taken from the first element of the vector b.

Instruction
vredxor.vs
Prototypes
__epi_8xi8 __builtin_epi_vredxor_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredxor_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredxor_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredxor_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredxor_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredxor_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredxor_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredxor_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredxor_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredxor_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredxor_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredxor_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredxor_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredxor_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredxor_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredxor_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                        unsigned long int gvl);
Operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     current_red = bitwise_xor(current_red, a[element])

  result[0] = current_red
Masked prototypes
__epi_8xi8 __builtin_epi_vredxor_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi16 __builtin_epi_vredxor_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi32 __builtin_epi_vredxor_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_1xi64 __builtin_epi_vredxor_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                             __epi_1xi64 b, __epi_1xi1 mask,
                                             unsigned long int gvl);
__epi_16xi8 __builtin_epi_vredxor_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi16 __builtin_epi_vredxor_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vredxor_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vredxor_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi64 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_32xi8 __builtin_epi_vredxor_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vredxor_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vredxor_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vredxor_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi64 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_64xi8 __builtin_epi_vredxor_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vredxor_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vredxor_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vredxor_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi64 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
if gvl > 0:
  current_red = b[0]
  for element = 0 to gvl - 1
     if mask[element] then
       current_red = bitwise_xor(current_red, a[element])
     else
       result[element] = merge[element]

  result[0] = current_red
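
The XOR reduction above can be modelled in plain scalar C. This is an illustrative reference sketch, not the intrinsic itself; the helper name `red_xor_i64_ref` is hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of vredxor.vs: XOR-reduce a[0..gvl-1], seeded with b[0].
   Returns the value the builtin writes into element 0 of its result. */
static inline int64_t red_xor_i64_ref(const int64_t *a, int64_t b0,
                                      size_t gvl) {
    int64_t acc = b0;
    for (size_t i = 0; i < gvl; ++i)
        acc ^= a[i];
    return acc;
}
```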

2.4.27. Elementwise integer division remainder

Description

Use these builtins to compute the elementwise integer division remainder of two integer vectors.

Instruction
vrem.vv
Prototypes
__epi_8xi8 __builtin_epi_vrem_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vrem_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vrem_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vrem_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vrem_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vrem_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vrem_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vrem_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vrem_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vrem_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vrem_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vrem_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vrem_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vrem_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vrem_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vrem_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = rem(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vrem_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vrem_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vrem_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vrem_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vrem_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vrem_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vrem_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vrem_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vrem_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vrem_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vrem_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vrem_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vrem_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vrem_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vrem_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vrem_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = rem(a[element], b[element])
   else
     result[element] = merge[element]
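
The per-element `rem` semantics can be modelled in plain scalar C. This sketch assumes vrem follows the remainder rules of the RISC-V base ISA (truncated division; the two cases that are undefined behaviour in C are defined by the architecture: x % 0 yields x, and the overflow case yields 0). The helper name `rem_ref` is hypothetical.

```c
#include <stdint.h>

/* Scalar model of one vrem element: signed truncated remainder with
   the RISC-V-defined corner cases made explicit. */
static inline int64_t rem_ref(int64_t x, int64_t y) {
    if (y == 0)
        return x;                 /* remainder by zero yields the dividend */
    if (x == INT64_MIN && y == -1)
        return 0;                 /* overflow case yields 0 */
    return x % y;                 /* C's % is truncated, like the ISA */
}
```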

2.4.28. Elementwise unsigned integer division remainder

Description

Use these builtins to compute the elementwise integer division remainder of two unsigned integer vectors.

Instruction
vremu.vv
Prototypes
__epi_8xi8 __builtin_epi_vremu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vremu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vremu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vremu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vremu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vremu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vremu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vremu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vremu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vremu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vremu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vremu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vremu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vremu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vremu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vremu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = remu(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vremu_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vremu_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vremu_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vremu_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vremu_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vremu_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vremu_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vremu_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vremu_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vremu_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vremu_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vremu_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vremu_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vremu_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vremu_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vremu_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = remu(a[element], b[element])
   else
     result[element] = merge[element]
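
The per-element `remu` semantics can be modelled in plain scalar C. This sketch assumes vremu follows the unsigned remainder rules of the RISC-V base ISA, where remainder by zero yields the dividend. The helper name `remu_ref` is hypothetical.

```c
#include <stdint.h>

/* Scalar model of one vremu element: unsigned remainder, with
   x % 0 defined to yield x as in the RISC-V base ISA. */
static inline uint64_t remu_ref(uint64_t x, uint64_t y) {
    return y == 0 ? x : x % y;
}
```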

2.4.29. Elementwise subtraction with borrow-in

Description

Use these builtins to compute the elementwise subtraction of two integer vectors and a borrow-in.

Instruction
vsbc.vvm
Prototypes
__epi_8xi8 __builtin_epi_vsbc_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   __epi_8xi1 borrow_in, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsbc_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     __epi_4xi1 borrow_in,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsbc_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     __epi_2xi1 borrow_in,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsbc_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     __epi_1xi1 borrow_in,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsbc_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     __epi_16xi1 borrow_in,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsbc_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     __epi_8xi1 borrow_in,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsbc_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     __epi_4xi1 borrow_in,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsbc_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     __epi_2xi1 borrow_in,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsbc_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     __epi_32xi1 borrow_in,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsbc_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       __epi_16xi1 borrow_in,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsbc_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     __epi_8xi1 borrow_in,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsbc_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     __epi_4xi1 borrow_in,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsbc_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     __epi_64xi1 borrow_in,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsbc_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       __epi_32xi1 borrow_in,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsbc_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       __epi_16xi1 borrow_in,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsbc_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     __epi_8xi1 borrow_in,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] - b[element] - borrow_in[element]

2.4.30. Elementwise integer subtraction

Description

Use these builtins to do an elementwise subtraction of two integer vectors.

Instruction
vsub.vv
Prototypes
__epi_8xi8 __builtin_epi_vsub_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsub_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsub_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsub_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsub_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsub_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsub_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsub_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsub_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsub_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsub_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsub_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsub_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsub_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsub_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsub_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] - b[element]
Masked prototypes
__epi_8xi8 __builtin_epi_vsub_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsub_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsub_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsub_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsub_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsub_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsub_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsub_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsub_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsub_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsub_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsub_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsub_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsub_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsub_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsub_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] - b[element]
   else
     result[element] = merge[element]

2.4.31. Elementwise widening integer addition

Description

Use these builtins to do an elementwise addition of two integer vectors.

Before doing the addition, the elements of the two vectors are sign-extended to integer values with twice the number of bits as the original elements.

Instruction
vwadd.vv
Prototypes
__epi_8xi16 __builtin_epi_vwadd_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwadd_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwadd_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwadd_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwadd_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwadd_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwadd_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwadd_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwadd_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwadd_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwadd_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwadd_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = wide_int(a[element]) + wide_int(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwadd_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwadd_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwadd_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwadd_16xi16_mask(__epi_16xi16 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwadd_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwadd_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwadd_32xi16_mask(__epi_32xi16 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwadd_16xi32_mask(__epi_16xi32 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwadd_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwadd_64xi16_mask(__epi_64xi16 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwadd_32xi32_mask(__epi_32xi32 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwadd_16xi64_mask(__epi_16xi64 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = wide_int(a[element]) + wide_int(b[element])
   else
     result[element] = merge[element]

2.4.32. Elementwise widening integer addition (second operand)

Description

Use these builtins to do an elementwise addition of a wide integer vector (first operand) and a narrower integer vector (second operand).

Before doing the addition, the elements of the second vector operand are sign-extended to integer values with twice the number of bits as the original elements.

Instruction
vwadd.wv
Prototypes
__epi_8xi16 __builtin_epi_vwadd_w_8xi16(__epi_8xi16 a, __epi_8xi8 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwadd_w_4xi32(__epi_4xi32 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwadd_w_2xi64(__epi_2xi64 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwadd_w_16xi16(__epi_16xi16 a, __epi_16xi8 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwadd_w_8xi32(__epi_8xi32 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwadd_w_4xi64(__epi_4xi64 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwadd_w_32xi16(__epi_32xi16 a, __epi_32xi8 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwadd_w_16xi32(__epi_16xi32 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwadd_w_8xi64(__epi_8xi64 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwadd_w_64xi16(__epi_64xi16 a, __epi_64xi8 b,
                                          unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwadd_w_32xi32(__epi_32xi32 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwadd_w_16xi64(__epi_16xi64 a, __epi_16xi32 b,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] + wide_int(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwadd_w_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi8 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwadd_w_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwadd_w_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwadd_w_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi8 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwadd_w_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwadd_w_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwadd_w_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi8 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwadd_w_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwadd_w_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwadd_w_64xi16_mask(__epi_64xi16 merge,
                                               __epi_64xi16 a, __epi_64xi8 b,
                                               __epi_64xi1 mask,
                                               unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwadd_w_32xi32_mask(__epi_32xi32 merge,
                                               __epi_32xi32 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwadd_w_16xi64_mask(__epi_16xi64 merge,
                                               __epi_16xi64 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] + wide_int(b[element])
   else
     result[element] = merge[element]

2.4.33. Elementwise widening unsigned integer addition

Description

Use these builtins to do an elementwise addition of two unsigned integer vectors.

Before doing the addition, the elements of the two vectors are zero-extended to unsigned integer values with twice the number of bits as the original elements.

Instruction
vwaddu.vv
Prototypes
__epi_8xi16 __builtin_epi_vwaddu_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwaddu_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                       unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwaddu_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwaddu_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwaddu_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwaddu_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwaddu_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwaddu_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwaddu_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                       unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwaddu_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                         unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwaddu_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                         unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwaddu_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = wide_uint(a[element]) + wide_uint(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwaddu_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                            __epi_8xi8 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwaddu_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                            __epi_4xi16 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwaddu_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                            __epi_2xi32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwaddu_16xi16_mask(__epi_16xi16 merge, __epi_16xi8 a,
                                              __epi_16xi8 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwaddu_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                            __epi_8xi16 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwaddu_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                            __epi_4xi32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwaddu_32xi16_mask(__epi_32xi16 merge, __epi_32xi8 a,
                                              __epi_32xi8 b, __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwaddu_16xi32_mask(__epi_16xi32 merge,
                                              __epi_16xi16 a, __epi_16xi16 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwaddu_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                            __epi_8xi32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwaddu_64xi16_mask(__epi_64xi16 merge, __epi_64xi8 a,
                                              __epi_64xi8 b, __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwaddu_32xi32_mask(__epi_32xi32 merge,
                                              __epi_32xi16 a, __epi_32xi16 b,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwaddu_16xi64_mask(__epi_16xi64 merge,
                                              __epi_16xi32 a, __epi_16xi32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = wide_uint(a[element]) + wide_uint(b[element])
   else
     result[element] = merge[element]

2.4.34. Elementwise widening unsigned integer addition (second operand)

Description

Use these builtins to do an elementwise addition of a wide unsigned integer vector (first operand) and a narrower unsigned integer vector (second operand).

Before doing the addition, the elements of the second vector operand are zero-extended to unsigned integer values with twice the number of bits as the original elements.

Instruction
vwaddu.wv
Prototypes
__epi_8xi16 __builtin_epi_vwaddu_w_8xi16(__epi_8xi16 a, __epi_8xi8 b,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwaddu_w_4xi32(__epi_4xi32 a, __epi_4xi16 b,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwaddu_w_2xi64(__epi_2xi64 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwaddu_w_16xi16(__epi_16xi16 a, __epi_16xi8 b,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwaddu_w_8xi32(__epi_8xi32 a, __epi_8xi16 b,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwaddu_w_4xi64(__epi_4xi64 a, __epi_4xi32 b,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwaddu_w_32xi16(__epi_32xi16 a, __epi_32xi8 b,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwaddu_w_16xi32(__epi_16xi32 a, __epi_16xi16 b,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwaddu_w_8xi64(__epi_8xi64 a, __epi_8xi32 b,
                                         unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwaddu_w_64xi16(__epi_64xi16 a, __epi_64xi8 b,
                                           unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwaddu_w_32xi32(__epi_32xi32 a, __epi_32xi16 b,
                                           unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwaddu_w_16xi64(__epi_16xi64 a, __epi_16xi32 b,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] + wide_uint(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwaddu_w_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                              __epi_8xi8 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwaddu_w_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                              __epi_4xi16 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwaddu_w_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                              __epi_2xi32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwaddu_w_16xi16_mask(__epi_16xi16 merge,
                                                __epi_16xi16 a, __epi_16xi8 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwaddu_w_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                              __epi_8xi16 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwaddu_w_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                              __epi_4xi32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwaddu_w_32xi16_mask(__epi_32xi16 merge,
                                                __epi_32xi16 a, __epi_32xi8 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwaddu_w_16xi32_mask(__epi_16xi32 merge,
                                                __epi_16xi32 a, __epi_16xi16 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwaddu_w_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                              __epi_8xi32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwaddu_w_64xi16_mask(__epi_64xi16 merge,
                                                __epi_64xi16 a, __epi_64xi8 b,
                                                __epi_64xi1 mask,
                                                unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwaddu_w_32xi32_mask(__epi_32xi32 merge,
                                                __epi_32xi32 a, __epi_32xi16 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwaddu_w_16xi64_mask(__epi_16xi64 merge,
                                                __epi_16xi64 a, __epi_16xi32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] + wide_uint(b[element])
   else
     result[element] = merge[element]

2.4.35. Elementwise widening integer multiplication

Description

Use these builtins to do an elementwise multiplication of two integer vectors.

Before doing the multiplication, the elements of the two vectors are sign-extended to integer values with twice the number of bits as the original elements. This way the full product is returned without truncation.

Instruction
vwmul.vv
Prototypes
__epi_8xi16 __builtin_epi_vwmul_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwmul_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwmul_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwmul_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwmul_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwmul_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwmul_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwmul_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwmul_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwmul_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwmul_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwmul_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = wide_int(a[element]) * wide_int(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwmul_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwmul_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwmul_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwmul_16xi16_mask(__epi_16xi16 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwmul_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwmul_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwmul_32xi16_mask(__epi_32xi16 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwmul_16xi32_mask(__epi_16xi32 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwmul_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwmul_64xi16_mask(__epi_64xi16 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwmul_32xi32_mask(__epi_32xi32 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwmul_16xi64_mask(__epi_16xi64 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = wide_int(a[element]) * wide_int(b[element])
   else
     result[element] = merge[element]
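
The widening behaviour can be illustrated with a scalar C sketch for the i8-to-i16 case (illustrative code, not part of the EPI API): both operands are sign-extended before multiplying, so the full product is preserved in the double-width result.

```c
#include <stdint.h>

/* Scalar model of one element of vwmul.vv for the i8 -> i16 case:
   both operands are sign-extended, so the double-width result
   holds the full product without truncation. */
static int16_t vwmul_i8_scalar(int8_t a, int8_t b) {
    /* integer promotion computes the product exactly; it always
       fits in int16_t (range -16256 .. 16384) */
    return (int16_t)((int16_t)a * (int16_t)b);
}
```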

2.4.36. Elementwise widening integer multiplication (mixed signs)

Description

Use these builtins to do an elementwise multiplication of a signed integer vector (first operand) by an unsigned integer vector (second operand).

Before multiplying, the elements of both vectors are widened to twice their original width: the first operand is sign-extended and the second is zero-extended. The result holds the full product.

Instruction
vwmulsu.vv
Prototypes
__epi_8xi16 __builtin_epi_vwmulsu_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwmulsu_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwmulsu_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwmulsu_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwmulsu_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwmulsu_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwmulsu_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwmulsu_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwmulsu_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwmulsu_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                          unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwmulsu_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwmulsu_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = widen_int(a[element]) * widen_uint(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwmulsu_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                             __epi_8xi8 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwmulsu_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwmulsu_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwmulsu_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi8 a, __epi_16xi8 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwmulsu_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwmulsu_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwmulsu_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi8 a, __epi_32xi8 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwmulsu_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi16 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwmulsu_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwmulsu_64xi16_mask(__epi_64xi16 merge,
                                               __epi_64xi8 a, __epi_64xi8 b,
                                               __epi_64xi1 mask,
                                               unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwmulsu_32xi32_mask(__epi_32xi32 merge,
                                               __epi_32xi16 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwmulsu_16xi64_mask(__epi_16xi64 merge,
                                               __epi_16xi32 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = widen_int(a[element]) * widen_uint(b[element])
   else
     result[element] = merge[element]
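
A scalar C sketch of the i8-to-i16 case (illustrative code, not part of the EPI API) shows the mixed-sign widening: the first operand is sign-extended, the second zero-extended.

```c
#include <stdint.h>

/* Scalar model of one element of vwmulsu.vv for the i8 -> i16 case:
   the first operand is sign-extended, the second zero-extended.
   The product range is -32640 .. 32385, which fits in int16_t. */
static int16_t vwmulsu_i8_scalar(int8_t a, uint8_t b) {
    return (int16_t)((int16_t)a * (uint16_t)b);
}
```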

2.4.37. Elementwise widening unsigned integer multiplication

Description

Use these builtins to do an elementwise multiplication of two unsigned integer vectors.

Before multiplying, the elements of the two vectors are zero-extended to integer values with twice the number of bits as the original elements. The result holds the full product.

Instruction
vwmulu.vv
Prototypes
__epi_8xi16 __builtin_epi_vwmulu_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwmulu_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                       unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwmulu_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwmulu_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwmulu_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwmulu_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwmulu_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwmulu_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwmulu_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                       unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwmulu_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                         unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwmulu_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                         unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwmulu_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = widen_uint(a[element]) * widen_uint(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwmulu_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                            __epi_8xi8 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwmulu_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                            __epi_4xi16 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwmulu_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                            __epi_2xi32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwmulu_16xi16_mask(__epi_16xi16 merge, __epi_16xi8 a,
                                              __epi_16xi8 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwmulu_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                            __epi_8xi16 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwmulu_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                            __epi_4xi32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwmulu_32xi16_mask(__epi_32xi16 merge, __epi_32xi8 a,
                                              __epi_32xi8 b, __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwmulu_16xi32_mask(__epi_16xi32 merge,
                                              __epi_16xi16 a, __epi_16xi16 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwmulu_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                            __epi_8xi32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwmulu_64xi16_mask(__epi_64xi16 merge, __epi_64xi8 a,
                                              __epi_64xi8 b, __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwmulu_32xi32_mask(__epi_32xi32 merge,
                                              __epi_32xi16 a, __epi_32xi16 b,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwmulu_16xi64_mask(__epi_16xi64 merge,
                                              __epi_16xi32 a, __epi_16xi32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = widen_uint(a[element]) * widen_uint(b[element])
   else
     result[element] = merge[element]
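
A scalar C sketch of the i8-to-i16 case (illustrative code, not part of the EPI API): both operands are zero-extended, so no product can overflow the double-width result.

```c
#include <stdint.h>

/* Scalar model of one element of vwmulu.vv for the i8 -> i16 case:
   both operands are zero-extended before multiplying.
   The maximum product 255 * 255 = 65025 fits in uint16_t. */
static uint16_t vwmulu_i8_scalar(uint8_t a, uint8_t b) {
    return (uint16_t)((uint16_t)a * (uint16_t)b);
}
```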

2.4.38. Elementwise widening integer subtraction

Description

Use these builtins to do an elementwise subtraction of two integer vectors.

Before doing the subtraction, the elements of the two vectors are sign-extended to integer values with twice the number of bits as the original elements.

Instruction
vwsub.vv
Prototypes
__epi_8xi16 __builtin_epi_vwsub_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsub_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsub_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsub_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsub_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsub_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsub_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsub_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsub_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsub_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                        unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsub_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsub_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = wide_int(a[element]) - wide_int(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwsub_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                           __epi_8xi8 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsub_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsub_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsub_16xi16_mask(__epi_16xi16 merge, __epi_16xi8 a,
                                             __epi_16xi8 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsub_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsub_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsub_32xi16_mask(__epi_32xi16 merge, __epi_32xi8 a,
                                             __epi_32xi8 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsub_16xi32_mask(__epi_16xi32 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsub_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsub_64xi16_mask(__epi_64xi16 merge, __epi_64xi8 a,
                                             __epi_64xi8 b, __epi_64xi1 mask,
                                             unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsub_32xi32_mask(__epi_32xi32 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsub_16xi64_mask(__epi_16xi64 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = wide_int(a[element]) - wide_int(b[element])
   else
     result[element] = merge[element]
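
A scalar C sketch of the i8-to-i16 case (illustrative code, not part of the EPI API): because both operands are sign-extended first, the difference can never overflow the double-width result.

```c
#include <stdint.h>

/* Scalar model of one element of vwsub.vv for the i8 -> i16 case:
   both operands are sign-extended, so the difference
   (range -255 .. 255) always fits in int16_t. */
static int16_t vwsub_i8_scalar(int8_t a, int8_t b) {
    return (int16_t)((int16_t)a - (int16_t)b);
}
```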

2.4.39. Elementwise widening integer subtraction (second operand)

Description

Use these builtins to do an elementwise subtraction of two integer vectors.

Before doing the subtraction, the elements of the second vector operand are sign-extended to integer values with twice the number of bits as the original elements, matching the element width of the first operand.

Instruction
vwsub.wv
Prototypes
__epi_8xi16 __builtin_epi_vwsub_w_8xi16(__epi_8xi16 a, __epi_8xi8 b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsub_w_4xi32(__epi_4xi32 a, __epi_4xi16 b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsub_w_2xi64(__epi_2xi64 a, __epi_2xi32 b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsub_w_16xi16(__epi_16xi16 a, __epi_16xi8 b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsub_w_8xi32(__epi_8xi32 a, __epi_8xi16 b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsub_w_4xi64(__epi_4xi64 a, __epi_4xi32 b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsub_w_32xi16(__epi_32xi16 a, __epi_32xi8 b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsub_w_16xi32(__epi_16xi32 a, __epi_16xi16 b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsub_w_8xi64(__epi_8xi64 a, __epi_8xi32 b,
                                        unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsub_w_64xi16(__epi_64xi16 a, __epi_64xi8 b,
                                          unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsub_w_32xi32(__epi_32xi32 a, __epi_32xi16 b,
                                          unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsub_w_16xi64(__epi_16xi64 a, __epi_16xi32 b,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] - wide_int(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwsub_w_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                             __epi_8xi8 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsub_w_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                             __epi_4xi16 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsub_w_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                             __epi_2xi32 b, __epi_2xi1 mask,
                                             unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsub_w_16xi16_mask(__epi_16xi16 merge,
                                               __epi_16xi16 a, __epi_16xi8 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsub_w_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                             __epi_8xi16 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsub_w_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                             __epi_4xi32 b, __epi_4xi1 mask,
                                             unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsub_w_32xi16_mask(__epi_32xi16 merge,
                                               __epi_32xi16 a, __epi_32xi8 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsub_w_16xi32_mask(__epi_16xi32 merge,
                                               __epi_16xi32 a, __epi_16xi16 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsub_w_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                             __epi_8xi32 b, __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsub_w_64xi16_mask(__epi_64xi16 merge,
                                               __epi_64xi16 a, __epi_64xi8 b,
                                               __epi_64xi1 mask,
                                               unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsub_w_32xi32_mask(__epi_32xi32 merge,
                                               __epi_32xi32 a, __epi_32xi16 b,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsub_w_16xi64_mask(__epi_16xi64 merge,
                                               __epi_16xi64 a, __epi_16xi32 b,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] - wide_int(b[element])
   else
     result[element] = merge[element]
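
A scalar C sketch of the i16/i8 case (illustrative code, not part of the EPI API) highlights the .wv form: the first operand is already double-width, so only the second operand is widened.

```c
#include <stdint.h>

/* Scalar model of one element of vwsub.wv for the i16/i8 case:
   the first operand is already double-width; only the second
   operand is sign-extended before subtracting. */
static int16_t vwsub_w_i8_scalar(int16_t a, int8_t b) {
    return (int16_t)(a - (int16_t)b);
}
```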

2.4.40. Elementwise widening unsigned integer subtraction

Description

Use these builtins to do an elementwise subtraction of two integer vectors.

Before doing the subtraction, the elements of the two vectors are zero-extended to integer values with twice the number of bits as the original elements.

Instruction
vwsubu.vv
Prototypes
__epi_8xi16 __builtin_epi_vwsubu_8xi16(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsubu_4xi32(__epi_4xi16 a, __epi_4xi16 b,
                                       unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsubu_2xi64(__epi_2xi32 a, __epi_2xi32 b,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsubu_16xi16(__epi_16xi8 a, __epi_16xi8 b,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsubu_8xi32(__epi_8xi16 a, __epi_8xi16 b,
                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsubu_4xi64(__epi_4xi32 a, __epi_4xi32 b,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsubu_32xi16(__epi_32xi8 a, __epi_32xi8 b,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsubu_16xi32(__epi_16xi16 a, __epi_16xi16 b,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsubu_8xi64(__epi_8xi32 a, __epi_8xi32 b,
                                       unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsubu_64xi16(__epi_64xi8 a, __epi_64xi8 b,
                                         unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsubu_32xi32(__epi_32xi16 a, __epi_32xi16 b,
                                         unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsubu_16xi64(__epi_16xi32 a, __epi_16xi32 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = wide_uint(a[element]) - wide_uint(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwsubu_8xi16_mask(__epi_8xi16 merge, __epi_8xi8 a,
                                            __epi_8xi8 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsubu_4xi32_mask(__epi_4xi32 merge, __epi_4xi16 a,
                                            __epi_4xi16 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsubu_2xi64_mask(__epi_2xi64 merge, __epi_2xi32 a,
                                            __epi_2xi32 b, __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsubu_16xi16_mask(__epi_16xi16 merge, __epi_16xi8 a,
                                              __epi_16xi8 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsubu_8xi32_mask(__epi_8xi32 merge, __epi_8xi16 a,
                                            __epi_8xi16 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsubu_4xi64_mask(__epi_4xi64 merge, __epi_4xi32 a,
                                            __epi_4xi32 b, __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsubu_32xi16_mask(__epi_32xi16 merge, __epi_32xi8 a,
                                              __epi_32xi8 b, __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsubu_16xi32_mask(__epi_16xi32 merge,
                                              __epi_16xi16 a, __epi_16xi16 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsubu_8xi64_mask(__epi_8xi64 merge, __epi_8xi32 a,
                                            __epi_8xi32 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsubu_64xi16_mask(__epi_64xi16 merge, __epi_64xi8 a,
                                              __epi_64xi8 b, __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsubu_32xi32_mask(__epi_32xi32 merge,
                                              __epi_32xi16 a, __epi_32xi16 b,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsubu_16xi64_mask(__epi_16xi64 merge,
                                              __epi_16xi32 a, __epi_16xi32 b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = wide_uint(a[element]) - wide_uint(b[element])
   else
     result[element] = merge[element]
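
A scalar C sketch of the i8-to-i16 case (illustrative code, not part of the EPI API): both operands are zero-extended, and like the hardware the subtraction wraps modulo 2^16 when b is larger than a.

```c
#include <stdint.h>

/* Scalar model of one element of vwsubu.vv for the i8 -> i16 case:
   both operands are zero-extended; the subtraction wraps
   modulo 2^16 when b > a, as in the hardware. */
static uint16_t vwsubu_i8_scalar(uint8_t a, uint8_t b) {
    return (uint16_t)((uint16_t)a - (uint16_t)b);
}
```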

2.4.41. Elementwise widening unsigned integer subtraction (second operand)

Description

Use these builtins to do an elementwise subtraction of two integer vectors.

Before doing the subtraction, the elements of the second vector operand are zero-extended to integer values with twice the number of bits as the original elements, matching the element width of the first operand.

Instruction
vwsubu.wv
Prototypes
__epi_8xi16 __builtin_epi_vwsubu_w_8xi16(__epi_8xi16 a, __epi_8xi8 b,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsubu_w_4xi32(__epi_4xi32 a, __epi_4xi16 b,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsubu_w_2xi64(__epi_2xi64 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsubu_w_16xi16(__epi_16xi16 a, __epi_16xi8 b,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsubu_w_8xi32(__epi_8xi32 a, __epi_8xi16 b,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsubu_w_4xi64(__epi_4xi64 a, __epi_4xi32 b,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsubu_w_32xi16(__epi_32xi16 a, __epi_32xi8 b,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsubu_w_16xi32(__epi_16xi32 a, __epi_16xi16 b,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsubu_w_8xi64(__epi_8xi64 a, __epi_8xi32 b,
                                         unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsubu_w_64xi16(__epi_64xi16 a, __epi_64xi8 b,
                                           unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsubu_w_32xi32(__epi_32xi32 a, __epi_32xi16 b,
                                           unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsubu_w_16xi64(__epi_16xi64 a, __epi_16xi32 b,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] - wide_uint(b[element])
Masked prototypes
__epi_8xi16 __builtin_epi_vwsubu_w_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                              __epi_8xi8 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vwsubu_w_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                              __epi_4xi16 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vwsubu_w_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                              __epi_2xi32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vwsubu_w_16xi16_mask(__epi_16xi16 merge,
                                                __epi_16xi16 a, __epi_16xi8 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vwsubu_w_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                              __epi_8xi16 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vwsubu_w_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                              __epi_4xi32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vwsubu_w_32xi16_mask(__epi_32xi16 merge,
                                                __epi_32xi16 a, __epi_32xi8 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vwsubu_w_16xi32_mask(__epi_16xi32 merge,
                                                __epi_16xi32 a, __epi_16xi16 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vwsubu_w_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                              __epi_8xi32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_64xi16 __builtin_epi_vwsubu_w_64xi16_mask(__epi_64xi16 merge,
                                                __epi_64xi16 a, __epi_64xi8 b,
                                                __epi_64xi1 mask,
                                                unsigned long int gvl);
__epi_32xi32 __builtin_epi_vwsubu_w_32xi32_mask(__epi_32xi32 merge,
                                                __epi_32xi32 a, __epi_32xi16 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi64 __builtin_epi_vwsubu_w_16xi64_mask(__epi_16xi64 merge,
                                                __epi_16xi64 a, __epi_16xi32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] - wide_uint(b[element])
   else
     result[element] = merge[element]
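
A scalar C sketch of the i16/i8 case (illustrative code, not part of the EPI API): as in the signed .wv form, the first operand is already double-width and only the second operand is widened, here by zero-extension.

```c
#include <stdint.h>

/* Scalar model of one element of vwsubu.wv for the i16/i8 case:
   the first operand is already double-width; only the second
   operand is zero-extended before subtracting. The subtraction
   wraps modulo 2^16, as in the hardware. */
static uint16_t vwsubu_w_i8_scalar(uint16_t a, uint8_t b) {
    return (uint16_t)(a - (uint16_t)b);
}
```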

2.5. Integer relational operations

2.5.1. Compare elementwise two integer vectors for equality

Description

Use these builtins to compare two integer vectors for equality.

The result is a mask that enables the element if the integer comparison holds for that element.

Instruction
vmseq.vv
Prototypes
__epi_8xi1 __builtin_epi_vmseq_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmseq_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmseq_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmseq_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmseq_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmseq_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmseq_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmseq_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmseq_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmseq_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmseq_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmseq_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmseq_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmseq_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmseq_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmseq_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] == b[element]
Masked prototypes
__epi_8xi1 __builtin_epi_vmseq_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmseq_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmseq_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmseq_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmseq_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmseq_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmseq_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmseq_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmseq_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmseq_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmseq_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmseq_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmseq_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmseq_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmseq_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmseq_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] == b[element]
   else
     result[element] = merge[element]
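As a sanity check, the masked pseudocode corresponds to the following scalar model (an illustrative sketch, not an EPI builtin), here for 32-bit elements with the mask and merge vectors represented as plain int arrays:

```c
#include <stdint.h>

/* Scalar model of the masked vmseq operation: active elements receive
 * the equality result, inactive elements keep the merge mask bit. */
static void ref_vmseq_mask(int *result, const int *merge,
                           const int32_t *a, const int32_t *b,
                           const int *mask, unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        result[i] = mask[i] ? (a[i] == b[i]) : merge[i];
}
```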

2.5.2. Compare elementwise two integer vectors for greater-than

Description

Use these builtins to evaluate if the elements of the first integer vector are greater than, but not equal to, the corresponding elements of the second integer vector.

The result is a mask that enables the element if the integer comparison holds for that element.

Instruction
vmsgt.vx
Prototypes
__epi_8xi1 __builtin_epi_vmsgt_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgt_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgt_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsgt_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgt_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgt_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgt_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgt_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgt_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgt_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgt_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgt_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsgt_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgt_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgt_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgt_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] > b[element]
Masked prototypes
__epi_8xi1 __builtin_epi_vmsgt_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgt_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgt_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsgt_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgt_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgt_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgt_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgt_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgt_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgt_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgt_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgt_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsgt_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgt_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgt_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgt_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] > b[element]
   else
     result[element] = merge[element]

2.5.3. Compare elementwise two unsigned integer vectors for greater-than

Description

Use these builtins to evaluate if the elements of the first unsigned integer vector are greater than, but not equal to, the corresponding elements of the second unsigned integer vector.

The result is a mask that enables the element if the unsigned integer comparison holds for that element.

Instruction
vmsgtu.vx
Prototypes
__epi_8xi1 __builtin_epi_vmsgtu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgtu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgtu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsgtu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgtu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgtu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgtu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgtu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgtu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgtu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgtu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgtu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsgtu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                       unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgtu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgtu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgtu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = unsigned(a[element]) > unsigned(b[element])
Masked prototypes
__epi_8xi1 __builtin_epi_vmsgtu_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                          __epi_8xi8 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgtu_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgtu_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsgtu_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgtu_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                            __epi_16xi8 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgtu_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgtu_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsgtu_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgtu_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                            __epi_32xi8 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgtu_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgtu_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsgtu_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsgtu_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                            __epi_64xi8 b, __epi_64xi1 mask,
                                            unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsgtu_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsgtu_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsgtu_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = unsigned(a[element]) > unsigned(b[element])
   else
     result[element] = merge[element]
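The only difference between vmsgt and vmsgtu is how the element bits are interpreted. The plain C snippet below (for illustration only) shows that the same 8-bit pattern 0xFF compares differently: as a signed value it is -1, as an unsigned value it is 255.

```c
#include <stdint.h>

/* Signed vs. unsigned greater-than on the same bit pattern, mirroring
 * the vmsgt/vmsgtu pseudocode for a single element. */
int signed_gt(int8_t a, int8_t b)   { return a > b; }
int unsigned_gt(int8_t a, int8_t b) { return (uint8_t)a > (uint8_t)b; }
```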

2.5.4. Compare elementwise two integer vectors for lower-than-or-equal

Description

Use these builtins to evaluate if the elements of the first integer vector are lower than or equal to the corresponding elements of the second integer vector.

The result is a mask that enables the element if the integer comparison holds for that element.

Instruction
vmsle.vv
Prototypes
__epi_8xi1 __builtin_epi_vmsle_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsle_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsle_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsle_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsle_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsle_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsle_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsle_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsle_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsle_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsle_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsle_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsle_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsle_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsle_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsle_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] <= b[element]
Masked prototypes
__epi_8xi1 __builtin_epi_vmsle_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsle_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsle_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsle_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsle_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsle_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsle_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsle_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsle_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsle_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsle_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsle_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsle_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsle_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsle_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsle_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] <= b[element]
   else
     result[element] = merge[element]

2.5.5. Compare elementwise two unsigned integer vectors for lower-than-or-equal

Description

Use these builtins to evaluate if the elements of the first unsigned integer vector are lower than or equal to the corresponding elements of the second unsigned integer vector.

The result is a mask that enables the element if the unsigned integer comparison holds for that element.

Instruction
vmsleu.vv
Prototypes
__epi_8xi1 __builtin_epi_vmsleu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsleu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsleu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsleu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsleu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsleu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsleu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsleu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsleu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsleu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsleu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsleu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsleu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                       unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsleu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsleu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsleu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = unsigned(a[element]) <= unsigned(b[element])
Masked prototypes
__epi_8xi1 __builtin_epi_vmsleu_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                          __epi_8xi8 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsleu_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsleu_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsleu_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsleu_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                            __epi_16xi8 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsleu_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsleu_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsleu_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsleu_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                            __epi_32xi8 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsleu_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsleu_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsleu_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsleu_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                            __epi_64xi8 b, __epi_64xi1 mask,
                                            unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsleu_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsleu_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsleu_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = unsigned(a[element]) <= unsigned(b[element])
   else
     result[element] = merge[element]
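The unsigned comparison and the mask/merge behavior described in this section combine into one scalar model (an illustrative sketch, not an EPI builtin), here for 32-bit elements:

```c
#include <stdint.h>

/* Scalar model of the masked vmsleu operation: unsigned <= on active
 * elements, merge mask bit on inactive ones. */
static void ref_vmsleu_mask(int *result, const int *merge,
                            const uint32_t *a, const uint32_t *b,
                            const int *mask, unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        result[i] = mask[i] ? (a[i] <= b[i]) : merge[i];
}
```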

2.5.6. Compare elementwise two integer vectors for lower-than

Description

Use these builtins to evaluate if the elements of the first integer vector are lower than, but not equal to, the corresponding elements of the second integer vector.

The result is a mask that enables the element if the integer comparison holds for that element.

Instruction
vmslt.vv
Prototypes
__epi_8xi1 __builtin_epi_vmslt_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmslt_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmslt_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmslt_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmslt_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmslt_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmslt_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmslt_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmslt_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmslt_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmslt_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmslt_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmslt_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmslt_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmslt_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmslt_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] < b[element]
Masked prototypes
__epi_8xi1 __builtin_epi_vmslt_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmslt_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmslt_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmslt_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmslt_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmslt_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmslt_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmslt_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmslt_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmslt_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmslt_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmslt_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmslt_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmslt_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmslt_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmslt_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] < b[element]
   else
     result[element] = merge[element]
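
The unmasked operation above can be modeled in scalar C. The helper below is illustrative only (it is not part of the EPI builtin set); it mirrors the pseudocode for 32-bit elements, producing one mask byte per element:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of the unmasked vmslt operation for 32-bit elements:
   mask_out[i] is set when a[i] < b[i] holds as a signed comparison. */
static void model_vmslt_i32(const int32_t *a, const int32_t *b,
                            uint8_t *mask_out, size_t gvl) {
  for (size_t i = 0; i < gvl; ++i)
    mask_out[i] = a[i] < b[i];
}
```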

2.5.7. Compare elementwise two unsigned integer vectors for lower-than

Description

Use these builtins to evaluate if the elements of the first unsigned integer vector are lower than, but not equal to, the corresponding elements of the second unsigned integer vector.

The result is a mask that enables the element if the unsigned integer comparison holds for that element.

Instruction
vmsltu.vv
Prototypes
__epi_8xi1 __builtin_epi_vmsltu_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsltu_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsltu_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                      unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsltu_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsltu_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsltu_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsltu_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsltu_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsltu_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsltu_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                        unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsltu_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsltu_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                      unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsltu_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                       unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsltu_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                        unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsltu_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                        unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsltu_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = unsigned(a[element]) < unsigned(b[element])
Masked prototypes
__epi_8xi1 __builtin_epi_vmsltu_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                          __epi_8xi8 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsltu_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                           __epi_4xi16 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsltu_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                           __epi_2xi32 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsltu_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                           __epi_1xi64 b, __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsltu_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                            __epi_16xi8 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsltu_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                           __epi_8xi16 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsltu_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                           __epi_4xi32 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsltu_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                           __epi_2xi64 b, __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsltu_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                            __epi_32xi8 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsltu_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                             __epi_16xi16 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsltu_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                           __epi_8xi32 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsltu_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                           __epi_4xi64 b, __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsltu_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                            __epi_64xi8 b, __epi_64xi1 mask,
                                            unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsltu_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                             __epi_32xi16 b, __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsltu_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                             __epi_16xi32 b, __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsltu_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                           __epi_8xi64 b, __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = unsigned(a[element]) < unsigned(b[element])
   else
     result[element] = merge[element]
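
The signed and unsigned variants can disagree on the same bit pattern. The scalar sketch below (illustrative only, not an EPI builtin) contrasts vmslt and vmsltu for 8-bit elements, where 0xFF is -1 signed but 255 unsigned:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar models of vmslt (signed) and vmsltu (unsigned) for 8-bit
   elements; the same bit pattern can compare differently. */
static void model_vmslt_i8(const int8_t *a, const int8_t *b,
                           uint8_t *m, size_t gvl) {
  for (size_t i = 0; i < gvl; ++i)
    m[i] = a[i] < b[i];                   /* signed comparison */
}

static void model_vmsltu_i8(const int8_t *a, const int8_t *b,
                            uint8_t *m, size_t gvl) {
  for (size_t i = 0; i < gvl; ++i)
    m[i] = (uint8_t)a[i] < (uint8_t)b[i]; /* unsigned comparison */
}
```

With a = {-1} and b = {1}, the signed model yields a set mask bit (-1 < 1) while the unsigned model yields a clear one (255 < 1 is false).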

2.5.8. Compare elementwise two integer vectors for inequality

Description

Use these builtins to compare two integer vectors for inequality.

The result is a mask that enables the element if the integer comparison holds for that element.

Instruction
vmsne.vv
Prototypes
__epi_8xi1 __builtin_epi_vmsne_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsne_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsne_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsne_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsne_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                      unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsne_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsne_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsne_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsne_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsne_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsne_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsne_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsne_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsne_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsne_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsne_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a[element] != b[element]
Masked prototypes
__epi_8xi1 __builtin_epi_vmsne_8xi8_mask(__epi_8xi1 merge, __epi_8xi8 a,
                                         __epi_8xi8 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsne_4xi16_mask(__epi_4xi1 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsne_2xi32_mask(__epi_2xi1 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsne_1xi64_mask(__epi_1xi1 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsne_16xi8_mask(__epi_16xi1 merge, __epi_16xi8 a,
                                           __epi_16xi8 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsne_8xi16_mask(__epi_8xi1 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsne_4xi32_mask(__epi_4xi1 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsne_2xi64_mask(__epi_2xi1 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsne_32xi8_mask(__epi_32xi1 merge, __epi_32xi8 a,
                                           __epi_32xi8 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsne_16xi16_mask(__epi_16xi1 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsne_8xi32_mask(__epi_8xi1 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsne_4xi64_mask(__epi_4xi1 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsne_64xi8_mask(__epi_64xi1 merge, __epi_64xi8 a,
                                           __epi_64xi8 b, __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsne_32xi16_mask(__epi_32xi1 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsne_16xi32_mask(__epi_16xi1 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi1 __builtin_epi_vmsne_8xi64_mask(__epi_8xi1 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = a[element] != b[element]
   else
     result[element] = merge[element]
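
The masked variants above all follow the same merge discipline: active elements receive the comparison result, inactive elements keep the corresponding bit of the merge operand. A scalar C model of the masked vmsne operation (illustrative only, not an EPI builtin):

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of the masked vmsne operation for 32-bit elements:
   active elements (mask[i] set) get the inequality result, inactive
   elements keep the merge value. */
static void model_vmsne_i32_mask(const uint8_t *merge, const int32_t *a,
                                 const int32_t *b, const uint8_t *mask,
                                 uint8_t *result, size_t gvl) {
  for (size_t i = 0; i < gvl; ++i)
    result[i] = mask[i] ? (a[i] != b[i]) : merge[i];
}
```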

2.6. Memory accesses

2.6.1. Load contiguous elements from memory into a vector

Description

Use these builtins to load elements contiguous in memory into a vector.

Instruction
vle.v
Prototypes
__epi_8xi8 __builtin_epi_vload_8xi8(const signed char *address,
                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_4xi16(const signed short int *address,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_2xi32(const signed int *address,
                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_1xi64(const signed long int *address,
                                      unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_2xf32(const float *address,
                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_1xf64(const double *address,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_16xi8(const signed char *address,
                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_8xi16(const signed short int *address,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_4xi32(const signed int *address,
                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_2xi64(const signed long int *address,
                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_4xf32(const float *address,
                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_2xf64(const double *address,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_32xi8(const signed char *address,
                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_16xi16(const signed short int *address,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_8xi32(const signed int *address,
                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_4xi64(const signed long int *address,
                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_8xf32(const float *address,
                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_4xf64(const double *address,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_64xi8(const signed char *address,
                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_32xi16(const signed short int *address,
                                        unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_16xi32(const signed int *address,
                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_8xi64(const signed long int *address,
                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_16xf32(const float *address,
                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_8xf64(const double *address,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8 __builtin_epi_vload_8xi8_mask(__epi_8xi8 merge,
                                         const signed char *address,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_4xi16_mask(__epi_4xi16 merge,
                                           const signed short int *address,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_2xi32_mask(__epi_2xi32 merge,
                                           const signed int *address,
                                           __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_1xi64_mask(__epi_1xi64 merge,
                                           const signed long int *address,
                                           __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_2xf32_mask(__epi_2xf32 merge,
                                           const float *address,
                                           __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_1xf64_mask(__epi_1xf64 merge,
                                           const double *address,
                                           __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_16xi8_mask(__epi_16xi8 merge,
                                           const signed char *address,
                                           __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_8xi16_mask(__epi_8xi16 merge,
                                           const signed short int *address,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_4xi32_mask(__epi_4xi32 merge,
                                           const signed int *address,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_2xi64_mask(__epi_2xi64 merge,
                                           const signed long int *address,
                                           __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_4xf32_mask(__epi_4xf32 merge,
                                           const float *address,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_2xf64_mask(__epi_2xf64 merge,
                                           const double *address,
                                           __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_32xi8_mask(__epi_32xi8 merge,
                                           const signed char *address,
                                           __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_16xi16_mask(__epi_16xi16 merge,
                                             const signed short int *address,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_8xi32_mask(__epi_8xi32 merge,
                                           const signed int *address,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_4xi64_mask(__epi_4xi64 merge,
                                           const signed long int *address,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_8xf32_mask(__epi_8xf32 merge,
                                           const float *address,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_4xf64_mask(__epi_4xf64 merge,
                                           const double *address,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_64xi8_mask(__epi_64xi8 merge,
                                           const signed char *address,
                                           __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_32xi16_mask(__epi_32xi16 merge,
                                             const signed short int *address,
                                             __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_16xi32_mask(__epi_16xi32 merge,
                                             const signed int *address,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_8xi64_mask(__epi_8xi64 merge,
                                           const signed long int *address,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_16xf32_mask(__epi_16xf32 merge,
                                             const float *address,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_8xf64_mask(__epi_8xf64 merge,
                                           const double *address,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address)
   else
     result[element] = merge[element]
   address = address + SEW / 8
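
The unmasked operation reads gvl consecutive elements, advancing the address by SEW / 8 bytes per element. A scalar C model for 32-bit elements (illustrative only, not an EPI builtin):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Scalar model of the unmasked contiguous load for 32-bit elements
   (SEW = 32): gvl elements are read from consecutive addresses,
   SEW / 8 = 4 bytes apart, into the destination vector. */
static void model_vload_i32(int32_t *result, const void *address,
                            size_t gvl) {
  const unsigned char *p = address;
  for (size_t i = 0; i < gvl; ++i) {
    memcpy(&result[i], p, sizeof(int32_t)); /* load_element(address) */
    p += sizeof(int32_t);                   /* address += SEW / 8 */
  }
}
```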

2.6.2. Load contiguous elements from memory into a vector (cache-flags)

Description

Use these builtins to load elements contiguous in memory into a vector specifying the cache behaviour in the flags parameter.

Instruction
vle.v
Prototypes
__epi_8xi8 __builtin_epi_vload_ext_8xi8(const signed char *address,
                                        unsigned long int flags,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_4xi16(const signed short int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_2xi32(const signed int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_1xi64(const signed long int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_ext_2xf32(const float *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_ext_1xf64(const double *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_16xi8(const signed char *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_8xi16(const signed short int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_4xi32(const signed int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_2xi64(const signed long int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_ext_4xf32(const float *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_ext_2xf64(const double *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_32xi8(const signed char *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_16xi16(const signed short int *address,
                                            unsigned long int flags,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_8xi32(const signed int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_4xi64(const signed long int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_ext_8xf32(const float *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_ext_4xf64(const double *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_64xi8(const signed char *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_32xi16(const signed short int *address,
                                            unsigned long int flags,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_16xi32(const signed int *address,
                                            unsigned long int flags,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_8xi64(const signed long int *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_ext_16xf32(const float *address,
                                            unsigned long int flags,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_ext_8xf64(const double *address,
                                          unsigned long int flags,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8 __builtin_epi_vload_ext_8xi8_mask(__epi_8xi8 merge,
                                             const signed char *address,
                                             unsigned long int flags,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_4xi16_mask(__epi_4xi16 merge,
                                               const signed short int *address,
                                               unsigned long int flags,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_2xi32_mask(__epi_2xi32 merge,
                                               const signed int *address,
                                               unsigned long int flags,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_1xi64_mask(__epi_1xi64 merge,
                                               const signed long int *address,
                                               unsigned long int flags,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_ext_2xf32_mask(__epi_2xf32 merge,
                                               const float *address,
                                               unsigned long int flags,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_ext_1xf64_mask(__epi_1xf64 merge,
                                               const double *address,
                                               unsigned long int flags,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_16xi8_mask(__epi_16xi8 merge,
                                               const signed char *address,
                                               unsigned long int flags,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_8xi16_mask(__epi_8xi16 merge,
                                               const signed short int *address,
                                               unsigned long int flags,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_4xi32_mask(__epi_4xi32 merge,
                                               const signed int *address,
                                               unsigned long int flags,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_2xi64_mask(__epi_2xi64 merge,
                                               const signed long int *address,
                                               unsigned long int flags,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_ext_4xf32_mask(__epi_4xf32 merge,
                                               const float *address,
                                               unsigned long int flags,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_ext_2xf64_mask(__epi_2xf64 merge,
                                               const double *address,
                                               unsigned long int flags,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_32xi8_mask(__epi_32xi8 merge,
                                               const signed char *address,
                                               unsigned long int flags,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_8xi32_mask(__epi_8xi32 merge,
                                               const signed int *address,
                                               unsigned long int flags,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_4xi64_mask(__epi_4xi64 merge,
                                               const signed long int *address,
                                               unsigned long int flags,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_ext_8xf32_mask(__epi_8xf32 merge,
                                               const float *address,
                                               unsigned long int flags,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_ext_4xf64_mask(__epi_4xf64 merge,
                                               const double *address,
                                               unsigned long int flags,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_64xi8_mask(__epi_64xi8 merge,
                                               const signed char *address,
                                               unsigned long int flags,
                                               __epi_64xi1 mask,
                                               unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_16xi32_mask(__epi_16xi32 merge,
                                                 const signed int *address,
                                                 unsigned long int flags,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_8xi64_mask(__epi_8xi64 merge,
                                               const signed long int *address,
                                               unsigned long int flags,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_ext_16xf32_mask(__epi_16xf32 merge,
                                                 const float *address,
                                                 unsigned long int flags,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_ext_8xf64_mask(__epi_8xf64 merge,
                                               const double *address,
                                               unsigned long int flags,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address)
     address = address + SEW / 8
   else
     result[element] = merge[element]

2.6.3. Load elements from memory into a vector using an index vector (cache-flags)

Description

Use these builtins to load elements into a vector using an index vector, specifying the cache behaviour in the flags parameter. This is commonly known as a gather operation.

Each element of the index vector is added, as a byte offset, to the address parameter to yield the effective address from which the corresponding vector element is loaded.

Instruction
vlx.v
Prototypes
__epi_8xi8 __builtin_epi_vload_ext_indexed_8xi8(const signed char *address,
                                                __epi_8xi8 index,
                                                unsigned long int flags,
                                                unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_indexed_4xi16(
    const signed short int *address, __epi_4xi16 index, unsigned long int flags,
    unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_indexed_2xi32(const signed int *address,
                                                  __epi_2xi32 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_indexed_1xi64(
    const signed long int *address, __epi_1xi64 index, unsigned long int flags,
    unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_ext_indexed_2xf32(const float *address,
                                                  __epi_2xi32 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_ext_indexed_1xf64(const double *address,
                                                  __epi_1xi64 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_indexed_16xi8(const signed char *address,
                                                  __epi_16xi8 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_indexed_8xi16(
    const signed short int *address, __epi_8xi16 index, unsigned long int flags,
    unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_indexed_4xi32(const signed int *address,
                                                  __epi_4xi32 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_indexed_2xi64(
    const signed long int *address, __epi_2xi64 index, unsigned long int flags,
    unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_ext_indexed_4xf32(const float *address,
                                                  __epi_4xi32 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_ext_indexed_2xf64(const double *address,
                                                  __epi_2xi64 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_indexed_32xi8(const signed char *address,
                                                  __epi_32xi8 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_indexed_16xi16(
    const signed short int *address, __epi_16xi16 index,
    unsigned long int flags, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_indexed_8xi32(const signed int *address,
                                                  __epi_8xi32 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_indexed_4xi64(
    const signed long int *address, __epi_4xi64 index, unsigned long int flags,
    unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_ext_indexed_8xf32(const float *address,
                                                  __epi_8xi32 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_ext_indexed_4xf64(const double *address,
                                                  __epi_4xi64 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_indexed_64xi8(const signed char *address,
                                                  __epi_64xi8 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_indexed_32xi16(
    const signed short int *address, __epi_32xi16 index,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_indexed_16xi32(const signed int *address,
                                                    __epi_16xi32 index,
                                                    unsigned long int flags,
                                                    unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_indexed_8xi64(
    const signed long int *address, __epi_8xi64 index, unsigned long int flags,
    unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_ext_indexed_16xf32(const float *address,
                                                    __epi_16xi32 index,
                                                    unsigned long int flags,
                                                    unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_ext_indexed_8xf64(const double *address,
                                                  __epi_8xi64 index,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address + index[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vload_ext_indexed_8xi8_mask(
    __epi_8xi8 merge, const signed char *address, __epi_8xi8 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_indexed_4xi16_mask(
    __epi_4xi16 merge, const signed short int *address, __epi_4xi16 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_indexed_2xi32_mask(
    __epi_2xi32 merge, const signed int *address, __epi_2xi32 index,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_indexed_1xi64_mask(
    __epi_1xi64 merge, const signed long int *address, __epi_1xi64 index,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_ext_indexed_2xf32_mask(
    __epi_2xf32 merge, const float *address, __epi_2xi32 index,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_ext_indexed_1xf64_mask(
    __epi_1xf64 merge, const double *address, __epi_1xi64 index,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_indexed_16xi8_mask(
    __epi_16xi8 merge, const signed char *address, __epi_16xi8 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_indexed_8xi16_mask(
    __epi_8xi16 merge, const signed short int *address, __epi_8xi16 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_indexed_4xi32_mask(
    __epi_4xi32 merge, const signed int *address, __epi_4xi32 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_indexed_2xi64_mask(
    __epi_2xi64 merge, const signed long int *address, __epi_2xi64 index,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_ext_indexed_4xf32_mask(
    __epi_4xf32 merge, const float *address, __epi_4xi32 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_ext_indexed_2xf64_mask(
    __epi_2xf64 merge, const double *address, __epi_2xi64 index,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_indexed_32xi8_mask(
    __epi_32xi8 merge, const signed char *address, __epi_32xi8 index,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_indexed_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address, __epi_16xi16 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_indexed_8xi32_mask(
    __epi_8xi32 merge, const signed int *address, __epi_8xi32 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_indexed_4xi64_mask(
    __epi_4xi64 merge, const signed long int *address, __epi_4xi64 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_ext_indexed_8xf32_mask(
    __epi_8xf32 merge, const float *address, __epi_8xi32 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_ext_indexed_4xf64_mask(
    __epi_4xf64 merge, const double *address, __epi_4xi64 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_indexed_64xi8_mask(
    __epi_64xi8 merge, const signed char *address, __epi_64xi8 index,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_indexed_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address, __epi_32xi16 index,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_indexed_16xi32_mask(
    __epi_16xi32 merge, const signed int *address, __epi_16xi32 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_indexed_8xi64_mask(
    __epi_8xi64 merge, const signed long int *address, __epi_8xi64 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_ext_indexed_16xf32_mask(
    __epi_16xf32 merge, const float *address, __epi_16xi32 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_ext_indexed_8xf64_mask(
    __epi_8xf64 merge, const double *address, __epi_8xi64 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address + index[element])
   else
     result[element] = merge[element]

2.6.4. Load unsigned integer elements from memory into a vector using an index vector (cache-flags)

Description

Use these builtins to load unsigned integer elements into a vector using an index vector, specifying the cache behaviour in the flags parameter. This is commonly known as a gather operation.

Each element of the index vector is added, as a byte offset, to the address parameter to yield the effective address from which the corresponding vector element is loaded.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_ext_indexed builtins.

Instruction
vlx.v
Prototypes
__epi_8xi8 __builtin_epi_vload_ext_indexed_unsigned_8xi8(
    const unsigned char *address, __epi_8xi8 index, unsigned long int flags,
    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_indexed_unsigned_4xi16(
    const unsigned short int *address, __epi_4xi16 index,
    unsigned long int flags, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_indexed_unsigned_2xi32(
    const unsigned int *address, __epi_2xi32 index, unsigned long int flags,
    unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_indexed_unsigned_1xi64(
    const unsigned long int *address, __epi_1xi64 index,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_indexed_unsigned_16xi8(
    const unsigned char *address, __epi_16xi8 index, unsigned long int flags,
    unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_indexed_unsigned_8xi16(
    const unsigned short int *address, __epi_8xi16 index,
    unsigned long int flags, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_indexed_unsigned_4xi32(
    const unsigned int *address, __epi_4xi32 index, unsigned long int flags,
    unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_indexed_unsigned_2xi64(
    const unsigned long int *address, __epi_2xi64 index,
    unsigned long int flags, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_indexed_unsigned_32xi8(
    const unsigned char *address, __epi_32xi8 index, unsigned long int flags,
    unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_indexed_unsigned_16xi16(
    const unsigned short int *address, __epi_16xi16 index,
    unsigned long int flags, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_indexed_unsigned_8xi32(
    const unsigned int *address, __epi_8xi32 index, unsigned long int flags,
    unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_indexed_unsigned_4xi64(
    const unsigned long int *address, __epi_4xi64 index,
    unsigned long int flags, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_indexed_unsigned_64xi8(
    const unsigned char *address, __epi_64xi8 index, unsigned long int flags,
    unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_indexed_unsigned_32xi16(
    const unsigned short int *address, __epi_32xi16 index,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_indexed_unsigned_16xi32(
    const unsigned int *address, __epi_16xi32 index, unsigned long int flags,
    unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_indexed_unsigned_8xi64(
    const unsigned long int *address, __epi_8xi64 index,
    unsigned long int flags, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address + index[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vload_ext_indexed_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, __epi_8xi8 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_indexed_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address, __epi_4xi16 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_indexed_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, __epi_2xi32 index,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_indexed_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, __epi_1xi64 index,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_indexed_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, __epi_16xi8 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_indexed_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address, __epi_8xi16 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_indexed_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, __epi_4xi32 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_indexed_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, __epi_2xi64 index,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_indexed_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, __epi_32xi8 index,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_indexed_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address, __epi_16xi16 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_indexed_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, __epi_8xi32 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_indexed_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, __epi_4xi64 index,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_indexed_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, __epi_64xi8 index,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_indexed_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address, __epi_32xi16 index,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_indexed_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, __epi_16xi32 index,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_indexed_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, __epi_8xi64 index,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_unsigned_element(address + index[element])
   else
     result[element] = merge[element]

2.6.5. Load strided elements from memory into a vector (cache-flags)

Description

Use these builtins to load elements that are separated in memory by a constant stride, in bytes, into a vector, specifying the cache behaviour in the flags parameter.

The stride value is repeatedly added to the address parameter to yield the effective address from which each successive element is loaded.

Instruction
vls.v
Prototypes
__epi_8xi8 __builtin_epi_vload_ext_strided_8xi8(const signed char *address,
                                                signed long int stride,
                                                unsigned long int flags,
                                                unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_strided_4xi16(
    const signed short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_strided_2xi32(const signed int *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_strided_1xi64(
    const signed long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_ext_strided_2xf32(const float *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_ext_strided_1xf64(const double *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_strided_16xi8(const signed char *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_strided_8xi16(
    const signed short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_strided_4xi32(const signed int *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_strided_2xi64(
    const signed long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_ext_strided_4xf32(const float *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_ext_strided_2xf64(const double *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_strided_32xi8(const signed char *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_strided_16xi16(
    const signed short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_strided_8xi32(const signed int *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_strided_4xi64(
    const signed long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_ext_strided_8xf32(const float *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_ext_strided_4xf64(const double *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_strided_64xi8(const signed char *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_strided_32xi16(
    const signed short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_strided_16xi32(const signed int *address,
                                                    signed long int stride,
                                                    unsigned long int flags,
                                                    unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_strided_8xi64(
    const signed long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_ext_strided_16xf32(const float *address,
                                                    signed long int stride,
                                                    unsigned long int flags,
                                                    unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_ext_strided_8xf64(const double *address,
                                                  signed long int stride,
                                                  unsigned long int flags,
                                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address)
  address = address + stride
Masked prototypes
__epi_8xi8 __builtin_epi_vload_ext_strided_8xi8_mask(
    __epi_8xi8 merge, const signed char *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_strided_4xi16_mask(
    __epi_4xi16 merge, const signed short int *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_strided_2xi32_mask(
    __epi_2xi32 merge, const signed int *address, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_strided_1xi64_mask(
    __epi_1xi64 merge, const signed long int *address, signed long int stride,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_ext_strided_2xf32_mask(
    __epi_2xf32 merge, const float *address, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_ext_strided_1xf64_mask(
    __epi_1xf64 merge, const double *address, signed long int stride,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_strided_16xi8_mask(
    __epi_16xi8 merge, const signed char *address, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_strided_8xi16_mask(
    __epi_8xi16 merge, const signed short int *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_strided_4xi32_mask(
    __epi_4xi32 merge, const signed int *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_strided_2xi64_mask(
    __epi_2xi64 merge, const signed long int *address, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_ext_strided_4xf32_mask(
    __epi_4xf32 merge, const float *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_ext_strided_2xf64_mask(
    __epi_2xf64 merge, const double *address, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_strided_32xi8_mask(
    __epi_32xi8 merge, const signed char *address, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_strided_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_strided_8xi32_mask(
    __epi_8xi32 merge, const signed int *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_strided_4xi64_mask(
    __epi_4xi64 merge, const signed long int *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_ext_strided_8xf32_mask(
    __epi_8xf32 merge, const float *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_ext_strided_4xf64_mask(
    __epi_4xf64 merge, const double *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_strided_64xi8_mask(
    __epi_64xi8 merge, const signed char *address, signed long int stride,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_strided_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_strided_16xi32_mask(
    __epi_16xi32 merge, const signed int *address, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_strided_8xi64_mask(
    __epi_8xi64 merge, const signed long int *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_ext_strided_16xf32_mask(
    __epi_16xf32 merge, const float *address, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_ext_strided_8xf64_mask(
    __epi_8xf64 merge, const double *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = load_element(address)
  else
    result[element] = merge[element]
  address = address + stride

2.6.6. Load unsigned strided elements from memory into a vector (cache-flags)

Description

Use these builtins to load unsigned integer elements that are separated in memory by a constant stride value, in bytes, into a vector, specifying the cache behaviour in the flags parameter.

The stride value is repeatedly added as an offset to the address parameter to yield the effective address from which each element of the vector is loaded.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_ext_strided builtins.

Instruction
vls.v
Prototypes
__epi_8xi8 __builtin_epi_vload_ext_strided_unsigned_8xi8(
    const unsigned char *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_strided_unsigned_4xi16(
    const unsigned short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_strided_unsigned_2xi32(
    const unsigned int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_strided_unsigned_1xi64(
    const unsigned long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_strided_unsigned_16xi8(
    const unsigned char *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_strided_unsigned_8xi16(
    const unsigned short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_strided_unsigned_4xi32(
    const unsigned int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_strided_unsigned_2xi64(
    const unsigned long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_strided_unsigned_32xi8(
    const unsigned char *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_strided_unsigned_16xi16(
    const unsigned short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_strided_unsigned_8xi32(
    const unsigned int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_strided_unsigned_4xi64(
    const unsigned long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_strided_unsigned_64xi8(
    const unsigned char *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_strided_unsigned_32xi16(
    const unsigned short int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_strided_unsigned_16xi32(
    const unsigned int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_strided_unsigned_8xi64(
    const unsigned long int *address, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address)
  address = address + stride
Masked prototypes
__epi_8xi8 __builtin_epi_vload_ext_strided_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_strided_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address,
    signed long int stride, unsigned long int flags, __epi_4xi1 mask,
    unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_strided_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_strided_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, signed long int stride,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_strided_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_strided_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address,
    signed long int stride, unsigned long int flags, __epi_8xi1 mask,
    unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_strided_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_strided_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_strided_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_strided_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address,
    signed long int stride, unsigned long int flags, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_strided_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_strided_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_strided_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, signed long int stride,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_strided_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address,
    signed long int stride, unsigned long int flags, __epi_32xi1 mask,
    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_strided_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_strided_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = load_unsigned_element(address)
  else
    result[element] = merge[element]
  address = address + stride

2.6.7. Load unsigned contiguous elements from memory into a vector (cache-flags)

Description

Use these builtins to load unsigned integer elements that are contiguous in memory into a vector, specifying the cache behaviour in the flags parameter.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_ext builtins.

Instruction
vle.v
Prototypes
__epi_8xi8 __builtin_epi_vload_ext_unsigned_8xi8(const unsigned char *address,
                                                 unsigned long int flags,
                                                 unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_ext_unsigned_4xi16(const unsigned short int *address,
                                       unsigned long int flags,
                                       unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_unsigned_2xi32(const unsigned int *address,
                                                   unsigned long int flags,
                                                   unsigned long int gvl);
__epi_1xi64
__builtin_epi_vload_ext_unsigned_1xi64(const unsigned long int *address,
                                       unsigned long int flags,
                                       unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_unsigned_16xi8(const unsigned char *address,
                                                   unsigned long int flags,
                                                   unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_ext_unsigned_8xi16(const unsigned short int *address,
                                       unsigned long int flags,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_unsigned_4xi32(const unsigned int *address,
                                                   unsigned long int flags,
                                                   unsigned long int gvl);
__epi_2xi64
__builtin_epi_vload_ext_unsigned_2xi64(const unsigned long int *address,
                                       unsigned long int flags,
                                       unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_unsigned_32xi8(const unsigned char *address,
                                                   unsigned long int flags,
                                                   unsigned long int gvl);
__epi_16xi16
__builtin_epi_vload_ext_unsigned_16xi16(const unsigned short int *address,
                                        unsigned long int flags,
                                        unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_unsigned_8xi32(const unsigned int *address,
                                                   unsigned long int flags,
                                                   unsigned long int gvl);
__epi_4xi64
__builtin_epi_vload_ext_unsigned_4xi64(const unsigned long int *address,
                                       unsigned long int flags,
                                       unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_unsigned_64xi8(const unsigned char *address,
                                                   unsigned long int flags,
                                                   unsigned long int gvl);
__epi_32xi16
__builtin_epi_vload_ext_unsigned_32xi16(const unsigned short int *address,
                                        unsigned long int flags,
                                        unsigned long int gvl);
__epi_16xi32
__builtin_epi_vload_ext_unsigned_16xi32(const unsigned int *address,
                                        unsigned long int flags,
                                        unsigned long int gvl);
__epi_8xi64
__builtin_epi_vload_ext_unsigned_8xi64(const unsigned long int *address,
                                       unsigned long int flags,
                                       unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8 __builtin_epi_vload_ext_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, unsigned long int flags,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_ext_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_ext_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, unsigned long int flags,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_ext_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_ext_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, unsigned long int flags,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_ext_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_ext_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, unsigned long int flags,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_ext_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_ext_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, unsigned long int flags,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_ext_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_ext_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, unsigned long int flags,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_ext_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_ext_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, unsigned long int flags,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_ext_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_ext_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, unsigned long int flags,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_ext_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = load_unsigned_element(address)
  else
    result[element] = merge[element]
  address = address + SEW / 8

2.6.8. Load elements from memory into a vector using an index vector

Description

Use these builtins to load elements into a vector using an index vector. This is commonly known as a gather operation.

The elements of the index vector are added, as byte offsets, to the address parameter to yield the effective address from which each element of the vector is loaded.

Instruction
vlx.v
Prototypes
__epi_8xi8 __builtin_epi_vload_indexed_8xi8(const signed char *address,
                                            __epi_8xi8 index,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_indexed_4xi16(const signed short int *address,
                                              __epi_4xi16 index,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_indexed_2xi32(const signed int *address,
                                              __epi_2xi32 index,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_indexed_1xi64(const signed long int *address,
                                              __epi_1xi64 index,
                                              unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_indexed_2xf32(const float *address,
                                              __epi_2xi32 index,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_indexed_1xf64(const double *address,
                                              __epi_1xi64 index,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_indexed_16xi8(const signed char *address,
                                              __epi_16xi8 index,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_indexed_8xi16(const signed short int *address,
                                              __epi_8xi16 index,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_indexed_4xi32(const signed int *address,
                                              __epi_4xi32 index,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_indexed_2xi64(const signed long int *address,
                                              __epi_2xi64 index,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_indexed_4xf32(const float *address,
                                              __epi_4xi32 index,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_indexed_2xf64(const double *address,
                                              __epi_2xi64 index,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_indexed_32xi8(const signed char *address,
                                              __epi_32xi8 index,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_indexed_16xi16(const signed short int *address,
                                                __epi_16xi16 index,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_indexed_8xi32(const signed int *address,
                                              __epi_8xi32 index,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_indexed_4xi64(const signed long int *address,
                                              __epi_4xi64 index,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_indexed_8xf32(const float *address,
                                              __epi_8xi32 index,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_indexed_4xf64(const double *address,
                                              __epi_4xi64 index,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_indexed_64xi8(const signed char *address,
                                              __epi_64xi8 index,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_indexed_32xi16(const signed short int *address,
                                                __epi_32xi16 index,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_indexed_16xi32(const signed int *address,
                                                __epi_16xi32 index,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_indexed_8xi64(const signed long int *address,
                                              __epi_8xi64 index,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_indexed_16xf32(const float *address,
                                                __epi_16xi32 index,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_indexed_8xf64(const double *address,
                                              __epi_8xi64 index,
                                              unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address + index[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vload_indexed_8xi8_mask(__epi_8xi8 merge,
                                                 const signed char *address,
                                                 __epi_8xi8 index,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_indexed_4xi16_mask(
    __epi_4xi16 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_indexed_2xi32_mask(__epi_2xi32 merge,
                                                   const signed int *address,
                                                   __epi_2xi32 index,
                                                   __epi_2xi1 mask,
                                                   unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_indexed_1xi64_mask(
    __epi_1xi64 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_indexed_2xf32_mask(__epi_2xf32 merge,
                                                   const float *address,
                                                   __epi_2xi32 index,
                                                   __epi_2xi1 mask,
                                                   unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_indexed_1xf64_mask(__epi_1xf64 merge,
                                                   const double *address,
                                                   __epi_1xi64 index,
                                                   __epi_1xi1 mask,
                                                   unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_indexed_16xi8_mask(__epi_16xi8 merge,
                                                   const signed char *address,
                                                   __epi_16xi8 index,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_indexed_8xi16_mask(
    __epi_8xi16 merge, const signed short int *address, __epi_8xi16 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_indexed_4xi32_mask(__epi_4xi32 merge,
                                                   const signed int *address,
                                                   __epi_4xi32 index,
                                                   __epi_4xi1 mask,
                                                   unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_indexed_2xi64_mask(
    __epi_2xi64 merge, const signed long int *address, __epi_2xi64 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_indexed_4xf32_mask(__epi_4xf32 merge,
                                                   const float *address,
                                                   __epi_4xi32 index,
                                                   __epi_4xi1 mask,
                                                   unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_indexed_2xf64_mask(__epi_2xf64 merge,
                                                   const double *address,
                                                   __epi_2xi64 index,
                                                   __epi_2xi1 mask,
                                                   unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_indexed_32xi8_mask(__epi_32xi8 merge,
                                                   const signed char *address,
                                                   __epi_32xi8 index,
                                                   __epi_32xi1 mask,
                                                   unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_indexed_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address, __epi_16xi16 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_indexed_8xi32_mask(__epi_8xi32 merge,
                                                   const signed int *address,
                                                   __epi_8xi32 index,
                                                   __epi_8xi1 mask,
                                                   unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_indexed_4xi64_mask(
    __epi_4xi64 merge, const signed long int *address, __epi_4xi64 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_indexed_8xf32_mask(__epi_8xf32 merge,
                                                   const float *address,
                                                   __epi_8xi32 index,
                                                   __epi_8xi1 mask,
                                                   unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_indexed_4xf64_mask(__epi_4xf64 merge,
                                                   const double *address,
                                                   __epi_4xi64 index,
                                                   __epi_4xi1 mask,
                                                   unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_indexed_64xi8_mask(__epi_64xi8 merge,
                                                   const signed char *address,
                                                   __epi_64xi8 index,
                                                   __epi_64xi1 mask,
                                                   unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_indexed_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address, __epi_32xi16 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_indexed_16xi32_mask(__epi_16xi32 merge,
                                                     const signed int *address,
                                                     __epi_16xi32 index,
                                                     __epi_16xi1 mask,
                                                     unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_indexed_8xi64_mask(
    __epi_8xi64 merge, const signed long int *address, __epi_8xi64 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_indexed_16xf32_mask(__epi_16xf32 merge,
                                                     const float *address,
                                                     __epi_16xi32 index,
                                                     __epi_16xi1 mask,
                                                     unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_indexed_8xf64_mask(__epi_8xf64 merge,
                                                   const double *address,
                                                   __epi_8xi64 index,
                                                   __epi_8xi1 mask,
                                                   unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address + index[element])
   else
     result[element] = merge[element]

2.6.9. Load unsigned integer elements from memory into a vector using an index vector

Description

Use these builtins to load unsigned integer elements into a vector using an index vector. This is commonly known as a gather operation.

The elements of the index vector are added, as byte offsets, to the address parameter to yield the effective address from which each element of the vector is loaded.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_indexed builtins.

Instruction
vlx.v
Prototypes
__epi_8xi8 __builtin_epi_vload_indexed_unsigned_8xi8(
    const unsigned char *address, __epi_8xi8 index, unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_indexed_unsigned_4xi16(const unsigned short int *address,
                                           __epi_4xi16 index,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_indexed_unsigned_2xi32(
    const unsigned int *address, __epi_2xi32 index, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_indexed_unsigned_1xi64(
    const unsigned long int *address, __epi_1xi64 index, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_indexed_unsigned_16xi8(
    const unsigned char *address, __epi_16xi8 index, unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_indexed_unsigned_8xi16(const unsigned short int *address,
                                           __epi_8xi16 index,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_indexed_unsigned_4xi32(
    const unsigned int *address, __epi_4xi32 index, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_indexed_unsigned_2xi64(
    const unsigned long int *address, __epi_2xi64 index, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_indexed_unsigned_32xi8(
    const unsigned char *address, __epi_32xi8 index, unsigned long int gvl);
__epi_16xi16
__builtin_epi_vload_indexed_unsigned_16xi16(const unsigned short int *address,
                                            __epi_16xi16 index,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_indexed_unsigned_8xi32(
    const unsigned int *address, __epi_8xi32 index, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_indexed_unsigned_4xi64(
    const unsigned long int *address, __epi_4xi64 index, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_indexed_unsigned_64xi8(
    const unsigned char *address, __epi_64xi8 index, unsigned long int gvl);
__epi_32xi16
__builtin_epi_vload_indexed_unsigned_32xi16(const unsigned short int *address,
                                            __epi_32xi16 index,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_indexed_unsigned_16xi32(
    const unsigned int *address, __epi_16xi32 index, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_indexed_unsigned_8xi64(
    const unsigned long int *address, __epi_8xi64 index, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address + index[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vload_indexed_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_indexed_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_indexed_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_indexed_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_indexed_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, __epi_16xi8 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_indexed_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address, __epi_8xi16 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_indexed_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, __epi_4xi32 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_indexed_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, __epi_2xi64 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_indexed_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, __epi_32xi8 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_indexed_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address, __epi_16xi16 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_indexed_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, __epi_8xi32 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_indexed_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, __epi_4xi64 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_indexed_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, __epi_64xi8 index,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_indexed_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address, __epi_32xi16 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_indexed_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, __epi_16xi32 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_indexed_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, __epi_8xi64 index,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_unsigned_element(address + index[element])
   else
     result[element] = merge[element]

2.6.10. Load elements of a mask vector

Description

Use these builtins to load the elements of a mask vector.

All the elements of the mask vector are loaded, in groups of 8 bits; these builtins do not take a gvl parameter.

Instruction
vle.v
Prototypes
__epi_8xi1 __builtin_epi_vload_8xi1(const unsigned char *address);
__epi_4xi1 __builtin_epi_vload_4xi1(const unsigned short int *address);
__epi_2xi1 __builtin_epi_vload_2xi1(const unsigned int *address);
__epi_1xi1 __builtin_epi_vload_1xi1(const unsigned long int *address);
Operation
for element = 0 to VLMAX - 1
  result[element] = load_uint8(address)
  address = address + 1

2.6.11. Load contiguous elements from memory into a vector (non-temporal)

Description

Use these builtins to load elements that are contiguous in memory into a vector, without allocating the loaded data in the cache (a non-temporal load).

Instruction
vle.v
Prototypes
__epi_8xi8 __builtin_epi_vload_nt_8xi8(const signed char *address,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_4xi16(const signed short int *address,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_2xi32(const signed int *address,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_1xi64(const signed long int *address,
                                         unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_nt_2xf32(const float *address,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_nt_1xf64(const double *address,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_16xi8(const signed char *address,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_8xi16(const signed short int *address,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_4xi32(const signed int *address,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_2xi64(const signed long int *address,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_nt_4xf32(const float *address,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_nt_2xf64(const double *address,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_32xi8(const signed char *address,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_16xi16(const signed short int *address,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_8xi32(const signed int *address,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_4xi64(const signed long int *address,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_nt_8xf32(const float *address,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_nt_4xf64(const double *address,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_64xi8(const signed char *address,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_32xi16(const signed short int *address,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_16xi32(const signed int *address,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_8xi64(const signed long int *address,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_nt_16xf32(const float *address,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_nt_8xf64(const double *address,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8 __builtin_epi_vload_nt_8xi8_mask(__epi_8xi8 merge,
                                            const signed char *address,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_4xi16_mask(__epi_4xi16 merge,
                                              const signed short int *address,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_2xi32_mask(__epi_2xi32 merge,
                                              const signed int *address,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_1xi64_mask(__epi_1xi64 merge,
                                              const signed long int *address,
                                              __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_nt_2xf32_mask(__epi_2xf32 merge,
                                              const float *address,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_nt_1xf64_mask(__epi_1xf64 merge,
                                              const double *address,
                                              __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_16xi8_mask(__epi_16xi8 merge,
                                              const signed char *address,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_8xi16_mask(__epi_8xi16 merge,
                                              const signed short int *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_4xi32_mask(__epi_4xi32 merge,
                                              const signed int *address,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_2xi64_mask(__epi_2xi64 merge,
                                              const signed long int *address,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_nt_4xf32_mask(__epi_4xf32 merge,
                                              const float *address,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_nt_2xf64_mask(__epi_2xf64 merge,
                                              const double *address,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_32xi8_mask(__epi_32xi8 merge,
                                              const signed char *address,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_16xi16_mask(__epi_16xi16 merge,
                                                const signed short int *address,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_8xi32_mask(__epi_8xi32 merge,
                                              const signed int *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_4xi64_mask(__epi_4xi64 merge,
                                              const signed long int *address,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_nt_8xf32_mask(__epi_8xf32 merge,
                                              const float *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_nt_4xf64_mask(__epi_4xf64 merge,
                                              const double *address,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_64xi8_mask(__epi_64xi8 merge,
                                              const signed char *address,
                                              __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_32xi16_mask(__epi_32xi16 merge,
                                                const signed short int *address,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_16xi32_mask(__epi_16xi32 merge,
                                                const signed int *address,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_8xi64_mask(__epi_8xi64 merge,
                                              const signed long int *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_nt_16xf32_mask(__epi_16xf32 merge,
                                                const float *address,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_nt_8xf64_mask(__epi_8xf64 merge,
                                              const double *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address)
   else
     result[element] = merge[element]
   address = address + SEW / 8

2.6.12. Load elements from memory into a vector using an index vector (non-temporal)

Description

Use these builtins to load elements into a vector using an index vector, without allocating the loaded data in the cache (a non-temporal load). This is commonly known as a gather operation.

The elements of the index vector are added, as byte offsets, to the address parameter to yield the effective address from which each element of the vector is loaded.

Instruction
vlx.v
Prototypes
__epi_8xi8 __builtin_epi_vload_nt_indexed_8xi8(const signed char *address,
                                               __epi_8xi8 index,
                                               unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_nt_indexed_4xi16(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_indexed_2xi32(const signed int *address,
                                                 __epi_2xi32 index,
                                                 unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_indexed_1xi64(const signed long int *address,
                                                 __epi_1xi64 index,
                                                 unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_nt_indexed_2xf32(const float *address,
                                                 __epi_2xi32 index,
                                                 unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_nt_indexed_1xf64(const double *address,
                                                 __epi_1xi64 index,
                                                 unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_indexed_16xi8(const signed char *address,
                                                 __epi_16xi8 index,
                                                 unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_nt_indexed_8xi16(const signed short int *address,
                                     __epi_8xi16 index, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_indexed_4xi32(const signed int *address,
                                                 __epi_4xi32 index,
                                                 unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_indexed_2xi64(const signed long int *address,
                                                 __epi_2xi64 index,
                                                 unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_nt_indexed_4xf32(const float *address,
                                                 __epi_4xi32 index,
                                                 unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_nt_indexed_2xf64(const double *address,
                                                 __epi_2xi64 index,
                                                 unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_indexed_32xi8(const signed char *address,
                                                 __epi_32xi8 index,
                                                 unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_indexed_16xi16(
    const signed short int *address, __epi_16xi16 index, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_indexed_8xi32(const signed int *address,
                                                 __epi_8xi32 index,
                                                 unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_indexed_4xi64(const signed long int *address,
                                                 __epi_4xi64 index,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_nt_indexed_8xf32(const float *address,
                                                 __epi_8xi32 index,
                                                 unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_nt_indexed_4xf64(const double *address,
                                                 __epi_4xi64 index,
                                                 unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_indexed_64xi8(const signed char *address,
                                                 __epi_64xi8 index,
                                                 unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_indexed_32xi16(
    const signed short int *address, __epi_32xi16 index, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_indexed_16xi32(const signed int *address,
                                                   __epi_16xi32 index,
                                                   unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_indexed_8xi64(const signed long int *address,
                                                 __epi_8xi64 index,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_nt_indexed_16xf32(const float *address,
                                                   __epi_16xi32 index,
                                                   unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_nt_indexed_8xf64(const double *address,
                                                 __epi_8xi64 index,
                                                 unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address + index[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vload_nt_indexed_8xi8_mask(__epi_8xi8 merge,
                                                    const signed char *address,
                                                    __epi_8xi8 index,
                                                    __epi_8xi1 mask,
                                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_indexed_4xi16_mask(
    __epi_4xi16 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_indexed_2xi32_mask(__epi_2xi32 merge,
                                                      const signed int *address,
                                                      __epi_2xi32 index,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_indexed_1xi64_mask(
    __epi_1xi64 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_nt_indexed_2xf32_mask(__epi_2xf32 merge,
                                                      const float *address,
                                                      __epi_2xi32 index,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_nt_indexed_1xf64_mask(__epi_1xf64 merge,
                                                      const double *address,
                                                      __epi_1xi64 index,
                                                      __epi_1xi1 mask,
                                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_indexed_16xi8_mask(
    __epi_16xi8 merge, const signed char *address, __epi_16xi8 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_indexed_8xi16_mask(
    __epi_8xi16 merge, const signed short int *address, __epi_8xi16 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_indexed_4xi32_mask(__epi_4xi32 merge,
                                                      const signed int *address,
                                                      __epi_4xi32 index,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_indexed_2xi64_mask(
    __epi_2xi64 merge, const signed long int *address, __epi_2xi64 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_nt_indexed_4xf32_mask(__epi_4xf32 merge,
                                                      const float *address,
                                                      __epi_4xi32 index,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_nt_indexed_2xf64_mask(__epi_2xf64 merge,
                                                      const double *address,
                                                      __epi_2xi64 index,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_indexed_32xi8_mask(
    __epi_32xi8 merge, const signed char *address, __epi_32xi8 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_indexed_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address, __epi_16xi16 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_indexed_8xi32_mask(__epi_8xi32 merge,
                                                      const signed int *address,
                                                      __epi_8xi32 index,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_indexed_4xi64_mask(
    __epi_4xi64 merge, const signed long int *address, __epi_4xi64 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_nt_indexed_8xf32_mask(__epi_8xf32 merge,
                                                      const float *address,
                                                      __epi_8xi32 index,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_nt_indexed_4xf64_mask(__epi_4xf64 merge,
                                                      const double *address,
                                                      __epi_4xi64 index,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_indexed_64xi8_mask(
    __epi_64xi8 merge, const signed char *address, __epi_64xi8 index,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_indexed_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address, __epi_32xi16 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_indexed_16xi32_mask(
    __epi_16xi32 merge, const signed int *address, __epi_16xi32 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_indexed_8xi64_mask(
    __epi_8xi64 merge, const signed long int *address, __epi_8xi64 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_nt_indexed_16xf32_mask(__epi_16xf32 merge,
                                                        const float *address,
                                                        __epi_16xi32 index,
                                                        __epi_16xi1 mask,
                                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_nt_indexed_8xf64_mask(__epi_8xf64 merge,
                                                      const double *address,
                                                      __epi_8xi64 index,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address + index[element])
   else
     result[element] = merge[element]

2.6.13. Load unsigned integer elements from memory into a vector using an index vector (non-temporal)

Description

Use these builtins to load unsigned integer elements into a vector using an index vector, without allocating the loaded data in the cache (a non-temporal load). This is commonly known as a gather operation.

The elements of the index vector are added, as byte offsets, to the address parameter to yield the effective address from which each element of the vector is loaded.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_nt_indexed builtins.

Instruction
vlx.v
Prototypes
__epi_8xi8 __builtin_epi_vload_nt_indexed_unsigned_8xi8(
    const unsigned char *address, __epi_8xi8 index, unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_nt_indexed_unsigned_4xi16(const unsigned short int *address,
                                              __epi_4xi16 index,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_indexed_unsigned_2xi32(
    const unsigned int *address, __epi_2xi32 index, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_indexed_unsigned_1xi64(
    const unsigned long int *address, __epi_1xi64 index, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_indexed_unsigned_16xi8(
    const unsigned char *address, __epi_16xi8 index, unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_nt_indexed_unsigned_8xi16(const unsigned short int *address,
                                              __epi_8xi16 index,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_indexed_unsigned_4xi32(
    const unsigned int *address, __epi_4xi32 index, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_indexed_unsigned_2xi64(
    const unsigned long int *address, __epi_2xi64 index, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_indexed_unsigned_32xi8(
    const unsigned char *address, __epi_32xi8 index, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_indexed_unsigned_16xi16(
    const unsigned short int *address, __epi_16xi16 index,
    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_indexed_unsigned_8xi32(
    const unsigned int *address, __epi_8xi32 index, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_indexed_unsigned_4xi64(
    const unsigned long int *address, __epi_4xi64 index, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_indexed_unsigned_64xi8(
    const unsigned char *address, __epi_64xi8 index, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_indexed_unsigned_32xi16(
    const unsigned short int *address, __epi_32xi16 index,
    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_indexed_unsigned_16xi32(
    const unsigned int *address, __epi_16xi32 index, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_indexed_unsigned_8xi64(
    const unsigned long int *address, __epi_8xi64 index, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address + index[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vload_nt_indexed_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_indexed_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_indexed_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_indexed_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_indexed_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, __epi_16xi8 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_indexed_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address, __epi_8xi16 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_indexed_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, __epi_4xi32 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_indexed_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, __epi_2xi64 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_indexed_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, __epi_32xi8 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_indexed_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address, __epi_16xi16 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_indexed_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, __epi_8xi32 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_indexed_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, __epi_4xi64 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_indexed_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, __epi_64xi8 index,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_indexed_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address, __epi_32xi16 index,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_indexed_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, __epi_16xi32 index,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_indexed_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, __epi_8xi64 index,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_unsigned_element(address + index[element])
   else
     result[element] = merge[element]

2.6.14. Load strided elements from memory into a vector (non-temporal)

Description

Use these builtins to load elements that are separated in memory by a constant byte stride into a vector, without bringing the loaded data into the cache.

The stride value is repeatedly added to the address parameter to yield the effective address from which each vector element is loaded.

Instruction
vls.v
Prototypes
__epi_8xi8 __builtin_epi_vload_nt_strided_8xi8(const signed char *address,
                                               signed long int stride,
                                               unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_nt_strided_4xi16(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_strided_2xi32(const signed int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_strided_1xi64(const signed long int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_nt_strided_2xf32(const float *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_nt_strided_1xf64(const double *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_strided_16xi8(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_nt_strided_8xi16(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_strided_4xi32(const signed int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_strided_2xi64(const signed long int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_nt_strided_4xf32(const float *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_nt_strided_2xf64(const double *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_strided_32xi8(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_16xi16
__builtin_epi_vload_nt_strided_16xi16(const signed short int *address,
                                      signed long int stride,
                                      unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_strided_8xi32(const signed int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_strided_4xi64(const signed long int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_nt_strided_8xf32(const float *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_nt_strided_4xf64(const double *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_strided_64xi8(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_32xi16
__builtin_epi_vload_nt_strided_32xi16(const signed short int *address,
                                      signed long int stride,
                                      unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_strided_16xi32(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_strided_8xi64(const signed long int *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_nt_strided_16xf32(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_nt_strided_8xf64(const double *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address)
  address = address + stride
Masked prototypes
__epi_8xi8 __builtin_epi_vload_nt_strided_8xi8_mask(__epi_8xi8 merge,
                                                    const signed char *address,
                                                    signed long int stride,
                                                    __epi_8xi1 mask,
                                                    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_strided_4xi16_mask(
    __epi_4xi16 merge, const signed short int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_strided_2xi32_mask(__epi_2xi32 merge,
                                                      const signed int *address,
                                                      signed long int stride,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_strided_1xi64_mask(
    __epi_1xi64 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_nt_strided_2xf32_mask(__epi_2xf32 merge,
                                                      const float *address,
                                                      signed long int stride,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_nt_strided_1xf64_mask(__epi_1xf64 merge,
                                                      const double *address,
                                                      signed long int stride,
                                                      __epi_1xi1 mask,
                                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_strided_16xi8_mask(
    __epi_16xi8 merge, const signed char *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_strided_8xi16_mask(
    __epi_8xi16 merge, const signed short int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_strided_4xi32_mask(__epi_4xi32 merge,
                                                      const signed int *address,
                                                      signed long int stride,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_strided_2xi64_mask(
    __epi_2xi64 merge, const signed long int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_nt_strided_4xf32_mask(__epi_4xf32 merge,
                                                      const float *address,
                                                      signed long int stride,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_nt_strided_2xf64_mask(__epi_2xf64 merge,
                                                      const double *address,
                                                      signed long int stride,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_strided_32xi8_mask(
    __epi_32xi8 merge, const signed char *address, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_strided_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_strided_8xi32_mask(__epi_8xi32 merge,
                                                      const signed int *address,
                                                      signed long int stride,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_strided_4xi64_mask(
    __epi_4xi64 merge, const signed long int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_nt_strided_8xf32_mask(__epi_8xf32 merge,
                                                      const float *address,
                                                      signed long int stride,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_nt_strided_4xf64_mask(__epi_4xf64 merge,
                                                      const double *address,
                                                      signed long int stride,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_strided_64xi8_mask(
    __epi_64xi8 merge, const signed char *address, signed long int stride,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_strided_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_strided_16xi32_mask(
    __epi_16xi32 merge, const signed int *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_strided_8xi64_mask(
    __epi_8xi64 merge, const signed long int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_nt_strided_16xf32_mask(__epi_16xf32 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_16xi1 mask,
                                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_nt_strided_8xf64_mask(__epi_8xf64 merge,
                                                      const double *address,
                                                      signed long int stride,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_element(address)
   else
     result[element] = merge[element]
   address = address + stride

2.6.15. Load unsigned strided elements from memory into a vector (non-temporal)

Description

Use these builtins to load elements that are separated in memory by a constant byte stride into a vector, without bringing the loaded data into the cache.

The stride value is repeatedly added to the address parameter to yield the effective address from which each vector element is loaded.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_nt_strided builtins.

Instruction
vls.v
Prototypes
__epi_8xi8
__builtin_epi_vload_nt_strided_unsigned_8xi8(const unsigned char *address,
                                             signed long int stride,
                                             unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_nt_strided_unsigned_4xi16(const unsigned short int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_strided_unsigned_2xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_1xi64
__builtin_epi_vload_nt_strided_unsigned_1xi64(const unsigned long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_16xi8
__builtin_epi_vload_nt_strided_unsigned_16xi8(const unsigned char *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_nt_strided_unsigned_8xi16(const unsigned short int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_strided_unsigned_4xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_2xi64
__builtin_epi_vload_nt_strided_unsigned_2xi64(const unsigned long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_32xi8
__builtin_epi_vload_nt_strided_unsigned_32xi8(const unsigned char *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_strided_unsigned_16xi16(
    const unsigned short int *address, signed long int stride,
    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_strided_unsigned_8xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_4xi64
__builtin_epi_vload_nt_strided_unsigned_4xi64(const unsigned long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_64xi8
__builtin_epi_vload_nt_strided_unsigned_64xi8(const unsigned char *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_strided_unsigned_32xi16(
    const unsigned short int *address, signed long int stride,
    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_strided_unsigned_16xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_8xi64
__builtin_epi_vload_nt_strided_unsigned_8xi64(const unsigned long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address)
  address = address + stride
Masked prototypes
__epi_8xi8 __builtin_epi_vload_nt_strided_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_strided_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_strided_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_strided_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_strided_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_strided_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_strided_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_strided_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_strided_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_strided_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_strided_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_strided_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_strided_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, signed long int stride,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_strided_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_strided_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_strided_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_unsigned_element(address)
   else
     result[element] = merge[element]
   address = address + stride

2.6.16. Load unsigned contiguous elements from memory into a vector (non-temporal)

Description

Use these builtins to load contiguous elements from memory into a vector, without bringing the loaded data into the cache.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_nt builtins.

Instruction
vle.v
Prototypes
__epi_8xi8 __builtin_epi_vload_nt_unsigned_8xi8(const unsigned char *address,
                                                unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_nt_unsigned_4xi16(const unsigned short int *address,
                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_unsigned_2xi32(const unsigned int *address,
                                                  unsigned long int gvl);
__epi_1xi64
__builtin_epi_vload_nt_unsigned_1xi64(const unsigned long int *address,
                                      unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_unsigned_16xi8(const unsigned char *address,
                                                  unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_nt_unsigned_8xi16(const unsigned short int *address,
                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_unsigned_4xi32(const unsigned int *address,
                                                  unsigned long int gvl);
__epi_2xi64
__builtin_epi_vload_nt_unsigned_2xi64(const unsigned long int *address,
                                      unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_unsigned_32xi8(const unsigned char *address,
                                                  unsigned long int gvl);
__epi_16xi16
__builtin_epi_vload_nt_unsigned_16xi16(const unsigned short int *address,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_unsigned_8xi32(const unsigned int *address,
                                                  unsigned long int gvl);
__epi_4xi64
__builtin_epi_vload_nt_unsigned_4xi64(const unsigned long int *address,
                                      unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_unsigned_64xi8(const unsigned char *address,
                                                  unsigned long int gvl);
__epi_32xi16
__builtin_epi_vload_nt_unsigned_32xi16(const unsigned short int *address,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_unsigned_16xi32(const unsigned int *address,
                                                    unsigned long int gvl);
__epi_8xi64
__builtin_epi_vload_nt_unsigned_8xi64(const unsigned long int *address,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8 __builtin_epi_vload_nt_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, __epi_8xi1 mask,
    unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_nt_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address, __epi_4xi1 mask,
    unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_nt_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, __epi_2xi1 mask,
    unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_nt_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, __epi_1xi1 mask,
    unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_nt_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_nt_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address, __epi_8xi1 mask,
    unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_nt_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, __epi_4xi1 mask,
    unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_nt_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, __epi_2xi1 mask,
    unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_nt_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, __epi_32xi1 mask,
    unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_nt_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_nt_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, __epi_8xi1 mask,
    unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_nt_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, __epi_4xi1 mask,
    unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_nt_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, __epi_64xi1 mask,
    unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_nt_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address, __epi_32xi1 mask,
    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_nt_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_nt_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, __epi_8xi1 mask,
    unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = load_unsigned_element(address)
   else
     result[element] = merge[element]
   address = address + SEW / 8

2.6.17. Load strided elements from memory into a vector

Description

Use these builtins to load elements that are separated in memory by a constant byte stride into a vector.

The stride value is repeatedly added to the address parameter to yield the effective address from which each vector element is loaded.

Instruction
vls.v
Prototypes
__epi_8xi8 __builtin_epi_vload_strided_8xi8(const signed char *address,
                                            signed long int stride,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_strided_4xi16(const signed short int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_strided_2xi32(const signed int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_strided_1xi64(const signed long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_strided_2xf32(const float *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_strided_1xf64(const double *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_strided_16xi8(const signed char *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_strided_8xi16(const signed short int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_strided_4xi32(const signed int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_strided_2xi64(const signed long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_strided_4xf32(const float *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_strided_2xf64(const double *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_strided_32xi8(const signed char *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_strided_16xi16(const signed short int *address,
                                                signed long int stride,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_strided_8xi32(const signed int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_strided_4xi64(const signed long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_strided_8xf32(const float *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_strided_4xf64(const double *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_strided_64xi8(const signed char *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_strided_32xi16(const signed short int *address,
                                                signed long int stride,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_strided_16xi32(const signed int *address,
                                                signed long int stride,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_strided_8xi64(const signed long int *address,
                                              signed long int stride,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_strided_16xf32(const float *address,
                                                signed long int stride,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_strided_8xf64(const double *address,
                                              signed long int stride,
                                              unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_element(address)
  address = address + stride
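In scalar C, the operation above amounts to byte-granular pointer arithmetic. The following sketch models it for doubles; the helper name is illustrative, not part of the EPI API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scalar model of the operation above: element i is read from
   address + i * stride, where stride is measured in bytes (so it may
   be negative, and need not be a multiple of the element size). */
static void strided_load_f64(double *result, const double *address,
                             ptrdiff_t stride, size_t gvl) {
  const char *p = (const char *)address;         /* byte-granular cursor */
  for (size_t element = 0; element < gvl; element++) {
    memcpy(&result[element], p, sizeof(double)); /* load_element(address) */
    p += stride;                                 /* address = address + stride */
  }
}
```

For example, given a row-major `double m[3][4]`, calling `strided_load_f64(col, &m[0][2], 4 * sizeof(double), 3)` gathers column 2 of the matrix.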
Masked prototypes
__epi_8xi8 __builtin_epi_vload_strided_8xi8_mask(__epi_8xi8 merge,
                                                 const signed char *address,
                                                 signed long int stride,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_strided_4xi16_mask(
    __epi_4xi16 merge, const signed short int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_strided_2xi32_mask(__epi_2xi32 merge,
                                                   const signed int *address,
                                                   signed long int stride,
                                                   __epi_2xi1 mask,
                                                   unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_strided_1xi64_mask(
    __epi_1xi64 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32 __builtin_epi_vload_strided_2xf32_mask(__epi_2xf32 merge,
                                                   const float *address,
                                                   signed long int stride,
                                                   __epi_2xi1 mask,
                                                   unsigned long int gvl);
__epi_1xf64 __builtin_epi_vload_strided_1xf64_mask(__epi_1xf64 merge,
                                                   const double *address,
                                                   signed long int stride,
                                                   __epi_1xi1 mask,
                                                   unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_strided_16xi8_mask(__epi_16xi8 merge,
                                                   const signed char *address,
                                                   signed long int stride,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_strided_8xi16_mask(
    __epi_8xi16 merge, const signed short int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_strided_4xi32_mask(__epi_4xi32 merge,
                                                   const signed int *address,
                                                   signed long int stride,
                                                   __epi_4xi1 mask,
                                                   unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_strided_2xi64_mask(
    __epi_2xi64 merge, const signed long int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vload_strided_4xf32_mask(__epi_4xf32 merge,
                                                   const float *address,
                                                   signed long int stride,
                                                   __epi_4xi1 mask,
                                                   unsigned long int gvl);
__epi_2xf64 __builtin_epi_vload_strided_2xf64_mask(__epi_2xf64 merge,
                                                   const double *address,
                                                   signed long int stride,
                                                   __epi_2xi1 mask,
                                                   unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_strided_32xi8_mask(__epi_32xi8 merge,
                                                   const signed char *address,
                                                   signed long int stride,
                                                   __epi_32xi1 mask,
                                                   unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_strided_16xi16_mask(
    __epi_16xi16 merge, const signed short int *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_strided_8xi32_mask(__epi_8xi32 merge,
                                                   const signed int *address,
                                                   signed long int stride,
                                                   __epi_8xi1 mask,
                                                   unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_strided_4xi64_mask(
    __epi_4xi64 merge, const signed long int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vload_strided_8xf32_mask(__epi_8xf32 merge,
                                                   const float *address,
                                                   signed long int stride,
                                                   __epi_8xi1 mask,
                                                   unsigned long int gvl);
__epi_4xf64 __builtin_epi_vload_strided_4xf64_mask(__epi_4xf64 merge,
                                                   const double *address,
                                                   signed long int stride,
                                                   __epi_4xi1 mask,
                                                   unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_strided_64xi8_mask(__epi_64xi8 merge,
                                                   const signed char *address,
                                                   signed long int stride,
                                                   __epi_64xi1 mask,
                                                   unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_strided_32xi16_mask(
    __epi_32xi16 merge, const signed short int *address, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_strided_16xi32_mask(__epi_16xi32 merge,
                                                     const signed int *address,
                                                     signed long int stride,
                                                     __epi_16xi1 mask,
                                                     unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_strided_8xi64_mask(
    __epi_8xi64 merge, const signed long int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vload_strided_16xf32_mask(__epi_16xf32 merge,
                                                     const float *address,
                                                     signed long int stride,
                                                     __epi_16xi1 mask,
                                                     unsigned long int gvl);
__epi_8xf64 __builtin_epi_vload_strided_8xf64_mask(__epi_8xf64 merge,
                                                   const double *address,
                                                   signed long int stride,
                                                   __epi_8xi1 mask,
                                                   unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = load_element(address)
  else
    result[element] = merge[element]
  address = address + stride

2.6.18. Load unsigned strided elements from memory into a vector

Description

Use these builtins to load into a vector elements that are separated in memory by a constant stride, given in bytes.

The stride value is repeatedly added to the address parameter to yield the effective address from which each element of the vector is loaded.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload_strided builtins.
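The distinction presumably only matters when a narrow memory element is widened into a larger vector element, following the usual rule: the plain load sign-extends the value while the unsigned load zero-extends it. A minimal scalar sketch under that assumption (the helper names are hypothetical, taking an 8-bit memory element widened to 64 bits):

```c
#include <stdint.h>

/* Hypothetical scalar sketch of the two element-load flavours:
   load_element sign-extends the loaded byte into the wider element,
   while load_unsigned_element zero-extends it.  When no widening
   occurs, the two produce identical bit patterns. */
static int64_t load_element_i8(const void *address) {
  return (int64_t)*(const int8_t *)address;  /* sign-extend */
}

static int64_t load_unsigned_element_i8(const void *address) {
  return (int64_t)*(const uint8_t *)address; /* zero-extend */
}
```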

Instruction
vls.v
Prototypes
__epi_8xi8
__builtin_epi_vload_strided_unsigned_8xi8(const unsigned char *address,
                                          signed long int stride,
                                          unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_strided_unsigned_4xi16(const unsigned short int *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_strided_unsigned_2xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_1xi64
__builtin_epi_vload_strided_unsigned_1xi64(const unsigned long int *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_16xi8
__builtin_epi_vload_strided_unsigned_16xi8(const unsigned char *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_strided_unsigned_8xi16(const unsigned short int *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_strided_unsigned_4xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_2xi64
__builtin_epi_vload_strided_unsigned_2xi64(const unsigned long int *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_32xi8
__builtin_epi_vload_strided_unsigned_32xi8(const unsigned char *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_16xi16
__builtin_epi_vload_strided_unsigned_16xi16(const unsigned short int *address,
                                            signed long int stride,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_strided_unsigned_8xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_4xi64
__builtin_epi_vload_strided_unsigned_4xi64(const unsigned long int *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_64xi8
__builtin_epi_vload_strided_unsigned_64xi8(const unsigned char *address,
                                           signed long int stride,
                                           unsigned long int gvl);
__epi_32xi16
__builtin_epi_vload_strided_unsigned_32xi16(const unsigned short int *address,
                                            signed long int stride,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_strided_unsigned_16xi32(
    const unsigned int *address, signed long int stride, unsigned long int gvl);
__epi_8xi64
__builtin_epi_vload_strided_unsigned_8xi64(const unsigned long int *address,
                                           signed long int stride,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address)
  address = address + stride
Masked prototypes
__epi_8xi8 __builtin_epi_vload_strided_unsigned_8xi8_mask(
    __epi_8xi8 merge, const unsigned char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vload_strided_unsigned_4xi16_mask(
    __epi_4xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_strided_unsigned_2xi32_mask(
    __epi_2xi32 merge, const unsigned int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_strided_unsigned_1xi64_mask(
    __epi_1xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_strided_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vload_strided_unsigned_8xi16_mask(
    __epi_8xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_strided_unsigned_4xi32_mask(
    __epi_4xi32 merge, const unsigned int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_strided_unsigned_2xi64_mask(
    __epi_2xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_strided_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_strided_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_strided_unsigned_8xi32_mask(
    __epi_8xi32 merge, const unsigned int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_strided_unsigned_4xi64_mask(
    __epi_4xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_strided_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, signed long int stride,
    __epi_64xi1 mask, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_strided_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address,
    signed long int stride, __epi_32xi1 mask, unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_strided_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_strided_unsigned_8xi64_mask(
    __epi_8xi64 merge, const unsigned long int *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = load_unsigned_element(address)
  else
    result[element] = merge[element]
  address = address + stride
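A scalar model of the masked operation makes the addressing explicit: the address advances by the stride on every iteration, even when the mask bit is clear and the element is taken from merge instead of memory. Helper name and element type are illustrative:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scalar model of the masked operation above. */
static void masked_strided_load_i32(int32_t *result, const int32_t *merge,
                                    const int32_t *address, ptrdiff_t stride,
                                    const int *mask, size_t gvl) {
  const char *p = (const char *)address;
  for (size_t element = 0; element < gvl; element++) {
    if (mask[element])
      memcpy(&result[element], p, sizeof(int32_t)); /* load_element(address) */
    else
      result[element] = merge[element];             /* inactive element */
    p += stride;                                    /* advances unconditionally */
  }
}
```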

2.6.19. Load unsigned contiguous elements from memory into a vector

Description

Use these builtins to load elements that are contiguous in memory into a vector.

Depending on the types involved, there may be no semantic difference between these builtins and the corresponding __builtin_epi_vload builtins.

Instruction
vle.v
Prototypes
__epi_8xi8 __builtin_epi_vload_unsigned_8xi8(const unsigned char *address,
                                             unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_unsigned_4xi16(const unsigned short int *address,
                                   unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_unsigned_2xi32(const unsigned int *address,
                                               unsigned long int gvl);
__epi_1xi64 __builtin_epi_vload_unsigned_1xi64(const unsigned long int *address,
                                               unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_unsigned_16xi8(const unsigned char *address,
                                               unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_unsigned_8xi16(const unsigned short int *address,
                                   unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_unsigned_4xi32(const unsigned int *address,
                                               unsigned long int gvl);
__epi_2xi64 __builtin_epi_vload_unsigned_2xi64(const unsigned long int *address,
                                               unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_unsigned_32xi8(const unsigned char *address,
                                               unsigned long int gvl);
__epi_16xi16
__builtin_epi_vload_unsigned_16xi16(const unsigned short int *address,
                                    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_unsigned_8xi32(const unsigned int *address,
                                               unsigned long int gvl);
__epi_4xi64 __builtin_epi_vload_unsigned_4xi64(const unsigned long int *address,
                                               unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_unsigned_64xi8(const unsigned char *address,
                                               unsigned long int gvl);
__epi_32xi16
__builtin_epi_vload_unsigned_32xi16(const unsigned short int *address,
                                    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_unsigned_16xi32(const unsigned int *address,
                                                 unsigned long int gvl);
__epi_8xi64 __builtin_epi_vload_unsigned_8xi64(const unsigned long int *address,
                                               unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = load_unsigned_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8 __builtin_epi_vload_unsigned_8xi8_mask(__epi_8xi8 merge,
                                                  const unsigned char *address,
                                                  __epi_8xi1 mask,
                                                  unsigned long int gvl);
__epi_4xi16
__builtin_epi_vload_unsigned_4xi16_mask(__epi_4xi16 merge,
                                        const unsigned short int *address,
                                        __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vload_unsigned_2xi32_mask(__epi_2xi32 merge,
                                                    const unsigned int *address,
                                                    __epi_2xi1 mask,
                                                    unsigned long int gvl);
__epi_1xi64
__builtin_epi_vload_unsigned_1xi64_mask(__epi_1xi64 merge,
                                        const unsigned long int *address,
                                        __epi_1xi1 mask, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vload_unsigned_16xi8_mask(
    __epi_16xi8 merge, const unsigned char *address, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi16
__builtin_epi_vload_unsigned_8xi16_mask(__epi_8xi16 merge,
                                        const unsigned short int *address,
                                        __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vload_unsigned_4xi32_mask(__epi_4xi32 merge,
                                                    const unsigned int *address,
                                                    __epi_4xi1 mask,
                                                    unsigned long int gvl);
__epi_2xi64
__builtin_epi_vload_unsigned_2xi64_mask(__epi_2xi64 merge,
                                        const unsigned long int *address,
                                        __epi_2xi1 mask, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vload_unsigned_32xi8_mask(
    __epi_32xi8 merge, const unsigned char *address, __epi_32xi1 mask,
    unsigned long int gvl);
__epi_16xi16 __builtin_epi_vload_unsigned_16xi16_mask(
    __epi_16xi16 merge, const unsigned short int *address, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vload_unsigned_8xi32_mask(__epi_8xi32 merge,
                                                    const unsigned int *address,
                                                    __epi_8xi1 mask,
                                                    unsigned long int gvl);
__epi_4xi64
__builtin_epi_vload_unsigned_4xi64_mask(__epi_4xi64 merge,
                                        const unsigned long int *address,
                                        __epi_4xi1 mask, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vload_unsigned_64xi8_mask(
    __epi_64xi8 merge, const unsigned char *address, __epi_64xi1 mask,
    unsigned long int gvl);
__epi_32xi16 __builtin_epi_vload_unsigned_32xi16_mask(
    __epi_32xi16 merge, const unsigned short int *address, __epi_32xi1 mask,
    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vload_unsigned_16xi32_mask(
    __epi_16xi32 merge, const unsigned int *address, __epi_16xi1 mask,
    unsigned long int gvl);
__epi_8xi64
__builtin_epi_vload_unsigned_8xi64_mask(__epi_8xi64 merge,
                                        const unsigned long int *address,
                                        __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = load_unsigned_element(address)
  else
    result[element] = merge[element]
  address = address + SEW / 8

2.6.20. Store vector elements into contiguous locations in memory

Description

Use these builtins to store the elements of a vector into contiguous locations in memory.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_8xi8(signed char *address, __epi_8xi8 value,
                               unsigned long int gvl);
void __builtin_epi_vstore_4xi16(signed short int *address, __epi_4xi16 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_2xi32(signed int *address, __epi_2xi32 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_1xi64(signed long int *address, __epi_1xi64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_2xf32(float *address, __epi_2xf32 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_1xf64(double *address, __epi_1xf64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_16xi8(signed char *address, __epi_16xi8 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_8xi16(signed short int *address, __epi_8xi16 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_4xi32(signed int *address, __epi_4xi32 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_2xi64(signed long int *address, __epi_2xi64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_4xf32(float *address, __epi_4xf32 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_2xf64(double *address, __epi_2xf64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_32xi8(signed char *address, __epi_32xi8 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_16xi16(signed short int *address, __epi_16xi16 value,
                                 unsigned long int gvl);
void __builtin_epi_vstore_8xi32(signed int *address, __epi_8xi32 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_4xi64(signed long int *address, __epi_4xi64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_8xf32(float *address, __epi_8xf32 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_4xf64(double *address, __epi_4xf64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_64xi8(signed char *address, __epi_64xi8 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_32xi16(signed short int *address, __epi_32xi16 value,
                                 unsigned long int gvl);
void __builtin_epi_vstore_16xi32(signed int *address, __epi_16xi32 value,
                                 unsigned long int gvl);
void __builtin_epi_vstore_8xi64(signed long int *address, __epi_8xi64 value,
                                unsigned long int gvl);
void __builtin_epi_vstore_16xf32(float *address, __epi_16xf32 value,
                                 unsigned long int gvl);
void __builtin_epi_vstore_8xf64(double *address, __epi_8xf64 value,
                                unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vstore_8xi8_mask(signed char *address, __epi_8xi8 value,
                                    __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_4xi16_mask(signed short int *address,
                                     __epi_4xi16 value, __epi_4xi1 mask,
                                     unsigned long int gvl);
void __builtin_epi_vstore_2xi32_mask(signed int *address, __epi_2xi32 value,
                                     __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_1xi64_mask(signed long int *address,
                                     __epi_1xi64 value, __epi_1xi1 mask,
                                     unsigned long int gvl);
void __builtin_epi_vstore_2xf32_mask(float *address, __epi_2xf32 value,
                                     __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_1xf64_mask(double *address, __epi_1xf64 value,
                                     __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_16xi8_mask(signed char *address, __epi_16xi8 value,
                                     __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_8xi16_mask(signed short int *address,
                                     __epi_8xi16 value, __epi_8xi1 mask,
                                     unsigned long int gvl);
void __builtin_epi_vstore_4xi32_mask(signed int *address, __epi_4xi32 value,
                                     __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_2xi64_mask(signed long int *address,
                                     __epi_2xi64 value, __epi_2xi1 mask,
                                     unsigned long int gvl);
void __builtin_epi_vstore_4xf32_mask(float *address, __epi_4xf32 value,
                                     __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_2xf64_mask(double *address, __epi_2xf64 value,
                                     __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_32xi8_mask(signed char *address, __epi_32xi8 value,
                                     __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_16xi16_mask(signed short int *address,
                                      __epi_16xi16 value, __epi_16xi1 mask,
                                      unsigned long int gvl);
void __builtin_epi_vstore_8xi32_mask(signed int *address, __epi_8xi32 value,
                                     __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_4xi64_mask(signed long int *address,
                                     __epi_4xi64 value, __epi_4xi1 mask,
                                     unsigned long int gvl);
void __builtin_epi_vstore_8xf32_mask(float *address, __epi_8xf32 value,
                                     __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_4xf64_mask(double *address, __epi_4xf64 value,
                                     __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_64xi8_mask(signed char *address, __epi_64xi8 value,
                                     __epi_64xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_32xi16_mask(signed short int *address,
                                      __epi_32xi16 value, __epi_32xi1 mask,
                                      unsigned long int gvl);
void __builtin_epi_vstore_16xi32_mask(signed int *address, __epi_16xi32 value,
                                      __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_8xi64_mask(signed long int *address,
                                     __epi_8xi64 value, __epi_8xi1 mask,
                                     unsigned long int gvl);
void __builtin_epi_vstore_16xf32_mask(float *address, __epi_16xf32 value,
                                      __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_8xf64_mask(double *address, __epi_8xf64 value,
                                     __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + SEW / 8

2.6.21. Store vector elements into contiguous locations in memory (cache-flags)

Description

Use these builtins to store the elements of a vector into contiguous locations in memory, specifying the cache behaviour in the flags parameter.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_ext_8xi8(signed char *address, __epi_8xi8 value,
                                   unsigned long int flags,
                                   unsigned long int gvl);
void __builtin_epi_vstore_ext_4xi16(signed short int *address,
                                    __epi_4xi16 value, unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_2xi32(signed int *address, __epi_2xi32 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_1xi64(signed long int *address, __epi_1xi64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_2xf32(float *address, __epi_2xf32 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_1xf64(double *address, __epi_1xf64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_16xi8(signed char *address, __epi_16xi8 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_8xi16(signed short int *address,
                                    __epi_8xi16 value, unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_4xi32(signed int *address, __epi_4xi32 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_2xi64(signed long int *address, __epi_2xi64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_4xf32(float *address, __epi_4xf32 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_2xf64(double *address, __epi_2xf64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_32xi8(signed char *address, __epi_32xi8 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_16xi16(signed short int *address,
                                     __epi_16xi16 value,
                                     unsigned long int flags,
                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_8xi32(signed int *address, __epi_8xi32 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_4xi64(signed long int *address, __epi_4xi64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_8xf32(float *address, __epi_8xf32 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_4xf64(double *address, __epi_4xf64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_64xi8(signed char *address, __epi_64xi8 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_32xi16(signed short int *address,
                                     __epi_32xi16 value,
                                     unsigned long int flags,
                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_16xi32(signed int *address, __epi_16xi32 value,
                                     unsigned long int flags,
                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_8xi64(signed long int *address, __epi_8xi64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_16xf32(float *address, __epi_16xf32 value,
                                     unsigned long int flags,
                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_8xf64(double *address, __epi_8xf64 value,
                                    unsigned long int flags,
                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vstore_ext_8xi8_mask(signed char *address, __epi_8xi8 value,
                                        unsigned long int flags,
                                        __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_4xi16_mask(signed short int *address,
                                         __epi_4xi16 value,
                                         unsigned long int flags,
                                         __epi_4xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_2xi32_mask(signed int *address, __epi_2xi32 value,
                                         unsigned long int flags,
                                         __epi_2xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_1xi64_mask(signed long int *address,
                                         __epi_1xi64 value,
                                         unsigned long int flags,
                                         __epi_1xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_2xf32_mask(float *address, __epi_2xf32 value,
                                         unsigned long int flags,
                                         __epi_2xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_1xf64_mask(double *address, __epi_1xf64 value,
                                         unsigned long int flags,
                                         __epi_1xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_16xi8_mask(signed char *address,
                                         __epi_16xi8 value,
                                         unsigned long int flags,
                                         __epi_16xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_8xi16_mask(signed short int *address,
                                         __epi_8xi16 value,
                                         unsigned long int flags,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_4xi32_mask(signed int *address, __epi_4xi32 value,
                                         unsigned long int flags,
                                         __epi_4xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_2xi64_mask(signed long int *address,
                                         __epi_2xi64 value,
                                         unsigned long int flags,
                                         __epi_2xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_4xf32_mask(float *address, __epi_4xf32 value,
                                         unsigned long int flags,
                                         __epi_4xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_2xf64_mask(double *address, __epi_2xf64 value,
                                         unsigned long int flags,
                                         __epi_2xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_32xi8_mask(signed char *address,
                                         __epi_32xi8 value,
                                         unsigned long int flags,
                                         __epi_32xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_16xi16_mask(signed short int *address,
                                          __epi_16xi16 value,
                                          unsigned long int flags,
                                          __epi_16xi1 mask,
                                          unsigned long int gvl);
void __builtin_epi_vstore_ext_8xi32_mask(signed int *address, __epi_8xi32 value,
                                         unsigned long int flags,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_4xi64_mask(signed long int *address,
                                         __epi_4xi64 value,
                                         unsigned long int flags,
                                         __epi_4xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_8xf32_mask(float *address, __epi_8xf32 value,
                                         unsigned long int flags,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_4xf64_mask(double *address, __epi_4xf64 value,
                                         unsigned long int flags,
                                         __epi_4xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_64xi8_mask(signed char *address,
                                         __epi_64xi8 value,
                                         unsigned long int flags,
                                         __epi_64xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_32xi16_mask(signed short int *address,
                                          __epi_32xi16 value,
                                          unsigned long int flags,
                                          __epi_32xi1 mask,
                                          unsigned long int gvl);
void __builtin_epi_vstore_ext_16xi32_mask(signed int *address,
                                          __epi_16xi32 value,
                                          unsigned long int flags,
                                          __epi_16xi1 mask,
                                          unsigned long int gvl);
void __builtin_epi_vstore_ext_8xi64_mask(signed long int *address,
                                         __epi_8xi64 value,
                                         unsigned long int flags,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_ext_16xf32_mask(float *address, __epi_16xf32 value,
                                          unsigned long int flags,
                                          __epi_16xi1 mask,
                                          unsigned long int gvl);
void __builtin_epi_vstore_ext_8xf64_mask(double *address, __epi_8xf64 value,
                                         unsigned long int flags,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + SEW / 8

2.6.22. Store vector elements into memory using an index vector (cache-flags)

Description

Use these builtins to store the elements of a vector into memory using an index vector, specifying the cache behaviour in the flags parameter. This is commonly known as a scatter operation.

The elements of the index vector are added as a byte offset to the address parameter to yield the effective address at which each element of the vector is stored.

Instruction
vsx.v
Prototypes
void __builtin_epi_vstore_ext_indexed_8xi8(signed char *address,
                                           __epi_8xi8 value, __epi_8xi8 indexes,
                                           unsigned long int flags,
                                           unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xi16(signed short int *address,
                                            __epi_4xi16 value,
                                            __epi_4xi16 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xi32(signed int *address,
                                            __epi_2xi32 value,
                                            __epi_2xi32 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_1xi64(signed long int *address,
                                            __epi_1xi64 value,
                                            __epi_1xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xf32(float *address, __epi_2xf32 value,
                                            __epi_2xi32 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_1xf64(double *address, __epi_1xf64 value,
                                            __epi_1xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xi8(signed char *address,
                                            __epi_16xi8 value,
                                            __epi_16xi8 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xi16(signed short int *address,
                                            __epi_8xi16 value,
                                            __epi_8xi16 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xi32(signed int *address,
                                            __epi_4xi32 value,
                                            __epi_4xi32 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xi64(signed long int *address,
                                            __epi_2xi64 value,
                                            __epi_2xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xf32(float *address, __epi_4xf32 value,
                                            __epi_4xi32 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xf64(double *address, __epi_2xf64 value,
                                            __epi_2xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_32xi8(signed char *address,
                                            __epi_32xi8 value,
                                            __epi_32xi8 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xi16(signed short int *address,
                                             __epi_16xi16 value,
                                             __epi_16xi16 indexes,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xi32(signed int *address,
                                            __epi_8xi32 value,
                                            __epi_8xi32 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xi64(signed long int *address,
                                            __epi_4xi64 value,
                                            __epi_4xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xf32(float *address, __epi_8xf32 value,
                                            __epi_8xi32 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xf64(double *address, __epi_4xf64 value,
                                            __epi_4xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_64xi8(signed char *address,
                                            __epi_64xi8 value,
                                            __epi_64xi8 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_32xi16(signed short int *address,
                                             __epi_32xi16 value,
                                             __epi_32xi16 indexes,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xi32(signed int *address,
                                             __epi_16xi32 value,
                                             __epi_16xi32 indexes,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xi64(signed long int *address,
                                            __epi_8xi64 value,
                                            __epi_8xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xf32(float *address, __epi_16xf32 value,
                                             __epi_16xi32 indexes,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xf64(double *address, __epi_8xf64 value,
                                            __epi_8xi64 indexes,
                                            unsigned long int flags,
                                            unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address + index[element], value[element])
Masked prototypes
void __builtin_epi_vstore_ext_indexed_8xi8_mask(
    signed char *address, __epi_8xi8 value, __epi_8xi8 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xi16_mask(
    signed short int *address, __epi_4xi16 value, __epi_4xi16 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xi32_mask(
    signed int *address, __epi_2xi32 value, __epi_2xi32 indexes,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_1xi64_mask(
    signed long int *address, __epi_1xi64 value, __epi_1xi64 indexes,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xf32_mask(
    float *address, __epi_2xf32 value, __epi_2xi32 indexes,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_1xf64_mask(
    double *address, __epi_1xf64 value, __epi_1xi64 indexes,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xi8_mask(
    signed char *address, __epi_16xi8 value, __epi_16xi8 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xi16_mask(
    signed short int *address, __epi_8xi16 value, __epi_8xi16 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xi32_mask(
    signed int *address, __epi_4xi32 value, __epi_4xi32 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xi64_mask(
    signed long int *address, __epi_2xi64 value, __epi_2xi64 indexes,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xf32_mask(
    float *address, __epi_4xf32 value, __epi_4xi32 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_2xf64_mask(
    double *address, __epi_2xf64 value, __epi_2xi64 indexes,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_32xi8_mask(
    signed char *address, __epi_32xi8 value, __epi_32xi8 indexes,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xi16_mask(
    signed short int *address, __epi_16xi16 value, __epi_16xi16 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xi32_mask(
    signed int *address, __epi_8xi32 value, __epi_8xi32 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xi64_mask(
    signed long int *address, __epi_4xi64 value, __epi_4xi64 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xf32_mask(
    float *address, __epi_8xf32 value, __epi_8xi32 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_4xf64_mask(
    double *address, __epi_4xf64 value, __epi_4xi64 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_64xi8_mask(
    signed char *address, __epi_64xi8 value, __epi_64xi8 indexes,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_32xi16_mask(
    signed short int *address, __epi_32xi16 value, __epi_32xi16 indexes,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xi32_mask(
    signed int *address, __epi_16xi32 value, __epi_16xi32 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xi64_mask(
    signed long int *address, __epi_8xi64 value, __epi_8xi64 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_16xf32_mask(
    float *address, __epi_16xf32 value, __epi_16xi32 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_8xf64_mask(
    double *address, __epi_8xf64 value, __epi_8xi64 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address + index[element], value[element])

2.6.23. Store unsigned vector elements into memory using an index vector (cache-flags)

Description

Use these builtins to store the elements of a vector into memory using an index vector, specifying the cache behaviour in the flags parameter. This is commonly known as a scatter operation. These variants take pointers to unsigned element types.

The elements of the index vector are added as a byte offset to the address parameter to yield the effective address at which each element of the vector is stored.

Instruction
vsx.v
Prototypes
void __builtin_epi_vstore_ext_indexed_unsigned_8xi8(unsigned char *address,
                                                    __epi_8xi8 value,
                                                    __epi_8xi8 indexes,
                                                    unsigned long int flags,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_4xi16(
    unsigned short int *address, __epi_4xi16 value, __epi_4xi16 indexes,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_2xi32(unsigned int *address,
                                                     __epi_2xi32 value,
                                                     __epi_2xi32 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_1xi64(unsigned long int *address,
                                                     __epi_1xi64 value,
                                                     __epi_1xi64 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_16xi8(unsigned char *address,
                                                     __epi_16xi8 value,
                                                     __epi_16xi8 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_8xi16(
    unsigned short int *address, __epi_8xi16 value, __epi_8xi16 indexes,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_4xi32(unsigned int *address,
                                                     __epi_4xi32 value,
                                                     __epi_4xi32 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_2xi64(unsigned long int *address,
                                                     __epi_2xi64 value,
                                                     __epi_2xi64 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_32xi8(unsigned char *address,
                                                     __epi_32xi8 value,
                                                     __epi_32xi8 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_16xi16(
    unsigned short int *address, __epi_16xi16 value, __epi_16xi16 indexes,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_8xi32(unsigned int *address,
                                                     __epi_8xi32 value,
                                                     __epi_8xi32 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_4xi64(unsigned long int *address,
                                                     __epi_4xi64 value,
                                                     __epi_4xi64 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_64xi8(unsigned char *address,
                                                     __epi_64xi8 value,
                                                     __epi_64xi8 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_32xi16(
    unsigned short int *address, __epi_32xi16 value, __epi_32xi16 indexes,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_16xi32(unsigned int *address,
                                                      __epi_16xi32 value,
                                                      __epi_16xi32 indexes,
                                                      unsigned long int flags,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_8xi64(unsigned long int *address,
                                                     __epi_8xi64 value,
                                                     __epi_8xi64 indexes,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address + index[element], value[element])
Masked prototypes
void __builtin_epi_vstore_ext_indexed_unsigned_8xi8_mask(
    unsigned char *address, __epi_8xi8 value, __epi_8xi8 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_4xi16_mask(
    unsigned short int *address, __epi_4xi16 value, __epi_4xi16 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_2xi32_mask(
    unsigned int *address, __epi_2xi32 value, __epi_2xi32 indexes,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_1xi64_mask(
    unsigned long int *address, __epi_1xi64 value, __epi_1xi64 indexes,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_16xi8_mask(
    unsigned char *address, __epi_16xi8 value, __epi_16xi8 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_8xi16_mask(
    unsigned short int *address, __epi_8xi16 value, __epi_8xi16 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_4xi32_mask(
    unsigned int *address, __epi_4xi32 value, __epi_4xi32 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_2xi64_mask(
    unsigned long int *address, __epi_2xi64 value, __epi_2xi64 indexes,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_32xi8_mask(
    unsigned char *address, __epi_32xi8 value, __epi_32xi8 indexes,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_16xi16_mask(
    unsigned short int *address, __epi_16xi16 value, __epi_16xi16 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_8xi32_mask(
    unsigned int *address, __epi_8xi32 value, __epi_8xi32 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_4xi64_mask(
    unsigned long int *address, __epi_4xi64 value, __epi_4xi64 indexes,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_64xi8_mask(
    unsigned char *address, __epi_64xi8 value, __epi_64xi8 indexes,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_32xi16_mask(
    unsigned short int *address, __epi_32xi16 value, __epi_32xi16 indexes,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_16xi32_mask(
    unsigned int *address, __epi_16xi32 value, __epi_16xi32 indexes,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_indexed_unsigned_8xi64_mask(
    unsigned long int *address, __epi_8xi64 value, __epi_8xi64 indexes,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address + index[element], value[element])
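
The masked pseudocode above can be sketched as a scalar C reference. This is not EPI code and does not use the builtins; it only emulates the semantics for i8 elements, where the byte offset in `indexes` coincides with the element index. All names are illustrative.

```c
#include <stddef.h>

/* Scalar reference of the masked indexed-store semantics: each active
   element is stored at address + indexes[element] bytes. For an
   unsigned char base, a byte offset is also an element index. */
static void store_indexed_u8_mask(unsigned char *address,
                                  const unsigned char *value,
                                  const unsigned char *indexes,
                                  const unsigned char *mask,
                                  unsigned long gvl)
{
    for (unsigned long element = 0; element < gvl; ++element)
        if (mask[element])                       /* inactive elements are skipped */
            address[indexes[element]] = value[element];
}
```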

2.6.24. Store elements of a vector into strided locations in memory (cache-flags)

Description

Use these builtins to store elements from a vector into memory locations separated by a constant stride value, in bytes, specifying the cache behaviour in the flags parameter.

The stride value is repeatedly added as an offset to the address parameter to yield the effective address that stores each element of the vector.

Instruction
vss.v
Prototypes
void __builtin_epi_vstore_ext_strided_8xi8(signed char *address,
                                           __epi_8xi8 value,
                                           signed long int stride,
                                           unsigned long int flags,
                                           unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xi16(signed short int *address,
                                            __epi_4xi16 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xi32(signed int *address,
                                            __epi_2xi32 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_1xi64(signed long int *address,
                                            __epi_1xi64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xf32(float *address, __epi_2xf32 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_1xf64(double *address, __epi_1xf64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xi8(signed char *address,
                                            __epi_16xi8 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xi16(signed short int *address,
                                            __epi_8xi16 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xi32(signed int *address,
                                            __epi_4xi32 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xi64(signed long int *address,
                                            __epi_2xi64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xf32(float *address, __epi_4xf32 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xf64(double *address, __epi_2xf64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_32xi8(signed char *address,
                                            __epi_32xi8 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xi16(signed short int *address,
                                             __epi_16xi16 value,
                                             signed long int stride,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xi32(signed int *address,
                                            __epi_8xi32 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xi64(signed long int *address,
                                            __epi_4xi64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xf32(float *address, __epi_8xf32 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xf64(double *address, __epi_4xf64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_64xi8(signed char *address,
                                            __epi_64xi8 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_32xi16(signed short int *address,
                                             __epi_32xi16 value,
                                             signed long int stride,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xi32(signed int *address,
                                             __epi_16xi32 value,
                                             signed long int stride,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xi64(signed long int *address,
                                            __epi_8xi64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xf32(float *address, __epi_16xf32 value,
                                             signed long int stride,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xf64(double *address, __epi_8xf64 value,
                                            signed long int stride,
                                            unsigned long int flags,
                                            unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + stride
Masked prototypes
void __builtin_epi_vstore_ext_strided_8xi8_mask(
    signed char *address, __epi_8xi8 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xi16_mask(
    signed short int *address, __epi_4xi16 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xi32_mask(
    signed int *address, __epi_2xi32 value, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_1xi64_mask(
    signed long int *address, __epi_1xi64 value, signed long int stride,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xf32_mask(
    float *address, __epi_2xf32 value, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_1xf64_mask(
    double *address, __epi_1xf64 value, signed long int stride,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xi8_mask(
    signed char *address, __epi_16xi8 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xi16_mask(
    signed short int *address, __epi_8xi16 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xi32_mask(
    signed int *address, __epi_4xi32 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xi64_mask(
    signed long int *address, __epi_2xi64 value, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xf32_mask(
    float *address, __epi_4xf32 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_2xf64_mask(
    double *address, __epi_2xf64 value, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_32xi8_mask(
    signed char *address, __epi_32xi8 value, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xi16_mask(
    signed short int *address, __epi_16xi16 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xi32_mask(
    signed int *address, __epi_8xi32 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xi64_mask(
    signed long int *address, __epi_4xi64 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xf32_mask(
    float *address, __epi_8xf32 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_4xf64_mask(
    double *address, __epi_4xf64 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_64xi8_mask(
    signed char *address, __epi_64xi8 value, signed long int stride,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_32xi16_mask(
    signed short int *address, __epi_32xi16 value, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xi32_mask(
    signed int *address, __epi_16xi32 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xi64_mask(
    signed long int *address, __epi_8xi64 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_16xf32_mask(
    float *address, __epi_16xf32 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_8xf64_mask(
    double *address, __epi_8xf64 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + stride
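
As a scalar sketch of the pseudocode above (not EPI code; names are illustrative): the stride is given in bytes, so the walking pointer must be advanced with a `char *` cast rather than by element.

```c
#include <stdint.h>
#include <string.h>

/* Scalar reference of the strided-store semantics for i32 elements.
   'stride' is a byte distance between consecutive stores, so it need
   not be a multiple of the element size in general. */
static void store_strided_i32(int32_t *address, const int32_t *value,
                              long stride, unsigned long gvl)
{
    char *p = (char *)address;
    for (unsigned long element = 0; element < gvl; ++element) {
        memcpy(p, &value[element], sizeof(int32_t));
        p += stride;                      /* advance by 'stride' bytes */
    }
}
```

For example, a stride of `2 * sizeof(int32_t)` (8 bytes) writes every other element of an `int32_t` array.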

2.6.25. Store unsigned elements of a vector into strided locations in memory (cache-flags)

Description

Use these builtins to store elements from a vector into memory locations separated by a constant stride value, in bytes, specifying the cache behaviour in the flags parameter.

The stride value is repeatedly added as an offset to the address parameter to yield the effective address that stores each element of the vector.

Instruction
vss.v
Prototypes
void __builtin_epi_vstore_ext_strided_unsigned_8xi8(unsigned char *address,
                                                    __epi_8xi8 value,
                                                    signed long int stride,
                                                    unsigned long int flags,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_4xi16(
    unsigned short int *address, __epi_4xi16 value, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_2xi32(unsigned int *address,
                                                     __epi_2xi32 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_1xi64(unsigned long int *address,
                                                     __epi_1xi64 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_16xi8(unsigned char *address,
                                                     __epi_16xi8 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_8xi16(
    unsigned short int *address, __epi_8xi16 value, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_4xi32(unsigned int *address,
                                                     __epi_4xi32 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_2xi64(unsigned long int *address,
                                                     __epi_2xi64 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_32xi8(unsigned char *address,
                                                     __epi_32xi8 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_16xi16(
    unsigned short int *address, __epi_16xi16 value, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_8xi32(unsigned int *address,
                                                     __epi_8xi32 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_4xi64(unsigned long int *address,
                                                     __epi_4xi64 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_64xi8(unsigned char *address,
                                                     __epi_64xi8 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_32xi16(
    unsigned short int *address, __epi_32xi16 value, signed long int stride,
    unsigned long int flags, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_16xi32(unsigned int *address,
                                                      __epi_16xi32 value,
                                                      signed long int stride,
                                                      unsigned long int flags,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_8xi64(unsigned long int *address,
                                                     __epi_8xi64 value,
                                                     signed long int stride,
                                                     unsigned long int flags,
                                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + stride
Masked prototypes
void __builtin_epi_vstore_ext_strided_unsigned_8xi8_mask(
    unsigned char *address, __epi_8xi8 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_4xi16_mask(
    unsigned short int *address, __epi_4xi16 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_2xi32_mask(
    unsigned int *address, __epi_2xi32 value, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_1xi64_mask(
    unsigned long int *address, __epi_1xi64 value, signed long int stride,
    unsigned long int flags, __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_16xi8_mask(
    unsigned char *address, __epi_16xi8 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_8xi16_mask(
    unsigned short int *address, __epi_8xi16 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_4xi32_mask(
    unsigned int *address, __epi_4xi32 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_2xi64_mask(
    unsigned long int *address, __epi_2xi64 value, signed long int stride,
    unsigned long int flags, __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_32xi8_mask(
    unsigned char *address, __epi_32xi8 value, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_16xi16_mask(
    unsigned short int *address, __epi_16xi16 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_8xi32_mask(
    unsigned int *address, __epi_8xi32 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_4xi64_mask(
    unsigned long int *address, __epi_4xi64 value, signed long int stride,
    unsigned long int flags, __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_64xi8_mask(
    unsigned char *address, __epi_64xi8 value, signed long int stride,
    unsigned long int flags, __epi_64xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_32xi16_mask(
    unsigned short int *address, __epi_32xi16 value, signed long int stride,
    unsigned long int flags, __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_16xi32_mask(
    unsigned int *address, __epi_16xi32 value, signed long int stride,
    unsigned long int flags, __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_ext_strided_unsigned_8xi64_mask(
    unsigned long int *address, __epi_8xi64 value, signed long int stride,
    unsigned long int flags, __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + stride
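
A common use of a byte stride is writing a vector's worth of values into one column of a row-major matrix: with `ncols` columns of `uint32_t` per row, consecutive entries of a column sit `ncols * sizeof(uint32_t)` bytes apart, which is exactly the stride these builtins take. A scalar stand-in for the builtin (illustrative names, not EPI API):

```c
#include <stdint.h>
#include <string.h>

/* Fill column 'col' of a row-major nrows x ncols uint32_t matrix,
   emulating a strided store with stride = ncols * sizeof(uint32_t). */
static void store_column_u32(uint32_t *matrix, unsigned long ncols,
                             unsigned long col, const uint32_t *value,
                             unsigned long nrows)
{
    long stride = (long)(ncols * sizeof(uint32_t)); /* bytes per row */
    char *p = (char *)(matrix + col);               /* &matrix[0][col] */
    for (unsigned long row = 0; row < nrows; ++row) {
        memcpy(p, &value[row], sizeof(uint32_t));
        p += stride;
    }
}
```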

2.6.26. Store unsigned vector elements into contiguous locations in memory (cache-flags)

Description

Use these builtins to store the elements of a vector into contiguous locations in memory, specifying the cache behaviour in the flags parameter.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_ext_unsigned_8xi8(unsigned char *address,
                                            __epi_8xi8 value,
                                            unsigned long int flags,
                                            unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_4xi16(unsigned short int *address,
                                             __epi_4xi16 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_2xi32(unsigned int *address,
                                             __epi_2xi32 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_1xi64(unsigned long int *address,
                                             __epi_1xi64 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_16xi8(unsigned char *address,
                                             __epi_16xi8 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_8xi16(unsigned short int *address,
                                             __epi_8xi16 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_4xi32(unsigned int *address,
                                             __epi_4xi32 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_2xi64(unsigned long int *address,
                                             __epi_2xi64 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_32xi8(unsigned char *address,
                                             __epi_32xi8 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_16xi16(unsigned short int *address,
                                              __epi_16xi16 value,
                                              unsigned long int flags,
                                              unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_8xi32(unsigned int *address,
                                             __epi_8xi32 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_4xi64(unsigned long int *address,
                                             __epi_4xi64 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_64xi8(unsigned char *address,
                                             __epi_64xi8 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_32xi16(unsigned short int *address,
                                              __epi_32xi16 value,
                                              unsigned long int flags,
                                              unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_16xi32(unsigned int *address,
                                              __epi_16xi32 value,
                                              unsigned long int flags,
                                              unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_8xi64(unsigned long int *address,
                                             __epi_8xi64 value,
                                             unsigned long int flags,
                                             unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + SEW / 8
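
In the pseudocode above, the address advances by SEW/8 bytes per element, where SEW is the standard element width in bits (16 for i16, so 2 bytes). A scalar sketch of this for 16-bit elements (illustrative names, not EPI code):

```c
/* Scalar reference of the contiguous store: for 16-bit elements the
   address steps by SEW/8 = 2 bytes, i.e. one array slot, per element. */
static void store_u16(unsigned short *address, const unsigned short *value,
                      unsigned long gvl)
{
    for (unsigned long element = 0; element < gvl; ++element)
        address[element] = value[element];
}
```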
Masked prototypes
void __builtin_epi_vstore_ext_unsigned_8xi8_mask(unsigned char *address,
                                                 __epi_8xi8 value,
                                                 unsigned long int flags,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_4xi16_mask(unsigned short int *address,
                                                  __epi_4xi16 value,
                                                  unsigned long int flags,
                                                  __epi_4xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_2xi32_mask(unsigned int *address,
                                                  __epi_2xi32 value,
                                                  unsigned long int flags,
                                                  __epi_2xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_1xi64_mask(unsigned long int *address,
                                                  __epi_1xi64 value,
                                                  unsigned long int flags,
                                                  __epi_1xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_16xi8_mask(unsigned char *address,
                                                  __epi_16xi8 value,
                                                  unsigned long int flags,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_8xi16_mask(unsigned short int *address,
                                                  __epi_8xi16 value,
                                                  unsigned long int flags,
                                                  __epi_8xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_4xi32_mask(unsigned int *address,
                                                  __epi_4xi32 value,
                                                  unsigned long int flags,
                                                  __epi_4xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_2xi64_mask(unsigned long int *address,
                                                  __epi_2xi64 value,
                                                  unsigned long int flags,
                                                  __epi_2xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_32xi8_mask(unsigned char *address,
                                                  __epi_32xi8 value,
                                                  unsigned long int flags,
                                                  __epi_32xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_16xi16_mask(unsigned short int *address,
                                                   __epi_16xi16 value,
                                                   unsigned long int flags,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_8xi32_mask(unsigned int *address,
                                                  __epi_8xi32 value,
                                                  unsigned long int flags,
                                                  __epi_8xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_4xi64_mask(unsigned long int *address,
                                                  __epi_4xi64 value,
                                                  unsigned long int flags,
                                                  __epi_4xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_64xi8_mask(unsigned char *address,
                                                  __epi_64xi8 value,
                                                  unsigned long int flags,
                                                  __epi_64xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_32xi16_mask(unsigned short int *address,
                                                   __epi_32xi16 value,
                                                   unsigned long int flags,
                                                   __epi_32xi1 mask,
                                                   unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_16xi32_mask(unsigned int *address,
                                                   __epi_16xi32 value,
                                                   unsigned long int flags,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
void __builtin_epi_vstore_ext_unsigned_8xi64_mask(unsigned long int *address,
                                                  __epi_8xi64 value,
                                                  unsigned long int flags,
                                                  __epi_8xi1 mask,
                                                  unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + SEW / 8
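The masked operation above can be mirrored in scalar C. Note that the address advances by SEW/8 bytes on every iteration, whether or not the element is active, so inactive elements simply leave their memory slot untouched. The helper below is a hypothetical scalar model (not an EPI builtin), shown for 16-bit elements (SEW = 16):

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of the masked unit-stride store: the destination
   advances one element (SEW/8 bytes) per iteration, but only the
   elements whose mask bit is set are actually written. */
static void masked_store_u16(uint16_t *address, const uint16_t *value,
                             const uint8_t *mask, size_t gvl) {
    for (size_t element = 0; element < gvl; element++) {
        if (mask[element])
            address[element] = value[element]; /* inactive slots keep old data */
    }
}
```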

2.6.27. Store vector elements into memory using an index vector

Description

Use these builtins to store the elements of a vector into memory using an index vector. This is commonly known as a scatter operation.

The elements of the index vector are added as byte offsets to the address parameter to yield the effective address to which each element of the vector is stored.

Instruction
vsx.v
Prototypes
void __builtin_epi_vstore_indexed_8xi8(signed char *address, __epi_8xi8 value,
                                       __epi_8xi8 indexes,
                                       unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xi16(signed short int *address,
                                        __epi_4xi16 value, __epi_4xi16 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xi32(signed int *address, __epi_2xi32 value,
                                        __epi_2xi32 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_1xi64(signed long int *address,
                                        __epi_1xi64 value, __epi_1xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xf32(float *address, __epi_2xf32 value,
                                        __epi_2xi32 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_1xf64(double *address, __epi_1xf64 value,
                                        __epi_1xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xi8(signed char *address, __epi_16xi8 value,
                                        __epi_16xi8 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xi16(signed short int *address,
                                        __epi_8xi16 value, __epi_8xi16 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xi32(signed int *address, __epi_4xi32 value,
                                        __epi_4xi32 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xi64(signed long int *address,
                                        __epi_2xi64 value, __epi_2xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xf32(float *address, __epi_4xf32 value,
                                        __epi_4xi32 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xf64(double *address, __epi_2xf64 value,
                                        __epi_2xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_32xi8(signed char *address, __epi_32xi8 value,
                                        __epi_32xi8 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xi16(signed short int *address,
                                         __epi_16xi16 value,
                                         __epi_16xi16 indexes,
                                         unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xi32(signed int *address, __epi_8xi32 value,
                                        __epi_8xi32 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xi64(signed long int *address,
                                        __epi_4xi64 value, __epi_4xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xf32(float *address, __epi_8xf32 value,
                                        __epi_8xi32 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xf64(double *address, __epi_4xf64 value,
                                        __epi_4xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_64xi8(signed char *address, __epi_64xi8 value,
                                        __epi_64xi8 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_32xi16(signed short int *address,
                                         __epi_32xi16 value,
                                         __epi_32xi16 indexes,
                                         unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xi32(signed int *address,
                                         __epi_16xi32 value,
                                         __epi_16xi32 indexes,
                                         unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xi64(signed long int *address,
                                        __epi_8xi64 value, __epi_8xi64 indexes,
                                        unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xf32(float *address, __epi_16xf32 value,
                                         __epi_16xi32 indexes,
                                         unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xf64(double *address, __epi_8xf64 value,
                                        __epi_8xi64 indexes,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address + indexes[element], value[element])
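The scatter semantics can be modelled in scalar C. The helper below is a hypothetical sketch (not an EPI builtin) for 64-bit elements; the key point is that each index is a byte offset from the base address, not an element count:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of the indexed (scatter) store: every index is added
   to the base address as a byte offset to form the effective address. */
static void scatter_store_i64(int64_t *address, const int64_t *value,
                              const int64_t *indexes, size_t gvl) {
    for (size_t element = 0; element < gvl; element++) {
        char *effective = (char *)address + indexes[element];
        *(int64_t *)effective = value[element];
    }
}
```

For example, to scatter 64-bit values into every other element of an array, the indexes would be 0, 16, 32, … (multiples of 2 * sizeof(int64_t) bytes).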
Masked prototypes
void __builtin_epi_vstore_indexed_8xi8_mask(signed char *address,
                                            __epi_8xi8 value,
                                            __epi_8xi8 indexes, __epi_8xi1 mask,
                                            unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xi16_mask(signed short int *address,
                                             __epi_4xi16 value,
                                             __epi_4xi16 indexes,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xi32_mask(signed int *address,
                                             __epi_2xi32 value,
                                             __epi_2xi32 indexes,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_1xi64_mask(signed long int *address,
                                             __epi_1xi64 value,
                                             __epi_1xi64 indexes,
                                             __epi_1xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xf32_mask(float *address, __epi_2xf32 value,
                                             __epi_2xi32 indexes,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_1xf64_mask(double *address, __epi_1xf64 value,
                                             __epi_1xi64 indexes,
                                             __epi_1xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xi8_mask(signed char *address,
                                             __epi_16xi8 value,
                                             __epi_16xi8 indexes,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xi16_mask(signed short int *address,
                                             __epi_8xi16 value,
                                             __epi_8xi16 indexes,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xi32_mask(signed int *address,
                                             __epi_4xi32 value,
                                             __epi_4xi32 indexes,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xi64_mask(signed long int *address,
                                             __epi_2xi64 value,
                                             __epi_2xi64 indexes,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xf32_mask(float *address, __epi_4xf32 value,
                                             __epi_4xi32 indexes,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_2xf64_mask(double *address, __epi_2xf64 value,
                                             __epi_2xi64 indexes,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_32xi8_mask(signed char *address,
                                             __epi_32xi8 value,
                                             __epi_32xi8 indexes,
                                             __epi_32xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xi16_mask(signed short int *address,
                                              __epi_16xi16 value,
                                              __epi_16xi16 indexes,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xi32_mask(signed int *address,
                                             __epi_8xi32 value,
                                             __epi_8xi32 indexes,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xi64_mask(signed long int *address,
                                             __epi_4xi64 value,
                                             __epi_4xi64 indexes,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xf32_mask(float *address, __epi_8xf32 value,
                                             __epi_8xi32 indexes,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_4xf64_mask(double *address, __epi_4xf64 value,
                                             __epi_4xi64 indexes,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_64xi8_mask(signed char *address,
                                             __epi_64xi8 value,
                                             __epi_64xi8 indexes,
                                             __epi_64xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_32xi16_mask(signed short int *address,
                                              __epi_32xi16 value,
                                              __epi_32xi16 indexes,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xi32_mask(signed int *address,
                                              __epi_16xi32 value,
                                              __epi_16xi32 indexes,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xi64_mask(signed long int *address,
                                             __epi_8xi64 value,
                                             __epi_8xi64 indexes,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_indexed_16xf32_mask(float *address,
                                              __epi_16xf32 value,
                                              __epi_16xi32 indexes,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_indexed_8xf64_mask(double *address, __epi_8xf64 value,
                                             __epi_8xi64 indexes,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address + indexes[element], value[element])
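Combining both behaviours, the masked scatter writes only the active elements and leaves the memory behind inactive elements untouched. A hypothetical scalar sketch (not an EPI builtin), shown for 32-bit elements with 32-bit byte-offset indexes:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model of the masked scatter: indexes are byte offsets from
   the base address; inactive elements perform no store at all. */
static void masked_scatter_i32(int32_t *address, const int32_t *value,
                               const int32_t *indexes, const uint8_t *mask,
                               size_t gvl) {
    for (size_t element = 0; element < gvl; element++) {
        if (mask[element])
            *(int32_t *)((char *)address + indexes[element]) = value[element];
    }
}
```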

2.6.28. Store unsigned vector elements into memory using an index vector

Description

Use these builtins to store the elements of a vector into memory using an index vector. This is commonly known as a scatter operation.

The elements of the index vector are added as byte offsets to the address parameter to yield the effective address to which each element of the vector is stored.

Instruction
vsx.v
Prototypes
void __builtin_epi_vstore_indexed_unsigned_8xi8(unsigned char *address,
                                                __epi_8xi8 value,
                                                __epi_8xi8 indexes,
                                                unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_4xi16(unsigned short int *address,
                                                 __epi_4xi16 value,
                                                 __epi_4xi16 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_2xi32(unsigned int *address,
                                                 __epi_2xi32 value,
                                                 __epi_2xi32 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_1xi64(unsigned long int *address,
                                                 __epi_1xi64 value,
                                                 __epi_1xi64 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_16xi8(unsigned char *address,
                                                 __epi_16xi8 value,
                                                 __epi_16xi8 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_8xi16(unsigned short int *address,
                                                 __epi_8xi16 value,
                                                 __epi_8xi16 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_4xi32(unsigned int *address,
                                                 __epi_4xi32 value,
                                                 __epi_4xi32 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_2xi64(unsigned long int *address,
                                                 __epi_2xi64 value,
                                                 __epi_2xi64 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_32xi8(unsigned char *address,
                                                 __epi_32xi8 value,
                                                 __epi_32xi8 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_16xi16(unsigned short int *address,
                                                  __epi_16xi16 value,
                                                  __epi_16xi16 indexes,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_8xi32(unsigned int *address,
                                                 __epi_8xi32 value,
                                                 __epi_8xi32 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_4xi64(unsigned long int *address,
                                                 __epi_4xi64 value,
                                                 __epi_4xi64 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_64xi8(unsigned char *address,
                                                 __epi_64xi8 value,
                                                 __epi_64xi8 indexes,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_32xi16(unsigned short int *address,
                                                  __epi_32xi16 value,
                                                  __epi_32xi16 indexes,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_16xi32(unsigned int *address,
                                                  __epi_16xi32 value,
                                                  __epi_16xi32 indexes,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_8xi64(unsigned long int *address,
                                                 __epi_8xi64 value,
                                                 __epi_8xi64 indexes,
                                                 unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address + indexes[element], value[element])
Masked prototypes
void __builtin_epi_vstore_indexed_unsigned_8xi8_mask(unsigned char *address,
                                                     __epi_8xi8 value,
                                                     __epi_8xi8 indexes,
                                                     __epi_8xi1 mask,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_4xi16_mask(
    unsigned short int *address, __epi_4xi16 value, __epi_4xi16 indexes,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_2xi32_mask(unsigned int *address,
                                                      __epi_2xi32 value,
                                                      __epi_2xi32 indexes,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_1xi64_mask(
    unsigned long int *address, __epi_1xi64 value, __epi_1xi64 indexes,
    __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_16xi8_mask(unsigned char *address,
                                                      __epi_16xi8 value,
                                                      __epi_16xi8 indexes,
                                                      __epi_16xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_8xi16_mask(
    unsigned short int *address, __epi_8xi16 value, __epi_8xi16 indexes,
    __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_4xi32_mask(unsigned int *address,
                                                      __epi_4xi32 value,
                                                      __epi_4xi32 indexes,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_2xi64_mask(
    unsigned long int *address, __epi_2xi64 value, __epi_2xi64 indexes,
    __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_32xi8_mask(unsigned char *address,
                                                      __epi_32xi8 value,
                                                      __epi_32xi8 indexes,
                                                      __epi_32xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_16xi16_mask(
    unsigned short int *address, __epi_16xi16 value, __epi_16xi16 indexes,
    __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_8xi32_mask(unsigned int *address,
                                                      __epi_8xi32 value,
                                                      __epi_8xi32 indexes,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_4xi64_mask(
    unsigned long int *address, __epi_4xi64 value, __epi_4xi64 indexes,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_64xi8_mask(unsigned char *address,
                                                      __epi_64xi8 value,
                                                      __epi_64xi8 indexes,
                                                      __epi_64xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_32xi16_mask(
    unsigned short int *address, __epi_32xi16 value, __epi_32xi16 indexes,
    __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_16xi32_mask(unsigned int *address,
                                                       __epi_16xi32 value,
                                                       __epi_16xi32 indexes,
                                                       __epi_16xi1 mask,
                                                       unsigned long int gvl);
void __builtin_epi_vstore_indexed_unsigned_8xi64_mask(
    unsigned long int *address, __epi_8xi64 value, __epi_8xi64 indexes,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address + indexes[element], value[element])

2.6.29. Store elements of a mask vector into memory

Description

Use these builtins to store the elements of a mask vector into memory.

All the elements of the mask vector are stored, packed in groups of 8 bits.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_8xi1(unsigned char *address, __epi_8xi1 value);
void __builtin_epi_vstore_4xi1(unsigned short int *address, __epi_4xi1 value);
void __builtin_epi_vstore_2xi1(unsigned int *address, __epi_2xi1 value);
void __builtin_epi_vstore_1xi1(unsigned long int *address, __epi_1xi1 value);
Operation
for element = 0 to VLMAX
  store_uint8(address, value[element])
  address = address + 1
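The packing of 1-bit mask elements into bytes can be sketched in scalar C. The helper below is purely illustrative: it assumes mask element 8k+i lands in bit i of byte k, but the actual bit layout is fixed by the vse.v instruction, not by this sketch:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative packing of 1-bit mask elements into bytes, assuming
   element 8k+i occupies bit i of byte k (the real in-memory bit
   order is defined by the instruction, not this model). */
static void store_mask_bits(uint8_t *address, const uint8_t *mask,
                            size_t vlmax) {
    for (size_t k = 0; k < vlmax / 8; k++) {
        uint8_t byte = 0;
        for (size_t i = 0; i < 8; i++)
            byte |= (uint8_t)((mask[8 * k + i] & 1u) << i);
        address[k] = byte;
    }
}
```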

2.6.30. Store vector elements into contiguous locations in memory (non-temporal)

Description

Use these builtins to store the elements of a vector into contiguous locations in memory with a non-temporal hint, so the stored data is not kept in the cache.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_nt_8xi8(signed char *address, __epi_8xi8 value,
                                  unsigned long int gvl);
void __builtin_epi_vstore_nt_4xi16(signed short int *address, __epi_4xi16 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_2xi32(signed int *address, __epi_2xi32 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_1xi64(signed long int *address, __epi_1xi64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_2xf32(float *address, __epi_2xf32 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_1xf64(double *address, __epi_1xf64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_16xi8(signed char *address, __epi_16xi8 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_8xi16(signed short int *address, __epi_8xi16 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_4xi32(signed int *address, __epi_4xi32 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_2xi64(signed long int *address, __epi_2xi64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_4xf32(float *address, __epi_4xf32 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_2xf64(double *address, __epi_2xf64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_32xi8(signed char *address, __epi_32xi8 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_16xi16(signed short int *address,
                                    __epi_16xi16 value, unsigned long int gvl);
void __builtin_epi_vstore_nt_8xi32(signed int *address, __epi_8xi32 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_4xi64(signed long int *address, __epi_4xi64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_8xf32(float *address, __epi_8xf32 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_4xf64(double *address, __epi_4xf64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_64xi8(signed char *address, __epi_64xi8 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_32xi16(signed short int *address,
                                    __epi_32xi16 value, unsigned long int gvl);
void __builtin_epi_vstore_nt_16xi32(signed int *address, __epi_16xi32 value,
                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_8xi64(signed long int *address, __epi_8xi64 value,
                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_16xf32(float *address, __epi_16xf32 value,
                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_8xf64(double *address, __epi_8xf64 value,
                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vstore_nt_8xi8_mask(signed char *address, __epi_8xi8 value,
                                       __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_4xi16_mask(signed short int *address,
                                        __epi_4xi16 value, __epi_4xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_2xi32_mask(signed int *address, __epi_2xi32 value,
                                        __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_1xi64_mask(signed long int *address,
                                        __epi_1xi64 value, __epi_1xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_2xf32_mask(float *address, __epi_2xf32 value,
                                        __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_1xf64_mask(double *address, __epi_1xf64 value,
                                        __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_16xi8_mask(signed char *address, __epi_16xi8 value,
                                        __epi_16xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_8xi16_mask(signed short int *address,
                                        __epi_8xi16 value, __epi_8xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_4xi32_mask(signed int *address, __epi_4xi32 value,
                                        __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_2xi64_mask(signed long int *address,
                                        __epi_2xi64 value, __epi_2xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_4xf32_mask(float *address, __epi_4xf32 value,
                                        __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_2xf64_mask(double *address, __epi_2xf64 value,
                                        __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_32xi8_mask(signed char *address, __epi_32xi8 value,
                                        __epi_32xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_16xi16_mask(signed short int *address,
                                         __epi_16xi16 value, __epi_16xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_8xi32_mask(signed int *address, __epi_8xi32 value,
                                        __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_4xi64_mask(signed long int *address,
                                        __epi_4xi64 value, __epi_4xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_8xf32_mask(float *address, __epi_8xf32 value,
                                        __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_4xf64_mask(double *address, __epi_4xf64 value,
                                        __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_64xi8_mask(signed char *address, __epi_64xi8 value,
                                        __epi_64xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_32xi16_mask(signed short int *address,
                                         __epi_32xi16 value, __epi_32xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_16xi32_mask(signed int *address,
                                         __epi_16xi32 value, __epi_16xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_8xi64_mask(signed long int *address,
                                        __epi_8xi64 value, __epi_8xi1 mask,
                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_16xf32_mask(float *address, __epi_16xf32 value,
                                         __epi_16xi1 mask,
                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_8xf64_mask(double *address, __epi_8xf64 value,
                                        __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + SEW / 8
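As an illustration of the semantics only (not code that uses the intrinsic), the masked operation above can be modeled by a plain scalar loop in C. The function name model_vstore_nt_1xi64_mask is hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of the masked non-temporal store semantics shown above,
 * for SEW = 64 (__epi_1xi64). Inactive elements are skipped, but the
 * destination address still advances by SEW/8 bytes per element.
 * Illustrative sketch only; the real builtin emits a vector store. */
static void model_vstore_nt_1xi64_mask(int64_t *address, const int64_t *value,
                                       const int *mask, size_t gvl) {
  for (size_t element = 0; element < gvl; ++element) {
    if (mask[element])
      address[element] = value[element];
    /* address = address + SEW / 8 is folded into the [element] indexing */
  }
}
```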

2.6.31. Store vector elements into memory using an index vector (non-temporal)

Description

Use these builtins to store the elements of a vector into memory using an index vector, bypassing the cache (the stored data is not allocated in the cache). This is commonly known as a scatter operation.

Each element of the index vector is added, as a byte offset, to the address parameter to yield the effective address at which the corresponding element of the vector is stored.

Instruction
vsx.v
Prototypes
void __builtin_epi_vstore_nt_indexed_8xi8(signed char *address,
                                          __epi_8xi8 value, __epi_8xi8 indexes,
                                          unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xi16(signed short int *address,
                                           __epi_4xi16 value,
                                           __epi_4xi16 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xi32(signed int *address,
                                           __epi_2xi32 value,
                                           __epi_2xi32 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_1xi64(signed long int *address,
                                           __epi_1xi64 value,
                                           __epi_1xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xf32(float *address, __epi_2xf32 value,
                                           __epi_2xi32 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_1xf64(double *address, __epi_1xf64 value,
                                           __epi_1xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xi8(signed char *address,
                                           __epi_16xi8 value,
                                           __epi_16xi8 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xi16(signed short int *address,
                                           __epi_8xi16 value,
                                           __epi_8xi16 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xi32(signed int *address,
                                           __epi_4xi32 value,
                                           __epi_4xi32 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xi64(signed long int *address,
                                           __epi_2xi64 value,
                                           __epi_2xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xf32(float *address, __epi_4xf32 value,
                                           __epi_4xi32 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xf64(double *address, __epi_2xf64 value,
                                           __epi_2xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_32xi8(signed char *address,
                                           __epi_32xi8 value,
                                           __epi_32xi8 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xi16(signed short int *address,
                                            __epi_16xi16 value,
                                            __epi_16xi16 indexes,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xi32(signed int *address,
                                           __epi_8xi32 value,
                                           __epi_8xi32 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xi64(signed long int *address,
                                           __epi_4xi64 value,
                                           __epi_4xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xf32(float *address, __epi_8xf32 value,
                                           __epi_8xi32 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xf64(double *address, __epi_4xf64 value,
                                           __epi_4xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_64xi8(signed char *address,
                                           __epi_64xi8 value,
                                           __epi_64xi8 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_32xi16(signed short int *address,
                                            __epi_32xi16 value,
                                            __epi_32xi16 indexes,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xi32(signed int *address,
                                            __epi_16xi32 value,
                                            __epi_16xi32 indexes,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xi64(signed long int *address,
                                           __epi_8xi64 value,
                                           __epi_8xi64 indexes,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xf32(float *address, __epi_16xf32 value,
                                            __epi_16xi32 indexes,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xf64(double *address, __epi_8xf64 value,
                                           __epi_8xi64 indexes,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address + index[element], value[element])
Masked prototypes
void __builtin_epi_vstore_nt_indexed_8xi8_mask(signed char *address,
                                               __epi_8xi8 value,
                                               __epi_8xi8 indexes,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xi16_mask(signed short int *address,
                                                __epi_4xi16 value,
                                                __epi_4xi16 indexes,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xi32_mask(signed int *address,
                                                __epi_2xi32 value,
                                                __epi_2xi32 indexes,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_1xi64_mask(signed long int *address,
                                                __epi_1xi64 value,
                                                __epi_1xi64 indexes,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xf32_mask(float *address,
                                                __epi_2xf32 value,
                                                __epi_2xi32 indexes,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_1xf64_mask(double *address,
                                                __epi_1xf64 value,
                                                __epi_1xi64 indexes,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xi8_mask(signed char *address,
                                                __epi_16xi8 value,
                                                __epi_16xi8 indexes,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xi16_mask(signed short int *address,
                                                __epi_8xi16 value,
                                                __epi_8xi16 indexes,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xi32_mask(signed int *address,
                                                __epi_4xi32 value,
                                                __epi_4xi32 indexes,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xi64_mask(signed long int *address,
                                                __epi_2xi64 value,
                                                __epi_2xi64 indexes,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xf32_mask(float *address,
                                                __epi_4xf32 value,
                                                __epi_4xi32 indexes,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_2xf64_mask(double *address,
                                                __epi_2xf64 value,
                                                __epi_2xi64 indexes,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_32xi8_mask(signed char *address,
                                                __epi_32xi8 value,
                                                __epi_32xi8 indexes,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xi16_mask(signed short int *address,
                                                 __epi_16xi16 value,
                                                 __epi_16xi16 indexes,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xi32_mask(signed int *address,
                                                __epi_8xi32 value,
                                                __epi_8xi32 indexes,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xi64_mask(signed long int *address,
                                                __epi_4xi64 value,
                                                __epi_4xi64 indexes,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xf32_mask(float *address,
                                                __epi_8xf32 value,
                                                __epi_8xi32 indexes,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_4xf64_mask(double *address,
                                                __epi_4xf64 value,
                                                __epi_4xi64 indexes,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_64xi8_mask(signed char *address,
                                                __epi_64xi8 value,
                                                __epi_64xi8 indexes,
                                                __epi_64xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_32xi16_mask(signed short int *address,
                                                 __epi_32xi16 value,
                                                 __epi_32xi16 indexes,
                                                 __epi_32xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xi32_mask(signed int *address,
                                                 __epi_16xi32 value,
                                                 __epi_16xi32 indexes,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xi64_mask(signed long int *address,
                                                __epi_8xi64 value,
                                                __epi_8xi64 indexes,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_16xf32_mask(float *address,
                                                 __epi_16xf32 value,
                                                 __epi_16xi32 indexes,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_8xf64_mask(double *address,
                                                __epi_8xf64 value,
                                                __epi_8xi64 indexes,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address + index[element], value[element])
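To make the byte-offset addressing concrete, here is a scalar C model of the unmasked scatter operation above. The name model_vstore_nt_indexed_1xf64 is hypothetical, not one of the builtins:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scalar model of the indexed (scatter) store semantics shown above.
 * Each index is a byte offset added to the base address, so indexes
 * must account for the element size (e.g. multiples of 8 for doubles).
 * Illustrative sketch only. */
static void model_vstore_nt_indexed_1xf64(double *address, const double *value,
                                          const int64_t *indexes, size_t gvl) {
  char *base = (char *)address;
  for (size_t element = 0; element < gvl; ++element)
    memcpy(base + indexes[element], &value[element], sizeof(double));
}
```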

2.6.32. Store unsigned vector elements into memory using an index vector (non-temporal)

Description

Use these builtins to store the elements of a vector into memory using an index vector, bypassing the cache (the stored data is not allocated in the cache). This is commonly known as a scatter operation. These variants take pointers to unsigned element types.

Each element of the index vector is added, as a byte offset, to the address parameter to yield the effective address at which the corresponding element of the vector is stored.

Instruction
vsx.v
Prototypes
void __builtin_epi_vstore_nt_indexed_unsigned_8xi8(unsigned char *address,
                                                   __epi_8xi8 value,
                                                   __epi_8xi8 indexes,
                                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_4xi16(unsigned short int *address,
                                                    __epi_4xi16 value,
                                                    __epi_4xi16 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_2xi32(unsigned int *address,
                                                    __epi_2xi32 value,
                                                    __epi_2xi32 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_1xi64(unsigned long int *address,
                                                    __epi_1xi64 value,
                                                    __epi_1xi64 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_16xi8(unsigned char *address,
                                                    __epi_16xi8 value,
                                                    __epi_16xi8 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_8xi16(unsigned short int *address,
                                                    __epi_8xi16 value,
                                                    __epi_8xi16 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_4xi32(unsigned int *address,
                                                    __epi_4xi32 value,
                                                    __epi_4xi32 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_2xi64(unsigned long int *address,
                                                    __epi_2xi64 value,
                                                    __epi_2xi64 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_32xi8(unsigned char *address,
                                                    __epi_32xi8 value,
                                                    __epi_32xi8 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_16xi16(
    unsigned short int *address, __epi_16xi16 value, __epi_16xi16 indexes,
    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_8xi32(unsigned int *address,
                                                    __epi_8xi32 value,
                                                    __epi_8xi32 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_4xi64(unsigned long int *address,
                                                    __epi_4xi64 value,
                                                    __epi_4xi64 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_64xi8(unsigned char *address,
                                                    __epi_64xi8 value,
                                                    __epi_64xi8 indexes,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_32xi16(
    unsigned short int *address, __epi_32xi16 value, __epi_32xi16 indexes,
    unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_16xi32(unsigned int *address,
                                                     __epi_16xi32 value,
                                                     __epi_16xi32 indexes,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_8xi64(unsigned long int *address,
                                                    __epi_8xi64 value,
                                                    __epi_8xi64 indexes,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address + index[element], value[element])
Masked prototypes
void __builtin_epi_vstore_nt_indexed_unsigned_8xi8_mask(unsigned char *address,
                                                        __epi_8xi8 value,
                                                        __epi_8xi8 indexes,
                                                        __epi_8xi1 mask,
                                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_4xi16_mask(
    unsigned short int *address, __epi_4xi16 value, __epi_4xi16 indexes,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_2xi32_mask(unsigned int *address,
                                                         __epi_2xi32 value,
                                                         __epi_2xi32 indexes,
                                                         __epi_2xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_1xi64_mask(
    unsigned long int *address, __epi_1xi64 value, __epi_1xi64 indexes,
    __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_16xi8_mask(unsigned char *address,
                                                         __epi_16xi8 value,
                                                         __epi_16xi8 indexes,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_8xi16_mask(
    unsigned short int *address, __epi_8xi16 value, __epi_8xi16 indexes,
    __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_4xi32_mask(unsigned int *address,
                                                         __epi_4xi32 value,
                                                         __epi_4xi32 indexes,
                                                         __epi_4xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_2xi64_mask(
    unsigned long int *address, __epi_2xi64 value, __epi_2xi64 indexes,
    __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_32xi8_mask(unsigned char *address,
                                                         __epi_32xi8 value,
                                                         __epi_32xi8 indexes,
                                                         __epi_32xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_16xi16_mask(
    unsigned short int *address, __epi_16xi16 value, __epi_16xi16 indexes,
    __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_8xi32_mask(unsigned int *address,
                                                         __epi_8xi32 value,
                                                         __epi_8xi32 indexes,
                                                         __epi_8xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_4xi64_mask(
    unsigned long int *address, __epi_4xi64 value, __epi_4xi64 indexes,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_64xi8_mask(unsigned char *address,
                                                         __epi_64xi8 value,
                                                         __epi_64xi8 indexes,
                                                         __epi_64xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_32xi16_mask(
    unsigned short int *address, __epi_32xi16 value, __epi_32xi16 indexes,
    __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_16xi32_mask(
    unsigned int *address, __epi_16xi32 value, __epi_16xi32 indexes,
    __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_indexed_unsigned_8xi64_mask(
    unsigned long int *address, __epi_8xi64 value, __epi_8xi64 indexes,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address + index[element], value[element])

2.6.33. Store elements of a vector into strided locations in memory (non-temporal)

Description

Use these builtins to store the elements of a vector into memory locations separated by a constant stride, given in bytes, bypassing the cache (the stored data is not allocated in the cache).

The stride value is repeatedly added as an offset to the address parameter to yield the effective address at which each element of the vector is stored.

Instruction
vss.v
Prototypes
void __builtin_epi_vstore_nt_strided_8xi8(signed char *address,
                                          __epi_8xi8 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xi16(signed short int *address,
                                           __epi_4xi16 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xi32(signed int *address,
                                           __epi_2xi32 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_1xi64(signed long int *address,
                                           __epi_1xi64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xf32(float *address, __epi_2xf32 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_1xf64(double *address, __epi_1xf64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xi8(signed char *address,
                                           __epi_16xi8 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xi16(signed short int *address,
                                           __epi_8xi16 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xi32(signed int *address,
                                           __epi_4xi32 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xi64(signed long int *address,
                                           __epi_2xi64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xf32(float *address, __epi_4xf32 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xf64(double *address, __epi_2xf64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_32xi8(signed char *address,
                                           __epi_32xi8 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xi16(signed short int *address,
                                            __epi_16xi16 value,
                                            signed long int stride,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xi32(signed int *address,
                                           __epi_8xi32 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xi64(signed long int *address,
                                           __epi_4xi64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xf32(float *address, __epi_8xf32 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xf64(double *address, __epi_4xf64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_64xi8(signed char *address,
                                           __epi_64xi8 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_32xi16(signed short int *address,
                                            __epi_32xi16 value,
                                            signed long int stride,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xi32(signed int *address,
                                            __epi_16xi32 value,
                                            signed long int stride,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xi64(signed long int *address,
                                           __epi_8xi64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xf32(float *address, __epi_16xf32 value,
                                            signed long int stride,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xf64(double *address, __epi_8xf64 value,
                                           signed long int stride,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + stride
Masked prototypes
void __builtin_epi_vstore_nt_strided_8xi8_mask(signed char *address,
                                               __epi_8xi8 value,
                                               signed long int stride,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xi16_mask(signed short int *address,
                                                __epi_4xi16 value,
                                                signed long int stride,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xi32_mask(signed int *address,
                                                __epi_2xi32 value,
                                                signed long int stride,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_1xi64_mask(signed long int *address,
                                                __epi_1xi64 value,
                                                signed long int stride,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xf32_mask(float *address,
                                                __epi_2xf32 value,
                                                signed long int stride,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_1xf64_mask(double *address,
                                                __epi_1xf64 value,
                                                signed long int stride,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xi8_mask(signed char *address,
                                                __epi_16xi8 value,
                                                signed long int stride,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xi16_mask(signed short int *address,
                                                __epi_8xi16 value,
                                                signed long int stride,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xi32_mask(signed int *address,
                                                __epi_4xi32 value,
                                                signed long int stride,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xi64_mask(signed long int *address,
                                                __epi_2xi64 value,
                                                signed long int stride,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xf32_mask(float *address,
                                                __epi_4xf32 value,
                                                signed long int stride,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_2xf64_mask(double *address,
                                                __epi_2xf64 value,
                                                signed long int stride,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_32xi8_mask(signed char *address,
                                                __epi_32xi8 value,
                                                signed long int stride,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xi16_mask(signed short int *address,
                                                 __epi_16xi16 value,
                                                 signed long int stride,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xi32_mask(signed int *address,
                                                __epi_8xi32 value,
                                                signed long int stride,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xi64_mask(signed long int *address,
                                                __epi_4xi64 value,
                                                signed long int stride,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xf32_mask(float *address,
                                                __epi_8xf32 value,
                                                signed long int stride,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_4xf64_mask(double *address,
                                                __epi_4xf64 value,
                                                signed long int stride,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_64xi8_mask(signed char *address,
                                                __epi_64xi8 value,
                                                signed long int stride,
                                                __epi_64xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_32xi16_mask(signed short int *address,
                                                 __epi_32xi16 value,
                                                 signed long int stride,
                                                 __epi_32xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xi32_mask(signed int *address,
                                                 __epi_16xi32 value,
                                                 signed long int stride,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xi64_mask(signed long int *address,
                                                __epi_8xi64 value,
                                                signed long int stride,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_16xf32_mask(float *address,
                                                 __epi_16xf32 value,
                                                 signed long int stride,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_8xf64_mask(double *address,
                                                __epi_8xf64 value,
                                                signed long int stride,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + stride

2.6.34. Store unsigned elements of a vector into strided locations in memory (non-temporal)

Description

Use these builtins to store elements of a vector into memory locations separated by a constant stride, given in bytes, without allocating the stored data in the cache.

The stride value is repeatedly added as an offset to the address parameter to yield the effective address at which each element of the vector is stored.

Instruction
vss.v
Prototypes
void __builtin_epi_vstore_nt_strided_unsigned_8xi8(unsigned char *address,
                                                   __epi_8xi8 value,
                                                   signed long int stride,
                                                   unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_4xi16(unsigned short int *address,
                                                    __epi_4xi16 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_2xi32(unsigned int *address,
                                                    __epi_2xi32 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_1xi64(unsigned long int *address,
                                                    __epi_1xi64 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_16xi8(unsigned char *address,
                                                    __epi_16xi8 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_8xi16(unsigned short int *address,
                                                    __epi_8xi16 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_4xi32(unsigned int *address,
                                                    __epi_4xi32 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_2xi64(unsigned long int *address,
                                                    __epi_2xi64 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_32xi8(unsigned char *address,
                                                    __epi_32xi8 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_16xi16(
    unsigned short int *address, __epi_16xi16 value, signed long int stride,
    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_8xi32(unsigned int *address,
                                                    __epi_8xi32 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_4xi64(unsigned long int *address,
                                                    __epi_4xi64 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_64xi8(unsigned char *address,
                                                    __epi_64xi8 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_32xi16(
    unsigned short int *address, __epi_32xi16 value, signed long int stride,
    unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_16xi32(unsigned int *address,
                                                     __epi_16xi32 value,
                                                     signed long int stride,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_8xi64(unsigned long int *address,
                                                    __epi_8xi64 value,
                                                    signed long int stride,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + stride
Masked prototypes
void __builtin_epi_vstore_nt_strided_unsigned_8xi8_mask(unsigned char *address,
                                                        __epi_8xi8 value,
                                                        signed long int stride,
                                                        __epi_8xi1 mask,
                                                        unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_4xi16_mask(
    unsigned short int *address, __epi_4xi16 value, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_2xi32_mask(unsigned int *address,
                                                         __epi_2xi32 value,
                                                         signed long int stride,
                                                         __epi_2xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_1xi64_mask(
    unsigned long int *address, __epi_1xi64 value, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_16xi8_mask(unsigned char *address,
                                                         __epi_16xi8 value,
                                                         signed long int stride,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_8xi16_mask(
    unsigned short int *address, __epi_8xi16 value, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_4xi32_mask(unsigned int *address,
                                                         __epi_4xi32 value,
                                                         signed long int stride,
                                                         __epi_4xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_2xi64_mask(
    unsigned long int *address, __epi_2xi64 value, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_32xi8_mask(unsigned char *address,
                                                         __epi_32xi8 value,
                                                         signed long int stride,
                                                         __epi_32xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_16xi16_mask(
    unsigned short int *address, __epi_16xi16 value, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_8xi32_mask(unsigned int *address,
                                                         __epi_8xi32 value,
                                                         signed long int stride,
                                                         __epi_8xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_4xi64_mask(
    unsigned long int *address, __epi_4xi64 value, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_64xi8_mask(unsigned char *address,
                                                         __epi_64xi8 value,
                                                         signed long int stride,
                                                         __epi_64xi1 mask,
                                                         unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_32xi16_mask(
    unsigned short int *address, __epi_32xi16 value, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_16xi32_mask(
    unsigned int *address, __epi_16xi32 value, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_nt_strided_unsigned_8xi64_mask(
    unsigned long int *address, __epi_8xi64 value, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + stride

2.6.35. Store unsigned vector elements into contiguous locations in memory (non-temporal)

Description

Use these builtins to store the elements of a vector into contiguous locations in memory without allocating the stored data in the cache.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_nt_unsigned_8xi8(unsigned char *address,
                                           __epi_8xi8 value,
                                           unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_4xi16(unsigned short int *address,
                                            __epi_4xi16 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_2xi32(unsigned int *address,
                                            __epi_2xi32 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_1xi64(unsigned long int *address,
                                            __epi_1xi64 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_16xi8(unsigned char *address,
                                            __epi_16xi8 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_8xi16(unsigned short int *address,
                                            __epi_8xi16 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_4xi32(unsigned int *address,
                                            __epi_4xi32 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_2xi64(unsigned long int *address,
                                            __epi_2xi64 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_32xi8(unsigned char *address,
                                            __epi_32xi8 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_16xi16(unsigned short int *address,
                                             __epi_16xi16 value,
                                             unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_8xi32(unsigned int *address,
                                            __epi_8xi32 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_4xi64(unsigned long int *address,
                                            __epi_4xi64 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_64xi8(unsigned char *address,
                                            __epi_64xi8 value,
                                            unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_32xi16(unsigned short int *address,
                                             __epi_32xi16 value,
                                             unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_16xi32(unsigned int *address,
                                             __epi_16xi32 value,
                                             unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_8xi64(unsigned long int *address,
                                            __epi_8xi64 value,
                                            unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vstore_nt_unsigned_8xi8_mask(unsigned char *address,
                                                __epi_8xi8 value,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_4xi16_mask(unsigned short int *address,
                                                 __epi_4xi16 value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_2xi32_mask(unsigned int *address,
                                                 __epi_2xi32 value,
                                                 __epi_2xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_1xi64_mask(unsigned long int *address,
                                                 __epi_1xi64 value,
                                                 __epi_1xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_16xi8_mask(unsigned char *address,
                                                 __epi_16xi8 value,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_8xi16_mask(unsigned short int *address,
                                                 __epi_8xi16 value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_4xi32_mask(unsigned int *address,
                                                 __epi_4xi32 value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_2xi64_mask(unsigned long int *address,
                                                 __epi_2xi64 value,
                                                 __epi_2xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_32xi8_mask(unsigned char *address,
                                                 __epi_32xi8 value,
                                                 __epi_32xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_16xi16_mask(unsigned short int *address,
                                                  __epi_16xi16 value,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_8xi32_mask(unsigned int *address,
                                                 __epi_8xi32 value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_4xi64_mask(unsigned long int *address,
                                                 __epi_4xi64 value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_64xi8_mask(unsigned char *address,
                                                 __epi_64xi8 value,
                                                 __epi_64xi1 mask,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_32xi16_mask(unsigned short int *address,
                                                  __epi_32xi16 value,
                                                  __epi_32xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_16xi32_mask(unsigned int *address,
                                                  __epi_16xi32 value,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_nt_unsigned_8xi64_mask(unsigned long int *address,
                                                 __epi_8xi64 value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + SEW / 8

2.6.36. Store elements of a vector into strided locations in memory

Description

Use these builtins to store elements of a vector into memory locations separated by a constant stride, given in bytes.

The stride value is repeatedly added as an offset to the address parameter to yield the effective address at which each element of the vector is stored.

Instruction
vss.v
Prototypes
void __builtin_epi_vstore_strided_8xi8(signed char *address, __epi_8xi8 value,
                                       signed long int stride,
                                       unsigned long int gvl);
void __builtin_epi_vstore_strided_4xi16(signed short int *address,
                                        __epi_4xi16 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_2xi32(signed int *address, __epi_2xi32 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_1xi64(signed long int *address,
                                        __epi_1xi64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_2xf32(float *address, __epi_2xf32 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_1xf64(double *address, __epi_1xf64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_16xi8(signed char *address, __epi_16xi8 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_8xi16(signed short int *address,
                                        __epi_8xi16 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_4xi32(signed int *address, __epi_4xi32 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_2xi64(signed long int *address,
                                        __epi_2xi64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_4xf32(float *address, __epi_4xf32 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_2xf64(double *address, __epi_2xf64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_32xi8(signed char *address, __epi_32xi8 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_16xi16(signed short int *address,
                                         __epi_16xi16 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vstore_strided_8xi32(signed int *address, __epi_8xi32 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_4xi64(signed long int *address,
                                        __epi_4xi64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_8xf32(float *address, __epi_8xf32 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_4xf64(double *address, __epi_4xf64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_64xi8(signed char *address, __epi_64xi8 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_32xi16(signed short int *address,
                                         __epi_32xi16 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vstore_strided_16xi32(signed int *address,
                                         __epi_16xi32 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vstore_strided_8xi64(signed long int *address,
                                        __epi_8xi64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
void __builtin_epi_vstore_strided_16xf32(float *address, __epi_16xf32 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vstore_strided_8xf64(double *address, __epi_8xf64 value,
                                        signed long int stride,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + stride
Masked prototypes
void __builtin_epi_vstore_strided_8xi8_mask(signed char *address,
                                            __epi_8xi8 value,
                                            signed long int stride,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
void __builtin_epi_vstore_strided_4xi16_mask(signed short int *address,
                                             __epi_4xi16 value,
                                             signed long int stride,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_2xi32_mask(signed int *address,
                                             __epi_2xi32 value,
                                             signed long int stride,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_1xi64_mask(signed long int *address,
                                             __epi_1xi64 value,
                                             signed long int stride,
                                             __epi_1xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_2xf32_mask(float *address, __epi_2xf32 value,
                                             signed long int stride,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_1xf64_mask(double *address, __epi_1xf64 value,
                                             signed long int stride,
                                             __epi_1xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_16xi8_mask(signed char *address,
                                             __epi_16xi8 value,
                                             signed long int stride,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_8xi16_mask(signed short int *address,
                                             __epi_8xi16 value,
                                             signed long int stride,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_4xi32_mask(signed int *address,
                                             __epi_4xi32 value,
                                             signed long int stride,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_2xi64_mask(signed long int *address,
                                             __epi_2xi64 value,
                                             signed long int stride,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_4xf32_mask(float *address, __epi_4xf32 value,
                                             signed long int stride,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_2xf64_mask(double *address, __epi_2xf64 value,
                                             signed long int stride,
                                             __epi_2xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_32xi8_mask(signed char *address,
                                             __epi_32xi8 value,
                                             signed long int stride,
                                             __epi_32xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_16xi16_mask(signed short int *address,
                                              __epi_16xi16 value,
                                              signed long int stride,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_strided_8xi32_mask(signed int *address,
                                             __epi_8xi32 value,
                                             signed long int stride,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_4xi64_mask(signed long int *address,
                                             __epi_4xi64 value,
                                             signed long int stride,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_8xf32_mask(float *address, __epi_8xf32 value,
                                             signed long int stride,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_4xf64_mask(double *address, __epi_4xf64 value,
                                             signed long int stride,
                                             __epi_4xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_64xi8_mask(signed char *address,
                                             __epi_64xi8 value,
                                             signed long int stride,
                                             __epi_64xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_32xi16_mask(signed short int *address,
                                              __epi_32xi16 value,
                                              signed long int stride,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_strided_16xi32_mask(signed int *address,
                                              __epi_16xi32 value,
                                              signed long int stride,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_strided_8xi64_mask(signed long int *address,
                                             __epi_8xi64 value,
                                             signed long int stride,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_strided_16xf32_mask(float *address,
                                              __epi_16xf32 value,
                                              signed long int stride,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_strided_8xf64_mask(double *address, __epi_8xf64 value,
                                             signed long int stride,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + stride

2.6.37. Store unsigned elements of a vector into strided locations in memory

Description

Use these builtins to store elements from a vector into memory locations separated by a constant stride, given in bytes.

The stride value is repeatedly added as an offset to the address parameter to yield the effective address at which each element of the vector is stored.

Instruction
vss.v
Prototypes
void __builtin_epi_vstore_strided_unsigned_8xi8(unsigned char *address,
                                                __epi_8xi8 value,
                                                signed long int stride,
                                                unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_4xi16(unsigned short int *address,
                                                 __epi_4xi16 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_2xi32(unsigned int *address,
                                                 __epi_2xi32 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_1xi64(unsigned long int *address,
                                                 __epi_1xi64 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_16xi8(unsigned char *address,
                                                 __epi_16xi8 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_8xi16(unsigned short int *address,
                                                 __epi_8xi16 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_4xi32(unsigned int *address,
                                                 __epi_4xi32 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_2xi64(unsigned long int *address,
                                                 __epi_2xi64 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_32xi8(unsigned char *address,
                                                 __epi_32xi8 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_16xi16(unsigned short int *address,
                                                  __epi_16xi16 value,
                                                  signed long int stride,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_8xi32(unsigned int *address,
                                                 __epi_8xi32 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_4xi64(unsigned long int *address,
                                                 __epi_4xi64 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_64xi8(unsigned char *address,
                                                 __epi_64xi8 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_32xi16(unsigned short int *address,
                                                  __epi_32xi16 value,
                                                  signed long int stride,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_16xi32(unsigned int *address,
                                                  __epi_16xi32 value,
                                                  signed long int stride,
                                                  unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_8xi64(unsigned long int *address,
                                                 __epi_8xi64 value,
                                                 signed long int stride,
                                                 unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + stride
Masked prototypes
void __builtin_epi_vstore_strided_unsigned_8xi8_mask(unsigned char *address,
                                                     __epi_8xi8 value,
                                                     signed long int stride,
                                                     __epi_8xi1 mask,
                                                     unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_4xi16_mask(
    unsigned short int *address, __epi_4xi16 value, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_2xi32_mask(unsigned int *address,
                                                      __epi_2xi32 value,
                                                      signed long int stride,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_1xi64_mask(
    unsigned long int *address, __epi_1xi64 value, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_16xi8_mask(unsigned char *address,
                                                      __epi_16xi8 value,
                                                      signed long int stride,
                                                      __epi_16xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_8xi16_mask(
    unsigned short int *address, __epi_8xi16 value, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_4xi32_mask(unsigned int *address,
                                                      __epi_4xi32 value,
                                                      signed long int stride,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_2xi64_mask(
    unsigned long int *address, __epi_2xi64 value, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_32xi8_mask(unsigned char *address,
                                                      __epi_32xi8 value,
                                                      signed long int stride,
                                                      __epi_32xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_16xi16_mask(
    unsigned short int *address, __epi_16xi16 value, signed long int stride,
    __epi_16xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_8xi32_mask(unsigned int *address,
                                                      __epi_8xi32 value,
                                                      signed long int stride,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_4xi64_mask(
    unsigned long int *address, __epi_4xi64 value, signed long int stride,
    __epi_4xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_64xi8_mask(unsigned char *address,
                                                      __epi_64xi8 value,
                                                      signed long int stride,
                                                      __epi_64xi1 mask,
                                                      unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_32xi16_mask(
    unsigned short int *address, __epi_32xi16 value, signed long int stride,
    __epi_32xi1 mask, unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_16xi32_mask(unsigned int *address,
                                                       __epi_16xi32 value,
                                                       signed long int stride,
                                                       __epi_16xi1 mask,
                                                       unsigned long int gvl);
void __builtin_epi_vstore_strided_unsigned_8xi64_mask(
    unsigned long int *address, __epi_8xi64 value, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + stride

2.6.38. Store unsigned vector elements into contiguous locations in memory

Description

Use these builtins to store the elements of a vector into contiguous locations in memory.

Instruction
vse.v
Prototypes
void __builtin_epi_vstore_unsigned_8xi8(unsigned char *address,
                                        __epi_8xi8 value,
                                        unsigned long int gvl);
void __builtin_epi_vstore_unsigned_4xi16(unsigned short int *address,
                                         __epi_4xi16 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_2xi32(unsigned int *address,
                                         __epi_2xi32 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_1xi64(unsigned long int *address,
                                         __epi_1xi64 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_16xi8(unsigned char *address,
                                         __epi_16xi8 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_8xi16(unsigned short int *address,
                                         __epi_8xi16 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_4xi32(unsigned int *address,
                                         __epi_4xi32 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_2xi64(unsigned long int *address,
                                         __epi_2xi64 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_32xi8(unsigned char *address,
                                         __epi_32xi8 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_16xi16(unsigned short int *address,
                                          __epi_16xi16 value,
                                          unsigned long int gvl);
void __builtin_epi_vstore_unsigned_8xi32(unsigned int *address,
                                         __epi_8xi32 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_4xi64(unsigned long int *address,
                                         __epi_4xi64 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_64xi8(unsigned char *address,
                                         __epi_64xi8 value,
                                         unsigned long int gvl);
void __builtin_epi_vstore_unsigned_32xi16(unsigned short int *address,
                                          __epi_32xi16 value,
                                          unsigned long int gvl);
void __builtin_epi_vstore_unsigned_16xi32(unsigned int *address,
                                          __epi_16xi32 value,
                                          unsigned long int gvl);
void __builtin_epi_vstore_unsigned_8xi64(unsigned long int *address,
                                         __epi_8xi64 value,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vstore_unsigned_8xi8_mask(unsigned char *address,
                                             __epi_8xi8 value, __epi_8xi1 mask,
                                             unsigned long int gvl);
void __builtin_epi_vstore_unsigned_4xi16_mask(unsigned short int *address,
                                              __epi_4xi16 value,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_2xi32_mask(unsigned int *address,
                                              __epi_2xi32 value,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_1xi64_mask(unsigned long int *address,
                                              __epi_1xi64 value,
                                              __epi_1xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_16xi8_mask(unsigned char *address,
                                              __epi_16xi8 value,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_8xi16_mask(unsigned short int *address,
                                              __epi_8xi16 value,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_4xi32_mask(unsigned int *address,
                                              __epi_4xi32 value,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_2xi64_mask(unsigned long int *address,
                                              __epi_2xi64 value,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_32xi8_mask(unsigned char *address,
                                              __epi_32xi8 value,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_16xi16_mask(unsigned short int *address,
                                               __epi_16xi16 value,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vstore_unsigned_8xi32_mask(unsigned int *address,
                                              __epi_8xi32 value,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_4xi64_mask(unsigned long int *address,
                                              __epi_4xi64 value,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_64xi8_mask(unsigned char *address,
                                              __epi_64xi8 value,
                                              __epi_64xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vstore_unsigned_32xi16_mask(unsigned short int *address,
                                               __epi_32xi16 value,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vstore_unsigned_16xi32_mask(unsigned int *address,
                                               __epi_16xi32 value,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vstore_unsigned_8xi64_mask(unsigned long int *address,
                                              __epi_8xi64 value,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    store_element(address, value[element])
  address = address + SEW / 8

2.7. Vector elements manipulation

2.7.1. Pack elements contiguously

Description

Use these builtins to create a vector where the elements selected by a mask vector are packed contiguously.

Instruction
vcompress.vm
Prototypes
__epi_8xi8 __builtin_epi_vcompress_8xi8(__epi_8xi8 a, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vcompress_4xi16(__epi_4xi16 a, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vcompress_2xi32(__epi_2xi32 a, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vcompress_1xi64(__epi_1xi64 a, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_2xf32 __builtin_epi_vcompress_2xf32(__epi_2xf32 a, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xf64 __builtin_epi_vcompress_1xf64(__epi_1xf64 a, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vcompress_16xi8(__epi_16xi8 a, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vcompress_8xi16(__epi_8xi16 a, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vcompress_4xi32(__epi_4xi32 a, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vcompress_2xi64(__epi_2xi64 a, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_4xf32 __builtin_epi_vcompress_4xf32(__epi_4xf32 a, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xf64 __builtin_epi_vcompress_2xf64(__epi_2xf64 a, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vcompress_32xi8(__epi_32xi8 a, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vcompress_16xi16(__epi_16xi16 a, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vcompress_8xi32(__epi_8xi32 a, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vcompress_4xi64(__epi_4xi64 a, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_8xf32 __builtin_epi_vcompress_8xf32(__epi_8xf32 a, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xf64 __builtin_epi_vcompress_4xf64(__epi_4xf64 a, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vcompress_64xi8(__epi_64xi8 a, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vcompress_32xi16(__epi_32xi16 a, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vcompress_16xi32(__epi_16xi32 a, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vcompress_8xi64(__epi_8xi64 a, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_16xf32 __builtin_epi_vcompress_16xf32(__epi_16xf32 a, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vcompress_8xf64(__epi_8xf64 a, __epi_8xi1 mask,
                                          unsigned long int gvl);
Operation
next_index = 0
for element = 0 to gvl - 1
   if mask[element] then
      result[next_index] = a[element]
      next_index = next_index + 1

2.7.2. Elementwise floating-point merge

Description

Use these builtins to merge two floating-point vectors using a mask vector.

Instruction
vfmerge.vv
Prototypes
__epi_2xf32 __builtin_epi_vfmerge_2xf32(__epi_2xf32 a, __epi_2xf32 b,
                                        __epi_2xi1 merge,
                                        unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmerge_1xf64(__epi_1xf64 a, __epi_1xf64 b,
                                        __epi_1xi1 merge,
                                        unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmerge_4xf32(__epi_4xf32 a, __epi_4xf32 b,
                                        __epi_4xi1 merge,
                                        unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmerge_2xf64(__epi_2xf64 a, __epi_2xf64 b,
                                        __epi_2xi1 merge,
                                        unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmerge_8xf32(__epi_8xf32 a, __epi_8xf32 b,
                                        __epi_8xi1 merge,
                                        unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmerge_4xf64(__epi_4xf64 a, __epi_4xf64 b,
                                        __epi_4xi1 merge,
                                        unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmerge_16xf32(__epi_16xf32 a, __epi_16xf32 b,
                                          __epi_16xi1 merge,
                                          unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmerge_8xf64(__epi_8xf64 a, __epi_8xf64 b,
                                        __epi_8xi1 merge,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   if merge[element] then
     result[element] = b[element]
   else
     result[element] = a[element]

2.7.3. Extract first element of a floating-point vector

Description

Use these builtins to extract the first element of a vector.

This is useful when the result of some operation, like a reduction, is found in the first element of a vector.

Instruction
vfmv.f.s
Prototypes
float __builtin_epi_vfmv_f_s_2xf32(__epi_2xf32 a);
double __builtin_epi_vfmv_f_s_1xf64(__epi_1xf64 a);
float __builtin_epi_vfmv_f_s_4xf32(__epi_4xf32 a);
double __builtin_epi_vfmv_f_s_2xf64(__epi_2xf64 a);
float __builtin_epi_vfmv_f_s_8xf32(__epi_8xf32 a);
double __builtin_epi_vfmv_f_s_4xf64(__epi_4xf64 a);
float __builtin_epi_vfmv_f_s_16xf32(__epi_16xf32 a);
double __builtin_epi_vfmv_f_s_8xf64(__epi_8xf64 a);
Operation
result = a[0];

2.7.4. Set first floating-point element of floating-point vector

Description

Use these builtins to set the first element of a vector to a given value.

Instruction
vfmv.s.f
Prototypes
__epi_2xf32 __builtin_epi_vfmv_s_f_2xf32(__epi_2xf32 a, float b,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmv_s_f_1xf64(__epi_1xf64 a, double b,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmv_s_f_4xf32(__epi_4xf32 a, float b,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmv_s_f_2xf64(__epi_2xf64 a, double b,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmv_s_f_8xf32(__epi_8xf32 a, float b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmv_s_f_4xf64(__epi_4xf64 a, double b,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmv_s_f_16xf32(__epi_16xf32 a, float b,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmv_s_f_8xf64(__epi_8xf64 a, double b,
                                         unsigned long int gvl);
Operation
result[0] = b
result[1 : gvl - 1] = a[1 : gvl - 1]

2.7.5. Broadcast a floating-point scalar to all the elements of a vector

Description

Use these builtins to create a vector where all the elements have the value of a given scalar.

Instruction
vfmv.v.f
Prototypes
__epi_2xf32 __builtin_epi_vfmv_v_f_2xf32(float a, unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfmv_v_f_1xf64(double a, unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfmv_v_f_4xf32(float a, unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfmv_v_f_2xf64(double a, unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfmv_v_f_8xf32(float a, unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfmv_v_f_4xf64(double a, unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfmv_v_f_16xf32(float a, unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfmv_v_f_8xf64(double a, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a

2.7.6. Compute index vector

Description

Use these builtins to compute an index vector.

An index vector is useful for indexed loads and stores and for register gathers.

Instruction
vid.v
Prototypes
__epi_8xi8 __builtin_epi_vid_8xi8(unsigned long int gvl);
__epi_4xi16 __builtin_epi_vid_4xi16(unsigned long int gvl);
__epi_2xi32 __builtin_epi_vid_2xi32(unsigned long int gvl);
__epi_1xi64 __builtin_epi_vid_1xi64(unsigned long int gvl);
__epi_16xi8 __builtin_epi_vid_16xi8(unsigned long int gvl);
__epi_8xi16 __builtin_epi_vid_8xi16(unsigned long int gvl);
__epi_4xi32 __builtin_epi_vid_4xi32(unsigned long int gvl);
__epi_2xi64 __builtin_epi_vid_2xi64(unsigned long int gvl);
__epi_32xi8 __builtin_epi_vid_32xi8(unsigned long int gvl);
__epi_16xi16 __builtin_epi_vid_16xi16(unsigned long int gvl);
__epi_8xi32 __builtin_epi_vid_8xi32(unsigned long int gvl);
__epi_4xi64 __builtin_epi_vid_4xi64(unsigned long int gvl);
__epi_64xi8 __builtin_epi_vid_64xi8(unsigned long int gvl);
__epi_32xi16 __builtin_epi_vid_32xi16(unsigned long int gvl);
__epi_16xi32 __builtin_epi_vid_16xi32(unsigned long int gvl);
__epi_8xi64 __builtin_epi_vid_8xi64(unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = element
Masked prototypes
__epi_8xi8 __builtin_epi_vid_8xi8_mask(__epi_8xi8 merge, __epi_8xi1 mask,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vid_4xi16_mask(__epi_4xi16 merge, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vid_2xi32_mask(__epi_2xi32 merge, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vid_1xi64_mask(__epi_1xi64 merge, __epi_1xi1 mask,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vid_16xi8_mask(__epi_16xi8 merge, __epi_16xi1 mask,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vid_8xi16_mask(__epi_8xi16 merge, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vid_4xi32_mask(__epi_4xi32 merge, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vid_2xi64_mask(__epi_2xi64 merge, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vid_32xi8_mask(__epi_32xi8 merge, __epi_32xi1 mask,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vid_16xi16_mask(__epi_16xi16 merge, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vid_8xi32_mask(__epi_8xi32 merge, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vid_4xi64_mask(__epi_4xi64 merge, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vid_64xi8_mask(__epi_64xi8 merge, __epi_64xi1 mask,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vid_32xi16_mask(__epi_32xi16 merge, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vid_16xi32_mask(__epi_16xi32 merge, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vid_8xi64_mask(__epi_8xi64 merge, __epi_8xi1 mask,
                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = element
   else
     result[element] = merge[element]

2.7.7. Compute a prefix sum of a mask

Description

Use these builtins to compute a prefix sum over a mask vector. For the purpose of the sum, an element enabled by the mask counts as one and a disabled element counts as zero; each result element receives the sum of the mask elements that precede it (an exclusive prefix sum).

Instruction
viota.m
Prototypes
__epi_8xi8 __builtin_epi_viota_8xi8(__epi_8xi1 a, unsigned long int gvl);
__epi_4xi16 __builtin_epi_viota_4xi16(__epi_4xi1 a, unsigned long int gvl);
__epi_2xi32 __builtin_epi_viota_2xi32(__epi_2xi1 a, unsigned long int gvl);
__epi_1xi64 __builtin_epi_viota_1xi64(__epi_1xi1 a, unsigned long int gvl);
__epi_16xi8 __builtin_epi_viota_16xi8(__epi_16xi1 a, unsigned long int gvl);
__epi_8xi16 __builtin_epi_viota_8xi16(__epi_8xi1 a, unsigned long int gvl);
__epi_4xi32 __builtin_epi_viota_4xi32(__epi_4xi1 a, unsigned long int gvl);
__epi_2xi64 __builtin_epi_viota_2xi64(__epi_2xi1 a, unsigned long int gvl);
__epi_32xi8 __builtin_epi_viota_32xi8(__epi_32xi1 a, unsigned long int gvl);
__epi_16xi16 __builtin_epi_viota_16xi16(__epi_16xi1 a, unsigned long int gvl);
__epi_8xi32 __builtin_epi_viota_8xi32(__epi_8xi1 a, unsigned long int gvl);
__epi_4xi64 __builtin_epi_viota_4xi64(__epi_4xi1 a, unsigned long int gvl);
__epi_64xi8 __builtin_epi_viota_64xi8(__epi_64xi1 a, unsigned long int gvl);
__epi_32xi16 __builtin_epi_viota_32xi16(__epi_32xi1 a, unsigned long int gvl);
__epi_16xi32 __builtin_epi_viota_16xi32(__epi_16xi1 a, unsigned long int gvl);
__epi_8xi64 __builtin_epi_viota_8xi64(__epi_8xi1 a, unsigned long int gvl);
Operation
prefix_sum = 0
for element = 0 to gvl - 1
   result[element] = prefix_sum
   if a[element] then
      prefix_sum = prefix_sum + 1
Masked prototypes
__epi_8xi8 __builtin_epi_viota_8xi8_mask(__epi_8xi8 merge, __epi_8xi1 a,
                                         __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_viota_4xi16_mask(__epi_4xi16 merge, __epi_4xi1 a,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_viota_2xi32_mask(__epi_2xi32 merge, __epi_2xi1 a,
                                           __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_viota_1xi64_mask(__epi_1xi64 merge, __epi_1xi1 a,
                                           __epi_1xi1 mask,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_viota_16xi8_mask(__epi_16xi8 merge, __epi_16xi1 a,
                                           __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_viota_8xi16_mask(__epi_8xi16 merge, __epi_8xi1 a,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_viota_4xi32_mask(__epi_4xi32 merge, __epi_4xi1 a,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_viota_2xi64_mask(__epi_2xi64 merge, __epi_2xi1 a,
                                           __epi_2xi1 mask,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_viota_32xi8_mask(__epi_32xi8 merge, __epi_32xi1 a,
                                           __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_viota_16xi16_mask(__epi_16xi16 merge, __epi_16xi1 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_viota_8xi32_mask(__epi_8xi32 merge, __epi_8xi1 a,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_viota_4xi64_mask(__epi_4xi64 merge, __epi_4xi1 a,
                                           __epi_4xi1 mask,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_viota_64xi8_mask(__epi_64xi8 merge, __epi_64xi1 a,
                                           __epi_64xi1 mask,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_viota_32xi16_mask(__epi_32xi16 merge, __epi_32xi1 a,
                                             __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_viota_16xi32_mask(__epi_16xi32 merge, __epi_16xi1 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_viota_8xi64_mask(__epi_8xi64 merge, __epi_8xi1 a,
                                           __epi_8xi1 mask,
                                           unsigned long int gvl);
Masked operation
prefix_sum = 0
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = prefix_sum
     if a[element] then
        prefix_sum = prefix_sum + 1
   else
     result[element] = merge[element]

2.7.8. Elementwise integer merge

Description

Use these builtins to merge two integer vectors under the control of a mask vector.

Instruction
vmerge.vv
Prototypes
__epi_8xi8 __builtin_epi_vmerge_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                     __epi_8xi1 merge, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmerge_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                       __epi_4xi1 merge, unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmerge_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                       __epi_2xi1 merge, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmerge_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                       __epi_1xi1 merge, unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmerge_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                       __epi_16xi1 merge,
                                       unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmerge_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                       __epi_8xi1 merge, unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmerge_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                       __epi_4xi1 merge, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmerge_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                       __epi_2xi1 merge, unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmerge_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                       __epi_32xi1 merge,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmerge_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                         __epi_16xi1 merge,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmerge_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                       __epi_8xi1 merge, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmerge_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                       __epi_4xi1 merge, unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmerge_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                       __epi_64xi1 merge,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmerge_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                         __epi_32xi1 merge,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmerge_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                         __epi_16xi1 merge,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmerge_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                       __epi_8xi1 merge, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   if merge[element] then
     result[element] = b[element]
   else
     result[element] = a[element]

2.7.9. Set first integer element of integer vector

Description

Use these builtins to set the first element of a vector to a given value.

Instruction
vmv.s.x
Prototypes
__epi_8xi8 __builtin_epi_vmv_s_x_8xi8(__epi_8xi8 a, signed char b,
                                      unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmv_s_x_4xi16(__epi_4xi16 a, signed short int b,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmv_s_x_2xi32(__epi_2xi32 a, signed int b,
                                        unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmv_s_x_1xi64(__epi_1xi64 a, signed long int b,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmv_s_x_16xi8(__epi_16xi8 a, signed char b,
                                        unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmv_s_x_8xi16(__epi_8xi16 a, signed short int b,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmv_s_x_4xi32(__epi_4xi32 a, signed int b,
                                        unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmv_s_x_2xi64(__epi_2xi64 a, signed long int b,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmv_s_x_32xi8(__epi_32xi8 a, signed char b,
                                        unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmv_s_x_16xi16(__epi_16xi16 a, signed short int b,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmv_s_x_8xi32(__epi_8xi32 a, signed int b,
                                        unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmv_s_x_4xi64(__epi_4xi64 a, signed long int b,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmv_s_x_64xi8(__epi_64xi8 a, signed char b,
                                        unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmv_s_x_32xi16(__epi_32xi16 a, signed short int b,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmv_s_x_16xi32(__epi_16xi32 a, signed int b,
                                          unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmv_s_x_8xi64(__epi_8xi64 a, signed long int b,
                                        unsigned long int gvl);
Operation
result[0] = b
result[1 : gvl - 1] = a[1 : gvl - 1]

2.7.10. Broadcast an integer scalar to all the elements of a vector

Description

Use these builtins to create a vector where all the elements have the value of a given scalar.

Instruction
vmv.v.x
Prototypes
__epi_8xi8 __builtin_epi_vmv_v_x_8xi8(signed char a, unsigned long int gvl);
__epi_4xi16 __builtin_epi_vmv_v_x_4xi16(signed short int a,
                                        unsigned long int gvl);
__epi_2xi32 __builtin_epi_vmv_v_x_2xi32(signed int a, unsigned long int gvl);
__epi_1xi64 __builtin_epi_vmv_v_x_1xi64(signed long int a,
                                        unsigned long int gvl);
__epi_16xi8 __builtin_epi_vmv_v_x_16xi8(signed char a, unsigned long int gvl);
__epi_8xi16 __builtin_epi_vmv_v_x_8xi16(signed short int a,
                                        unsigned long int gvl);
__epi_4xi32 __builtin_epi_vmv_v_x_4xi32(signed int a, unsigned long int gvl);
__epi_2xi64 __builtin_epi_vmv_v_x_2xi64(signed long int a,
                                        unsigned long int gvl);
__epi_32xi8 __builtin_epi_vmv_v_x_32xi8(signed char a, unsigned long int gvl);
__epi_16xi16 __builtin_epi_vmv_v_x_16xi16(signed short int a,
                                          unsigned long int gvl);
__epi_8xi32 __builtin_epi_vmv_v_x_8xi32(signed int a, unsigned long int gvl);
__epi_4xi64 __builtin_epi_vmv_v_x_4xi64(signed long int a,
                                        unsigned long int gvl);
__epi_64xi8 __builtin_epi_vmv_v_x_64xi8(signed char a, unsigned long int gvl);
__epi_32xi16 __builtin_epi_vmv_v_x_32xi16(signed short int a,
                                          unsigned long int gvl);
__epi_16xi32 __builtin_epi_vmv_v_x_16xi32(signed int a, unsigned long int gvl);
__epi_8xi64 __builtin_epi_vmv_v_x_8xi64(signed long int a,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = a

2.7.11. Extract first element of an integer vector

Description

Use these builtins to extract the first element of a vector.

This is useful when the result of some operation, like a reduction, is found in the first element of a vector.

Instruction
vmv.x.s
Prototypes
signed char __builtin_epi_vmv_x_s_8xi8(__epi_8xi8 a);
signed short int __builtin_epi_vmv_x_s_4xi16(__epi_4xi16 a);
signed int __builtin_epi_vmv_x_s_2xi32(__epi_2xi32 a);
signed long int __builtin_epi_vmv_x_s_1xi64(__epi_1xi64 a);
signed char __builtin_epi_vmv_x_s_16xi8(__epi_16xi8 a);
signed short int __builtin_epi_vmv_x_s_8xi16(__epi_8xi16 a);
signed int __builtin_epi_vmv_x_s_4xi32(__epi_4xi32 a);
signed long int __builtin_epi_vmv_x_s_2xi64(__epi_2xi64 a);
signed char __builtin_epi_vmv_x_s_32xi8(__epi_32xi8 a);
signed short int __builtin_epi_vmv_x_s_16xi16(__epi_16xi16 a);
signed int __builtin_epi_vmv_x_s_8xi32(__epi_8xi32 a);
signed long int __builtin_epi_vmv_x_s_4xi64(__epi_4xi64 a);
signed char __builtin_epi_vmv_x_s_64xi8(__epi_64xi8 a);
signed short int __builtin_epi_vmv_x_s_32xi16(__epi_32xi16 a);
signed int __builtin_epi_vmv_x_s_16xi32(__epi_16xi32 a);
signed long int __builtin_epi_vmv_x_s_8xi64(__epi_8xi64 a);
Operation
result = a[0];

2.7.12. Register gather

Description

Use these builtins to permute the elements of a vector based on a vector of indices.

Instruction
vrgather.vv
Prototypes
__epi_8xi8 __builtin_epi_vrgather_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vrgather_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vrgather_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vrgather_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                         unsigned long int gvl);
__epi_2xf32 __builtin_epi_vrgather_2xf32(__epi_2xf32 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vrgather_1xf64(__epi_1xf64 a, __epi_1xi64 b,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vrgather_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vrgather_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vrgather_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vrgather_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vrgather_4xf32(__epi_4xf32 a, __epi_4xi32 b,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vrgather_2xf64(__epi_2xf64 a, __epi_2xi64 b,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vrgather_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vrgather_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vrgather_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vrgather_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vrgather_8xf32(__epi_8xf32 a, __epi_8xi32 b,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vrgather_4xf64(__epi_4xf64 a, __epi_4xi64 b,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vrgather_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vrgather_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vrgather_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vrgather_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vrgather_16xf32(__epi_16xf32 a, __epi_16xi32 b,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vrgather_8xf64(__epi_8xf64 a, __epi_8xi64 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   if b[element] >= VLMAX then
     result[element] = 0
   else
     result[element] = a[b[element]]
Masked prototypes
__epi_8xi8 __builtin_epi_vrgather_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                            __epi_8xi8 b, __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vrgather_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                              __epi_4xi16 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vrgather_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                              __epi_2xi32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vrgather_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                              __epi_1xi64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_2xf32 __builtin_epi_vrgather_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                              __epi_2xi32 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vrgather_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                              __epi_1xi64 b, __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vrgather_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                              __epi_16xi8 b, __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vrgather_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                              __epi_8xi16 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vrgather_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                              __epi_4xi32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vrgather_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                              __epi_2xi64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vrgather_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                              __epi_4xi32 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vrgather_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              __epi_2xi64 b, __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vrgather_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                              __epi_32xi8 b, __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vrgather_16xi16_mask(__epi_16xi16 merge,
                                                __epi_16xi16 a, __epi_16xi16 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vrgather_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                              __epi_8xi32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vrgather_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                              __epi_4xi64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vrgather_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                              __epi_8xi32 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vrgather_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              __epi_4xi64 b, __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vrgather_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                              __epi_64xi8 b, __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vrgather_32xi16_mask(__epi_32xi16 merge,
                                                __epi_32xi16 a, __epi_32xi16 b,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vrgather_16xi32_mask(__epi_16xi32 merge,
                                                __epi_16xi32 a, __epi_16xi32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vrgather_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                              __epi_8xi64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vrgather_16xf32_mask(__epi_16xf32 merge,
                                                __epi_16xf32 a, __epi_16xi32 b,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vrgather_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              __epi_8xi64 b, __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     if b[element] >= VLMAX then
       result[element] = 0
     else
       result[element] = a[b[element]]
   else
     result[element] = merge[element]

2.7.13. Slide down elements of a vector one position

Description

Use these builtins to "slide down" the elements of a vector one position: element i of the result receives element i + 1 of the input, and the scalar value is written into the last element (position gvl - 1).

Instruction
vslide1down.vx
Prototypes
__epi_8xi8 __builtin_epi_vslide1down_8xi8(__epi_8xi8 a, unsigned long int value,
                                          unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslide1down_4xi16(__epi_4xi16 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslide1down_2xi32(__epi_2xi32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslide1down_1xi64(__epi_1xi64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslide1down_2xf32(__epi_2xf32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslide1down_1xf64(__epi_1xf64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslide1down_16xi8(__epi_16xi8 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslide1down_8xi16(__epi_8xi16 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslide1down_4xi32(__epi_4xi32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslide1down_2xi64(__epi_2xi64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslide1down_4xf32(__epi_4xf32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslide1down_2xf64(__epi_2xf64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslide1down_32xi8(__epi_32xi8 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslide1down_16xi16(__epi_16xi16 a,
                                              unsigned long int value,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslide1down_8xi32(__epi_8xi32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslide1down_4xi64(__epi_4xi64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslide1down_8xf32(__epi_8xf32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslide1down_4xf64(__epi_4xf64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslide1down_64xi8(__epi_64xi8 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslide1down_32xi16(__epi_32xi16 a,
                                              unsigned long int value,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslide1down_16xi32(__epi_16xi32 a,
                                              unsigned long int value,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslide1down_8xi64(__epi_8xi64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslide1down_16xf32(__epi_16xf32 a,
                                              unsigned long int value,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslide1down_8xf64(__epi_8xf64 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
Operation
for element = 0 to gvl - 2
     result[element] = a[element + 1]
result[gvl - 1] = value
Masked prototypes
__epi_8xi8 __builtin_epi_vslide1down_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                               unsigned long int value,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslide1down_4xi16_mask(__epi_4xi16 merge,
                                                 __epi_4xi16 a,
                                                 unsigned long int value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslide1down_2xi32_mask(__epi_2xi32 merge,
                                                 __epi_2xi32 a,
                                                 unsigned long int value,
                                                 __epi_2xi1 mask,
                                                 unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslide1down_1xi64_mask(__epi_1xi64 merge,
                                                 __epi_1xi64 a,
                                                 unsigned long int value,
                                                 __epi_1xi1 mask,
                                                 unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslide1down_2xf32_mask(__epi_2xf32 merge,
                                                 __epi_2xf32 a,
                                                 unsigned long int value,
                                                 __epi_2xi1 mask,
                                                 unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslide1down_1xf64_mask(__epi_1xf64 merge,
                                                 __epi_1xf64 a,
                                                 unsigned long int value,
                                                 __epi_1xi1 mask,
                                                 unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslide1down_16xi8_mask(__epi_16xi8 merge,
                                                 __epi_16xi8 a,
                                                 unsigned long int value,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslide1down_8xi16_mask(__epi_8xi16 merge,
                                                 __epi_8xi16 a,
                                                 unsigned long int value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslide1down_4xi32_mask(__epi_4xi32 merge,
                                                 __epi_4xi32 a,
                                                 unsigned long int value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslide1down_2xi64_mask(__epi_2xi64 merge,
                                                 __epi_2xi64 a,
                                                 unsigned long int value,
                                                 __epi_2xi1 mask,
                                                 unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslide1down_4xf32_mask(__epi_4xf32 merge,
                                                 __epi_4xf32 a,
                                                 unsigned long int value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslide1down_2xf64_mask(__epi_2xf64 merge,
                                                 __epi_2xf64 a,
                                                 unsigned long int value,
                                                 __epi_2xi1 mask,
                                                 unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslide1down_32xi8_mask(__epi_32xi8 merge,
                                                 __epi_32xi8 a,
                                                 unsigned long int value,
                                                 __epi_32xi1 mask,
                                                 unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslide1down_16xi16_mask(__epi_16xi16 merge,
                                                   __epi_16xi16 a,
                                                   unsigned long int value,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslide1down_8xi32_mask(__epi_8xi32 merge,
                                                 __epi_8xi32 a,
                                                 unsigned long int value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslide1down_4xi64_mask(__epi_4xi64 merge,
                                                 __epi_4xi64 a,
                                                 unsigned long int value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslide1down_8xf32_mask(__epi_8xf32 merge,
                                                 __epi_8xf32 a,
                                                 unsigned long int value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslide1down_4xf64_mask(__epi_4xf64 merge,
                                                 __epi_4xf64 a,
                                                 unsigned long int value,
                                                 __epi_4xi1 mask,
                                                 unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslide1down_64xi8_mask(__epi_64xi8 merge,
                                                 __epi_64xi8 a,
                                                 unsigned long int value,
                                                 __epi_64xi1 mask,
                                                 unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslide1down_32xi16_mask(__epi_32xi16 merge,
                                                   __epi_32xi16 a,
                                                   unsigned long int value,
                                                   __epi_32xi1 mask,
                                                   unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslide1down_16xi32_mask(__epi_16xi32 merge,
                                                   __epi_16xi32 a,
                                                   unsigned long int value,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslide1down_8xi64_mask(__epi_8xi64 merge,
                                                 __epi_8xi64 a,
                                                 unsigned long int value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslide1down_16xf32_mask(__epi_16xf32 merge,
                                                   __epi_16xf32 a,
                                                   unsigned long int value,
                                                   __epi_16xi1 mask,
                                                   unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslide1down_8xf64_mask(__epi_8xf64 merge,
                                                 __epi_8xf64 a,
                                                 unsigned long int value,
                                                 __epi_8xi1 mask,
                                                 unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    if element < gvl - 1 then
       result[element] = a[element + 1]
    else
       result[element] = value
  else
    result[element] = merge[element]

2.7.14. Slide up elements of a vector one position

Description

Use these builtins to "slide up" the elements of a vector one position: each element moves one position towards higher indices and the scalar value fills element 0.

Instruction
vslide1up.vx
Prototypes
__epi_8xi8 __builtin_epi_vslide1up_8xi8(__epi_8xi8 a, unsigned long int value,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslide1up_4xi16(__epi_4xi16 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslide1up_2xi32(__epi_2xi32 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslide1up_1xi64(__epi_1xi64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslide1up_2xf32(__epi_2xf32 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslide1up_1xf64(__epi_1xf64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslide1up_16xi8(__epi_16xi8 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslide1up_8xi16(__epi_8xi16 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslide1up_4xi32(__epi_4xi32 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslide1up_2xi64(__epi_2xi64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslide1up_4xf32(__epi_4xf32 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslide1up_2xf64(__epi_2xf64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslide1up_32xi8(__epi_32xi8 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslide1up_16xi16(__epi_16xi16 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslide1up_8xi32(__epi_8xi32 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslide1up_4xi64(__epi_4xi64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslide1up_8xf32(__epi_8xf32 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslide1up_4xf64(__epi_4xf64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslide1up_64xi8(__epi_64xi8 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslide1up_32xi16(__epi_32xi16 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslide1up_16xi32(__epi_16xi32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslide1up_8xi64(__epi_8xi64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslide1up_16xf32(__epi_16xf32 a,
                                            unsigned long int value,
                                            unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslide1up_8xf64(__epi_8xf64 a,
                                          unsigned long int value,
                                          unsigned long int gvl);
Operation
result[0] = value
for element = 1 to gvl - 1
  result[element] = a[element - 1]
Masked prototypes
__epi_8xi8 __builtin_epi_vslide1up_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                             unsigned long int value,
                                             __epi_8xi1 mask,
                                             unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslide1up_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                               unsigned long int value,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslide1up_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                               unsigned long int value,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslide1up_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                               unsigned long int value,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslide1up_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                               unsigned long int value,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslide1up_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                               unsigned long int value,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslide1up_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                               unsigned long int value,
                                               __epi_16xi1 mask,
                                               unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslide1up_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                               unsigned long int value,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslide1up_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                               unsigned long int value,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslide1up_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                               unsigned long int value,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslide1up_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                               unsigned long int value,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslide1up_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                               unsigned long int value,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslide1up_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                               unsigned long int value,
                                               __epi_32xi1 mask,
                                               unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslide1up_16xi16_mask(__epi_16xi16 merge,
                                                 __epi_16xi16 a,
                                                 unsigned long int value,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslide1up_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                               unsigned long int value,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslide1up_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                               unsigned long int value,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslide1up_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                               unsigned long int value,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslide1up_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                               unsigned long int value,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslide1up_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                               unsigned long int value,
                                               __epi_64xi1 mask,
                                               unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslide1up_32xi16_mask(__epi_32xi16 merge,
                                                 __epi_32xi16 a,
                                                 unsigned long int value,
                                                 __epi_32xi1 mask,
                                                 unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslide1up_16xi32_mask(__epi_16xi32 merge,
                                                 __epi_16xi32 a,
                                                 unsigned long int value,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslide1up_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                               unsigned long int value,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslide1up_16xf32_mask(__epi_16xf32 merge,
                                                 __epi_16xf32 a,
                                                 unsigned long int value,
                                                 __epi_16xi1 mask,
                                                 unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslide1up_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                               unsigned long int value,
                                               __epi_8xi1 mask,
                                               unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    if element == 0 then
      result[element] = value
    else
      result[element] = a[element - 1]
  else
    result[element] = merge[element]

2.7.15. Slide down elements of a vector

Description

Use these builtins to "slide down" the elements of a vector by offset positions; elements that would be sourced from beyond VLMAX are set to zero.

Instruction
vslidedown.vx
Prototypes
__epi_8xi8 __builtin_epi_vslidedown_8xi8(__epi_8xi8 a, unsigned long int offset,
                                         unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslidedown_4xi16(__epi_4xi16 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslidedown_2xi32(__epi_2xi32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslidedown_1xi64(__epi_1xi64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslidedown_2xf32(__epi_2xf32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslidedown_1xf64(__epi_1xf64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslidedown_16xi8(__epi_16xi8 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslidedown_8xi16(__epi_8xi16 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslidedown_4xi32(__epi_4xi32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslidedown_2xi64(__epi_2xi64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslidedown_4xf32(__epi_4xf32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslidedown_2xf64(__epi_2xf64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslidedown_32xi8(__epi_32xi8 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslidedown_16xi16(__epi_16xi16 a,
                                             unsigned long int offset,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslidedown_8xi32(__epi_8xi32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslidedown_4xi64(__epi_4xi64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslidedown_8xf32(__epi_8xf32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslidedown_4xf64(__epi_4xf64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslidedown_64xi8(__epi_64xi8 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslidedown_32xi16(__epi_32xi16 a,
                                             unsigned long int offset,
                                             unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslidedown_16xi32(__epi_16xi32 a,
                                             unsigned long int offset,
                                             unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslidedown_8xi64(__epi_8xi64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslidedown_16xf32(__epi_16xf32 a,
                                             unsigned long int offset,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslidedown_8xf64(__epi_8xf64 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  if element + offset < VLMAX then
     result[element] = a[element + offset]
  else
     result[element] = 0
Masked prototypes
__epi_8xi8 __builtin_epi_vslidedown_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                              unsigned long int offset,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslidedown_4xi16_mask(__epi_4xi16 merge,
                                                __epi_4xi16 a,
                                                unsigned long int offset,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslidedown_2xi32_mask(__epi_2xi32 merge,
                                                __epi_2xi32 a,
                                                unsigned long int offset,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslidedown_1xi64_mask(__epi_1xi64 merge,
                                                __epi_1xi64 a,
                                                unsigned long int offset,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslidedown_2xf32_mask(__epi_2xf32 merge,
                                                __epi_2xf32 a,
                                                unsigned long int offset,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslidedown_1xf64_mask(__epi_1xf64 merge,
                                                __epi_1xf64 a,
                                                unsigned long int offset,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslidedown_16xi8_mask(__epi_16xi8 merge,
                                                __epi_16xi8 a,
                                                unsigned long int offset,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslidedown_8xi16_mask(__epi_8xi16 merge,
                                                __epi_8xi16 a,
                                                unsigned long int offset,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslidedown_4xi32_mask(__epi_4xi32 merge,
                                                __epi_4xi32 a,
                                                unsigned long int offset,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslidedown_2xi64_mask(__epi_2xi64 merge,
                                                __epi_2xi64 a,
                                                unsigned long int offset,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslidedown_4xf32_mask(__epi_4xf32 merge,
                                                __epi_4xf32 a,
                                                unsigned long int offset,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslidedown_2xf64_mask(__epi_2xf64 merge,
                                                __epi_2xf64 a,
                                                unsigned long int offset,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslidedown_32xi8_mask(__epi_32xi8 merge,
                                                __epi_32xi8 a,
                                                unsigned long int offset,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslidedown_16xi16_mask(__epi_16xi16 merge,
                                                  __epi_16xi16 a,
                                                  unsigned long int offset,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslidedown_8xi32_mask(__epi_8xi32 merge,
                                                __epi_8xi32 a,
                                                unsigned long int offset,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslidedown_4xi64_mask(__epi_4xi64 merge,
                                                __epi_4xi64 a,
                                                unsigned long int offset,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslidedown_8xf32_mask(__epi_8xf32 merge,
                                                __epi_8xf32 a,
                                                unsigned long int offset,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslidedown_4xf64_mask(__epi_4xf64 merge,
                                                __epi_4xf64 a,
                                                unsigned long int offset,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslidedown_64xi8_mask(__epi_64xi8 merge,
                                                __epi_64xi8 a,
                                                unsigned long int offset,
                                                __epi_64xi1 mask,
                                                unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslidedown_32xi16_mask(__epi_32xi16 merge,
                                                  __epi_32xi16 a,
                                                  unsigned long int offset,
                                                  __epi_32xi1 mask,
                                                  unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslidedown_16xi32_mask(__epi_16xi32 merge,
                                                  __epi_16xi32 a,
                                                  unsigned long int offset,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslidedown_8xi64_mask(__epi_8xi64 merge,
                                                __epi_8xi64 a,
                                                unsigned long int offset,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslidedown_16xf32_mask(__epi_16xf32 merge,
                                                  __epi_16xf32 a,
                                                  unsigned long int offset,
                                                  __epi_16xi1 mask,
                                                  unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslidedown_8xf64_mask(__epi_8xf64 merge,
                                                __epi_8xf64 a,
                                                unsigned long int offset,
                                                __epi_8xi1 mask,
                                                unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    if element + offset < VLMAX then
       result[element] = a[element + offset]
    else
       result[element] = 0
  else
    result[element] = merge[element]

2.7.16. Slide up elements of a vector

Description

Use these builtins to "slide up" the elements of a vector by a given offset.

Instruction
vslideup.vx
Prototypes
__epi_8xi8 __builtin_epi_vslideup_8xi8(__epi_8xi8 a, unsigned long int offset,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslideup_4xi16(__epi_4xi16 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslideup_2xi32(__epi_2xi32 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslideup_1xi64(__epi_1xi64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslideup_2xf32(__epi_2xf32 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslideup_1xf64(__epi_1xf64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslideup_16xi8(__epi_16xi8 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslideup_8xi16(__epi_8xi16 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslideup_4xi32(__epi_4xi32 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslideup_2xi64(__epi_2xi64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslideup_4xf32(__epi_4xf32 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslideup_2xf64(__epi_2xf64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslideup_32xi8(__epi_32xi8 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslideup_16xi16(__epi_16xi16 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslideup_8xi32(__epi_8xi32 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslideup_4xi64(__epi_4xi64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslideup_8xf32(__epi_8xf32 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslideup_4xf64(__epi_4xf64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslideup_64xi8(__epi_64xi8 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslideup_32xi16(__epi_32xi16 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslideup_16xi32(__epi_16xi32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslideup_8xi64(__epi_8xi64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslideup_16xf32(__epi_16xf32 a,
                                           unsigned long int offset,
                                           unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslideup_8xf64(__epi_8xf64 a,
                                         unsigned long int offset,
                                         unsigned long int gvl);
Operation
for element = offset to gvl - 1
  result[element] = a[element - offset]
Masked prototypes
__epi_8xi8 __builtin_epi_vslideup_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                            unsigned long int offset,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi16 __builtin_epi_vslideup_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                              unsigned long int offset,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi32 __builtin_epi_vslideup_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                              unsigned long int offset,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xi64 __builtin_epi_vslideup_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                              unsigned long int offset,
                                              __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_2xf32 __builtin_epi_vslideup_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                              unsigned long int offset,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_1xf64 __builtin_epi_vslideup_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                              unsigned long int offset,
                                              __epi_1xi1 mask,
                                              unsigned long int gvl);
__epi_16xi8 __builtin_epi_vslideup_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                              unsigned long int offset,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi16 __builtin_epi_vslideup_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                              unsigned long int offset,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi32 __builtin_epi_vslideup_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                              unsigned long int offset,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xi64 __builtin_epi_vslideup_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                              unsigned long int offset,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_4xf32 __builtin_epi_vslideup_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                              unsigned long int offset,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_2xf64 __builtin_epi_vslideup_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                              unsigned long int offset,
                                              __epi_2xi1 mask,
                                              unsigned long int gvl);
__epi_32xi8 __builtin_epi_vslideup_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                              unsigned long int offset,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi16 __builtin_epi_vslideup_16xi16_mask(__epi_16xi16 merge,
                                                __epi_16xi16 a,
                                                unsigned long int offset,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vslideup_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                              unsigned long int offset,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi64 __builtin_epi_vslideup_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                              unsigned long int offset,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_8xf32 __builtin_epi_vslideup_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                              unsigned long int offset,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xf64 __builtin_epi_vslideup_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                              unsigned long int offset,
                                              __epi_4xi1 mask,
                                              unsigned long int gvl);
__epi_64xi8 __builtin_epi_vslideup_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                              unsigned long int offset,
                                              __epi_64xi1 mask,
                                              unsigned long int gvl);
__epi_32xi16 __builtin_epi_vslideup_32xi16_mask(__epi_32xi16 merge,
                                                __epi_32xi16 a,
                                                unsigned long int offset,
                                                __epi_32xi1 mask,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vslideup_16xi32_mask(__epi_16xi32 merge,
                                                __epi_16xi32 a,
                                                unsigned long int offset,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xi64 __builtin_epi_vslideup_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                              unsigned long int offset,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_16xf32 __builtin_epi_vslideup_16xf32_mask(__epi_16xf32 merge,
                                                __epi_16xf32 a,
                                                unsigned long int offset,
                                                __epi_16xi1 mask,
                                                unsigned long int gvl);
__epi_8xf64 __builtin_epi_vslideup_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                              unsigned long int offset,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
Masked operation
for element = offset to gvl - 1
  if mask[element] then
    result[element] = a[element - offset]
  else
    result[element] = merge[element]

2.7.17. Broadcast an element of a vector to all the elements of a vector

Description

Use these builtins to create a vector in which all the elements take the value of a single element of another vector.

Instruction
vrgather.vx / vrgather.vi
Prototypes
__epi_8xi8 __builtin_epi_vsplat_8xi8(__epi_8xi8 a, unsigned long int b,
                                     unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsplat_4xi16(__epi_4xi16 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsplat_2xi32(__epi_2xi32 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsplat_1xi64(__epi_1xi64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_2xf32 __builtin_epi_vsplat_2xf32(__epi_2xf32 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_1xf64 __builtin_epi_vsplat_1xf64(__epi_1xf64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsplat_16xi8(__epi_16xi8 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsplat_8xi16(__epi_8xi16 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsplat_4xi32(__epi_4xi32 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsplat_2xi64(__epi_2xi64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_4xf32 __builtin_epi_vsplat_4xf32(__epi_4xf32 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_2xf64 __builtin_epi_vsplat_2xf64(__epi_2xf64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsplat_32xi8(__epi_32xi8 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsplat_16xi16(__epi_16xi16 a, unsigned long int b,
                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsplat_8xi32(__epi_8xi32 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsplat_4xi64(__epi_4xi64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_8xf32 __builtin_epi_vsplat_8xf32(__epi_8xf32 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_4xf64 __builtin_epi_vsplat_4xf64(__epi_4xf64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsplat_64xi8(__epi_64xi8 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsplat_32xi16(__epi_32xi16 a, unsigned long int b,
                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsplat_16xi32(__epi_16xi32 a, unsigned long int b,
                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsplat_8xi64(__epi_8xi64 a, unsigned long int b,
                                       unsigned long int gvl);
__epi_16xf32 __builtin_epi_vsplat_16xf32(__epi_16xf32 a, unsigned long int b,
                                         unsigned long int gvl);
__epi_8xf64 __builtin_epi_vsplat_8xf64(__epi_8xf64 a, unsigned long int b,
                                       unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   if b >= VLMAX then
     result[element] = 0
   else
     result[element] = a[b]
Masked prototypes
__epi_8xi8 __builtin_epi_vsplat_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                          unsigned long int b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsplat_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                            unsigned long int b,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsplat_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                            unsigned long int b,
                                            __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsplat_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                            unsigned long int b,
                                            __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_2xf32 __builtin_epi_vsplat_2xf32_mask(__epi_2xf32 merge, __epi_2xf32 a,
                                            unsigned long int b,
                                            __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_1xf64 __builtin_epi_vsplat_1xf64_mask(__epi_1xf64 merge, __epi_1xf64 a,
                                            unsigned long int b,
                                            __epi_1xi1 mask,
                                            unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsplat_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                            unsigned long int b,
                                            __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsplat_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                            unsigned long int b,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsplat_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                            unsigned long int b,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsplat_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                            unsigned long int b,
                                            __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_4xf32 __builtin_epi_vsplat_4xf32_mask(__epi_4xf32 merge, __epi_4xf32 a,
                                            unsigned long int b,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_2xf64 __builtin_epi_vsplat_2xf64_mask(__epi_2xf64 merge, __epi_2xf64 a,
                                            unsigned long int b,
                                            __epi_2xi1 mask,
                                            unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsplat_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                            unsigned long int b,
                                            __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsplat_16xi16_mask(__epi_16xi16 merge,
                                              __epi_16xi16 a,
                                              unsigned long int b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsplat_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                            unsigned long int b,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsplat_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                            unsigned long int b,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_8xf32 __builtin_epi_vsplat_8xf32_mask(__epi_8xf32 merge, __epi_8xf32 a,
                                            unsigned long int b,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_4xf64 __builtin_epi_vsplat_4xf64_mask(__epi_4xf64 merge, __epi_4xf64 a,
                                            unsigned long int b,
                                            __epi_4xi1 mask,
                                            unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsplat_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                            unsigned long int b,
                                            __epi_64xi1 mask,
                                            unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsplat_32xi16_mask(__epi_32xi16 merge,
                                              __epi_32xi16 a,
                                              unsigned long int b,
                                              __epi_32xi1 mask,
                                              unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsplat_16xi32_mask(__epi_16xi32 merge,
                                              __epi_16xi32 a,
                                              unsigned long int b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsplat_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                            unsigned long int b,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
__epi_16xf32 __builtin_epi_vsplat_16xf32_mask(__epi_16xf32 merge,
                                              __epi_16xf32 a,
                                              unsigned long int b,
                                              __epi_16xi1 mask,
                                              unsigned long int gvl);
__epi_8xf64 __builtin_epi_vsplat_8xf64_mask(__epi_8xf64 merge, __epi_8xf64 a,
                                            unsigned long int b,
                                            __epi_8xi1 mask,
                                            unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     if b >= VLMAX then
       result[element] = 0
     else
       result[element] = a[b]
   else
     result[element] = merge[element]

2.8. Segmented load/stores

2.8.1. Segmented load of tuples of two elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg2e.v
Prototypes
__epi_8xi8x2 __builtin_epi_vlseg2_8xi8x2(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vlseg2_4xi16x2(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vlseg2_2xi32x2(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vlseg2_1xi64x2(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vlseg2_2xf32x2(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vlseg2_1xf64x2(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x2 __builtin_epi_vlseg2_8xi8x2_mask(__epi_8xi8x2 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vlseg2_4xi16x2_mask(__epi_4xi16x2 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vlseg2_2xi32x2_mask(__epi_2xi32x2 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vlseg2_1xi64x2_mask(__epi_1xi64x2 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vlseg2_2xf32x2_mask(__epi_2xf32x2 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vlseg2_1xf64x2_mask(__epi_1xf64x2 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);

2.8.2. Indexed segmented load of tuples of two elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not located consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vlxseg2e.v
Prototypes
__epi_8xi8x2 __builtin_epi_vlseg2_indexed_8xi8x2(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x2
__builtin_epi_vlseg2_indexed_4xi16x2(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vlseg2_indexed_2xi32x2(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x2
__builtin_epi_vlseg2_indexed_1xi64x2(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vlseg2_indexed_2xf32x2(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vlseg2_indexed_1xf64x2(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x2 __builtin_epi_vlseg2_indexed_8xi8x2_mask(
    __epi_8xi8x2 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vlseg2_indexed_4xi16x2_mask(
    __epi_4xi16x2 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vlseg2_indexed_2xi32x2_mask(
    __epi_2xi32x2 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vlseg2_indexed_1xi64x2_mask(
    __epi_1xi64x2 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vlseg2_indexed_2xf32x2_mask(__epi_2xf32x2 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vlseg2_indexed_1xf64x2_mask(__epi_1xf64x2 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);
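The Operation pseudocode above can be mirrored in plain scalar C. The sketch below is a hypothetical reference implementation (the function name `vlseg2_indexed_f32_ref` is invented for illustration and is not part of the EPI builtins); it shows, for float pairs, how each byte offset selects a tuple and how the two components end up in different output vectors.

```c
#include <stddef.h>

/* Scalar reference of an indexed segmented load of float pairs
   (cf. __builtin_epi_vlseg2_indexed_2xf32x2): for each element,
   the byte offset index[i] selects a pair {a, b} relative to the
   base address; a goes to v0[i] and b to v1[i] (AoS -> SoA). */
static void vlseg2_indexed_f32_ref(const float *address,
                                   const long *index, /* byte offsets */
                                   float *v0, float *v1,
                                   size_t gvl) {
  for (size_t i = 0; i < gvl; ++i) {
    const char *p = (const char *)address + index[i];
    v0[i] = *(const float *)p;                    /* first component  */
    v1[i] = *(const float *)(p + sizeof(float));  /* second component */
  }
}
```

Note that, as in the pseudocode, the indices are byte offsets rather than element counts, so gathering the pair stored at element 4 of a float array requires the offset 16.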

2.8.3. Strided segmented load of tuples of two elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but separated by a constant number of bytes.

Instruction
vlsseg2e.v
Prototypes
__epi_8xi8x2 __builtin_epi_vlseg2_strided_8xi8x2(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x2
__builtin_epi_vlseg2_strided_4xi16x2(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vlseg2_strided_2xi32x2(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x2
__builtin_epi_vlseg2_strided_1xi64x2(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vlseg2_strided_2xf32x2(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vlseg2_strided_1xf64x2(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x2 __builtin_epi_vlseg2_strided_8xi8x2_mask(
    __epi_8xi8x2 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vlseg2_strided_4xi16x2_mask(
    __epi_4xi16x2 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vlseg2_strided_2xi32x2_mask(
    __epi_2xi32x2 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vlseg2_strided_1xi64x2_mask(
    __epi_1xi64x2 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vlseg2_strided_2xf32x2_mask(__epi_2xf32x2 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vlseg2_strided_1xf64x2_mask(__epi_1xf64x2 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);
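A scalar C sketch of the strided variant follows (the function name `vlseg2_strided_f32_ref` is hypothetical, not an EPI builtin): successive tuples are found a fixed number of bytes apart, so the stride typically equals the size of the surrounding structure.

```c
#include <stddef.h>

/* Scalar reference of a strided segmented load of float pairs
   (cf. __builtin_epi_vlseg2_strided_2xf32x2): tuple i starts
   stride bytes after tuple i-1; its two components are split
   into v0 and v1 (AoS -> SoA). */
static void vlseg2_strided_f32_ref(const float *address,
                                   long stride, /* bytes between tuples */
                                   float *v0, float *v1,
                                   size_t gvl) {
  for (size_t i = 0; i < gvl; ++i) {
    const char *p = (const char *)address + (long)i * stride;
    v0[i] = *(const float *)p;
    v1[i] = *(const float *)(p + sizeof(float));
  }
}
```

For example, loading the pairs embedded in an array of three-float records uses a stride of 12 bytes, skipping the third float of each record.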

2.8.4. Segmented load of tuples of three elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg3e.v
Prototypes
__epi_8xi8x3 __builtin_epi_vlseg3_8xi8x3(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x3 __builtin_epi_vlseg3_4xi16x3(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x3 __builtin_epi_vlseg3_2xi32x3(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x3 __builtin_epi_vlseg3_1xi64x3(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x3 __builtin_epi_vlseg3_2xf32x3(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x3 __builtin_epi_vlseg3_1xf64x3(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
  result.v2[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x3 __builtin_epi_vlseg3_8xi8x3_mask(__epi_8xi8x3 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x3 __builtin_epi_vlseg3_4xi16x3_mask(__epi_4xi16x3 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x3 __builtin_epi_vlseg3_2xi32x3_mask(__epi_2xi32x3 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x3 __builtin_epi_vlseg3_1xi64x3_mask(__epi_1xi64x3 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x3 __builtin_epi_vlseg3_2xf32x3_mask(__epi_2xf32x3 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x3 __builtin_epi_vlseg3_1xf64x3_mask(__epi_1xf64x3 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
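As a concrete illustration of the three-element case, the scalar sketch below (the name `vlseg3_f32_ref` is invented for illustration) deinterleaves an array of {x, y, z} points into three separate arrays, matching the Operation pseudocode for the unit-stride variant.

```c
#include <stddef.h>

/* Scalar reference of a unit-stride segmented load of 3-tuples
   (cf. __builtin_epi_vlseg3_2xf32x3): consecutive {x, y, z}
   triples in memory are split into three output vectors. */
static void vlseg3_f32_ref(const float *address,
                           float *v0, float *v1, float *v2,
                           size_t gvl) {
  for (size_t i = 0; i < gvl; ++i) {
    v0[i] = address[3 * i + 0]; /* all x components */
    v1[i] = address[3 * i + 1]; /* all y components */
    v2[i] = address[3 * i + 2]; /* all z components */
  }
}
```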

2.8.5. Indexed segmented load of tuples of three elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offset is expressed in bytes using a vector of indices.

Instruction
vlxseg3e.v
Prototypes
__epi_8xi8x3 __builtin_epi_vlseg3_indexed_8xi8x3(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x3
__builtin_epi_vlseg3_indexed_4xi16x3(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x3 __builtin_epi_vlseg3_indexed_2xi32x3(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x3
__builtin_epi_vlseg3_indexed_1xi64x3(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x3 __builtin_epi_vlseg3_indexed_2xf32x3(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x3 __builtin_epi_vlseg3_indexed_1xf64x3(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x3 __builtin_epi_vlseg3_indexed_8xi8x3_mask(
    __epi_8xi8x3 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x3 __builtin_epi_vlseg3_indexed_4xi16x3_mask(
    __epi_4xi16x3 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x3 __builtin_epi_vlseg3_indexed_2xi32x3_mask(
    __epi_2xi32x3 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x3 __builtin_epi_vlseg3_indexed_1xi64x3_mask(
    __epi_1xi64x3 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x3 __builtin_epi_vlseg3_indexed_2xf32x3_mask(__epi_2xf32x3 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x3 __builtin_epi_vlseg3_indexed_1xf64x3_mask(__epi_1xf64x3 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.6. Strided segmented load of tuples of three elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but separated by a constant number of bytes.

Instruction
vlsseg3e.v
Prototypes
__epi_8xi8x3 __builtin_epi_vlseg3_strided_8xi8x3(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x3
__builtin_epi_vlseg3_strided_4xi16x3(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x3 __builtin_epi_vlseg3_strided_2xi32x3(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x3
__builtin_epi_vlseg3_strided_1xi64x3(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x3 __builtin_epi_vlseg3_strided_2xf32x3(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x3 __builtin_epi_vlseg3_strided_1xf64x3(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x3 __builtin_epi_vlseg3_strided_8xi8x3_mask(
    __epi_8xi8x3 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x3 __builtin_epi_vlseg3_strided_4xi16x3_mask(
    __epi_4xi16x3 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x3 __builtin_epi_vlseg3_strided_2xi32x3_mask(
    __epi_2xi32x3 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x3 __builtin_epi_vlseg3_strided_1xi64x3_mask(
    __epi_1xi64x3 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x3 __builtin_epi_vlseg3_strided_2xf32x3_mask(__epi_2xf32x3 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x3 __builtin_epi_vlseg3_strided_1xf64x3_mask(__epi_1xf64x3 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.7. Segmented load of tuples of four elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg4e.v
Prototypes
__epi_8xi8x4 __builtin_epi_vlseg4_8xi8x4(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x4 __builtin_epi_vlseg4_4xi16x4(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x4 __builtin_epi_vlseg4_2xi32x4(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x4 __builtin_epi_vlseg4_1xi64x4(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x4 __builtin_epi_vlseg4_2xf32x4(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x4 __builtin_epi_vlseg4_1xf64x4(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
  result.v2[element] = load_element(address)
  address = address + SEW / 8
  result.v3[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x4 __builtin_epi_vlseg4_8xi8x4_mask(__epi_8xi8x4 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x4 __builtin_epi_vlseg4_4xi16x4_mask(__epi_4xi16x4 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x4 __builtin_epi_vlseg4_2xi32x4_mask(__epi_2xi32x4 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x4 __builtin_epi_vlseg4_1xi64x4_mask(__epi_1xi64x4 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x4 __builtin_epi_vlseg4_2xf32x4_mask(__epi_2xf32x4 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x4 __builtin_epi_vlseg4_1xf64x4_mask(__epi_1xf64x4 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
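The same pattern extends naturally to four components. A common use case is deinterleaving packed RGBA pixels into four planes; the scalar sketch below (the name `vlseg4_i8_ref` is hypothetical) mirrors the Operation pseudocode for the byte-element case.

```c
#include <stddef.h>

/* Scalar reference of a unit-stride segmented load of 4-tuples
   (cf. __builtin_epi_vlseg4_8xi8x4): consecutive {r, g, b, a}
   byte quadruples are split into four output vectors. */
static void vlseg4_i8_ref(const signed char *address,
                          signed char *r, signed char *g,
                          signed char *b, signed char *a,
                          size_t gvl) {
  for (size_t i = 0; i < gvl; ++i) {
    r[i] = address[4 * i + 0];
    g[i] = address[4 * i + 1];
    b[i] = address[4 * i + 2];
    a[i] = address[4 * i + 3];
  }
}
```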

2.8.8. Indexed segmented load of tuples of four elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offset is expressed in bytes using a vector of indices.

Instruction
vlxseg4e.v
Prototypes
__epi_8xi8x4 __builtin_epi_vlseg4_indexed_8xi8x4(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x4
__builtin_epi_vlseg4_indexed_4xi16x4(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x4 __builtin_epi_vlseg4_indexed_2xi32x4(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x4
__builtin_epi_vlseg4_indexed_1xi64x4(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x4 __builtin_epi_vlseg4_indexed_2xf32x4(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x4 __builtin_epi_vlseg4_indexed_1xf64x4(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x4 __builtin_epi_vlseg4_indexed_8xi8x4_mask(
    __epi_8xi8x4 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x4 __builtin_epi_vlseg4_indexed_4xi16x4_mask(
    __epi_4xi16x4 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x4 __builtin_epi_vlseg4_indexed_2xi32x4_mask(
    __epi_2xi32x4 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x4 __builtin_epi_vlseg4_indexed_1xi64x4_mask(
    __epi_1xi64x4 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x4 __builtin_epi_vlseg4_indexed_2xf32x4_mask(__epi_2xf32x4 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x4 __builtin_epi_vlseg4_indexed_1xf64x4_mask(__epi_1xf64x4 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.9. Strided segmented load of tuples of four elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but separated by a constant number of bytes.

Instruction
vlsseg4e.v
Prototypes
__epi_8xi8x4 __builtin_epi_vlseg4_strided_8xi8x4(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x4
__builtin_epi_vlseg4_strided_4xi16x4(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x4 __builtin_epi_vlseg4_strided_2xi32x4(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x4
__builtin_epi_vlseg4_strided_1xi64x4(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x4 __builtin_epi_vlseg4_strided_2xf32x4(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x4 __builtin_epi_vlseg4_strided_1xf64x4(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x4 __builtin_epi_vlseg4_strided_8xi8x4_mask(
    __epi_8xi8x4 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x4 __builtin_epi_vlseg4_strided_4xi16x4_mask(
    __epi_4xi16x4 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x4 __builtin_epi_vlseg4_strided_2xi32x4_mask(
    __epi_2xi32x4 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x4 __builtin_epi_vlseg4_strided_1xi64x4_mask(
    __epi_1xi64x4 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x4 __builtin_epi_vlseg4_strided_2xf32x4_mask(__epi_2xf32x4 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x4 __builtin_epi_vlseg4_strided_1xf64x4_mask(__epi_1xf64x4 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.10. Segmented load of tuples of five elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg5e.v
Prototypes
__epi_8xi8x5 __builtin_epi_vlseg5_8xi8x5(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x5 __builtin_epi_vlseg5_4xi16x5(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x5 __builtin_epi_vlseg5_2xi32x5(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x5 __builtin_epi_vlseg5_1xi64x5(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x5 __builtin_epi_vlseg5_2xf32x5(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x5 __builtin_epi_vlseg5_1xf64x5(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
  result.v2[element] = load_element(address)
  address = address + SEW / 8
  result.v3[element] = load_element(address)
  address = address + SEW / 8
  result.v4[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x5 __builtin_epi_vlseg5_8xi8x5_mask(__epi_8xi8x5 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x5 __builtin_epi_vlseg5_4xi16x5_mask(__epi_4xi16x5 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x5 __builtin_epi_vlseg5_2xi32x5_mask(__epi_2xi32x5 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x5 __builtin_epi_vlseg5_1xi64x5_mask(__epi_1xi64x5 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x5 __builtin_epi_vlseg5_2xf32x5_mask(__epi_2xf32x5 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x5 __builtin_epi_vlseg5_1xf64x5_mask(__epi_1xf64x5 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);

2.8.11. Indexed segmented load of tuples of five elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offset is expressed in bytes using a vector of indices.

Instruction
vlxseg5e.v
Prototypes
__epi_8xi8x5 __builtin_epi_vlseg5_indexed_8xi8x5(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x5
__builtin_epi_vlseg5_indexed_4xi16x5(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x5 __builtin_epi_vlseg5_indexed_2xi32x5(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x5
__builtin_epi_vlseg5_indexed_1xi64x5(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x5 __builtin_epi_vlseg5_indexed_2xf32x5(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x5 __builtin_epi_vlseg5_indexed_1xf64x5(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x5 __builtin_epi_vlseg5_indexed_8xi8x5_mask(
    __epi_8xi8x5 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x5 __builtin_epi_vlseg5_indexed_4xi16x5_mask(
    __epi_4xi16x5 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x5 __builtin_epi_vlseg5_indexed_2xi32x5_mask(
    __epi_2xi32x5 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x5 __builtin_epi_vlseg5_indexed_1xi64x5_mask(
    __epi_1xi64x5 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x5 __builtin_epi_vlseg5_indexed_2xf32x5_mask(__epi_2xf32x5 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x5 __builtin_epi_vlseg5_indexed_1xf64x5_mask(__epi_1xf64x5 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.12. Strided segmented load of tuples of five elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but are separated by a constant number of bytes.

Instruction
vlsseg5e.v
Prototypes
__epi_8xi8x5 __builtin_epi_vlseg5_strided_8xi8x5(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x5
__builtin_epi_vlseg5_strided_4xi16x5(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x5 __builtin_epi_vlseg5_strided_2xi32x5(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x5
__builtin_epi_vlseg5_strided_1xi64x5(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x5 __builtin_epi_vlseg5_strided_2xf32x5(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x5 __builtin_epi_vlseg5_strided_1xf64x5(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x5 __builtin_epi_vlseg5_strided_8xi8x5_mask(
    __epi_8xi8x5 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x5 __builtin_epi_vlseg5_strided_4xi16x5_mask(
    __epi_4xi16x5 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x5 __builtin_epi_vlseg5_strided_2xi32x5_mask(
    __epi_2xi32x5 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x5 __builtin_epi_vlseg5_strided_1xi64x5_mask(
    __epi_1xi64x5 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x5 __builtin_epi_vlseg5_strided_2xf32x5_mask(__epi_2xf32x5 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x5 __builtin_epi_vlseg5_strided_1xf64x5_mask(__epi_1xf64x5 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.13. Segmented load of tuples of six elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg6e.v
Prototypes
__epi_8xi8x6 __builtin_epi_vlseg6_8xi8x6(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x6 __builtin_epi_vlseg6_4xi16x6(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x6 __builtin_epi_vlseg6_2xi32x6(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x6 __builtin_epi_vlseg6_1xi64x6(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x6 __builtin_epi_vlseg6_2xf32x6(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x6 __builtin_epi_vlseg6_1xf64x6(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
  result.v2[element] = load_element(address)
  address = address + SEW / 8
  result.v3[element] = load_element(address)
  address = address + SEW / 8
  result.v4[element] = load_element(address)
  address = address + SEW / 8
  result.v5[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x6 __builtin_epi_vlseg6_8xi8x6_mask(__epi_8xi8x6 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x6 __builtin_epi_vlseg6_4xi16x6_mask(__epi_4xi16x6 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x6 __builtin_epi_vlseg6_2xi32x6_mask(__epi_2xi32x6 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x6 __builtin_epi_vlseg6_1xi64x6_mask(__epi_1xi64x6 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x6 __builtin_epi_vlseg6_2xf32x6_mask(__epi_2xf32x6 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x6 __builtin_epi_vlseg6_1xf64x6_mask(__epi_1xf64x6 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);

2.8.14. Indexed segmented load of tuples of six elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offsets are given in bytes by a vector of indices.

Instruction
vlxseg6e.v
Prototypes
__epi_8xi8x6 __builtin_epi_vlseg6_indexed_8xi8x6(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x6
__builtin_epi_vlseg6_indexed_4xi16x6(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x6 __builtin_epi_vlseg6_indexed_2xi32x6(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x6
__builtin_epi_vlseg6_indexed_1xi64x6(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x6 __builtin_epi_vlseg6_indexed_2xf32x6(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x6 __builtin_epi_vlseg6_indexed_1xf64x6(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v5[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x6 __builtin_epi_vlseg6_indexed_8xi8x6_mask(
    __epi_8xi8x6 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x6 __builtin_epi_vlseg6_indexed_4xi16x6_mask(
    __epi_4xi16x6 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x6 __builtin_epi_vlseg6_indexed_2xi32x6_mask(
    __epi_2xi32x6 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x6 __builtin_epi_vlseg6_indexed_1xi64x6_mask(
    __epi_1xi64x6 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x6 __builtin_epi_vlseg6_indexed_2xf32x6_mask(__epi_2xf32x6 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x6 __builtin_epi_vlseg6_indexed_1xf64x6_mask(__epi_1xf64x6 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.15. Strided segmented load of tuples of six elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but are separated by a constant number of bytes.

Instruction
vlsseg6e.v
Prototypes
__epi_8xi8x6 __builtin_epi_vlseg6_strided_8xi8x6(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x6
__builtin_epi_vlseg6_strided_4xi16x6(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x6 __builtin_epi_vlseg6_strided_2xi32x6(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x6
__builtin_epi_vlseg6_strided_1xi64x6(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x6 __builtin_epi_vlseg6_strided_2xf32x6(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x6 __builtin_epi_vlseg6_strided_1xf64x6(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v5[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x6 __builtin_epi_vlseg6_strided_8xi8x6_mask(
    __epi_8xi8x6 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x6 __builtin_epi_vlseg6_strided_4xi16x6_mask(
    __epi_4xi16x6 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x6 __builtin_epi_vlseg6_strided_2xi32x6_mask(
    __epi_2xi32x6 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x6 __builtin_epi_vlseg6_strided_1xi64x6_mask(
    __epi_1xi64x6 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x6 __builtin_epi_vlseg6_strided_2xf32x6_mask(__epi_2xf32x6 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x6 __builtin_epi_vlseg6_strided_1xf64x6_mask(__epi_1xf64x6 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.16. Segmented load of tuples of seven elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg7e.v
Prototypes
__epi_8xi8x7 __builtin_epi_vlseg7_8xi8x7(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x7 __builtin_epi_vlseg7_4xi16x7(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x7 __builtin_epi_vlseg7_2xi32x7(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x7 __builtin_epi_vlseg7_1xi64x7(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x7 __builtin_epi_vlseg7_2xf32x7(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x7 __builtin_epi_vlseg7_1xf64x7(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
  result.v2[element] = load_element(address)
  address = address + SEW / 8
  result.v3[element] = load_element(address)
  address = address + SEW / 8
  result.v4[element] = load_element(address)
  address = address + SEW / 8
  result.v5[element] = load_element(address)
  address = address + SEW / 8
  result.v6[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x7 __builtin_epi_vlseg7_8xi8x7_mask(__epi_8xi8x7 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x7 __builtin_epi_vlseg7_4xi16x7_mask(__epi_4xi16x7 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x7 __builtin_epi_vlseg7_2xi32x7_mask(__epi_2xi32x7 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x7 __builtin_epi_vlseg7_1xi64x7_mask(__epi_1xi64x7 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x7 __builtin_epi_vlseg7_2xf32x7_mask(__epi_2xf32x7 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x7 __builtin_epi_vlseg7_1xf64x7_mask(__epi_1xf64x7 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);

2.8.17. Indexed segmented load of tuples of seven elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offsets are given in bytes by a vector of indices.

Instruction
vlxseg7e.v
Prototypes
__epi_8xi8x7 __builtin_epi_vlseg7_indexed_8xi8x7(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x7
__builtin_epi_vlseg7_indexed_4xi16x7(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x7 __builtin_epi_vlseg7_indexed_2xi32x7(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x7
__builtin_epi_vlseg7_indexed_1xi64x7(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x7 __builtin_epi_vlseg7_indexed_2xf32x7(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x7 __builtin_epi_vlseg7_indexed_1xf64x7(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v5[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v6[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x7 __builtin_epi_vlseg7_indexed_8xi8x7_mask(
    __epi_8xi8x7 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x7 __builtin_epi_vlseg7_indexed_4xi16x7_mask(
    __epi_4xi16x7 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x7 __builtin_epi_vlseg7_indexed_2xi32x7_mask(
    __epi_2xi32x7 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x7 __builtin_epi_vlseg7_indexed_1xi64x7_mask(
    __epi_1xi64x7 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x7 __builtin_epi_vlseg7_indexed_2xf32x7_mask(__epi_2xf32x7 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x7 __builtin_epi_vlseg7_indexed_1xf64x7_mask(__epi_1xf64x7 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.18. Strided segmented load of tuples of seven elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but are separated by a constant number of bytes.

Instruction
vlsseg7e.v
Prototypes
__epi_8xi8x7 __builtin_epi_vlseg7_strided_8xi8x7(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x7
__builtin_epi_vlseg7_strided_4xi16x7(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x7 __builtin_epi_vlseg7_strided_2xi32x7(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x7
__builtin_epi_vlseg7_strided_1xi64x7(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x7 __builtin_epi_vlseg7_strided_2xf32x7(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x7 __builtin_epi_vlseg7_strided_1xf64x7(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v5[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v6[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x7 __builtin_epi_vlseg7_strided_8xi8x7_mask(
    __epi_8xi8x7 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x7 __builtin_epi_vlseg7_strided_4xi16x7_mask(
    __epi_4xi16x7 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x7 __builtin_epi_vlseg7_strided_2xi32x7_mask(
    __epi_2xi32x7 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x7 __builtin_epi_vlseg7_strided_1xi64x7_mask(
    __epi_1xi64x7 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x7 __builtin_epi_vlseg7_strided_2xf32x7_mask(__epi_2xf32x7 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x7 __builtin_epi_vlseg7_strided_1xf64x7_mask(__epi_1xf64x7 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);

2.8.19. Segmented load of tuples of eight elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful for converting a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

Instruction
vlseg8e.v
Prototypes
__epi_8xi8x8 __builtin_epi_vlseg8_8xi8x8(const signed char *address,
                                         unsigned long int gvl);
__epi_4xi16x8 __builtin_epi_vlseg8_4xi16x8(const signed short int *address,
                                           unsigned long int gvl);
__epi_2xi32x8 __builtin_epi_vlseg8_2xi32x8(const signed int *address,
                                           unsigned long int gvl);
__epi_1xi64x8 __builtin_epi_vlseg8_1xi64x8(const signed long int *address,
                                           unsigned long int gvl);
__epi_2xf32x8 __builtin_epi_vlseg8_2xf32x8(const float *address,
                                           unsigned long int gvl);
__epi_1xf64x8 __builtin_epi_vlseg8_1xf64x8(const double *address,
                                           unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result.v0[element] = load_element(address)
  address = address + SEW / 8
  result.v1[element] = load_element(address)
  address = address + SEW / 8
  result.v2[element] = load_element(address)
  address = address + SEW / 8
  result.v3[element] = load_element(address)
  address = address + SEW / 8
  result.v4[element] = load_element(address)
  address = address + SEW / 8
  result.v5[element] = load_element(address)
  address = address + SEW / 8
  result.v6[element] = load_element(address)
  address = address + SEW / 8
  result.v7[element] = load_element(address)
  address = address + SEW / 8
Masked prototypes
__epi_8xi8x8 __builtin_epi_vlseg8_8xi8x8_mask(__epi_8xi8x8 merge,
                                              const signed char *address,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
__epi_4xi16x8 __builtin_epi_vlseg8_4xi16x8_mask(__epi_4xi16x8 merge,
                                                const signed short int *address,
                                                __epi_4xi1 mask,
                                                unsigned long int gvl);
__epi_2xi32x8 __builtin_epi_vlseg8_2xi32x8_mask(__epi_2xi32x8 merge,
                                                const signed int *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xi64x8 __builtin_epi_vlseg8_1xi64x8_mask(__epi_1xi64x8 merge,
                                                const signed long int *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
__epi_2xf32x8 __builtin_epi_vlseg8_2xf32x8_mask(__epi_2xf32x8 merge,
                                                const float *address,
                                                __epi_2xi1 mask,
                                                unsigned long int gvl);
__epi_1xf64x8 __builtin_epi_vlseg8_1xf64x8_mask(__epi_1xf64x8 merge,
                                                const double *address,
                                                __epi_1xi1 mask,
                                                unsigned long int gvl);
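The Operation pseudocode above can be modeled in scalar C. The helper below is purely illustrative (its name and signature are not part of the EPI API): it de-interleaves `gvl` consecutive 8-field tuples of doubles (Array-of-Structures) into eight separate arrays (Structure-of-Arrays), element by element, exactly as the unit-stride segmented load does:

```c
#include <stddef.h>
#include <assert.h>

/* Scalar model of a unit-stride 8-field segmented load: memory holds
 * gvl consecutive tuples of 8 doubles; tuple component k of each tuple
 * is copied into its own destination array out[k].
 * Illustrative helper, not an EPI builtin. */
void seg8_load_f64(const double *address, double *out[8], size_t gvl) {
    for (size_t element = 0; element < gvl; ++element) {
        for (int k = 0; k < 8; ++k) {
            out[k][element] = *address; /* result.vk[element] = load_element(address) */
            ++address;                  /* address += SEW / 8 bytes, i.e. one element */
        }
    }
}
```

With `gvl = 2` and memory holding the values 0..15, `out[0]` receives {0, 8} and `out[7]` receives {7, 15}.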

2.8.20. Indexed segmented load of tuples of eight elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vlxseg8e.v
Prototypes
__epi_8xi8x8 __builtin_epi_vlseg8_indexed_8xi8x8(const signed char *address,
                                                 __epi_8xi8 index,
                                                 unsigned long int gvl);
__epi_4xi16x8
__builtin_epi_vlseg8_indexed_4xi16x8(const signed short int *address,
                                     __epi_4xi16 index, unsigned long int gvl);
__epi_2xi32x8 __builtin_epi_vlseg8_indexed_2xi32x8(const signed int *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xi64x8
__builtin_epi_vlseg8_indexed_1xi64x8(const signed long int *address,
                                     __epi_1xi64 index, unsigned long int gvl);
__epi_2xf32x8 __builtin_epi_vlseg8_indexed_2xf32x8(const float *address,
                                                   __epi_2xi32 index,
                                                   unsigned long int gvl);
__epi_1xf64x8 __builtin_epi_vlseg8_indexed_1xf64x8(const double *address,
                                                   __epi_1xi64 index,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v5[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v6[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v7[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x8 __builtin_epi_vlseg8_indexed_8xi8x8_mask(
    __epi_8xi8x8 merge, const signed char *address, __epi_8xi8 index,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x8 __builtin_epi_vlseg8_indexed_4xi16x8_mask(
    __epi_4xi16x8 merge, const signed short int *address, __epi_4xi16 index,
    __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x8 __builtin_epi_vlseg8_indexed_2xi32x8_mask(
    __epi_2xi32x8 merge, const signed int *address, __epi_2xi32 index,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x8 __builtin_epi_vlseg8_indexed_1xi64x8_mask(
    __epi_1xi64x8 merge, const signed long int *address, __epi_1xi64 index,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x8 __builtin_epi_vlseg8_indexed_2xf32x8_mask(__epi_2xf32x8 merge,
                                                        const float *address,
                                                        __epi_2xi32 index,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x8 __builtin_epi_vlseg8_indexed_1xf64x8_mask(__epi_1xf64x8 merge,
                                                        const double *address,
                                                        __epi_1xi64 index,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);
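The indexed variant can be modeled the same way in scalar C. In the sketch below (an illustrative helper, not an EPI builtin), each tuple starts at `address` plus a byte offset taken from `index`, matching the `element_address = address + index[element]` step of the Operation pseudocode:

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/* Scalar model of an indexed 8-field segmented load: tuple number
 * 'element' starts index[element] BYTES past the base address, and its
 * 8 consecutive components go to out[0..7][element].
 * Illustrative helper, not an EPI builtin. */
void seg8_load_indexed_f64(const double *address, const int64_t *index,
                           double *out[8], size_t gvl) {
    for (size_t element = 0; element < gvl; ++element) {
        const double *p =
            (const double *)((const char *)address + index[element]);
        for (int k = 0; k < 8; ++k)
            out[k][element] = p[k];
    }
}
```

Because the indices count bytes, loading the second tuple of doubles in memory requires an index of 64 (8 components times 8 bytes), not 8.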

2.8.21. Strided segmented load of tuples of eight elements

Description

Use these builtins to load a vector of tuples of elements from memory such that each component of the tuple is loaded into a different vector. This operation is useful to convert a memory representation of Array-of-Structures into a register representation of Structure-of-Arrays.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but are separated by a constant number of bytes.

Instruction
vlsseg8e.v
Prototypes
__epi_8xi8x8 __builtin_epi_vlseg8_strided_8xi8x8(const signed char *address,
                                                 signed long int stride,
                                                 unsigned long int gvl);
__epi_4xi16x8
__builtin_epi_vlseg8_strided_4xi16x8(const signed short int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xi32x8 __builtin_epi_vlseg8_strided_2xi32x8(const signed int *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xi64x8
__builtin_epi_vlseg8_strided_1xi64x8(const signed long int *address,
                                     signed long int stride,
                                     unsigned long int gvl);
__epi_2xf32x8 __builtin_epi_vlseg8_strided_2xf32x8(const float *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
__epi_1xf64x8 __builtin_epi_vlseg8_strided_1xf64x8(const double *address,
                                                   signed long int stride,
                                                   unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  result.v0[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v1[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v2[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v3[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v4[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v5[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v6[element] = load_element(element_address)
  element_address = element_address + SEW / 8
  result.v7[element] = load_element(element_address)
Masked prototypes
__epi_8xi8x8 __builtin_epi_vlseg8_strided_8xi8x8_mask(
    __epi_8xi8x8 merge, const signed char *address, signed long int stride,
    __epi_8xi1 mask, unsigned long int gvl);
__epi_4xi16x8 __builtin_epi_vlseg8_strided_4xi16x8_mask(
    __epi_4xi16x8 merge, const signed short int *address,
    signed long int stride, __epi_4xi1 mask, unsigned long int gvl);
__epi_2xi32x8 __builtin_epi_vlseg8_strided_2xi32x8_mask(
    __epi_2xi32x8 merge, const signed int *address, signed long int stride,
    __epi_2xi1 mask, unsigned long int gvl);
__epi_1xi64x8 __builtin_epi_vlseg8_strided_1xi64x8_mask(
    __epi_1xi64x8 merge, const signed long int *address, signed long int stride,
    __epi_1xi1 mask, unsigned long int gvl);
__epi_2xf32x8 __builtin_epi_vlseg8_strided_2xf32x8_mask(__epi_2xf32x8 merge,
                                                        const float *address,
                                                        signed long int stride,
                                                        __epi_2xi1 mask,
                                                        unsigned long int gvl);
__epi_1xf64x8 __builtin_epi_vlseg8_strided_1xf64x8_mask(__epi_1xf64x8 merge,
                                                        const double *address,
                                                        signed long int stride,
                                                        __epi_1xi1 mask,
                                                        unsigned long int gvl);
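The strided variant differs from the indexed one only in how each tuple's start address is computed: a constant stride multiplied by the element number instead of a per-element index. A scalar C sketch, assuming the stride counts bytes as in the pseudocode's address arithmetic (the helper itself is illustrative, not an EPI builtin):

```c
#include <stddef.h>
#include <assert.h>

/* Scalar model of a strided 8-field segmented load: tuple number
 * 'element' starts element * stride BYTES past the base address.
 * Illustrative helper, not an EPI builtin. */
void seg8_load_strided_f64(const double *address, long stride,
                           double *out[8], size_t gvl) {
    for (size_t element = 0; element < gvl; ++element) {
        const double *p =
            (const double *)((const char *)address + element * stride);
        for (int k = 0; k < 8; ++k)
            out[k][element] = p[k];
    }
}
```

A stride of 80 bytes, for example, skips 2 doubles of padding between consecutive 8-double tuples.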

2.8.22. Segmented store of tuples of two elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg2e.v
Prototypes
void __builtin_epi_vsseg2_8xi8x2(signed char *address, __epi_8xi8x2 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg2_4xi16x2(signed short int *address,
                                  __epi_4xi16x2 value, unsigned long int gvl);
void __builtin_epi_vsseg2_2xi32x2(signed int *address, __epi_2xi32x2 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg2_1xi64x2(signed long int *address, __epi_1xi64x2 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg2_2xf32x2(float *address, __epi_2xf32x2 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg2_1xf64x2(double *address, __epi_1xf64x2 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg2_8xi8x2_mask(signed char *address, __epi_8xi8x2 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg2_4xi16x2_mask(signed short int *address,
                                       __epi_4xi16x2 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg2_2xi32x2_mask(signed int *address, __epi_2xi32x2 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg2_1xi64x2_mask(signed long int *address,
                                       __epi_1xi64x2 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg2_2xf32x2_mask(float *address, __epi_2xf32x2 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg2_1xf64x2_mask(double *address, __epi_1xf64x2 value,
                                       __epi_1xi1 mask, unsigned long int gvl);
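The two-element segmented store is the inverse of the corresponding load: it interleaves two source vectors into consecutive pairs. A common use is packing separate real and imaginary arrays into an array of complex pairs. A scalar C sketch (the helper is illustrative, not an EPI builtin):

```c
#include <stddef.h>
#include <assert.h>

/* Scalar model of a 2-field segmented store: writes
 * v0[0], v1[0], v0[1], v1[1], ... to consecutive memory locations,
 * converting SoA registers into AoS memory layout.
 * Illustrative helper, not an EPI builtin. */
void seg2_store_f64(double *address, const double *v0, const double *v1,
                    size_t gvl) {
    for (size_t element = 0; element < gvl; ++element) {
        *address++ = v0[element]; /* store_element(address, value.v0[element]) */
        *address++ = v1[element]; /* store_element(address, value.v1[element]) */
    }
}
```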

2.8.23. Indexed segmented store of tuples of two elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg2e.v
Prototypes
void __builtin_epi_vsseg2_indexed_8xi8x2(signed char *address,
                                         __epi_8xi8x2 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_4xi16x2(signed short int *address,
                                          __epi_4xi16x2 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_2xi32x2(signed int *address,
                                          __epi_2xi32x2 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_1xi64x2(signed long int *address,
                                          __epi_1xi64x2 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_2xf32x2(float *address, __epi_2xf32x2 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_1xf64x2(double *address, __epi_1xf64x2 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
Masked prototypes
void __builtin_epi_vsseg2_indexed_8xi8x2_mask(signed char *address,
                                              __epi_8xi8x2 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_4xi16x2_mask(signed short int *address,
                                               __epi_4xi16x2 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_2xi32x2_mask(signed int *address,
                                               __epi_2xi32x2 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_1xi64x2_mask(signed long int *address,
                                               __epi_1xi64x2 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_2xf32x2_mask(float *address,
                                               __epi_2xf32x2 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_indexed_1xf64x2_mask(double *address,
                                               __epi_1xf64x2 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
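The indexed store scatters each pair to a byte offset taken from the index vector, as in the `element_address = address + index[element]` step above. A scalar C sketch (illustrative helper, not an EPI builtin):

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/* Scalar model of an indexed 2-field segmented store: the pair
 * (v0[element], v1[element]) is written index[element] BYTES past the
 * base address. Illustrative helper, not an EPI builtin. */
void seg2_store_indexed_f64(double *address, const double *v0,
                            const double *v1, const int64_t *index,
                            size_t gvl) {
    for (size_t element = 0; element < gvl; ++element) {
        double *p = (double *)((char *)address + index[element]);
        p[0] = v0[element];
        p[1] = v1[element];
    }
}
```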

2.8.24. Strided segmented store of tuples of two elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg2e.v
Prototypes
void __builtin_epi_vsseg2_strided_8xi8x2(signed char *address,
                                         __epi_8xi8x2 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg2_strided_4xi16x2(signed short int *address,
                                          __epi_4xi16x2 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_strided_2xi32x2(signed int *address,
                                          __epi_2xi32x2 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_strided_1xi64x2(signed long int *address,
                                          __epi_1xi64x2 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_strided_2xf32x2(float *address, __epi_2xf32x2 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg2_strided_1xf64x2(double *address, __epi_1xf64x2 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
Masked prototypes
void __builtin_epi_vsseg2_strided_8xi8x2_mask(signed char *address,
                                              __epi_8xi8x2 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg2_strided_4xi16x2_mask(signed short int *address,
                                               __epi_4xi16x2 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_strided_2xi32x2_mask(signed int *address,
                                               __epi_2xi32x2 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_strided_1xi64x2_mask(signed long int *address,
                                               __epi_1xi64x2 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_strided_2xf32x2_mask(float *address,
                                               __epi_2xf32x2 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg2_strided_1xf64x2_mask(double *address,
                                               __epi_1xf64x2 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.25. Segmented store of tuples of three elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg3e.v
Prototypes
void __builtin_epi_vsseg3_8xi8x3(signed char *address, __epi_8xi8x3 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg3_4xi16x3(signed short int *address,
                                  __epi_4xi16x3 value, unsigned long int gvl);
void __builtin_epi_vsseg3_2xi32x3(signed int *address, __epi_2xi32x3 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg3_1xi64x3(signed long int *address, __epi_1xi64x3 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg3_2xf32x3(float *address, __epi_2xf32x3 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg3_1xf64x3(double *address, __epi_1xf64x3 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
  store_element(address, value.v2[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg3_8xi8x3_mask(signed char *address, __epi_8xi8x3 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg3_4xi16x3_mask(signed short int *address,
                                       __epi_4xi16x3 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg3_2xi32x3_mask(signed int *address, __epi_2xi32x3 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg3_1xi64x3_mask(signed long int *address,
                                       __epi_1xi64x3 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg3_2xf32x3_mask(float *address, __epi_2xf32x3 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg3_1xf64x3_mask(double *address, __epi_1xf64x3 value,
                                       __epi_1xi1 mask, unsigned long int gvl);
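A typical use of the three-element segmented store is converting planar channel data into an interleaved layout, for example three separate R, G, B arrays into packed pixels. A scalar C sketch of the semantics (the helper and the `unsigned char` element type are illustrative choices, not part of the EPI API, whose i8 prototypes use `signed char`):

```c
#include <stddef.h>
#include <assert.h>

/* Scalar model of a 3-field segmented store: writes
 * r[0], g[0], b[0], r[1], g[1], b[1], ... to consecutive memory,
 * packing three planar arrays into interleaved triples.
 * Illustrative helper, not an EPI builtin. */
void seg3_store_u8(unsigned char *address, const unsigned char *r,
                   const unsigned char *g, const unsigned char *b,
                   size_t gvl) {
    for (size_t element = 0; element < gvl; ++element) {
        *address++ = r[element];
        *address++ = g[element];
        *address++ = b[element];
    }
}
```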

2.8.26. Indexed segmented store of tuples of three elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not found consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg3e.v
Prototypes
void __builtin_epi_vsseg3_indexed_8xi8x3(signed char *address,
                                         __epi_8xi8x3 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_4xi16x3(signed short int *address,
                                          __epi_4xi16x3 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_2xi32x3(signed int *address,
                                          __epi_2xi32x3 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_1xi64x3(signed long int *address,
                                          __epi_1xi64x3 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_2xf32x3(float *address, __epi_2xf32x3 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_1xf64x3(double *address, __epi_1xf64x3 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
Masked prototypes
void __builtin_epi_vsseg3_indexed_8xi8x3_mask(signed char *address,
                                              __epi_8xi8x3 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_4xi16x3_mask(signed short int *address,
                                               __epi_4xi16x3 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_2xi32x3_mask(signed int *address,
                                               __epi_2xi32x3 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_1xi64x3_mask(signed long int *address,
                                               __epi_1xi64x3 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_2xf32x3_mask(float *address,
                                               __epi_2xf32x3 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_indexed_1xf64x3_mask(double *address,
                                               __epi_1xf64x3 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.27. Strided segmented store of tuples of three elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not found consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg3e.v
Prototypes
void __builtin_epi_vsseg3_strided_8xi8x3(signed char *address,
                                         __epi_8xi8x3 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg3_strided_4xi16x3(signed short int *address,
                                          __epi_4xi16x3 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_strided_2xi32x3(signed int *address,
                                          __epi_2xi32x3 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_strided_1xi64x3(signed long int *address,
                                          __epi_1xi64x3 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_strided_2xf32x3(float *address, __epi_2xf32x3 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg3_strided_1xf64x3(double *address, __epi_1xf64x3 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
Masked prototypes
void __builtin_epi_vsseg3_strided_8xi8x3_mask(signed char *address,
                                              __epi_8xi8x3 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg3_strided_4xi16x3_mask(signed short int *address,
                                               __epi_4xi16x3 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_strided_2xi32x3_mask(signed int *address,
                                               __epi_2xi32x3 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_strided_1xi64x3_mask(signed long int *address,
                                               __epi_1xi64x3 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_strided_2xf32x3_mask(float *address,
                                               __epi_2xf32x3 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg3_strided_1xf64x3_mask(double *address,
                                               __epi_1xf64x3 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.28. Segmented store of tuples of four elements

Description

Use these builtins to store a tuple of vectors to memory. The store builds a sequence of elements by grouping the corresponding elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg4e.v
Prototypes
void __builtin_epi_vsseg4_8xi8x4(signed char *address, __epi_8xi8x4 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg4_4xi16x4(signed short int *address,
                                  __epi_4xi16x4 value, unsigned long int gvl);
void __builtin_epi_vsseg4_2xi32x4(signed int *address, __epi_2xi32x4 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg4_1xi64x4(signed long int *address, __epi_1xi64x4 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg4_2xf32x4(float *address, __epi_2xf32x4 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg4_1xf64x4(double *address, __epi_1xf64x4 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
  store_element(address, value.v2[element])
  address = address + SEW / 8
  store_element(address, value.v3[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg4_8xi8x4_mask(signed char *address, __epi_8xi8x4 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg4_4xi16x4_mask(signed short int *address,
                                       __epi_4xi16x4 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg4_2xi32x4_mask(signed int *address, __epi_2xi32x4 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg4_1xi64x4_mask(signed long int *address,
                                       __epi_1xi64x4 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg4_2xf32x4_mask(float *address, __epi_2xf32x4 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg4_1xf64x4_mask(double *address, __epi_1xf64x4 value,
                                       __epi_1xi1 mask, unsigned long int gvl);

2.8.29. Indexed segmented store of tuples of four elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not located consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg4e.v
Prototypes
void __builtin_epi_vsseg4_indexed_8xi8x4(signed char *address,
                                         __epi_8xi8x4 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_4xi16x4(signed short int *address,
                                          __epi_4xi16x4 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_2xi32x4(signed int *address,
                                          __epi_2xi32x4 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_1xi64x4(signed long int *address,
                                          __epi_1xi64x4 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_2xf32x4(float *address, __epi_2xf32x4 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_1xf64x4(double *address, __epi_1xf64x4 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
Masked prototypes
void __builtin_epi_vsseg4_indexed_8xi8x4_mask(signed char *address,
                                              __epi_8xi8x4 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_4xi16x4_mask(signed short int *address,
                                               __epi_4xi16x4 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_2xi32x4_mask(signed int *address,
                                               __epi_2xi32x4 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_1xi64x4_mask(signed long int *address,
                                               __epi_1xi64x4 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_2xf32x4_mask(float *address,
                                               __epi_2xf32x4 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_indexed_1xf64x4_mask(double *address,
                                               __epi_1xf64x4 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.30. Strided segmented store of tuples of four elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not located consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg4e.v
Prototypes
void __builtin_epi_vsseg4_strided_8xi8x4(signed char *address,
                                         __epi_8xi8x4 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg4_strided_4xi16x4(signed short int *address,
                                          __epi_4xi16x4 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_strided_2xi32x4(signed int *address,
                                          __epi_2xi32x4 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_strided_1xi64x4(signed long int *address,
                                          __epi_1xi64x4 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_strided_2xf32x4(float *address, __epi_2xf32x4 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg4_strided_1xf64x4(double *address, __epi_1xf64x4 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
Masked prototypes
void __builtin_epi_vsseg4_strided_8xi8x4_mask(signed char *address,
                                              __epi_8xi8x4 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg4_strided_4xi16x4_mask(signed short int *address,
                                               __epi_4xi16x4 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_strided_2xi32x4_mask(signed int *address,
                                               __epi_2xi32x4 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_strided_1xi64x4_mask(signed long int *address,
                                               __epi_1xi64x4 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_strided_2xf32x4_mask(float *address,
                                               __epi_2xf32x4 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg4_strided_1xf64x4_mask(double *address,
                                               __epi_1xf64x4 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.31. Segmented store of tuples of five elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg5e.v
Prototypes
void __builtin_epi_vsseg5_8xi8x5(signed char *address, __epi_8xi8x5 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg5_4xi16x5(signed short int *address,
                                  __epi_4xi16x5 value, unsigned long int gvl);
void __builtin_epi_vsseg5_2xi32x5(signed int *address, __epi_2xi32x5 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg5_1xi64x5(signed long int *address, __epi_1xi64x5 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg5_2xf32x5(float *address, __epi_2xf32x5 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg5_1xf64x5(double *address, __epi_1xf64x5 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
  store_element(address, value.v2[element])
  address = address + SEW / 8
  store_element(address, value.v3[element])
  address = address + SEW / 8
  store_element(address, value.v4[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg5_8xi8x5_mask(signed char *address, __epi_8xi8x5 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg5_4xi16x5_mask(signed short int *address,
                                       __epi_4xi16x5 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg5_2xi32x5_mask(signed int *address, __epi_2xi32x5 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg5_1xi64x5_mask(signed long int *address,
                                       __epi_1xi64x5 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg5_2xf32x5_mask(float *address, __epi_2xf32x5 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg5_1xf64x5_mask(double *address, __epi_1xf64x5 value,
                                       __epi_1xi1 mask, unsigned long int gvl);

2.8.32. Indexed segmented store of tuples of five elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not located consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg5e.v
Prototypes
void __builtin_epi_vsseg5_indexed_8xi8x5(signed char *address,
                                         __epi_8xi8x5 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_4xi16x5(signed short int *address,
                                          __epi_4xi16x5 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_2xi32x5(signed int *address,
                                          __epi_2xi32x5 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_1xi64x5(signed long int *address,
                                          __epi_1xi64x5 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_2xf32x5(float *address, __epi_2xf32x5 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_1xf64x5(double *address, __epi_1xf64x5 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
Masked prototypes
void __builtin_epi_vsseg5_indexed_8xi8x5_mask(signed char *address,
                                              __epi_8xi8x5 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_4xi16x5_mask(signed short int *address,
                                               __epi_4xi16x5 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_2xi32x5_mask(signed int *address,
                                               __epi_2xi32x5 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_1xi64x5_mask(signed long int *address,
                                               __epi_1xi64x5 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_2xf32x5_mask(float *address,
                                               __epi_2xf32x5 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_indexed_1xf64x5_mask(double *address,
                                               __epi_1xf64x5 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.33. Strided segmented store of tuples of five elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not located consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg5e.v
Prototypes
void __builtin_epi_vsseg5_strided_8xi8x5(signed char *address,
                                         __epi_8xi8x5 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg5_strided_4xi16x5(signed short int *address,
                                          __epi_4xi16x5 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_strided_2xi32x5(signed int *address,
                                          __epi_2xi32x5 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_strided_1xi64x5(signed long int *address,
                                          __epi_1xi64x5 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_strided_2xf32x5(float *address, __epi_2xf32x5 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg5_strided_1xf64x5(double *address, __epi_1xf64x5 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
Masked prototypes
void __builtin_epi_vsseg5_strided_8xi8x5_mask(signed char *address,
                                              __epi_8xi8x5 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg5_strided_4xi16x5_mask(signed short int *address,
                                               __epi_4xi16x5 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_strided_2xi32x5_mask(signed int *address,
                                               __epi_2xi32x5 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_strided_1xi64x5_mask(signed long int *address,
                                               __epi_1xi64x5 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_strided_2xf32x5_mask(float *address,
                                               __epi_2xf32x5 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg5_strided_1xf64x5_mask(double *address,
                                               __epi_1xf64x5 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.34. Segmented store of tuples of six elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg6e.v
Prototypes
void __builtin_epi_vsseg6_8xi8x6(signed char *address, __epi_8xi8x6 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg6_4xi16x6(signed short int *address,
                                  __epi_4xi16x6 value, unsigned long int gvl);
void __builtin_epi_vsseg6_2xi32x6(signed int *address, __epi_2xi32x6 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg6_1xi64x6(signed long int *address, __epi_1xi64x6 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg6_2xf32x6(float *address, __epi_2xf32x6 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg6_1xf64x6(double *address, __epi_1xf64x6 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
  store_element(address, value.v2[element])
  address = address + SEW / 8
  store_element(address, value.v3[element])
  address = address + SEW / 8
  store_element(address, value.v4[element])
  address = address + SEW / 8
  store_element(address, value.v5[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg6_8xi8x6_mask(signed char *address, __epi_8xi8x6 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg6_4xi16x6_mask(signed short int *address,
                                       __epi_4xi16x6 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg6_2xi32x6_mask(signed int *address, __epi_2xi32x6 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg6_1xi64x6_mask(signed long int *address,
                                       __epi_1xi64x6 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg6_2xf32x6_mask(float *address, __epi_2xf32x6 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg6_1xf64x6_mask(double *address, __epi_1xf64x6 value,
                                       __epi_1xi1 mask, unsigned long int gvl);

2.8.35. Indexed segmented store of tuples of six elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not located consecutively in memory but at an offset from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg6e.v
Prototypes
void __builtin_epi_vsseg6_indexed_8xi8x6(signed char *address,
                                         __epi_8xi8x6 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_4xi16x6(signed short int *address,
                                          __epi_4xi16x6 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_2xi32x6(signed int *address,
                                          __epi_2xi32x6 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_1xi64x6(signed long int *address,
                                          __epi_1xi64x6 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_2xf32x6(float *address, __epi_2xf32x6 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_1xf64x6(double *address, __epi_1xf64x6 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v5[element])
Masked prototypes
void __builtin_epi_vsseg6_indexed_8xi8x6_mask(signed char *address,
                                              __epi_8xi8x6 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_4xi16x6_mask(signed short int *address,
                                               __epi_4xi16x6 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_2xi32x6_mask(signed int *address,
                                               __epi_2xi32x6 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_1xi64x6_mask(signed long int *address,
                                               __epi_1xi64x6 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_2xf32x6_mask(float *address,
                                               __epi_2xf32x6 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_indexed_1xf64x6_mask(double *address,
                                               __epi_1xf64x6 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.36. Strided segmented store of tuples of six elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving, for each position, one element from every vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not located consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg6e.v
Prototypes
void __builtin_epi_vsseg6_strided_8xi8x6(signed char *address,
                                         __epi_8xi8x6 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg6_strided_4xi16x6(signed short int *address,
                                          __epi_4xi16x6 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_strided_2xi32x6(signed int *address,
                                          __epi_2xi32x6 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_strided_1xi64x6(signed long int *address,
                                          __epi_1xi64x6 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_strided_2xf32x6(float *address, __epi_2xf32x6 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg6_strided_1xf64x6(double *address, __epi_1xf64x6 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v5[element])
Masked prototypes
void __builtin_epi_vsseg6_strided_8xi8x6_mask(signed char *address,
                                              __epi_8xi8x6 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg6_strided_4xi16x6_mask(signed short int *address,
                                               __epi_4xi16x6 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_strided_2xi32x6_mask(signed int *address,
                                               __epi_2xi32x6 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_strided_1xi64x6_mask(signed long int *address,
                                               __epi_1xi64x6 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_strided_2xf32x6_mask(float *address,
                                               __epi_2xf32x6 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg6_strided_1xf64x6_mask(double *address,
                                               __epi_1xf64x6 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
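
The operation above can be modelled in plain scalar C. The sketch below is an illustrative reference for the double-precision case (SEW = 64, so each field advances the address by 8 bytes), not the EPI builtin itself; the function name `vsseg6_strided_f64_model` is hypothetical.

```c
#include <assert.h>
#include <string.h>

/* Scalar model of the strided segmented store of 6-element tuples for
   f64 data (SEW = 64). Each tuple starts at address + element * stride
   and its six fields are stored back to back. Illustration only. */
static void vsseg6_strided_f64_model(double *address, const double *v[6],
                                     long stride_bytes, unsigned long gvl) {
  for (unsigned long element = 0; element < gvl; ++element) {
    char *element_address = (char *)address + element * stride_bytes;
    for (int field = 0; field < 6; ++field) {
      memcpy(element_address, &v[field][element], sizeof(double));
      element_address += sizeof(double); /* SEW / 8 */
    }
  }
}
```

With a stride of 64 bytes (8 doubles), consecutive tuples start 8 doubles apart, leaving a 2-double gap after each 6-field tuple.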

2.8.37. Segmented store of tuples of seven elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving the elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg7e.v
Prototypes
void __builtin_epi_vsseg7_8xi8x7(signed char *address, __epi_8xi8x7 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg7_4xi16x7(signed short int *address,
                                  __epi_4xi16x7 value, unsigned long int gvl);
void __builtin_epi_vsseg7_2xi32x7(signed int *address, __epi_2xi32x7 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg7_1xi64x7(signed long int *address, __epi_1xi64x7 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg7_2xf32x7(float *address, __epi_2xf32x7 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg7_1xf64x7(double *address, __epi_1xf64x7 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
  store_element(address, value.v2[element])
  address = address + SEW / 8
  store_element(address, value.v3[element])
  address = address + SEW / 8
  store_element(address, value.v4[element])
  address = address + SEW / 8
  store_element(address, value.v5[element])
  address = address + SEW / 8
  store_element(address, value.v6[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg7_8xi8x7_mask(signed char *address, __epi_8xi8x7 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg7_4xi16x7_mask(signed short int *address,
                                       __epi_4xi16x7 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg7_2xi32x7_mask(signed int *address, __epi_2xi32x7 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg7_1xi64x7_mask(signed long int *address,
                                       __epi_1xi64x7 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg7_2xf32x7_mask(float *address, __epi_2xf32x7 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg7_1xf64x7_mask(double *address, __epi_1xf64x7 value,
                                       __epi_1xi1 mask, unsigned long int gvl);
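
The unit-stride operation above packs the tuples back to back, which is the plain Structure-of-Arrays to Array-of-Structures conversion. The scalar sketch below models it for the single-precision case (SEW = 32); the function name `vsseg7_f32_model` is hypothetical and this is a reference model, not the EPI builtin.

```c
#include <assert.h>

/* Scalar model of the segmented store of 7-element tuples for f32
   data: the seven fields of each tuple are stored consecutively, so
   memory ends up as an array of 7-float structures. Illustration only. */
static void vsseg7_f32_model(float *address, const float *v[7],
                             unsigned long gvl) {
  for (unsigned long element = 0; element < gvl; ++element)
    for (int field = 0; field < 7; ++field)
      *address++ = v[field][element]; /* advances SEW / 8 bytes */
}
```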

2.8.38. Indexed segmented store of tuples of seven elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving the elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not laid out consecutively in memory but at arbitrary offsets from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg7e.v
Prototypes
void __builtin_epi_vsseg7_indexed_8xi8x7(signed char *address,
                                         __epi_8xi8x7 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_4xi16x7(signed short int *address,
                                          __epi_4xi16x7 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_2xi32x7(signed int *address,
                                          __epi_2xi32x7 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_1xi64x7(signed long int *address,
                                          __epi_1xi64x7 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_2xf32x7(float *address, __epi_2xf32x7 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_1xf64x7(double *address, __epi_1xf64x7 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v5[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v6[element])
Masked prototypes
void __builtin_epi_vsseg7_indexed_8xi8x7_mask(signed char *address,
                                              __epi_8xi8x7 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_4xi16x7_mask(signed short int *address,
                                               __epi_4xi16x7 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_2xi32x7_mask(signed int *address,
                                               __epi_2xi32x7 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_1xi64x7_mask(signed long int *address,
                                               __epi_1xi64x7 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_2xf32x7_mask(float *address,
                                               __epi_2xf32x7 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_indexed_1xf64x7_mask(double *address,
                                               __epi_1xf64x7 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
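
The indexed operation can be modelled in scalar C as follows; note that `index[element]` is a byte offset, not an element count. The function name `vsseg7_indexed_f64_model` is hypothetical and the code is a reference sketch for the f64 case (SEW = 64), not the EPI builtin.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Scalar model of the indexed segmented store of 7-element tuples for
   f64 data: each tuple starts at address + index[element] (a byte
   offset) and its seven fields follow consecutively. Illustration only. */
static void vsseg7_indexed_f64_model(double *address, const double *v[7],
                                     const int64_t *index,
                                     unsigned long gvl) {
  for (unsigned long element = 0; element < gvl; ++element) {
    char *element_address = (char *)address + index[element];
    for (int field = 0; field < 7; ++field) {
      memcpy(element_address, &v[field][element], sizeof(double));
      element_address += sizeof(double); /* SEW / 8 */
    }
  }
}
```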

2.8.39. Strided segmented store of tuples of seven elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving the elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not laid out consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg7e.v
Prototypes
void __builtin_epi_vsseg7_strided_8xi8x7(signed char *address,
                                         __epi_8xi8x7 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg7_strided_4xi16x7(signed short int *address,
                                          __epi_4xi16x7 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_strided_2xi32x7(signed int *address,
                                          __epi_2xi32x7 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_strided_1xi64x7(signed long int *address,
                                          __epi_1xi64x7 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_strided_2xf32x7(float *address, __epi_2xf32x7 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg7_strided_1xf64x7(double *address, __epi_1xf64x7 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v5[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v6[element])
Masked prototypes
void __builtin_epi_vsseg7_strided_8xi8x7_mask(signed char *address,
                                              __epi_8xi8x7 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg7_strided_4xi16x7_mask(signed short int *address,
                                               __epi_4xi16x7 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_strided_2xi32x7_mask(signed int *address,
                                               __epi_2xi32x7 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_strided_1xi64x7_mask(signed long int *address,
                                               __epi_1xi64x7 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_strided_2xf32x7_mask(float *address,
                                               __epi_2xf32x7 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg7_strided_1xf64x7_mask(double *address,
                                               __epi_1xf64x7 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.40. Segmented store of tuples of eight elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving the elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

Instruction
vsseg8e.v
Prototypes
void __builtin_epi_vsseg8_8xi8x8(signed char *address, __epi_8xi8x8 value,
                                 unsigned long int gvl);
void __builtin_epi_vsseg8_4xi16x8(signed short int *address,
                                  __epi_4xi16x8 value, unsigned long int gvl);
void __builtin_epi_vsseg8_2xi32x8(signed int *address, __epi_2xi32x8 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg8_1xi64x8(signed long int *address, __epi_1xi64x8 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg8_2xf32x8(float *address, __epi_2xf32x8 value,
                                  unsigned long int gvl);
void __builtin_epi_vsseg8_1xf64x8(double *address, __epi_1xf64x8 value,
                                  unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  store_element(address, value.v0[element])
  address = address + SEW / 8
  store_element(address, value.v1[element])
  address = address + SEW / 8
  store_element(address, value.v2[element])
  address = address + SEW / 8
  store_element(address, value.v3[element])
  address = address + SEW / 8
  store_element(address, value.v4[element])
  address = address + SEW / 8
  store_element(address, value.v5[element])
  address = address + SEW / 8
  store_element(address, value.v6[element])
  address = address + SEW / 8
  store_element(address, value.v7[element])
  address = address + SEW / 8
Masked prototypes
void __builtin_epi_vsseg8_8xi8x8_mask(signed char *address, __epi_8xi8x8 value,
                                      __epi_8xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg8_4xi16x8_mask(signed short int *address,
                                       __epi_4xi16x8 value, __epi_4xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg8_2xi32x8_mask(signed int *address, __epi_2xi32x8 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg8_1xi64x8_mask(signed long int *address,
                                       __epi_1xi64x8 value, __epi_1xi1 mask,
                                       unsigned long int gvl);
void __builtin_epi_vsseg8_2xf32x8_mask(float *address, __epi_2xf32x8 value,
                                       __epi_2xi1 mask, unsigned long int gvl);
void __builtin_epi_vsseg8_1xf64x8_mask(double *address, __epi_1xf64x8 value,
                                       __epi_1xi1 mask, unsigned long int gvl);

2.8.41. Indexed segmented store of tuples of eight elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving the elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The indexed versions of these instructions are useful when the tuples are not laid out consecutively in memory but at arbitrary offsets from a base address. The offsets are expressed in bytes using a vector of indices.

Instruction
vsxseg8e.v
Prototypes
void __builtin_epi_vsseg8_indexed_8xi8x8(signed char *address,
                                         __epi_8xi8x8 value, __epi_8xi8 index,
                                         unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_4xi16x8(signed short int *address,
                                          __epi_4xi16x8 value,
                                          __epi_4xi16 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_2xi32x8(signed int *address,
                                          __epi_2xi32x8 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_1xi64x8(signed long int *address,
                                          __epi_1xi64x8 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_2xf32x8(float *address, __epi_2xf32x8 value,
                                          __epi_2xi32 index,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_1xf64x8(double *address, __epi_1xf64x8 value,
                                          __epi_1xi64 index,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + index[element]
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v5[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v6[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v7[element])
Masked prototypes
void __builtin_epi_vsseg8_indexed_8xi8x8_mask(signed char *address,
                                              __epi_8xi8x8 value,
                                              __epi_8xi8 index, __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_4xi16x8_mask(signed short int *address,
                                               __epi_4xi16x8 value,
                                               __epi_4xi16 index,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_2xi32x8_mask(signed int *address,
                                               __epi_2xi32x8 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_1xi64x8_mask(signed long int *address,
                                               __epi_1xi64x8 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_2xf32x8_mask(float *address,
                                               __epi_2xf32x8 value,
                                               __epi_2xi32 index,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_indexed_1xf64x8_mask(double *address,
                                               __epi_1xf64x8 value,
                                               __epi_1xi64 index,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.8.42. Strided segmented store of tuples of eight elements

Description

Use these builtins to store a tuple of vectors to memory. The store writes a sequence of elements built by interleaving the elements of each vector in the tuple. This operation is useful to convert a register representation of Structure-of-Arrays into a memory representation of Array-of-Structures.

The strided versions of these instructions are useful when the tuples are not laid out consecutively in memory but are separated by a constant number of bytes.

Instruction
vssseg8e.v
Prototypes
void __builtin_epi_vsseg8_strided_8xi8x8(signed char *address,
                                         __epi_8xi8x8 value,
                                         signed long int stride,
                                         unsigned long int gvl);
void __builtin_epi_vsseg8_strided_4xi16x8(signed short int *address,
                                          __epi_4xi16x8 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_strided_2xi32x8(signed int *address,
                                          __epi_2xi32x8 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_strided_1xi64x8(signed long int *address,
                                          __epi_1xi64x8 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_strided_2xf32x8(float *address, __epi_2xf32x8 value,
                                          signed long int stride,
                                          unsigned long int gvl);
void __builtin_epi_vsseg8_strided_1xf64x8(double *address, __epi_1xf64x8 value,
                                          signed long int stride,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  element_address = address + element * stride
  store_element(element_address, value.v0[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v1[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v2[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v3[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v4[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v5[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v6[element])
  element_address = element_address + SEW / 8
  store_element(element_address, value.v7[element])
Masked prototypes
void __builtin_epi_vsseg8_strided_8xi8x8_mask(signed char *address,
                                              __epi_8xi8x8 value,
                                              signed long int stride,
                                              __epi_8xi1 mask,
                                              unsigned long int gvl);
void __builtin_epi_vsseg8_strided_4xi16x8_mask(signed short int *address,
                                               __epi_4xi16x8 value,
                                               signed long int stride,
                                               __epi_4xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_strided_2xi32x8_mask(signed int *address,
                                               __epi_2xi32x8 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_strided_1xi64x8_mask(signed long int *address,
                                               __epi_1xi64x8 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_strided_2xf32x8_mask(float *address,
                                               __epi_2xf32x8 value,
                                               signed long int stride,
                                               __epi_2xi1 mask,
                                               unsigned long int gvl);
void __builtin_epi_vsseg8_strided_1xf64x8_mask(double *address,
                                               __epi_1xf64x8 value,
                                               signed long int stride,
                                               __epi_1xi1 mask,
                                               unsigned long int gvl);

2.9. Operations with masks

2.9.1. Compute the index of the first enabled element

Description

Use these builtins to compute the lowest index among the enabled elements of a given mask vector. If no element is enabled, the result is -1.

Instruction
vfirst.m
Prototypes
signed long int __builtin_epi_vfirst_8xi1(__epi_8xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vfirst_4xi1(__epi_4xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vfirst_2xi1(__epi_2xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vfirst_1xi1(__epi_1xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vfirst_16xi1(__epi_16xi1 a,
                                           unsigned long int gvl);
signed long int __builtin_epi_vfirst_32xi1(__epi_32xi1 a,
                                           unsigned long int gvl);
signed long int __builtin_epi_vfirst_64xi1(__epi_64xi1 a,
                                           unsigned long int gvl);
Operation
result = -1
for element = 0 to gvl - 1
  if a[element] then
    result = element
    break
Masked prototypes
signed long int __builtin_epi_vfirst_8xi1_mask(__epi_8xi1 a, __epi_8xi1 mask,
                                               unsigned long int gvl);
signed long int __builtin_epi_vfirst_4xi1_mask(__epi_4xi1 a, __epi_4xi1 mask,
                                               unsigned long int gvl);
signed long int __builtin_epi_vfirst_2xi1_mask(__epi_2xi1 a, __epi_2xi1 mask,
                                               unsigned long int gvl);
signed long int __builtin_epi_vfirst_1xi1_mask(__epi_1xi1 a, __epi_1xi1 mask,
                                               unsigned long int gvl);
signed long int __builtin_epi_vfirst_16xi1_mask(__epi_16xi1 a, __epi_16xi1 mask,
                                                unsigned long int gvl);
signed long int __builtin_epi_vfirst_32xi1_mask(__epi_32xi1 a, __epi_32xi1 mask,
                                                unsigned long int gvl);
signed long int __builtin_epi_vfirst_64xi1_mask(__epi_64xi1 a, __epi_64xi1 mask,
                                                unsigned long int gvl);
Masked operation
result = -1
for element = 0 to gvl - 1
  if mask[element] and a[element] then
    result = element
    break
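
The two operation blocks above can be condensed into one scalar C sketch, shown below for the unmasked case; `vfirst_model` is a hypothetical reference function, not the EPI builtin. The key detail is the -1 result when no element is enabled, which makes the return value directly usable as a "not found" sentinel.

```c
#include <assert.h>

/* Scalar model of vfirst.m: return the lowest enabled index within
   the first gvl elements of the mask, or -1 if none is enabled. */
static long vfirst_model(const _Bool *mask, unsigned long gvl) {
  for (unsigned long element = 0; element < gvl; ++element)
    if (mask[element])
      return (long)element;
  return -1;
}
```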

2.9.2. Compute elementwise logical and between two masks

Description

Use these builtins to compute a new mask that enables an element if and only if the two mask operands enable that element.

Instruction
vmand.mm
Prototypes
__epi_8xi1 __builtin_epi_vmand_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmand_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                    unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmand_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                    unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmand_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                    unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmand_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmand_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                      unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmand_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_and(a[element], b[element])

2.9.3. Compute elementwise logical andnot between two masks

Description

Use these builtins to compute a new mask that enables an element if and only if the first mask operand enables the element and the second mask operand does not enable the element.

Instruction
vmandnot.mm
Prototypes
__epi_8xi1 __builtin_epi_vmandnot_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                       unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmandnot_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                       unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmandnot_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                       unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmandnot_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                       unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmandnot_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                         unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmandnot_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                         unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmandnot_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                         unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_and(a[element], logical_not(b[element]))

2.9.4. Compute elementwise logical nand between two masks

Description

Use these builtins to compute a new mask that enables an element if and only if the first and the second mask operands do not both enable that element.

Instruction
vmnand.mm
Prototypes
__epi_8xi1 __builtin_epi_vmnand_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmnand_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmnand_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmnand_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmnand_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                       unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmnand_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                       unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmnand_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                       unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_not(logical_and(a[element], b[element]))

2.9.5. Compute elementwise logical nor between two masks

Description

Use these builtins to compute a new mask that enables an element if and only if neither the first nor the second mask operand enables that element.

Instruction
vmnor.mm
Prototypes
__epi_8xi1 __builtin_epi_vmnor_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmnor_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                    unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmnor_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                    unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmnor_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                    unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmnor_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmnor_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                      unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmnor_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_not(logical_or(a[element], b[element]))

2.9.6. Compute elementwise logical or between two masks

Description

Use these builtins to compute a new mask that enables an element if either the first or the second (or both) mask operands enable that element.

Instruction
vmor.mm
Prototypes
__epi_8xi1 __builtin_epi_vmor_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                   unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmor_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                   unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmor_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                   unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmor_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                   unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmor_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                     unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmor_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                     unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmor_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_or(a[element], b[element])

2.9.7. Compute elementwise logical ornot between two masks

Description

Use these builtins to compute a new mask that enables an element if either the first mask operand enables the element or the second mask operand does not enable that element.

Instruction
vmornot.mm
Prototypes
__epi_8xi1 __builtin_epi_vmornot_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                      unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmornot_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                      unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmornot_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                      unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmornot_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                      unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmornot_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                        unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmornot_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                        unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmornot_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                        unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_or(a[element], logical_not(b[element]))

2.9.8. Enable elements before the first one enabled

Description

Use these builtins to compute a mask vector given another mask vector. The resulting mask vector will have all the elements enabled up to, but not including, the first element enabled in the given mask.

Every other element in the resulting mask is disabled.

Instruction
vmsbf.m
Prototypes
__epi_8xi1 __builtin_epi_vmsbf_8xi1(__epi_8xi1 a, unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbf_4xi1(__epi_4xi1 a, unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsbf_2xi1(__epi_2xi1 a, unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsbf_1xi1(__epi_1xi1 a, unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbf_16xi1(__epi_16xi1 a, unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsbf_32xi1(__epi_32xi1 a, unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsbf_64xi1(__epi_64xi1 a, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  if not a[element] then
    result[element] = 1
  else
    result[element] = 0
    break
Masked prototypes
__epi_8xi1 __builtin_epi_vmsbf_8xi1_mask(__epi_8xi1 a, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsbf_4xi1_mask(__epi_4xi1 a, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsbf_2xi1_mask(__epi_2xi1 a, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsbf_1xi1_mask(__epi_1xi1 a, __epi_1xi1 mask,
                                         unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsbf_16xi1_mask(__epi_16xi1 a, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsbf_32xi1_mask(__epi_32xi1 a, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsbf_64xi1_mask(__epi_64xi1 a, __epi_64xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    if not a[element] then
      result[element] = 1
    else
      result[element] = 0
      break
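
The unmasked operation can be sketched as a scalar loop that tracks whether the first enabled element has been seen; elements from that point on come out disabled. The function name is illustrative, not part of the EPI API.

```c
#include <assert.h>

/* Scalar model of vmsbf.m (set-before-first): elements strictly before
 * the first enabled element of `a` come out enabled; the first enabled
 * element and everything after it come out disabled. */
void vmsbf_model(int *result, const int *a, unsigned long gvl) {
    int found = 0;
    for (unsigned long i = 0; i < gvl; ++i) {
        if (a[i])
            found = 1;
        result[i] = !found;
    }
}
```

For a = {0, 0, 1, 0} the model produces {1, 1, 0, 0}.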

2.9.9. Enable elements until the first one enabled

Description

Use these builtins to compute a mask vector given another mask vector. The resulting mask vector will have all the elements enabled up to the first element enabled in the given mask. The enabled element is included.

Every other element in the resulting mask is disabled.

Instruction
vmsif.m
Prototypes
__epi_8xi1 __builtin_epi_vmsif_8xi1(__epi_8xi1 a, unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsif_4xi1(__epi_4xi1 a, unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsif_2xi1(__epi_2xi1 a, unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsif_1xi1(__epi_1xi1 a, unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsif_16xi1(__epi_16xi1 a, unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsif_32xi1(__epi_32xi1 a, unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsif_64xi1(__epi_64xi1 a, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = 1
  if a[element] then
    break
Masked prototypes
__epi_8xi1 __builtin_epi_vmsif_8xi1_mask(__epi_8xi1 a, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsif_4xi1_mask(__epi_4xi1 a, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsif_2xi1_mask(__epi_2xi1 a, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsif_1xi1_mask(__epi_1xi1 a, __epi_1xi1 mask,
                                         unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsif_16xi1_mask(__epi_16xi1 a, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsif_32xi1_mask(__epi_32xi1 a, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsif_64xi1_mask(__epi_64xi1 a, __epi_64xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    result[element] = 1
    if a[element] then
      break
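
The difference with vmsbf is only the boundary: here the first enabled element itself is still included. A scalar sketch of the unmasked operation (illustrative names, not the EPI API):

```c
#include <assert.h>

/* Scalar model of vmsif.m (set-including-first): enabled up to and
 * including the first enabled element of `a`, disabled afterwards. */
void vmsif_model(int *result, const int *a, unsigned long gvl) {
    int found = 0;
    for (unsigned long i = 0; i < gvl; ++i) {
        result[i] = !found;   /* still 1 at the first enabled element */
        if (a[i])
            found = 1;
    }
}
```

For a = {0, 0, 1, 0} the model produces {1, 1, 1, 0}, one element more than vmsbf.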

2.9.10. Enable only the first element enabled

Description

Use these builtins to compute a mask vector given another mask vector. The resulting mask vector will have all elements disabled except the first (lowest-indexed) element that is enabled in the given mask.

Instruction
vmsof.m
Prototypes
__epi_8xi1 __builtin_epi_vmsof_8xi1(__epi_8xi1 a, unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsof_4xi1(__epi_4xi1 a, unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsof_2xi1(__epi_2xi1 a, unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsof_1xi1(__epi_1xi1 a, unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsof_16xi1(__epi_16xi1 a, unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsof_32xi1(__epi_32xi1 a, unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsof_64xi1(__epi_64xi1 a, unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  if a[element] then
    result[element] = 1
    break
  else
    result[element] = 0
Masked prototypes
__epi_8xi1 __builtin_epi_vmsof_8xi1_mask(__epi_8xi1 a, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmsof_4xi1_mask(__epi_4xi1 a, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmsof_2xi1_mask(__epi_2xi1 a, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmsof_1xi1_mask(__epi_1xi1 a, __epi_1xi1 mask,
                                         unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmsof_16xi1_mask(__epi_16xi1 a, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmsof_32xi1_mask(__epi_32xi1 a, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmsof_64xi1_mask(__epi_64xi1 a, __epi_64xi1 mask,
                                           unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
  if mask[element] then
    if a[element] then
      result[element] = 1
      break
    else
      result[element] = 0
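
Completing the vmsbf/vmsif/vmsof trio, the unmasked operation isolates exactly one element. A scalar sketch (illustrative names, not the EPI API):

```c
#include <assert.h>

/* Scalar model of vmsof.m (set-only-first): only the first enabled
 * element of `a` is enabled in the result. */
void vmsof_model(int *result, const int *a, unsigned long gvl) {
    int found = 0;
    for (unsigned long i = 0; i < gvl; ++i) {
        result[i] = !found && a[i];
        if (a[i])
            found = 1;
    }
}
```

For a = {0, 1, 1, 0} the model produces {0, 1, 0, 0}.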

2.9.11. Compute elementwise logical negated exclusive or between two masks

Description

Use these builtins to compute a new mask that enables an element if and only if the two mask operands agree on that element, that is, both enable it or both disable it.

Instruction
vmxnor.mm
Prototypes
__epi_8xi1 __builtin_epi_vmxnor_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                     unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmxnor_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                     unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmxnor_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                     unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmxnor_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                     unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmxnor_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                       unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmxnor_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                       unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmxnor_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                       unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_not(logical_xor(a[element], b[element]))

2.9.12. Compute elementwise logical exclusive or between two masks

Description

Use these builtins to compute a new mask that enables an element if and only if exactly one of the two mask operands enables that element.

Instruction
vmxor.mm
Prototypes
__epi_8xi1 __builtin_epi_vmxor_8xi1(__epi_8xi1 a, __epi_8xi1 b,
                                    unsigned long int gvl);
__epi_4xi1 __builtin_epi_vmxor_4xi1(__epi_4xi1 a, __epi_4xi1 b,
                                    unsigned long int gvl);
__epi_2xi1 __builtin_epi_vmxor_2xi1(__epi_2xi1 a, __epi_2xi1 b,
                                    unsigned long int gvl);
__epi_1xi1 __builtin_epi_vmxor_1xi1(__epi_1xi1 a, __epi_1xi1 b,
                                    unsigned long int gvl);
__epi_16xi1 __builtin_epi_vmxor_16xi1(__epi_16xi1 a, __epi_16xi1 b,
                                      unsigned long int gvl);
__epi_32xi1 __builtin_epi_vmxor_32xi1(__epi_32xi1 a, __epi_32xi1 b,
                                      unsigned long int gvl);
__epi_64xi1 __builtin_epi_vmxor_64xi1(__epi_64xi1 a, __epi_64xi1 b,
                                      unsigned long int gvl);
Operation
for element = 0 to gvl - 1
  result[element] = logical_xor(a[element], b[element])
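
A scalar sketch of the semantics, with illustrative names (vmxnor of section 2.9.11 is simply the negation of this result):

```c
#include <assert.h>

/* Scalar model of vmxor.mm: an element is enabled when exactly one of
 * the two operands enables it. Masks are arrays of 0/1 values. */
void vmxor_model(int *result, const int *a, const int *b,
                 unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        result[i] = (a[i] != 0) != (b[i] != 0);
}
```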

2.9.13. Population count of a mask vector

Description

Use these builtins to count the number of elements that are enabled by a mask.

Instruction
vpopc.m
Prototypes
signed long int __builtin_epi_vpopc_8xi1(__epi_8xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vpopc_4xi1(__epi_4xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vpopc_2xi1(__epi_2xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vpopc_1xi1(__epi_1xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vpopc_16xi1(__epi_16xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vpopc_32xi1(__epi_32xi1 a, unsigned long int gvl);
signed long int __builtin_epi_vpopc_64xi1(__epi_64xi1 a, unsigned long int gvl);
Operation
result = 0
for element = 0 to gvl - 1
  if a[element] then
    result = result + 1
Masked prototypes
signed long int __builtin_epi_vpopc_8xi1_mask(__epi_8xi1 a, __epi_8xi1 mask,
                                              unsigned long int gvl);
signed long int __builtin_epi_vpopc_4xi1_mask(__epi_4xi1 a, __epi_4xi1 mask,
                                              unsigned long int gvl);
signed long int __builtin_epi_vpopc_2xi1_mask(__epi_2xi1 a, __epi_2xi1 mask,
                                              unsigned long int gvl);
signed long int __builtin_epi_vpopc_1xi1_mask(__epi_1xi1 a, __epi_1xi1 mask,
                                              unsigned long int gvl);
signed long int __builtin_epi_vpopc_16xi1_mask(__epi_16xi1 a, __epi_16xi1 mask,
                                               unsigned long int gvl);
signed long int __builtin_epi_vpopc_32xi1_mask(__epi_32xi1 a, __epi_32xi1 mask,
                                               unsigned long int gvl);
signed long int __builtin_epi_vpopc_64xi1_mask(__epi_64xi1 a, __epi_64xi1 mask,
                                               unsigned long int gvl);
Masked operation
result = 0
for element = 0 to gvl - 1
  if mask[element] and a[element] then
    result = result + 1
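
The masked operation reduces the mask to a scalar count. A plain C sketch of these semantics (names are illustrative, not the EPI API):

```c
#include <assert.h>

/* Scalar model of the masked vpopc operation: count the elements that
 * are enabled both in `a` and in `mask`. */
long vpopc_model(const int *a, const int *mask, unsigned long gvl) {
    long count = 0;
    for (unsigned long i = 0; i < gvl; ++i)
        if (mask[i] && a[i])
            ++count;
    return count;
}
```

For a = {1, 0, 1, 1} with mask = {1, 1, 1, 0} the model returns 2: element 3 is enabled in a but masked off.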

2.10. Bit manipulation

2.10.1. Elementwise bitwise-and

Description

Use these builtins to compute an elementwise bitwise-and of two integer vectors.

Instruction
vand.vv
Prototypes
__epi_8xi8 __builtin_epi_vand_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vand_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vand_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vand_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vand_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vand_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vand_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vand_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vand_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vand_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vand_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vand_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vand_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vand_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vand_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vand_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = bitwise_and(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vand_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vand_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vand_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vand_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vand_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vand_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vand_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vand_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vand_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vand_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vand_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vand_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vand_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vand_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vand_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vand_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = bitwise_and(a[element], b[element])
   else
     result[element] = merge[element]
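
The masked operation above shows the merge semantics shared by the masked integer builtins: inactive elements are not zeroed but taken from the `merge` operand. A scalar sketch for 64-bit elements (the other widths behave the same way; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of the masked vand builtins: active elements get the
 * bitwise AND of `a` and `b`, inactive elements come from `merge`. */
void vand_mask_model(int64_t *result, const int64_t *merge,
                     const int64_t *a, const int64_t *b,
                     const int *mask, unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        result[i] = mask[i] ? (a[i] & b[i]) : merge[i];
}
```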

2.10.2. Elementwise bitwise-or

Description

Use these builtins to compute an elementwise bitwise-or of two integer vectors.

Instruction
vor.vv
Prototypes
__epi_8xi8 __builtin_epi_vor_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                  unsigned long int gvl);
__epi_4xi16 __builtin_epi_vor_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                    unsigned long int gvl);
__epi_2xi32 __builtin_epi_vor_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                    unsigned long int gvl);
__epi_1xi64 __builtin_epi_vor_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                    unsigned long int gvl);
__epi_16xi8 __builtin_epi_vor_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                    unsigned long int gvl);
__epi_8xi16 __builtin_epi_vor_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                    unsigned long int gvl);
__epi_4xi32 __builtin_epi_vor_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                    unsigned long int gvl);
__epi_2xi64 __builtin_epi_vor_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                    unsigned long int gvl);
__epi_32xi8 __builtin_epi_vor_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                    unsigned long int gvl);
__epi_16xi16 __builtin_epi_vor_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                      unsigned long int gvl);
__epi_8xi32 __builtin_epi_vor_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                    unsigned long int gvl);
__epi_4xi64 __builtin_epi_vor_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                    unsigned long int gvl);
__epi_64xi8 __builtin_epi_vor_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                    unsigned long int gvl);
__epi_32xi16 __builtin_epi_vor_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                      unsigned long int gvl);
__epi_16xi32 __builtin_epi_vor_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                      unsigned long int gvl);
__epi_8xi64 __builtin_epi_vor_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = bitwise_or(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vor_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                       __epi_8xi8 b, __epi_8xi1 mask,
                                       unsigned long int gvl);
__epi_4xi16 __builtin_epi_vor_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                         __epi_4xi16 b, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi32 __builtin_epi_vor_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                         __epi_2xi32 b, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_1xi64 __builtin_epi_vor_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                         __epi_1xi64 b, __epi_1xi1 mask,
                                         unsigned long int gvl);
__epi_16xi8 __builtin_epi_vor_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                         __epi_16xi8 b, __epi_16xi1 mask,
                                         unsigned long int gvl);
__epi_8xi16 __builtin_epi_vor_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                         __epi_8xi16 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi32 __builtin_epi_vor_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                         __epi_4xi32 b, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_2xi64 __builtin_epi_vor_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                         __epi_2xi64 b, __epi_2xi1 mask,
                                         unsigned long int gvl);
__epi_32xi8 __builtin_epi_vor_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                         __epi_32xi8 b, __epi_32xi1 mask,
                                         unsigned long int gvl);
__epi_16xi16 __builtin_epi_vor_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                           __epi_16xi16 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi32 __builtin_epi_vor_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                         __epi_8xi32 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
__epi_4xi64 __builtin_epi_vor_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                         __epi_4xi64 b, __epi_4xi1 mask,
                                         unsigned long int gvl);
__epi_64xi8 __builtin_epi_vor_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                         __epi_64xi8 b, __epi_64xi1 mask,
                                         unsigned long int gvl);
__epi_32xi16 __builtin_epi_vor_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                           __epi_32xi16 b, __epi_32xi1 mask,
                                           unsigned long int gvl);
__epi_16xi32 __builtin_epi_vor_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                           __epi_16xi32 b, __epi_16xi1 mask,
                                           unsigned long int gvl);
__epi_8xi64 __builtin_epi_vor_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                         __epi_8xi64 b, __epi_8xi1 mask,
                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = bitwise_or(a[element], b[element])
   else
     result[element] = merge[element]

2.10.3. Elementwise logical shift left

Description

Use these builtins to compute the elementwise logical shift left given two integer vector operands.

Instruction
vsll.vv
Prototypes
__epi_8xi8 __builtin_epi_vsll_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsll_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsll_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsll_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsll_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsll_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsll_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsll_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsll_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsll_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsll_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsll_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsll_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsll_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsll_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsll_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = sll(a[element], b[element])
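
A scalar sketch of `sll` for 64-bit elements. One assumption is made here that the text above does not state: following the RISC-V vector specification, only the low log2(SEW) bits of the shift amount are used (6 bits for SEW=64). The function name is illustrative, not part of the EPI API.

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of vsll.vv for 64-bit elements. The shift amount is
 * truncated to its low 6 bits, as assumed from the RVV specification. */
void vsll_model_1xi64(uint64_t *result, const uint64_t *a,
                      const uint64_t *b, unsigned long gvl) {
    for (unsigned long i = 0; i < gvl; ++i)
        result[i] = a[i] << (b[i] & 63);
}
```

Under that assumption, a shift amount of 64 wraps to 0 and leaves the element unchanged.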
Masked prototypes
__epi_8xi8 __builtin_epi_vsll_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsll_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsll_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsll_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsll_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsll_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsll_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsll_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsll_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsll_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsll_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsll_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsll_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsll_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsll_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsll_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = sll(a[element], b[element])
   else
     result[element] = merge[element]
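
For illustration, the masked operation above behaves like the following plain-C scalar loop. The helper name `vsll_1xi64_mask_ref` is illustrative, not an EPI builtin; masking the shift amount to the element width (the low log2(SEW) bits, here 6 for SEW = 64) follows the usual RVV convention for shifts and also keeps the C shift well-defined.

```c
#include <stdint.h>

/* Illustrative scalar model of the masked vsll semantics for SEW = 64.
   Only the low 6 bits of the shift amount are used. */
static void vsll_1xi64_mask_ref(int64_t *result, const int64_t *merge,
                                const int64_t *a, const int64_t *b,
                                const _Bool *mask, unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; i++) {
    if (mask[i])
      result[i] = (int64_t)((uint64_t)a[i] << (b[i] & 63));
    else
      result[i] = merge[i]; /* inactive elements take the merge operand */
  }
}
```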

2.10.4. Elementwise arithmetic shift right

Description

Use these builtins to compute the elementwise arithmetic shift right of two integer vector operands.

Instruction
vsra.vv
Prototypes
__epi_8xi8 __builtin_epi_vsra_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsra_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsra_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsra_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsra_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsra_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsra_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsra_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsra_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsra_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsra_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsra_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsra_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsra_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsra_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsra_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = sra(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vsra_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsra_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsra_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsra_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsra_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsra_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsra_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsra_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsra_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsra_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsra_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsra_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsra_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsra_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsra_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsra_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = sra(a[element], b[element])
   else
     result[element] = merge[element]
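
The unmasked operation above can be modelled with a plain-C scalar loop (`vsra_ref_i32` is an illustrative helper, not an EPI builtin). The right shift of a signed value is arithmetic on RISC-V ABIs, and masking the shift amount to the low 5 bits (SEW = 32) follows the usual RVV convention.

```c
#include <stdint.h>

/* Illustrative scalar model of the unmasked vsra semantics for SEW = 32.
   Signed >> is arithmetic (sign-extending) on RISC-V targets;
   only the low 5 bits of the shift amount are used. */
static void vsra_ref_i32(int32_t *result, const int32_t *a,
                         const int32_t *b, unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; i++)
    result[i] = a[i] >> (b[i] & 31);
}
```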

2.10.5. Elementwise logical shift right

Description

Use these builtins to compute the elementwise logical shift right of two integer vector operands.

Instruction
vsrl.vv
Prototypes
__epi_8xi8 __builtin_epi_vsrl_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsrl_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsrl_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsrl_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsrl_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsrl_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsrl_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsrl_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsrl_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsrl_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsrl_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsrl_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsrl_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsrl_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsrl_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsrl_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = srl(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vsrl_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vsrl_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vsrl_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vsrl_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vsrl_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vsrl_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vsrl_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vsrl_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vsrl_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vsrl_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vsrl_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vsrl_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vsrl_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vsrl_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vsrl_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vsrl_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = srl(a[element], b[element])
   else
     result[element] = merge[element]
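
A plain-C sketch of the unmasked logical shift right follows (`vsrl_ref_i64` is an illustrative helper, not an EPI builtin). Casting through an unsigned type makes the C shift logical, i.e. zero-filling, matching srl.

```c
#include <stdint.h>

/* Illustrative scalar model of the unmasked vsrl semantics for SEW = 64.
   The cast to uint64_t forces a zero-filling (logical) shift;
   only the low 6 bits of the shift amount are used. */
static void vsrl_ref_i64(int64_t *result, const int64_t *a,
                         const int64_t *b, unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; i++)
    result[i] = (int64_t)((uint64_t)a[i] >> (b[i] & 63));
}
```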

2.10.6. Elementwise bitwise-xor

Description

Use these builtins to compute the elementwise bitwise-xor of two integer vectors.

Instruction
vxor.vv
Prototypes
__epi_8xi8 __builtin_epi_vxor_8xi8(__epi_8xi8 a, __epi_8xi8 b,
                                   unsigned long int gvl);
__epi_4xi16 __builtin_epi_vxor_4xi16(__epi_4xi16 a, __epi_4xi16 b,
                                     unsigned long int gvl);
__epi_2xi32 __builtin_epi_vxor_2xi32(__epi_2xi32 a, __epi_2xi32 b,
                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vxor_1xi64(__epi_1xi64 a, __epi_1xi64 b,
                                     unsigned long int gvl);
__epi_16xi8 __builtin_epi_vxor_16xi8(__epi_16xi8 a, __epi_16xi8 b,
                                     unsigned long int gvl);
__epi_8xi16 __builtin_epi_vxor_8xi16(__epi_8xi16 a, __epi_8xi16 b,
                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vxor_4xi32(__epi_4xi32 a, __epi_4xi32 b,
                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vxor_2xi64(__epi_2xi64 a, __epi_2xi64 b,
                                     unsigned long int gvl);
__epi_32xi8 __builtin_epi_vxor_32xi8(__epi_32xi8 a, __epi_32xi8 b,
                                     unsigned long int gvl);
__epi_16xi16 __builtin_epi_vxor_16xi16(__epi_16xi16 a, __epi_16xi16 b,
                                       unsigned long int gvl);
__epi_8xi32 __builtin_epi_vxor_8xi32(__epi_8xi32 a, __epi_8xi32 b,
                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vxor_4xi64(__epi_4xi64 a, __epi_4xi64 b,
                                     unsigned long int gvl);
__epi_64xi8 __builtin_epi_vxor_64xi8(__epi_64xi8 a, __epi_64xi8 b,
                                     unsigned long int gvl);
__epi_32xi16 __builtin_epi_vxor_32xi16(__epi_32xi16 a, __epi_32xi16 b,
                                       unsigned long int gvl);
__epi_16xi32 __builtin_epi_vxor_16xi32(__epi_16xi32 a, __epi_16xi32 b,
                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vxor_8xi64(__epi_8xi64 a, __epi_8xi64 b,
                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = bitwise_xor(a[element], b[element])
Masked prototypes
__epi_8xi8 __builtin_epi_vxor_8xi8_mask(__epi_8xi8 merge, __epi_8xi8 a,
                                        __epi_8xi8 b, __epi_8xi1 mask,
                                        unsigned long int gvl);
__epi_4xi16 __builtin_epi_vxor_4xi16_mask(__epi_4xi16 merge, __epi_4xi16 a,
                                          __epi_4xi16 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi32 __builtin_epi_vxor_2xi32_mask(__epi_2xi32 merge, __epi_2xi32 a,
                                          __epi_2xi32 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_1xi64 __builtin_epi_vxor_1xi64_mask(__epi_1xi64 merge, __epi_1xi64 a,
                                          __epi_1xi64 b, __epi_1xi1 mask,
                                          unsigned long int gvl);
__epi_16xi8 __builtin_epi_vxor_16xi8_mask(__epi_16xi8 merge, __epi_16xi8 a,
                                          __epi_16xi8 b, __epi_16xi1 mask,
                                          unsigned long int gvl);
__epi_8xi16 __builtin_epi_vxor_8xi16_mask(__epi_8xi16 merge, __epi_8xi16 a,
                                          __epi_8xi16 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi32 __builtin_epi_vxor_4xi32_mask(__epi_4xi32 merge, __epi_4xi32 a,
                                          __epi_4xi32 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_2xi64 __builtin_epi_vxor_2xi64_mask(__epi_2xi64 merge, __epi_2xi64 a,
                                          __epi_2xi64 b, __epi_2xi1 mask,
                                          unsigned long int gvl);
__epi_32xi8 __builtin_epi_vxor_32xi8_mask(__epi_32xi8 merge, __epi_32xi8 a,
                                          __epi_32xi8 b, __epi_32xi1 mask,
                                          unsigned long int gvl);
__epi_16xi16 __builtin_epi_vxor_16xi16_mask(__epi_16xi16 merge, __epi_16xi16 a,
                                            __epi_16xi16 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi32 __builtin_epi_vxor_8xi32_mask(__epi_8xi32 merge, __epi_8xi32 a,
                                          __epi_8xi32 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
__epi_4xi64 __builtin_epi_vxor_4xi64_mask(__epi_4xi64 merge, __epi_4xi64 a,
                                          __epi_4xi64 b, __epi_4xi1 mask,
                                          unsigned long int gvl);
__epi_64xi8 __builtin_epi_vxor_64xi8_mask(__epi_64xi8 merge, __epi_64xi8 a,
                                          __epi_64xi8 b, __epi_64xi1 mask,
                                          unsigned long int gvl);
__epi_32xi16 __builtin_epi_vxor_32xi16_mask(__epi_32xi16 merge, __epi_32xi16 a,
                                            __epi_32xi16 b, __epi_32xi1 mask,
                                            unsigned long int gvl);
__epi_16xi32 __builtin_epi_vxor_16xi32_mask(__epi_16xi32 merge, __epi_16xi32 a,
                                            __epi_16xi32 b, __epi_16xi1 mask,
                                            unsigned long int gvl);
__epi_8xi64 __builtin_epi_vxor_8xi64_mask(__epi_8xi64 merge, __epi_8xi64 a,
                                          __epi_8xi64 b, __epi_8xi1 mask,
                                          unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = bitwise_xor(a[element], b[element])
   else
     result[element] = merge[element]
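
The unmasked operation corresponds to the following plain-C scalar loop (`vxor_ref_i32` is an illustrative helper, not an EPI builtin). Note that xor-ing a vector with itself yields all zeros, a common idiom for clearing a register.

```c
#include <stdint.h>

/* Illustrative scalar model of the unmasked vxor semantics for SEW = 32. */
static void vxor_ref_i32(int32_t *result, const int32_t *a,
                         const int32_t *b, unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; i++)
    result[i] = a[i] ^ b[i];
}
```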

2.11. EPI custom extensions

2.11.1. Transpose two vectors into a tuple of two vectors

Description

Use these builtins to "transpose" two vectors into a tuple of two vectors. This is similar to 'zip', except that the even-numbered elements of both vectors are zipped first, followed by the odd-numbered elements.

Given two vectors (a, 0, b, 1, c, 2) and (d, 3, e, 4, f, 5) the first element of the result tuple will be the vector (a, d, b, e, c, f). The second element of the result tuple will be the vector (0, 3, 1, 4, 2, 5).

Instruction
vtrn.vv
Prototypes
__epi_8xi8x2 __builtin_epi_vtrn_8xi8x2(__epi_8xi8 a, __epi_8xi8 b,
                                       unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vtrn_4xi16x2(__epi_4xi16 a, __epi_4xi16 b,
                                         unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vtrn_2xi32x2(__epi_2xi32 a, __epi_2xi32 b,
                                         unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vtrn_1xi64x2(__epi_1xi64 a, __epi_1xi64 b,
                                         unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vtrn_1xf64x2(__epi_1xf64 a, __epi_1xf64 b,
                                         unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vtrn_2xf32x2(__epi_2xf32 a, __epi_2xf32 b,
                                         unsigned long int gvl);
Operation
# Even-numbered elements
element = 0
while element < gvl
   result.v0[element] = a[element]
   result.v0[element + 1] = b[element]
   element = element + 2
# Odd-numbered elements
element = 1
while element < gvl
   result.v1[element - 1] = a[element]
   result.v1[element] = b[element]
   element = element + 2
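
The worked example in the Description can be reproduced with a plain-C scalar model. This is an illustrative sketch assuming gvl is even and gvl <= VLMAX; `vtrn_ref` is our name, not an EPI builtin.

```c
#include <stdint.h>

/* Illustrative scalar model of vtrn: even-indexed element pairs of a and b
   interleave into v0, odd-indexed pairs into v1. Assumes gvl is even. */
static void vtrn_ref(int64_t *v0, int64_t *v1, const int64_t *a,
                     const int64_t *b, unsigned long gvl) {
  for (unsigned long e = 0; e + 1 < gvl; e += 2) {
    v0[e] = a[e];         /* even elements */
    v0[e + 1] = b[e];
    v1[e] = a[e + 1];     /* odd elements */
    v1[e + 1] = b[e + 1];
  }
}
```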

2.11.2. Unzip two vectors into a tuple of two vectors

Description

Use these builtins to "unzip" two vectors into a tuple of two vectors. This is the dual of the zip operation.

Given two vectors (a, 0, b, 1, c, 2) and (d, 3, e, 4, f, 5) the first element of the result tuple will be the vector (a, b, c, d, e, f). The second element of the result tuple will be the vector (0, 1, 2, 3, 4, 5).

Instruction
vunzip2.vv
Prototypes
__epi_8xi8x2 __builtin_epi_vunzip2_8xi8x2(__epi_8xi8 a, __epi_8xi8 b,
                                          unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vunzip2_4xi16x2(__epi_4xi16 a, __epi_4xi16 b,
                                            unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vunzip2_2xi32x2(__epi_2xi32 a, __epi_2xi32 b,
                                            unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vunzip2_1xi64x2(__epi_1xi64 a, __epi_1xi64 b,
                                            unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vunzip2_1xf64x2(__epi_1xf64 a, __epi_1xf64 b,
                                            unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vunzip2_2xf32x2(__epi_2xf32 a, __epi_2xf32 b,
                                            unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   src_element = 2 * element
   if src_element < VLMAX
      result.v0[element] = a[src_element]
      result.v1[element] = a[src_element + 1]
   else
      src_element = src_element - VLMAX
      result.v0[element] = b[src_element]
      result.v1[element] = b[src_element + 1]
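
For illustration, the operation above can be modelled in plain C for the case gvl == VLMAX, treating a followed by b as one source sequence of 2*gvl elements. `vunzip2_ref` is an illustrative helper, not an EPI builtin; gvl is assumed even.

```c
#include <stdint.h>

/* Illustrative scalar model of vunzip2 (gvl == VLMAX, gvl even):
   even-positioned elements of the concatenated source go to v0,
   odd-positioned elements to v1. */
static void vunzip2_ref(int64_t *v0, int64_t *v1, const int64_t *a,
                        const int64_t *b, unsigned long gvl) {
  for (unsigned long e = 0; e < gvl; e++) {
    unsigned long src = 2 * e;
    const int64_t *v = a;
    if (src >= gvl) {      /* source pair lives in the second vector */
      v = b;
      src -= gvl;
    }
    v0[e] = v[src];
    v1[e] = v[src + 1];
  }
}
```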

2.11.3. Zip two vectors into a tuple of two vectors

Description

Use these builtins to "zip" (interleave) two vectors into a tuple of two vectors. The result, viewed as a single sequence, alternates the elements of the two input vectors.

Given two vectors (a, b, c, d, e, f) and (0, 1, 2, 3, 4, 5) the first element of the result tuple will be the vector (a, 0, b, 1, c, 2). The second element of the result tuple will be the vector (d, 3, e, 4, f, 5).

Instruction
vzip2.vv
Prototypes
__epi_8xi8x2 __builtin_epi_vzip2_8xi8x2(__epi_8xi8 a, __epi_8xi8 b,
                                        unsigned long int gvl);
__epi_4xi16x2 __builtin_epi_vzip2_4xi16x2(__epi_4xi16 a, __epi_4xi16 b,
                                          unsigned long int gvl);
__epi_2xi32x2 __builtin_epi_vzip2_2xi32x2(__epi_2xi32 a, __epi_2xi32 b,
                                          unsigned long int gvl);
__epi_1xi64x2 __builtin_epi_vzip2_1xi64x2(__epi_1xi64 a, __epi_1xi64 b,
                                          unsigned long int gvl);
__epi_1xf64x2 __builtin_epi_vzip2_1xf64x2(__epi_1xf64 a, __epi_1xf64 b,
                                          unsigned long int gvl);
__epi_2xf32x2 __builtin_epi_vzip2_2xf32x2(__epi_2xf32 a, __epi_2xf32 b,
                                          unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   dest_element = 2 * element
   if dest_element < VLMAX
      result.v0[dest_element] = a[element]
      result.v0[dest_element + 1] = b[element]
   else
      dest_element = dest_element - VLMAX
      result.v1[dest_element] = a[element]
      result.v1[dest_element + 1] = b[element]
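
The operation above can be modelled in plain C for the case gvl == VLMAX, treating v0 followed by v1 as one destination sequence of 2*gvl elements. `vzip2_ref` is an illustrative helper, not an EPI builtin.

```c
#include <stdint.h>

/* Illustrative scalar model of vzip2 (gvl == VLMAX): element e of a and
   element e of b form adjacent destination positions 2e and 2e+1,
   spilling from v0 into v1 once 2e reaches gvl. */
static void vzip2_ref(int64_t *v0, int64_t *v1, const int64_t *a,
                      const int64_t *b, unsigned long gvl) {
  for (unsigned long e = 0; e < gvl; e++) {
    unsigned long dst = 2 * e;
    int64_t *v = v0;
    if (dst >= gvl) {      /* destination pair lives in the second vector */
      v = v1;
      dst -= gvl;
    }
    v[dst] = a[e];
    v[dst + 1] = b[e];
  }
}
```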

2.12. Conversions between mask and integer vectors

2.12.1. Reinterpret a vector mask as an integer vector

Description

Use these builtins when you need to reinterpret the contents of a mask vector as an integer vector. These builtins are a no-op and exist only to transform the vector types.

Prototypes
__epi_8xi8 __builtin_epi_cast_8xi8_8xi1(__epi_8xi1 a);
__epi_4xi16 __builtin_epi_cast_4xi16_4xi1(__epi_4xi1 a);
__epi_2xi32 __builtin_epi_cast_2xi32_2xi1(__epi_2xi1 a);
__epi_1xi64 __builtin_epi_cast_1xi64_1xi1(__epi_1xi1 a);

2.12.2. Reinterpret an integer vector as a vector mask

Description

Use these builtins when you need to reinterpret the contents of an integer vector as a vector mask. These builtins are a no-op and exist only to transform the vector types.

Prototypes
__epi_8xi1 __builtin_epi_cast_8xi1_8xi8(__epi_8xi8 a);
__epi_4xi1 __builtin_epi_cast_4xi1_4xi16(__epi_4xi16 a);
__epi_2xi1 __builtin_epi_cast_2xi1_2xi32(__epi_2xi32 a);
__epi_1xi1 __builtin_epi_cast_1xi1_1xi64(__epi_1xi64 a);

2.13. Conversions between integer and floating-point vectors

2.13.1. Integer to floating-point conversion

Description

Use these builtins to convert elementwise a signed integer vector to a floating-point vector.

Instruction
vfcvt.f.x.v
Prototypes
__epi_2xf32 __builtin_epi_vfcvt_f_x_2xf32_2xi32(__epi_2xi32 a,
                                                unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfcvt_f_x_1xf64_1xi64(__epi_1xi64 a,
                                                unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfcvt_f_x_4xf32_4xi32(__epi_4xi32 a,
                                                unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfcvt_f_x_2xf64_2xi64(__epi_2xi64 a,
                                                unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfcvt_f_x_8xf32_8xi32(__epi_8xi32 a,
                                                unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfcvt_f_x_4xf64_4xi64(__epi_4xi64 a,
                                                unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfcvt_f_x_16xf32_16xi32(__epi_16xi32 a,
                                                   unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfcvt_f_x_8xf64_8xi64(__epi_8xi64 a,
                                                unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = int_to_fp(a[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfcvt_f_x_2xf32_2xi32_mask(__epi_2xf32 merge,
                                                     __epi_2xi32 a,
                                                     __epi_2xi1 mask,
                                                     unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfcvt_f_x_1xf64_1xi64_mask(__epi_1xf64 merge,
                                                     __epi_1xi64 a,
                                                     __epi_1xi1 mask,
                                                     unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfcvt_f_x_4xf32_4xi32_mask(__epi_4xf32 merge,
                                                     __epi_4xi32 a,
                                                     __epi_4xi1 mask,
                                                     unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfcvt_f_x_2xf64_2xi64_mask(__epi_2xf64 merge,
                                                     __epi_2xi64 a,
                                                     __epi_2xi1 mask,
                                                     unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfcvt_f_x_8xf32_8xi32_mask(__epi_8xf32 merge,
                                                     __epi_8xi32 a,
                                                     __epi_8xi1 mask,
                                                     unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfcvt_f_x_4xf64_4xi64_mask(__epi_4xf64 merge,
                                                     __epi_4xi64 a,
                                                     __epi_4xi1 mask,
                                                     unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfcvt_f_x_16xf32_16xi32_mask(__epi_16xf32 merge,
                                                        __epi_16xi32 a,
                                                        __epi_16xi1 mask,
                                                        unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfcvt_f_x_8xf64_8xi64_mask(__epi_8xf64 merge,
                                                     __epi_8xi64 a,
                                                     __epi_8xi1 mask,
                                                     unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = int_to_fp(a[element])
   else
     result[element] = merge[element]
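
The masked conversion above corresponds to the following plain-C scalar loop for SEW = 64 (`vfcvt_f_x_mask_ref` is an illustrative helper, not an EPI builtin).

```c
#include <stdint.h>

/* Illustrative scalar model of the masked signed-integer-to-double
   conversion (vfcvt.f.x.v, SEW = 64): inactive elements take merge. */
static void vfcvt_f_x_mask_ref(double *result, const double *merge,
                               const int64_t *a, const _Bool *mask,
                               unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; i++)
    result[i] = mask[i] ? (double)a[i] : merge[i];
}
```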

2.13.2. Unsigned integer to floating-point conversion

Description

Use these builtins to convert elementwise an unsigned integer vector to a floating-point vector.

Instruction
vfcvt.f.xu.v
Prototypes
__epi_2xf32 __builtin_epi_vfcvt_f_xu_2xf32_2xi32(__epi_2xi32 a,
                                                 unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfcvt_f_xu_1xf64_1xi64(__epi_1xi64 a,
                                                 unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfcvt_f_xu_4xf32_4xi32(__epi_4xi32 a,
                                                 unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfcvt_f_xu_2xf64_2xi64(__epi_2xi64 a,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfcvt_f_xu_8xf32_8xi32(__epi_8xi32 a,
                                                 unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfcvt_f_xu_4xf64_4xi64(__epi_4xi64 a,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfcvt_f_xu_16xf32_16xi32(__epi_16xi32 a,
                                                    unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfcvt_f_xu_8xf64_8xi64(__epi_8xi64 a,
                                                 unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = uint_to_fp(a[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfcvt_f_xu_2xf32_2xi32_mask(__epi_2xf32 merge,
                                                      __epi_2xi32 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_1xf64 __builtin_epi_vfcvt_f_xu_1xf64_1xi64_mask(__epi_1xf64 merge,
                                                      __epi_1xi64 a,
                                                      __epi_1xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfcvt_f_xu_4xf32_4xi32_mask(__epi_4xf32 merge,
                                                      __epi_4xi32 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfcvt_f_xu_2xf64_2xi64_mask(__epi_2xf64 merge,
                                                      __epi_2xi64 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfcvt_f_xu_8xf32_8xi32_mask(__epi_8xf32 merge,
                                                      __epi_8xi32 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfcvt_f_xu_4xf64_4xi64_mask(__epi_4xf64 merge,
                                                      __epi_4xi64 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfcvt_f_xu_16xf32_16xi32_mask(__epi_16xf32 merge,
                                                         __epi_16xi32 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfcvt_f_xu_8xf64_8xi64_mask(__epi_8xf64 merge,
                                                      __epi_8xi64 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = uint_to_fp(a[element])
   else
     result[element] = merge[element]

2.13.3. Floating-point to integer conversion

Description

Use these builtins to convert elementwise a floating-point vector to an integer vector.

Instruction
vfcvt.x.f.v
Prototypes
__epi_2xi32 __builtin_epi_vfcvt_x_f_2xi32_2xf32(__epi_2xf32 a,
                                                unsigned long int gvl);
__epi_1xi64 __builtin_epi_vfcvt_x_f_1xi64_1xf64(__epi_1xf64 a,
                                                unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfcvt_x_f_4xi32_4xf32(__epi_4xf32 a,
                                                unsigned long int gvl);
__epi_2xi64 __builtin_epi_vfcvt_x_f_2xi64_2xf64(__epi_2xf64 a,
                                                unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfcvt_x_f_8xi32_8xf32(__epi_8xf32 a,
                                                unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfcvt_x_f_4xi64_4xf64(__epi_4xf64 a,
                                                unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfcvt_x_f_16xi32_16xf32(__epi_16xf32 a,
                                                   unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfcvt_x_f_8xi64_8xf64(__epi_8xf64 a,
                                                unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_int(a[element])
Masked prototypes
__epi_2xi32 __builtin_epi_vfcvt_x_f_2xi32_2xf32_mask(__epi_2xi32 merge,
                                                     __epi_2xf32 a,
                                                     __epi_2xi1 mask,
                                                     unsigned long int gvl);
__epi_1xi64 __builtin_epi_vfcvt_x_f_1xi64_1xf64_mask(__epi_1xi64 merge,
                                                     __epi_1xf64 a,
                                                     __epi_1xi1 mask,
                                                     unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfcvt_x_f_4xi32_4xf32_mask(__epi_4xi32 merge,
                                                     __epi_4xf32 a,
                                                     __epi_4xi1 mask,
                                                     unsigned long int gvl);
__epi_2xi64 __builtin_epi_vfcvt_x_f_2xi64_2xf64_mask(__epi_2xi64 merge,
                                                     __epi_2xf64 a,
                                                     __epi_2xi1 mask,
                                                     unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfcvt_x_f_8xi32_8xf32_mask(__epi_8xi32 merge,
                                                     __epi_8xf32 a,
                                                     __epi_8xi1 mask,
                                                     unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfcvt_x_f_4xi64_4xf64_mask(__epi_4xi64 merge,
                                                     __epi_4xf64 a,
                                                     __epi_4xi1 mask,
                                                     unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfcvt_x_f_16xi32_16xf32_mask(__epi_16xi32 merge,
                                                        __epi_16xf32 a,
                                                        __epi_16xi1 mask,
                                                        unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfcvt_x_f_8xi64_8xf64_mask(__epi_8xi64 merge,
                                                     __epi_8xf64 a,
                                                     __epi_8xi1 mask,
                                                     unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_int(a[element])
   else
     result[element] = merge[element]

2.13.4. Floating-point to unsigned integer conversion

Description

Use these builtins to convert elementwise a floating-point vector to an integer vector where elements are interpreted as unsigned integers.

Instruction
vfcvt.xu.f.v
Prototypes
__epi_2xi32 __builtin_epi_vfcvt_xu_f_2xi32_2xf32(__epi_2xf32 a,
                                                 unsigned long int gvl);
__epi_1xi64 __builtin_epi_vfcvt_xu_f_1xi64_1xf64(__epi_1xf64 a,
                                                 unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfcvt_xu_f_4xi32_4xf32(__epi_4xf32 a,
                                                 unsigned long int gvl);
__epi_2xi64 __builtin_epi_vfcvt_xu_f_2xi64_2xf64(__epi_2xf64 a,
                                                 unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfcvt_xu_f_8xi32_8xf32(__epi_8xf32 a,
                                                 unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfcvt_xu_f_4xi64_4xf64(__epi_4xf64 a,
                                                 unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfcvt_xu_f_16xi32_16xf32(__epi_16xf32 a,
                                                    unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfcvt_xu_f_8xi64_8xf64(__epi_8xf64 a,
                                                 unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_uint(a[element])
Masked prototypes
__epi_2xi32 __builtin_epi_vfcvt_xu_f_2xi32_2xf32_mask(__epi_2xi32 merge,
                                                      __epi_2xf32 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_1xi64 __builtin_epi_vfcvt_xu_f_1xi64_1xf64_mask(__epi_1xi64 merge,
                                                      __epi_1xf64 a,
                                                      __epi_1xi1 mask,
                                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfcvt_xu_f_4xi32_4xf32_mask(__epi_4xi32 merge,
                                                      __epi_4xf32 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xi64 __builtin_epi_vfcvt_xu_f_2xi64_2xf64_mask(__epi_2xi64 merge,
                                                      __epi_2xf64 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfcvt_xu_f_8xi32_8xf32_mask(__epi_8xi32 merge,
                                                      __epi_8xf32 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfcvt_xu_f_4xi64_4xf64_mask(__epi_4xi64 merge,
                                                      __epi_4xf64 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfcvt_xu_f_16xi32_16xf32_mask(__epi_16xi32 merge,
                                                         __epi_16xf32 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfcvt_xu_f_8xi64_8xf64_mask(__epi_8xi64 merge,
                                                      __epi_8xf64 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_uint(a[element])
   else
     result[element] = merge[element]

2.13.5. Elementwise narrowing integer to floating-point conversion

Description

Use these builtins to convert elementwise an integer vector to a vector of floating-point elements that are half the width of the integer element.

Instruction
vfncvt.f.x.w
Prototypes
__epi_2xf32 __builtin_epi_vfncvt_f_x_2xf32_2xi64(__epi_2xi64 a,
                                                 unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfncvt_f_x_4xf32_4xi64(__epi_4xi64 a,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfncvt_f_x_8xf32_8xi64(__epi_8xi64 a,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfncvt_f_x_16xf32_16xi64(__epi_16xi64 a,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = int_to_narrow_fp(a[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfncvt_f_x_2xf32_2xi64_mask(__epi_2xf32 merge,
                                                      __epi_2xi64 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfncvt_f_x_4xf32_4xi64_mask(__epi_4xf32 merge,
                                                      __epi_4xi64 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfncvt_f_x_8xf32_8xi64_mask(__epi_8xf32 merge,
                                                      __epi_8xi64 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfncvt_f_x_16xf32_16xi64_mask(__epi_16xf32 merge,
                                                         __epi_16xi64 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = int_to_narrow_fp(a[element])
   else
     result[element] = merge[element]
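
The `int_to_narrow_fp` step can be sketched as a scalar C loop over one source/destination pair, `int64_t` to `float`. This is a model only, under the assumption that the conversion follows ordinary C semantics; the helper name is ours, not part of the EPI API. Note that 64-bit integers with more than 24 significant bits cannot be represented exactly in a 32-bit float and are rounded.

```c
#include <assert.h>

/* Scalar model of a narrowing int64-to-float conversion (hypothetical
   helper, not an EPI builtin): each 64-bit integer element becomes a
   32-bit float, a type half the source element width. */
void vfncvt_f_x_model(float *result, const long long *a,
                      unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; ++i)
    result[i] = (float)a[i];
}
```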

2.13.6. Elementwise narrowing unsigned integer to floating-point conversion

Description

Use these builtins to convert elementwise an integer vector, where elements are interpreted as unsigned integers, to a vector of floating-point elements that are half the width of the integer element.

Instruction
vfncvt.f.xu.w
Prototypes
__epi_2xf32 __builtin_epi_vfncvt_f_xu_2xf32_2xi64(__epi_2xi64 a,
                                                  unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfncvt_f_xu_4xf32_4xi64(__epi_4xi64 a,
                                                  unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfncvt_f_xu_8xf32_8xi64(__epi_8xi64 a,
                                                  unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfncvt_f_xu_16xf32_16xi64(__epi_16xi64 a,
                                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = uint_to_narrow_fp(a[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfncvt_f_xu_2xf32_2xi64_mask(__epi_2xf32 merge,
                                                       __epi_2xi64 a,
                                                       __epi_2xi1 mask,
                                                       unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfncvt_f_xu_4xf32_4xi64_mask(__epi_4xf32 merge,
                                                       __epi_4xi64 a,
                                                       __epi_4xi1 mask,
                                                       unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfncvt_f_xu_8xf32_8xi64_mask(__epi_8xf32 merge,
                                                       __epi_8xi64 a,
                                                       __epi_8xi1 mask,
                                                       unsigned long int gvl);
__epi_16xf32
__builtin_epi_vfncvt_f_xu_16xf32_16xi64_mask(__epi_16xf32 merge, __epi_16xi64 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = uint_to_narrow_fp(a[element])
   else
     result[element] = merge[element]

2.13.7. Elementwise narrowing floating-point to integer conversion

Description

Use these builtins to convert elementwise a floating-point vector to a vector of integer elements that are half the width of the floating-point elements.

Instruction
vfncvt.x.f.w
Prototypes
__epi_4xi16 __builtin_epi_vfncvt_x_f_4xi16_4xf32(__epi_4xf32 a,
                                                 unsigned long int gvl);
__epi_2xi32 __builtin_epi_vfncvt_x_f_2xi32_2xf64(__epi_2xf64 a,
                                                 unsigned long int gvl);
__epi_8xi16 __builtin_epi_vfncvt_x_f_8xi16_8xf32(__epi_8xf32 a,
                                                 unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfncvt_x_f_4xi32_4xf64(__epi_4xf64 a,
                                                 unsigned long int gvl);
__epi_16xi16 __builtin_epi_vfncvt_x_f_16xi16_16xf32(__epi_16xf32 a,
                                                    unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfncvt_x_f_8xi32_8xf64(__epi_8xf64 a,
                                                 unsigned long int gvl);
__epi_32xi16 __builtin_epi_vfncvt_x_f_32xi16_32xf32(__epi_32xf32 a,
                                                    unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfncvt_x_f_16xi32_16xf64(__epi_16xf64 a,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_narrow_int(a[element])
Masked prototypes
__epi_4xi16 __builtin_epi_vfncvt_x_f_4xi16_4xf32_mask(__epi_4xi16 merge,
                                                      __epi_4xf32 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xi32 __builtin_epi_vfncvt_x_f_2xi32_2xf64_mask(__epi_2xi32 merge,
                                                      __epi_2xf64 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_8xi16 __builtin_epi_vfncvt_x_f_8xi16_8xf32_mask(__epi_8xi16 merge,
                                                      __epi_8xf32 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfncvt_x_f_4xi32_4xf64_mask(__epi_4xi32 merge,
                                                      __epi_4xf64 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_16xi16 __builtin_epi_vfncvt_x_f_16xi16_16xf32_mask(__epi_16xi16 merge,
                                                         __epi_16xf32 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfncvt_x_f_8xi32_8xf64_mask(__epi_8xi32 merge,
                                                      __epi_8xf64 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_32xi16 __builtin_epi_vfncvt_x_f_32xi16_32xf32_mask(__epi_32xi16 merge,
                                                         __epi_32xf32 a,
                                                         __epi_32xi1 mask,
                                                         unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfncvt_x_f_16xi32_16xf64_mask(__epi_16xi32 merge,
                                                         __epi_16xf64 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_narrow_int(a[element])
   else
     result[element] = merge[element]

2.13.8. Elementwise narrowing floating-point to unsigned integer conversion

Description

Use these builtins to convert elementwise a floating-point vector to a vector of integer elements, interpreted as unsigned, that are half the width of the floating-point elements.

Instruction
vfncvt.xu.f.w
Prototypes
__epi_4xi16 __builtin_epi_vfncvt_xu_f_4xi16_4xf32(__epi_4xf32 a,
                                                  unsigned long int gvl);
__epi_2xi32 __builtin_epi_vfncvt_xu_f_2xi32_2xf64(__epi_2xf64 a,
                                                  unsigned long int gvl);
__epi_8xi16 __builtin_epi_vfncvt_xu_f_8xi16_8xf32(__epi_8xf32 a,
                                                  unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfncvt_xu_f_4xi32_4xf64(__epi_4xf64 a,
                                                  unsigned long int gvl);
__epi_16xi16 __builtin_epi_vfncvt_xu_f_16xi16_16xf32(__epi_16xf32 a,
                                                     unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfncvt_xu_f_8xi32_8xf64(__epi_8xf64 a,
                                                  unsigned long int gvl);
__epi_32xi16 __builtin_epi_vfncvt_xu_f_32xi16_32xf32(__epi_32xf32 a,
                                                     unsigned long int gvl);
__epi_16xi32 __builtin_epi_vfncvt_xu_f_16xi32_16xf64(__epi_16xf64 a,
                                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_narrow_uint(a[element])
Masked prototypes
__epi_4xi16 __builtin_epi_vfncvt_xu_f_4xi16_4xf32_mask(__epi_4xi16 merge,
                                                       __epi_4xf32 a,
                                                       __epi_4xi1 mask,
                                                       unsigned long int gvl);
__epi_2xi32 __builtin_epi_vfncvt_xu_f_2xi32_2xf64_mask(__epi_2xi32 merge,
                                                       __epi_2xf64 a,
                                                       __epi_2xi1 mask,
                                                       unsigned long int gvl);
__epi_8xi16 __builtin_epi_vfncvt_xu_f_8xi16_8xf32_mask(__epi_8xi16 merge,
                                                       __epi_8xf32 a,
                                                       __epi_8xi1 mask,
                                                       unsigned long int gvl);
__epi_4xi32 __builtin_epi_vfncvt_xu_f_4xi32_4xf64_mask(__epi_4xi32 merge,
                                                       __epi_4xf64 a,
                                                       __epi_4xi1 mask,
                                                       unsigned long int gvl);
__epi_16xi16
__builtin_epi_vfncvt_xu_f_16xi16_16xf32_mask(__epi_16xi16 merge, __epi_16xf32 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xi32 __builtin_epi_vfncvt_xu_f_8xi32_8xf64_mask(__epi_8xi32 merge,
                                                       __epi_8xf64 a,
                                                       __epi_8xi1 mask,
                                                       unsigned long int gvl);
__epi_32xi16
__builtin_epi_vfncvt_xu_f_32xi16_32xf32_mask(__epi_32xi16 merge, __epi_32xf32 a,
                                             __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xi32
__builtin_epi_vfncvt_xu_f_16xi32_16xf64_mask(__epi_16xi32 merge, __epi_16xf64 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_narrow_uint(a[element])
   else
     result[element] = merge[element]

2.13.9. Elementwise widening integer to floating-point conversion

Description

Use these builtins to convert elementwise an integer vector to a vector of floating-point elements that are twice the width of the integer element.

Instruction
vfwcvt.f.x.v
Prototypes
__epi_4xf32 __builtin_epi_vfwcvt_f_x_4xf32_4xi16(__epi_4xi16 a,
                                                 unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfwcvt_f_x_2xf64_2xi32(__epi_2xi32 a,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfwcvt_f_x_8xf32_8xi16(__epi_8xi16 a,
                                                 unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwcvt_f_x_4xf64_4xi32(__epi_4xi32 a,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfwcvt_f_x_16xf32_16xi16(__epi_16xi16 a,
                                                    unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwcvt_f_x_8xf64_8xi32(__epi_8xi32 a,
                                                 unsigned long int gvl);
__epi_32xf32 __builtin_epi_vfwcvt_f_x_32xf32_32xi16(__epi_32xi16 a,
                                                    unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwcvt_f_x_16xf64_16xi32(__epi_16xi32 a,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = int_to_wide_fp(a[element])
Masked prototypes
__epi_4xf32 __builtin_epi_vfwcvt_f_x_4xf32_4xi16_mask(__epi_4xf32 merge,
                                                      __epi_4xi16 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfwcvt_f_x_2xf64_2xi32_mask(__epi_2xf64 merge,
                                                      __epi_2xi32 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfwcvt_f_x_8xf32_8xi16_mask(__epi_8xf32 merge,
                                                      __epi_8xi16 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwcvt_f_x_4xf64_4xi32_mask(__epi_4xf64 merge,
                                                      __epi_4xi32 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfwcvt_f_x_16xf32_16xi16_mask(__epi_16xf32 merge,
                                                         __epi_16xi16 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwcvt_f_x_8xf64_8xi32_mask(__epi_8xf64 merge,
                                                      __epi_8xi32 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_32xf32 __builtin_epi_vfwcvt_f_x_32xf32_32xi16_mask(__epi_32xf32 merge,
                                                         __epi_32xi16 a,
                                                         __epi_32xi1 mask,
                                                         unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwcvt_f_x_16xf64_16xi32_mask(__epi_16xf64 merge,
                                                         __epi_16xi32 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = int_to_wide_fp(a[element])
   else
     result[element] = merge[element]
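
The `int_to_wide_fp` step can be sketched as a scalar C loop over one source/destination pair, `int16_t` to `float`. This is a model only, assuming ordinary C conversion semantics; the helper name is ours, not part of the EPI API. Because every 16-bit integer is exactly representable in a 32-bit float, this widening conversion is lossless.

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of a widening int16-to-float conversion (hypothetical
   helper, not an EPI builtin): each signed 16-bit integer element
   becomes a 32-bit float, a type twice the source element width. */
void vfwcvt_f_x_model(float *result, const int16_t *a,
                      unsigned long gvl) {
  for (unsigned long i = 0; i < gvl; ++i)
    result[i] = (float)a[i];
}
```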

2.13.10. Elementwise widening unsigned integer to floating-point conversion

Description

Use these builtins to convert elementwise an integer vector, where elements are interpreted as unsigned integers, to a vector of floating-point elements that are twice the width of the integer element.

Instruction
vfwcvt.f.xu.v
Prototypes
__epi_4xf32 __builtin_epi_vfwcvt_f_xu_4xf32_4xi16(__epi_4xi16 a,
                                                  unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfwcvt_f_xu_2xf64_2xi32(__epi_2xi32 a,
                                                  unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfwcvt_f_xu_8xf32_8xi16(__epi_8xi16 a,
                                                  unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwcvt_f_xu_4xf64_4xi32(__epi_4xi32 a,
                                                  unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfwcvt_f_xu_16xf32_16xi16(__epi_16xi16 a,
                                                     unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwcvt_f_xu_8xf64_8xi32(__epi_8xi32 a,
                                                  unsigned long int gvl);
__epi_32xf32 __builtin_epi_vfwcvt_f_xu_32xf32_32xi16(__epi_32xi16 a,
                                                     unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwcvt_f_xu_16xf64_16xi32(__epi_16xi32 a,
                                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = uint_to_wide_fp(a[element])
Masked prototypes
__epi_4xf32 __builtin_epi_vfwcvt_f_xu_4xf32_4xi16_mask(__epi_4xf32 merge,
                                                       __epi_4xi16 a,
                                                       __epi_4xi1 mask,
                                                       unsigned long int gvl);
__epi_2xf64 __builtin_epi_vfwcvt_f_xu_2xf64_2xi32_mask(__epi_2xf64 merge,
                                                       __epi_2xi32 a,
                                                       __epi_2xi1 mask,
                                                       unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfwcvt_f_xu_8xf32_8xi16_mask(__epi_8xf32 merge,
                                                       __epi_8xi16 a,
                                                       __epi_8xi1 mask,
                                                       unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwcvt_f_xu_4xf64_4xi32_mask(__epi_4xf64 merge,
                                                       __epi_4xi32 a,
                                                       __epi_4xi1 mask,
                                                       unsigned long int gvl);
__epi_16xf32
__builtin_epi_vfwcvt_f_xu_16xf32_16xi16_mask(__epi_16xf32 merge, __epi_16xi16 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwcvt_f_xu_8xf64_8xi32_mask(__epi_8xf64 merge,
                                                       __epi_8xi32 a,
                                                       __epi_8xi1 mask,
                                                       unsigned long int gvl);
__epi_32xf32
__builtin_epi_vfwcvt_f_xu_32xf32_32xi16_mask(__epi_32xf32 merge, __epi_32xi16 a,
                                             __epi_32xi1 mask,
                                             unsigned long int gvl);
__epi_16xf64
__builtin_epi_vfwcvt_f_xu_16xf64_16xi32_mask(__epi_16xf64 merge, __epi_16xi32 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = uint_to_wide_fp(a[element])
   else
     result[element] = merge[element]

2.13.11. Elementwise widening floating-point to integer conversion

Description

Use these builtins to convert elementwise a floating-point vector to a vector of integer elements that are twice the width of the floating-point elements.

Instruction
vfwcvt.x.f.v
Prototypes
__epi_2xi64 __builtin_epi_vfwcvt_x_f_2xi64_2xf32(__epi_2xf32 a,
                                                 unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfwcvt_x_f_4xi64_4xf32(__epi_4xf32 a,
                                                 unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfwcvt_x_f_8xi64_8xf32(__epi_8xf32 a,
                                                 unsigned long int gvl);
__epi_16xi64 __builtin_epi_vfwcvt_x_f_16xi64_16xf32(__epi_16xf32 a,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_wide_int(a[element])
Masked prototypes
__epi_2xi64 __builtin_epi_vfwcvt_x_f_2xi64_2xf32_mask(__epi_2xi64 merge,
                                                      __epi_2xf32 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfwcvt_x_f_4xi64_4xf32_mask(__epi_4xi64 merge,
                                                      __epi_4xf32 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfwcvt_x_f_8xi64_8xf32_mask(__epi_8xi64 merge,
                                                      __epi_8xf32 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_16xi64 __builtin_epi_vfwcvt_x_f_16xi64_16xf32_mask(__epi_16xi64 merge,
                                                         __epi_16xf32 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_wide_int(a[element])
   else
     result[element] = merge[element]

2.13.12. Elementwise widening floating-point to unsigned integer conversion

Description

Use these builtins to convert elementwise a floating-point vector to a vector of unsigned integer elements that are twice the width of the floating-point elements.

Instruction
vfwcvt.xu.f.v
Prototypes
__epi_2xi64 __builtin_epi_vfwcvt_xu_f_2xi64_2xf32(__epi_2xf32 a,
                                                  unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfwcvt_xu_f_4xi64_4xf32(__epi_4xf32 a,
                                                  unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfwcvt_xu_f_8xi64_8xf32(__epi_8xf32 a,
                                                  unsigned long int gvl);
__epi_16xi64 __builtin_epi_vfwcvt_xu_f_16xi64_16xf32(__epi_16xf32 a,
                                                     unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_wide_uint(a[element])
Masked prototypes
__epi_2xi64 __builtin_epi_vfwcvt_xu_f_2xi64_2xf32_mask(__epi_2xi64 merge,
                                                       __epi_2xf32 a,
                                                       __epi_2xi1 mask,
                                                       unsigned long int gvl);
__epi_4xi64 __builtin_epi_vfwcvt_xu_f_4xi64_4xf32_mask(__epi_4xi64 merge,
                                                       __epi_4xf32 a,
                                                       __epi_4xi1 mask,
                                                       unsigned long int gvl);
__epi_8xi64 __builtin_epi_vfwcvt_xu_f_8xi64_8xf32_mask(__epi_8xi64 merge,
                                                       __epi_8xf32 a,
                                                       __epi_8xi1 mask,
                                                       unsigned long int gvl);
__epi_16xi64
__builtin_epi_vfwcvt_xu_f_16xi64_16xf32_mask(__epi_16xi64 merge, __epi_16xf32 a,
                                             __epi_16xi1 mask,
                                             unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_wide_uint(a[element])
   else
     result[element] = merge[element]

2.14. Conversions between floating-point vectors

2.14.1. Elementwise narrowing floating-point conversion

Description

Use these builtins to convert elementwise a floating-point vector to another vector of floating-point elements that are half the width of the source elements.

Instruction
vfncvt.f.f.w
Prototypes
__epi_2xf32 __builtin_epi_vfncvt_f_f_2xf32_2xf64(__epi_2xf64 a,
                                                 unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfncvt_f_f_4xf32_4xf64(__epi_4xf64 a,
                                                 unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfncvt_f_f_8xf32_8xf64(__epi_8xf64 a,
                                                 unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfncvt_f_f_16xf32_16xf64(__epi_16xf64 a,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_narrow_fp(a[element])
Masked prototypes
__epi_2xf32 __builtin_epi_vfncvt_f_f_2xf32_2xf64_mask(__epi_2xf32 merge,
                                                      __epi_2xf64 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf32 __builtin_epi_vfncvt_f_f_4xf32_4xf64_mask(__epi_4xf32 merge,
                                                      __epi_4xf64 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_8xf32 __builtin_epi_vfncvt_f_f_8xf32_8xf64_mask(__epi_8xf32 merge,
                                                      __epi_8xf64 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_16xf32 __builtin_epi_vfncvt_f_f_16xf32_16xf64_mask(__epi_16xf32 merge,
                                                         __epi_16xf64 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_narrow_fp(a[element])
   else
     result[element] = merge[element]

2.14.2. Elementwise widening floating-point conversion

Description

Use these builtins to convert elementwise a floating-point vector to another vector of floating-point elements that are twice the width of the source elements.

Instruction
vfwcvt.f.f.v
Prototypes
__epi_2xf64 __builtin_epi_vfwcvt_f_f_2xf64_2xf32(__epi_2xf32 a,
                                                 unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwcvt_f_f_4xf64_4xf32(__epi_4xf32 a,
                                                 unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwcvt_f_f_8xf64_8xf32(__epi_8xf32 a,
                                                 unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwcvt_f_f_16xf64_16xf32(__epi_16xf32 a,
                                                    unsigned long int gvl);
Operation
for element = 0 to gvl - 1
   result[element] = fp_to_wide_fp(a[element])
Masked prototypes
__epi_2xf64 __builtin_epi_vfwcvt_f_f_2xf64_2xf32_mask(__epi_2xf64 merge,
                                                      __epi_2xf32 a,
                                                      __epi_2xi1 mask,
                                                      unsigned long int gvl);
__epi_4xf64 __builtin_epi_vfwcvt_f_f_4xf64_4xf32_mask(__epi_4xf64 merge,
                                                      __epi_4xf32 a,
                                                      __epi_4xi1 mask,
                                                      unsigned long int gvl);
__epi_8xf64 __builtin_epi_vfwcvt_f_f_8xf64_8xf32_mask(__epi_8xf64 merge,
                                                      __epi_8xf32 a,
                                                      __epi_8xi1 mask,
                                                      unsigned long int gvl);
__epi_16xf64 __builtin_epi_vfwcvt_f_f_16xf64_16xf32_mask(__epi_16xf64 merge,
                                                         __epi_16xf32 a,
                                                         __epi_16xi1 mask,
                                                         unsigned long int gvl);
Masked operation
for element = 0 to gvl - 1
   if mask[element] then
     result[element] = fp_to_wide_fp(a[element])
   else
     result[element] = merge[element]