diff -Nur arrayfire-full-3.1.2.orig/docs/layout.xml arrayfire-full-3.1.2/docs/layout.xml
--- arrayfire-full-3.1.2.orig/docs/layout.xml 2015-09-25 19:16:18.000000000 -0300
+++ arrayfire-full-3.1.2/docs/layout.xml 2015-11-02 17:01:08.145683665 -0300
@@ -4,8 +4,6 @@
-
-
diff -Nur arrayfire-full-3.1.2.orig/docs/pages/README.md arrayfire-full-3.1.2/docs/pages/README.md
--- arrayfire-full-3.1.2.orig/docs/pages/README.md 2015-09-25 19:16:18.000000000 -0300
+++ arrayfire-full-3.1.2/docs/pages/README.md 2015-11-02 15:56:01.952662051 -0300
@@ -9,10 +9,8 @@
## Installing ArrayFire
-You can install ArrayFire using either a binary installer for Windows, OSX,
-or Linux or download it from source:
+You can install ArrayFire from the Parabola repositories or build it from source:
-* [Binary installers for Windows, OSX, and Linux](\ref installing)
* [Build from source](https://github.com/arrayfire/arrayfire)
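+
+For example, on Parabola the library might be installed with pacman (a
+minimal sketch; the exact package name may vary):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+pacman -S arrayfire
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~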
## Easy to use
@@ -24,7 +22,7 @@
parallel programming to use ArrayFire.
A few lines of ArrayFire code
-accomplishes what can take 100s of complicated lines in CUDA or OpenCL
+accomplishes what can take 100s of complicated lines in OpenCL
kernels.
## ArrayFire is extensive!
@@ -56,25 +54,23 @@
#### Extending ArrayFire
ArrayFire can be used as a stand-alone application or integrated with
-existing CUDA or OpenCL code. All ArrayFire `arrays` can be
-interchanged with other CUDA or OpenCL data structures.
+existing OpenCL code. All ArrayFire `arrays` can be
+interchanged with other OpenCL data structures.
## Code once, run anywhere!
-With support for x86, ARM, CUDA, and OpenCL devices, ArrayFire supports for a comprehensive list of devices.
+With support for x86, ARM, and OpenCL devices, ArrayFire supports a comprehensive list of devices.
Each ArrayFire installation comes with:
- - a CUDA version (named 'libafcuda') for [NVIDIA
- GPUs](https://developer.nvidia.com/cuda-gpus),
- an OpenCL version (named 'libafopencl') for [OpenCL devices](http://www.khronos.org/conformance/adopters/conformant-products#opencl)
- - a CPU version (named 'libafcpu') to fall back to when CUDA or OpenCL devices are not available.
+ - a CPU version (named 'libafcpu') to fall back to when OpenCL devices are not available.
## ArrayFire is highly efficient
#### Vectorized and Batched Operations
ArrayFire supports batched operations on N-dimensional arrays.
-Batch operations in ArrayFire are run in parallel ensuring an optimal usage of your CUDA or OpenCL device.
+Batch operations in ArrayFire run in parallel, ensuring optimal usage of your OpenCL device.
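+
+For example, here is a minimal sketch of a batched matrix multiply using a
+gfor loop (the array sizes are illustrative):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.cpp}
+af::array A = af::randu(3, 3, 10);       // a batch of ten 3x3 matrices
+af::array B = af::randu(3, 3);           // a single 3x3 matrix
+af::array C = af::constant(0, 3, 3, 10); // room for the ten products
+gfor (af::seq i, 10)                     // all ten multiplies run in parallel
+    C(af::span, af::span, i) = af::matmul(A(af::span, af::span, i), B);
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~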
You can get the best performance out of ArrayFire using [vectorization techniques]().
@@ -93,7 +89,7 @@
## Simple Example
Here's a live example to let you see ArrayFire code. You create [arrays](\ref
-construct_mat) which reside on CUDA or OpenCL devices. Then you can use
+construct_mat) which reside on OpenCL devices. Then you can use
[ArrayFire functions](modules.htm) on those [arrays](\ref construct_mat).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.cpp}
@@ -144,7 +140,7 @@
BibTeX:
@misc{Yalamanchili2015,
- abstract = {ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (CUDA, OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.},
+ abstract = {ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.},
address = {Atlanta},
author = {Yalamanchili, Pavan and Arshad, Umar and Mohammed, Zakiuddin and Garigipati, Pradeep and Entschev, Peter and Kloppenborg, Brian and Malcolm, James and Melonakos, John},
publisher = {AccelerEyes},
diff -Nur arrayfire-full-3.1.2.orig/docs/pages/configuring_arrayfire_environment.md arrayfire-full-3.1.2/docs/pages/configuring_arrayfire_environment.md
--- arrayfire-full-3.1.2.orig/docs/pages/configuring_arrayfire_environment.md 2015-09-25 19:16:18.000000000 -0300
+++ arrayfire-full-3.1.2/docs/pages/configuring_arrayfire_environment.md 2015-11-02 12:12:06.817016693 -0300
@@ -18,19 +18,6 @@
present in this directory. You can use this variable to add include paths and
libraries to your projects.
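+
+For example, here is a minimal sketch of a hand-written compile line (this
+assumes the variable described above is `AF_PATH`; file and backend names
+are illustrative):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+g++ myprogram.cpp -I${AF_PATH}/include -L${AF_PATH}/lib -lafopencl
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~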
-AF_CUDA_DEFAULT_DEVICE {#af_cuda_default_device}
--------------------------------------------------------------------------------
-
-Use this variable to set the default CUDA device. Valid values for this
-variable are the device identifiers shown when af::info is run.
-
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-AF_CUDA_DEFAULT_DEVICE=1 ./myprogram_cuda
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Note: af::setDevice call in the source code will take precedence over this
-variable.
-
AF_OPENCL_DEFAULT_DEVICE {#af_opencl_default_device}
-------------------------------------------------------------------------------
diff -Nur arrayfire-full-3.1.2.orig/docs/pages/getting_started.md arrayfire-full-3.1.2/docs/pages/getting_started.md
--- arrayfire-full-3.1.2.orig/docs/pages/getting_started.md 2015-09-25 19:16:18.000000000 -0300
+++ arrayfire-full-3.1.2/docs/pages/getting_started.md 2015-11-02 17:00:05.388864755 -0300
@@ -39,7 +39,6 @@
\snippet test/getting_started.cpp ex_getting_started_init
ArrayFire also supports array initialization from a device pointer.
-For example ArrayFire can be populated directly by a call to `cudaMemcpy`
\snippet test/getting_started.cpp ex_getting_started_dev_ptr
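+
+As a minimal sketch (not part of the original snippet), an array can wrap
+device memory allocated through `af::alloc`:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.cpp}
+float *d_ptr = af::alloc<float>(10); // device-side allocation
+af::array A(10, d_ptr, afDevice);    // A takes ownership of d_ptr
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~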
@@ -67,7 +66,7 @@
-This means that function like `c[i] = a[i] + b[i]` could simply be written
+This means that functions like `c[i] = a[i] + b[i]` could simply be written
as `c = a + b`.
-ArrayFire has an intelligent runtime JIT compliation engine which converts
-array expressions into the smallest number of OpenCL/CUDA kernels.
+ArrayFire has an intelligent runtime JIT compilation engine which converts
+array expressions into the smallest number of OpenCL kernels.
This "kernel fusion" technology not only decreases the number of kernel calls,
but, more importantly, avoids extraneous global memory operations.
Our JIT functionality extends across C/C++ function boundaries and only ends
@@ -98,7 +97,7 @@
# Indexing {#getting_started_indexing}
Like all functions in ArrayFire, indexing is also executed in parallel on
-the OpenCL/CUDA device.
+the OpenCL device.
Because of this, indexing becomes part of a JIT operation and is accomplished
using parentheses instead of square brackets (i.e. as `A(0)` instead of `A[0]`).
To index `af::array`s you may use one or a combination of the following functions:
@@ -121,8 +120,8 @@
The `host` function *copies* the data from the device and makes it available
in a C-style array on the host.
The `device` function returns a pointer to device memory for interoperability
-with external CUDA/OpenCL kernels.
-For example, here is how we can interact with both OpenCL and CUDA:
+with external OpenCL kernels.
+For example, here is how we can interact with OpenCL:
\snippet test/getting_started.cpp ex_getting_started_ptr
@@ -192,8 +191,7 @@
Now that you have a general introduction to ArrayFire, where do you go from
here? In particular you might find these documents useful
-* [Building an ArrayFire program on Linux](\ref using_on_linux)
-* [Building an Arrayfire program on Windows](\ref using_on_windows)
+* [Building an ArrayFire program on GNU/Linux](\ref using_on_linux)
* [Timing ArrayFire code](\ref timing)
diff -Nur arrayfire-full-3.1.2.orig/docs/pages/release_notes.md arrayfire-full-3.1.2/docs/pages/release_notes.md
--- arrayfire-full-3.1.2.orig/docs/pages/release_notes.md 2015-09-25 19:16:18.000000000 -0300
+++ arrayfire-full-3.1.2/docs/pages/release_notes.md 2015-11-02 13:01:04.186402090 -0300
@@ -31,7 +31,6 @@
Installers
-----------
-* CUDA backend now depends on CUDA 7.5 toolkit
-* OpenCL backend now require OpenCL 1.2 or greater
+* OpenCL backend now requires OpenCL 1.2 or greater
Bug Fixes
@@ -111,10 +110,6 @@
* \ref saveArray() and \ref readArray() - Stream arrays to binary files
* \ref toString() - toString function returns the array and data as a string
-* CUDA specific functionality
- * \ref getStream() - Returns default CUDA stream ArrayFire uses for the current device
- * \ref getNativeId() - Returns native id of the CUDA device
-
Improvements
------------
* dot
@@ -138,11 +133,6 @@
* CPU Backend
* Device properties for CPU
* Improved performance when all buffers are indexed linearly
-* CUDA Backend
- * Use streams in CUDA (no longer using default stream)
- * Using async cudaMem ops
- * Add 64-bit integer support for JIT functions
- * Performance improvements for CUDA JIT for non-linear 3D and 4D arrays
* OpenCL Backend
* Improve compilation times for OpenCL backend
* Performance improvements for non-linear JIT kernels on OpenCL
@@ -176,7 +166,7 @@
Installer
----------
* Fixed bug in automatic detection of ArrayFire when using with CMake in Windows
-* The Linux libraries are now compiled with static version of FreeImage
+The GNU/Linux libraries are now compiled with a static version of FreeImage
Known Issues
------------
diff -Nur arrayfire-full-3.1.2.orig/docs/pages/using_on_linux.md arrayfire-full-3.1.2/docs/pages/using_on_linux.md
--- arrayfire-full-3.1.2.orig/docs/pages/using_on_linux.md 2015-09-25 19:16:18.000000000 -0300
+++ arrayfire-full-3.1.2/docs/pages/using_on_linux.md 2015-11-02 16:42:20.209742489 -0300
@@ -1,32 +1,21 @@
-Using ArrayFire on Linux {#using_on_linux}
+Using ArrayFire on GNU/Linux {#using_on_linux}
=====
-Among the many possible build systems on Linux we suggest using ArrayFire with
+Among the many possible build systems on GNU/Linux we suggest using ArrayFire with
either CMake or Makefiles with CMake being the preferred build system.
## Pre-requisites
Before you get started, make sure you have the necessary pre-requisites.
-- If you are using CUDA, please make sure you have [CUDA 7](https://developer.nvidia.com/cuda-downloads) installed on your system.
- - [Contact us](support@arrayfire.com) for custom builds (eg. different toolkits)
-
- If you are using OpenCL, please make sure you have one of the following SDKs.
- [AMD OpenCL SDK](http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/)
- [Intel OpenCL SDK](https://software.intel.com/en-us/articles/download-the-latest-intel-amt-software-development-kit-sdk)
- - [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
You will also need the following dependencies to use ArrayFire.
-#### Fedora, Centos and Redhat
-
-Install EPEL repo (not required for Fedora)
-
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-yum install epel-release
-yum update
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+#### BLAG Linux and GNU
Install the common dependencies
@@ -37,15 +26,11 @@
Install glfw (not required for no-gl installers)
-Fedora:
-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
yum install glfw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-For Centos and Redhat, please follow [these instructions](https://github.com/arrayfire/arrayfire/wiki/GLFW-for-ArrayFire)
-
-#### Debian and Ubuntu
+#### GnewSense and Trisquel
Install common dependencies
@@ -60,8 +45,6 @@
apt-get install libglfw3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-For Debian 7 and Ubuntu 14.04, please follow [these instructions](https://github.com/arrayfire/arrayfire/wiki/GLFW-for-ArrayFire)
-
**Special instructions for Tegra-K1**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -70,13 +53,12 @@
## CMake
-This is the suggested method of using ArrayFire on Linux.
+This is the suggested method of using ArrayFire on GNU/Linux.
ArrayFire ships with support for CMake by default, including a series of
`Find` scripts installed in the `/usr/local/share/ArrayFire/cmake` (or similar)
directory.
-These scripts will automatically find the CUDA, OpenCL, and CPU versions
-of ArrayFire and automatically choose the most powerful installed backend
-(typically CUDA).
+These scripts will automatically find the OpenCL and CPU versions
+of ArrayFire and choose the most powerful installed backend.
To use ArrayFire, simply insert the `FIND_PACKAGE` command inside of your
`CMakeLists.txt` file as follows:
@@ -99,14 +81,12 @@
ArrayFire_CPU_FOUND - True of the ArrayFire CPU library has been found.
ArrayFire_CPU_LIBRARIES - Location of ArrayFire's CPU library, if found
- ArrayFire_CUDA_FOUND - True of the ArrayFire CUDA library has been found.
- ArrayFire_CUDA_LIBRARIES - Location of ArrayFire's CUDA library, if found
ArrayFire_OpenCL_FOUND - True of the ArrayFire OpenCL library has been found.
ArrayFire_OpenCL_LIBRARIES - Location of ArrayFire's OpenCL library, if found
-Therefore, if you wish to target a specific specific backend, switch
-`${ArrayFire_LIBRARIES}` to `${ArrayFire_CPU}` `${ArrayFire_OPENCL}` or
-`${ArrayFire_CUDA}` in the `TARGET_LINK_LIBRARIES` command above.
+Therefore, if you wish to target a specific backend, switch
+`${ArrayFire_LIBRARIES}` to `${ArrayFire_CPU_LIBRARIES}` or
+`${ArrayFire_OpenCL_LIBRARIES}` in the `TARGET_LINK_LIBRARIES` command above.
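+
+For example, here is a minimal sketch that pins a project to the OpenCL
+backend (target and file names are illustrative):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+FIND_PACKAGE(ArrayFire REQUIRED)
+ADD_EXECUTABLE(myprogram main.cpp)
+TARGET_LINK_LIBRARIES(myprogram ${ArrayFire_OpenCL_LIBRARIES})
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~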
Finally, if you have installed ArrayFire to a non-standard location, CMake can still help
you out. When you execute CMake specify the path to the `ArrayFireConfig*` files that
@@ -127,8 +107,8 @@
instructions.
Then, in your linker line specify the path to ArrayFire using the `-L` option
(typically `-L/usr/lib` or `-L/usr/local/lib` and the specific ArrayFire backend
-you wish to use with the `-l` option (i.e. `-lafcpu`, `-lafopencl` or `-lafcuda`
-for the CPU, OpenCL and CUDA backends repsectively).
+you wish to use with the `-l` option (i.e. `-lafcpu` or `-lafopencl` for the
+CPU and OpenCL backends respectively).
-Here is a minimial example MakeFile which uses ArrayFire's CPU backend:
+Here is a minimal example Makefile which uses ArrayFire's CPU backend: