diff --git a/README.md b/README.md
index 0e38ddb..b726cae 100644
--- a/README.md
+++ b/README.md
@@ -3,12 +3,104 @@ CUDA Stream Compaction
 
 **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**
 
-* (TODO) YOUR NAME HERE
-  * (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
-* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Zach Corse
+  * LinkedIn: https://www.linkedin.com/in/wzcorse/
+  * Personal Website: https://wzcorse.com
+  * Twitter: @ZachCorse
+* Tested on: Windows 10, i7-6700HQ @ 2.60GHz 32GB, NVIDIA GeForce GTX 970M (personal computer)
 
-### (TODO: Your README)
+## README
 
-Include analysis, etc. (Remember, this is public, so don't put
-anything here that you don't want to share with the world.)
+Introduction
+------------
+In this project I implement the scan algorithm on the GPU and compare its performance with a straightforward CPU implementation. For those not familiar with scan: the algorithm accepts an input array `idata` of numbers and returns an array `odata` in which each element is the sum of all elements of `idata` that precede it (specifically, this is an exclusive scan, because the value stored at the current index of `idata` is not included in the sum).
+
+I also include a parallel implementation of the compact algorithm, which uses scan internally. Compact accepts an array of data, converts it into a boolean array according to a predicate (in this case, whether the value is nonzero), then "removes" the elements marked false, compacting the original array into one that preserves its remaining useful information.
+
+As shown below, my implementations pass the tests that check accuracy and measure timing for each algorithm. Speed comparisons are discussed in detail below.
+ +``` +**************** +** SCAN TESTS ** +**************** + [ 43 1 21 0 10 41 43 1 21 16 31 44 19 ... 6 0 ] +==== cpu scan, power-of-two ==== + elapsed time: 0.00079ms (std::chrono Measured) + [ 0 43 44 65 65 75 116 159 160 181 197 228 272 ... 6155 6161 ] +==== cpu scan, non-power-of-two ==== + elapsed time: 0.000395ms (std::chrono Measured) + [ 0 43 44 65 65 75 116 159 160 181 197 228 272 ... 6077 6104 ] + passed +==== naive scan, power-of-two ==== + elapsed time: 0.04864ms (CUDA Measured) + passed +==== naive scan, non-power-of-two ==== + elapsed time: 0.047904ms (CUDA Measured) + passed +==== work-efficient scan, power-of-two ==== + elapsed time: 0.20016ms (CUDA Measured) + passed +==== work-efficient scan, non-power-of-two ==== + elapsed time: 0.183488ms (CUDA Measured) + passed +==== thrust scan, power-of-two ==== + elapsed time: 0.022336ms (CUDA Measured) + passed +==== thrust scan, non-power-of-two ==== + elapsed time: 0.02464ms (CUDA Measured) + passed + +***************************** +** STREAM COMPACTION TESTS ** +***************************** + [ 3 1 3 0 2 3 1 3 3 0 1 2 3 ... 2 0 ] +==== cpu compact without scan, power-of-two ==== + elapsed time: 0.001185ms (std::chrono Measured) + [ 3 1 3 2 3 1 3 3 1 2 3 1 1 ... 2 2 ] + passed +==== cpu compact without scan, non-power-of-two ==== + elapsed time: 0.001185ms (std::chrono Measured) + [ 3 1 3 2 3 1 3 3 1 2 3 1 1 ... 1 3 ] + passed +==== cpu compact with scan ==== + elapsed time: 0.00158ms (std::chrono Measured) + [ 3 1 3 2 3 1 3 3 1 2 3 1 1 ... 2 2 ] + passed +==== work-efficient compact, power-of-two ==== + elapsed time: 0.237024ms (CUDA Measured) + passed +==== work-efficient compact, non-power-of-two ==== + elapsed time: 0.276832ms (CUDA Measured) + passed +``` + +Scan Performance Analysis +------------ + +Here I compare four different implementations of the scan algorithm. 
The first is on the CPU, and is straightforward:
+
+```
+void scan(int n, int *odata, const int *idata) {
+    timer().startCpuTimer();
+    odata[0] = 0;
+    for (int k = 1; k < n; ++k) {
+        odata[k] = idata[k - 1] + odata[k - 1];
+    }
+    timer().endCpuTimer();
+}
+```
+
+On the GPU, I compare three scan implementations. The first is "naive." In parallel, it loops over the array log(n) times, keeping track of an offset for each pass that scales as 2^(d-1), where d is the pass index. The "work-efficient" scan uses an algorithm developed in 1978 that first applies an "upsweep" to the array (equivalent to a parallel reduction), then follows with a "downsweep," which cleverly computes the scanned array. In theory, this should be faster than the naive implementation. Finally, I call the thrust library's scan algorithm as a gold standard against which to compare my implementations.
+
+![graph1](img/scanCompare.png)
+
+The results are counterintuitive but informative. The CPU implementation is actually faster for sufficiently small arrays; Thrust, however, outperforms the CPU for sufficiently large arrays. My naive implementation is slightly less efficient than thrust but scales similarly. My supposedly "work-efficient" implementation, however, is much less efficient than all three. I attribute this to several factors. First and foremost, my implementation does not use blocking and shared memory: each thread must read and write values in global memory, which is far slower than reading and writing to shared memory per block. Additionally, for each pass d of upsweep and downsweep I launch a thread for every index in the array. This is highly inefficient because the greater d is, the fewer the indices that actually participate in the up/downsweep, which sums and modifies indices in strides of 2^(d+1) (a binary tree structure). These indices are also distributed across multiple blocks, further indicating that executing scan recursively per block would lead to a significant speedup.
+
+Compact Performance Analysis
+------------
+
+As shown below, my CPU compact implementation outperforms my parallel compact implementation, which relies on my "work-efficient" scan (and is therefore consistently slower). The CPU variant that uses my CPU scan, rather than the more straightforward direct approach (labeled "CPU - no scan"), is in fact slower, most likely because the scan-based version must allocate two additional heap arrays to hold the intermediate boolean and scanned-boolean data.
+
+![graph2](img/compactCompare.png)
diff --git a/img/compactCompare.png b/img/compactCompare.png
new file mode 100644
index 0000000..8dabd74
Binary files /dev/null and b/img/compactCompare.png differ
diff --git a/img/scanCompare.png b/img/scanCompare.png
new file mode 100644
index 0000000..abb5d92
Binary files /dev/null and b/img/scanCompare.png differ
diff --git a/src/main.cpp b/src/main.cpp
index 1850161..ed182fe 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -13,12 +13,15 @@
 #include
 #include "testing_helpers.hpp"
 
-const int SIZE = 1 << 8; // feel free to change the size of array
+const int SIZE = 1 << 19; // feel free to change the size of array
 const int NPOT = SIZE - 3; // Non-Power-Of-Two
 int *a = new int[SIZE];
 int *b = new int[SIZE];
 int *c = new int[SIZE];
 
+int *cudaDataArrayA; // device array for read/write during scan
+int *cudaDataArrayB; // device array for read/write during scan
+
 int main(int argc, char* argv[]) {
     // Scan tests
@@ -31,6 +34,14 @@ int main(int argc, char* argv[]) {
     a[SIZE - 1] = 0;
     printArray(SIZE, a, true);
 
+    // initialize CUDA arrays
+    cudaMalloc((void**)&cudaDataArrayA, SIZE * sizeof(int));
+    checkCUDAErrorFn("cudaMalloc tempA failed!");
+    cudaMalloc((void**)&cudaDataArrayB, SIZE * sizeof(int));
+    checkCUDAErrorFn("cudaMalloc tempB failed!");
+
+    cudaDeviceSynchronize();
+
     // initialize b using StreamCompaction::CPU::scan you implement
     // We use b for further comparison. Make sure your StreamCompaction::CPU::scan is correct.
     // At first all cases passed because b && c are all zeroes.
@@ -47,9 +58,10 @@ int main(int argc, char* argv[]) {
     printArray(NPOT, b, true);
     printCmpResult(NPOT, b, c);
 
+    cudaMemcpy(cudaDataArrayA, a, SIZE * sizeof(int), cudaMemcpyHostToDevice);
     zeroArray(SIZE, c);
     printDesc("naive scan, power-of-two");
-    StreamCompaction::Naive::scan(SIZE, c, a);
+    StreamCompaction::Naive::scan(SIZE, c, a, cudaDataArrayA, cudaDataArrayB);
     printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
     //printArray(SIZE, c, true);
     printCmpResult(SIZE, b, c);
 
@@ -60,27 +72,30 @@ int main(int argc, char* argv[]) {
     StreamCompaction::Naive::scan(SIZE, c, a);
     printArray(SIZE, c, true);
     */
 
-    zeroArray(SIZE, c);
+    cudaMemcpy(cudaDataArrayA, a, SIZE * sizeof(int), cudaMemcpyHostToDevice);
+    zeroArray(SIZE, c);
     printDesc("naive scan, non-power-of-two");
-    StreamCompaction::Naive::scan(NPOT, c, a);
+    StreamCompaction::Naive::scan(NPOT, c, a, cudaDataArrayA, cudaDataArrayB);
     printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
     //printArray(SIZE, c, true);
     printCmpResult(NPOT, b, c);
 
+    cudaMemcpy(cudaDataArrayA, a, SIZE * sizeof(int), cudaMemcpyHostToDevice);
     zeroArray(SIZE, c);
     printDesc("work-efficient scan, power-of-two");
-    StreamCompaction::Efficient::scan(SIZE, c, a);
+    StreamCompaction::Efficient::scan(SIZE, SIZE, c, a, cudaDataArrayA);
     printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
     //printArray(SIZE, c, true);
     printCmpResult(SIZE, b, c);
 
+    cudaMemcpy(cudaDataArrayA, a, SIZE * sizeof(int), cudaMemcpyHostToDevice);
     zeroArray(SIZE, c);
     printDesc("work-efficient scan, non-power-of-two");
-    StreamCompaction::Efficient::scan(NPOT, c, a);
+    StreamCompaction::Efficient::scan(SIZE, NPOT, c, a, cudaDataArrayA);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); //printArray(NPOT, c, true); printCmpResult(NPOT, b, c); - + zeroArray(SIZE, c); printDesc("thrust scan, power-of-two"); StreamCompaction::Thrust::scan(SIZE, c, a); @@ -133,16 +148,20 @@ int main(int argc, char* argv[]) { printArray(count, c, true); printCmpLenResult(count, expectedCount, b, c); + cudaMemcpy(cudaDataArrayA, a, SIZE * sizeof(int), cudaMemcpyHostToDevice); + cudaMemcpy(cudaDataArrayB, a, SIZE * sizeof(int), cudaMemcpyHostToDevice); zeroArray(SIZE, c); printDesc("work-efficient compact, power-of-two"); - count = StreamCompaction::Efficient::compact(SIZE, c, a); + count = StreamCompaction::Efficient::compact(SIZE, SIZE, c, a, cudaDataArrayA, cudaDataArrayB); printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); //printArray(count, c, true); printCmpLenResult(count, expectedCount, b, c); + cudaMemcpy(cudaDataArrayA, a, SIZE * sizeof(int), cudaMemcpyHostToDevice); + cudaMemcpy(cudaDataArrayB, a, SIZE * sizeof(int), cudaMemcpyHostToDevice); zeroArray(SIZE, c); printDesc("work-efficient compact, non-power-of-two"); - count = StreamCompaction::Efficient::compact(NPOT, c, a); + count = StreamCompaction::Efficient::compact(SIZE, NPOT, c, a, cudaDataArrayA, cudaDataArrayB); printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); //printArray(count, c, true); printCmpLenResult(count, expectedNPOT, b, c); @@ -151,4 +170,7 @@ int main(int argc, char* argv[]) { delete[] a; delete[] b; delete[] c; + + cudaFree(cudaDataArrayA); + cudaFree(cudaDataArrayB); } diff --git a/stream_compaction/CMakeLists.txt b/stream_compaction/CMakeLists.txt index cdbef77..c8709e7 100644 --- a/stream_compaction/CMakeLists.txt +++ b/stream_compaction/CMakeLists.txt @@ -13,5 +13,5 @@ set(SOURCE_FILES cuda_add_library(stream_compaction ${SOURCE_FILES} - 
OPTIONS -arch=sm_20 + OPTIONS -arch=sm_50 ) diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu index 8fc0211..d601e60 100644 --- a/stream_compaction/common.cu +++ b/stream_compaction/common.cu @@ -14,7 +14,6 @@ void checkCUDAErrorFn(const char *msg, const char *file, int line) { exit(EXIT_FAILURE); } - namespace StreamCompaction { namespace Common { @@ -24,6 +23,7 @@ namespace StreamCompaction { */ __global__ void kernMapToBoolean(int n, int *bools, const int *idata) { // TODO + // written in efficient.cu } /** @@ -33,7 +33,7 @@ namespace StreamCompaction { __global__ void kernScatter(int n, int *odata, const int *idata, const int *bools, const int *indices) { // TODO + // written in efficient.cu } - } } diff --git a/stream_compaction/common.h b/stream_compaction/common.h index 99a1b04..24a3e19 100644 --- a/stream_compaction/common.h +++ b/stream_compaction/common.h @@ -13,6 +13,8 @@ #define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__) #define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__) +#define blocksize 512 + /** * Check for CUDA errors; print and exit if there was a problem. 
*/ diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu index 05ce667..d37864f 100644 --- a/stream_compaction/cpu.cu +++ b/stream_compaction/cpu.cu @@ -1,15 +1,15 @@ #include #include "cpu.h" -#include "common.h" +#include "common.h" namespace StreamCompaction { namespace CPU { - using StreamCompaction::Common::PerformanceTimer; - PerformanceTimer& timer() - { - static PerformanceTimer timer; - return timer; + using StreamCompaction::Common::PerformanceTimer; + PerformanceTimer& timer() + { + static PerformanceTimer timer; + return timer; } /** @@ -19,7 +19,10 @@ namespace StreamCompaction { */ void scan(int n, int *odata, const int *idata) { timer().startCpuTimer(); - // TODO + odata[0] = 0; + for (int k = 1; k < n; ++k) { + odata[k] = idata[k - 1] + odata[k - 1]; + } timer().endCpuTimer(); } @@ -31,8 +34,15 @@ namespace StreamCompaction { int compactWithoutScan(int n, int *odata, const int *idata) { timer().startCpuTimer(); // TODO + int counter = 0; + for (int k = 0; k < n; ++k) { + if (idata[k] != 0) { + odata[counter] = idata[k]; + counter++; + } + } timer().endCpuTimer(); - return -1; + return counter; } /** @@ -41,10 +51,39 @@ namespace StreamCompaction { * @returns the number of elements remaining after compaction. 
     */
    int compactWithScan(int n, int *odata, const int *idata) {
-        timer().startCpuTimer();
-        // TODO
-        timer().endCpuTimer();
-        return -1;
+
+        int *binaryArray = new int[n];
+        int *scanResult = new int[n];
+
+        timer().startCpuTimer();
+        // prepare temporary binary array
+        for (int k = 0; k < n; ++k) {
+            if (idata[k] != 0) {
+                binaryArray[k] = 1;
+            }
+            else {
+                binaryArray[k] = 0;
+            }
+        }
+        // scan
+        scanResult[0] = 0;
+        for (int k = 1; k < n; ++k) {
+            scanResult[k] = binaryArray[k - 1] + scanResult[k - 1];
+        }
+        // scatter
+        int counter = 0;
+        for (int k = 0; k < n; ++k) {
+            if (binaryArray[k] == 1) {
+                odata[scanResult[k]] = idata[k];
+                counter++;
+            }
+        }
+        timer().endCpuTimer();
+
+        delete[] binaryArray;
+        delete[] scanResult;
+
+        return counter;
     }
 }
 }
diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu
index 36c5ef2..0acb3cf 100644
--- a/stream_compaction/efficient.cu
+++ b/stream_compaction/efficient.cu
@@ -1,3 +1,5 @@
+#define GLM_FORCE_CUDA
+
 #include
 #include
 #include "common.h"
@@ -5,20 +7,60 @@ namespace StreamCompaction {
 namespace Efficient {
-    using StreamCompaction::Common::PerformanceTimer;
-    PerformanceTimer& timer()
-    {
-        static PerformanceTimer timer;
-        return timer;
+    using StreamCompaction::Common::PerformanceTimer;
+    PerformanceTimer& timer()
+    {
+        static PerformanceTimer timer;
+        return timer;
     }
 
+    __global__ void kernUpSweep(int n, int off1, int off2, int* a) {
+        int idx = threadIdx.x + (blockIdx.x * blockDim.x);
+        if (idx % off1 != 0 || idx >= n - 1) {
+            return;
+        }
+        a[idx + off1 - 1] += a[idx + off2 - 1];
+    }
+
+    __global__ void kernDownSweep(int n, int off1, int off2, int* a) {
+        int idx = threadIdx.x + (blockIdx.x * blockDim.x);
+        if (idx % off1 != 0 || idx >= n - 1) {
+            return;
+        }
+        int t = a[idx + off2 - 1];
+        a[idx + off2 - 1] = a[idx + off1 - 1];
+        a[idx + off1 - 1] += t;
+    }
+
     /**
      * Performs prefix-sum (aka scan) on idata, storing the result into odata.
     */
-    void scan(int n, int *odata, const int *idata) {
-        timer().startGpuTimer();
-        // TODO
-        timer().endGpuTimer();
+    void scan(int n, int npot, int *odata, const int *idata, int *cudaA) {
+
+        dim3 fullBlocksPerGrid((n + blocksize - 1) / blocksize);
+
+        // UPSWEEP
+        int dmax = ilog2ceil(n) - 1;
+
+        timer().startGpuTimer();
+
+        for (int d = 0; d <= dmax; ++d) {
+            int off1 = (int)pow(2, d + 1);
+            int off2 = (int)pow(2, d);
+            kernUpSweep<<<fullBlocksPerGrid, blocksize>>>(n, off1, off2, cudaA);
+        }
+        int temp[1] = { 0 };
+        cudaMemcpy(cudaA + (n - 1), temp, 1 * sizeof(int), cudaMemcpyHostToDevice);
+        // DOWNSWEEP
+        for (int d = dmax; d >= 0; d--) {
+            int off1 = (int)pow(2, d + 1);
+            int off2 = (int)pow(2, d);
+            kernDownSweep<<<fullBlocksPerGrid, blocksize>>>(n, off1, off2, cudaA);
+        }
+
+        timer().endGpuTimer();
+
+        cudaMemcpy(odata, cudaA, npot * sizeof(int), cudaMemcpyDeviceToHost);
     }
 
     /**
@@ -30,11 +72,79 @@ namespace StreamCompaction {
      * @param idata The array of elements to compact.
      * @returns The number of elements remaining after compaction.
      */
-    int compact(int n, int *odata, const int *idata) {
+    __global__ void kernConvertToBinary(int n, int *devBinary) {
+        int idx = threadIdx.x + (blockIdx.x * blockDim.x);
+        if (idx >= n) {
+            return;
+        }
+        if (devBinary[idx] != 0) {
+            devBinary[idx] = 1;
+        }
+        else {
+            devBinary[idx] = 0;
+        }
+    }
+
+    __global__ void kernScatter(int n, int *devBinary, int *devBinaryCopy, int *devCopy, int *devResult) {
+        int idx = threadIdx.x + (blockIdx.x * blockDim.x);
+        if (idx >= n) {
+            return;
+        }
+        if (devBinary[idx] == 1) {
+            devResult[devBinaryCopy[idx]] = devCopy[idx];
+        }
+    }
+
+    int compact(int n, int npot, int *odata, const int *idata, int *devCopy, int *devBinary) {
+
+        dim3 fullBlocksPerGrid((n + blocksize - 1) / blocksize);
+
+        int *devBinaryCopy;
+        int *devResult;
+
+        cudaMalloc((void**)&devBinaryCopy, n * sizeof(int));
+        checkCUDAErrorFn("cudaMalloc devBinaryCopy");
+        cudaMalloc((void**)&devResult, n * sizeof(int));
+        checkCUDAErrorFn("cudaMalloc devResult");
+
         timer().startGpuTimer();
-        // TODO
-        timer().endGpuTimer();
-        return -1;
+
+        // preparing binary array
+        kernConvertToBinary<<<fullBlocksPerGrid, blocksize>>>(n, devBinary);
+        cudaMemcpy(devBinaryCopy, devBinary, n * sizeof(int), cudaMemcpyDeviceToDevice);
+
+        // running scan
+        // UPSWEEP
+        int dmax = ilog2ceil(n) - 1;
+
+        for (int d = 0; d <= dmax; ++d) {
+            int off1 = (int)pow(2, d + 1);
+            int off2 = (int)pow(2, d);
+            kernUpSweep<<<fullBlocksPerGrid, blocksize>>>(n, off1, off2, devBinaryCopy);
+        }
+        int temp[1] = { 0 };
+        cudaMemcpy(devBinaryCopy + (n - 1), temp, 1 * sizeof(int), cudaMemcpyHostToDevice);
+        // DOWNSWEEP
+        for (int d = dmax; d >= 0; d--) {
+            int off1 = (int)pow(2, d + 1);
+            int off2 = (int)pow(2, d);
+            kernDownSweep<<<fullBlocksPerGrid, blocksize>>>(n, off1, off2, devBinaryCopy);
+        }
+
+        // populating compact return array
+        kernScatter<<<fullBlocksPerGrid, blocksize>>>(n, devBinary, devBinaryCopy, devCopy, devResult);
+
+        timer().endGpuTimer();
+
+        int tempResult[1] = { 0 };
+        int offset = npot == n ? n - 1 : npot;
+        cudaMemcpy(&tempResult, devBinaryCopy + offset, 1 * sizeof(int), cudaMemcpyDeviceToHost);
+        cudaMemcpy(odata, devResult, npot * sizeof(int), cudaMemcpyDeviceToHost);
+
+        cudaFree(devBinaryCopy);
+        cudaFree(devResult);
+
+        return tempResult[0];
     }
 }
 }
diff --git a/stream_compaction/efficient.h b/stream_compaction/efficient.h
index 803cb4f..fc5a4a6 100644
--- a/stream_compaction/efficient.h
+++ b/stream_compaction/efficient.h
@@ -6,8 +6,8 @@ namespace StreamCompaction {
 namespace Efficient {
     StreamCompaction::Common::PerformanceTimer& timer();
 
-    void scan(int n, int *odata, const int *idata);
+    void scan(int n, int npot, int *odata, const int *idata, int *cudaA);
 
-    int compact(int n, int *odata, const int *idata);
+    int compact(int n, int npot, int *odata, const int *idata, int *devCopy, int *devBinary);
 }
 }
diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu
index 9218f8e..b78ef5b 100644
--- a/stream_compaction/naive.cu
+++ b/stream_compaction/naive.cu
@@ -1,3 +1,5 @@
+#define GLM_FORCE_CUDA
+
 #include
 #include
 #include "common.h"
@@ -5,21 +7,53 @@ namespace StreamCompaction {
 namespace Naive {
-    using StreamCompaction::Common::PerformanceTimer;
-    PerformanceTimer& timer()
-    {
-        static PerformanceTimer timer;
-        return timer;
+    using StreamCompaction::Common::PerformanceTimer;
+    PerformanceTimer& timer()
+    {
+        static PerformanceTimer timer;
+        return timer;
     }
 
-    // TODO: __global__
-
+    // TODO:
     /**
      * Performs prefix-sum (aka scan) on idata, storing the result into odata.
-     */
-    void scan(int n, int *odata, const int *idata) {
-        timer().startGpuTimer();
-        // TODO
-        timer().endGpuTimer();
+     */
+
+    __global__ void kernAdvanceScan(int n, int offset, int* a, int* b) {
+        int idx = threadIdx.x + (blockIdx.x * blockDim.x);
+        if (idx >= n) {
+            return;
+        }
+
+        if (idx >= offset) {
+            b[idx] = a[idx - offset] + a[idx];
+        }
+        else {
+            b[idx] = a[idx];
+        }
+    }
+
+    void scan(int n, int *odata, const int *idata, int *cudaA, int *cudaB) {
+
+        dim3 fullBlocksPerGrid((n + blocksize - 1) / blocksize);
+
+        int kmax = ilog2ceil(n);
+
+        timer().startGpuTimer();
+
+        for (int k = 1; k <= kmax; ++k) {
+            // invoke kernel
+            int offset = (int)pow(2, k - 1);
+            kernAdvanceScan<<<fullBlocksPerGrid, blocksize>>>(n - 1, offset, cudaA, cudaB);
+            // pointer swap
+            int *temp = cudaA;
+            cudaA = cudaB;
+            cudaB = temp;
+        }
+
+        timer().endGpuTimer();
+
+        cudaMemcpy(odata + 1, cudaA, (n - 1) * sizeof(int), cudaMemcpyDeviceToHost);
+        odata[0] = 0;
     }
 }
 }
diff --git a/stream_compaction/naive.h b/stream_compaction/naive.h
index 37dcb06..53b75cc 100644
--- a/stream_compaction/naive.h
+++ b/stream_compaction/naive.h
@@ -5,7 +5,6 @@ namespace StreamCompaction {
 namespace Naive {
     StreamCompaction::Common::PerformanceTimer& timer();
-
-    void scan(int n, int *odata, const int *idata);
+    void scan(int n, int *odata, const int *idata, int *cudaA, int *cudaB);
 }
 }
diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu
index 36b732d..3b9a7e2 100644
--- a/stream_compaction/thrust.cu
+++ b/stream_compaction/thrust.cu
@@ -8,21 +8,24 @@ namespace StreamCompaction {
 namespace Thrust {
-    using StreamCompaction::Common::PerformanceTimer;
-    PerformanceTimer& timer()
-    {
-        static PerformanceTimer timer;
-        return timer;
+    using StreamCompaction::Common::PerformanceTimer;
+    PerformanceTimer& timer()
+    {
+        static PerformanceTimer timer;
+        return timer;
     }
 
     /**
      * Performs prefix-sum (aka scan) on idata, storing the result into odata.
      */
     void scan(int n, int *odata, const int *idata) {
-        timer().startGpuTimer();
         // TODO use `thrust::exclusive_scan`
         // example: for device_vectors dv_in and dv_out:
-        // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
-        timer().endGpuTimer();
+        thrust::device_vector<int> dv_in(idata, idata + n);
+        thrust::device_vector<int> dv_out = thrust::device_vector<int>(n, 0);
+        timer().startGpuTimer();
+        thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
+        timer().endGpuTimer();
+        thrust::copy(dv_out.begin(), dv_out.end(), odata);
     }
 }
 }