CUDA C parallel implementation of the Merge operation.
Updated Jul 17, 2022 · Cuda
A C#-based download manager built with task-based programming: data parallelism and the Task Parallel Library for scheduling, controlling, and managing tasks.
Uses OpenMP to parallelize work over a large dataset, leveraging multicore processor architectures to execute code sections concurrently and improve performance and scalability for database processing.
Single-node data parallelism in Julia with CUDA
CUDA C parallel implementations of some well-known algorithms.
Data parallel and stream parallel skeletons implemented in erlang
A fully distributed hyperparameter optimization tool for PyTorch DNNs
Example of distributed PyTorch
Towards Rehearsal-based Continual Learning at Scale: distributed CL with Horovod + PyTorch
MapReduceSimulator for Scheduling and Provisioning Algorithms
Binary data classification using TensorFlow and Keras in Python, with data parallelism via MPI.
Scaling U-Net in TensorFlow
Official Repository for the paper: Distributing Deep Learning Hyperparameter Tuning for 3D Medical Image Segmentation
The Levenshtein edit-distance algorithm, in JavaScript, parallelised across workers [WIP]
Sequential and Parallel Implementation of the Hodgkin-Huxley Neuron model.
Distributing Deep Learning Hyperparameter Tuning for 3D Medical Image Segmentation
Torch Automatic Distributed Neural Network (TorchAD-NN) training library. Built on top of TorchMPI, this module automatically parallelizes neural network training.
Scaling U-Net in PyTorch
A complex ray-tracing algorithm optimized via parallelization over different partitioning schemes, exploring the performance gains from grain size and number of processing units (parameters) relative to the sequential algorithm when rendering a high-resolution image.
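As a minimal illustration of the data-parallelism theme these projects share (the same function applied independently to partitions of a dataset, here shown in Python with the standard library's `multiprocessing` pool; the function and worker count are illustrative, not taken from any repository above):

```python
from multiprocessing import Pool

def square(x):
    # The per-element work: applied independently to each data item,
    # which is what makes the computation data-parallel.
    return x * x

def parallel_map(data, workers=4):
    # Distribute the data across worker processes; each process applies
    # the same function to its own slice, and results are recombined
    # in the original order.
    with Pool(processes=workers) as pool:
        return pool.map(square, data)

if __name__ == "__main__":
    print(parallel_map(list(range(8))))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

The same partition-compute-recombine pattern underlies the CUDA, OpenMP, MPI, and worker-based projects listed here; only the execution substrate (GPU threads, OS threads, processes across nodes) differs.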