CUDA implementation of Weakly-Compressible Smoothed Particle Hydrodynamics for Elasto-Plastic and Thermally Coupled Mechanics
Updated Jun 12, 2024 · C++
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
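The model described above launches a grid of lightweight threads, each handling one data element. A minimal, illustrative vector-add kernel (the canonical introductory CUDA example, not drawn from any repository listed here) might look like:

```cuda
// Minimal CUDA sketch: each GPU thread adds one pair of elements.
// Assumes a CUDA-capable GPU and the CUDA toolkit (nvcc) are available.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    // Compute this thread's global index across the grid.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];   // guard against out-of-range threads
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified (managed) memory is accessible from both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int block = 256;
    int grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(a, b, c, n);
    cudaDeviceSynchronize();         // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compile with `nvcc vecadd.cu -o vecadd`; this is a sketch of the programming model, not code from any of the projects above.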
CUDA C++ Core Libraries
Python Computer Vision & Video Analytics Framework With Batteries Included
An animal can do training and inference every day of its existence until the day of its death. A forward pass is all you need.
DaCe - Data Centric Parallel Programming
✨ Zero-code distributed tracing and profiling, observability via eBPF 🚀
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan
HIP: C++ Heterogeneous-Compute Interface for Portability
A fast, scalable, high-performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression, and other machine learning tasks in Python, R, Java, and C++. Supports computation on CPU and GPU.
TornadoVM: A practical and efficient heterogeneous programming framework for managed languages
(GPU accelerated) Multi-arch (linux/amd64, linux/arm64/v8) JupyterLab R docker images. Please submit Pull Requests to the GitLab repository. Mirror of
A high-throughput and memory-efficient inference and serving engine for LLMs
(GPU accelerated) Multi-arch (linux/amd64, linux/arm64/v8) JupyterLab Python docker images. Please submit Pull Requests to the GitLab repository. Mirror of
GPU-accelerated tree-search in Chapel
Created by Nvidia
Released June 23, 2007