
The search service can find packages by name (apache), by provides (webserver), by absolute file name (/usr/bin/apache), by binary (gprof), or by shared library (libXm.so.2) in a standard path. It does not support multiple arguments yet...

System and Arch are optional filters: System could be "redhat", "redhat-7.2", "mandrake" or "gnome", and Arch could be "i386" or "src", etc., depending on your system.


RPM resource llama-cpp

The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook.

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked in an evening. Since then, the project has improved significantly thanks to many contributions. This project is mainly for educational purposes and serves as the main playground for developing new features for the ggml library.

Found 1 site for llama-cpp

Found 2 RPMs for llama-cpp

Package                               Summary                                  Distribution                Download
llama-cpp-b2619-1.fc41.aarch64.html   Port of Facebook's LLaMA model in C/C++  Fedora Rawhide for aarch64  llama-cpp-b2619-1.fc41.aarch64.rpm
llama-cpp-b2619-1.fc41.x86_64.html    Port of Facebook's LLaMA model in C/C++  Fedora Rawhide for x86_64   llama-cpp-b2619-1.fc41.x86_64.rpm

Generated by rpm2html 1.6

Fabrice Bellet