LM-Kit.NET
2024.9.2
See the version list below for details.
dotnet add package LM-Kit.NET --version 2024.9.2
NuGet\Install-Package LM-Kit.NET -Version 2024.9.2
<PackageReference Include="LM-Kit.NET" Version="2024.9.2" />
paket add LM-Kit.NET --version 2024.9.2
#r "nuget: LM-Kit.NET, 2024.9.2"
// Install LM-Kit.NET as a Cake Addin
#addin nuget:?package=LM-Kit.NET&version=2024.9.2

// Install LM-Kit.NET as a Cake Tool
#tool nuget:?package=LM-Kit.NET&version=2024.9.2
Enterprise-Grade .NET SDK for Integrating Generative AI Capabilities.
With LM-Kit.NET, integrating or building AI is no longer complex.
LM-Kit.NET is a cutting-edge, cross-platform SDK that offers a wide range of advanced Generative AI capabilities.
It enables seamless orchestration of multiple AI models through a single API, tailored to meet specific business needs.
The SDK offers innovative AI capabilities across a wide range of domains, including text completion, chat assistance, content retrieval, text analysis, translation, and more...
Wide range of capabilities
LM-Kit.NET offers a suite of highly optimized low-level APIs designed to facilitate the development of fully customized Large Language Model (LLM) inference pipelines.
Additionally, LM-Kit.NET provides an extensive array of high-level AI functionalities spanning multiple domains, including:
- 📝 Text Generation: Create coherent and contextually relevant text automatically.
- 📋 Structured Output Generation: Extract structured information based on a predefined schema.
- ✅ Text Quality Evaluation: Assess the quality metrics of generated text content.
- 🔗 Function Calling: Enable the dynamic invocation of specific functions within your own application.
- 🌐 Language Detection: Identify the language of text input with high accuracy.
- 🔄 Text Translation: Convert text between multiple languages seamlessly.
- ✍️ Text Correction: Correct grammar and spelling in text of any length.
- 🔄 Text Rewriting: Rewrite text using a specific communication style.
- 💻 Code Analysis: Perform various programming code processing tasks.
- 🛠️ Model Fine-Tuning: Customize pre-trained models to better suit specific needs.
- ⚙️ Model Quantization: Optimize models for efficient inference.
- 🔍 Retrieval-Augmented Generation (RAG): Enhance text generation with information retrieved from a large corpus.
- 🔢 Text Embeddings: Transform text into numerical representations that capture semantic meanings.
- ❓ Question Answering: Provide answers to queries, supporting both single-turn and multi-turn interactions.
- 🏷️ Custom Text Classification: Categorize text into predefined classes according to content.
- 😊 Sentiment Analysis: Detect and interpret the emotional tone from text.
- 😄 Emotion Detection: Identify specific emotions expressed in text.
- 😏 Sarcasm Detection: Detect instances of sarcasm in written text.
- 🚀 And More: Explore additional features that extend the capabilities of your applications.
These ever-expanding capabilities ensure seamless integration of advanced AI solutions, tailored to meet diverse needs through a single Software Development Kit (SDK).
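As an illustration of the high-level API surface, the snippet below sketches a minimal multi-turn chat session in C#. It follows the patterns shown in LM-Kit's published samples, but the exact type names (`LLM`, `MultiTurnConversation`, `TextGenerationResult`) and the model path are assumptions to verify against the current LM-Kit.NET documentation.

```csharp
using System;
using LMKit.Model;
using LMKit.TextGeneration;

// Load a local GGUF model; the path below is a placeholder.
LLM model = new LLM(@"C:\models\my-model.Q4_K_M.gguf");

// Start a multi-turn chat session backed by the loaded model.
MultiTurnConversation chat = new MultiTurnConversation(model)
{
    SystemPrompt = "You are a helpful assistant."
};

// Submit a prompt and print the generated completion.
TextGenerationResult result = chat.Submit("Summarize the benefits of on-device inference.");
Console.WriteLine(result.Completion);
```

Because the conversation object keeps the chat history, subsequent `Submit` calls continue the same dialogue without any extra bookkeeping on the caller's side.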
Run local LLMs on any device
The LM-Kit.NET model inference system is built to deliver high performance across a wide variety of hardware with minimal setup and no external dependencies. LM-Kit.NET runs inference entirely on-device (also known as edge computing), giving users full control and precise tuning of the inference process. It also supports an ever-growing list of model architectures, including Llama 2, Llama 3, Mistral, Falcon, Phi, and others.
Highest degree of performance
1. 🚀 Optimized for various GPUs and CPUs
LM-Kit.NET is expertly engineered to maximize the capabilities of a wide range of hardware configurations, ensuring top-tier performance across all platforms. This multi-platform optimization allows LM-Kit.NET to specifically leverage the unique hardware strengths of each device. For instance, it automatically uses CUDA on NVIDIA GPUs to boost computation speeds significantly, Metal on Apple devices to enhance both graphics and processing tasks, and Vulkan to efficiently harness the power of multiple GPUs, including those from AMD, Intel, and NVIDIA, across diverse environments.
2. ⚙️ State-of-the-art architectural foundations
The core system of LM-Kit.NET has undergone rigorous optimization to handle a wide array of scenarios efficiently.
Its advanced internal caching and recycling mechanisms are designed to maintain high performance levels consistently, even under varied operational conditions.
Whether your application is running a single instance or multiple concurrent instances, LM-Kit.NET's sophisticated core system orchestrates all requests smoothly, delivering rapid performance while minimizing resource consumption.
3. 🌟 Unrivaled performance
Experience model inference speeds up to 5x faster with LM-Kit.NET, thanks to its cutting-edge underlying technologies that are continuously refined and benchmarked to ensure you stay ahead of the curve.
Be an early adopter of the latest and future Generative AI innovations
LM-Kit.NET is crafted by industry experts employing a strategy of continuous innovation.
It is designed to rapidly address emerging market needs and introduce new capabilities to modernize existing applications.
Leveraging state-of-the-art AI technologies, LM-Kit.NET offers a modern, user-friendly, and intuitive API suite, making advanced AI accessible for any type of application.
Maintain full control over your data
Maintaining full control over your data is crucial for both privacy and security.
By using LM-Kit.NET, which performs model inference directly on-device, you ensure that your sensitive data remains within your controlled environment and does not traverse external networks.
Here are some key benefits of this approach:
1. 🔒 Enhanced Privacy
Since all data processing is done locally on your device, there is no need to send data to a remote server.
This drastically reduces the risk of exposure or leakage of sensitive information, keeping your data confidential.
2. 🛡️ Increased Security
With zero external requests, the risk of intercepting data during transmission is completely eliminated.
This closed system approach minimizes vulnerabilities that are often exploited in data breaches, offering a more secure solution.
3. ⚡ Faster Response Times
Processing data locally reduces the latency typically associated with sending data to a remote server and waiting for a response.
This results in quicker model inferences, leading to faster decision-making and improved user experience.
4. 📉 Reduced Bandwidth Usage
By avoiding the need to transfer large volumes of data over the internet, LM-Kit.NET minimizes bandwidth consumption.
This is particularly beneficial in environments with limited or costly data connectivity.
5. ✅ Full Compliance with Data Regulations
Local processing helps in complying with strict data protection regulations, such as GDPR or HIPAA, which often require certain types of data to be stored and processed within specific geographical boundaries or environments.
By leveraging LM-Kit.NET's on-device processing capabilities, organizations can achieve higher levels of data autonomy and protection, while still benefiting from advanced computational models and real-time analytics.
Seamless integration and simple deployment
LM-Kit.NET offers an exceptionally streamlined deployment model, being packaged as a single NuGet package for all supported platforms.
Integrating LM-Kit.NET into any .NET application is a straightforward process, typically requiring just a few clicks.
LM-Kit.NET combines C# and C++ code, carefully crafted without external dependencies to fit its functionality precisely.
1. 🔧 Simplified Integration
LM-Kit.NET requires no external containers or complex deployment procedures, making the integration process exceptionally straightforward.
This approach significantly reduces development time and lowers the learning curve, enabling a broader range of developers to effectively deploy and leverage the technology.
2. 🚀 Streamlined Deployment
LM-Kit.NET is designed for efficiency and simplicity. By default, it runs directly within the same application process that calls it, avoiding the complexities and resource demands typically associated with containerized systems.
This direct integration accelerates performance and simplifies the incorporation into existing applications by removing the common hurdles associated with container use.
3. ⚙️ Efficient Resource Management
Operating in-process, LM-Kit.NET minimizes its impact on system resources, making it ideal for devices with limited capacity or situations where maximizing computing efficiency is essential.
4. 🌟 Enhanced Reliability
By avoiding reliance on external services or containers, LM-Kit.NET offers more stable and predictable performance.
This reliability is vital for applications that demand consistent, rapid data processing without external dependencies.
Supported Operating Systems
LM-Kit.NET is designed for full compatibility with a wide range of operating systems, ensuring smooth and reliable performance on all supported platforms:
- 🪟 Windows: Compatible with versions from Windows 7 through the latest release.
- 🍏 macOS: Supports macOS 11 and all subsequent versions.
- 🐧 Linux: Functions optimally on distributions with glibc version 2.27 or newer.
Supported .NET Frameworks
LM-Kit.NET is compatible with a wide range of .NET frameworks, from .NET Framework 4.6.2 through .NET 9.
To maximize performance through specific optimizations, separate binaries are provided for each supported framework version.
Hugging Face Integration
The LM-Kit section on Hugging Face provides state-of-the-art quantized models that have been rigorously tested with the LM-Kit SDK. Moreover, LM-Kit enables you to seamlessly load models directly from Hugging Face repositories via the Hugging Face API, simplifying the integration and deployment of the latest models into your applications.
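For example, a model can be referenced by its Hugging Face download URI and loaded directly, with LM-Kit handling the download and local caching. This is a hedged sketch: the repository URI below is a placeholder, and the `LLM(Uri)` constructor usage should be checked against the LM-Kit.NET API reference.

```csharp
using System;
using LMKit.Model;

// Placeholder URI: point it at a GGUF file hosted on Hugging Face,
// e.g. one of the quantized models published in the LM-Kit section.
Uri modelUri = new Uri("https://huggingface.co/lm-kit/some-model-gguf/resolve/main/model.gguf");

// The model is downloaded and cached locally on first use.
LLM model = new LLM(modelUri);
```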
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 is compatible. net5.0-windows was computed. net6.0 is compatible. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 is compatible. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net461 was computed. net462 was computed. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen40 was computed. tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies

.NETCoreApp 2.1
- No dependencies.

.NETCoreApp 3.1
- No dependencies.

.NETStandard 2.0
- Microsoft.Bcl.AsyncInterfaces (>= 8.0.0)
- System.Buffers (>= 4.5.1)
- System.Linq.Async (>= 6.0.1)
- System.Memory (>= 4.5.5)
- System.Numerics.Vectors (>= 4.5.0)
- System.Runtime.CompilerServices.Unsafe (>= 6.0.0)
- System.Text.Encodings.Web (>= 8.0.0)
- System.Text.Json (>= 8.0.4)
- System.Threading.Tasks.Extensions (>= 4.5.4)
- System.ValueTuple (>= 4.3.0)

net5.0
- No dependencies.

net6.0
- No dependencies.

net7.0
- No dependencies.

net8.0
- No dependencies.
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
2024.11.3 | 95 | 11/12/2024 |
2024.11.2 | 103 | 11/5/2024 |
2024.11.1 | 107 | 11/4/2024 |
2024.10.5 | 160 | 10/24/2024 |
2024.10.4 | 193 | 10/17/2024 |
2024.10.3 | 117 | 10/16/2024 |
2024.10.2 | 152 | 10/9/2024 |
2024.10.1 | 159 | 10/1/2024 |
2024.9.4 | 134 | 9/25/2024 |
2024.9.3 | 170 | 9/18/2024 |
2024.9.2 | 155 | 9/11/2024 |
2024.9.1 | 155 | 9/6/2024 |
2024.9.0 | 136 | 9/3/2024 |
2024.8.4 | 153 | 8/26/2024 |
2024.8.3 | 179 | 8/21/2024 |
2024.8.2 | 139 | 8/20/2024 |
2024.8.1 | 152 | 8/15/2024 |
2024.8.0 | 117 | 8/11/2024 |
2024.7.10 | 111 | 8/6/2024 |
2024.7.9 | 86 | 7/31/2024 |
2024.7.8 | 73 | 7/30/2024 |
2024.7.7 | 87 | 7/29/2024 |
2024.7.6 | 93 | 7/27/2024 |
2024.7.5 | 119 | 7/26/2024 |
- **Added `Seed` property to the `RandomSampling` class**
- **Added `Seed` property to the `MirostatSampling` class**
- **Added `Seed` property to the `Mirostat2Sampling` class**
- **Added `TrimAuto` member to the `InputLengthOverflowPolicy` enumeration**
- **`ChatHistory` objects can now be deserialized without specifying a `Model` parameter**
- **Improved inference speed on CPU**
- **Enhanced internal API for better error handling**