Private LLM - Local AI Chat 12+

Local Offline Private AI Chat

Numen Technologies Limited

    • USD 4.99

Description

Meet Private LLM: Your Secure, Offline AI Assistant for macOS

Private LLM brings advanced AI capabilities directly to your iPhone, iPad, and Mac—all while keeping your data private and offline. With a one-time purchase and no subscriptions, you get a personal AI assistant that works entirely on your device.

Key Features:

- Local AI Functionality: Interact with a sophisticated AI chatbot without needing an internet connection. Your conversations stay on your device, ensuring complete privacy.

- Wide Range of AI Models: Choose from various open-source LLMs such as Llama 3.2, Llama 3.1, Google Gemma 2, Microsoft Phi-3, Mistral 7B, and StableLM 3B. Each model is optimized for iOS and macOS hardware using advanced OmniQuant quantization, which offers superior performance compared to traditional RTN quantization methods.

- Siri and Shortcuts Integration: Create AI-driven workflows without writing code. Use Siri commands and Apple Shortcuts to enhance productivity in tasks like text parsing and generation (a small scripting example follows this feature list).

- No Subscriptions or Logins: Enjoy full access with a single purchase. No need for subscriptions, accounts, or API keys. Plus, with Family Sharing, up to six family members can use the app.

- AI Language Services on macOS: Utilize AI-powered tools for grammar correction, summarization, and more across various macOS applications in multiple languages.

- Superior Performance with OmniQuant: Benefit from the advanced OmniQuant quantization process, which preserves the model's weight distribution for faster and more accurate responses, outperforming apps that use standard quantization techniques.
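
The Shortcuts integration above requires no code, but a shortcut built around the app's actions can also be triggered from your own scripts. Below is a minimal, hypothetical sketch in Swift: the shortcut name "Summarize with Private LLM" and the file paths are assumptions (you would first build such a shortcut visually in the Shortcuts app), and it relies on the standard macOS shortcuts command-line tool to run it.

    import Foundation

    // Hypothetical example: run a user-created shortcut through the macOS
    // `shortcuts` CLI. The shortcut name and file paths are placeholders;
    // the shortcut itself is assembled visually in the Shortcuts app.
    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/bin/shortcuts")
    task.arguments = [
        "run", "Summarize with Private LLM",   // name of your own shortcut
        "--input-path", "/tmp/article.txt",    // text handed to the shortcut
        "--output-path", "/tmp/summary.txt"    // where the result is written
    ]
    do {
        try task.run()
        task.waitUntilExit()
        print("Shortcut exited with status \(task.terminationStatus)")
    } catch {
        print("Failed to launch the shortcuts CLI: \(error)")
    }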

Supported Model Families:
- DeepSeek R1 Distill based models
- Phi-4 14B model
- Llama 3.3 70B based models
- Llama 3.2 based models
- Llama 3.1 based models
- Llama 3.0 based models
- Google Gemma 2 based models
- Qwen 2.5 based models (0.5B to 32B)
- Qwen 2.5 Coder based models (0.5B to 32B)
- Google Gemma 3 1B based models
- Solar 10.7B based models
- Yi 34B based models

For a full list of supported models, including detailed specifications, please visit privatellm.app/models.

Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps like Enchanted, Ollama, LLM Farm, LM Studio, RecurseChat, etc., on three fronts:
1. Private LLM uses a significantly faster mlc-llm based inference engine.
2. All models in Private LLM are quantized using the state-of-the-art OmniQuant quantization algorithm, while competing apps use naive round-to-nearest quantization (a brief illustration follows this list).
3. Private LLM is a fully native app built using C++, Metal, and Swift, while many of the competing apps are bloated, non-native Electron-based apps.
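
To make point 2 concrete, here is a minimal Swift sketch of the naive 4-bit round-to-nearest (RTN) scheme the comparison refers to: each weight is simply snapped to the nearest of 16 evenly spaced levels between the minimum and maximum weight. This is for illustration only; OmniQuant's actual algorithm additionally learns clipping and scaling parameters, which this sketch does not attempt to reproduce.

    import Foundation

    // Naive 4-bit round-to-nearest (RTN) quantization of a weight vector.
    func rtnQuantize4bit(_ weights: [Float]) -> (codes: [UInt8], scale: Float, zeroPoint: Float) {
        let minW = weights.min() ?? 0
        let maxW = weights.max() ?? 0
        let levels: Float = 15                        // 16 levels -> codes 0...15
        let scale = max((maxW - minW) / levels, 1e-8)
        let codes = weights.map { w -> UInt8 in
            let q = ((w - minW) / scale).rounded()    // snap to the nearest level
            return UInt8(min(max(q, 0), levels))
        }
        return (codes, scale, minW)
    }

    // Reconstruct approximate weights from the 4-bit codes.
    func dequantize(_ codes: [UInt8], scale: Float, zeroPoint: Float) -> [Float] {
        codes.map { Float($0) * scale + zeroPoint }
    }

    let w: [Float] = [-0.82, -0.11, 0.02, 0.47, 1.30]
    let (codes, scale, zero) = rtnQuantize4bit(w)
    print(dequantize(codes, scale: scale, zeroPoint: zero))  // close to, but not equal to, w

Because RTN commits to whatever grid the raw minimum and maximum define, a few outlier weights can stretch the grid and waste precision on the rest; reducing that kind of error is exactly what a learned quantizer such as OmniQuant is designed for.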

Please note that Private LLM only supports inference with text-based LLMs.

Private LLM has been specifically optimized for Apple Silicon Macs and delivers the best performance on Macs equipped with Apple M1 or newer chips. Users on older Intel Macs without eGPUs may experience reduced performance. Please note that although the app nominally works on Intel Macs, we've stopped adding support for new models on Intel Macs due to performance issues associated with Intel hardware.

What’s New

Version 1.9.9

- Added support for 3-bit and 4-bit OmniQuant quantized versions of the Perplexity r1-1776-distill-llama-70b model
- Added support for a 4-bit OmniQuant quantized version of the Llama-3.1-8B-UltraMedical model
- Added support for a 4-bit OmniQuant quantized version of the Meta-Llama-3.1-8B-SurviveV3 survival specialist model
- Added support for 4-bit GPTQ quantized versions of the OpenHands 7B and 32B coding models
- Added support for the 4-bit QAT version of the Google Gemma 3 1B IT model
- Added support for 4-bit OmniQuant quantized versions of the Google Gemma 3 1B based gemma-3-1b-it-abliterated and amoral-gemma3-1B-v2 models
- Other minor bug fixes and updates

App Privacy

The developer, Numen Technologies Limited, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, the features you use or your age.

Supports

  • Family Sharing

    Up to six family members can use this app with Family Sharing enabled.

You Might Also Like

  • Local Brain (Productivity)
  • Hugging Chat (Productivity)
  • YourChat (Productivity)
  • PocketGPT: Private AI (Productivity)
  • Pal Chat - AI Chat Client (Productivity)
  • PocketPal AI (Productivity)