Private LLM - Local AI Chat 12+

Numen Technologies Limited

    • 69,00 kr

Screenshots

Description

Meet Private LLM: Your Secure, Offline AI Assistant for macOS

Private LLM brings advanced AI capabilities directly to your iPhone, iPad, and Mac—all while keeping your data private and offline. With a one-time purchase and no subscriptions, you get a personal AI assistant that works entirely on your device.

Key Features:

- Local AI Functionality: Interact with a sophisticated AI chatbot without needing an internet connection. Your conversations stay on your device, ensuring complete privacy.

- Wide Range of AI Models: Choose from a variety of open-source LLMs such as Llama 3.2, Llama 3.1, Google Gemma 2, Microsoft Phi-3, Mistral 7B, and StableLM 3B. Each model is optimized for iOS and macOS hardware using advanced OmniQuant quantization, which offers superior performance compared to traditional round-to-nearest (RTN) quantization.

- Siri and Shortcuts Integration: Create AI-driven workflows without writing code. Use Siri commands and Apple Shortcuts to enhance productivity in tasks like text parsing and generation (a hedged sketch of how such a Shortcuts action could be exposed appears after this feature list).

- No Subscriptions or Logins: Enjoy full access with a single purchase. No need for subscriptions, accounts, or API keys. Plus, with Family Sharing, up to six family members can use the app.

- AI Language Services on macOS: Utilize AI-powered tools for grammar correction, summarization, and more across various macOS applications in multiple languages.

- Superior Performance with OmniQuant: Benefit from the advanced OmniQuant quantization process, which preserves the model's weight distribution for faster and more accurate responses, outperforming apps that use standard quantization techniques.

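For developers curious how a local LLM app can be wired into Siri and Shortcuts, the sketch below shows the general App Intents pattern in Swift. It is only an illustration under assumed names: the intent, its "Prompt" parameter, and the LocalModel stub are hypothetical and are not Private LLM's actual API.

import AppIntents

// Hypothetical stand-in for an on-device inference engine.
struct LocalModel {
    static let shared = LocalModel()
    func complete(prompt: String) async throws -> String {
        // A real app would run the local LLM here and return its reply.
        return "(model output for: \(prompt))"
    }
}

// A hypothetical App Intent exposing local text generation to Shortcuts
// and Siri, so users can build AI workflows without writing code.
struct GenerateTextIntent: AppIntent {
    static var title: LocalizedStringResource = "Generate Text with Local LLM"
    static var description = IntentDescription(
        "Runs a prompt through an on-device language model and returns the response."
    )

    @Parameter(title: "Prompt")
    var prompt: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        let reply = try await LocalModel.shared.complete(prompt: prompt)
        return .result(value: reply)
    }
}

In a Shortcuts workflow, an intent like this shows up as an action whose text output can be chained into later steps, for example replacing selected text or saving the result to a note.
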
Supported Model Families:
- DeepSeek R1 Distill based models
- Phi-4 14B model
- Llama 3.3 70B based models
- Llama 3.2 based models
- Llama 3.1 based models
- Llama 3.0 based models
- Google Gemma 2 based models
- Qwen 2.5 based models (0.5B to 32B)
- Qwen 2.5 Coder based models (0.5B to 32B)
- Google Gemma 3 1B based models
- Solar 10.7B based models
- Yi 34B based models

For a full list of supported models, including detailed specifications, please visit privatellm.app/models.

Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps like Enchanted, Ollama, LLM Farm, LM Studio, RecurseChat, etc., on three fronts:
1. Private LLM uses a significantly faster mlc-llm based inference engine.
2. All models in Private LLM are quantized using the state-of-the-art OmniQuant quantization algorithm, while competing apps use naive round-to-nearest quantization (the sketch after this list illustrates the difference).
3. Private LLM is a fully native app built using C++, Metal, and Swift, while many of the competing apps are bloated, non-native Electron JS based apps.
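
To make the quantization comparison concrete, here is a small, self-contained Swift sketch. It contrasts naive 4-bit round-to-nearest (RTN) quantization, which stretches its 16 levels across the full weight range including outliers, with a simple grid search over clipped ranges. The grid search is a deliberately simplified stand-in for the clipping and scaling parameters that OmniQuant learns with gradients; the toy data and 4-bit setting are assumptions for illustration only, not the pipeline Private LLM ships.

// Affine 4-bit quantization of `weights` onto the 16-level grid spanning [lo, hi],
// returning the dequantized values so reconstruction error can be measured.
func quantize4bit(_ weights: [Float], lo: Float, hi: Float) -> [Float] {
    let maxLevel: Float = 15                       // 4 bits: levels 0...15
    let scale = max(hi - lo, 1e-8) / maxLevel      // step size between adjacent levels
    return weights.map { w in
        let clamped = min(max(w, lo), hi)                // clip into the chosen range
        let level = ((clamped - lo) / scale).rounded()   // round to the nearest level
        return level * scale + lo                        // dequantize back to a float
    }
}

// Mean squared reconstruction error between original and quantized weights.
func mse(_ a: [Float], _ b: [Float]) -> Float {
    var total: Float = 0
    for (x, y) in zip(a, b) { total += (x - y) * (x - y) }
    return total / Float(a.count)
}

// Toy "weight tensor": mostly small values plus one large outlier, the case
// where round-to-nearest wastes most of its 16 levels on an empty range.
var weights = (0..<1024).map { _ in Float.random(in: -0.1...0.1) }
weights[0] = 3.0

// 1) Naive RTN: quantize over the full min...max range.
let fullLo = weights.min()!, fullHi = weights.max()!
let rtn = quantize4bit(weights, lo: fullLo, hi: fullHi)

// 2) Clipped quantization: try progressively narrower ranges and keep the
//    lowest reconstruction error, loosely mimicking what learned clipping
//    optimizes per layer.
var bestError = Float.greatestFiniteMagnitude
for step in 2...20 {                               // shrink factors 0.10, 0.15, ..., 1.00
    let shrink = Float(step) / 20
    let clipped = quantize4bit(weights, lo: fullLo * shrink, hi: fullHi * shrink)
    bestError = min(bestError, mse(weights, clipped))
}

print("RTN reconstruction MSE:          \(mse(weights, rtn))")
print("Best clipped reconstruction MSE: \(bestError)")

Because the full range is one of the candidates, the clipped search can only match or improve on RTN here; OmniQuant's actual contribution is learning such clipping and equivalent scaling parameters efficiently, per layer, rather than searching a fixed grid.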

Please note that Private LLM only supports inference with text-based LLMs.

Private LLM has been specifically optimized for Apple Silicon Macs. Private LLM for macOS delivers the best performance on Macs with an Apple M1 chip or newer. Users on older Intel Macs without eGPUs may experience reduced performance. Please note that although the app nominally works on Intel Macs, we have stopped adding support for new models on Intel Macs due to performance issues associated with Intel hardware.

What's New

Version 1.9.11

- Support for two Qwen3 4B Instruct 2507 based models: Qwen3 4B Instruct 2507 abliterated and Josiefied Qwen3 4B Instruct 2507 (on Apple Silicon Macs with 16GB or more RAM)
- Fix for a rare crash in the Settings panel on some Macs.
- Minor bug fixes and updates

Ratings and Reviews

4.2 out of 5
5 Ratings


Mickemelu,

Ok

Testing it today, it works just fine

OfficialFilip,

Unusable! Lags behind!

The developer is very proud that this isn’t a Llama wrapper, but I don’t know if that’s something to be proud of, considering that Private LLM isn’t even close to being up to date with the AI and LLM world.

Some examples below:
1 - Still, 6 months after their release, the Gemma 3 models are only available in the smallest, barely usable 1B variant. Where is the 12B Gemma 3 model?

2 - No support for multimodal LLMs (that can also take images and other media types as input).

3 - The "Convert to bullet points with Private LLM" command is not available in the Services menu outside of a text editor, for example when you have selected non-editable text (like the full text of a long article in the Safari web browser). It would be great to have this command available there, so it could automatically open the app and have Private LLM summarize the selection inside the Private LLM app on macOS.

4 - No support for the GPT-oss-20b model. Seriously, what’s the point of an LLM app that can’t even run the models from the most prominent AI company? Considering the Gemma 3 models still aren’t incorporated, I would expect the GPT-oss-20b to be incorporated in 2028 at the earliest - when they will already be obsolete.

This would all be acceptable if this were a free app, but since you actually have to spend your hard-earned money on it, the development pace is unacceptable. The developer seems to be more focused on responding to negative reviews than on actually developing this app at a reasonable pace.

Developer Response,

Thank you for your continued feedback across multiple app updates. We appreciate that you've taken the time to update your review with each release. It shows you're actively following our development. You've been claiming that our app is useless for a long time now, and yet, based on your review updates, you seem to be a very active user. That's a very interesting dissonance!


We'd love to discuss your specific use cases in more detail. Please email us at support@numen.ie so we can better understand your needs and provide personalized assistance. Email allows for a more productive dialogue than review updates. For context for other potential users reading this: Private LLM is a one-time purchase with no subscriptions, and we've been regularly shipping updates with new models and features at no additional cost for over two years now. We remain committed to bringing high-quality local AI to iOS and macOS, and we appreciate all constructive feedback.

OfficialFilip,

Gemma 3 is missing on iPhone.

Hopefully support for Gemma 3 can be added soon. Gemma 3 1B is the best model for its size, so not having this revolutionary LLM in the iPhone app is unfortunate.

I would also like a way to create new chats (so the LLM starts from scratch), a history view of previous chats, and functionality for submitting images to multimodal LLMs.

Developer Response,

Support for Gemma 3 1B QAT was added in the v1.9.7 update.

App Privacy

The developer, Numen Technologies Limited, has indicated that the app's privacy practices may include handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, the features you use or your age. Learn More

Support

  • Family Sharing

    Up to six family members can use this app with Family Sharing enabled.

More By This Developer

You Might Also Like

MLC Chat
Productivity
Pal Chat - AI Chat Client
Productivity
PocketPal AI
Productivity
Local Brain
Productivity
YourChat
Productivity
Enclave - Local AI Assistant
Productivity