Private LLM - Local AI Chatbot 17+

Secure Private AI Chatbot

Numen Technologies Limited

    • $12.99

Description

Discover Private LLM, your secure, private AI assistant for iPhone, iPad, and macOS. Designed to boost your productivity and creativity while protecting your privacy, Private LLM is a one-time purchase that delivers a full range of AI capabilities with no subscriptions. The chatbot runs cutting-edge AI entirely on your device, keeping your interactions confidential and fully offline across all your Apple devices.

Why Choose Private LLM?

- Diverse AI Models at Your Command: Select from a wide array of open-source LLM families such as Google Gemma 2B, Mixtral 8x7B, Mistral 7B, Llama 2 (7B and 13B), Llama 33B, CodeLlama 13B, Solar 10.7B, and Phi 2 3B. Customize your AI experience for creative brainstorming, coding assistance, or daily inquiries.

- Seamless Siri & Shortcuts Integration: Enhance your AI interactions with Siri commands and customizable Shortcuts, making your digital assistant more integrated and accessible within your Apple ecosystem (see the illustrative sketch after this list).

- Customizable System Prompts: Fine-tune your AI's responses and interactions to your liking with customizable system prompts, catering to your specific needs and preferences.

- Complete Privacy and Security: Private LLM confines your conversations to your device, providing unparalleled privacy. Our on-device AI enables powerful computing without data compromise or the need for an internet connection.

- Family Sharing & Offline Functionality: A one-time purchase that supports Family Sharing. Download models as needed and enjoy full AI assistant functionality, even offline.

- System-Wide Grammar Correction and Summarization: Get grammar correction, summarization, text shortening, and rephrasing within any app on your Mac.
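
For developers curious how this kind of Siri and Shortcuts integration is typically built, here is a minimal, purely illustrative sketch using Apple's App Intents framework. The intent name, parameter, and placeholder reply are hypothetical assumptions and are not Private LLM's actual code.

    import AppIntents

    // Hypothetical example of how an app can expose a Siri/Shortcuts action.
    // Names and behavior are illustrative only, not Private LLM's implementation.
    struct AskLocalModelIntent: AppIntent {
        static var title: LocalizedStringResource = "Ask Local Model"
        static var description = IntentDescription("Sends a prompt to an on-device model and returns its reply.")

        @Parameter(title: "Prompt")
        var prompt: String

        func perform() async throws -> some IntentResult & ReturnsValue<String> {
            // A real app would run on-device inference here; this placeholder just echoes the prompt.
            return .result(value: "Reply to: \(prompt)")
        }
    }

Once an app declares an intent like this, the action appears in the Shortcuts app and can be invoked by name through Siri or chained with other Shortcuts actions.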

Private LLM is more than a chatbot; it's an all-encompassing AI companion that respects your privacy while offering versatile, on-demand assistance. Whether for creative writing, solving complex programming issues, or general inquiries, Private LLM adapts to your needs, ensuring your data remains secure and on your device. Embark on a journey with Private LLM today and elevate your productivity and creative projects with the most private AI assistant for macOS and iOS devices.

Built on state-of-the-art OmniQuant quantized models, Private LLM is a native Mac app that delivers better text generation, faster performance, and deeper system integration than apps such as Ollama and LMStudio, which rely on generic round-to-nearest (RTN) quantized models.

Supported Model Families:

- Google Gemma Based Models
- Mixtral 8x7B Based Models
- Llama 33B Based Models
- Llama 2 13B Based Models
- CodeLlama 13B Based Models
- Llama 2 7B Based Models
- Solar 10.7B Based Models
- Phi 2 3B Based Models
- Mistral 7B Based Models
- StableLM 3B Based Models
- Yi 6B Based Models
- Yi 34B Based Models

For a full list of supported models, including detailed specifications, please visit our website.

Optimized for Apple Silicon Macs with the Apple M1 chip or later, Private LLM for macOS delivers the best performance. Users on older Intel Macs without eGPUs may experience reduced performance.

What’s New

Version 1.8.6

- Support for downloading a 4-bit OmniQuant quantized version of the Meta-Llama-3-70B-Instruct model on Apple Silicon Macs with 48GB or more RAM.
- Support for downloading a 4-bit OmniQuant quantized version of the new Phi-3-Mini based kappa-3-phi-abliterated model on all Macs.
- Stability improvements and bug fixes.

Ratings and Reviews

4.5 out of 5
2 Ratings

dtlnx ,

Great app.

Would love to be able to download different models. If not from an in-app catalogue, perhaps by putting in a URL from Hugging Face?

Otherwise pretty neat.

shadows_lord ,

Why Phi-3 only 4-bit!!!

This app looks ok, but they are ruining the experience by only allowing very low quants of the model. I got this app just to use Llama 3 at 4-bit quant and Phi-3 mini at 8-bit quant (which an iPad Pro M2 can easily handle). Please add these quants or allow a way to add ours. Happy to update the review if this is done.

If you’re worried about the app crashing on older hardware you can add a warning to those models.

Developer Response ,

Thanks for the review! We've got a task on our roadmap to allow users to download 4-bit quantized versions of Llama 3 8B on 16GB M1/M2 iPads (and possibly next iPhone 16 Pro/Pro Max when they come out). All Llama 3 8B models on macOS will be 4-bit OmniQuant quantized (update releasing this week). While I agree that 4-bit quants are better than 3-bit quants in perplexity, 8-bit quants are unnecessary with OmniQuant. Your priors seem to be from the llama.cpp/Ollama/LMStudio world where they use RTN (round to nearest) quants and Q4 RTN quantized models aren't great. We invest a lot of GPU time and human effort into quantizing models with OmniQuant and 3-bit quantized OmniQuant models are comparable in perplexity to Q4 RTN quantized models and 4-bit quantized OmniQuant models are comparable to Q8 RTN quantized models. I encourage you to read the OmniQuant paper if you're interested in the details.
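
For readers unfamiliar with the terms in this response: RTN (round-to-nearest) quantization simply snaps each weight to the nearest level on a uniform grid, while methods like OmniQuant calibrate the quantization parameters to reduce the resulting error. The sketch below shows only plain RTN quantization to make the baseline concrete; the function name and example values are illustrative assumptions, not Private LLM's or OmniQuant's code.

    import Foundation

    // Illustrative round-to-nearest (RTN) quantization of a weight vector.
    // Symmetric, per-tensor grid; real systems quantize per-group or per-channel.
    func rtnQuantize(_ weights: [Float], bits: Int) -> [Float] {
        let qMax = Float((1 << (bits - 1)) - 1)        // 7 for 4-bit, 127 for 8-bit
        let maxAbs = weights.map { abs($0) }.max() ?? 1
        let scale = maxAbs / qMax
        return weights.map { w in
            let level = (w / scale).rounded()          // round to the nearest grid level
            return min(qMax, max(-qMax - 1, level)) * scale
        }
    }

    let weights: [Float] = [0.12, -0.07, 0.31, -0.29, 0.02]
    print(rtnQuantize(weights, bits: 4))   // coarse grid, larger rounding error
    print(rtnQuantize(weights, bits: 8))   // finer grid, smaller rounding error

Comparing the two outputs shows why naive 4-bit RTN loses noticeably more precision than 8-bit RTN; calibrated methods like OmniQuant aim to close that gap at the lower bit-width.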

Kuutana ,

Reasonable Expectations

The developer has given easy access to a usable information source, disconnected from the tentacles of big tech. This is to be applauded. But expectations are important here. You can't compare a solo effort with that of an organization with billion-dollar funding. Just contextualize this with pre-ChatGPT tech and you may find this a tool to have (in situations where, yes, you are disconnected from the internet or you don't want big tech operators to hoover your brain for free). Docs on if or how to do supervised instruction of the model, or how to tweak it, would be welcome. Read the vendor warnings. These are known industry limitations of this type of offering.

App Privacy

The developer, Numen Technologies Limited, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary, for example, based on the features you use or your age.

Supports

  • Family Sharing

    Up to six family members can use this app with Family Sharing enabled.

You Might Also Like

- YourChat (Productivity)
- MLC Chat (Productivity)
- Local Brain (Productivity)
- Vanessa LLM (Productivity)
- Patagonia AI - Private LLM (Productivity)
- LLM: Answering Machine (Productivity)