Discover the Ultimate Privacy-Focused AI Assistant on iOS: Private LLM
Unlock a new realm of productivity and creativity on your iPhone and iPad with Private LLM, the premier AI assistant designed with your privacy in mind. Available for a one-time purchase, it offers a range of AI capabilities without needing a subscription. Experience advanced on-device AI that keeps your interactions confidential and offline.
Why Private LLM is Your Go-To AI Companion:
- Exclusive AI Model Selection: Choose from a diverse set of open-source LLM models optimized for performance and perplexity on iOS with state-of-the-art OmniQuant quantization, including models from Llama 2, Llama 3.2, Llama 3.1, Google Gemma 2, Gemma 3, Microsoft Phi-3, Mistral 7B, Qwen 2.5, Qwen 3, StableLM 3B and many more. Whether you need help with creative brainstorming, coding, or daily questions, customize your AI experience to meet your unique needs.
- Integrated with Siri & Shortcuts: Enhance your AI interactions with Siri commands and customizable Shortcuts. Private LLM seamlessly fits within your Apple ecosystem, making your digital assistant more accessible (a minimal sketch of this kind of Shortcuts integration follows this list).
- Customizable Interactions: Tailor your AI's responses and interactions with customizable system prompts to match your preferences and needs.
- Uncompromised Privacy and Security: With Private LLM, your conversations stay confidential and on your device. All processing happens on-device, so your data never leaves your phone and no internet connection is required.
- Family Sharing & Offline Capabilities: Benefit from a one-time purchase that includes Family Sharing. Download models as needed and enjoy the full functionality of your AI assistant, even without internet access.
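For readers curious what the Siri & Shortcuts integration mentioned above can look like in practice, here is a minimal App Intents sketch. The AskModelIntent and PrivateLLMSession names are hypothetical placeholders, not Private LLM's actual API; the sketch only shows how an iOS app can expose an on-device prompt action to Siri and the Shortcuts app.

import AppIntents

// Hypothetical stand-in for an on-device model session; not Private LLM's real API.
struct PrivateLLMSession {
    func respond(to prompt: String) async -> String {
        // On-device inference would run here.
        return "(model reply)"
    }
}

// An App Intent that Siri and the Shortcuts app can invoke with a text prompt.
struct AskModelIntent: AppIntent {
    static var title: LocalizedStringResource = "Ask the On-Device Model"

    @Parameter(title: "Prompt")
    var prompt: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        let answer = await PrivateLLMSession().respond(to: prompt)
        return .result(value: answer)   // the reply is handed back to the Shortcut
    }
}

Once an intent like this is registered, the action shows up in the Shortcuts app and can be chained into automations or invoked by voice through Siri.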
Supported LLM Model families:
- DeepSeek R1 Distill based models
- Phi 4 based models
- Qwen 3 based models (Qwen3-4B-Instruct-2507)
- Qwen 2.5 based models (0.5B, 1.5B, 3B and 7B)
- Qwen 2.5 Coder based Models (0.5B, 1.5B, 3B, 7B and 14B)
- Llama 3.1 8B based models
- Llama 3.2 1B and 3B based models
- Google Gemma 2 2B and 9B based models
- Google Gemma 3 1B based models
- Mistral 7B based models
- Yi 6B based models
For a full list of supported models, including detailed specifications, please visit privatellm.app/models.
Private LLM is not just a chatbot; it's a comprehensive AI companion designed to respect your privacy while providing versatile, on-demand assistance. Whether you're enhancing your creative writing, tackling complex programming challenges, or just seeking answers, Private LLM adapts to meet your needs while keeping your data secure. Start your journey with Private LLM today and elevate your productivity and creative projects with the most private AI assistant for iOS devices.
Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps like Enchanted, Ollama, LLM Farm, LM Studio, Locally AI, and RecurseChat on three fronts:
1. Private LLM uses a faster, highly optimized inference engine based on mlc-llm.
2. Models in Private LLM are quantized with state-of-the-art algorithms like OmniQuant (see the sketch after this list), while competing apps use naive round-to-nearest quantization.
3. Private LLM is a fully native app built with C++, Metal and Swift, with deep integrations into iOS and iPadOS, while many competing apps are bloated, non-native Electron or Flutter apps.
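To make point 2 above concrete, the sketch below contrasts naive round-to-nearest 4-bit quantization with a clipped variant that narrows the representable range before rounding, which is the basic intuition behind learned-clipping schemes such as OmniQuant. It is a hand-written illustration with a manually chosen clip factor, not the actual OmniQuant algorithm or Private LLM's quantization code.

// Simplified illustration of 4-bit weight quantization. The clip factor here is
// hand-picked; learned-clipping schemes such as OmniQuant learn it per weight
// group to minimise the layer's quantization error.
func quantize4bit(_ weights: [Float], clip: Float = 1.0) -> [Float] {
    let maxAbs = (weights.map { abs($0) }.max() ?? 1) * clip
    let scale = maxAbs / 7                              // symmetric 4-bit levels: -7…7
    return weights.map { w in
        let clamped = min(max(w, -maxAbs), maxAbs)      // clip outliers to the range
        return (clamped / scale).rounded() * scale      // round-to-nearest, then dequantize
    }
}

let weights: [Float] = [0.02, -0.01, 0.03, -0.04, 0.05, 1.5]   // last value is an outlier

// Naive round-to-nearest: the outlier stretches the scale and the small weights collapse to 0.
print(quantize4bit(weights))

// Clipping the range before rounding keeps resolution for the typical weights,
// at the cost of saturating the outlier.
print(quantize4bit(weights, clip: 0.1))

With the naive scale, the single outlier stretches the grid so far that every small weight rounds to zero; clipping sacrifices the outlier but keeps distinct values for the typical weights, and OmniQuant-style methods learn where to draw that line.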
Please note that Private LLM only supports inference with text-based LLMs. Model support varies by device capabilities.
Would love to be able to download different models. If not from an in-app catalogue, perhaps by putting in a URL from Hugging Face? Otherwise pretty neat.
Good
harrisonmackie70
Does what it says, with lots of LLMs. Only request is to have it available on my Apple Watch.
iPhone 16 and iPad Pro gibberish mode as well
Multipodman
I have the same issue someone else had. Eventually the LLMs start delivering gibberish and I have to restart. Too bad. Good idea, but too many issues.
Developer Response
Thanks for the feedback. Private LLM runs LLM inference fully on-device, so results depend on the model you pick and how much free memory your device has. If the device runs low on RAM, or if a chat grows past the model's context window, responses can degrade and look like gibberish. On iPhone, background apps that hog memory and Low Power Mode can also impact quality. Close background apps and avoid Low Power Mode while generating. If a chat gets very long, start a new one to keep within the model's context window. You can see available free memory at the bottom of the Help screen in the app. If you'd like, please join our Discord and we'll help you choose the right model and settings for your device.
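To illustrate the context-window point in the response above, here is a rough sketch of trimming older chat turns so the prompt stays within a fixed token budget. The four-characters-per-token estimate and the 4096-token budget are made-up placeholders, not Private LLM's actual tokenizer or limits.

// Keep only the most recent chat turns within an approximate token budget,
// dropping the oldest turns first. The 4-characters-per-token estimate and the
// 4096-token budget are illustrative placeholders, not the app's real internals.
func trimToContext(_ turns: [String], tokenBudget: Int = 4096) -> [String] {
    var kept: [String] = []
    var usedTokens = 0
    for turn in turns.reversed() {                    // walk from newest to oldest
        let estimatedTokens = max(1, turn.count / 4)
        if usedTokens + estimatedTokens > tokenBudget { break }
        kept.append(turn)
        usedTokens += estimatedTokens
    }
    return Array(kept.reversed())                     // restore chronological order
}

Starting a new chat achieves the same thing manually: it resets the history so the whole prompt fits inside the model's context window again.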
Only gives gibberish answers
Daffodil Eleven
While the app technically does allow you to run models locally on your device, I can’t find a single model, including the packaged one, that gives even remotely accurate answers to simple questions. The answers are always gibberish. This is on an iPhone 16 Pro, so if it’s a limitation of my device, I’m not sure which one would do any better. Tweaking the model parameters doesn’t help at all.
Developer Response
What model is this with? We support hundreds of models at this point.
- Support for the Qwen3-4B-Instruct-2507-heretic abliterated model (on any iOS device with 6GB or more RAM)
- Support for the Qwen3-4B-Instruct-2507-heretic-noslop model (on any iOS device with 6GB or more RAM)
- In addition to being abliterated, the noslop model has been specially tuned to reduce LLM slop in its generated outputs, and it is available exclusively in Private LLM
- Minor bug fixes and updates
Thank you for choosing Private LLM. We are committed to continuing to improve the app and to making it more useful for you. For support requests and feature suggestions, please feel free to join our Discord, email us at support@numen.ie, or tweet us @private_llm. If you enjoy the app, leaving an App Store review is a great way to support us.
Version 1.9.11
The developer, Numen Technologies Limited, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.
Data Not Collected
The developer does not collect any data from this app.
Privacy practices may vary, for example, based on the features you use or your age.
The developer indicated that this app supports the following accessibility features.
Supported Features
Dark Interface
Information
Seller
Numen Technologies Limited
Size
1.4 GB
Category
Utilities
Compatibility
Requires iOS 17.0 or later and a device with the A12 Bionic chip or later.
iPhone Requires iOS 17.0 or later and a device with the A12 Bionic chip or later. • iPhone XS • iPhone XS Max • iPhone XR • iPhone 11 • iPhone 11 Pro • iPhone 11 Pro Max • iPhone SE (2nd generation) • iPhone 12 mini • iPhone 12 • iPhone 12 Pro • iPhone 12 Pro Max • iPhone 13 Pro • iPhone 13 Pro Max • iPhone 13 mini • iPhone 13 • iPhone SE (3rd generation) • iPhone 14 • iPhone 14 Plus • iPhone 14 Pro • iPhone 14 Pro Max • iPhone 15 • iPhone 15 Plus • iPhone 15 Pro • iPhone 15 Pro Max • iPhone 16 • iPhone 16 Plus • iPhone 16 Pro • iPhone 16 Pro Max • iPhone 16e • iPhone 17 Pro • iPhone 17 Pro Max • iPhone 17 • iPhone Air
iPad Requires iPadOS 17.0 or later and a device with the A12 Bionic chip or later. • iPad Pro (11‑inch) • iPad Pro (11‑inch) Wi‑Fi + Cellular • iPad Pro (12.9‑inch) (3rd generation) • iPad Pro (12.9‑inch) (3rd generation) Wi‑Fi + Cellular • iPad mini (5th generation) • iPad mini (5th generation) Wi‑Fi + Cellular • iPad Air (3rd generation) • iPad Air (3rd generation) Wi‑Fi + Cellular • iPad Pro (11‑inch) (2nd generation) • iPad Pro (11‑inch) (2nd generation) Wi‑Fi + Cellular • iPad Pro (12.9‑inch) (4th generation) • iPad Pro (12.9‑inch) (4th generation) Wi‑Fi + Cellular • iPad Air (4th generation) • iPad Air (4th generation) Wi‑Fi + Cellular • iPad (8th generation) • iPad (8th generation) Wi‑Fi + Cellular • iPad Pro (11-inch) (3rd generation) • iPad Pro (11-inch) (3rd generation) Wi-Fi + Cellular • iPad Pro (12.9-inch) (5th generation) • iPad Pro (12.9-inch) (5th generation) Wi-Fi + Cellular • iPad mini (6th generation) • iPad mini (6th generation) Wi‑Fi + Cellular • iPad (9th generation) • iPad (9th generation) Wi‑Fi + Cellular • iPad Air (5th generation) • iPad Air (5th generation) Wi‑Fi + Cellular • iPad (10th generation) • iPad (10th generation) Wi‑Fi + Cellular • iPad Pro (11‑inch) (4th generation) • iPad Pro (11‑inch) (4th generation) Wi‑Fi + Cellular • iPad Pro (12.9‑inch) (6th generation) • iPad Pro (12.9‑inch) (6th generation) Wi‑Fi + Cellular • iPad Air 11-inch (M2) • iPad Air 11-inch (M2) Wi-Fi + Cellular • iPad Air 13-inch (M2) • iPad Air 13-inch (M2) Wi-Fi + Cellular • iPad Pro 11-inch (M4) • iPad Pro 11-inch (M4) Wi-Fi + Cellular • iPad Pro 13-inch (M4) • iPad Pro 13-inch (M4) Wi-Fi + Cellular • iPad mini (A17 Pro) • iPad mini (A17 Pro) Wi-Fi + Cellular • iPad (A16) • iPad (A16) Wi-Fi + Cellular • iPad Air 11-inch (M3) • iPad Air 11-inch (M3) Wi-Fi + Cellular • iPad Air 13-inch (M3) • iPad Air 13-inch (M3) Wi-Fi + Cellular • iPad Pro 11-inch (M5) • iPad Pro 11-inch (M5) Wi-Fi + Cellular • iPad Pro 13-inch (M5) • iPad Pro 13-inch (M5) Wi-Fi + Cellular
Mac Requires macOS 14.0 or later.
Apple Vision Requires visionOS 1.0 or later and a device with the A12 Bionic chip or later.