Private LLM - Local AI Chatbot 17+

Secure Private AI Chatbot

Numen Technologies Limited

    • $9.99

Screenshots

Description

Now with Llama 3.2 1B, Llama 3.2 3B and Gemma 2 9B based models.

Meet Private LLM: Your Secure, Offline AI Assistant for macOS

Private LLM brings advanced AI capabilities directly to your iPhone, iPad, and Mac—all while keeping your data private and offline. With a one-time purchase and no subscriptions, you get a personal AI assistant that works entirely on your device.

Key Features:

- Local AI Functionality: Interact with a sophisticated AI chatbot without needing an internet connection. Your conversations stay on your device, ensuring complete privacy.

- Wide Range of AI Models: Choose from a variety of open-source LLMs such as Llama 3.2, Llama 3.1, Google Gemma 2, Microsoft Phi-3, Mistral 7B, and StableLM 3B. Each model is optimized for iOS and macOS hardware using advanced OmniQuant quantization, which offers superior performance compared to traditional RTN quantization methods.

- Siri and Shortcuts Integration: Create AI-driven workflows without writing code. Use Siri commands and Apple Shortcuts to enhance productivity in tasks like text parsing and generation (see the sketch after this list).

- No Subscriptions or Logins: Enjoy full access with a single purchase. No need for subscriptions, accounts, or API keys. Plus, with Family Sharing, up to six family members can use the app.

- AI Language Services on macOS: Utilize AI-powered tools for grammar correction, summarization, and more across various macOS applications in multiple languages.

- Superior Performance with OmniQuant: Benefit from the advanced OmniQuant quantization process, which preserves the model's weight distribution for faster and more accurate responses, outperforming apps that use standard quantization techniques.
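
To make the Siri and Shortcuts item above concrete: in normal use you simply drop the app's actions into a shortcut, with no code required. For the curious, the hypothetical sketch below shows what such an action could look like under Apple's App Intents framework; `GenerateTextIntent` and the `LocalModel` stub are illustrative assumptions, not Private LLM's actual interface.

```swift
import AppIntents

// Stand-in for an on-device inference engine (assumption, for illustration only).
enum LocalModel {
    static func generate(prompt: String) async throws -> String {
        "echo: \(prompt)" // a real implementation would run the local LLM here
    }
}

// Hypothetical App Intent exposing on-device text generation to Shortcuts and Siri.
struct GenerateTextIntent: AppIntent {
    static var title: LocalizedStringResource = "Generate Text"

    @Parameter(title: "Prompt")
    var prompt: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // Everything runs locally; no network request is made.
        let reply = try await LocalModel.generate(prompt: prompt)
        return .result(value: reply)
    }
}
```

From Shortcuts, such an action would appear as a "Generate Text" block that accepts a prompt and returns the model's reply, which can then be chained into other actions.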

Supported Model Families:

- Llama 3.2 Based Models
- Llama 3.1 Based Models
- Phi-3 Based Models
- Google Gemma 2 Based Models
- Mixtral 8x7B Based Models
- CodeLlama 13B Based Models
- Solar 10.7B Based Models
- Mistral 7B Based Models
- StableLM 3B Based Models
- Yi 6B Based Models
- Yi 34B Based Models

For a full list of supported models, including detailed specifications, please visit privatellm.app/models.

Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps such as Ollama, LLM Farm, LM Studio, and RecurseChat on three fronts:
1. Private LLM uses a faster mlc-llm based inference engine.
2. All models in Private LLM are quantized using the state-of-the-art OmniQuant quantization algorithm, while competing apps use naive round-to-nearest quantization (see the sketch below).
3. Private LLM is a fully native app built with C++, Metal, and Swift, while many of the competing apps are bloated Electron-based apps.
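
As a rough illustration of point 2, the sketch below shows naive round-to-nearest (RTN) 4-bit weight quantization, where the scale comes straight from a group's largest weight. OmniQuant instead learns clipping thresholds (and equivalent transformations) to minimize reconstruction error; that optimization loop is omitted here, and all names are illustrative rather than taken from Private LLM's code.

```swift
import Foundation

/// Quantize one group of weights to signed 4-bit codes with a single shared scale.
/// This is the naive round-to-nearest (RTN) baseline, not OmniQuant.
func quantizeRTN(_ weights: [Float], bits: Int = 4) -> (codes: [Int8], scale: Float) {
    let qMax = Float((1 << (bits - 1)) - 1)          // 7 for signed 4-bit
    let maxAbs = weights.map { abs($0) }.max() ?? 1
    let scale = maxAbs / qMax                        // RTN: scale set by the raw maximum
    let codes = weights.map { w -> Int8 in
        Int8(max(-qMax - 1, min(qMax, (w / scale).rounded())))
    }
    return (codes, scale)
}

/// Dequantize back to floats; the gap versus the originals is the quantization error.
func dequantize(_ codes: [Int8], scale: Float) -> [Float] {
    codes.map { Float($0) * scale }
}

// One outlier (2.5) inflates the RTN scale and crushes resolution for the small weights.
// A learned clipping threshold (the OmniQuant idea) trades a little outlier error for
// much better precision on the bulk of the weight distribution.
let group: [Float] = [0.12, -0.03, 0.9, -0.41, 0.07, 2.5, -0.2, 0.33]
let (codes, scale) = quantizeRTN(group)
let restored = dequantize(codes, scale: scale)
print(zip(group, restored).map { $0.0 - $0.1 })      // per-weight error
```

With a single outlier in the group, RTN spends most of its 16 levels on values that never occur; clipping the range before rounding keeps more resolution for the bulk of the weights, which is the intuition behind the quality gap described above.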

Private LLM for macOS is optimized for Apple Silicon Macs (M1 or later), where it delivers its best performance. Users on older Intel Macs without eGPUs may experience reduced performance.

What's New

Version 1.9.2

- Bugfix release: fixes a crash when loading some older models that use the SentencePiece tokenizer.
- Dropped support for Llama 3.2 1B and 3B models on Intel Macs due to stability issues.

Thank you for choosing Private LLM. We are committed to continually improving the app and making it more useful for you. For support requests and feature suggestions, please feel free to email us at support@numen.ie or tweet us @private_llm. If you enjoy the app, leaving an App Store review is a great way to support us.

Ratings and Reviews

4.6 out of 5
138 ratings

adora55,

Pretty great, but it has some limitations

The implementation here is great, especially now that the devs have added support for several models! Works best with devices with a lot of RAM - I have an iPhone 15 Pro, which seems to be enough, but my old 12 didn't really handle it well. I just wish you could save threads like in the ChatGPT app (and maybe give specific system instructions depending on the thread?). Otherwise, pretty good and worth the price!

Also, if we could save specific temperature and top-P presets with models and maybe be able to directly type values in, that would be nice.

N7nathan,

So much potential

I've been using this app for a couple of months at this point. When it was first released, it was a neat proof of concept, but it only supported 7B models and was overall just too simple to use effectively. With a recent update, a 13B model was released for all Macs with 16GB of memory, and it makes such a huge difference! It's not quite at the same level as ChatGPT 3.5T, but it's close enough that I never use 3.5T anymore; this is my go-to. I greatly appreciate the on-device processing (hallelujah privacy!), and it doesn't even use too much power - my battery still lasts for hours and hours. The performance is also great; my base M1 Air powers right through the prompts.

I only have two complaints about the app at this point. 1) The 13B model uses about 12GB of memory by itself, which does force the use of swap on a 16GB Air. Not much the dev can do about this, but it is something to keep in mind. You'll want to close out of other programs before launching this. 2) There still is no feature that has separate conversations. If you want to start a new conversation, you need to delete the existing conversation. I'd love it if we could get separate conversations in a future update; it would make this app so much easier to use.

Overall, I love it and do not regret buying it at all. I can't wait to see what future updates bring :)

Universe NZ,

A great tool, with huge potential.

Obviously, being an offline GPT limits the responses, but the foundation for something impressive on future, less memory-restricted devices (e.g., the iPhone 15 and beyond) is really exciting. Offline and privacy-focused are essential qualities of this app, and it works quite well. A good bargain for the price, with an active developer. One recommendation/piece of feedback would be to provide information on what the Top-P and temperature settings do. If we come across an error or a weird, reproducible response, could you provide an option to email you a log so we can forward you the issue easily? This way you don't need to embed any privacy-violating analytics packages.

Developer Response,

Thanks for your feedback! Indeed, we're hoping the upcoming iPhone 15 series of devices will have more memory, so we can ship bigger and smarter models for newer devices (as free updates, of course!). We're in the process of adding an FAQ to our website to answer the temperature and top-p question; we've already received it a few times on our support email and on Discord. Perhaps I should also add a help section within the app. Thanks for suggesting the email log idea; I'll add it to our backlog. We specifically don't embed any analytics packages in the app, as that would be antithetical to privacy, which is one of our app's USPs. The only logs we get are crash logs from iOS's off-by-default, opt-in Analytics feature (Settings -> Privacy & Security -> Analytics & Improvements -> Share With App Developers). Also, responses from the LLM in the app are (intentionally) stochastic and are often hard to reproduce.
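
For readers wondering, like the reviewer above, what the temperature and top-p settings control, the sketch below shows the standard sampling procedure those two dials belong to. It is illustrative only, not Private LLM's source, and it also shows why responses are stochastic: the final token is drawn at random from the filtered distribution.

```swift
import Foundation

/// Sample one token index from raw logits using temperature scaling and
/// nucleus (top-p) filtering. Standard technique, shown here for illustration.
func sampleToken(logits: [Double], temperature: Double, topP: Double) -> Int {
    // Temperature: divide logits before softmax. Values < 1 sharpen the
    // distribution (more deterministic); values > 1 flatten it (more creative).
    let scaled = logits.map { $0 / max(temperature, 1e-6) }
    let maxLogit = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxLogit) }     // numerically stable softmax
    let total = exps.reduce(0, +)
    let probs = exps.map { $0 / total }

    // Top-p: keep only the smallest set of most-likely tokens whose cumulative
    // probability reaches topP, then sample from that reduced set.
    let ranked = probs.indices.sorted { probs[$0] > probs[$1] }
    var kept: [Int] = []
    var cumulative = 0.0
    for index in ranked {
        kept.append(index)
        cumulative += probs[index]
        if cumulative >= topP { break }
    }

    // Draw a random token from the kept set, weighted by its probability.
    var draw = Double.random(in: 0..<cumulative)
    for index in kept {
        draw -= probs[index]
        if draw <= 0 { return index }
    }
    return kept.last ?? 0
}

// Example: low temperature and low top-p make the most likely token dominate.
let logits = [2.0, 1.0, 0.5, -1.0]
print("sampled token index:", sampleToken(logits: logits, temperature: 0.7, topP: 0.9))
```

Lower temperature and lower top-p both concentrate sampling on the most likely tokens; raising either lets less likely tokens through, which reads as more creative but also more variable output.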

App Privacy

The developer, Numen Technologies Limited, has indicated that the app's privacy practices may include the handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, your age or the features you use. Learn More

Compatibility

  • Family Sharing

    Up to six family members can use this app when Family Sharing is enabled.

You Might Also Like

MLC Chat
Productivity
Local Brain
Productivity
YourChat
Productivity
PocketGPT: Private AI
Productivity
PocketPal AI
Productivity
LLM: Answering Machine
Productivity