Private LLM - Local AI Chatbot 17+

Numen Technologies Limited

    • ₩14,000

Description

Meet Private LLM: Your Secure, Offline AI Assistant for macOS

Private LLM brings advanced AI capabilities directly to your iPhone, iPad, and Mac—all while keeping your data private and offline. With a one-time purchase and no subscriptions, you get a personal AI assistant that works entirely on your device.

Key Features:

- Local AI Functionality: Interact with a sophisticated AI chatbot without needing an internet connection. Your conversations stay on your device, ensuring complete privacy.

- Wide Range of AI Models: Choose from a variety of open-source LLMs, including Llama 3.2, Llama 3.1, Google Gemma 2, Microsoft Phi-3, Mistral 7B, and StableLM 3B. Each model is optimized for iOS and macOS hardware using advanced OmniQuant quantization, which offers superior performance compared to traditional RTN quantization methods.

- Siri and Shortcuts Integration: Create AI-driven workflows without writing code. Use Siri commands and Apple Shortcuts to enhance productivity in tasks like text parsing and generation (see the sketch after this feature list).

- No Subscriptions or Logins: Enjoy full access with a single purchase. No need for subscriptions, accounts, or API keys. Plus, with Family Sharing, up to six family members can use the app.

- AI Language Services on macOS: Utilize AI-powered tools for grammar correction, summarization, and more across various macOS applications in multiple languages.

- Superior Performance with OmniQuant: Benefit from the advanced OmniQuant quantization process, which preserves the model's weight distribution for faster and more accurate responses, outperforming apps that use standard quantization techniques.
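
On the Shortcuts integration above: using it requires no code, but for readers curious how an iOS/macOS app typically exposes such an action to Siri and the Shortcuts app, here is a minimal App Intents sketch. The intent name, its parameter, and the LocalModel stub are illustrative assumptions, not Private LLM's actual API.

```swift
import AppIntents

// Hypothetical sketch: exposing a "generate text" action to Siri and Shortcuts.
// GenerateTextIntent and LocalModel are illustrative names, not Private LLM's API.
struct GenerateTextIntent: AppIntent {
    static var title: LocalizedStringResource = "Generate Text"

    @Parameter(title: "Prompt")
    var prompt: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // A real app would call its local inference engine here.
        let reply = await LocalModel.complete(prompt: prompt)
        return .result(value: reply)
    }
}

// Stand-in for the app's on-device inference engine (stub only).
enum LocalModel {
    static func complete(prompt: String) async -> String {
        "(model output for: \(prompt))"
    }
}
```

Once an app declares an intent like this, the action appears in the Shortcuts app automatically, where it can be chained with other actions or invoked by voice through Siri.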

Supported Model Families:

- Llama 3.2 Based Models
- Llama 3.1 Based Models
- Phi-3 Based Models
- Google Gemma 2 Based Models
- Mixtral 8x7B Based Models
- CodeLlama 13B Based Models
- Solar 10.7B Based Models
- Mistral 7B Based Models
- StableLM 3B Based Models
- Yi 6B Based Models
- Yi 34B Based Models

For a full list of supported models, including detailed specifications, please visit privatellm.app/models.

Private LLM is a better alternative to generic llama.cpp and MLX wrapper apps such as Ollama, LLM Farm, LM Studio, and RecurseChat on three fronts:
1. Private LLM uses a faster mlc-llm based inference engine.
2. All models in Private LLM are quantized using the state-of-the-art OmniQuant quantization algorithm, while competing apps use naive round-to-nearest (RTN) quantization (see the sketch below).
3. Private LLM is a fully native app built using C++, Metal, and Swift, while many of the competing apps are (bloated) Electron-based apps.
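
The round-to-nearest baseline mentioned in point 2 is easy to picture. Below is a toy Swift sketch of naive symmetric RTN 4-bit quantization; it only illustrates the baseline. OmniQuant additionally learns clipping and scaling parameters per weight group, which this sketch does not implement, and the function names here are made up for illustration.

```swift
import Foundation

// Naive round-to-nearest (RTN) 4-bit symmetric quantization: one scale per tensor.
func rtnQuantize(_ weights: [Float], bits: Int = 4) -> (codes: [Int8], scale: Float) {
    let qMax = Float((1 << (bits - 1)) - 1)          // 7 for 4-bit symmetric
    let maxAbs = weights.map { abs($0) }.max() ?? 0
    let scale = maxAbs > 0 ? maxAbs / qMax : 1       // outliers inflate this scale
    let codes = weights.map { w -> Int8 in
        Int8(max(-qMax, min(qMax, (w / scale).rounded())))
    }
    return (codes, scale)
}

func rtnDequantize(_ codes: [Int8], scale: Float) -> [Float] {
    codes.map { Float($0) * scale }
}

// A single outlier inflates the scale, so the small weights collapse to zero --
// the kind of error that learned clipping (as in OmniQuant) is designed to reduce.
let w: [Float] = [0.02, -0.03, 0.01, 1.5]
let (codes, scale) = rtnQuantize(w)
print(codes, scale, rtnDequantize(codes, scale: scale))   // codes become [0, 0, 0, 7]
```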

Optimized for Apple Silicon Macs with the Apple M1 chip or later, Private LLM for macOS delivers the best performance. Users on older Intel Macs without eGPUs may experience reduced performance.

What's New

Version 1.9.2

- Bugfix release: fixes a crash when loading some older models that use the SentencePiece tokenizer.
- Drop support for Llama 3.2 1B and 3B models on Intel Macs due to stability issues.

Thank you for choosing Private LLM. We are committed to continuing to improve the app and making it more useful for you. For support requests and feature suggestions, please feel free to email us at support@numen.ie, or tweet us @private_llm. If you enjoy the app, leaving an App Store review is a great way to support us.

Ratings and Reviews

Simerotora,

Would you add Llama 3 q4 and q5?

Developer Response

Thanks for the review! q4 and q5 are terms from the llama.cpp world. We don't use llama.cpp, and we don't use naive RTN quants like llama.cpp does. Our quantization algorithm of choice is OmniQuant. We support w4g128asym quantized models in the macOS app. We use the same quantization for smaller models on iOS, but we use w3g40sym quantization for models with 7B or more parameters due to memory constraints. Our plan is to support w4g128asym quantized models on iPads (and hopefully the next generation of Pro and Pro Max devices) with 16GB of RAM.
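
For readers unfamiliar with the naming in this response: a label like w4g128asym conventionally denotes 4-bit weights ("w4"), one scale and zero point per group of 128 values ("g128"), and an asymmetric mapping ("asym"); w3g40sym analogously denotes 3-bit symmetric quantization with groups of 40. The toy Swift sketch below illustrates the generic group-wise asymmetric scheme only; the type and function names are made up, and this is not the OmniQuant implementation Private LLM actually uses.

```swift
import Foundation

// Generic group-wise asymmetric quantization, e.g. "w4g128asym":
// 4-bit codes, one (scale, zeroPoint) pair per group of 128 weights.
struct QuantizedGroup {
    let codes: [UInt8]      // 4-bit codes, stored one per byte for simplicity
    let scale: Float
    let zeroPoint: Float
}

func quantizeAsym(_ weights: [Float], bits: Int = 4, groupSize: Int = 128) -> [QuantizedGroup] {
    let levels = Float((1 << bits) - 1)              // max code: 15 for 4-bit (16 levels)
    return stride(from: 0, to: weights.count, by: groupSize).map { start in
        let group = Array(weights[start..<min(start + groupSize, weights.count)])
        let lo = group.min() ?? 0
        let hi = group.max() ?? 0
        let scale = max(hi - lo, 1e-8) / levels      // per-group scale
        let codes = group.map { UInt8(min(levels, max(0, (($0 - lo) / scale).rounded()))) }
        return QuantizedGroup(codes: codes, scale: scale, zeroPoint: lo)
    }
}

// Reconstruction: w ≈ Float(code) * scale + zeroPoint, group by group.
func dequantizeAsym(_ groups: [QuantizedGroup]) -> [Float] {
    groups.flatMap { g in g.codes.map { Float($0) * g.scale + g.zeroPoint } }
}
```

Fewer bits per weight (w3 versus w4) reduce memory use, which is why, per the response above, the 3-bit configuration is reserved for 7B+ models on memory-constrained iOS devices.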

App Privacy

The developer, Numen Technologies Limited, has indicated that the app's privacy practices may include handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, the features you use or your age.

Supports

  • Family Sharing

    With Family Sharing enabled, up to six family members can use this app.

You Might Also Like

MLC Chat
Productivity
Local Brain
Productivity
YourChat
Productivity
PocketGPT: Private AI
Productivity
Hugging Chat
Productivity
PocketPal AI
Productivity