Private LLM - Local AI Chatbot 17+

Numen Technologies Limited

    • ₩14,000

Description

Discover Private LLM, your secure, private AI assistant for iPhone, iPad, and macOS. Designed to boost your productivity and creativity while protecting your privacy, Private LLM is a one-time purchase that delivers a full range of AI capabilities without subscriptions. The chatbot runs cutting-edge AI entirely on device, keeping your interactions confidential and completely offline, and it works across all your Apple devices.

Why Choose Private LLM?

- Diverse AI Models at Your Command: Select from a wide array of open-source LLM families such as Google Gemma 2B, Mixtral 8x7B, Mistral 7B, Llama 2 (7B and 13B), Llama 33B, CodeLlama 13B, Solar 10.7B, and Phi 2 3B. Customize your AI experience for creative brainstorming, coding assistance, or daily inquiries.

- Seamless Siri & Shortcuts Integration: Enhance your AI interactions with Siri commands and customizable Shortcuts, making your digital assistant more integrated and accessible within your Apple ecosystem.

- Customizable System Prompts: Fine-tune your AI's responses and interactions to your liking with customizable system prompts, catering to your specific needs and preferences.

- Complete Privacy and Security: Private LLM confines your conversations to your device, providing unparalleled privacy. Our on-device AI enables powerful computing without data compromise or the need for an internet connection.

- Family Sharing & Offline Functionality: A one-time purchase that supports Family Sharing. Download models as needed and enjoy full AI assistant functionality, even offline.

- System-Wide Grammar Correction and Summarization: Correct grammar, summarize, shorten, and rephrase text within any app on your Mac.

Private LLM is more than a chatbot; it's an all-encompassing AI companion that respects your privacy while offering versatile, on-demand assistance. Whether for creative writing, solving complex programming issues, or general inquiries, Private LLM adapts to your needs, ensuring your data remains secure and on your device. Embark on a journey with Private LLM today and elevate your productivity and creative projects with the most private AI assistant for macOS and iOS devices.

Leveraging state-of-the-art OmniQuant quantized models, Private LLM is a native Mac app that delivers better text generation, faster performance, and deeper integration than apps built on generic baseline RTN quantized models, such as Ollama and LM Studio.

Supported Model Families:

- Google Gemma Based Models
- Mixtral 8x7B Based Models
- Llama 33B Based Models
- Llama 2 13B Based Models
- CodeLlama 13B Based Models
- Llama 2 7B Based Models
- Solar 10.7B Based Models
- Phi 2 3B Based Models
- Mistral 7B Based Models
- StableLM 3B Based Models
- Yi 6B Based Models
- Yi 34B Based Models

For a full list of supported models, including detailed specifications, please visit our website.

Private LLM for macOS is optimized for Apple Silicon Macs with the Apple M1 chip or later, where it delivers the best performance. Users on older Intel Macs without eGPUs may experience reduced performance.

What's New

Version 1.9.0

- Support for two new models from the Gemma 2 family (on Apple Silicon Macs).
- 4-bit OmniQuant quantized version of the gemma-2-2b-it model.
- 4-bit OmniQuant quantized version of the multilingual SauerkrautLM-gemma-2-2b-it model.
- Stability improvements and bug fixes.

Thank you for choosing Private LLM. We are committed to continually improving the app and making it more useful for you. For support requests and feature suggestions, please email us at support@numen.ie or tweet us @private_llm. If you enjoy the app, leaving an App Store review is a great way to support us.

평가 및 리뷰

Simerotora

Would you add Llama 3 q4 and q5?

Developer Response

Thanks for the review! q4 and q5 are terminology from the llama.cpp world. We don't use llama.cpp, and we don't use naive RTN quants like llama.cpp does. Our quantization algorithm of choice is OmniQuant. We support w4g128asym quantized models in the macOS app. We use the same quantization for smaller models on iOS, but we use w3g40sym quantization for models with 7B or more parameters due to memory constraints. Our plan is to support w4g128asym quantized models on iPads (and hopefully the next generation of Pro and Pro Max devices) with 16GB of RAM.
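For readers unfamiliar with the notation in this reply: w4g128asym denotes 4-bit weights quantized in groups of 128 values with an asymmetric zero point, and w3g40sym presumably denotes 3-bit weights in groups of 40 with a symmetric scheme. The Swift sketch below is only an illustration of what group-wise asymmetric 4-bit quantization means, using plain round-to-nearest; it is not OmniQuant, which additionally learns clipping and scaling parameters per group rather than taking the raw per-group min and max. Names such as quantizeW4Asym and QuantizedGroup are hypothetical and exist only for this example.

// Illustrative sketch only: plain round-to-nearest group quantization,
// shown to unpack the "w4g128asym" notation (4-bit weights, groups of 128,
// asymmetric zero point). OmniQuant itself also learns clipping/scaling
// parameters, which this sketch does not attempt.
import Foundation

struct QuantizedGroup {
    let codes: [UInt8]   // 4-bit codes, stored one per byte here for clarity
    let scale: Float
    let zeroPoint: Float
}

/// Quantize a weight tensor in groups of `groupSize` values (128 for w4g128asym).
func quantizeW4Asym(_ weights: [Float], groupSize: Int = 128) -> [QuantizedGroup] {
    let levels: Float = 15 // 2^4 - 1 representable steps for 4-bit codes
    return stride(from: 0, to: weights.count, by: groupSize).map { start in
        let group = Array(weights[start..<min(start + groupSize, weights.count)])
        let lo = group.min() ?? 0
        let hi = group.max() ?? 0
        // Asymmetric: map the range [lo, hi] onto the integer codes 0...15.
        let scale = max(hi - lo, Float.leastNormalMagnitude) / levels
        let codes = group.map { w in
            UInt8(((w - lo) / scale).rounded().clamped(to: 0...levels))
        }
        return QuantizedGroup(codes: codes, scale: scale, zeroPoint: lo)
    }
}

/// Reconstruct approximate Float weights for use at inference time.
func dequantize(_ groups: [QuantizedGroup]) -> [Float] {
    groups.flatMap { g in g.codes.map { Float($0) * g.scale + g.zeroPoint } }
}

private extension Float {
    func clamped(to range: ClosedRange<Float>) -> Float {
        Swift.min(Swift.max(self, range.lowerBound), range.upperBound)
    }
}

Grouping keeps quantization error local: each block of 128 weights gets its own scale and zero point, so a single outlier weight only degrades its own group rather than the whole tensor.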

App Privacy

The developer, Numen Technologies Limited, indicated that the app's privacy practices may include handling of data as described below. For more information, see the developer's privacy policy.

Data Not Collected

The developer does not collect any data from this app.

Privacy practices may vary based on, for example, the features you use or your age.

Supports

  • Family Sharing


    With Family Sharing set up, up to six family members can use this app.

You Might Also Like

MLC Chat
Productivity
YourChat
Productivity
Local Brain
Productivity
Hugging Chat
Productivity
Vanessa LLM
Productivity
PocketGPT: Private AI
Productivity