
Pico AI Server for MLX LLM VLM
Ollama & OpenAI API compatible
Starling Protocol Inc
Free
Description
Lightning-fast LLM and VLM server for home and office local area networks. Built on MLX, powered by the latest DeepSeek, Gemma, Llama, Mistral, Qwen, and Phi models.
One click, and your Apple Silicon Mac becomes a lightning-fast Ollama- and OpenAI-compatible LLM server for everyone on your local network. Run Llama, DeepSeek R1, Gemma 3, and hundreds more, all on-device. No cloud. Complete privacy.
• Instant AI Server
Install from the Mac App Store, pick a model, and Pico starts serving a local URL that you, your teammates, or your family can open in any browser. Zero terminal commands.
• Private & Offline by Default
Every prompt, every response, every model stays on your Mac. Works even offline.
• Built for Apple Silicon Speed
MLX-accelerated to tap the full power of M-series chips, up to 2-3× faster than generic, non-MLX ports.
• 300+ Ready-to-Use Models
Coding helpers, creative writers, research assistants—you name it. Swap models as easily as changing songs, or import your own MLX models.
• Friendly Dashboard, Pro Controls
Adjust creativity, context length, system prompts and more with simple sliders. Power users can dive into advanced settings anytime.
• Works with Your Favorite Tools
Pico exposes both Ollama-style and OpenAI-compatible APIs, so it plugs right into LangChain, Raycast, Obsidian, Logic Pro plug-ins, and countless other apps.
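As a quick illustration, here is a minimal sketch of a chat request against Pico's OpenAI-compatible endpoint, using the official openai Python client. The hostname, the port (Ollama's default, 11434), the /v1 route, and the model id below are illustrative assumptions, not confirmed details of this app; use the URL and model names shown in Pico's dashboard.

# Minimal sketch using the official openai Python client.
# ASSUMPTIONS: the host "your-mac.local", Ollama's default port 11434,
# an OpenAI-style /v1 route, and the model id "gemma3" are placeholders;
# check Pico's dashboard for the real URL and installed model names.
from openai import OpenAI

client = OpenAI(
    base_url="http://your-mac.local:11434/v1",  # placeholder server URL
    api_key="pico",  # local servers generally ignore the key, but the client requires one
)

reply = client.chat.completions.create(
    model="gemma3",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize MLX in one sentence."}],
)
print(reply.choices[0].message.content)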
––– REQUIREMENTS –––
• Apple Silicon Mac (M-series)
• 16 GB RAM minimum
• 32 GB+ for best performance with larger models
NO SUBSCRIPTIONS • NO SIGN-UPS • NO CLOUD
Download Pico today and give your home or team a private, high-performance AI server—powered entirely by the Mac you already own.
• Pico AI Homelab supports over 300 state-of-the-art LLMs and VLMs, such as:
• Google Gemma 3
• DeepSeek R1
• Meta Llama
• Alibaba Qwen 3
• Microsoft Phi 4
• Mistral
…And many more
• Pico AI Homelab supports 23 embedding models (see the example request after this list), such as:
• BERT
• RoBERTa
• XLM-RoBERTa
• CLIP
• Word2Vec
• Model2Vec
• Static
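For the embedding models, here is a similarly hedged sketch of an embeddings request through the OpenAI-compatible API; the base URL and model id are placeholder assumptions, as above.

from openai import OpenAI

client = OpenAI(base_url="http://your-mac.local:11434/v1", api_key="pico")  # placeholder URL

emb = client.embeddings.create(
    model="bert-base",  # placeholder; use an embedding model you've installed in Pico
    input=["Local-first AI on Apple Silicon."],
)
print(len(emb.data[0].embedding))  # prints the returned vector's dimensionality

The length of the returned vector depends on which embedding model you load.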
• Compatible with your existing chat apps, including:
• Open WebUI
• Apollo AI
• Bolt AI
• IntelliBar
• Msty
• Ollamac
• MindMac
• Enchanted
• Kerling
• LibreChat
• Hollama
• Ollama-SwiftUI
• Witsy
• Reactor AI
…And many more
What’s New
Version 1.1.17
- Bug fixes
- Updated privacy policy and terms of use
Ratings and Reviews
Great app for starting with LLMs. Not easy, but great
Overall, this is an amazing app that wraps up a lot of tricky installs into one and handles it rather well. It’s not going to do everything for you, but it will save you a lot of time. You might outgrow it, but it’s so amazing for what it is.
Faster than Ollama
Great to see that this is built on top of MLX, so it’s really optimized for macOS
🤯
This is so easy… THANK YOU!
App Privacy
The developer, Starling Protocol Inc, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.
Data Not Collected
The developer does not collect any data from this app.
Privacy practices may vary, for example, based on the features you use or your age.
Information
- Seller
- Starling Protocol, Inc
- Size
- 20.6 MB
- Category
- Productivity
- Compatibility
- Mac: Requires macOS 15.0 or later.
- Languages
- English, Arabic, Catalan, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Norwegian Bokmål, Polish, Portuguese, Romanian, Russian, Simplified Chinese, Slovak, Spanish, Swedish, Thai, Traditional Chinese, Turkish, Ukrainian, Vietnamese
- Age Rating
- 17+ (Infrequent/Mild Medical/Treatment Information)
- Copyright
- © 2025 Starling Protocol, Inc
- Price
- Free