
Inferencer - Private AI Studio
Advanced Local AI Assistant
Only for Mac
Free · In‑App Purchases
Inferencer lets you run, host and deeply control the latest SOTA AI models (OSS, DeepSeek, Qwen, Kimi, GLM and more) from your own computer.
No data is sent to the cloud for processing, maintaining your complete privacy.
Advanced inferencing controls give you complete control over model accuracy and outputs.
Models
Start in the Models section, where you can select the location of existing models or download new ones directly from Hugging Face.
Use the distributed compute feature to load a model across two Macs, or use the model streaming feature to inference larger models partially from storage.
Chats
Select the model to interact with from the top menu bar and write a prompt to begin. At any point you can switch between models and continue the chat to see what else they can uncover. You can also selectively delete past messages to keep the model focused and less scatterbrained.
Chat Controls
Control the inferencing parameters, including batching to inference multiple chats at the same time, processing intensity, and model streaming to load models larger than available memory.
Token Entropy and Inspection
Select the inspectors to peek into the inner workings of each output word and see the model's confidence levels and alternative choices.
Prompt Framing
Expand the prompt section to utilise the framing feature, which allows you to control the output the model generates.
Tools
The tools editor allows you to enable built-in tools such as get_webpage_content or add your own, so that models can use them when needed. For example, if you'd like a webpage or search result inferenced, simply enable the tool in the Tools section and allow tool calls in the chat settings panel.
Server
If enabled, the server feature allows you to serve to and connect with your own or trusted devices; no data is sent elsewhere. It also includes compatible APIs for application development.
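As a rough sketch of how those APIs could be used from your own code, the example below assumes (this is not confirmed by the listing) that the Compatibility APIs accept an OpenAI-style POST to /v1/chat/completions over plain HTTP on localhost; the port (8080) and model name ("my-model") are placeholders, so check the Server settings panel for the actual address and loaded model.

import Foundation

// Minimal sketch of calling Inferencer's local server from a command-line tool.
// Assumptions: OpenAI-style chat-completions endpoint, SSL disabled,
// server reachable at localhost:8080, "my-model" = whatever model is loaded.
let url = URL(string: "http://localhost:8080/v1/chat/completions")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")

let body: [String: Any] = [
    "model": "my-model",   // placeholder model identifier
    "messages": [
        ["role": "user", "content": "Summarise this paragraph in one line."]
    ]
]
request.httpBody = try? JSONSerialization.data(withJSONObject: body)

// Block until the reply arrives (adequate for a simple script).
let done = DispatchSemaphore(value: 0)
URLSession.shared.dataTask(with: request) { data, _, error in
    defer { done.signal() }
    if let error = error {
        print("Request failed: \(error)")
    } else if let data = data, let text = String(data: data, encoding: .utf8) {
        print(text)   // raw JSON containing the model's reply
    }
}.resume()
done.wait()

Because the request body follows the widely used chat-completions shape, most OpenAI-compatible client libraries should also be able to point at the same local endpoint, subject to the assumptions above.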
Distributed Inference
With distributed compute you can link together two Macs, sharing their memory to inference larger models. To use it, make sure it's enabled in both the app and server settings. Once a connection to your server is made and both computers have the same model, a distributed compute icon will appear. Simply tap it to load the model for distributed compute.
Coding Tools
Built-in support for Xcode Intelligence and Visual Studio Code. Use the server feature with Compatibility APIs enabled and SSL disabled to allow Xcode or Visual Studio Code to use Inferencer as a service provider.
Shortcuts
Use the Shortcuts app to automate inferencing workflows (e.g., copy text from clipboard > inference > speak result).
Settings
Includes parental controls, an automatic deletion policy and more.
Privacy
For maximum privacy, all AI processing happens offline and on your device, by default.
Subscriptions
Basic (Free): Most features unlocked for free including unlimited chats.
Professional: Upgrade for more advanced token inspection, prompt framing, and model streaming.
Terms & Support
Terms of Use: inferencer.com/terms
Privacy Policy: inferencer.com/privacy
Disclaimer
Inferenced models may not always be accurate or contextually appropriate. You are responsible for verifying the information before making important decisions.
What's New
+ Control the Mixture of Experts for faster inference or deeper intelligence
+ Added tool call support for Agent workflows with GitHub Copilot, Continue.dev, Cline, Kilo Code, Roo Code and more
+ Tasks view now includes a prompt progress bar and can be detached for background updates
+ Support for K-EXAONE-236B, Solar-Open-100B and IQuest-Coder
+ Better support for coding agents with response length override API setting
+ Cache reuse when resuming a cancelled generation
+ Loading screens are no longer modal
+ Moved distributed compute out of preview
+ Reduced network traffic for client/server
+ Crash fix for macOS 26 when downloading models
+ More bug fixes and performance improvements
App Privacy
The developer, Ashraf Samy, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.
Data Not Collected
The developer does not collect any data from this app.
Accessibility
The developer has not yet indicated which accessibility features this app supports.
Information
- Seller: Ashraf Samy
- Size: 609.3 MB
- Category: Productivity
- Compatibility: Requires macOS 15.0 or later and a Mac with Apple M1 chip or later.
- Languages: English
- Age Rating: 13+. This app has an age rating of 13+ with content restrictions. Some content may be rated higher, but access is managed by the developer through in-app controls.
- In-App Controls: Parental Controls
- Infrequent: Cartoon or Fantasy Violence, Profanity or Crude Humor, Mature or Suggestive Themes, Horror/Fear Themes, Medical/Treatment Information, Alcohol, Tobacco, or Drug Use or References, Guns or Other Weapons
- Contains: User-Generated Content
- In-App Purchases: Yes (Professional, $9.99)
- Copyright: © 2025 Inferencer