LLM Farm 4+
Run LLM
Artem Savkin
Designed for iPad
- Free
Description
LLMFarm is an iOS and macOS app for working with large language models (LLMs). It allows you to load different LLMs with custom parameters.
# Features
* Various inferences
* Various sampling methods
* Metal
* Model setting templates
* LoRA adapters support
* LoRA FineTune and Export
# Inferences
* LLaMA
* GPTNeoX
* Replit
* GPT2 + Cerebras
* Starcoder (Santacoder)
* RWKV (20B tokenizer)
* Falcon
* MPT
* Bloom
* StableLM-3b-4e1t
* Qwen
* Gemma
* Phi
* Mamba
* Others
# Multimodal
* LLaVA 1.5 models
* Obsidian
* MobileVLM 1.7B/3B models
Note: For Falcon, Alpaca, GPT4All, Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2, Vigogne (French), Vicuna, Koala, OpenBuddy (Multilingual), Pygmalion/Metharme, WizardLM, Baichuan 1 & 2 + derivations, Aquila 1 & 2, Mistral AI v0.1, Refact, Persimmon 8B, MPT, and Bloom, select the LLaMA inference in model settings.
# Sampling methods
* Temperature (temp, top-k, top-p)
* Tail Free Sampling (TFS)
* Locally Typical Sampling
* Mirostat
* Greedy
* Grammar (does not work with GGJTv3)
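The temperature, top-k, and top-p knobs listed above are the standard llama.cpp-style sampling controls. As a rough illustration of how they combine, and not LLMFarm's actual implementation, here is a minimal Python sketch: temperature rescales the logits, top-k keeps only the k most likely tokens, and top-p (nucleus sampling) keeps the smallest set of tokens whose cumulative probability reaches p.

```python
import math
import random

def sample_next_token(logits, temp=0.8, top_k=40, top_p=0.95, rng=None):
    """Pick a token index from raw logits using temperature, top-k, and top-p."""
    rng = rng or random.Random(0)
    if temp <= 0:
        # Greedy sampling: the highest-logit token always wins.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature: <1 sharpens the distribution, >1 flattens it.
    scaled = [l / temp for l in logits]
    # Softmax (subtract the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-k: keep only the k most likely tokens.
    probs.sort(key=lambda pair: pair[1], reverse=True)
    probs = probs[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalise over the surviving tokens and draw one.
    mass = sum(p for _, p in kept)
    r = rng.random() * mass
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With `temp=0` this degenerates to the greedy method, and with `top_k=1` only the single most likely token can ever be drawn; the other methods in the list (TFS, locally typical, Mirostat) are alternative ways of trimming the same distribution.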
What’s New
Version 1.1.1
Changes:
* llama.cpp updated to b2717
* Support for Phi-3, Mamba (CPU only), Gemma, StarCoder2, GritLM, Command-R, MobileVLM_V2, and qwen2moe models
* IQ1_S, IQ2_S, IQ2_M, IQ3_S, IQ4_NL, IQ4_XS quantization support
* Performance improvements
* Fixed crash when EOS option is on
* Fixed image orientation
* Added warning about creating a chat without a selected model
Ratings and Reviews
Looks promising
This app looks pretty promising, but it’s a little bit daunting to someone who’s not as familiar with setting up LLMs. For example, how do you download the LLMs, and where do you go to get them? Which models are likely to work? It might be a good idea to list specific LLMs that have been tested on specific devices. Some sort of tutorial or instructions would be really useful.
Impressive start.
Love this app. It is surprisingly powerful and has tons of tweaking options that many apps on full computers lack. I use it on my phone and iPad. I would love to see support for newer Gemma models and the ability to act as an inference server, like Ollama or LM Studio, so I can run local inference for Obsidian or other apps.
App only needs shortcut integrations
This is one of the best LLM apps for mobile! It has a lot of customizable settings and supports a lot of quantized models. Shortcut integration is all this amazing app needs, so please, great developer, add App Intents to the app!
App Privacy
The developer, Artem Savkin, indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.
Data Not Collected
The developer does not collect any data from this app.
Privacy practices may vary, for example, based on the features you use or your age.
Information
- Seller
- Artem Savkin
- Size
- 18.4 MB
- Category
- Developer Tools
- Compatibility
-
- iPhone
- Requires iOS 16.0 or later.
- iPad
- Requires iPadOS 16.0 or later.
- Mac
- Requires macOS 13.0 or later and a Mac with Apple M1 chip or later.
- Apple Vision
- Requires visionOS 1.0 or later.
- Languages
-
English
- Age Rating
- 4+
- Copyright
- © Artem Savkin
- Price
- Free