Introduction
At Accessibility Labs, we believe that artificial intelligence should be available to everyone—not just through cloud-based services, but right on your own device. Many users are unaware that powerful AI models can run offline, subscription-free, and entirely under their control.
In this guide, we’ll walk you through installing and running Ollama, a lightweight tool that lets you use Large Language Models (LLMs) locally.
What is Artificial Intelligence?
Artificial Intelligence, or AI, is a way for computers to learn and solve problems much like people do. It works by taking in many examples (pictures, words, or numbers) and finding patterns in them. Once it learns these patterns, it can use them to make decisions or answer questions. In short, AI is teaching a computer to recognize things and perform tasks without a human guiding every step.
What Are LLMs and What Is Ollama?
Large Language Models (LLMs): LLMs are a type of AI system trained on vast amounts of text data to generate human-like language. They understand context, answer questions, and generate creative content. Popular examples include OpenAI’s ChatGPT and Google’s Gemini, which run on remote servers and depend on an internet connection. These hosted LLMs are managed, updated, and scaled by the provider.
Ollama: Ollama is a lightweight tool that allows you to run LLMs locally on your machine. Unlike cloud-based services, Ollama is primarily designed to be used from the command line. This means you interact with it through your Command Line Interface, which brings benefits like enhanced privacy, lower latency, offline availability, and no recurring subscription fees.

Installing Ollama
Download Ollama Installer:
Visit Ollama’s Website and download the installer for your operating system.
Download Details:
- Supported Operating Systems: Windows, Mac, and Linux
- Operating System Requirements: Windows 10 or later, or macOS 11 (Big Sur) or later.
- App Size: 768 MB for Windows, 425 MB for Mac
Run the Installer:
- Windows: Run OllamaSetup.exe
- macOS: Open Ollama.app
- Linux: Install by running the following command:
curl -fsSL https://ollama.com/install.sh | sh
After installing Ollama, opening the application will not launch a visible window. This is expected: Ollama operates through the command line. The following applications are the default Command Line Interfaces on Ollama’s supported operating systems:
- Windows: Command Prompt
- macOS: Terminal
- Linux: Bash Shell
With Ollama installed, you can now run any model provided on Ollama’s website. In the next section we will cover the variety of available models and how to install them, readying Ollama for prompting.
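Before moving on, you can confirm the install worked by asking Ollama for its version. Here is a minimal sketch; it assumes the installer added ollama to your PATH, and prints a fallback message if the command is not found:

```shell
# Check whether the ollama command is available, then print its version
if command -v ollama >/dev/null 2>&1; then
  status="installed: $(ollama --version)"
else
  status="ollama not found on PATH"
fi
echo "$status"
```

If you see the fallback message, try opening a fresh Command Line Interface window so the updated PATH takes effect.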
Ollama Models
Ollama’s models are the trained AI systems the application loads to respond to your prompts. They cover everything from general-purpose use to specialized tasks like coding and vision. Ollama’s collection of models is constantly growing; there are currently well over a hundred.
On Ollama’s model list, each model is presented with its name, description, versions, number of downloads, supported tags, and the time since it was last updated.

Versions within a model are labeled with a number followed by ‘b’, shorthand for how many billions of parameters the model contains; parameters are the adjustable values the model learned during training. Larger versions typically require more storage and more powerful hardware to run well.
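As a rough rule of thumb, a model quantized to 4 bits needs about half a byte of storage per parameter. This back-of-the-envelope sketch shows why a 22b model lands in the low teens of gigabytes; the 0.5 bytes-per-parameter figure is an assumption for 4-bit quantization and ignores file overhead:

```shell
# Estimate download size: parameters x ~0.5 bytes each (4-bit quantization)
params=22000000000                       # a 22b model
approx_gb=$(( params / 2 / 1000000000 )) # halve, then convert bytes to GB
echo "~${approx_gb} GB before overhead"
```

Actual downloads run a little larger than this estimate because of metadata and differing quantization schemes.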
Running Your First Model
To download a model, first select it and choose a version in Ollama’s catalogue.
Our team at Accessibility Labs chose Mistral-Small 22b for this demonstration due to its impressive general outputs, wide language support, and lightweight size of 13 gigabytes. If you have limited system resources, smaller models are recommended, as they require less RAM and storage. For advanced usage requiring greater accuracy, larger models are available.
Every model page includes a version dropdown and a command line prompt after the last updated date. When you change the selected version in the dropdown, the command updates to match. This command is what you copy and paste into your Command Line Interface to install the model.

Install the Model
To install your model (in this example, Mistral-Small 22b), paste the provided command directly into your device’s Command Line Interface (Command Prompt, Terminal, or shell):
ollama run mistral-small:22b
If the command runs successfully, the download will begin. Your Command Line Interface will show ‘pulling manifest’ and display the download progress.


Prompt the Model
Once the pull is complete, the output will indicate success and invite you to ‘Send a message’. You are now free to explore the power of Mistral-Small 22b by prompting it about anything you would like to learn more about!
Here are a couple prompt examples to try:
- “Explain the rules of chess like I’m a beginner.”
- “Help me write a polite but firm email requesting a refund.”
- “Generate a short science fiction story.”
If you close the window and later have new prompt ideas, run the ‘ollama run’ command again in a new Command Line Interface window:
ollama run mistral-small:22b
This time the model will not need to be downloaded; you will immediately see ‘>>> Send a message…’ and can begin entering new prompts.
Through Ollama you can have multiple models installed on your PC at once. To specify which one to run, modify the ‘ollama run’ command to contain the new model’s name.
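The sketch below shows this workflow: listing what is installed, then building the command that switches models. Note that llama3.2 is just an example name from Ollama’s catalogue, and the guard lets the sketch degrade gracefully if ollama is not on your PATH:

```shell
model="llama3.2"                # example name from Ollama's catalogue
switch_cmd="ollama run $model"  # changing the name switches models
if command -v ollama >/dev/null 2>&1; then
  ollama list                   # show every model installed locally
fi
echo "To switch models, run: $switch_cmd"
```

You can also pass a prompt directly on the command line, for example `ollama run mistral-small:22b "Explain chess briefly"`, to get a single answer without entering the interactive session.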
Available Commands:
- Set session variables: /set
- Show model information: /show
- Load a session or model: /load
- Save your current session: /save
- Clear session context: /clear
- Exit: /bye
- Command List: /help
- Beginning a multi-line message: “””, 3 quotation marks
Keyboard Shortcuts:
- Move to the beginning of the line: Ctrl + A
- Move to the end of the line: Ctrl + E
- Move back one word: Alt + B
- Move forward one word: Alt + F
- Delete from the cursor to the end of the line: Ctrl + K
- Delete from the cursor to the beginning of the line: Ctrl + U
- Delete the word before the cursor: Ctrl + W
- Clear the screen: Ctrl + L
- Stop the model from responding: Ctrl + C
- Exit Ollama: Ctrl + D (also /bye)
Uninstall the Model
If you would like to uninstall a model, run the command ‘ollama rm’ followed by the model’s name in your Command Line Interface. For Mistral-Small 22b the command is:
ollama rm mistral-small:22b
A Note on Screen Reader Support
Windows
Using Windows Narrator or NVDA with Command Prompt or PowerShell presents challenges when interacting with Ollama. Responses are not keyboard-navigable, and narration often reads partial, out-of-context phrases as text appears on screen.
For NVDA, a workaround is:
- Send your prompt.
- Briefly switch focus to another window while the response processes.
- Return to the command window and use Ctrl + Up Arrow (Read Previous Paragraph) to hear the full response.
Unfortunately, Windows Narrator currently has no effective workaround for this issue.
Mac
With VoiceOver in Terminal, we observed the same issue: responses cannot be navigated by keyboard, and narration reads incomplete phrases as text loads dynamically.
A workaround is to switch focus to another window while the response completes, then return to Terminal to review the output more effectively.
Optional: Graphical User Interface (GUI)
If you prefer a graphical interface over the command line, consider these third-party options:
- Ollama GUI: A modern web-based interface that lets you chat with your local LLMs via Ollama. It’s built to offer a clean, user-friendly experience while leveraging Ollama’s backend. Check out the Ollama GUI GitHub repository for installation instructions and more details.
- Open WebUI: Another popular option that supports various local LLM setups, including Ollama. This tool provides a rich web interface and additional features like chat history, markdown support, and more. See the Open WebUI project for more information.
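Interfaces like these talk to the local HTTP API that the Ollama service exposes, by default on port 11434. If you are curious, you can try that API directly with curl; the sketch below assumes the service is running locally, and falls back to printing the request body if it is not reachable:

```shell
# JSON body for Ollama's /api/generate endpoint ("stream": false returns one reply)
payload='{"model": "mistral-small:22b", "prompt": "Say hello", "stream": false}'
if curl -s --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate -d "$payload"
else
  echo "Ollama service not reachable; would send: $payload"
fi
```

This is the same backend both GUIs build on, so anything they can do can also be scripted against the API.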
