Create Your Own ChatGPT AI/ML with Ollama!

ChatGPT at home

ChatGPT and Bard have taken the world by storm over the last 12 months. Users can consume AI solutions for free, or pay for premium offerings such as ChatGPT Plus.

This blog will show you how easy it is to create your very own AI platform, either at home or in the cloud on a private VPS.

Ollama (ollama.ai) is an open-source framework that allows you to run large language models (LLMs) locally on your computer. It provides a user-friendly interface for interacting with LLMs, making it easy to generate text, translate languages, and write various kinds of creative content. Ollama supports a variety of pre-trained LLMs, including Llama 2, CodeLlama, and Orca-Mini, and it also allows you to create customised variants of those models.

Here are some of the key features of Ollama:

  1. Ease of Use: Ollama’s simple API makes it straightforward to load, run, and interact with LLMs. You can quickly get started with basic tasks without extensive coding knowledge.
  2. Flexibility: Ollama offers a versatile platform for exploring various applications of LLMs. You can use it for text generation, language translation, creative writing, and more.
  3. Powerful LLMs: Ollama includes pre-trained LLMs like Llama 2, renowned for its large size and capabilities. It also supports creating custom model variants tailored to your specific needs.
  4. Local Execution: Ollama enables you to run LLMs locally on your device, enhancing privacy and control over your data. You don’t rely on cloud-based services and avoid potential latency issues.
  5. Community Support: Ollama actively participates in the LLM community, providing documentation, tutorials, and open-source code to facilitate collaboration and knowledge sharing.
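The feature list above boils down to a very small command-line surface. Here is a minimal sketch of the everyday Ollama workflow (the model name llama2 is just an example, and the block is guarded so it is harmless on a machine where Ollama is not yet installed):

```shell
# Minimal sketch of the day-to-day Ollama CLI workflow.
# Guarded: the commands only run if the ollama binary is installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama2             # download a pre-trained model
  ollama run llama2 "Say hello"  # one-shot prompt (omit the prompt for a chat REPL)
  ollama list                    # show the models available locally
else
  echo "ollama is not installed on this machine"
fi
```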

Overall, Ollama stands as a valuable tool for researchers, developers, and anyone interested in exploring the potential of large language models without the complexities of cloud-based platforms. Its ease of use, flexibility, and powerful LLMs make it accessible to a wide range of users.

How to Install and Use Ollama


According to the official documentation, the recommended system requirements for running Ollama are:

Operating System:
  • Linux: Ubuntu 18.04 or later
  • macOS: macOS 11 Big Sur or later
RAM:
  • 8GB for running 3B models
  • 16GB for running 7B models
  • 32GB for running 13B models
Disk Space:
  • 12GB for installing Ollama and the base models
  • Additional space required for storing model data, depending on the models you use
CPU:
  • Any modern CPU with at least 4 cores is recommended
  • For running 13B models, a CPU with at least 8 cores is recommended
GPU:
  • A GPU is not required for running Ollama, but it can improve performance, especially for larger models
  • If you have a GPU, Ollama can use it to accelerate inference

In addition to the above, Ollama also requires a working internet connection to download the base models and install updates.
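Before installing, you can sanity-check a Linux box against the numbers above. The snippet below is my own rough pre-flight sketch, not an official Ollama tool: it assumes a Linux system with /proc and GNU coreutils, and uses the 3B-model minimums as its thresholds.

```shell
#!/bin/sh
# Rough pre-flight check against the recommended requirements (Linux only).
cores=$(nproc)                                                      # CPU core count
mem_gb=$(awk '/MemTotal/ {print int($2/1024/1024)}' /proc/meminfo)  # total RAM in GB
disk_gb=$(df -Pk / | awk 'NR==2 {print int($4/1024/1024)}')         # free space on / in GB

echo "CPU cores: ${cores}  RAM: ${mem_gb}GB  free disk: ${disk_gb}GB"
[ "$cores" -ge 4 ]    || echo "WARN: fewer than 4 CPU cores"
[ "$mem_gb" -ge 8 ]   || echo "WARN: less than 8GB RAM (the 3B-model minimum)"
[ "$disk_gb" -ge 12 ] || echo "WARN: less than 12GB free disk for Ollama and the base models"
```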

Step 1 – Install

Please Note: As of December 2023, Ollama only supports macOS and Linux; Windows support is planned.

To install Ollama, simply run the following commands:

  • Check you have curl installed:

Debian / Ubuntu
apt install curl -y

RHEL / CentOS 7
yum install curl -y

Fedora / RHEL 8+
dnf install curl -y

  • Run the installation script:

curl https://ollama.ai/install.sh | sh

You should see the following output:

curl https://ollama.ai/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7494 0 7494 0 0 27786 0 --:--:-- --:--:-- --:--:-- 27858

Downloading ollama…

################################################################## 100.0%##O#-#

Installing ollama to /usr/local/bin…
[sudo] password for rbailey:
Creating ollama systemd service…
Enabling and starting ollama service…
WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode.
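If the installer output looks like the above, a quick sanity check confirms the CLI and the background service are up. This sketch assumes the default service listening on port 11434 and degrades gracefully if anything is missing:

```shell
# Post-install sanity check (safe to run even if the install failed).
if command -v ollama >/dev/null 2>&1; then
  ollama --version   # confirm the CLI is on the PATH
  curl -s http://localhost:11434/ \
    || echo "Ollama service is not responding on port 11434"
else
  echo "ollama binary not found - re-run the install script"
fi
```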

Step 2 – Configure an AI model

Browse the Ollama model library at ollama.ai/library to view the available AI models that work with Ollama.

I am going to use CodeLlama; its page in the model library has more info.

ollama run codellama

Note: If you hit an error, try pulling the CodeLlama model before running it:

ollama pull codellama:latest

Step 3 – Query the AI model

Here is an example of CodeLlama creating a bash script to interact with the Okta API:

>>> Write me a bash script that will create a new user via the Okta API
To create a new user via the Okta API, you can use the `curl` command to make an HTTP request to the `/api/v1/users` endpoint.

#!/bin/bash
# Set the variables for the API request
API_URL="https://your-okta-domain.okta.com"   # replace with your Okta org URL
API_TOKEN="your_api_token"                    # replace with your Okta API token
USERNAME="newuser"
PASSWORD="ChangeMe123!"
EMAIL="[email protected]"

# Create the user via the API
curl -X POST \
  "${API_URL}/api/v1/users" \
  -H "Content-Type: application/json" \
  -H "Authorization: SSWS ${API_TOKEN}" \
  -d "{
    \"profile\": {
      \"firstName\": \"${USERNAME}\",
      \"email\": \"${EMAIL}\",
      \"login\": \"${EMAIL}\"
    },
    \"credentials\": {
      \"password\": {
        \"value\": \"${PASSWORD}\"
      }
    }
  }"

This script assumes that you have already set up an API token for your Okta account. If you haven't, you can do so by following the instructions in the Okta documentation. Once you have the API token set up, replace the `your_api_token` placeholder (and the other placeholder variables) with real values and run the script. It will create a new user with the specified username, password, and email address.
Note that this script uses the `curl` command to make an HTTP request to the `/api/v1/users` endpoint. If you don't have `curl` installed on your system, you can install it by running the following:
sudo apt-get install curl
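As an alternative to the interactive >>> prompt used above, the Ollama service also exposes a local REST API on port 11434, which is handy for scripting queries. The sketch below posts a one-shot prompt to the /api/generate endpoint; the model name and prompt are just examples, and the request is guarded in case the service is not running.

```shell
# Query the local Ollama REST API instead of the interactive CLI.
payload='{"model": "codellama", "prompt": "Write a hello-world bash script", "stream": false}'
curl -s http://localhost:11434/api/generate -d "$payload" \
  || echo "Ollama service not reachable on localhost:11434"
```

With "stream": false the API returns one JSON object containing the full response, rather than a stream of partial tokens.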

Ollama Q&A

Q: What sets Ollama apart from other AI platforms like ChatGPT and Bard?

A: Ollama distinguishes itself by being an open-source framework that allows users to run large language models (LLMs) locally. This provides a unique advantage in terms of privacy, control over data, and the ability to operate without reliance on cloud-based services.

Q: How user-friendly is Ollama for those without extensive coding knowledge?

A: Ollama prides itself on its simplicity, featuring a straightforward API that makes loading, running, and interacting with LLMs accessible even for users with limited coding expertise. This ease of use facilitates quick engagement in basic tasks without a steep learning curve.

Q: What are the standout features of Ollama that make it a valuable tool for users?

A: Ollama offers a versatile platform with powerful pre-trained LLMs like Llama 2, CodeLlama, and Orca-Mini. Its flexibility allows users to explore applications such as text generation, language translation, and creative writing. Additionally, the support for custom models caters to specific user needs.


Q: How does Ollama contribute to user privacy and control over data?

A: Ollama allows users to run LLMs locally on their devices, enhancing privacy by avoiding reliance on cloud-based platforms. This not only provides users with increased control over their data but also eliminates potential latency issues associated with cloud services.

Q: Can you share the recommended system requirements for installing Ollama?

A: According to the official documentation, the recommended system requirements include operating systems such as Ubuntu 18.04 or later for Linux and macOS 11 Big Sur or later for macOS. The requirements also specify RAM, disk space, CPU, and, optionally, GPU specifications based on the desired model size.

Q: What steps are involved in installing Ollama on a system?

A: To install Ollama, users need to follow a straightforward process. They should ensure the fulfillment of prerequisites, including system requirements, and then run specific commands for installation. The process involves checking for the presence of curl, running the installation script, and configuring the AI model based on user preferences.

Q: Which AI models are compatible with Ollama, and where can users find them?

A: Ollama supports various AI models, including CodeLlama. Users can explore the available models in the official model library at ollama.ai/library to find detailed information about each model and choose the one that best suits their needs.

Q: How does Ollama facilitate user queries and interactions with AI models?

A: Ollama provides a user-friendly interface for querying AI models. Users can interact with the models by using simple commands, as demonstrated in the article with the example of CodeLlama creating a bash script to interact with the Okta API.


Richard Bailey, a seasoned tech enthusiast, combines a passion for innovation with a knack for simplifying complex concepts. With over a decade in the industry, he's pioneered transformative solutions, blending creativity with technical prowess. An avid writer, Richard's articles resonate with readers, offering insightful perspectives that bridge the gap between technology and everyday life. His commitment to excellence and tireless pursuit of knowledge continues to inspire and shape the tech landscape.
