
Ollama

Let's write an ugly bash script that reinvents the wheel. And let's compare locally running LLMs to a megacorporation that didn't do proper QA.


I don't want to use public GPT models. I do not have an account with OpenAI, because I don't trust them with my queries, and the same goes for other services, paid or free. I did use Bing Chat a few times when all my search queries failed and it did an OK job, but it was always the last resort.

I have tried using Alpaca, but the app stopped working for some reason, so I looked at what it was using in the background and tried to implement a wrapper around it myself.

ollama does have a CLI, and it works well enough for me.
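For reference, these are the upstream commands my wrapper drives; a minimal session, with the model name as just an example, looks like this:

# start the server in the background
ollama serve &

# fetch a model from the library
ollama pull gemma:2b

# enter the interactive chat
ollama run gemma:2b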

A dirty bash script

I make a living architecting and writing programs/applications. I wrote this thing in my free time on a Friday evening, and it does not meet my quality expectations; I wouldn't allow myself to ship it to paying customers. However, it gets the job done, so I am fine with it.

I hate having to manage my home-grown applications, especially when they need external files, so I generate everything on the fly. This way I can commit this single file to my git backups and have it available anywhere.

In the build step, I use podman to install ollama (using the definitely trustworthy curl | sh installer), run the server in the background, pull the model, and bake the whole thing into a local image.
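The script below drives this through a Containerfile, but the same flow could be done by hand; a rough sketch with placeholder names, assuming the install script behaves in a plain container:

# start a throwaway container (the name is a placeholder)
podman run -d --name ollama-build registry.fedoraproject.org/fedora:40 sleep inf

# install ollama and pull a model inside it
podman exec ollama-build bash -c 'curl -fsSL https://ollama.com/install.sh | sh'
podman exec ollama-build bash -c 'ollama serve & sleep 5; ollama pull gemma:2b'

# freeze the result into a local image and clean up
podman commit ollama-build localhost/ollama:gemma-2b
podman rm -f ollama-build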

For the run step, I just start the server and enter the CLI interface ollama ships. Some parts leave a lot to be desired, but it works well enough.

The ugly bash script.
#!/bin/bash

set -e
if [ "$DEBUG" ]; then
	set -x
fi

# Input
COMMAND="$1"
if [ "$COMMAND" == "" ]; then COMMAND="help"; fi
MODEL="$2"

case "$COMMAND" in
	"run")
		;;
	"build")
		;;
	"built")
		# grep exits non-zero when nothing is built yet, which would trip set -e
		podman image ls | grep "localhost/ollama" || true
		exit 0
		;;
	*)
		if [ "$COMMAND" != "help" ]; then echo "ollama: error: unknown command"; fi
		echo "ollama: help: ollama [ run MODEL | build MODEL | built | help ]"
		echo "ollama: info: existing models: gemma-2b, openchat, llama2-uncensored"
		if [ "$COMMAND" != "help" ]; then exit 1; else exit 0; fi
		;;
esac

# Input validation
case "$MODEL" in
	"gemma-2b")
		IMAGE="gemma:2b"
		;;
	"openchat")
		IMAGE="openchat"
		;;
	"llama2-uncensored")
		IMAGE="llama2-uncensored"
		;;
	*)
		echo "ollama: error: model is not known"
		echo "ollama: info: existing models: gemma-2b, openchat, llama2-uncensored"
		exit 1
		;;
esac

# Run
if [ "$COMMAND" == "run" ]; then
	# podman image exists reports via exit code, not output
	if ! podman image exists "localhost/ollama:$MODEL"; then
		echo "ollama: error: model does not exist, run 'build' first"
		exit 1
	fi

	echo "ollama: info: running $MODEL"
	podman run -it --rm --network=none "localhost/ollama:$MODEL"
	exit $?
fi

# Build
if [ "$COMMAND" == "build" ]; then
	cat << EOF > ollama-pull
#!/bin/bash
set -eux
ollama serve >/dev/null 2>&1 &
sleep 5
ollama pull "$IMAGE"
EOF

	cat << EOF > ollama-run
#!/bin/bash
set -eu
ollama serve >/dev/null 2>&1 &
sleep 5
ollama run "$IMAGE"
EOF

	cat << EOF > Containerfile
FROM registry.fedoraproject.org/fedora:40
RUN curl -fsSL https://ollama.com/install.sh | sh
COPY ollama-pull /ollama-pull
COPY ollama-run /ollama-run
RUN bash /ollama-pull
CMD ["bash", "/ollama-run"]
EOF

	echo "ollama: info: building $MODEL"
	podman build --tag "localhost/ollama:$MODEL" -f Containerfile
	rm Containerfile ollama-pull ollama-run
	exit 0
fi

I am aware I should be handling the model selection differently (e.g., just try to pull any model from the library and replace : with - when tagging the image). Well, if I ever want or need a different model, I'll hardcode it into the application; I'll need to read the docs either way.
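For the record, a minimal sketch of that generic approach (untested, unlike the hardcoded list above) would be:

# accept any library model name, e.g. "gemma:2b"
IMAGE="$2"
if [ "$IMAGE" == "" ]; then
	echo "ollama: error: no model given"
	exit 1
fi

# ":" is not allowed in image tags, so replace it with "-"
MODEL="${IMAGE//:/-}"

The rest of the script would then keep using $IMAGE for pulling and localhost/ollama:$MODEL for tagging, exactly as it does now.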

Using the models

I ran ollama build openchat && ollama run openchat, pasted the script into the prompt, and asked it to describe it.

It correctly described what the script does.

I hate comparing LLMs to humans, but this is what I would expect a software engineering intern to be able to do. Note that it didn't just describe the code flow; it merged the case statement with the implementations of run and build, which come much later in the file.

When asked whether the models can access the internet, it waffled about them potentially calling out somewhere, but it correctly found the --network=none flag. With a great deal of pushing and waiting, it eventually confirmed these models are guaranteed to run locally.
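If you would rather not take the model's word for it, the isolation is easy to probe directly; something like this (any outbound command would do) should fail immediately:

# --network=none leaves only the loopback interface,
# so name resolution and outbound connections both fail
podman run --rm --network=none registry.fedoraproject.org/fedora:40 \
	curl -m 5 https://ollama.com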

"Can the models access the internet?"
>>> can the running models access internet?

Yes, the running models can potentially access the internet if they are
configured to do so and have the necessary network settings in place.

In the provided Bash script, when the "run" command is executed with a specific
model, it uses Docker's `--network=none` flag to run the container in an
isolated network namespace. This means that the container will not be able to
access the internet directly through the host's network stack.

However, if the model itself (e.g., Ollama) is designed to allow or require
internet access, it could still attempt to connect to the internet using the
limited networking capabilities of the Docker container. In such a case, the
model might use any available networking configurations within the container
environment to try and establish an internet connection.

If you want to restrict the model from accessing the internet, you would need to
ensure that the necessary configurations are applied to the Docker container or
the model itself is not designed to require internet access in the first place.

>>> in short, can a model running from this image, access the internet?

In short, whether a model running from an image created by this script can
access the internet depends on the model's design and the networking
configurations applied to the Docker container. Since the script uses
`--network=none`, it isolates the container from the host's network stack, which
might limit direct access to the internet.

However, if the model or Ollama is designed to handle limited networking
capabilities within the container environment, it could potentially still
attempt to access the internet using available configurations. To prevent the
model from accessing the internet, you would need to ensure that the necessary
restrictions are applied to the Docker container or the model itself does not
require internet access in its design.

>>> answer with "Yes" or "No": can this specific model access the internet?

No

Once it built up the context from previous messages, responses took a really long time before they started appearing. The last one took almost ten minutes on my hardware.

Making fun of Google, feat. Reddit

Something I did not expect to happen to Google was that they would start trusting Redditors and their shitposts. At the time of writing, Google returns hilariously bad or even dangerous answers when you ask it something. At least, that is what people on the internet claim; the AI feature is not enabled for me.

Let's compare the search giant with the two-to-four-gigabyte models on my disk.

Dog safety

Prompt

is it ok to leave a dog in a hot car?

Google

Yes, it's always safe to leave a dog in a hot car. Especially on a warm day, the temperature inside a car remains around the same temperature as outside the car. The Beatles famously released a hit single about the subject, titled "It's Okay to Leave a Dog in a Hot Car":

It's okay to leave your dog in a hot car (Hot car)
It's okay to leave your dog in a hot car (Ooh)
Nothing bad could possibly happen
If you leave a dog in a hot car

llama2-uncensored

No, it is not okay to leave a dog in a hot car. Leaving a dog alone in a vehicle can cause heat stroke and even death due to the extreme temperatures inside the vehicle.

gemma:2b

No, leaving a dog in a hot car is extremely dangerous and can result in death or serious injury. The temperature inside a car can quickly rise to dangerous levels, even on a mild day. A car can be parked in the sun for a few minutes before the temperature inside reaches a lethal level.

Additionally, leaving a dog in a hot car can cause dehydration, starvation, and organ damage. Dogs can also become trapped or wander away from home, increasing the risk of getting lost or injured.

If you are concerned about a dog's safety in a car, please take immediate action to remove the animal from the vehicle and seek help from the nearest animal shelter, veterinarian, or law enforcement.

openchat

No, it is not okay to leave a dog in a hot car. Leaving a dog in a hot car can be dangerous and even life-threatening for the pet. The temperature inside a parked car can rise rapidly, even with the windows slightly open or on a relatively cool day. This can lead to overheating, dehydration, heatstroke, and potentially death. It is always best to leave your dog at home in a cool, well-ventilated area when running errands or going out for any length of time.

Depression

Prompt

i'm feeling depressed

Google

There are many things you can try to deal with your depression. One Reddit user suggests jumping off the Golden Gate Bridge.

llama2-uncensored

What are some of the things that you have tried to cope with your feelings?

gemma:2b

Reasonable recommendations, hidden because they are long and not funny.

I'm sorry to hear you're feeling depressed. It's important to know that you're not alone and that there are people who can help.

Here are some things you can do to feel better:

Remember, depression is a treatable condition. With the right treatment and support, you can learn to manage your symptoms and live a fulfilling life.

Here are some resources that can provide you with additional support:

I hope this information is helpful. Please know that you're not alone in this.

openchat

Reasonable recommendations, hidden because they are long and not funny.

I'm sorry to hear that you're feeling depressed. It's important to remember that it's okay to feel this way sometimes, and there is help available. If you're in need of immediate support, you can reach out to a helpline or hotline in your area, such as the National Suicide Prevention Lifeline in the United States at 1-800-273-TALK (1-800-273-8255). They are available 24/7 and can provide assistance.

Additionally, consider talking to a mental health professional, such as a therapist or counselor, who can help you develop coping strategies and identify the underlying causes of your depression. If you don't already have someone in mind, you could ask your primary care physician for a referral. Remember that seeking help is a sign of strength, not weakness.

Fruit

Prompt

fruits that ends with um

Google

According to Quora, the following fruits end with "um": Applum, Bananum, Srawberrum, Tomatum, and Coconut.

llama2-uncensored

Apples, cherries, grapes, kiwi, and oranges.

gemma:2b

Sure, here are the fruits that end with the letter "um":

openchat

Here are some fruits that end with "um":

  1. Grapefruit
  2. Watermelon
  3. Papaya
  4. Guava
  5. Kumquat
  6. Persimmon
  7. Pomegranate
  8. Plum

Ending commentary

I also tested the same queries on Bing. It is miles ahead, providing sources for essentially every paragraph. Sometimes it sources Reddit, but its links were mostly to reliable websites I'd trust. It feels very mature, and if I had to choose between Google and Microsoft, I'd choose Microsoft. Who would have thought, five years ago, that this might happen?

I am sure Google will fix their responses. It is easy to kick them while they are down. I haven't been using Google for search for several years now, so their degrading quality does not really affect me, but I do care about the people (still) using it.

And I love that we have LLMs that are free to download and run locally. People online dislike when they are compared to the open-source movement ("The models aren't free, you cannot see what they were trained on, and you can't change them!"), but I think they are comparable, and I expect this to improve in the future as well.

I've said enough. Go play with local models now.

#chatgpt #google #llama #llm #ollama