I did not try out OpenCode
If you are interested in following along and trying OpenCode, follow here. If you are interested in tips on how to write a relatively maintainable stateful Bash wrapper around other commands, follow here.
Part one: Using containers to run OpenCode with Ollama
tl;dr: I didn't get it working.
I discovered there's something called OpenCode that provides a terminal-based interface to large language models. Basically, an open-source alternative to Claude Code.
I do not own a GPU capable of running the models, but I have been successful with CPU-bound inference before for playful use-cases. (For example, I have a simple Discord bot that uses qwen2.5:3b hosted on Raspberry Pi 4B. It has replaced my 2022 Markov Chain bot, but their utility is about the same.)
Installation options
The documentation suggests curl-pipe-Bash, npm, or other packaging tools that share a common theme: the thing gets installed directly into your system. I am not willing to let an AI model, let alone an AI agent, touch anything I don't explicitly allow.
There are various ways to jail a process, both within and outside of what a typical Linux distribution provides. While I could probably use systemd-nspawn, firejail or my own SELinux policy to achieve a portion of what I'd want, I usually default to Podman: I use it daily at work, and I know what to expect from it.
OpenCode does provide a GHCR-hosted container, so I went for it. The internal structure and path expectations aren't documented, but it was trivial to figure out. Fortunately, their binary respects XDG Base Directory Specification: everything is stored at ~/.config/opencode/ or ~/.local/share/opencode/.
$ podman run -it --rm \
--security-opt label=disable \
-v $HOME/.local/share/opencode/:/root/.local/share/opencode/:rw \
-v $HOME/.config/opencode/:/root/.config/opencode/:rw \
ghcr.io/anomalyco/opencode
Getting the models
Ollama provides an OCI container for us to use. Since all I cared about was CPU inference, it's just about bind-mounting a directory to download the models to.
$ podman run -it --rm -d \
--name ocode-ollama \
--hostname ocode-ollama \
--security-opt label=disable \
-v $HOME/.local/share/ollama/:/root/.ollama/:rw \
docker.io/ollama/ollama:latest
$ podman exec -it ocode-ollama ollama pull cogito:8b
$ # To verify everything works:
$ podman exec -it ocode-ollama ollama run cogito:8b
$ podman stop ocode-ollama
Integrating OpenCode with Ollama
To configure Ollama as a provider, follow their documentation.
Since we are enclosed in containers, we cannot use 127.0.0.1 to refer to the Ollama API: the OpenCode container would be looking at its own loopback. When starting the Ollama container, set a reasonable hostname for it (--hostname ocode-ollama): you'll be able to refer to it from other containers.
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {"baseURL": "http://ocode-ollama:11434/v1"},
      "models": {...}
    }
  }
}
Putting the steps together
- Download the models (and optionally verify they work by chatting with them directly using ollama run)
- Create and fill ~/.config/opencode/opencode.json
- Run Ollama container
- Run OpenCode container
- ???
- Profit
I don't know what's wrong
On my laptop, it takes 4-15 seconds to open the TUI. I've got no idea why; the logs told me nothing.
What's worse, when I ask a question through the OpenCode chat box, nothing happens. All my CPU cores spin up to 100%, and they stay there until I Esc-Esc the TUI or kill one of the containers. I know text inference is slow, but not this slow.
And that's where I gave up. I spent two and a half hours on a Saturday morning trying to make it work. I'll move on, or revisit it later. Since I don't have proper hardware, it wouldn't have been usable anyway.
Part two: Productization of the shell snippets
tl;dr: Every year I like Bash micro-software more and more.
Initiate the project
$ touch ~/.local/bin/ocode
$ chmod +x ~/.local/bin/ocode
Whenever I decide to write a 'proper' Bash script, I have two or three terminal windows open. One has a text editor (I use nano), and the rest are for ad-hoc commands, man pages or other reference (like the standard output of a command I am using).
The part about using nano is important to me. It is my mental model of "I am keeping it simple": if I am able to stay productive writing a single-file script in an editor that does syntax highlighting and nothing else, I stay organized and close to the ground. It is a nice change from work projects with hundreds of files and fat IDEs.
Code organization
I haven't yet converged on an internal model of whether I have too many or too few global variables; it feels like I do it wrong every time. I ended up with two this time: PROJECT="$(basename "$(pwd)")" and CONTAINER="ocode-$PROJECT".
Below them, I order the functions. I like function help() { to come first, because it can act as documentation for developers.
I like to sort helper functions just below functions that call them, but it gets tricky when the helper functions are reused. Reusable functions live on the bottom of the file, but it depends on how I feel at the moment.
At the very end, there is a case statement that parses user input and executes the appropriate branch. It both fits my mental model and ensures everything has been parsed before anything is executed, which also means the script won't run if it got mangled, corrupted or cut in transfer (think curl-pipe-Bash).
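A minimal sketch of that layout (the function names here are illustrative, not from the real script):

```shell
#!/bin/bash
set -euo pipefail

# Documentation-as-code: help() goes first.
function help() {
  echo "usage: $0 COMMAND"
}

# Subcommand implementations live in the middle of the file.
function run() {
  echo "running"
}

# The dispatching case statement goes last: if the script is cut
# mid-transfer, Bash hits a parse error before executing anything.
COMMAND=${1:-run}
case "$COMMAND" in
  "run") run ;;
  *) help ;;
esac
```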
Shared network
I am giving neither OpenCode nor Ollama any internet access. It is a fair point that source code probably isn't as sensitive as discussions about real-world struggles, but since we're running local models, it would be strange if any of these containers needed any internet.
By default, Podman containers connect to the podman bridged network, using subnet 10.88.0.0/16. That's the network used to download Ollama models.
podman network create --internal --disable-dns --subnet 192.168.254.0/24 ocode
You can keep DNS enabled if you want to. At some point during script creation, my Ollama container was attached to both the podman and ocode networks, and it was trying to pipe all network traffic through the isolated one.
The flag stuck; it doesn't hurt, and I like that my system is slightly more predictable. The drawback of having no DNS is that we now need to refer to the Ollama container by IP address, because its hostname will not resolve.
OpenCode container
While writing stateful programs in Bash sounds strange, it can make the code more readable and easier to reason about. Podman exposes an exists subcommand for almost every object type, so we start each function with an early exit:
podman container exists "$CONTAINER" && return
The rest of the method is very similar to what you've seen above. We just add more bind-mounted volumes (so the AI agent has access to the code we want it to work on). I explicitly mount the .git directory as read-only, so the model cannot mess with the history, but can read it to get more context.
VOLUMES=""
VOLUMES="$VOLUMES -v $HOME/.local/share/opencode/:/root/.local/share/opencode/:rw"
VOLUMES="$VOLUMES -v $HOME/.config/opencode/:/root/.config/opencode/:rw"
VOLUMES="$VOLUMES -v $(pwd)/:/project/:rw"
[ -d "$(pwd)/.git/" ] && VOLUMES="$VOLUMES -v $(pwd)/.git/:/project/.git/:ro"
podman run -it --rm -d \
--hostname "$CONTAINER" --name "$CONTAINER" \
--security-opt "label=disable" \
$VOLUMES --workdir "/project/" --network "ocode" \
ghcr.io/anomalyco/opencode:latest
Ollama container
Bind-mount ~/.local/share/ollama/:/root/.ollama/, assign the network and a fixed IP address, expose the port (-p 11434:11434), and that's about it.
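Concretely (this is the same invocation the full script at the end uses; the fixed IP is what OpenCode's baseURL points at once DNS is disabled):

```shell
mkdir -p "$HOME/.local/share/ollama/"
podman run -it --rm -d \
  --hostname "ocode-ollama" \
  --name "ocode-ollama" \
  --security-opt label=disable \
  -v "$HOME/.local/share/ollama/:/root/.ollama/:rw" \
  --network "ocode:ip=192.168.254.2" \
  -p 11434:11434 \
  --env OLLAMA_CONTEXT_LENGTH=32000 \
  docker.io/ollama/ollama:latest
```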
Argument parsing
Because we're starting all containers detached, we can make the user-facing API very simple. A single run command will set up the network, start both containers and attach our shell to the container's embedded TUI application. If the containers already exist, it will attach us directly: you can run two or more agents at the same time without having to do active state management.
Because of that, we need to have an explicit stop subcommand that will tear down the containers.
COMMAND=${1:-run}
case "$COMMAND" in
"run")
mk-network
mk-container-ollama
mk-container-opencode
attach
;;
"stop")
podman stop --ignore --time 5 "$CONTAINER"
podman stop --ignore --time 5 "ocode-ollama"
;;
*)
help
;;
esac
Statefulness visibility
What is actually running? While you can run podman ps, I find it prettier to implement my own status command where I can select exactly what is displayed.
echo "ocode"
podman container exists ocode-ollama && echo " ollama: yes" || echo " ollama: no"
podman container exists "$CONTAINER" && echo " opencode: yes" || echo " opencode: no"
echo "models"
podman container exists ocode-ollama && {
podman exec -it ocode-ollama ollama list | tail -n +2 | sed 's/^\([^ ]*\).*/ \1/'
} || echo " unknown (ollama not running)"
Model and configuration management
In the current state, to add a model, you have to podman exec into the Ollama container and update the ~/.config/opencode/opencode.json file. We can do better.
Everything is a tradeoff. We could expose a model command with add and remove subcommands, but once you figure out which models work for you, you wouldn't be using that command anymore. Its utility is low, so let's pick a different way.
By iterating through a hardcoded list, we instruct Ollama to download each model and jq to add a configuration entry.
podman run -it --rm --hostname ocode-ollama-download ... docker.io/ollama/ollama:latest
for model in "cogito:8b" "cogito:3b"; do
podman exec -it ocode-ollama-download ollama pull "$model"
cat "$HOME/.config/opencode/opencode.json" | \
jq ".provider.ollama.models += {\"$model\": {\"name\": \"$model\"}}" > \
"$HOME/.config/opencode/opencode.json.new"
mv "$HOME/.config/opencode/opencode.json.new" "$HOME/.config/opencode/opencode.json"
done
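To see what the jq filter actually does, here it is applied to a minimal config (a stripped-down sketch; the real file also carries the $schema, npm and options keys):

```shell
model="cogito:8b"
# Append one entry to the (initially empty) models map, exactly
# like the loop does for each downloaded model.
echo '{"provider":{"ollama":{"models":{}}}}' | \
  jq -c ".provider.ollama.models += {\"$model\": {\"name\": \"$model\"}}"
# → {"provider":{"ollama":{"models":{"cogito:8b":{"name":"cogito:8b"}}}}}
```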
Finished
As I said in the first part, I never got this actually working.
I know my script isn't perfect, the stop subcommand doesn't deal with multiple agents in multiple projects, and it may break in some edge cases (though, if the name of your project directory contains a space character, you are a monster and you deserve it).
shellcheck is quite happy with it, it works on a theoretical level, and I'm leaving it as-is.
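If I ever do fix the multi-project stop, something like this filter-based sketch could tear down every agent at once (hypothetical and untested, but only using podman flags that exist):

```shell
# Stop every container whose name starts with "ocode-", no matter
# which project directory we happen to be in. This also catches
# the shared ocode-ollama container.
podman ps --format "{{.Names}}" --filter "name=^ocode-" | \
  while read -r name; do
    podman stop --ignore --time 5 "$name"
  done
```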
Epilogue
Since Forgejo doesn't support gists/snippets, I'll leave the code here, untracked, to be forgotten. If I ever fix it and have the will to maintain it, I will create a git repository for it.
Full source code
#!/bin/bash
set -euo pipefail
PROJECT="$(basename "$(pwd)")"
CONTAINER="ocode-$PROJECT"
function help() {
echo "usage: $0 COMMAND"
echo "COMMANDS:"
echo " init Download ollama models"
echo " run Open ocode for this project"
echo " stop Stop ocode for this project"
echo " status Display status of podman network and containers"
echo " help Display this help and exit"
}
function status() {
echo "ocode"
podman container exists "ocode-ollama" && echo ' ollama: yes' || echo ' ollama: no'
podman container exists "$CONTAINER" && echo " $CONTAINER: yes" || echo " $CONTAINER: no"
echo "models"
podman container exists "ocode-ollama" && {
podman exec -it "ocode-ollama" ollama list | tail -n +2 | sed 's/^\([^ ]*\).*/ \1/'
} || echo " unknown (ollama not running)"
}
function mk-network() {
podman network exists "ocode" && return
podman network create --internal --subnet "192.168.254.0/24" --disable-dns "ocode"
}
function mk-models() {
mkdir -p "$HOME/.local/share/ollama/"
podman run -it --rm -d \
--hostname "ocode-ollama-download" \
--name "ocode-ollama-download" \
--security-opt label=disable \
-v "$HOME/.local/share/ollama/:/root/.ollama/:rw" \
docker.io/ollama/ollama:latest
for model in "cogito:8b" "cogito:3b" "qwen2.5-coder:7b" "granite3.2:8b"; do
podman exec -it ocode-ollama-download ollama pull "$model"
cat "$HOME/.config/opencode/opencode.json" | \
jq ".provider.ollama.models += {\"$model\": {\"name\": \"$model\"}}" > \
"$HOME/.config/opencode/opencode.json.new"
mv "$HOME/.config/opencode/opencode.json.new" "$HOME/.config/opencode/opencode.json"
done
podman stop --ignore --time 5 "ocode-ollama-download"
}
function mk-container-ollama() {
podman container exists "ocode-ollama" && return
mkdir -p "$HOME/.local/share/ollama/"
VOLUMES="-v $HOME/.local/share/ollama/:/root/.ollama/:rw"
podman run -it --rm -d \
--hostname "ocode-ollama" \
--name "ocode-ollama" \
--security-opt label=disable \
$VOLUMES \
--network "ocode:ip=192.168.254.2" \
-p 11434:11434 \
--env OLLAMA_CONTEXT_LENGTH=32000 \
docker.io/ollama/ollama:latest
}
function mk-container-opencode() {
podman container exists "$CONTAINER" && return
mkdir -p "$HOME/.local/share/opencode/"
mkdir -p "$HOME/.config/opencode/"
VOLUMES=""
VOLUMES="$VOLUMES -v $HOME/.local/share/opencode/:/root/.local/share/opencode/:rw"
VOLUMES="$VOLUMES -v $HOME/.config/opencode/:/root/.config/opencode/:rw"
VOLUMES="$VOLUMES -v $(pwd)/:/project/:rw"
[ -d "$(pwd)/.git/" ] && VOLUMES="$VOLUMES -v $(pwd)/.git/:/project/.git/:ro"
podman run -it --rm -d \
--hostname "$CONTAINER" \
--name "$CONTAINER" \
--security-opt label=disable \
$VOLUMES \
--workdir "/project/" \
--network "ocode" \
--env OPENCODE_DISABLE_MODELS_FETCH=1 \
ghcr.io/anomalyco/opencode
}
function attach() {
podman exec -it "$CONTAINER" opencode
}
COMMAND=${1:-run}
case "$COMMAND" in
"init")
mk-models
;;
"run")
mk-network
mk-container-ollama
mk-container-opencode
attach
;;
"stop")
podman stop --ignore --time 5 "$CONTAINER"
podman stop --ignore --time 5 "ocode-ollama"
;;
"status")
status
;;
*)
help
;;
esac