Running an AI Coding Assistant on a Remote Linux Server

Self-Hosted AI Assistance: QEMU/KVM + OpenClaw + DeepSeek

Note:

The DeepSeek configuration steps in this post are currently incorrect. I will remove this note once they are fixed.

These instructions were generated entirely by an AI assistant based on the steps I performed to set up OpenClaw on a remote Linux server (even the images were created by the assistant). The content is hopefully(!) accurate and reflects the actual process I took, minus the missteps and troubleshooting along the way, but it may contain formatting inconsistencies or minor errors. I use this guide as a reference for myself. Please let me know if you have any questions or need clarification on any of the steps.

The Problem

Many developers work with Linux servers that are accessible only through SSH — no GUI, no local display, just a terminal. Most popular AI coding assistants assume you're running on a local machine with a graphical interface. But what if your development environment is already remote? What if your "computer" is simply a terminal session on a server you cannot physically access?

This guide documents how to solve that problem by running an AI coding assistant inside a virtual machine on a remote Linux host, configured as a systemd service so it persists after the SSH session ends.


Why a Virtual Machine?

There are several reasons to run the AI assistant inside a VM rather than directly on the host:

  1. Isolation: A VM provides a clean, reproducible environment. If something goes wrong, you can recreate it without affecting the host system.
  2. Resource control: The VM receives dedicated CPU and RAM. You can pause, migrate, or resize it independently of the host.
  3. Security: Running the AI tool with API credentials inside an isolated VM provides an additional layer of separation from the host environment.
  4. Flexibility: The VM can be backed up, cloned, or moved to another host if needed.

The architecture is straightforward: Linux host → KVM/QEMU virtual machine → Ubuntu Server → OpenClaw AI assistant.

Prerequisites

Before starting, make sure you have:

  1. A remote Linux host (Debian/Ubuntu-based, since the commands below use apt) with sudo access
  2. A CPU with hardware virtualization support (Intel VT-x or AMD-V)
  3. Roughly 45GB of free disk space (about 3GB for the ISO plus a 40GB VM disk)
  4. SSH access to the host
  5. A DeepSeek account and API key (see Step 6)

Step 1: Installing the Virtualization Stack

The host machine needs to run QEMU with KVM acceleration, along with libvirt for VM management. Execute the following commands on the host:

sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system virt-manager bridge-utils

These packages provide:

  1. qemu-kvm: the QEMU emulator with KVM hardware acceleration
  2. libvirt-daemon-system: the libvirt daemon and services for managing VMs
  3. virt-manager: VM management tools; it pulls in virtinst, which supplies the virt-install command used below
  4. bridge-utils: utilities for bridged networking

Verify that KVM acceleration is available. The kvm-ok tool ships in the cpu-checker package, so install that first:

sudo apt install -y cpu-checker
kvm-ok

A successful output indicates the system can use hardware virtualization:

INFO: /dev/kvm exists
KVM acceleration can be used

Add your user to the libvirt group to avoid prefixing commands with sudo:

sudo usermod -aG libvirt $(whoami)

Log out and back in for the group change to take effect.
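
To pick up the new group immediately without logging out, you can start a subshell with newgrp and confirm that libvirt is reachable without sudo:

newgrp libvirt
virsh -c qemu:///system list --all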


Step 2: Downloading Ubuntu Server

Obtain the Ubuntu 24.04 Server ISO image:

cd /path/to/downloads
wget https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso

The file is approximately 3.1GB.
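
It's good practice to verify the image before installing from it; Ubuntu publishes checksums alongside the ISO:

wget https://releases.ubuntu.com/24.04/SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing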


Step 3: Creating the Virtual Machine

Create a VM with the following specifications:

sudo virt-install \
  --name openclaw-vm \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/openclaw-vm.qcow2,size=40 \
  --os-variant ubuntu24.04 \
  --network network=default \
  --graphics none \
  --console pty,target_type=serial \
  --location /path/to/downloads/ubuntu-24.04-live-server-amd64.iso \
  --extra-args='console=ttyS0,115200n8 serial'
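
One assumption worth checking before running this: --network network=default requires libvirt's default NAT network to be active. Confirm it, and enable autostart (net-start is only needed if the list shows it inactive):

sudo virsh net-list --all
sudo virsh net-start default
sudo virsh net-autostart default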

Parameter explanations:

  1. --name: the VM's name in libvirt, used by the virsh commands later
  2. --ram 4096: 4GB of memory, specified in MiB
  3. --vcpus 2: two virtual CPUs
  4. --disk ... size=40: a 40GB qcow2 disk image under /var/lib/libvirt/images
  5. --os-variant ubuntu24.04: lets libvirt apply sensible defaults for the guest OS (the older --os-type flag has been removed from current virt-install releases, so it is omitted here)
  6. --graphics none and --console pty: no graphical display; everything runs over the serial console
  7. --location and --extra-args: boot the installer from the ISO and direct its output to the serial console (console=ttyS0)

The Ubuntu installer will launch in the terminal. Complete the installation with the following selections:

  1. Language: English
  2. Keyboard: Configure as preferred
  3. Installation type: Ubuntu Server
  4. Network: DHCP should configure automatically on the NAT network
  5. Storage: Use the entire disk
  6. User account: Create a standard user (for example: demo_user)
  7. SSH Server: Ensure OpenSSH server is installed
  8. Reboot when prompted

Step 4: Configuring SSH Access

After the VM restarts, log in with the account created during installation. Determine the VM's IP address:

ip a

Note the IP address assigned to the primary network interface (typically 192.168.122.x on the default libvirt NAT network).
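
Alternatively, the address can be looked up from the host without logging into the VM (the same command appears in the management section below):

sudo virsh domifaddr openclaw-vm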

From the host machine, verify SSH connectivity:

ssh demo_user@192.168.122.x

If the connection fails, enable password authentication in the VM:

sudo sed -i 's/^#PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd
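
Password authentication is fine for initial setup; for day-to-day use you may prefer key-based logins, which copy over with one command:

ssh-copy-id demo_user@192.168.122.x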

Step 5: Setting Up OpenClaw

The following steps execute inside the VM via SSH.

OpenClaw requires Node.js 22. Install it:

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs

Verify the installation, then install pnpm:

node --version
sudo npm install -g pnpm

Clone and configure OpenClaw:

cd ~
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install

This step takes several minutes.
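
Before wiring the gateway into systemd (Step 7), it can be sanity-checked in the foreground using the same command the service will later run. Judging by its name, the --allow-unconfigured flag should let it start even before the provider is configured in Step 6 (an assumption based on the flag, not verified documentation); stop it with Ctrl+C once it starts cleanly:

export DEEPSEEK_API_KEY=your-key-here
npx openclaw gateway --allow-unconfigured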


Step 6: Configuring the DeepSeek API

OpenClaw connects to LLMs via providers. This guide uses DeepSeek.

  1. Create an account at https://platform.deepseek.com/
  2. Obtain an API key from the dashboard
  3. Create the configuration directory:

mkdir -p ~/.openclaw

  4. Create ~/.openclaw/openclaw.json:

{
  "provider": "deepseek",
  "model": "deepseek-chat",
  "config": {
    "temperature": 0.7
  }
}

The API key itself does not live in this file; it is supplied through the DEEPSEEK_API_KEY environment variable (see Step 7).
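
Independently of OpenClaw, the key can be verified directly against DeepSeek's OpenAI-compatible endpoint; a minimal check (endpoint and payload per DeepSeek's public API documentation):

curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "Hello"}]}'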

Managing the Virtual Machine

Use virsh to manage your VMs from the host. Here are the essential commands:

List all VMs

sudo virsh list --all

This shows all VMs (running and stopped).

Start the VM

sudo virsh start openclaw-vm

Stop the VM

Graceful shutdown:

sudo virsh shutdown openclaw-vm

Force stop (if graceful doesn't work):

sudo virsh destroy openclaw-vm

Connect to the VM Console

sudo virsh console openclaw-vm

Press Ctrl+] to exit the console.

Tip: After starting the VM, wait a few seconds for it to boot before attempting to connect via SSH.

Get VM Information

Find the IP address to SSH into:

sudo virsh domifaddr openclaw-vm

This shows the IP address (typically in the 192.168.122.x range on the default libvirt NAT network).

Get detailed VM info (state, UUID, memory, vCPUs):

sudo virsh dominfo openclaw-vm
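
Snapshot the VM

Because the disk is a qcow2 image, virsh can take snapshots, which is handy before risky changes inside the guest (the snapshot name below is arbitrary):

sudo virsh snapshot-create-as openclaw-vm clean-install
sudo virsh snapshot-list openclaw-vm
sudo virsh snapshot-revert openclaw-vm clean-install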


Step 7: Configuring the systemd Service

To keep OpenClaw running after the SSH session closes, create a systemd service.

Create the service file:

sudo nano /etc/systemd/system/openclaw.service

Add the following content:

[Unit]
Description=OpenClaw AI Assistant Gateway
After=network.target

[Service]
Type=simple
User=demo_user
WorkingDirectory=/home/demo_user/openclaw
Environment="DEEPSEEK_API_KEY=your-deepseek-api-key"
ExecStart=/usr/bin/npx openclaw gateway --allow-unconfigured
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Replace your-deepseek-api-key with your actual API key. Also confirm the npx path: the NodeSource package installs it at /usr/bin/npx, but check with which npx and adjust ExecStart if yours differs.

Enable and start the service:

sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw

Verify the service is running:

sudo systemctl status openclaw

The output should indicate active (running).
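
If the status shows anything other than active (running), the service's journal usually explains why:

sudo journalctl -u openclaw -f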

To confirm persistence, close the SSH session, wait a moment, reconnect, and check the service status again; it should still be running.

Alternative: Environment File for API Key

For improved security, store the API key in a systemd drop-in file instead of the main unit:

sudo mkdir -p /etc/systemd/system/openclaw.service.d
sudo nano /etc/systemd/system/openclaw.service.d/env.conf

Give env.conf the following content:

[Service]
Environment="DEEPSEEK_API_KEY=your-deepseek-api-key"
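
The drop-in still holds the key in plain text, so it's worth tightening read permissions (assuming the default root ownership):

sudo chmod 600 /etc/systemd/system/openclaw.service.d/env.conf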

Remove the Environment line from the main service file, then reload:

sudo systemctl daemon-reload
sudo systemctl restart openclaw

Step 8: Connecting to the AI Assistant

The OpenClaw gateway listens on ws://127.0.0.1:18789. Two options exist for connecting.

Option A: SSH Tunnel (Recommended)

The VM's NAT address is normally reachable only from the host itself, so from the local machine open the tunnel through the host with a jump (host_user and remote-host are placeholders for your server's login and address):

ssh -J host_user@remote-host -L 18789:127.0.0.1:18789 demo_user@192.168.122.x

In a separate terminal on the local machine (this assumes OpenClaw is installed there as well), launch the TUI:

cd ~/openclaw
npx openclaw tui

The TUI connects through the tunnel to the remote gateway.
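
If the tunnel should survive network hiccups, autossh (a separate package: sudo apt install autossh) can keep it alive; a sketch using the same placeholders:

autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -J host_user@remote-host \
  -L 18789:127.0.0.1:18789 demo_user@192.168.122.x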

Option B: Direct Connection Within the VM

If already SSH'd into the VM:

export DEEPSEEK_API_KEY=your-key-here
cd ~/openclaw
npx openclaw tui

This connects directly to the local gateway.


Use Cases

A persistent AI assistant on a remote server enables several practical workflows: always-on availability from any machine that can reach the host, sessions that survive disconnects, and assistance that runs next to the code itself.

Future posts will explore these workflows in depth, including multi-model support, advanced configuration, and integration with development tools.


Conclusion

This guide covered:

  1. Installing QEMU/KVM and libvirt on a remote Linux host
  2. Creating an Ubuntu Server VM with a serial-console installation
  3. Installing OpenClaw and configuring the DeepSeek provider
  4. Running the OpenClaw gateway as a persistent systemd service
  5. Connecting to the gateway through an SSH tunnel

The result is a personal AI coding assistant running on a remote server, available whenever needed, with full control over the infrastructure.