Installing Proxmox VE on Debian 13 Trixie
- tags: #Proxmox #Debian #Virtualization #Homelab
- categories: Infrastructure Tutorials
- reading time: 8 minutes
I recently set up a new Proxmox VE server on Debian 13 (Trixie) and wanted to document the process. This guide follows the official Proxmox wiki instructions with real-world networking configuration.
Prerequisites
Start with a clean Debian 13 Trixie installation:
- Use the expert-mode installer for static IP configuration
- Select only “standard system utilities” and “SSH server”
- Don’t install a desktop environment or QEMU (Proxmox brings its own)
Important: Installation is unsupported with systemd-boot and Secure Boot enabled. Use GRUB instead.
Step 1: Configure Hostname
The hostname must resolve to a non-loopback IP address. Edit /etc/hosts:
nano /etc/hosts
Add your server’s entry (use your actual IP):
192.0.2.100 pve.example.com pve
Verify it resolves correctly:
hostname --ip-address
# Should return: 192.0.2.100 (not 127.0.0.1)
Step 2: Add Proxmox Repository
Create the package source file using the new DEB822 format:
cat > /etc/apt/sources.list.d/pve-install-repo.sources << 'EOL'
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOL
Download the Proxmox archive keyring:
wget https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg \
-O /usr/share/keyrings/proxmox-archive-keyring.gpg
Update package lists:
apt update && apt full-upgrade -y
Step 3: Install Proxmox Kernel
Install the Proxmox kernel first:
apt install proxmox-default-kernel
Reboot into the new kernel:
systemctl reboot
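Once the machine is back up, it's worth confirming you actually booted into the Proxmox kernel before proceeding (the exact version string will vary on your system):

```shell
# Print the running kernel release; on a Proxmox kernel the
# string should contain "pve"
uname -r
```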
Step 4: Install Proxmox VE
After reboot, install Proxmox VE packages:
apt install proxmox-ve postfix open-iscsi chrony
During installation:
- Postfix configuration: Select “Local only” unless you need mail relay
- This installs the full Proxmox stack including web interface
Step 5: Remove Debian Kernel
Now remove the original Debian kernel (optional but recommended):
apt remove linux-image-amd64 'linux-image-6.12*'
update-grub
Also remove os-prober to prevent boot menu clutter:
apt remove os-prober
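To double-check that only the Proxmox kernel remains, you can list the installed kernel packages on the host (a quick sanity check, not strictly required):

```shell
# List installed kernel packages; after the removal above only
# proxmox-kernel / proxmox-default-kernel entries should remain
dpkg --list | grep -Ei 'linux-image|proxmox.*kernel'
```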
Step 6: Configure Network Bridges
Proxmox uses Linux bridges for VM networking. Here’s my production network configuration at /etc/network/interfaces:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
# Public bridge (VMs get public IPs)
auto vmbr0
iface vmbr0 inet static
address 192.0.2.100/24
gateway 192.0.2.1
bridge-ports eth0
bridge-stp off
bridge-fd 0
up ip route replace 192.0.2.0/24 via 192.0.2.1 dev vmbr0
iface vmbr0 inet6 static
address 2001:db8::1/64
gateway fe80::1
# Private bridge (internal network for VMs)
auto vmbr1
iface vmbr1 inet static
address 10.10.10.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
Key points:
- vmbr0 - Public bridge attached to eth0 for VMs with public IPs
- vmbr1 - Private bridge with no physical ports for the internal VM network
- bridge-stp off - Spanning Tree Protocol disabled (not needed for a single bridge)
- bridge-fd 0 - No forwarding delay
Restart networking:
systemctl restart networking
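After restarting networking, a quick sanity check on the host confirms the bridges exist and carry the expected addresses:

```shell
# Show all bridge devices and their state
ip -br link show type bridge

# Show the addresses assigned to each bridge
ip -br addr show vmbr0
ip -br addr show vmbr1
```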
Step 7: Access Web Interface
Proxmox is now ready! Access the web interface:
https://192.0.2.100:8006
Login credentials:
- Username: root
- Realm: Linux PAM standard authentication
- Password: your root password
Post-Installation Steps
Remove Temporary Repository
After a successful installation, clean up the temporary repository file:
rm /etc/apt/sources.list.d/pve-install-repo.sources
Upload Subscription Key (Optional)
If you have a Proxmox subscription:
- Navigate to Datacenter → Subscription
- Upload your subscription key
- This enables the enterprise repository with stable updates
Create Storage
Configure your storage in Datacenter → Storage:
- Local storage (already configured)
- Add NFS/CIFS for shared storage
- Configure ZFS pools if available
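Storage can also be added from the CLI with pvesm; here is a sketch for an NFS share (the storage ID, server address, and export path are placeholder values you'd replace with your own):

```shell
# Add an NFS storage backend named "nfs-backup" (hypothetical values)
pvesm add nfs nfs-backup \
    --server 10.10.10.50 \
    --export /srv/proxmox-backups \
    --content backup,iso

# List configured storage to verify
pvesm status
```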
Troubleshooting
Boot Signature Error
If you get kernel signature verification errors at boot:
- Disable Secure Boot in your BIOS/UEFI
- The Proxmox kernel doesn’t support Secure Boot in this setup (see the GRUB note above)
DNS Resolution Issues
If /etc/resolv.conf keeps getting overwritten:
- Remove the resolvconf or rdnssd packages
- These conflict with Proxmox networking
apt remove resolvconf rdnssd
Connection Refused on Port 8006
Check Proxmox services:
systemctl status pveproxy
systemctl status pvedaemon
systemctl status pve-cluster
Verify firewall isn’t blocking:
iptables -L -n | grep 8006
Hostname Resolution Errors
Ensure /etc/hosts has proper entry:
hostname --ip-address
# Must return your server's IP, not 127.0.0.1
Useful Commands
# Check Proxmox version
pveversion -v
# List all VMs
qm list
# List all containers
pct list
# Check cluster status (if clustered)
pvecm status
# Update all packages
apt update && apt dist-upgrade
# Restart Proxmox services
systemctl restart pveproxy pvedaemon pve-cluster
Network Bridge Explanation
The bridge configuration allows:
vmbr0 (Public Bridge):
- VMs get public IPs from your provider
- Direct internet access
- Useful for web servers, mail servers, etc.
vmbr1 (Private Bridge):
- Internal network (10.10.10.0/24)
- VMs can communicate with each other
- No direct internet (needs NAT/routing through another VM)
- Perfect for databases, internal services
You can create VMs on either bridge depending on requirements.
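The bridge is chosen per network interface when you create or edit a VM; for example (VM ID 100 is a placeholder):

```shell
# Attach the VM's first NIC to the public bridge...
qm set 100 --net0 virtio,bridge=vmbr0

# ...or to the private bridge instead
qm set 100 --net0 virtio,bridge=vmbr1
```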
Setting Up DHCP for Private Network
For the private bridge (vmbr1), set up a DHCP server so VMs can get IPs automatically:
apt install isc-dhcp-server
Configure /etc/dhcp/dhcpd.conf:
cat > /etc/dhcp/dhcpd.conf << 'EOF'
# DHCP configuration for vmbr1
option domain-name "internal.local";
option domain-name-servers 10.10.10.1;
default-lease-time 600;
max-lease-time 7200;
# vmbr1 subnet
subnet 10.10.10.0 netmask 255.255.255.0 {
range 10.10.10.100 10.10.10.200;
option routers 10.10.10.1;
option subnet-mask 255.255.255.0;
option broadcast-address 10.10.10.255;
}
EOF
Configure DHCP to listen on vmbr1:
echo 'INTERFACESv4="vmbr1"' > /etc/default/isc-dhcp-server
Start and enable DHCP server:
systemctl restart isc-dhcp-server
systemctl enable isc-dhcp-server
systemctl status isc-dhcp-server
Now VMs on vmbr1 will automatically get IPs in the 10.10.10.100-200 range.
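If the service fails to start, dhcpd can validate the configuration file directly, which usually pinpoints the offending line:

```shell
# Syntax-check the DHCP configuration without starting the daemon
dhcpd -t -cf /etc/dhcp/dhcpd.conf

# Leases handed out so far are recorded here
cat /var/lib/dhcp/dhcpd.leases
```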
Creating a Cloud-Init VM Template
Cloud-init templates allow rapid VM deployment with automated configuration:
Download Cloud Image
cd /var/lib/vz/template/iso/
wget https://cloud-images.ubuntu.com/releases/noble/release/ubuntu-24.04-server-cloudimg-amd64.img
Create Template VM
# Create a new VM (ID 9000)
qm create 9000 --name ubuntu-2404-template --memory 2048 --net0 virtio,bridge=vmbr1
# Import the disk
qm importdisk 9000 ubuntu-24.04-server-cloudimg-amd64.img local-lvm
# Attach the disk
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
# Add Cloud-Init drive
qm set 9000 --ide2 local-lvm:cloudinit
# Set boot disk
qm set 9000 --boot c --bootdisk scsi0
# Add serial console
qm set 9000 --serial0 socket --vga serial0
# Enable QEMU guest agent
qm set 9000 --agent enabled=1
# Convert to template
qm template 9000
Custom Cloud-Init Configuration
For advanced setup with automatic package updates and QEMU guest agent, create a cloud-init config file:
cat > /var/lib/vz/snippets/cloud-init-custom.yaml << 'EOF'
#cloud-config
package_update: true
package_upgrade: true
packages:
- qemu-guest-agent
- curl
- ca-certificates
users:
- default
- name: ubuntu
sudo: ALL=(ALL) NOPASSWD:ALL
shell: /bin/bash
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExampleKeyHashHere1234567890ABCDEFG user@hostname
runcmd:
- systemctl enable --now qemu-guest-agent
EOF
What this does:
- Updates packages on first boot
- Installs QEMU guest agent for better VM management
- Creates ubuntu user with passwordless sudo
- Adds your SSH public key for authentication
- Enables QEMU guest agent service
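Before attaching the snippet to a VM, you can sanity-check its syntax with cloud-init's schema validator (available wherever the cloud-init package is installed; older releases may lack this subcommand):

```shell
# Validate the user-data file against the cloud-config schema
cloud-init schema --config-file /var/lib/vz/snippets/cloud-init-custom.yaml
```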
Clone and Deploy
To create a VM from the template:
# Clone the template
qm clone 9000 100 --name my-vm
# Configure cloud-init (simple method)
qm set 100 --ciuser ubuntu
qm set 100 --cipassword $(openssl passwd -6 "your-password")
qm set 100 --sshkeys ~/.ssh/authorized_keys
qm set 100 --ipconfig0 ip=10.10.10.101/24,gw=10.10.10.1
# OR use custom cloud-init file
qm set 100 --cicustom "user=local:snippets/cloud-init-custom.yaml"
qm set 100 --ipconfig0 ip=10.10.10.101/24,gw=10.10.10.1
# Resize disk (optional, expand from 2GB default)
qm resize 100 scsi0 +18G
# Start the VM
qm start 100
# Wait for cloud-init to finish (check with):
qm agent 100 ping
ssh ubuntu@10.10.10.101 'cloud-init status'
NAT and Firewall Configuration
To allow VMs on the private network (vmbr1) to access the internet and forward HTTP/HTTPS traffic to a proxy VM, configure iptables rules.
This setup lets you run a reverse proxy (like nginx or Traefik) on a VM that handles all incoming connections.
Enable IP Forwarding
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
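You can confirm the setting took effect by reading it back:

```shell
# Read the current value; it should be 1 once forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
```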
Configure iptables Rules
Note: if you are using IPv6, don’t forget to configure the equivalent ip6tables rules as well!
Filter table (main firewall):
# Set default policies
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
# Allow loopback
iptables -A INPUT -i lo -j ACCEPT
# Allow established connections
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow SSH
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow Proxmox web UI from vmbr1 (from proxy VM)
iptables -A INPUT -i vmbr1 -p tcp -s 10.10.10.101 --dport 8006 -j ACCEPT
# Forward rules for private network
iptables -A FORWARD -s 10.10.10.0/24 -j ACCEPT
iptables -A FORWARD -d 10.10.10.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Forward HTTP/HTTPS to proxy VM
iptables -A FORWARD -p tcp --dport 80 -d 10.10.10.101 -j ACCEPT
iptables -A FORWARD -p tcp --dport 443 -d 10.10.10.101 -j ACCEPT
NAT table (port forwarding and masquerading):
# Port forward HTTP to proxy VM
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to-destination 10.10.10.101:80
# Port forward HTTPS to proxy VM
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to-destination 10.10.10.101:443
# NAT for private network (internet access for VMs)
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
Verify Rules
# Check filter table
iptables -nvL
# Check NAT table
iptables -nvL -t nat
Make Rules Persistent
Install iptables-persistent:
apt install iptables-persistent
# Save current rules
iptables-save > /etc/iptables/rules.v4
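iptables-persistent also ships a netfilter-persistent helper that saves both address families in one step (the v6 file only matters if you configured ip6tables rules):

```shell
# Save IPv4 and IPv6 rule sets to /etc/iptables/rules.v4 and rules.v6
netfilter-persistent save

# Rules are reloaded automatically at boot via this service
systemctl status netfilter-persistent
```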
How It Works
- Public traffic arrives on vmbr0 (your public IP)
- HTTP/HTTPS is forwarded to 10.10.10.101 (proxy VM)
- Proxy VM (nginx/Traefik) handles requests and routes to backend VMs
- Private VMs can access internet via NAT (MASQUERADE)
- Proxmox UI is accessible from the proxy VM only
This architecture allows:
- Single public IP serving multiple services
- All services behind a reverse proxy VM
- Internal VMs remain private
- Centralized SSL/TLS termination
- Easy to add new services (just update proxy config)
Next Steps
Now you can:
- Create your first VM or LXC container
- Configure backups (Proxmox Backup Server or local)
- Set up ZFS if you have multiple disks
- Build a cluster (minimum 3 nodes)
- Configure HA (High Availability)