Remote PC as Authoritative Workstation¶
Using the Windows 11 + RTX 4060 16GB machine as a beefy remote dev/compute server, driven from your Linux machine at home over Tailscale.
Architecture¶
[Your Linux Machine]                          [Remote Win11 + RTX 4060]
         │                                              │
         │── Tailscale (WireGuard mesh) ────────────────│
         │                                              │
         │── SSH ─────────────────────────────────────► WSL2 (Ubuntu)
         │     ├── zellij (persistent term)             │     ├── Ollama (11434)
         │     ├── VS Code Remote SSH                   │     ├── Jupyter (8888)
         │     └── sshfs (mount remote fs)              │     ├── ComfyUI (8188)
         │                                              │     └── Docker + GPU
         │── xfreerdp ─── RDP ────────────────────────► Windows Desktop (3389)
         │── Moonlight ── Sunshine ───────────────────► GPU streaming (47984)
         │── Syncthing ───────────────────────────────► File sync (22000)
No SSH tunnels are needed for reachability -- Tailscale provides direct WireGuard
connectivity, and every service is addressable at TAILSCALE_IP:port (firewall
rules permitting; see section 5).
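To look up that address from the Linux side (the hostname winbox is a placeholder for whatever the remote machine is named in your tailnet):

```shell
# Print the remote node's Tailscale IPv4 address
tailscale ip -4 winbox
# Or list all peers with their addresses and online status
tailscale status
```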
1. Remote Desktop Access¶
RDP (general desktop, best latency/quality ratio)¶
# Optimized xfreerdp command:
xfreerdp3 /u:Admin /v:100.125.78.30 /dynamic-resolution \
/compression /network:auto /gfx:AVC420:on +clipboard -themes
Key flags: /gfx:AVC420:on uses H.264 codec, -themes disables visual effects.
Sunshine + Moonlight (GPU-accelerated streaming)¶
For GPU-intensive work, gaming, creative apps. Uses NVENC hardware encoding on the RTX 4060 -- near-zero CPU overhead.
- Sunshine (host, on Windows): https://github.com/LizardByte/Sunshine
- Moonlight (client, on Linux): flatpak install com.moonlight_stream.Moonlight
- Pair via the Sunshine web UI at https://TAILSCALE_IP:47990
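Once paired, recent moonlight-qt builds also accept pair/stream subcommands on the command line, so sessions can be launched without the GUI (IP and app name below are examples from this setup):

```shell
# One-time pairing (confirm the PIN in the Sunshine web UI)
flatpak run com.moonlight_stream.Moonlight pair 100.125.78.30
# Stream the full desktop
flatpak run com.moonlight_stream.Moonlight stream 100.125.78.30 Desktop
```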
Comparison¶
| | RDP | Sunshine/Moonlight | Parsec | NoMachine |
|---|---|---|---|---|
| Latency | Good | Excellent | Excellent | Good |
| GPU encoding | No | NVENC | Yes | NVENC |
| Linux client | xfreerdp | moonlight-qt | Yes | Yes |
| Best for | Productivity | Gaming/visual | Low-latency | Cross-platform |
2. Development Workflows¶
SSH + Zellij (foundation)¶
# Remote ~/.bashrc auto-attach:
if [[ -z "$ZELLIJ" ]] && [[ -n "$PS1" ]]; then
exec zellij attach --create main
fi
The $PS1 guard skips the auto-attach for non-interactive sessions, so scp/rsync/git-over-ssh keep working.
VS Code Remote SSH¶
Connects directly into WSL2 using a host alias (wsl-dev) defined in your local ~/.ssh/config.
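A minimal entry, assuming the user and Tailscale IP used in the examples elsewhere in this doc:

```
Host wsl-dev
    HostName 100.125.78.30
    User simon
    # Keep idle connections alive across the tailnet
    ServerAliveInterval 30
    ServerAliveCountMax 4
```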
In VS Code: Remote-SSH → Connect → wsl-dev. All indexing/compilation
runs on the remote RTX 4060 machine.
JetBrains Gateway¶
Same SSH config. Gateway installs a backend IDE on remote, thin client runs locally. Needs 2+ cores, 4GB+ RAM on remote.
3. GPU Workload Offloading¶
Ollama (LLM inference)¶
# On WSL2:
curl -fsSL https://ollama.com/install.sh | sh
OLLAMA_HOST=0.0.0.0 ollama serve
# From home:
curl http://100.125.78.30:11434/api/generate -d '{"model":"llama3.1:8b","prompt":"hello"}'
RTX 4060 16GB handles 7B-8B models comfortably. Point any OpenAI-compatible
tool (Continue.dev, aider, Open WebUI) at http://TAILSCALE_IP:11434.
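For those tools, recent Ollama releases also expose an OpenAI-compatible API under /v1; a quick smoke test from the Linux side (model name matches the example above):

```shell
curl http://100.125.78.30:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3.1:8b","messages":[{"role":"user","content":"hello"}]}'
```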
Open WebUI (ChatGPT-like interface):
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
--name open-webui ghcr.io/open-webui/open-webui:main
Jupyter Lab¶
Access at http://TAILSCALE_IP:8888. Full CUDA inside notebooks.
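This assumes Jupyter is already running on WSL2; one way to launch it (pip shown, conda works too):

```shell
pip install jupyterlab
# --ip 0.0.0.0 exposes it on the tailnet; keep token auth enabled
jupyter lab --ip 0.0.0.0 --port 8888 --no-browser
```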
ComfyUI / Stable Diffusion¶
python main.py --listen 0.0.0.0 --port 8188 # ComfyUI
./webui.sh --listen --port 7860 # AUTOMATIC1111
Docker GPU Containers¶
docker run --rm --gpus all nvidia/cuda:12.3.0-base-ubuntu22.04 nvidia-smi
docker run --gpus all -it -v $(pwd):/workspace pytorch/pytorch:latest bash
4. File Sharing¶
sshfs (mount remote filesystem locally)¶
mkdir ~/remote-wsl
sshfs simon@100.125.78.30:/home/simon ~/remote-wsl \
-o Compression=yes,cache=yes,kernel_cache,reconnect
Good for browsing. Too slow for IDE indexing -- use VS Code Remote instead.
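To detach the mount cleanly when done (fusermount3 on current distros, plain fusermount on older ones):

```shell
fusermount3 -u ~/remote-wsl
```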
Syncthing (bidirectional, continuous)¶
Run on both machines, force traffic over Tailscale:
- Disable "Local Discovery" and "Enable Relaying"
- Set remote address to tcp://TAILSCALE_IP:22000
- All sync stays within your tailnet
rsync (one-shot sync)¶
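A typical one-shot push (paths are placeholders; drop --delete unless you want an exact mirror on the remote):

```shell
# -a archive mode, -v verbose, -z compress over the wire
rsync -avz --delete ~/projects/myapp/ simon@100.125.78.30:/home/simon/projects/myapp/
```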
Taildrop (ad-hoc)¶
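Taildrop moves files between tailnet nodes with no open ports or shared folders; the hostname winbox is a placeholder:

```shell
# Send a file to the remote node
tailscale file cp ./model.gguf winbox:
# On the receiving side, collect queued files into a directory
tailscale file get ~/Downloads
```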
5. Port Allocation¶
| Service | Port | Protocol |
|---|---|---|
| SSH (WSL2) | 22 | TCP |
| Windows RDP | 3389 | TCP |
| WSL2 xrdp (optional Linux desktop) | 3390 | TCP |
| Open WebUI | 3000 | TCP |
| SD WebUI | 7860 | TCP |
| ComfyUI | 8188 | TCP |
| Jupyter Lab | 8888 | TCP |
| Ollama API | 11434 | TCP |
| Syncthing | 22000 | TCP |
| Sunshine | 47984-47990 | TCP/UDP |
Our firewall rules allow 22 and 9000-9999 from the Tailscale CGNAT range. Services
outside that range (e.g. Ollama 11434, Sunshine 47984, Windows RDP 3389) need
additional rules or should be accessed via SSH tunnel (ssh -L).
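A tunnel sketch for Ollama under those rules (the local port choice is arbitrary):

```shell
# Forward local 11434 to Ollama on the remote; -N runs no remote command
ssh -N -L 11434:localhost:11434 simon@100.125.78.30
# In another terminal, the API is now reachable locally:
curl http://localhost:11434/api/tags
```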
6. Extension Scripts (post-SSH)¶
These run over SSH on the remote WSL2 after initial setup. Located in
ssh_ready/extensions/ (TODO: create these):
- ollama.bash -- Install Ollama, configure systemd service, open port
- jupyter.bash -- Install JupyterLab, configure systemd, CUDA kernel
- docker-gpu.bash -- Install Docker + NVIDIA Container Toolkit
- comfyui.bash -- Install ComfyUI with CUDA, systemd service
- syncthing.bash -- Install Syncthing, configure Tailscale-only peers
- sunshine.bash -- Install Sunshine on Windows side (via PowerShell over SSH)
- zellij.bash -- Install zellij, configure auto-attach in .bashrc
- interop-fix.bash -- WSL_INTEROP + WSLg env fix for SSH sessions
Created: 2026-03-17