An Example of Xray Transit Proxy Setup

Let's build a "bridge".

0. Background

Recently someone asked how to set up an Xray transit (relay) proxy. The basic idea is:
Your device ↔ (Protocol A) ↔ Transit server ↔ (Protocol B) ↔ Original server ↔ Desired destination.
In this article I use VLESS+TCP+XTLS for both protocol A and protocol B; in production you might want a more efficient protocol for leg B.
That said, I do not really recommend this setup: the extra hop adds latency and tends to cost you some speed.
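
For reference, here is a minimal client-side config sketch for the device ↔ transit leg (protocol A). The SOCKS port, the transit domain transit.example.com, and the UUID are placeholders of my own, not values taken from the sections below; adjust them to your setup.

{
    "inbounds": [{
        "listen": "127.0.0.1",
        "port": 1080, // local SOCKS port for your applications
        "protocol": "socks",
        "settings": {}
    }],
    "outbounds": [{
        "protocol": "vless",
        "settings": {
            "vnext": [{
                "address": "transit.example.com", // domain of your transit server (placeholder)
                "port": 443,
                "users": [{
                    "id": "UUID", // must match the inbound UUID on the transit server
                    "encryption": "none",
                    "flow": "xtls-rprx-direct",
                    "level": 0
                }]
            }]
        },
        "streamSettings": {
            "network": "tcp",
            "security": "xtls"
        }
    }]
}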

1. Original Server

config.json sample. Note that the VLESS inbound listens on 127.0.0.1:20001; Nginx (section 3) is what accepts connections on port 443 and forwards them here based on SNI.

{
    "log": {
        "loglevel": "warning"
    },
    "inbounds": [{
        "listen": "127.0.0.1",
        "port": 20001,
        "protocol": "vless",
        "settings": {
            "clients": [{
                    "id": "UUID",
                    "flow": "xtls-rprx-direct",
                    "level": 0
                }
            ],
            "decryption": "none",
            "fallbacks": [{
                "dest": "23333" # fallback port
            }]
        },
        "streamSettings": {
            "network": "tcp",
            "security": "xtls",
            "xtlsSettings": {
                "alpn": [
                    "http/1.1"
                ],
                "certificates": [{
                    "certificateFile": "/usr/local/etc/xray/fullchain.pem", # your cert path
                    "keyFile": "/usr/local/etc/xray/privkey.pem" # your key path
                }]
            }
        }
    }],
    "outbounds": [{
            "protocol": "freedom",
            "settings": {}
        },
        {
            "protocol": "blackhole",
            "settings": {},
            "tag": "blocked"
        }
    ]
}

2. Transit Server

config.json sample. The inbound again listens on 127.0.0.1:20001 (fronted on port 443 the same way as in section 3), while the outbound relays the traffic on to the original server.

{
    "log": {
        "loglevel": "warning"
    },
    "inbounds": [{
        "listen": "127.0.0.1",
        "port": 20001,
        "protocol": "vless",
        "settings": {
            "clients": [{
                    "id": "UUID", # match the one in original server
                    "flow": "xtls-rprx-direct",
                    "level": 0
                }
            ],
            "decryption": "none",
            "fallbacks": [{
                "dest": "20011" # fallback port
            }]
        },
        "streamSettings": {
            "network": "tcp",
            "security": "xtls",
            "xtlsSettings": {
                "alpn": [
                    "http/1.1"
                ],
                "certificates": [{
                    "certificateFile": "/usr/local/etc/xray/fullchain.pem", # your cert path
                    "keyFile": "/usr/local/etc/xray/privkey.pem" # your key path
                }]
            }
        }
    }],
    "outbounds": [{
        "protocol": "vless",
        "settings": {
            "vnext": [{
                "address": "original.example.com", # domain of your original server
                "port": 443,
                "users": [{
                    "id": "UUID", # match above
                    "encryption": "none",
                    "flow": "xtls-rprx-direct",
                    "level": 0
                }]
            }]
        },
        "streamSettings": {
            "network": "tcp",
            "security": "xtls"
        }
    }]
}
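
One note on the outbound above: because "address" is the original server's domain, that domain is also used as the SNI for the XTLS handshake. If you would rather dial the original server by IP, my understanding is that you then need to name the server explicitly, roughly like this (a sketch only; "serverName" must match the certificate on the original server):

        "streamSettings": {
            "network": "tcp",
            "security": "xtls",
            "xtlsSettings": {
                "serverName": "original.example.com" // domain on the original server's certificate
            }
        }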

3. Nginx Setup

If you host multiple services on the same server, refer to this article on Nginx SNI configuration:
Coexistence of Web Applications and VLESS TCP XTLS
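
Conversely, if the box runs nothing but Xray, a simpler variant (my own assumption, not part of the original walkthrough) is to skip Nginx and let Xray listen on port 443 directly; only the top of the inbound changes:

    "inbounds": [{
        "listen": "0.0.0.0",  // listen on all interfaces instead of loopback only
        "port": 443,          // take over port 443 directly, no Nginx in front
        "protocol": "vless",
        // "settings" and "streamSettings" stay exactly as in sections 1 and 2
    }]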


Copyright notice: Unless otherwise stated, all articles on this blog are licensed under CC BY-NC-SA 4.0. For non-commercial reprints and citations, please credit the author (Henry) and link to the original article. For commercial reprints, please contact the author for authorization.