Yggdrasil Network as an Embedded Go Library

Yggdrasil is an experimental overlay IPv6 mesh network. In short, it lets you build a "network on top of a network": each node gets a stable IPv6 address derived from its public key, and that address does not depend on where the node is physically located or what external IP address it currently has.
Nodes can connect to public peers, to each other directly, or discover each other on the local network. Once connectivity is established, ordinary TCP/UDP applications can communicate as if they were simply using another IPv6 network.
In the classic setup, Yggdrasil is a daemon that creates a virtual network interface in the operating system.
But sometimes it is useful to embed Yggdrasil directly into an application: for example, into a Matrix client or a web application.
The original yggdrasil-go is not especially convenient for that role because of leaky abstractions and strong coupling between components. To make library-style usage easier, and to support features that were repeatedly rejected because "this is not a goal of Yggdrasil", I maintain my own compatible fork.
This article is about embedding its library part into a Go application, but it should also be useful for working with the original yggdrasil-go codebase.
What we are going to build
In this article, we will build two Yggdrasil nodes running inside the same Go process.
Each node consists of two layers:
a Yggdrasil Core, responsible for peer connectivity and packet routing
a VTun, which exposes the Yggdrasil IPv6 network as a userspace TCP/IP stack
The two nodes communicate with each other through a carrier network.
On top of the virtual IPv6 network created by Yggdrasil, we will run ordinary TCP, UDP, and HTTP applications using familiar Go networking primitives like net.Listener, net.Conn, and http.Client.
By the end of the article, we will have TCP, UDP, and HTTP communication working entirely inside one process, without creating a system TUN interface.
A minimal node
Let’s start with the smallest useful example: create one Yggdrasil Core node, register TCP and TLS transports, print the node address, and exit.
package main
import (
"fmt"
"github.com/asciimoth/gonnect/native"
"github.com/asciimoth/ygg/ygglib/config"
"github.com/asciimoth/ygg/ygglib/core"
ygglogger "github.com/asciimoth/ygg/ygglib/logger"
"github.com/asciimoth/ygg/ygglib/transport"
)
func main() {
// The config contains the node identity.
// For the example, we generate a new self-signed certificate on every run.
cfg := config.GenerateConfig()
if err := cfg.GenerateSelfSignedCertificate(); err != nil {
panic(err)
}
// native.Network is the normal operating-system network.
// It will be used to open carrier connections to other peers.
network := &native.Network{}
if err := network.Up(); err != nil {
panic(err)
}
defer network.Down()
// Transport manager registers transport implementations (tcp, tls, ws, etc.)
// and maps addresses to the carrier network they should use.
manager := transport.NewManager(network)
// Plain tcp:// transport.
if err := manager.RegisterTransport(transport.NewTCPTransport()); err != nil {
panic(err)
}
// tls:// transport uses our node certificate.
tlsConfig, err := core.GenerateTLSConfig(cfg.Certificate)
if err != nil {
panic(err)
}
if err := manager.RegisterTransport(transport.NewTLSTransport(tlsConfig)); err != nil {
panic(err)
}
// Create the Core itself.
// Logging is disabled here to keep the example small.
node, err := core.New(
cfg.Certificate,
ygglogger.Discard(),
core.TransportManager{Manager: manager},
)
if err != nil {
panic(err)
}
defer node.Stop()
// This is the IPv6 address of the node inside the Yggdrasil network.
fmt.Println(node.Address())
}
Transports
A transport in ygglib owns one or more URL schemes and provides methods for dialing outgoing connections and listening for incoming ones.
Transports are registered in a concrete node instance at runtime. The library part includes transports for tcp://... and tls://..., while the daemon also implements quic, ws/wss, and unix.
You can write your own transports too.
For demonstration, let’s wrap an existing transport and add a bit of behavior around it. For example, we can count dial/listen operations and name our scheme metered+tcp.
package main
import (
"context"
"net/url"
"sync/atomic"
"github.com/asciimoth/ygg/ygglib/transport"
)
type meteredTransport struct {
// All real work is delegated to the plain TCP transport.
base transport.Transport
// Counters are only here for demonstration.
dials atomic.Uint64
listens atomic.Uint64
}
func (t *meteredTransport) Schemes() []string {
// Now the manager can handle URLs like metered+tcp://127.0.0.1:1234.
return []string{"metered+tcp"}
}
func (t *meteredTransport) Dial(
ctx context.Context,
network transport.Network,
u *url.URL,
opts transport.Options,
) (transport.Conn, error) {
t.dials.Add(1)
// The base TCP transport does not understand our metered+tcp scheme,
// so we rewrite it to tcp before delegating.
return t.base.Dial(ctx, network, rewriteScheme(u, "tcp"), opts)
}
func (t *meteredTransport) Listen(
ctx context.Context,
network transport.Network,
u *url.URL,
opts transport.Options,
) (transport.Listener, error) {
t.listens.Add(1)
return t.base.Listen(ctx, network, rewriteScheme(u, "tcp"), opts)
}
func (t *meteredTransport) Dials() uint64 {
return t.dials.Load()
}
func (t *meteredTransport) Listens() uint64 {
return t.listens.Load()
}
func rewriteScheme(u *url.URL, scheme string) *url.URL {
clone := *u
clone.Scheme = scheme
return &clone
}
This transport is registered in exactly the same way as the built-in ones:
manager := transport.NewManager(nil)
metered := &meteredTransport{
base: transport.NewTCPTransport(),
}
if err := manager.RegisterTransport(metered); err != nil {
return err
}
Network mapping
transport.Manager can use one default network:
manager := transport.NewManager(defaultNetwork)
But you can also explicitly describe which network should be used for which hosts.
// All connections to 127.0.0.1 will go through our loopback/native network.
if err := manager.MapNetwork("127.0.0.1", localNetwork); err != nil {
return err
}
This lets you route connections to the outside world through different carrier networks:
manager.SetDefaultNetwork(nativeNetwork)
// Tor addresses can go through a SOCKS network.
_ = manager.MapNetwork("*.onion", torNetwork)
// I2P can be handled in the same way.
_ = manager.MapNetwork("*.i2p", i2pNetwork)
// Some zones can be blocked explicitly.
_ = manager.MapNetwork("*.loki", nil)
A nil mapping means that matching addresses are blocked.
One more important detail: mapping changes are live. If you change the network for a host, the manager closes affected listeners and connections, so new ones will go through the new network.
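As a configuration sketch, a live remap might look like the following. It assumes the manager and the nativeNetwork/torNetwork values from the earlier snippets are in scope, and peer.example.com is a placeholder host, not a real peer:

```go
// Reroute one host through the Tor carrier network at runtime.
// The manager closes any listeners and connections affected by
// the change, so new connections go through the new network.
if err := manager.MapNetwork("peer.example.com", torNetwork); err != nil {
	return err
}
// Later, route the same host through the native network again.
if err := manager.MapNetwork("peer.example.com", nativeNetwork); err != nil {
	return err
}
```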
Two nodes in one process
A single node is not very interesting by itself. Let’s create two Core instances and connect them to each other.
First, it is useful to move node creation into a function:
func newCore(manager *transport.Manager) (*core.Core, error) {
// In a real application, the key should usually be persisted between runs.
// Here we generate a new one to keep the example self-contained.
cfg := config.GenerateConfig()
if err := cfg.GenerateSelfSignedCertificate(); err != nil {
return nil, err
}
return core.New(
cfg.Certificate,
ygglogger.Discard(),
core.TransportManager{Manager: manager},
)
}
Now create the server and the client:
network := loopback.NewLoopbackNetwork()
metered := &meteredTransport{
base: transport.NewTCPTransport(),
}
manager := transport.NewManager(nil)
if err := manager.MapNetwork("127.0.0.1", network); err != nil {
return err
}
if err := manager.RegisterTransport(metered); err != nil {
return err
}
serverCore, err := newCore(manager)
if err != nil {
return err
}
defer serverCore.Stop()
clientCore, err := newCore(manager)
if err != nil {
return err
}
defer clientCore.Stop()
In this example, both nodes use the same manager and the same loopback network. In a real application, each node will usually live in its own process, with its own manager and its own network.
For one Core to accept a connection from another Core, we need to open a listener:
listenURL, err := url.Parse("metered+tcp://127.0.0.1:0")
if err != nil {
return err
}
listener, err := serverCore.Listen(listenURL, "")
if err != nil {
return err
}
Port 0 means "choose any free port".
Now the client can connect to the server:
peerURL, err := url.Parse("metered+tcp://" + listener.Addr().String())
if err != nil {
return err
}
if err := clientCore.CallPeer(peerURL, ""); err != nil {
return err
}
CallPeer opens a single connection to a peer. If you need a persistent connection with reconnects after failures, use AddPeer instead.
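For contrast, a persistent peering to the same listener could be set up like this. This is a sketch that assumes AddPeer mirrors CallPeer's (URL, source interface) signature; check the actual API before relying on it:

```go
// Sketch: persistent peering instead of a one-shot connection.
peerURL, err := url.Parse("metered+tcp://" + listener.Addr().String())
if err != nil {
	return err
}
// Unlike CallPeer, AddPeer remembers the peer and
// re-establishes the connection after failures.
if err := clientCore.AddPeer(peerURL, ""); err != nil {
	return err
}
```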
At this point, the two Core instances are already connected. But you still cannot put a normal http.Client directly on top of core.Core.
Core routes Yggdrasil packets. It does not provide the familiar net.Listener/net.Conn interface for user TCP connections.
For that, we need a "tun".
VTun
The normal Yggdrasil daemon creates a system TUN interface. But for an embedded library, we want to keep everything inside the process.
In this fork, that is done through an embedded userspace TCP/IP stack.
Core gives us a stream of IPv6 packets (L3), and VTun turns it into an L4 interface that can be used almost like a normal Go network.
Create a VTun for one Core:
import (
"fmt"
"net/netip"
"github.com/asciimoth/gonnect-netstack/helpers"
"github.com/asciimoth/gonnect-netstack/vtun"
"github.com/asciimoth/ygg/ygglib/core"
"github.com/asciimoth/ygg/ygglib/ipv6rwc"
ygglogger "github.com/asciimoth/ygg/ygglib/logger"
yggtun "github.com/asciimoth/ygg/ygglib/tun"
)
func newVTun(name string, coreNode *core.Core) (*vtun.VTun, *yggtun.TunAdapter, error) {
// ipv6rwc adapts core.Core to an io.ReadWriteCloser-like interface
// for reading and writing IPv6 packets.
rwc := ipv6rwc.NewReadWriteCloser(coreNode)
// TunAdapter connects Yggdrasil Core to a concrete TUN/VTun implementation.
adapter, err := yggtun.New(
rwc,
ygglogger.Discard(),
yggtun.InterfaceMTU(1500),
)
if err != nil {
_ = rwc.Close()
return nil, nil, err
}
// The Core address is the IPv6 address of the node inside the Yggdrasil network.
addr, ok := netip.AddrFromSlice(coreNode.Address())
if !ok {
_ = adapter.Stop()
_ = rwc.Close()
return nil, nil, fmt.Errorf("invalid core address")
}
// VTun lives in process memory and provides Dial/Listen/ListenPacket.
vt, err := (&vtun.Opts{
Name: name,
LocalAddrs: []netip.Addr{addr},
NoLoopbackAddr: true,
NetStackOpts: &helpers.Opts{
MTU: 1500,
},
}).Build()
if err != nil {
_ = adapter.Stop()
_ = rwc.Close()
return nil, nil, err
}
// Attach VTun to the Core packet stream.
if err := adapter.Attach(vt, yggtun.AttachmentType("vtun")); err != nil {
_ = vt.Close()
_ = adapter.Stop()
_ = rwc.Close()
return nil, nil, err
}
return vt, adapter, nil
}
Now create one VTun for each Core:
serverVT, serverAdapter, err := newVTun("server", serverCore)
if err != nil {
return err
}
defer serverAdapter.Stop()
defer serverVT.Close()
clientVT, clientAdapter, err := newVTun("client", clientCore)
if err != nil {
return err
}
defer clientAdapter.Stop()
defer clientVT.Close()
Now we have two in-process IPv6 networks connected through Yggdrasil Core. And we can use ordinary networking primitives on top of them.
TCP over VTun
Let’s start with a simple TCP echo-like exchange. The server listens on its Yggdrasil IPv6 address, and the client connects through its VTun.
func tcpPing(clientVT, serverVT *vtun.VTun, serverCore *core.Core) (string, error) {
// Listen for TCP inside the Yggdrasil network.
// The address comes from serverCore, and the port is selected automatically.
listener, err := serverVT.Listen(
context.Background(),
"tcp6",
net.JoinHostPort(serverCore.Address().String(), "0"),
)
if err != nil {
return "", err
}
defer listener.Close()
serverErr := make(chan error, 1)
go func() {
conn, err := listener.Accept()
if err != nil {
serverErr <- err
return
}
defer conn.Close()
buf := make([]byte, 64)
n, err := conn.Read(buf)
if err != nil {
serverErr <- err
return
}
// Reply with the same payload, but with a prefix.
_, err = conn.Write([]byte("tcp:" + string(buf[:n])))
serverErr <- err
}()
// The client connects to the listener address through its VTun.
conn, err := clientVT.Dial(context.Background(), "tcp6", listener.Addr().String())
if err != nil {
return "", err
}
defer conn.Close()
_ = conn.SetDeadline(time.Now().Add(10 * time.Second))
if _, err := conn.Write([]byte("ping")); err != nil {
return "", err
}
buf := make([]byte, 64)
n, err := conn.Read(buf)
if err != nil {
return "", err
}
if err := <-serverErr; err != nil {
return "", err
}
return string(buf[:n]), nil
}
The result is:
tcp:ping
From the outside, this looks almost like ordinary TCP code. The main difference is that Dial and Listen come not from the standard-library net package, but from the VTun object.
UDP over VTun
The UDP version is almost the same, except that the server uses ListenPacket.
func udpPing(clientVT, serverVT *vtun.VTun, serverCore *core.Core) (string, error) {
packetConn, err := serverVT.ListenPacket(
context.Background(),
"udp6",
net.JoinHostPort(serverCore.Address().String(), "0"),
)
if err != nil {
return "", err
}
defer packetConn.Close()
serverErr := make(chan error, 1)
go func() {
buf := make([]byte, 64)
// For UDP, we need the sender address
// so we can send a response back.
n, addr, err := packetConn.ReadFrom(buf)
if err != nil {
serverErr <- err
return
}
_, err = packetConn.WriteTo([]byte("udp:"+string(buf[:n])), addr)
serverErr <- err
}()
conn, err := clientVT.Dial(
context.Background(),
"udp6",
packetConn.LocalAddr().String(),
)
if err != nil {
return "", err
}
defer conn.Close()
_ = conn.SetDeadline(time.Now().Add(10 * time.Second))
if _, err := conn.Write([]byte("ping")); err != nil {
return "", err
}
buf := make([]byte, 64)
n, err := conn.Read(buf)
if err != nil {
return "", err
}
if err := <-serverErr; err != nil {
return "", err
}
return string(buf[:n]), nil
}
Result:
udp:ping
So, for ordinary application code, Yggdrasil does not really change much. We just use a different Dial/ListenPacket, and then continue working with standard net.Conn and net.PacketConn interfaces.
HTTP over VTun
Since TCP works, HTTP does not require anything special either. The server needs a listener from serverVT, and the client needs an http.Transport whose DialContext points to clientVT.Dial.
func httpPing(clientVT, serverVT *vtun.VTun, serverCore *core.Core) (string, error) {
listener, err := serverVT.Listen(
context.Background(),
"tcp6",
net.JoinHostPort(serverCore.Address().String(), "0"),
)
if err != nil {
return "", err
}
server := &http.Server{
Handler: http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
_, _ = io.WriteString(w, "http:pong")
}),
ReadHeaderTimeout: 10 * time.Second,
}
go func() {
// http.ErrServerClosed during Shutdown is expected,
// so we do not log it in this minimal example.
_ = server.Serve(listener)
}()
defer func() {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
_ = server.Shutdown(ctx)
}()
_, port, err := net.SplitHostPort(listener.Addr().String())
if err != nil {
return "", err
}
// An IPv6 address in a URL must be wrapped in square brackets.
target := fmt.Sprintf("http://[%s]:%s", serverCore.Address().String(), port)
client := http.Client{
Transport: &http.Transport{
// The HTTP client opens TCP connections
// through our VTun instead of net.Dialer.
DialContext: clientVT.Dial,
},
Timeout: 10 * time.Second,
}
resp, err := client.Get(target)
if err != nil {
return "", err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
return string(body), nil
}
Result:
http:pong
Autopeering
So far we connected nodes manually: one listens, the other calls CallPeer.
That is enough for tests, but a normal application usually wants to connect to the global network automatically.
For that, there is autopeer.Manager.
It fetches public peer lists, filters the results, and adds suitable addresses to Core.
func configurePublicAutopeering(coreNode *core.Core, network transport.Network) *autopeer.Manager {
// Fetcher can retrieve public peer lists.
// BuiltinSource is the built-in list and does not require a separate URL.
fetcher := autopeer.NewFetcher(ygglogger.Discard(), time.Hour)
fetcher.SetDefaultNetwork(network)
fetcher.SetSources([]string{autopeer.BuiltinSource})
manager := autopeer.NewManager(fetcher)
manager.SetPeerManager(coreNode)
manager.SetConfig(autopeer.ManagerConfig{
CheckInterval: time.Minute,
// If there are fewer than two connected peers,
// the manager will try to add new ones.
MinimumConnected: 2,
// For the example, limit the search to a few countries.
Countries: []string{
"germany",
"france",
"netherlands",
},
// And only these transport schemes.
TransportSchemes: []string{"tcp", "tls"},
})
return manager
}
The manager is started explicitly:
autopeering := configurePublicAutopeering(coreNode, nativeNetwork)
autopeering.Start()
defer autopeering.Close()
It is worth noting that the manager does nothing by default until country and transport scheme filters are configured explicitly.
Internally, it uses core.AddPeer, not CallPeer, so selected peers become persistent and will be reconnected after failures.
Link-local autopeering
Besides public peers, there is also automatic discovery on the local network.
This is handled by the ygglib/multicast package. It listens for local multicast announcements and calls core.CallPeer for discovered nodes.
There is one limitation though: link-local autopeering requires a real network. In-memory loopback networks, SOCKS clients, and other virtual implementations do not have the low-level OS interfaces required for it.
A minimal setup looks like this:
func startLinkLocalAutopeering(
coreNode *core.Core,
network transport.Network,
ifacePattern string,
) (*multicast.Multicast, error) {
if network == nil || !network.IsNative() {
return nil, fmt.Errorf("link-local autopeering requires a native carrier network")
}
return multicast.New(
coreNode,
ygglogger.Discard(),
multicast.ProtocolVersion{
Major: core.ProtocolVersionMajor,
Minor: core.ProtocolVersionMinor,
},
multicast.MulticastInterface{
// For example: ^(eth|en|wlan|wl).*
Regex: regexp.MustCompile(ifacePattern),
Beacon: true,
Listen: true,
Port: 0,
},
)
}
Usage:
mc, err := startLinkLocalAutopeering(
coreNode,
nativeNetwork,
"^(eth|en|wlan|wl).*",
)
if err != nil {
return err
}
defer mc.Stop()
Conclusion
I hope that the more modular approach implemented in this fork will make more people interested in experimenting with Yggdrasil as a component of larger systems, instead of only using it as a standalone daemon.
