Channeler
This library provides a reference implementation of a multi-channel, multi-link protocol for peer-to-peer communications.
The rationale for this design has several dimensions:
- NAT-piercing is prone to failure. Additionally, the number of available ports on a NAT limits how many peers behind the NAT can be served. To compensate for this, a multi-channel approach multiplexes independent "connections" (aka channels) over the same port.
- On a multi-link (or multi-homed) device, e.g. a mobile device, application connections should be kept stable even when the link technology changes (e.g. from WiFi to LTE, etc.).
- Finally, encryption parameters can be kept separate per channel, which improves recovery times when the encryption protocol is aware of both of the above.
The library is implemented with readability and extensibility in mind. Other implementations may well opt for stronger optimization instead.
Note: the library is under heavy development, and the README will be updated when the first stab is implemented.
For more details on the protocol design, Connection Reset by Peer has blog posts on the design rationale. Additionally, the architecture overview contains the rationale for the pipe-and-filter approach chosen in the protocol implementation.
Status
This repository is heavily work-in-progress. The feature list below tracks the implementation status:
- Channel negotiation
- Resend/reliability features
- Basic congestion management
- Encryption
- Multi-Link capabilities
- Connection management
- Advanced congestion management
- Finalized API
Quick Background
The reason for this protocol is the realization that, in comparison to UDP or plain IP, TCP adds reliability characteristics that are better separated along several axes, which are:
- Are packets consumed by the API in-order or out-of-order?
- Are packets resent when lost?
- When a packet is irrecoverably lost, do we close the connection?
Channeler allows each axis to be configured separately, leading to configurations that are essentially equivalent to plain UDP or to TCP - but also allowing for other modes.
The core concern, therefore, is not actually a protocol concern as much as it is a question of how packet buffers are handled: is the buffer overfull? Are there gaps in it that indicate a lost packet? And so forth.
Usage
The current API is for internal use only, but it provides the main parts for verifying the protocol logic. The following examples are similar to the InternalAPI test suite.
```cpp
// A transport address type *placeholder*; this one is enough for IPv4.
using address = uint32_t;

// The memory management allocates packet buffers in blocks of N packets;
// this is that POOL_BLOCK_SIZE.
constexpr std::size_t POOL_BLOCK_SIZE = 20;

// How large are packets? This is currently static, and should be chosen to
// fit the path MTU.
constexpr std::size_t PACKET_SIZE = 1500;

// The node context contains the local peer identifier, and other per-node
// data.
using node = ::channeler::context::node<POOL_BLOCK_SIZE>;

// The connection context contains per-connection data, e.g. the number of
// registered channels, etc.
using connection = ::channeler::context::connection<address, node>;

// Internal API instance
using api = ::channeler::internal::connection_api<connection>;
```
With these types and constants defined, we can create an API instance:
```cpp
// Node information
::channeler::peerid self;
node self_node{
  self,
  PACKET_SIZE,
  // A callback returning std::vector<std::byte>; this is a secret used
  // for cookie generation.
  &secret_callback,
  // The sleep function should accept a duration to sleep for, and return
  // the duration actually slept for.
  &sleep_function
};

// Connection from self to peer
::channeler::peerid peer;
connection conn{self_node, peer};

// API instance
api conn_api{
  conn,
  // The callback is invoked when a channel is established.
  &channel_established_callback,
  // The callback is invoked when the API has produced a packet that should
  // be sent to the peer.
  &packet_available_callback,
  // The last callback is invoked when there is data to read from a channel.
  &data_to_read_callback
};
```
First, we need to establish a channel.
```cpp
auto err = conn_api.establish_channel(now(), peer);
```
As a result, the packet-available callback will be invoked.
```cpp
void packet_available_callback(channeler::channelid const & id)
{
  // Read the packet from the API instance.
  auto packet = conn_api.packet_to_send(id);

  // Write the packet to the I/O, e.g. a socket.
  write(sockfd, packet.buffer(), packet.buffer_size());
}
```
When the peer responds, the channel establishment callback is going to be invoked (skipped here). You can now write to the channel.
```cpp
channelid id; // from callback
size_t written = 0;
auto err = conn_api.write(id, message.c_str(), message.size(), written);
assert(written == message.size());
```
You can create many channels per connection, and each channel is handled separately. This means that packet loss on one channel will not stall packets on other channels.
When establishing a channel, it is possible to request certain capabilities. These are a bitset composed of individual flags, but shorthands exist for TCP-like, stream-oriented behaviour and UDP-like, datagram-oriented behaviour:
```cpp
conn_api.establish_channel(now(), peer, capabilities_stream());
conn_api.establish_channel(now(), peer, capabilities_datagram());
```
Finally, congestion control is more transparent than with TCP, and applies to UDP-like channels as well. You can register a callback to be notified when the peer's receive window changes, which in turn determines the local node's send window:
```cpp
conn_api.set_channel_window_changed_callback(
  [] (time_point, channelid, std::size_t window_size)
  {
    // window_size is the number of *packets* the peer can currently receive.
  }
);
```