# json-rpc
Minimal JSON-RPC 2.0 implementation in C++.
## What's minimal about this?
- It abstracts out I/O, so you can provide your own implementation (you could e.g. use packeteer).
- It doesn't try to do automatic mapping of RPC parameters to functions or some such. That's hard in C++, so we just skip it and provide parameters as one JSON structured data type.
- It's not thread-safe.
- It does not do any time keeping.
- It has one dependency for JSON parsing (and one for unit testing).
## Hang on, so if it doesn't do any of that, what does it do?
- It maps C++ functions to RPC methods. You can set a C++ function for receiving notifications.
- It also maps result callbacks to RPC responses on the client side.
- It provides parsing and serialization (via JSON for Modern C++). This also means it manages raw I/O buffers for you.
- It provides exception classes that are translated to JSON-RPC errors.
- It manages client-side timeouts for you.
- It lets you hook into the I/O subsystem of your choice and still supports multiple communication endpoints for clients and servers.
The upshot should be the best combination of minimalism and flexibility for including JSON-RPC in your C++ project.
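For orientation, here is roughly what travels over the wire: a request, a success response, and an error response. The envelope is defined by the JSON-RPC 2.0 specification; this library produces and parses it for you (the payloads below are illustrative):

```json
{"jsonrpc": "2.0", "method": "echo", "params": ["hello, world!"], "id": 1}

{"jsonrpc": "2.0", "result": ["hello, world!"], "id": 1}

{"jsonrpc": "2.0", "error": {"code": -32601, "message": "Method not found"}, "id": 1}
```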
## License
This code is licensed as GPLv3 (see COPYING). I believe in ensuring that the public will always have access to this code. If you need different license terms, I am sympathetic - we can always discuss those.
## Building
The build system is meson; it is best installed via pip, which requires Python.
```sh
$ pip install meson
$ cd /path/to/sources
$ mkdir build && cd build
$ meson ..
$ ninja
```
Meson plays nice with subprojects. This project defines a `json_rpc_dep` variable for your use.
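As a sketch of what that could look like in a parent project's `meson.build` (the dependency name here is an assumption; only the `json_rpc_dep` variable name comes from this project):

```meson
# Hypothetical consumer snippet: fall back to the bundled subproject
# wrap and pick up the json_rpc_dep variable it defines.
json_rpc = dependency('json-rpc',
  fallback : ['json-rpc', 'json_rpc_dep'])
```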
## Integration Into Your Project
Unfortunately, there is no support for meson's wrapdb. This is because its maintainers think download URLs must contain "file names" (which don't exist in the URL specs) that also follow a particular naming convention which Codeberg/gitea doesn't support. Meh.
But fear not! You can always include the `json-rpc.wrap` file in your subprojects folder, and stuff will work just fine. You just need to update this file manually from time to time.
## Development
See CONTRIBUTING.md.
## Example
Let's say you're implementing a JSON-RPC server. Starting with the most abstract concerns first, you need to define functions that handle requests somehow. Let's start with an echo function as an example.
```cpp
std::optional<jsonrpc::response>
echo_func(jsonrpc::request const & request)
{
  // If you don't like the request, don't produce a response:
  //   return {};

  // Let's say the request is fine. Just copy the request ID and
  // parameters.
  return jsonrpc::response{request.id, request.params};
}
```
There, that wasn't hard, was it? Okay, an echo function is also simple. But this demonstrates some of the request fields, namely `id` and `params`. The former is what you need to pass in the response to associate it with this request on the client side. The latter is a JSON object which we just echo back here.
Next up, hook all of this to your I/O subsystem somehow. There are two parts to this. The following is a minimal and non-compiling example for how you might use this with raw sockets.
```cpp
// Fill socket_fds from accept() or some such. The tag should be a peer
// identifier, so probably an IP address.
std::map<jsonrpc::io::peer_tag, int> socket_fds;

void write_func(jsonrpc::io::peer_tag const & tag, char const * buf, size_t size)
{
  // Find the right socket. This needs to account for missing tags somehow.
  auto sock = socket_fds[tag];

  // Handle errors however you want; it's your I/O.
  write(sock, buf, size);
}

jsonrpc::server serv{write_func};
```
The above creates a JSON-RPC server, and hooks it up to your sockets. This is unlikely to be enough for production code, but it demonstrates well enough how the I/O interface of this library works for when the server needs to write a response. But how about reading requests from I/O?
```cpp
// Some event loop; select() is the most portable here.
select(fds, fds, fds, my_timeout);

// Nope, this loop is fake. You know how to use FD_ISSET, though, right?
for (auto fd : fds) {
  // You need to provide this lookup.
  jsonrpc::io::peer_tag tag = lookup_tag_for_fd(fd);

  // Actual I/O
  char buf[BUFSIZE];
  int size = read(fd, buf, BUFSIZE);

  // Push this to the server
  serv.consume(tag, buf, size);
}
```
There, that's a happy little event loop! But how does the server know about the echo function? Silly me, I skipped that.
```cpp
serv.register_method("echo", echo_func);
```
Note that you could expose the same function under various names.
```cpp
serv.register_method("another_echo", echo_func);
```
And that's it for the server.
The client's I/O interface is actually exactly the same. Maybe you don't want to use `accept()` there. Maybe you want only one socket. The principle is still sound, so we'll skip all that code. Let's get straight to sending a request!
```cpp
jsonrpc::client cl{write_func};

cl.send_request(server_tag, "echo", params);
```
The parameters here are any JSON object or array. Since we're using a different library for the heavy lifting of JSON parsing, the data types are defined there. Check out that library! You could always use a literal here.
```cpp
cl.send_request(server_tag, "echo", R"(["hello, world!"])"_json);
```
That's great! So... how does the client receive a response?
Here, you have a couple of choices. You could receive all responses on the same callback.
```cpp
void callback(jsonrpc::io::peer_tag tag, jsonrpc::result_state state,
    std::optional<jsonrpc::response> response)
{
  // Check the state. If it's RE_OK_RESPONSE, then there is a response and
  // you can access the optional's value. If it's RE_ERROR_RESPONSE, there
  // is also a response, but the value's error field is set.
  std::cout << response.value().result << std::endl;
}

cl.on_response(callback);
```
Alternatively, you can set per-peer callbacks.
```cpp
cl.on_response(server_tag, callback);
```
At the finest level of granularity, you can also provide a callback per request.
```cpp
cl.send_request(server_tag, "echo", params, callback);
```
There are a few more options in the APIs, for example for setting client timeouts, etc. But this covers most of it.
## Timeouts
Timeouts require some kind of time keeping. This library does not keep time for you, but it supports time keeping via `std::chrono`. Technically it also supports other means, but since this is C++, it uses chrono data types.
The unfortunate issue with chrono is that `time_point` is always relative to an epoch, and the epoch is defined by a specific clock. As a result, there is no such thing as a clock-independent `time_point` in chrono. Since this library shouldn't be just templates, it instead introduces a `relative_time_point` which is independent of an epoch; it just assumes that your clock keeps incrementing it. There's a simple helper function for extracting such a relative time point from a chrono clock.
First, configure your client with a timeout. This is a chrono duration, so you can use chrono literals.
```cpp
using namespace std::chrono_literals;

jsonrpc::client cl{write_func, 5s};
```
Next, every function in the client must be passed the current time from your clock.
```cpp
cl.create_request(tag, "echo", params, {}, relative_now(my_clock));
// ...
cl.consume(tag, buf, size, relative_now(my_clock));
```
If the clock incremented its time beyond the timeout value, this will now produce a timeout, whether or not the consumed buffer relates to the scheduled request.
You may not have buffer data at the moment, but still want to honour timeouts.
```cpp
cl.process_timeouts(relative_now(my_clock));
```
## Thread safety
As stated in the beginning, this library is not thread safe. Since all interaction with your code is in-line, a method that takes some time to process blocks the entire server.
The good news is that the worker pattern is pretty simple and effective here.
```cpp
char buf[BUFSIZE];
int size = read(fd, buf, BUFSIZE);

// Assume some concurrent I/O queue that *copies* this buffer in its
// push() function.
io_queue.push(tag, buf, size);
```
Now we run a bunch of workers. They each do the same thing.
```cpp
struct server_thread
{
  jsonrpc::server server{write_func};

  server_thread()
  {
    server.register_method("echo", echo_func);
  }

  void operator()()
  {
    // Use your own semaphore mechanism or whatnot to keep the thread from
    // busy looping. This assumed still_running() function abstracts that
    // out and also signals for the thread to end.
    while (still_running()) {
      // Hypothetical I/O queue implementation from above.
      for (auto [tag, buf, size] : io_queue) {
        server.consume(tag, buf, size);
      }
    }
  }
};

using worker_map = std::map<int, std::thread>;
worker_map workers;
for (int i = 0 ; i < NUM_WORKERS ; ++i) {
  workers.insert(worker_map::value_type{i, std::thread{server_thread{}}});
}
```
The only other thing you have to make sure of is that the functions you're registering with the server instances are themselves thread safe. Luckily, our echo function does not access any state.
For what it's worth, there's a concurrent queue in liberate - for a number of reasons, it doesn't have the exact interface above, but it does the trick.
If you want to use asynchronous request processing, but keep I/O synchronous, then the `enable_async_server` compile-time option is for you. It uses the above concurrent queue indirectly as an interface between the I/O and processing parts of the server.
The code is not much different from the above. All you do is initialize the server differently, and poll for results.
```cpp
jsonrpc::async_server_processor asyncproc;
jsonrpc::server server{write_func, asyncproc};
// ...
using namespace std::chrono_literals;
auto processed = asyncproc.poll_results(100ms);
```
It's arguable which approach is better overall.
Note: The client side has little use for such asynchronous operation, as all its "processing" consists of I/O and parsing; it's entirely up to the I/O code not to block while waiting for a response.