
Complete noob here, learning C++ through an IoT project using WebSockets. So far I have somewhat successfully modified the Beast async_client_ssl example to handshake with a server.

My problem is that ioc.run() runs out of work and exits after the initial callback. I was having the same issue as this post from two years ago: "Boost.Beast websocket ios.run() exits after sending a request and receiving one reply".

The answers from the linked post were pretty simple (1. and 2. below), but I still have no clue how to implement them.

1. Without reading your code, understand that the run() method terminates if there is no pending work. For instance, your read method needs to queue up a new read.

2. Move async_read() to a separate function, let's say do_read(), and call it at the end of on_read() as well as where it currently is.
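
For context, a minimal sketch of the pattern that answer 2. seems to describe (do_read() is the name it suggests; ws_, buffer_ and fail() are the members from the Beast example, so treat this as an illustration rather than drop-in code):

void do_read()
{
    // Queue the next asynchronous read; run() stays busy while a read is pending
    ws_.async_read(buffer_,
        std::bind(&session::on_read, shared_from_this(),
                  std::placeholders::_1, std::placeholders::_2));
}

void on_read(boost::system::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);

    if(ec)
        return fail(ec, "read");

    std::cout << "on_read callback : " << boost::beast::buffers(buffer_.data()) << std::endl;
    buffer_.consume(buffer_.size());

    do_read(); // re-arm the read loop so run() never runs out of work
}

// do_read() would also be called once from on_handshake(), where the original async_read() is.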

The person who asked that question also seemed puzzled, and after these two answers there was no further explanation. Is there anyone who can kindly help me out, perhaps with a simple code snippet?

In on_read() (code adapted from another beginner's post), I added the async_read() call as shown below.

void on_read(boost::system::error_code ec, std::size_t bytes_transferred)
{
    io_in_progress_ = false; // end of write/read sequence
    boost::ignore_unused(bytes_transferred);

    if(ec)
        return fail(ec, "read");
    else
        std::cout << "on_read callback : " << boost::beast::buffers(buffer_.data()) << std::endl;

    // Clear the buffer
    //~ buffer_ = {};
    buffer_.consume(buffer_.size());

    // Queue the next read so the io_context keeps having pending work
    ws_.async_read(buffer_, std::bind(&session::on_read, shared_from_this(),
                   std::placeholders::_1, std::placeholders::_2));
}

But no luck: ioc.run() still just terminates. So how do I implement answers 1. and 2. above properly?

Thanks!

-----------------UPDATED on 10/25/2021-------------------

The answer from @sehe worked with the executor. I had to upgrade Boost from 1.67 to 1.70 or above (I used 1.74) to do so. This solved my issue, but if someone has a working solution for 1.67 for the folks out there, please share the idea :)

2 Comments

  • You need to show a self-contained example, because, by definition, if on_read calls async_read then the service does not run out of work. Could it be that the connection is actively closed by the other end? Commented Oct 17, 2021 at 13:09
  • @sehe Thanks for the reply. I thought so, but I was unable to figure out what actively closes it. Here is my entire session class and main function; please see the updated post above. Any idea why ioc is just returning? Commented Oct 17, 2021 at 20:32

1 Answer


Okay, the simplest thing is to add a work_guard. The more logical thing to do is to have a thread_pool as the execution context.

Slap a work guard on it:

boost::asio::io_context ioc;
boost::asio::executor_work_guard<boost::asio::io_context::executor_type>
    work = make_work_guard(ioc.get_executor());

(or simply auto work = make_work_guard(...);).

If at some point you want the run to return, release the work guard:

work.reset();

A Thread Pool

The previous section sort of skimped over the "obvious" fact that you'd need another thread to either run() the service or to reset() the work guard.
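
A rough outline of that manual variant, for completeness (just a sketch, omitting error handling; needs <thread> plus the usual Asio headers):

boost::asio::io_context ioc;
auto work = boost::asio::make_work_guard(ioc.get_executor());

// Dedicated I/O thread; the work guard keeps run() from returning early
std::thread io_thread([&ioc] { ioc.run(); });

// ... create the session, write from the main thread, etc. ...

// When done, allow run() to return and join the thread
work.reset();
io_thread.join();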

Instead, I'd suggest leaving the running of the thread to Asio in the first place:

boost::asio::thread_pool ioc(1);

This, like io_context, is an execution context:

int main()
{
    // +-----------------------------+
    // | Get azure access token      |
    // +-----------------------------+
    static std::string const accessToken =
        "wss://sehe797979.webpubsub.azure.com/client/hubs/"
        "Hub?access_token=*************************************"
        "**********************************************************************"
        "**********************************************************************"
        "**********************************************************************"
        "************************************************************";
    // get_access_token();

    // +--------------------------+
    // | Websocket payload struct |
    // +--------------------------+
    struct Payload payload = {0, "", "text", "test", "joinGroup"};

    // +---------------------------------+
    // | Websocket connection parameters |
    // +---------------------------------+
    std::string protocol = "wss://";
    std::string host     = "sehe797979.webpubsub.azure.com";
    std::string port     = "443";
    std::string text     = json_payload(payload);

    auto endpointSubstringIndex = protocol.length() + host.length();

    // Endpoint
    std::string endpoint = accessToken.substr(endpointSubstringIndex);
    //std::cout << "Endpoint : " << endpoint << std::endl;
    
    // The io_context is required for all I/O
    boost::asio::thread_pool ioc(1);

    // The SSL context is required, and holds certificates
    ssl::context ctx{ssl::context::sslv23_client};

    // This holds the root certificate used for verification
    load_root_certificates(ctx);

    // Launch the asynchronous operation
    std::shared_ptr<session> ws_session =
        std::make_shared<session>(ioc.get_executor(), ctx);
    ws_session->open(host, port, endpoint);

    // The thread pool runs the I/O in the background; no explicit run() call is needed
    // Change the payload type
    payload.type = "sendToGroup";

    // +--------------+
    // | Send Message |
    // +--------------+
    // Get the input and update the payload data
    while (getline(std::cin, payload.data)) {
        // Send the data over WSS
        ws_session->write(json_payload(payload));
    }

    ioc.join();
}

This requires minimal changes to the session constructor to take an executor instead of the io_context&:

template <typename Executor>
explicit session(Executor executor, ssl::context& ctx)
    : resolver_(executor)
    , ws_(executor, ctx)
{
}
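
As a usage sketch (assuming Boost 1.70 or later, where the Asio/Beast I/O objects accept executors; ctx is the ssl::context from main), the same constructor then works with either kind of execution context:

// With the thread pool shown above:
boost::asio::thread_pool pool(1);
auto s1 = std::make_shared<session>(pool.get_executor(), ctx);

// Or with a plain io_context, if you prefer to call run() yourself:
boost::asio::io_context io;
auto s2 = std::make_shared<session>(io.get_executor(), ctx);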

Here's a fully self-contained, compiling demo: Live On Coliru

(Animated screen capture of the demo.)


13 Comments

My live demo is not "working" as I have no idea about the payload JSON; I faked it, as you can see. However, it is more than enough to demonstrate connectivity and proof of concept with a simplistic Azure Web PubSub endpoint.
Thank you so much for taking the time! I tried the thread pool, but it failed to compile. Please see the error message updated in the main post; it says there is no matching function. It looks like you used a different Beast version? First of all, I really like the demo video! Wish it was longer so that I could learn more from it. I also liked your profile saying "I got the impression more than once that people think that "we, the experts" use some kind of magic fairy dust and promptly post the solutions without breaking a sweat, I thought it would be nice to show the reality." haha
I wonder if it is possible to set up a one-time live tutorial on a different platform like Zoom or Slack? I have other Boost questions too. I'm eager to learn but having difficulty learning by myself. If it's too much, please ignore this, haha. But I would really appreciate it if you could guide me a little further with this problem. Thanks!
Mmm. thread_pool is Boost 1.66+, and the 1.70 release notes mention that I/O objects' constructors and functions that previously took an asio::io_context& now accept either an Executor or a reference to a concrete ExecutionContext. So... I patched up an install with Boost 1.69 + Boost JSON from 1.75 and repro'ed.
Your thread_pool suggestion worked. I removed the additional question regarding a different run-time error and restored the post back to the original. Thank you for the help again! By the way, that error (the additional question) had nothing to do with the threading. I had put the async_read inside on_handshake; basically, it was a buffer issue. That problem is solved as well.
