
feat(connector): implement multitransport bootstrapping handshake #1098

Open
glamberson wants to merge 1 commit into Devolutions:master from glamberson:multitransport-connector

Conversation

@glamberson
Contributor

This is a design proposal — looking for feedback on the API approach before wiring up the async/blocking drivers.

Makes the MultitransportBootstrapping state functional instead of a no-op pass-through.

After licensing, the server may send 0, 1, or 2 Initiate Multitransport Request PDUs before starting capabilities exchange. This PR reads those PDUs by peeking at the BasicSecurityHeader flags (SEC_TRANSPORT_REQ), then pauses in a new MultitransportPending state so the application can establish UDP transport (RDPEUDP2 + TLS + RDPEMT) or decline.

The API follows the existing should_perform_X() / mark_X_as_done() pattern used by TLS upgrade and CredSSP:

  • should_perform_multitransport() — true when requests have been collected
  • multitransport_requests() — access the server's request PDUs
  • complete_multitransport(responses, output) — send response PDUs, advance
  • skip_multitransport() — decline, advance

Open question: the Demand Active PDU that signals the end of multitransport bootstrapping arrives while we're still in MultitransportBootstrapping. When the connector transitions to MultitransportPending, this PDU needs to be buffered and re-fed after the application completes. Two options:

(a) The driving code (ironrdp-async/ironrdp-blocking) buffers the PDU externally and re-feeds it. This keeps the connector stateless w.r.t. buffering but requires changes to connect_finalize().

(b) The connector buffers the PDU internally in MultitransportPending and replays it when complete_multitransport() / skip_multitransport() is called. This is self-contained but adds buffer state to the connector.

I've gone with (a) in this draft — the connector doesn't buffer. Feedback on which approach is preferred would be helpful before wiring up the async/blocking drivers.

Builds on: #1091

Related: #140

@glamberson
Contributor Author

I really need feedback on this so I'll take the draft status off and see where we get.

glamberson marked this pull request as ready for review on February 14, 2026 14:35
@CBenoit
Member

Hi! Good job. I need a little bit more time to dig into this and craft a well thought answer. I’ll come back to you in the next 24h.

@glamberson
Contributor Author

> Hi! Good job. I need a little bit more time to dig into this and craft a well thought answer. I’ll come back to you in the next 24h.

Thanks! Great! Yes this is pretty important so I knew it would take some time. Much appreciated.

Makes MultitransportBootstrapping functional: reads the server's
optional Initiate Multitransport Request PDUs (0, 1, or 2), then
pauses in MultitransportPending for the application to establish
UDP transport before proceeding to capabilities exchange.

Follows the existing should_perform_X() / mark_X_as_done() pattern
used by TLS upgrade and CredSSP.
glamberson force-pushed the multitransport-connector branch from 3585d1b to 6ab896d on February 17, 2026 19:24
@CBenoit
Member

Benoît Cortier (CBenoit) commented Feb 22, 2026

Hi! I’m sorry for the delay. Monday is a holiday for me, so I may only come back to you on Tuesday!

FYI, I think I may be leaning toward (b), because the overall architecture stays "sans-IO": the "pause" can be thought of as part of the protocol state machine, a matter of state-machine responsibility. If the state machine consumes the bytes that triggered the pause, it can own deferring and replaying them, and we avoid duplicating subtle handshake logic in every driver. Otherwise we’ll need the same "connect_finalize must stash the last PDU and replay it later" logic in ironrdp-async, ironrdp-blocking, and any third-party driver using the connector directly. It also makes the API more misuse-resistant: with (a) it’s easy for the caller to forget to buffer, or to buffer the wrong input (e.g. after faulty framing). Finally, it keeps the API consistent with the existing should_perform_X / complete_X pattern: once we call complete_multitransport() or skip_multitransport(), the connector can immediately continue processing without requiring extra steps.

But I’m still thinking a bit on this, and double checking other parts of the spec.

@glamberson
Contributor Author

> But I’m still thinking a bit on this, and double checking other parts of the spec.

That's fine. I relocated for now back to Illinois in the USA, and I'm just now getting up and running here.

Regarding your inclinations, (b) makes sense. Keeps replay logic in one place and the driver API stays clean. I'll rework to buffer the Demand Active internally once you confirm.

Two questions that would help me get the rework right:

  1. Response PDU ownership: should complete_multitransport() accept the caller's MultitransportResponsePdus (connector writes them to output), or should the connector generate them from a success/failure result per request?
  2. Scope of #1108 (feat(connector,session): dispatch multitransport PDUs on IO channel): if the connector handles multitransport bootstrapping internally during the initial connection, does the session layer still need to surface MultitransportRequest for server-initiated multitransport during an active session (MS-RDPBCGR 2.2.15.1), or is that out of scope for now?

No rush, happy to wait until Tuesday.

Greg
