```shell
cargo add batched
```
Or add this to your `Cargo.toml`:
```toml
[dependencies]
batched = "0.2.8"
```
### Nightly Rust
Due to the use of advanced features, `batched` requires a nightly Rust compiler.
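One common way to pin nightly for just this project is a `rust-toolchain.toml` file (assuming you use rustup; the pin below is illustrative):

```toml
# rust-toolchain.toml — rustup picks this up automatically in the project root
[toolchain]
channel = "nightly"
```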
## #[batched]
- **limit**: Maximum number of items that can be grouped and processed in a single batch. (required)
- **concurrent**: Maximum number of batched tasks running concurrently. (default: `Infinity`)
- **asynchronous**: If `true`, the caller does not wait for the batch to complete, and the return value is `()`. (default: `false`)
- **window**: Maximum amount of time (in milliseconds) the background thread waits after the first call before processing a batch. (required)
- **window[x]**: Maximum amount of time (in milliseconds) the background thread waits after the first call before processing a batch when the buffer size is <= `x`. This allows more granular control of the batching window based on the current load: for example, a shorter window when the buffer holds few items reduces latency, while a longer window when it holds more maximizes batching efficiency.
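Putting the options together, a usage sketch (treat this as pseudocode: the exact attribute-argument syntax, and the `Row` type and `bulk_insert` helper, are assumptions for illustration — consult the crate documentation for the real form):

```rust
use batched::batched;

// Assumed syntax: `limit` and `window` are required per the option list above.
#[batched(limit = 1000, window = 100)]
async fn insert_rows(rows: Vec<Row>) -> usize {
    // The whole batch is handled in one bulk operation.
    bulk_insert(rows).await
}
```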
The target function must have a single input argument, a vector of items (`Vec<T>`).
The return value of the batched function is propagated (cloned) to all async calls of the batch, unless the batched function returns a `Vec<T>`, in which case the return value for each call is pulled from the iterator in the same order as the input.
If the return value is not an iterator, the target function's return type must implement `Clone`.
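The two fan-out modes above can be sketched in plain Rust (std only, no macro involved; `fan_out_vec` and `fan_out_clone` are hypothetical helpers for illustration):

```rust
// Mode 1: the batched fn returns Vec<T> — the i-th caller
// receives the i-th element, in the same order as the inputs.
fn fan_out_vec<T>(batch_result: Vec<T>) -> Vec<T> {
    batch_result
}

// Mode 2: any other return type — every caller receives a clone
// of the single return value, hence the `Clone` bound.
fn fan_out_clone<T: Clone>(batch_result: T, n_callers: usize) -> Vec<T> {
    vec![batch_result; n_callers]
}

fn main() {
    // Vec return: per-call results, input order preserved.
    assert_eq!(fan_out_vec(vec![10, 20, 30]), vec![10, 20, 30]);
    // Non-Vec return: the same value is cloned to all three callers.
    assert_eq!(fan_out_clone("ok", 3), vec!["ok", "ok", "ok"]);
    println!("fan-out semantics verified");
}
```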
## Prerequisites
- Built for async environments (tokio); it will not work without a tokio async runtime
- The target function must be an async function
- Not supported inside structs:
```rust
struct A;
impl A {
    // ...
}
```
### [`tracing_span`]
This feature automatically adds tracing spans to call functions for batched requests (`x`, `x_multiple`).
### [`tracing_opentelemetry`]
This feature adds support for linking spans from callers to the inner batched call when using OpenTelemetry. Depending on whether your OpenTelemetry client supports it, you should be able to see the span linked to the batched call.