
Commit 9f486f5

philkra, atovpeko, and erimatnor authored
release 2.24.0 (#4599)
* release 2.24.0
* missing compress_chunk option
* review
* recompress option
* concurrent merge option
* Expand on new concurrent mode for merge_chunks()
* Remove unnecessary newline
* Fix formatting issue that prevented docs build
* Fix broken links

---------

Co-authored-by: atovpeko <[email protected]>
Co-authored-by: Anastasiia Tovpeko <[email protected]>
Co-authored-by: Erik Nordström <[email protected]>
1 parent 6027af7 commit 9f486f5

File tree

13 files changed, +183 -49 lines changed


_partials/_migrate_self_postgres_timescaledb_compatibility.md

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@ $PG 15 support is deprecated and will be removed from $TIMESCALE_DB in June 2026

 | $TIMESCALE_DB version |$PG 18|$PG 17|$PG 16|$PG 15|$PG 14|$PG 13|$PG 12|$PG 11|$PG 10|
 |-----------------------|-|-|-|-|-|-|-|-|-|
+| 2.24.x |||||||||||
 | 2.23.x |||||||||||
 | 2.22.x |||||||||||
 | 2.21.x |||||||||||

api/compression/compress_chunk.md

Lines changed: 5 additions & 3 deletions
@@ -51,9 +51,11 @@ SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');

 ## Optional arguments

-|Name|Type|Description|
-|---|---|---|
-| `if_not_compressed` | BOOLEAN | Disabling this will make the function error out on chunks that are already compressed. Defaults to true.|
+| Name | Type | Default | Required | Description |
+|----------------------|--|---------|--|----------------------------------------------------------------------------------------------------------------------------------------------------|
+| `chunk` | REGCLASS | - || Name of the chunk to add to the $COLUMNSTORE. |
+| `if_not_columnstore` | BOOLEAN | `true` || Set to `false` so this job fails with an error rather than a warning if `chunk` is already in the $COLUMNSTORE. |
+| `recompress` | BOOLEAN | `false` || Set to `true` to recompress the chunk. In-memory recompression is attempted first; otherwise the operation falls back to a full decompress and compress. |

 ## Returns
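For reference, a minimal sketch of the new `recompress` option in use; the chunk name is reused from the sample above and is only an example:

```sql
-- Recompress a chunk that received new data after it was already compressed.
-- In-memory recompression is attempted first; otherwise the operation falls
-- back to a full decompress and compress.
SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk', recompress => true);
```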

api/continuous-aggregates/add_policies.md

Lines changed: 6 additions & 10 deletions
@@ -10,15 +10,19 @@ api:
 products: [cloud, self_hosted, mst]
 ---

-import Experimental from "versionContent/_partials/_experimental.mdx";
-
 <!-- markdownlint-disable-next-line line-length -->
 # add_policies() <Tag type="community" content="Community" /><Tag type="experimental" content="Experimental" />

 Add refresh, compression, and data retention policies to a continuous aggregate
 in one step. The added compression and retention policies apply to the
 continuous aggregate, _not_ to the original hypertable.

+<Highlight type="warning">
+
+This experimental function will be removed in future releases. Use the [`add_continuous_aggregate_policy()`][add_continuous_aggregate_policy] function to add a policy instead.
+
+</Highlight>
+
 ```sql
 timescaledb_experimental.add_policies(
     relation REGCLASS,
@@ -30,14 +34,6 @@ timescaledb_experimental.add_policies(
 ) RETURNS BOOL
 ```

-<Experimental />
-
-<Highlight type="note">
-`add_policies()` does not allow the `schedule_interval` for the continuous aggregate to be set, instead using a default value of 1 hour.
-
-If you would like to set this add your policies manually (see [`add_continuous_aggregate_policy`][add_continuous_aggregate_policy]).
-</Highlight>
-
 ## Samples

 Given a continuous aggregate named `example_continuous_aggregate`, add three
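For reference, a rough sketch of the recommended replacement, assuming a continuous aggregate named `example_continuous_aggregate`; the offsets and schedule below are illustrative, not defaults:

```sql
-- Add a refresh policy directly instead of the experimental add_policies().
SELECT add_continuous_aggregate_policy('example_continuous_aggregate',
  start_offset      => INTERVAL '1 month',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour'
);
```

Retention policies for the continuous aggregate can be added the same way with `add_retention_policy()`.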

api/continuous-aggregates/alter_policies.md

Lines changed: 10 additions & 4 deletions
@@ -11,15 +11,19 @@ api:
 products: [cloud, self_hosted, mst]
 ---

-import Experimental from "versionContent/_partials/_experimental.mdx";
-
 <!-- markdownlint-disable-next-line line-length -->
 # alter_policies() <Tag type="community" content="Community" /><Tag type="experimental" content="Experimental" />

 Alter refresh, columnstore, or data retention policies on a continuous
 aggregate. The altered columnstore and retention policies apply to the
 continuous aggregate, _not_ to the original hypertable.

+<Highlight type="warning">
+
+This experimental function will be removed in future releases. Use the [`alter_job()`][alter_job] function to modify a policy instead.
+
+</Highlight>
+
 ```sql
 timescaledb_experimental.alter_policies(
     relation REGCLASS,
@@ -31,8 +35,6 @@ timescaledb_experimental.alter_policies(
 ) RETURNS BOOL
 ```

-<Experimental />
-
 ## Samples

 Given a continuous aggregate named `example_continuous_aggregate` with an
@@ -70,3 +72,7 @@ time bucket is based on integers.
 ## Returns

 Returns true if successful.
+
+<!-- vale Vale.Terms = NO -->
+[alter_job]: /api/:currentVersion:/jobs-automation/alter_job/
+<!-- vale Vale.Terms = YES -->
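For reference, a minimal sketch of the `alter_job()` route; the `proc_name` filter and the job id `1000` are illustrative placeholders:

```sql
-- Find the refresh policy job attached to a continuous aggregate.
SELECT job_id, proc_name, schedule_interval
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_refresh_continuous_aggregate';

-- Change how often that policy runs (1000 is a placeholder job id).
SELECT alter_job(1000, schedule_interval => INTERVAL '2 hours');
```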

api/continuous-aggregates/remove_all_policies.md

Lines changed: 7 additions & 5 deletions
@@ -10,24 +10,26 @@ api:
 products: [cloud, self_hosted, mst]
 ---

-import Experimental from "versionContent/_partials/_experimental.mdx";
-
 <!-- markdownlint-disable-next-line line-length -->
 # remove_all_policies() <Tag type="community" content="Community" /><Tag type="experimental" content="Experimental" />

 Remove all policies from a continuous aggregate. The removed columnstore and
 retention policies apply to the continuous aggregate, _not_ to the original
 hypertable.

+<Highlight type="warning">
+
+This experimental function will be removed in future releases. Use the [`delete_job()`][delete_job] function to delete policies instead.
+
+</Highlight>
+
 ```sql
 timescaledb_experimental.remove_all_policies(
     relation REGCLASS,
     if_exists BOOL = false
 ) RETURNS BOOL
 ```

-<Experimental />
-
 ## Samples

 Remove all policies from a continuous aggregate named
@@ -54,4 +56,4 @@ SELECT timescaledb_experimental.remove_all_policies('example_continuous_aggregat

 Returns true if successful.

-
+[delete_job]: /api/:currentVersion:/jobs-automation/delete_job/
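For reference, a minimal sketch of removing a policy with `delete_job()`; the job id `1001` is a placeholder read from the jobs view:

```sql
-- List policy jobs, then delete the ones that are no longer needed.
SELECT job_id, proc_name FROM timescaledb_information.jobs;

SELECT delete_job(1001);  -- placeholder job id, for example a retention policy
```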

api/continuous-aggregates/remove_policies.md

Lines changed: 7 additions & 5 deletions
@@ -10,15 +10,19 @@ api:
 products: [cloud, self_hosted, mst]
 ---

-import Experimental from "versionContent/_partials/_experimental.mdx";
-
 <!-- markdownlint-disable-next-line line-length -->
 # remove_policies() <Tag type="community" content="Community" /><Tag type="experimental" content="Experimental" />

 Remove refresh, columnstore, and data retention policies from a continuous
 aggregate. The removed columnstore and retention policies apply to the
 continuous aggregate, _not_ to the original hypertable.

+<Highlight type="warning">
+
+This experimental function will be removed in future releases. Use the [`delete_job()`][delete_job] function to delete policies instead.
+
+</Highlight>
+
 ```sql
 timescaledb_experimental.remove_policies(
     relation REGCLASS,
@@ -30,8 +34,6 @@ timescaledb_experimental.remove_policies(
 To remove all policies on a continuous aggregate, see
 [`remove_all_policies()`][remove-all-policies].

-<Experimental />
-
 ## Samples

 Given a continuous aggregate named `example_continuous_aggregate` with a refresh
@@ -66,4 +68,4 @@ SELECT timescaledb_experimental.remove_policies(

 Returns true if successful.

-[remove-all-policies]: /api/:currentVersion:/continuous-aggregates/remove_all_policies/
+[delete_job]: /api/:currentVersion:/jobs-automation/delete_job/

api/continuous-aggregates/show_policies.md

Lines changed: 7 additions & 5 deletions
@@ -10,21 +10,23 @@ api:
 products: [cloud, self_hosted, mst]
 ---

-import Experimental from "versionContent/_partials/_experimental.mdx";
-
 <!-- markdownlint-disable-next-line line-length -->
 # show_policies() <Tag type="community" content="Community" /><Tag type="experimental" content="Experimental" />

 Show all policies that are currently set on a continuous aggregate.

+<Highlight type="warning">
+
+This experimental function will be removed in future releases. Query the [`timescaledb_information.jobs`][jobs-view] view instead.
+
+</Highlight>
+
 ```sql
 timescaledb_experimental.show_policies(
     relation REGCLASS
 ) RETURNS SETOF JSONB
 ```

-<Experimental />
-
 ## Samples

 Given a continuous aggregate named `example_continuous_aggregate`, show all the
@@ -56,4 +58,4 @@ show_policies
 |-|-|-|
 |`show_policies`|`JSONB`|Details for each policy set on the continuous aggregate|

-
+[jobs-view]: /api/:currentVersion:/informational-views/jobs/
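For reference, a rough equivalent using the jobs view directly; the column list and the `proc_name` filter are illustrative:

```sql
-- Inspect policy jobs, including their schedules and configuration.
SELECT job_id, application_name, schedule_interval, config
FROM timescaledb_information.jobs
WHERE proc_name LIKE 'policy_%';
```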

api/hypercore/convert_to_columnstore.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');
 |----------------------|--|---------|--|----------------------------------------------------------------------------------------------------------------------------------------------------|
 | `chunk` | REGCLASS | - || Name of the chunk to add to the $COLUMNSTORE. |
 | `if_not_columnstore` | BOOLEAN | `true` || Set to `false` so this job fails with an error rather than a warning if `chunk` is already in the $COLUMNSTORE. |
-| `recompress` | BOOLEAN | `false` || Set to `true` to add a chunk that had more data inserted after being added to the $COLUMNSTORE. |
+| `recompress` | BOOLEAN | `false` || Set to `true` to recompress the chunk. In-memory recompression is attempted first; otherwise the operation falls back to a full decompress and compress. |

 ## Returns
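For reference, a minimal sketch of the `recompress` option on `convert_to_columnstore()`; the chunk name comes from the hunk header above and is only an example:

```sql
-- Recompress a columnstore chunk that received new rows after conversion.
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk', recompress => true);
```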

Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+---
+api_name: _timescaledb_functions.chunk_rewrite_cleanup()
+excerpt: Clean up state from an aborted chunk rewrite operation
+topics: [hypertables]
+keywords: [hypertables, chunk, merge]
+api:
+  license: community
+  type: procedure
+products: [cloud, mst, self_hosted]
+---
+
+# _timescaledb_functions.chunk_rewrite_cleanup()
+
+Chunks can be rewritten by, for example, a [merge][merge-chunks] or a
+[split][split-chunk] operation. When such a rewrite runs in concurrent mode it
+happens across two transactions: the first one rewrites the data to new
+temporary relations without blocking reads, while the second transaction
+completes the operation by swapping the relations using heavy locks. If the
+second transaction does not complete successfully there might be temporary
+relations left on disk. These relations can take up a significant amount of
+disk space so they need to be cleaned up using this procedure.
+
+The procedure only cleans up relations that:
+
+* the current user has owner privileges for
+* the current user can lock without blocking
+
+## Samples
+
+* Check for any non-completed rewrite operations:
+
+  ```sql
+  SELECT * FROM _timescaledb_catalog.chunk_rewrite;
+                chunk_relid              |              new_relid
+  ----------------------------------------+-------------------------------------
+   _timescaledb_internal._hyper_1_2_chunk | _timescaledb_internal.pg_temp_18942
+   _timescaledb_internal._hyper_1_1_chunk | _timescaledb_internal.pg_temp_18942
+  (2 rows)
+  ```
+
+* Clean up non-completed rewrite operations:
+
+  ```sql
+  CALL _timescaledb_functions.chunk_rewrite_cleanup();
+  NOTICE: cleaned up 2 orphaned rewrite relations, skipped 0
+
+  SELECT * FROM _timescaledb_catalog.chunk_rewrite;
+   chunk_relid | new_relid
+  -------------+-----------
+  (0 rows)
+  ```
+
+[merge-chunks]: /api/:currentVersion:/hypertable/merge_chunks
+[split-chunk]: /api/:currentVersion:/hypertable/split_chunk

api/hypertable/merge_chunks.md

Lines changed: 35 additions & 12 deletions
@@ -11,45 +11,68 @@ products: [cloud, mst, self_hosted]

 # merge_chunks()

-Merge two or more chunks into one.
+Merge two or more chunks into one.

-The partition boundaries for the new chunk is the union of all partitions of the merged chunks.
+The partition boundaries for the new chunk are the union of all partitions of the merged chunks.
 The new chunk retains the name, constraints, and triggers of the _first_ chunk in the partition order.

-You can only merge chunks that have directly adjacent partitions. It is not possible to merge
-chunks that have another chunk, or an empty range between them in any of the partitioning
+You can only merge chunks that have directly adjacent partitions. It is not possible to merge
+chunks that have another chunk, or an empty range between them in any of the partitioning
 dimensions.

 Chunk merging has the following limitations. You cannot:

-* Merge chunks with tiered data
-* Read or write from the chunks while they are being merged
+* Merge chunks with tiered data
+* Write to chunks that are being merged

-<Since2180 />
+## Concurrent mode
+
+When a merge is executed using the `concurrently` option, other processes can
+simultaneously read from the chunks being merged and insert into other chunks.
+The merge happens across two transactions: the first one rewrites the chunks
+into a temporary relation without taking any locks that prevent reads, while
+the second transaction locks out all other operations before swapping the old
+relations for the new one. The second operation completes quickly, so it
+should not significantly affect other operations.
+
+If a concurrent merge fails or is aborted during the second transaction, the
+temporary relation might be left on disk. This could consume significant disk
+space. To clean up such non-completed merges, use the procedure
+[`_timescaledb_functions.chunk_rewrite_cleanup()`][chunk-rewrite-cleanup].

 ## Samples

-- Merge two chunks:
+* Merge two chunks:

 ```sql
 CALL merge_chunks('_timescaledb_internal._hyper_1_1_chunk', '_timescaledb_internal._hyper_1_2_chunk');
 ```

-- Merge more than two chunks:
+* Merge more than two chunks:

 ```sql
 CALL merge_chunks('{_timescaledb_internal._hyper_1_1_chunk, _timescaledb_internal._hyper_1_2_chunk, _timescaledb_internal._hyper_1_3_chunk}');
 ```

+* Merge two chunks concurrently, allowing reads:
+
+```sql
+CALL merge_chunks('_timescaledb_internal._hyper_1_1_chunk', '_timescaledb_internal._hyper_1_2_chunk', concurrently => true);
+```
+
+* To merge more than two chunks concurrently, use [`merge_chunks_concurrently()`][merge-chunks-concurrently].

 ## Arguments

 You can merge either two chunks, or an arbitrary number of chunks specified as an array of chunk identifiers.
-When you call `merge_chunks`, you must specify either `chunk1` and `chunk2`, or `chunks`. You cannot use both
+When you call `merge_chunks`, you must specify either `chunk1` and `chunk2`, or `chunks`. You cannot use both
 arguments.

-
 | Name | Type | Default | Required | Description |
 |--------------------|-------------|--|--|------------------------------------------------|
 | `chunk1`, `chunk2` | REGCLASS | - || The two chunk to merge in partition order |
-| `chunks` | REGCLASS[] |- || The array of chunks to merge in partition order |
+| `chunks` | REGCLASS[] | - || The array of chunks to merge in partition order |
+| `concurrently` | BOOL | `false` || Set to `true` to allow reads on the chunks being merged |
+
+[chunk-rewrite-cleanup]: /api/:currentVersion:/hypertable/chunk_rewrite_cleanup
+[merge-chunks-concurrently]: /api/:currentVersion:/hypertable/merge_chunks_concurrently
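For reference, a sketch of how merge candidates might be picked before calling `merge_chunks()`; the hypertable name `conditions` is an assumption:

```sql
-- List chunks of a hypertable in partition (time) order to find adjacent
-- candidates for merging.
SELECT format('%I.%I', chunk_schema, chunk_name) AS chunk, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'conditions'
ORDER BY range_start;
```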
