
Conversation

@oyvindlr
Contributor

Downsampling, especially with peak mode and "auto", is a way to significantly speed up the zooming performance of line plots, because it limits the number of points that need to be drawn at each redraw. However, when a lot of data is shown, the computation of the downsampled signal itself becomes a bottleneck.
In this pull request, I have implemented caching of the downsampled signal, so that it does not have to be recalculated on every change of view. For fixed downsampling, this has no drawbacks besides a slight increase in memory use.

For auto-downsampling, it is a bit more complicated. I have made it so that a single cached downsampled signal is used for all zoom levels until the view is zoomed in far enough that too few cached samples remain to render the signal nicely, at which point it falls back to the existing "real-time" computation. Since, at that point, there are far fewer samples in the clipped signal to process, the performance of the real-time downsampling is usually good enough.
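The strategy described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: `peak_downsample`, `CachedDownsampler`, and the `MIN_VISIBLE` threshold are invented names, and the real implementation lives inside `PlotDataItem`.

```python
import numpy as np

def peak_downsample(y, ds):
    """Peak-mode downsampling: keep the max and min of each bin of `ds`
    samples, interleaved, so rendered extremes are preserved (mirrors the
    idea behind pyqtgraph's 'peak' mode)."""
    n = (len(y) // ds) * ds
    binned = y[:n].reshape(-1, ds)
    out = np.empty(2 * binned.shape[0], dtype=y.dtype)
    out[0::2] = binned.max(axis=1)
    out[1::2] = binned.min(axis=1)
    return out

class CachedDownsampler:
    """Compute one downsampled signal up front and reuse it across zoom
    levels; fall back to on-the-fly downsampling once too few cached
    samples remain in the visible range."""
    MIN_VISIBLE = 1000  # assumed threshold; not the PR's actual value

    def __init__(self, y, ds):
        self.y = y
        self.ds = ds
        self.cache = peak_downsample(y, ds)  # computed once, reused on redraws

    def visible(self, start, stop):
        # Map the view range onto the cache (2 output samples per bin).
        c0, c1 = 2 * (start // self.ds), 2 * (stop // self.ds)
        if c1 - c0 >= self.MIN_VISIBLE:
            return self.cache[c0:c1]  # fast path: slice the cache
        # Zoomed in too far: downsample just the clipped region, which is
        # now small enough that real-time computation is cheap.
        ds = max(1, (stop - start) // (self.MIN_VISIBLE // 2))
        return peak_downsample(self.y[start:stop], ds)
```

The key trade-off is the one named in the description: the cache costs one extra downsampled copy of the data in memory, in exchange for turning every redraw's O(n) recomputation into a slice.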

This solves issue #3301

@pijyoi mentioned this pull request Jun 24, 2025
@oyvindlr
Contributor Author

I see there is a bit more to add, such as tests, documentation fixes, and implementing methods to change these settings in PlotItem. I'm willing to do that if there is interest in merging this change.
Here is a demo of how performance improves for very long plots (I couldn't attach code files, unfortunately). Use right-button zoom, and try with and without caching.

import sys
import numpy as np
from PyQt6.QtWidgets import (
    QApplication,
    QMainWindow,
    QVBoxLayout,
    QToolButton,
    QWidget,
    QLabel,
)
from PyQt6 import QtCore

import pyqtgraph as pg


class wait_cursor:
    def __enter__(self):
        QApplication.setOverrideCursor(QtCore.Qt.CursorShape.WaitCursor)

    def __exit__(self, exc_type, exc_value, traceback):
        QApplication.restoreOverrideCursor()


class TimeSeriesPlot(QMainWindow):
    def __init__(self):
        super().__init__()

        self.setWindowTitle("Demo PlotDataItem downsampling")
        self.setGeometry(100, 100, 800, 500)

        # Main widget and layout
        main_widget = QWidget()
        layout = QVBoxLayout()

        # PlotWidget
        self.plot_widget = pg.PlotWidget()
        self.plot_widget.setClipToView(True)
        self.plot_widget.setDownsampling(ds=1, auto=True, mode="peak")
        layout.addWidget(self.plot_widget)

        # Selectable ToolButton
        self.tool_button = QToolButton()
        self.tool_button.setText("Use downsampling cache")
        self.tool_button.setCheckable(True)
        self.tool_button.clicked.connect(self.on_tool_button_toggled)
        cache_label = QLabel("Not using cache")
        cache_label.setStyleSheet("color: red")
        layout.addWidget(cache_label)
        self.cache_label = cache_label
        layout.addWidget(self.tool_button)

        # Apply layout
        main_widget.setLayout(layout)
        self.setCentralWidget(main_widget)

        # Plot data
        self.plot_random_time_series(1, 400_000_000)

    def plot_random_time_series(self, num_lines, length):
        x = np.arange(0, length)
        self.items = []
        for i in range(num_lines):
            y = np.random.normal(size=length)
            item = self.plot_widget.plot(x, y, useDownsamplingCache=False, clear=False)
            self.items.append(item)

    def on_tool_button_toggled(self):
        with wait_cursor():
            for item in self.items:
                item.setDownsamplingCacheMode(useCache=self.tool_button.isChecked())
        if self.tool_button.isChecked():
            self.cache_label.setText("Using cache")
            self.cache_label.setStyleSheet("color: green")
        else:
            self.cache_label.setText("Not using cache")
            self.cache_label.setStyleSheet("color: red")


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = TimeSeriesPlot()
    window.show()
    sys.exit(app.exec())

@j9ac9k
Member

j9ac9k commented Nov 15, 2025

Hi @oyvindlr

Sorry for not following up sooner on this. There is definitely a desire to merge this: displaying large amounts of data quickly is exactly what this library is intended to handle well, and this change certainly helps in that area!

parser.add_argument(
    '--signal-length', '-l',
    type=int,
    default=500_000_000,  # Huge signal that actually shows benefit of cache
Contributor

Not sure that 500M is an appropriate or safe default. That's 8 GB just for the x and y inputs, and another ~12 GB for the QPainterPath if that gets instantiated. How much RAM does it show being used?

Contributor Author

It uses 8 GB. I've found that you need huge signals before the caching starts to matter, since it only pays off once the time spent computing the downsampled signal rivals the time spent drawing it. I've reduced the default to 10M points now; you'll still notice a difference, but not as large a one.
I must admit that when I was implementing this, I thought I saw a much bigger improvement. I have been using this feature in an app for a long time, though, so I consider it "field proven".
