4 changes: 4 additions & 0 deletions site/config.toml
@@ -201,6 +201,10 @@ disqusShortname = "checkly"
pre = "/learn/icons/errors.svg"
weight = 350

[[menu.learn]]
name = "OpenTelemetry"
pre = "/learn/icons/opentelemetry.svg"
weight = 360

[markup.goldmark.renderer]
unsafe = true
14 changes: 7 additions & 7 deletions site/content/guides/empowering-developers-with-checkly.md
@@ -13,14 +13,14 @@ In today’s fast-paced development environment, engineers are under pressure to

Checkly solves this problem by providing a solution that bridges the gap between application developers and platform teams. By leveraging Checkly’s codified MaC approach, both groups can collaborate efficiently to create, configure, and manage monitors in a seamless way that fits within existing workflows.

Empowering Developers with Code-First Monitoring
## Empowering Developers with Code-First Monitoring
With the rise of shift-left and the age of empowering engineers, more teams are using code to configure tests, infrastructure, and deployment models. They are finding benefits like increased collaboration, auditability, and automation in these new paradigms that are revolutionizing the way they ship software.

Checkly fits neatly into this trend, offering software teams a codified approach to building and configuring their monitors and alerts. This means that monitors can be:

Created faster within the software delivery lifecycle
Tested and reviewed in CI/CD pipelines
Automated across services and teams
* Created faster within the software delivery lifecycle
* Tested and reviewed in CI/CD pipelines
* Automated across services and teams

No more throwing monitoring over the proverbial wall. Rather than relying on a separate platform or operations team to set up monitors, engineers can take full control of what gets monitored, when, and how. This can save time, reduce errors, and make the entire process more efficient.

@@ -89,7 +89,7 @@ curl --request POST \
```
As seen here, engineers can quickly configure an API check that runs every minute to ensure that the status code is 200. If there’s a failure, Checkly will immediately notify the team, allowing them to address the issue promptly.

Collaboration Between Dev and Platform Teams
## Collaboration Between Dev and Platform Teams
While a code-first approach to monitoring empowers application engineers, many teams include both developers and platform engineers who work together to build and operate complex systems. This is where Checkly’s flexibility and extensibility truly shine.

Platform teams often handle the configuration of complex alerts, thresholds, and scheduling across multiple environments. By codifying these aspects, platform engineers can provide a consistent monitoring “wrapper” around the application teams’ checks. This allows developers to focus on building and shipping code and adding simple checks without worrying about the operational intricacies of monitoring.
@@ -130,7 +130,7 @@ resource "checkly_check" "example_check_2" {
```
In this example, the platform team has set up detailed monitoring parameters, including response time thresholds and a retry strategy in case of failure. By wrapping these details into reusable configurations, the platform team allows application engineers to create new monitors that are consistent with the organization’s standards—without having to worry about the operational details.

Codified Alerts and Notifications
### Codified Alerts and Notifications
Checkly also integrates alert channels into the code, allowing teams to manage alerts for different monitors via a code-first approach. You can specify email alerts, Slack notifications, or other channels to ensure that the right team members are notified when something goes wrong.

For instance, here’s how to set up email alerts for a check:
@@ -150,7 +150,7 @@ resource "checkly_check" "example_check" {
}
```
By codifying alert configurations, platform engineers can ensure that the organization’s monitoring rules and notification protocols are followed consistently, even as application teams create new monitors.
Conclusion
## Conclusion
Checkly’s approach to monitoring via code gives engineering and operations teams the tools they need to keep applications running smoothly. Application engineers can take ownership of their monitors, ensuring that they’re set up efficiently and integrated into their workflows. Meanwhile, platform teams can manage and maintain a higher-level view, providing the necessary configurations and support for more complex systems.

Whether your team is fully developer-led, or you have a more traditional split between development and platform engineering, Checkly’s code-first monitoring solution ensures that everyone can collaborate smoothly and efficiently. As modern applications continue to grow in complexity, tools like Checkly are becoming essential to manage the intricacies of monitoring in a fast-moving development environment.
87 changes: 87 additions & 0 deletions site/content/learn/playwright/otel-getting-started.md
@@ -0,0 +1,87 @@
---
title: Learn How to Observe and Monitor your Software with OpenTelemetry
subTitle: A beginner's guide to OpenTelemetry
displayTitle: Getting started with OpenTelemetry
description: Learn OpenTelemetry with Checkly. Add monitoring to every piece of your stack with the open standards and open-source tools.
date: 2024-10-17
author: Nocnica Mellifera
githubUser: serverless-mom
displayDescription:
Learn more about Playwright & Monitoring with Checkly. Explore how to automate your web app monitoring with a reliable, programmable workflow.
metatags:
title: Learn OpenTelemetry - modern monitoring and observability

menu:
learn:
parent: "OpenTelemetry"

---

# An Introduction to Observability with OpenTelemetry

**Observability** is the practice of understanding the internal state of a system by examining the outputs it generates—such as logs, metrics, and traces. OpenTelemetry (OTel) plays a key role in modern observability by offering open standards for instrumenting code, gathering telemetry data, and managing this data through centralized collectors.



### Why Observability Matters

In systems that adopt **microservices** architecture, tracking system health becomes challenging. Unlike monolithic systems, where a few experts can oversee the whole system, microservices distribute responsibilities across many independent services. This fragmentation makes it difficult to pinpoint issues and monitor end-to-end system behavior. Observability addresses these gaps by enabling better monitoring and faster resolution of incidents.



### The Three Pillars of Observability

OpenTelemetry enables observability through three core data types:

1. **Metrics**:
- Numerical summaries of system behavior (e.g., CPU usage, request counts).
- Provide high-level insights into trends and overall performance.
- Metrics are efficient to collect and store, making them suitable for monitoring at scale.
2. **Logs**:
- Detailed records of events or states within a system.
- Offer a complete picture of system operations but can become unwieldy in large volumes.
- While useful for post-mortem analysis, starting with logs during a live incident may slow down troubleshooting.
3. **Traces**:
- Capture the lifecycle of a request as it moves through various services in a system.
- Tracing helps identify the components involved in a request and their performance (e.g., through waterfall charts).
- Distributed tracing extends this concept to microservices, ensuring that spans from different services are correlated correctly.
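
To make these three pillars concrete, here is a minimal sketch using the OpenTelemetry JavaScript API (assuming the `@opentelemetry/api` package and an SDK configured elsewhere; the service, span, and counter names are illustrative):

```ts
import { trace, metrics } from '@opentelemetry/api';

// A tracer and a meter; the names are arbitrary labels for this hypothetical service.
const tracer = trace.getTracer('checkout-service');
const meter = metrics.getMeter('checkout-service');

// Metric: a counter summarizing how many orders were processed.
const orderCounter = meter.createCounter('orders_processed', {
  description: 'Number of orders processed',
});

export async function processOrder(orderId: string) {
  // Trace: one span capturing this unit of work; calls made inside it become child spans.
  return tracer.startActiveSpan('process-order', async (span) => {
    try {
      span.setAttribute('order.id', orderId);
      orderCounter.add(1);
      // Log: most teams keep their existing logger; OpenTelemetry can correlate
      // log lines with the active trace so they show up next to this span.
      console.log(`processing order ${orderId}`);
    } finally {
      span.end();
    }
  });
}
```

Each pillar answers a different question: the counter shows how often orders are processed, the span shows how long one order took and where the time went, and the log line records what happened inside it.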

---

### The Role of OpenTelemetry

OpenTelemetry simplifies observability by standardizing how telemetry data is generated, collected, and transmitted. Its **open standards** ensure compatibility across diverse languages and platforms. In addition to standard libraries, the OpenTelemetry project provides tools like:

- **Instrumentation SDKs**: Automate the generation of telemetry data in supported languages (e.g., Java, Python, .NET).
- **OpenTelemetry Collector**: A flexible service that aggregates, processes, and exports telemetry data. The collector allows users to filter, batch, or transform data before sending it to observability backends such as Prometheus or Grafana.
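
As one example of those SDKs, the sketch below shows a typical Node.js tracing setup. It is a hedged sketch, assuming the `@opentelemetry/sdk-node`, `@opentelemetry/auto-instrumentations-node`, and `@opentelemetry/exporter-trace-otlp-http` packages; the service name and endpoint are placeholders:

```ts
// tracing.ts: load this file before the rest of the application starts.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  serviceName: 'checkout-service', // placeholder service name
  // Export spans over OTLP/HTTP; point this at a collector or any backend that accepts OTLP.
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
  // Auto-instrumentations create spans for common libraries (HTTP, Express, and so on).
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Flush any remaining telemetry when the process is asked to stop.
process.on('SIGTERM', () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```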

---

### Distributed Tracing with OpenTelemetry

Distributed tracing relies on propagating a **trace context** across services. Each service contributes spans to the trace, which are visualized in sequence to understand the request's journey. OpenTelemetry makes this possible by defining trace headers that are passed across service boundaries. The **OpenTelemetry Collector** plays a crucial role in collecting, stitching, and processing these spans to provide a comprehensive view of distributed transactions.
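
As an illustration of how that trace context crosses a service boundary, here is a sketch using the `@opentelemetry/api` propagation helpers (the URLs and span names are placeholders; with auto-instrumentation, a configured SDK injects and extracts these headers for you):

```ts
import { context, propagation, trace } from '@opentelemetry/api';

const tracer = trace.getTracer('frontend-service');

// Caller side: inject the active trace context (e.g. a W3C traceparent header)
// into the outgoing request so the next service can continue the same trace.
export async function callDownstream() {
  return tracer.startActiveSpan('call-downstream', async (span) => {
    const headers: Record<string, string> = {};
    propagation.inject(context.active(), headers);
    await fetch('http://orders.internal/api/orders', { headers }); // placeholder URL
    span.end();
  });
}

// Callee side: extract the incoming context and start the server span inside it,
// so its spans are recorded as children of the caller's span.
export function handleRequest(incomingHeaders: Record<string, string>) {
  const parentCtx = propagation.extract(context.active(), incomingHeaders);
  context.with(parentCtx, () => {
    const span = tracer.startSpan('handle-request');
    // ... do the work ...
    span.end();
  });
}
```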

---

### Getting Started with OpenTelemetry

To begin using OpenTelemetry, you can either:

- Send telemetry data directly to a backend (e.g., Prometheus) for quick experimentation.
- Use the OpenTelemetry Collector to manage data flow and apply advanced processing, such as removing personally identifiable information (PII) or optimizing data batching.

The flexibility of the collector enables smooth transitions between direct reporting and more complex data pipelines as your observability needs grow.
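
In practice, switching between those two options is often just a matter of where the OTLP exporter points. A small sketch follows; the endpoints are placeholders, and reading the standard `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` variable is only one way to make the choice configurable:

```ts
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Direct reporting: point the exporter at a backend that accepts OTLP.
// Collector pipeline: point it at a local collector instead, and let the collector
// batch, scrub PII, and fan out to one or more backends.
const endpoint =
  process.env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT ?? 'http://localhost:4318/v1/traces';

export const traceExporter = new OTLPTraceExporter({ url: endpoint });
```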



### Relevant Resources on OpenTelemetry

- **Metrics Overview**: Learn how OpenTelemetry handles metrics to provide high-level insights into your system's performance [here](https://opentelemetry.io/docs/specs/otel/metrics/).
- **Logging with OpenTelemetry**: Discover how OpenTelemetry integrates with existing logging libraries and enhances log data correlation across microservices [here](https://opentelemetry.io/docs/specs/otel/logs/).
- **Quick Start Guide**: A guide for setting up OpenTelemetry quickly to start monitoring your applications [here](https://opentelemetry.io/docs/quickstart/).

These resources explain the core pillars of observability, as well as how to use OpenTelemetry’s **Collector** to manage and export telemetry data to observability platforms like Prometheus and Grafana.

## Conclusion

Observability with OpenTelemetry empowers teams to quickly detect, understand, and resolve issues in microservices environments. By adopting this open framework, organizations can monitor complex systems without relying on proprietary tools, gaining a unified view of system health across services while keeping their monitoring scalable and vendor-neutral.
1 change: 1 addition & 0 deletions site/static/learn/icons/opentelemetry.svg