
Commit 332c53e

Don't merge yet. PR'ed as a sanity check on how we're adding OTel
1 parent 600633c commit 332c53e

File tree

4 files changed (+99 / -7 lines)

site/config.toml

Lines changed: 4 additions & 0 deletions
@@ -201,6 +201,10 @@ disqusShortname = "checkly"
 pre = "/learn/icons/errors.svg"
 weight = 350
 
+[[menu.learn]]
+name = "OpenTelemetry"
+pre = "/learn/icons/opentelemetry.svg"
+weight = 360
 
 [markup.goldmark.renderer]
 unsafe = true

site/content/guides/empowering-developers-with-checkly.md

Lines changed: 7 additions & 7 deletions
@@ -13,14 +13,14 @@ In today’s fast-paced development environment, engineers are under pressure to
 
 Checkly solves this problem by providing a solution that bridges the gap between application developers and platform teams. By leveraging Checkly’s codified MaC approach, both groups can collaborate efficiently to create, configure, and manage monitors in a seamless way that fits within existing workflows.
 
-Empowering Developers with Code-First Monitoring
+## Empowering Developers with Code-First Monitoring
 With the rise of shift-left and the age of empowering engineers, more teams are using code to configure tests, infrastructure, and deployment models. They are finding benefits like increased collaboration, auditability, and automation in these new paradigms that are revolutionizing the way they ship software.
 
 Checkly fits neatly into this trend, offering software teams a codified approach to building and configuring their monitors and alerts. This means that monitors can be:
 
-Created faster within the software delivery lifecycle
-Tested and reviewed in CI/CD pipelines
-Automated across services and teams
+* Created faster within the software delivery lifecycle
+* Tested and reviewed in CI/CD pipelines
+* Automated across services and teams
 
 No more throwing monitoring over the proverbial wall. Rather than relying on a separate platform or operations team to set up monitors, engineers can take full control of what gets monitored, when, and how. This can save time, reduce errors, and make the entire process more efficient.
 
@@ -89,7 +89,7 @@ curl --request POST \
 ```
 As seen here, engineers can quickly configure an API check that runs every minute to ensure that the status code is 200. If there’s a failure, Checkly will immediately notify the team, allowing them to address the issue promptly.
 
-Collaboration Between Dev and Platform Teams
+## Collaboration Between Dev and Platform Teams
 While a code-first approach to monitoring empowers application engineers, many teams include both developers and platform engineers who work together to build and operate complex systems. This is where Checkly’s flexibility and extensibility truly shines.
 
 Platform teams often handle the configuration of complex alerts, thresholds, and scheduling across multiple environments. By codifying these aspects, platform engineers can provide a consistent monitoring “wrapper” around the application teams’ checks. This allows developers to focus on building and shipping code and adding simple checks without worrying about the operational intricacies of monitoring.
@@ -130,7 +130,7 @@ resource "checkly_check" "example_check_2" {
 ```
 In this example, the platform team has set up detailed monitoring parameters, including response time thresholds and a retry strategy in case of failure. By wrapping these details into reusable configurations, the platform team allows application engineers to create new monitors that are consistent with the organization’s standards—without having to worry about the operational details.
 
-Codified Alerts and Notifications
+### Codified Alerts and Notifications
 Checkly also integrates alert channels into the code, allowing teams to manage alerts for different monitors via a code-first approach. You can specify email alerts, Slack notifications, or other channels to ensure that the right team members are notified when something goes wrong.
 
 For instance, here’s how to set up email alerts for a check:
@@ -150,7 +150,7 @@ resource "checkly_check" "example_check" {
 }
 ```
 By codifying alert configurations, platform engineers can ensure that the organization’s monitoring rules and notification protocols are followed consistently, even as application teams create new monitors.
-Conclusion
+## Conclusion
 Checkly’s approach to monitoring via code gives engineering and operations teams the tools they need to keep applications running smoothly. Application engineers can take ownership of their monitors, ensuring that they’re set up efficiently and integrated into their workflows. Meanwhile, platform teams can manage and maintain a higher-level view, providing the necessary configurations and support for more complex systems.
 
 Whether your team is fully developer-led, or you have a more traditional split between development and platform engineering, Checkly’s code-first monitoring solution ensures that everyone can collaborate smoothly and efficiently. As modern applications continue to grow in complexity, tools like Checkly are becoming essential to manage the intricacies of monitoring in a fast-moving development environment.
Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
+---
+title: Learn How to Observe and Monitor your Software with OpenTelemetry
+subTitle: A beginner's guide to OpenTelemetry
+displayTitle: Getting started with OpenTelemetry
+description: Learn OpenTelemetry with Checkly. Add monitoring to every piece of your stack with the open standards and open-source tools.
+date: 2024-10-17
+author: Nocnica Mellifera
+githubUser: serverless-mom
+displayDescription:
+  Learn more about OpenTelemetry & monitoring with Checkly. Explore how to observe every piece of your stack with a reliable, programmable monitoring workflow.
+metatags:
+  title: Learn OpenTelemetry - modern monitoring and observability
+
+menu:
+  learn:
+    parent: "OpenTelemetry"
+
+---
+
+# An Introduction to Observability with OpenTelemetry
+
+**Observability** is the practice of understanding the internal state of a system by examining the outputs it generates—such as logs, metrics, and traces. OpenTelemetry (OTel) plays a key role in modern observability by offering open standards for instrumenting code, gathering telemetry data, and managing this data through centralized collectors.
+
+
+
+### Why Observability Matters
+
+In systems that adopt **microservices** architecture, tracking system health becomes challenging. Unlike monolithic systems, where a few experts can oversee the whole system, microservices distribute responsibilities across many independent services. This fragmentation makes it difficult to pinpoint issues and monitor end-to-end system behavior. Observability addresses these gaps by enabling better monitoring and faster resolution of incidents.
+
+
+
+### The Three Pillars of Observability
+
+OpenTelemetry enables observability through three core data types:
+
+1. **Metrics**:
+   - Numerical summaries of system behavior (e.g., CPU usage, request counts).
+   - Provide high-level insights into trends and overall performance.
+   - Metrics are efficient to collect and store, making them suitable for monitoring at scale.
+2. **Logs**:
+   - Detailed records of events or states within a system.
+   - Offer a complete picture of system operations but can become unwieldy in large volumes.
+   - While useful for post-mortem analysis, starting with logs during a live incident may slow down troubleshooting.
+3. **Traces**:
+   - Capture the lifecycle of a request as it moves through various services in a system.
+   - Tracing helps identify the components involved in a request and their performance (e.g., through waterfall charts).
+   - Distributed tracing extends this concept to microservices, ensuring that spans from different services are correlated correctly.
+
+---
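
To make the metrics pillar concrete, here is a minimal Python sketch (illustrative only, not part of the diff above) that records a request counter with the OpenTelemetry SDK and prints it to the console; the meter, metric, and attribute names are assumptions.

```python
# Minimal metrics sketch (illustrative, not part of this commit).
# Requires: pip install opentelemetry-sdk
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export metrics to stdout every few seconds; a real setup would export via OTLP.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=5000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("checkout-service")  # hypothetical service name
request_counter = meter.create_counter(
    "http.server.request_count",
    unit="1",
    description="Number of handled HTTP requests",
)

# Record one data point with attributes describing the request.
request_counter.add(1, {"http.route": "/checkout", "http.status_code": 200})
```

A counter like this is cheap to emit on every request, which is exactly why metrics are the pillar suited to monitoring at scale.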
+
+### The Role of OpenTelemetry
+
+OpenTelemetry simplifies observability by standardizing how telemetry data is generated, collected, and transmitted. Its **open standards** ensure compatibility across diverse languages and platforms. In addition to standard libraries, the OpenTelemetry project provides tools like:
+
+- **Instrumentation SDKs**: Automate the generation of telemetry data in supported languages (e.g., Java, Python, .NET).
+- **OpenTelemetry Collector**: A flexible service that aggregates, processes, and exports telemetry data. The collector allows users to filter, batch, or transform data before sending it to observability backends such as Prometheus or Grafana.
+
+---
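
As a hedged illustration of how the SDK and the Collector fit together, the Python sketch below (not part of the diff above) batches spans and ships them over OTLP/HTTP to a Collector assumed to be listening on localhost:4318; the endpoint and service name are assumptions.

```python
# Sketch: send spans from the Python SDK to an OpenTelemetry Collector over OTLP/HTTP.
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# The Collector endpoint is an assumption; adjust it to wherever your Collector runs.
exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")

provider = TracerProvider(resource=Resource.create({"service.name": "payment-service"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Any span created below is batched and forwarded to the Collector,
# which can filter, enrich, or fan it out to backends such as Prometheus or Grafana.
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("payment.amount", 42.0)
```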
+
+### Distributed Tracing with OpenTelemetry
+
+Distributed tracing relies on propagating a **trace context** across services. Each service contributes spans to the trace, which are visualized in sequence to understand the request's journey. OpenTelemetry makes this possible by defining trace headers that are passed across service boundaries. The **OpenTelemetry Collector** plays a crucial role in collecting, stitching, and processing these spans to provide a comprehensive view of distributed transactions.
+
+---
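
To show what "trace headers passed across service boundaries" looks like in practice, here is a hedged Python sketch (not from this commit) using the SDK's propagation helpers: the caller injects a W3C `traceparent` header into its outgoing request, and the callee extracts it so both spans land in the same trace. Function and service names are illustrative.

```python
# Sketch: propagate trace context between two services via HTTP headers.
# Assumes a tracer provider has already been configured (see the previous sketch).
from opentelemetry import trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer("frontend")  # illustrative instrumentation name

def call_downstream_service():
    """Caller side: start a span and inject its context into outgoing headers."""
    with tracer.start_as_current_span("checkout"):
        headers = {}
        inject(headers)  # adds the W3C `traceparent` header for the active span
        # e.g. requests.post("http://payments/charge", headers=headers, json=...)
        return headers

def handle_incoming_request(headers):
    """Callee side: extract the caller's context so this span joins the same trace."""
    ctx = extract(headers)
    with tracer.start_as_current_span("charge", context=ctx) as span:
        span.set_attribute("payment.amount", 42.0)
```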
+
+### Getting Started with OpenTelemetry
+
+To begin using OpenTelemetry, you can either:
+
+- Send telemetry data directly to a backend (e.g., Prometheus) for quick experimentation.
+- Use the OpenTelemetry Collector to manage data flow and apply advanced processing, such as removing personally identifiable information (PII) or optimizing data batching.
+
+The flexibility of the collector enables smooth transitions between direct reporting and more complex data pipelines as your observability needs grow.
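
A small sketch of the two starting points above (again illustrative, not part of this commit): spans go to the console for quick experimentation, and switch to OTLP output when the standard `OTEL_EXPORTER_OTLP_ENDPOINT` variable points at a Collector or an OTLP-capable backend.

```python
# Sketch: start with console output, graduate to a Collector purely via configuration.
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# OTEL_EXPORTER_OTLP_ENDPOINT is a standard OpenTelemetry environment variable;
# when it is unset, fall back to printing spans locally.
endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
exporter = OTLPSpanExporter() if endpoint else ConsoleSpanExporter()

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("getting-started")
with tracer.start_as_current_span("hello-otel"):
    print("span recorded; exported to", endpoint or "console")
```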
+
+
+
+### Relevant Resources on OpenTelemetry
+
+- **Metrics Overview**: Learn how OpenTelemetry handles metrics to provide high-level insights into your system's performance [here](https://opentelemetry.io/docs/specs/otel/metrics/).
+- **Logging with OpenTelemetry**: Discover how OpenTelemetry integrates with existing logging libraries and enhances log data correlation across microservices [here](https://opentelemetry.io/docs/specs/otel/logs/).
+- **Quick Start Guide**: A guide for setting up OpenTelemetry quickly to start monitoring your applications [here](https://opentelemetry.io/docs/quickstart/).
+
+These resources explain the core pillars of observability, as well as how to use OpenTelemetry’s **Collector** to manage and export telemetry data to observability platforms like Prometheus and Grafana.
+
+## Conclusion
+
+Observability with OpenTelemetry empowers teams to quickly detect, understand, and resolve issues in microservices environments. By adopting this open framework, organizations gain the ability to monitor complex systems without relying on proprietary tools, ensuring scalability and interoperability across platforms. By implementing OpenTelemetry, you gain a unified view of system health across microservices and can ensure your monitoring solutions are scalable and vendor-neutral.
Lines changed: 1 addition & 0 deletions
