Commit f815c57

fix notes
1 parent 1c1eb5d commit f815c57


packages/2025-06-11-kubecon-hk/slides.md

Lines changed: 7 additions & 7 deletions
@@ -1098,7 +1098,7 @@ clicks: 4
   v-for="(step, idx) in [
     'Parse Dataset CRD & validate spec',
     'Check source type & credentials',
-    'Create/update PVC',
+    'Create/update PVC (any CSI)',
     'Download/sync data from source to PV',
     'Configure mount options',
     'Update dataset status'
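
For context, the steps above run against a Dataset object created by the user. A minimal sketch of such a manifest is below; the apiVersion, kind, and every field name are assumptions made for illustration, not the project's actual schema.

```yaml
# Hypothetical Dataset manifest - apiVersion, kind, and fields are illustrative assumptions.
apiVersion: datasets.example.io/v1alpha1
kind: Dataset
metadata:
  name: demo-env
spec:
  source:
    type: conda                        # what the "check source type & credentials" step would inspect
    uri: https://example.com/envs/environment.yaml
  volume:
    storageClassName: my-csi-driver    # any CSI driver, per the slide
    size: 50Gi
```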
@@ -1116,13 +1116,13 @@ clicks: 4
 </div>

 <!--
-Let me walk you through how this actually works under the hood. When you create a Dataset CRD, our controller springs into action.
+Okay, let's take a look at how the CRD works once it is created.

-[click] First, we parse and validate your spec - making sure everything's properly defined. Then we check what type of source you're using and handle any credentials securely.
+[click] First, we parse and validate your spec - making sure everything's properly defined. Then we check the source type and handle any credentials securely.

 [click] Here's where it gets interesting - we create a PVC, and we are compatible with almost all CSIs.

-[click] Then we deploy a job - downloading your models, setting up your conda environment, installing all those C++ libraries. Once it's done, boom! Your dataset is ready to be mounted by any pod.
+[click] Then we deploy a job - downloading your models, setting up your conda environment, installing all those libraries. Once it's done, your dataset is ready to be mounted by any pod.

 [click] The beauty is - this happens once. After that, everyone just mounts the ready-to-use environment. No more waiting!
 -->
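
Once the status reports the dataset as ready, any pod can mount the PVC the controller created, along the lines of the sketch below. This assumes the PVC is named after the Dataset (demo-env here); that naming convention is a guess, not something the slides confirm.

```yaml
# Consumer pod sketch - the claimName convention is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: consumer
spec:
  containers:
    - name: app
      image: python:3.11
      command: ["python", "-c", "import sys; print(sys.prefix)"]
      volumeMounts:
        - name: dataset
          mountPath: /data             # pre-built environment / model files
  volumes:
    - name: dataset
      persistentVolumeClaim:
        claimName: demo-env            # PVC created once by the Dataset controller (assumed name)
```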
@@ -1393,11 +1393,11 @@ spec:
 <!--
 But Datasets isn't just about Python environments - it's also about models and data! Here's an example of loading a model from HuggingFace.

-Look at this - we're pulling the Qwen 32B model directly from HuggingFace. But here's where it gets smart - see those filtering options? You can exclude the files you need.
+[click] Look at this - we're pulling the Qwen 32B model directly from HuggingFace. But here's where it gets smart - see those filtering options? You can exclude the files you don't need.

-[click] And check out those advanced features - need to use a regional mirror because HuggingFace is slow in your region? Just change the endpoint. Got private models? We handle token authentication securely through Kubernetes secrets.
+And check out those advanced features - need to use a regional mirror because HuggingFace is slow in your region? Just change the endpoint. Got private models? We handle token authentication securely through Kubernetes secrets.

-This means you can have your models ready and waiting, right alongside your environments. No more downloading gigabytes every time you start a training job!
+This means you can have your models ready and waiting, right alongside your environments. No more downloading gigabytes every time you start an inference or training job!

 -->

 ---
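
For the HuggingFace example these notes describe, a hedged sketch of the kind of spec being narrated is below. The field names (repo, endpoint, exclude, secretRef) and the repo id are assumptions inferred from the narration (model reference, file filtering, mirror endpoint, token secret), not the real CRD fields.

```yaml
# Hypothetical HuggingFace-backed Dataset - field names are illustrative assumptions.
apiVersion: datasets.example.io/v1alpha1
kind: Dataset
metadata:
  name: qwen-32b
spec:
  source:
    type: huggingface
    repo: Qwen/Qwen1.5-32B             # assumed repo id for "the Qwen 32B model"
    endpoint: https://hf-mirror.com    # optional regional mirror instead of huggingface.co
    exclude:
      - "*.bin"                        # filtering: skip the files you don't need
    secretRef:
      name: hf-token                   # Kubernetes Secret holding the HuggingFace access token
```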
