Typically, a dark moment goes like: "it was working there, but not here."
Look at those. I believe they... [pause a few seconds]
[repeat] Sounds familiar? Ten years ago, Docker was born to address exactly that.
You may ask: why not use Docker to manage all those dependencies?
You know, system libs -> CUDA -> Python -> PyTorch -> Transformers, and so many other libs.
Count the permutations and combinations: the total number of images is astronomically huge.
-->
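To make that combinatorial explosion concrete, here is a minimal sketch. The per-layer version counts are made-up assumptions for illustration, not real release tallies:

```python
# Minimal sketch of the pre-baked-image explosion. The version counts
# below are illustrative assumptions, not real release tallies.
from math import prod

layers = {
    "system libs": 3,   # e.g. different base OS images
    "CUDA": 5,
    "Python": 4,
    "PyTorch": 6,
    "Transformers": 10,
}

# One pre-baked Docker image per combination of versions:
total_images = prod(layers.values())
print(total_images)  # 3 * 5 * 4 * 6 * 10 = 3600
```

Even with these modest counts, one image per combination already means thousands of images to build and store.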
---
class: py-10
glowSeed: 175
@@ -362,6 +383,17 @@ glowSeed: 175
</div>
</div>

<!--
After working hard, you've tidied up the Python libs in your development environment.
[click] But when you shift from development to training, and then to the inference stage,
switching environments not only wastes time on reinstalls;
the worst nightmare is that dependency relationships break along the way.

[click] So we DO need a solution: define once, stay consistent from end to end, reusable, unattended, and well integrated with Jupyter and VS Code.

OK, Neko, can you shed some light on this for us?
-->
---
clicks: 3
---
@@ -1316,7 +1348,7 @@ Okay, let's take a look at how the CRD works once it is created.
[click] Here's where it gets interesting - we create a PVC, and we are compatible with almost all CSIs.

[click] Then we deploy a job - it sets up your conda environment and installs all those libraries. Once it's done, your dataset is ready to be mounted by any pod.

[click] The beauty is - this happens once. After that, everyone just mounts the ready-to-use environment. No more waiting!
-->
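The flow above can be sketched roughly like this. All names here are hypothetical, and the manifests are a generic illustration of the pattern (PVC created once, populated by a job, then mounted by any pod), not the project's exact resources:

```yaml
# Hypothetical sketch only: resource names and fields are illustrative
# assumptions, not the project's actual manifests.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conda-env-pvc
spec:
  accessModes: ["ReadWriteMany"]   # shareable across pods via your CSI driver
  resources:
    requests:
      storage: 50Gi
---
# After a one-shot job has populated the PVC, any training or inference
# pod simply mounts the ready-to-use environment:
apiVersion: v1
kind: Pod
metadata:
  name: train-worker
spec:
  containers:
    - name: train
      image: pytorch/pytorch:latest
      volumeMounts:
        - name: env
          mountPath: /opt/envs   # prepared conda env lives here
  volumes:
    - name: env
      persistentVolumeClaim:
        claimName: conda-env-pvc
```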
@@ -1460,9 +1492,10 @@ spec:
</div>

<!--
Here's the Dataset spec. It's very simple and easy to understand: the env config is stored in options, like conda's environment.yaml and pip's requirements.txt. This is a typical declarative approach to environment definition.
But now I want to highlight something really important - we support multiple package managers!

[click] Besides conda, we also integrate well with Pixi, or with pip alone. Or if you prefer, use Mamba, which is 10x faster than traditional conda.

The key is flexibility - use whatever works best for your workflow. We handle all the complexity behind the scenes, making sure everything plays nicely together.
-->
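The declarative idea described above might look roughly like this. The API group, version, and field names below are assumptions for illustration only, not the authoritative CRD schema:

```yaml
# Hypothetical sketch: group/version and field names are assumptions,
# not the real CRD schema. It shows the idea of inlining the familiar
# conda environment.yaml and pip requirements.txt into spec options.
apiVersion: dataset.example.io/v1alpha1
kind: Dataset
metadata:
  name: llm-env
spec:
  options:
    # conda's environment.yaml, inlined as an option
    condaEnvironmentYml: |
      name: llm
      dependencies:
        - python=3.11
        - pytorch
    # pip's requirements.txt, inlined as an option
    pipRequirementsTxt: |
      transformers
```

Because the whole environment is declared in one resource, it can be defined once and reused consistently from development through training to inference.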
@@ -1596,7 +1629,7 @@ We use a three-layer caching approach.
[click] Third, we auto-create metadata: environment configs and dependency resolution results. This lets you use the environment created in the Dataset directly when you open your notebook, without having to run extra commands to activate it. It's a wonderful experience!

[click] Look at the time difference! Traditional CUDA setup takes 45-60 minutes. PyTorch, another 20-30. With our solution? First setup is 10-15 minutes, and after that? Seconds! Just seconds to spin up a complete ML environment. That's the power of our intelligent dependency approach!