
Commit 0fb6d37

Merge pull request #306 from grondo/faq-statedir
improve FAQ entry about filling up `/tmp`
2 parents 58e0782 + fa602d1

3 files changed: +128, -117 lines


conf.py

Lines changed: 2 additions & 1 deletion
@@ -364,5 +364,6 @@ def setup(app):
 )

 linkcheck_ignore = [
-    r'https://github.com/flux-framework/flux-core\?tab\=readme-ov-file\#build-requirements'
+    r'https://github.com/flux-framework/flux-core\?tab\=readme-ov-file\#build-requirements',
+    r'https://www.mcs.anl.gov/papers/P1760.pdf'
 ]
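Each ``linkcheck_ignore`` entry is a regular expression that Sphinx's linkcheck builder matches against the start of every URL it would otherwise check. A quick sanity check of the two patterns added above (plain Python, assuming only these two entries):

```python
import re

# The two linkcheck_ignore patterns from the diff above.
linkcheck_ignore = [
    r'https://github.com/flux-framework/flux-core\?tab\=readme-ov-file\#build-requirements',
    r'https://www.mcs.anl.gov/papers/P1760.pdf',
]

def is_ignored(url):
    # Sphinx treats each entry as a regex matched against the URL.
    return any(re.match(pattern, url) for pattern in linkcheck_ignore)

print(is_ignored('https://www.mcs.anl.gov/papers/P1760.pdf'))    # True
print(is_ignored('https://flux-framework.readthedocs.io/'))      # False
```

Note that the escapes ``\?``, ``\=``, and ``\#`` make the URL punctuation literal in the regex.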

faqs.rst

Lines changed: 125 additions & 115 deletions
@@ -67,9 +67,9 @@ F58 to another using the :core:man1:`flux-job` ``id`` subcommand, e.g.

 .. code-block:: sh

-  $ flux submit sleep 3600 | flux job id --to=words
-  airline-alibi-index--tuna-maximum-adam
-  $ flux job cancel airline-alibi-index--tuna-maximum-adam
+   $ flux submit sleep 3600 | flux job id --to=words
+   airline-alibi-index--tuna-maximum-adam
+   $ flux job cancel airline-alibi-index--tuna-maximum-adam

 With copy-and-paste, auto-completion, globbing, etc., it shouldn't be necessary
 to *type* a job ID with the ``ƒ`` prefix that often, but should you need to,
@@ -108,7 +108,7 @@ we have 64 nodes of resources and are at depth 1.

 .. code-block:: console

-  [s=64,d=1] $
+   [s=64,d=1] $

 To add this prompt into your shell, you can cut and paste the below or use it to
 adjust your current shell prompt. Note that the initial call to ``flux getattr size``
@@ -118,19 +118,19 @@ Cut and paste for ``.bashrc``

 .. code-block:: sh

-  flux getattr size > /dev/null 2>&1
-  if [ $? -eq 0 ]; then
-    export PS1="[s=$(flux getattr size),d=$(flux getattr instance-level)] $"
-  fi
+   flux getattr size > /dev/null 2>&1
+   if [ $? -eq 0 ]; then
+     export PS1="[s=$(flux getattr size),d=$(flux getattr instance-level)] $"
+   fi

 Cut and paste for ``.cshrc``

 .. code-block:: sh

-  flux getattr size >& /dev/null
-  if ( $? == 0 ) then
-    set prompt="[s=`flux getattr size`,d=`flux getattr instance-level`] $"
-  endif
+   flux getattr size >& /dev/null
+   if ( $? == 0 ) then
+     set prompt="[s=`flux getattr size`,d=`flux getattr instance-level`] $"
+   endif

 .. _bug_report_how:
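The ``.bashrc`` snippet in this hunk keys the prompt off whether ``flux getattr`` succeeds. The guard can be exercised without a running Flux instance by substituting a command that always succeeds; the size/depth values below are placeholders, not real Flux output:

```shell
# Stand-in for `flux getattr size`: `true` always succeeds, so the
# prompt branch is taken. "[s=64,d=1]" is a hypothetical placeholder.
true > /dev/null 2>&1
if [ $? -eq 0 ]; then
  PS1="[s=64,d=1] $"
fi
echo "prompt: $PS1"
```

Inside a real Flux instance, replacing ``true`` with ``flux getattr size`` reproduces the behavior shown in the FAQ.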
@@ -209,21 +209,21 @@ In earlier versions, the same effect can be achieved by setting the

 .. code-block:: console

-  $ flux run -o per-resource.type=node -o per-resource.count=100 -N2 COMMAND
+   $ flux run -o per-resource.type=node -o per-resource.count=100 -N2 COMMAND

 Another method to more generally oversubscribe resources is to launch
 multiple Flux brokers per node. This can be done locally for testing, e.g.

 .. code-block:: console

-  $ flux start -s 4
+   $ flux start -s 4

 or can be done by launching a job with multiple ``flux start`` commands
 per node, e.g. to run 8 brokers across 2 nodes

 .. code-block:: console

-  $ flux submit -o cpu-affinity=off -N2 -n8 flux start SCRIPT
+   $ flux submit -o cpu-affinity=off -N2 -n8 flux start SCRIPT

 One final method is to use the ``alloc-bypass``
 `jobtap plugin <https://flux-framework.readthedocs.io/projects/flux-core/en/latest/man7/flux-jobtap-plugins.html>`_, which allows a job to bypass the
@@ -236,14 +236,14 @@ a job with another job, e.g. to run debugger or other services.

 .. code-block:: console

-  $ flux jobtap load alloc-bypass.so
-  $ flux submit -N4 sleep 60
-  ƒ2WU24J4NT
-  $ flux run --setattr=system.alloc-bypass.R="$(flux job info ƒ2WU24J4NT R)" -n 4 flux getattr rank
-  3
-  2
-  1
-  0
+   $ flux jobtap load alloc-bypass.so
+   $ flux submit -N4 sleep 60
+   ƒ2WU24J4NT
+   $ flux run --setattr=system.alloc-bypass.R="$(flux job info ƒ2WU24J4NT R)" -n 4 flux getattr rank
+   3
+   2
+   1
+   0

 .. _node_memory_exhaustion:
@@ -256,20 +256,30 @@ systems, ``/tmp`` is a RAM-backed file system with limited space, and in
 some situations such as long running, high throughput workflows, Flux may
 use a lot of it.

+When the Flux database fills up the disk, errors like the following may
+appear and the instance of Flux will get stuck or otherwise not function
+properly
+
+.. code-block:: console
+
+   content-sqlite.err[0]: store: executing stmt: database or disk is full(13)
+   content.crit[0]: content store: No space left on device
+   content-sqlite.err[0]: store: executing stmt: database disk image is malformed(11)
+
 Flux may be launched with the database file redirected to another location
 by setting the *statedir* broker attribute. For example:

 .. code-block:: sh

-  $ mkdir -p /home/myuser/jobstate
-  $ rm -f /home/myuser/jobstate/content.sqlite
-  $ flux batch --broker-opts=-Sstatedir=/home/myuser/jobdir -N16 ...
+   $ mkdir -p /home/myuser/jobstate
+   $ rm -f /home/myuser/jobstate/content.sqlite
+   $ flux batch --broker-opts=-Sstatedir=/home/myuser/jobstate -N16 ...

 Or if launching via :core:man1:`flux-start` use:

 .. code-block:: sh

-  $ flux start -o,-Sstatedir=/home/myuser/jobdir
+   $ flux start -Sstatedir=/home/myuser/jobstate

 Note the following:
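Before redirecting *statedir* as the hunk above suggests, it can be worth confirming how much space the default location actually has, since on many systems ``/tmp`` is a RAM-backed tmpfs. A quick check:

```shell
# Show free space on the filesystem backing /tmp, where the
# content.sqlite database lives by default.
df -h /tmp
```

Pointing *statedir* at a filesystem with ample free space avoids the "database or disk is full" errors shown in the diff.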
@@ -497,15 +507,15 @@ like ``xargs -I``, substitute the input with ``{}``. For example:

 .. code-block:: console

-  $ seq 1 4 | flux bulksubmit --watch echo {}
-  ƒ2jBnW4zK
-  ƒ2jBoz4Gf
-  ƒ2jBoz4Gg
-  ƒ2jBoz4Gh
-  1
-  2
-  3
-  4
+   $ seq 1 4 | flux bulksubmit --watch echo {}
+   ƒ2jBnW4zK
+   ƒ2jBoz4Gf
+   ƒ2jBoz4Gg
+   ƒ2jBoz4Gh
+   1
+   2
+   3
+   4

 As an alternative to reading from ``stdin``, the ``bulksubmit`` utility can
 also take inputs on the command line separated by ``:::``.
@@ -515,10 +525,10 @@ see what would be submitted to Flux without actually running any jobs

 .. code-block:: console

-  $ flux bulksubmit --dry-run echo {} ::: 1 2 3
-  bulksubmit: submit echo 1
-  bulksubmit: submit echo 2
-  bulksubmit: submit echo 3
+   $ flux bulksubmit --dry-run echo {} ::: 1 2 3
+   bulksubmit: submit echo 1
+   bulksubmit: submit echo 2
+   bulksubmit: submit echo 3

 For more help and examples, see :core:man1:`flux-bulksubmit`.
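The ``{}`` substitution shown in these hunks works like ``xargs -I``: each input replaces ``{}`` in the command template, producing one job per input. A minimal sketch of that behavior (plain Python for illustration, not the actual ``flux bulksubmit`` implementation):

```python
def expand(template, inputs):
    """Replace '{}' in each template word with the input, one command per input."""
    return [[word.replace("{}", item) for word in template] for item in inputs]

# Mirrors: flux bulksubmit --dry-run echo {} ::: 1 2 3
for cmd in expand(["echo", "{}"], ["1", "2", "3"]):
    print("bulksubmit: submit", " ".join(cmd))
```

This prints the same three ``bulksubmit: submit echo N`` lines the dry-run example above shows.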
@@ -530,7 +540,7 @@ TL;DR: Use:

 .. code-block:: console

-  $ flux batch --conf=tbon.topo=kary:0 -o exit-timeout=none ...
+   $ flux batch --conf=tbon.topo=kary:0 -o exit-timeout=none ...

 .. note::
@@ -614,7 +624,7 @@ Example: launch a Spectrum MPI job with PMI tracing enabled:

 .. code-block:: console

-  $ flux run -ompi=spectrum -overbose=2 -n4 ./hello
+   $ flux run -ompi=spectrum -overbose=2 -n4 ./hello

 .. _openmpi_versions:
@@ -627,7 +637,7 @@ with the Flux plugins enabled. Your installed version may be checked with:

 .. code-block:: console

-  $ ompi_info|grep flux
+   $ ompi_info|grep flux
         MCA pmix: flux (MCA v2.1.0, API v2.0.0, Component v4.0.3)
       MCA schizo: flux (MCA v2.1.0, API v1.0.0, Component v4.0.3)
@@ -681,13 +691,13 @@ integer verbosity level, e.g.

 .. code-block:: console

-  $ flux run --env=OMPI_MCA_btl_base_verbose=99 -N2 -n4 ./hello
+   $ flux run --env=OMPI_MCA_btl_base_verbose=99 -N2 -n4 ./hello

 To list available MCA parameters containing the string ``_verbose`` use:

 .. code-block:: console

-  $ ompi_info -a | grep _verbose
+   $ ompi_info -a | grep _verbose

 .. _mvapich2_config:
@@ -723,76 +733,76 @@ something like this:

 .. code-block:: console

-  $ flux run -o verbose=2 -N2 ./hello
-  0.731s: flux-shell[1]: DEBUG: 1: tasks [1] on cores 0-3
-  0.739s: flux-shell[1]: DEBUG: Loading /usr/local/etc/flux/shell/initrc.lua
-  0.744s: flux-shell[1]: TRACE: Successfully loaded flux.shell module
-  0.744s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/initrc.lua
-  0.757s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/intel_mpi.lua
-  0.758s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/mvapich.lua
-  0.782s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/openmpi.lua
-  0.906s: flux-shell[1]: DEBUG: libpals: jobtap plugin not loaded: disabling operation
-  0.721s: flux-shell[0]: DEBUG: 0: task_count=2 slot_count=2 cores_per_slot=1 slots_per_node=1
-  0.722s: flux-shell[0]: DEBUG: 0: tasks [0] on cores 0-3
-  0.730s: flux-shell[0]: DEBUG: Loading /usr/local/etc/flux/shell/initrc.lua
-  0.739s: flux-shell[0]: TRACE: Successfully loaded flux.shell module
-  0.739s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/initrc.lua
-  0.753s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/intel_mpi.lua
-  0.758s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/mvapich.lua
-  0.784s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/openmpi.lua
-  0.792s: flux-shell[0]: DEBUG: output: batch timeout = 0.500s
-  0.921s: flux-shell[0]: DEBUG: libpals: jobtap plugin not loaded: disabling operation
-  1.054s: flux-shell[0]: TRACE: pmi: 0: C: cmd=init pmi_version=1 pmi_subversion=1
-  1.054s: flux-shell[0]: TRACE: pmi: 0: S: cmd=response_to_init rc=0 pmi_version=1 pmi_subversion=1
-  1.054s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_maxes
-  1.054s: flux-shell[0]: TRACE: pmi: 0: S: cmd=maxes rc=0 kvsname_max=64 keylen_max=64 vallen_max=1024
-  1.055s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_appnum
-  1.055s: flux-shell[0]: TRACE: pmi: 0: S: cmd=appnum rc=0 appnum=0
-  1.055s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_my_kvsname
-  1.055s: flux-shell[0]: TRACE: pmi: 0: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
-  1.055s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get kvsname=ƒABRxM89qL3 key=PMI_process_mapping
-  1.055s: flux-shell[0]: TRACE: pmi: 0: S: cmd=get_result rc=0 value=(vector,(0,2,1))
-  1.056s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_my_kvsname
-  1.056s: flux-shell[0]: TRACE: pmi: 0: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
-  1.059s: flux-shell[0]: TRACE: pmi: 0: C: cmd=put kvsname=ƒABRxM89qL3 key=P0-businesscard value=description#picl6$port#41401$ifname#192.168.88.251$
-  1.059s: flux-shell[0]: TRACE: pmi: 0: S: cmd=put_result rc=0
-  1.060s: flux-shell[0]: TRACE: pmi: 0: C: cmd=barrier_in
-  1.059s: flux-shell[1]: TRACE: pmi: 1: C: cmd=init pmi_version=1 pmi_subversion=1
-  1.059s: flux-shell[1]: TRACE: pmi: 1: S: cmd=response_to_init rc=0 pmi_version=1 pmi_subversion=1
-  1.060s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_maxes
-  1.060s: flux-shell[1]: TRACE: pmi: 1: S: cmd=maxes rc=0 kvsname_max=64 keylen_max=64 vallen_max=1024
-  1.060s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_appnum
-  1.060s: flux-shell[1]: TRACE: pmi: 1: S: cmd=appnum rc=0 appnum=0
-  1.060s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_my_kvsname
-  1.060s: flux-shell[1]: TRACE: pmi: 1: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
-  1.061s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get kvsname=ƒABRxM89qL3 key=PMI_process_mapping
-  1.061s: flux-shell[1]: TRACE: pmi: 1: S: cmd=get_result rc=0 value=(vector,(0,2,1))
-  1.062s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_my_kvsname
-  1.062s: flux-shell[1]: TRACE: pmi: 1: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
-  1.065s: flux-shell[1]: TRACE: pmi: 1: C: cmd=put kvsname=ƒABRxM89qL3 key=P1-businesscard value=description#picl7$port#35977$ifname#192.168.88.250$
-  1.065s: flux-shell[1]: TRACE: pmi: 1: S: cmd=put_result rc=0
-  1.065s: flux-shell[1]: TRACE: pmi: 1: C: cmd=barrier_in
-  1.069s: flux-shell[1]: TRACE: pmi: 1: S: cmd=barrier_out rc=0
-  1.066s: flux-shell[0]: TRACE: pmi: 0: S: cmd=barrier_out rc=0
-  1.084s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get kvsname=ƒABRxM89qL3 key=P1-businesscard
-  1.084s: flux-shell[0]: TRACE: pmi: 0: S: cmd=get_result rc=0 value=description#picl7$port#35977$ifname#192.168.88.250$
-  1.093s: flux-shell[0]: TRACE: pmi: 0: C: cmd=finalize
-  1.093s: flux-shell[0]: TRACE: pmi: 0: S: cmd=finalize_ack rc=0
-  1.093s: flux-shell[0]: TRACE: pmi: 0: S: pmi finalized
-  1.093s: flux-shell[0]: TRACE: pmi: 0: C: pmi EOF
-  1.089s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get kvsname=ƒABRxM89qL3 key=P0-businesscard
-  1.089s: flux-shell[1]: TRACE: pmi: 1: S: cmd=get_result rc=0 value=description#picl6$port#41401$ifname#192.168.88.251$
-  1.094s: flux-shell[1]: TRACE: pmi: 1: C: cmd=finalize
-  1.094s: flux-shell[1]: TRACE: pmi: 1: S: cmd=finalize_ack rc=0
-  1.094s: flux-shell[1]: TRACE: pmi: 1: S: pmi finalized
-  1.095s: flux-shell[1]: TRACE: pmi: 1: C: pmi EOF
-  1.099s: flux-shell[1]: DEBUG: task 1 complete status=0
-  1.107s: flux-shell[1]: DEBUG: exit 0
-  1.097s: flux-shell[0]: DEBUG: task 0 complete status=0
-  ƒABRxM89qL3: completed MPI_Init in 0.084s. There are 2 tasks
-  ƒABRxM89qL3: completed first barrier in 0.008s
-  ƒABRxM89qL3: completed MPI_Finalize in 0.003s
-  1.116s: flux-shell[0]: DEBUG: exit 0
+   $ flux run -o verbose=2 -N2 ./hello
+   0.731s: flux-shell[1]: DEBUG: 1: tasks [1] on cores 0-3
+   0.739s: flux-shell[1]: DEBUG: Loading /usr/local/etc/flux/shell/initrc.lua
+   0.744s: flux-shell[1]: TRACE: Successfully loaded flux.shell module
+   0.744s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/initrc.lua
+   0.757s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/intel_mpi.lua
+   0.758s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/mvapich.lua
+   0.782s: flux-shell[1]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/openmpi.lua
+   0.906s: flux-shell[1]: DEBUG: libpals: jobtap plugin not loaded: disabling operation
+   0.721s: flux-shell[0]: DEBUG: 0: task_count=2 slot_count=2 cores_per_slot=1 slots_per_node=1
+   0.722s: flux-shell[0]: DEBUG: 0: tasks [0] on cores 0-3
+   0.730s: flux-shell[0]: DEBUG: Loading /usr/local/etc/flux/shell/initrc.lua
+   0.739s: flux-shell[0]: TRACE: Successfully loaded flux.shell module
+   0.739s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/initrc.lua
+   0.753s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/intel_mpi.lua
+   0.758s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/mvapich.lua
+   0.784s: flux-shell[0]: TRACE: trying to load /usr/local/etc/flux/shell/lua.d/openmpi.lua
+   0.792s: flux-shell[0]: DEBUG: output: batch timeout = 0.500s
+   0.921s: flux-shell[0]: DEBUG: libpals: jobtap plugin not loaded: disabling operation
+   1.054s: flux-shell[0]: TRACE: pmi: 0: C: cmd=init pmi_version=1 pmi_subversion=1
+   1.054s: flux-shell[0]: TRACE: pmi: 0: S: cmd=response_to_init rc=0 pmi_version=1 pmi_subversion=1
+   1.054s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_maxes
+   1.054s: flux-shell[0]: TRACE: pmi: 0: S: cmd=maxes rc=0 kvsname_max=64 keylen_max=64 vallen_max=1024
+   1.055s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_appnum
+   1.055s: flux-shell[0]: TRACE: pmi: 0: S: cmd=appnum rc=0 appnum=0
+   1.055s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_my_kvsname
+   1.055s: flux-shell[0]: TRACE: pmi: 0: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
+   1.055s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get kvsname=ƒABRxM89qL3 key=PMI_process_mapping
+   1.055s: flux-shell[0]: TRACE: pmi: 0: S: cmd=get_result rc=0 value=(vector,(0,2,1))
+   1.056s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get_my_kvsname
+   1.056s: flux-shell[0]: TRACE: pmi: 0: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
+   1.059s: flux-shell[0]: TRACE: pmi: 0: C: cmd=put kvsname=ƒABRxM89qL3 key=P0-businesscard value=description#picl6$port#41401$ifname#192.168.88.251$
+   1.059s: flux-shell[0]: TRACE: pmi: 0: S: cmd=put_result rc=0
+   1.060s: flux-shell[0]: TRACE: pmi: 0: C: cmd=barrier_in
+   1.059s: flux-shell[1]: TRACE: pmi: 1: C: cmd=init pmi_version=1 pmi_subversion=1
+   1.059s: flux-shell[1]: TRACE: pmi: 1: S: cmd=response_to_init rc=0 pmi_version=1 pmi_subversion=1
+   1.060s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_maxes
+   1.060s: flux-shell[1]: TRACE: pmi: 1: S: cmd=maxes rc=0 kvsname_max=64 keylen_max=64 vallen_max=1024
+   1.060s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_appnum
+   1.060s: flux-shell[1]: TRACE: pmi: 1: S: cmd=appnum rc=0 appnum=0
+   1.060s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_my_kvsname
+   1.060s: flux-shell[1]: TRACE: pmi: 1: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
+   1.061s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get kvsname=ƒABRxM89qL3 key=PMI_process_mapping
+   1.061s: flux-shell[1]: TRACE: pmi: 1: S: cmd=get_result rc=0 value=(vector,(0,2,1))
+   1.062s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get_my_kvsname
+   1.062s: flux-shell[1]: TRACE: pmi: 1: S: cmd=my_kvsname rc=0 kvsname=ƒABRxM89qL3
+   1.065s: flux-shell[1]: TRACE: pmi: 1: C: cmd=put kvsname=ƒABRxM89qL3 key=P1-businesscard value=description#picl7$port#35977$ifname#192.168.88.250$
+   1.065s: flux-shell[1]: TRACE: pmi: 1: S: cmd=put_result rc=0
+   1.065s: flux-shell[1]: TRACE: pmi: 1: C: cmd=barrier_in
+   1.069s: flux-shell[1]: TRACE: pmi: 1: S: cmd=barrier_out rc=0
+   1.066s: flux-shell[0]: TRACE: pmi: 0: S: cmd=barrier_out rc=0
+   1.084s: flux-shell[0]: TRACE: pmi: 0: C: cmd=get kvsname=ƒABRxM89qL3 key=P1-businesscard
+   1.084s: flux-shell[0]: TRACE: pmi: 0: S: cmd=get_result rc=0 value=description#picl7$port#35977$ifname#192.168.88.250$
+   1.093s: flux-shell[0]: TRACE: pmi: 0: C: cmd=finalize
+   1.093s: flux-shell[0]: TRACE: pmi: 0: S: cmd=finalize_ack rc=0
+   1.093s: flux-shell[0]: TRACE: pmi: 0: S: pmi finalized
+   1.093s: flux-shell[0]: TRACE: pmi: 0: C: pmi EOF
+   1.089s: flux-shell[1]: TRACE: pmi: 1: C: cmd=get kvsname=ƒABRxM89qL3 key=P0-businesscard
+   1.089s: flux-shell[1]: TRACE: pmi: 1: S: cmd=get_result rc=0 value=description#picl6$port#41401$ifname#192.168.88.251$
+   1.094s: flux-shell[1]: TRACE: pmi: 1: C: cmd=finalize
+   1.094s: flux-shell[1]: TRACE: pmi: 1: S: cmd=finalize_ack rc=0
+   1.094s: flux-shell[1]: TRACE: pmi: 1: S: pmi finalized
+   1.095s: flux-shell[1]: TRACE: pmi: 1: C: pmi EOF
+   1.099s: flux-shell[1]: DEBUG: task 1 complete status=0
+   1.107s: flux-shell[1]: DEBUG: exit 0
+   1.097s: flux-shell[0]: DEBUG: task 0 complete status=0
+   ƒABRxM89qL3: completed MPI_Init in 0.084s. There are 2 tasks
+   ƒABRxM89qL3: completed first barrier in 0.008s
+   ƒABRxM89qL3: completed MPI_Finalize in 0.003s
+   1.116s: flux-shell[0]: DEBUG: exit 0

 ************************
 Flux Developer Questions

quickstart.rst

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ variant: ``spack install flux-sched+cuda``. This builds a CUDA-aware
 version of hwloc.


-For instructions on installing spack, see `Spack's installation documentation <https://spack.readthedocs.io/en/latest/getting_started.html#installation>`_.
+For instructions on installing spack, see `Spack's installation documentation <https://spack.readthedocs.io/en/latest/getting_started.html>`_.

 .. _manual_installation:
