docs/comments-and-docs.markdown

Line comments can be introduced in code with a special token. For example, if we want Haskell-like syntax, the `--` token introduces a comment:

```
foo x y =
  -- This is a comment
  x + y
```

Could also have a renderer for these comments that interprets the text as markdown.

Block comments can be introduced with special brackets. For example, if we want Haskell-like syntax, the `{-` and `-}` brackets delimit a block comment:

```
foo x y =
  {- This is a comment. -} x + y

foo x y = {- comment -} (x + y)

foo x y =
  {- comment -}
  (x + y)

foo x y =
  {- comment -}
  x + y
```

Block comments follow these syntactic rules:

2. The comment is attached to the abstract syntax tree node that is BEGUN by the token following the comment. If no such node is defined, this could be an error, or an ad hoc heuristic could find the "nearest" AST node.

3. When rendering comments, the indentation should be the same as that of the token that follows the comment.

<!--
Question: what exactly is the grammar and how is it parsed? Just some details to work out here.
-->

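
The attachment rule can be sketched quickly. This is a hypothetical Python illustration, not the actual Unison parser (whose grammar is still an open question): it lexes a line of Haskell-like source, holds each `{- ... -}` comment until the next token appears, and records the comment against that token's position, per rule 2.

```python
import re

# One alternative for block comments, one for any other whitespace-free token.
TOKEN = re.compile(r"\{-.*?-\}|\S+")

def lex_with_comments(src):
    """Return (tokens, comments, dangling): `comments` maps a token index to
    the block comments that immediately precede that token; `dangling` holds
    comments with no following token (an error, or resolved by a "nearest
    node" heuristic, per rule 2)."""
    tokens, comments, pending = [], {}, []
    for match in TOKEN.finditer(src):
        text = match.group(0)
        if text.startswith("{-"):
            pending.append(text)          # hold until the next real token
        else:
            if pending:
                comments[len(tokens)] = pending
                pending = []
            tokens.append(text)
    return tokens, comments, pending

tokens, comments, dangling = lex_with_comments("{- comment -} (x + y)")
```
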
### Comments and code structure
Comments should not have any effect on the hash of a Unison term or type. I propose that comments be kept as an annotation on the AST rather than as part of the AST itself. This way, comments can be edited, added, or removed, without touching the AST.
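
A minimal sketch of that separation, using a made-up nested-list AST and JSON hashing (assumptions for illustration; Unison hashes its own term representation): the hash covers only the AST, so the comment annotations can be edited freely without the hash changing.

```python
import hashlib
import json

def term_hash(ast):
    # Comments are deliberately excluded: only the AST is serialized.
    blob = json.dumps(ast, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Two copies of the same (made-up) AST, differing only in comments.
ast = ["lam", ["x", "y"], ["+", "x", "y"]]
v1 = {"ast": ast, "comments": {}}
v2 = {"ast": ast, "comments": {"2": "This is a comment"}}
```
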

<!--
I like this idea a lot, multiple people can comment on the same definition in different ways!

Question: how do you pick which comments are rendered when viewing a definition? (If there are multiple sets of comments?)
-->

Comments should be stored in the codebase as annotations on the syntax tree.

A future version might allow for multiple comment sets (commentary with different purposes or audiences) by adding e.g. a tag field to the comments, or having a whole `comments` directory instead of just one file.

<!--
Seems good. The key is that comments are attached to an AST node; the question is how you refer to a specific AST node. Probably some sort of root-to-leaf path.
Should all the comments be in one file? In separate files? To avoid git merges, the file has to be called `<hash>.comments.ub` or something. And then the code viewer will look up all the `.comments` for a definition and let you pick one or more based on metadata something something.
-->
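
One hypothetical shape for such a root-to-leaf path: the list of child indices walked from the root of the tree. Everything here (the nested-list AST, the path encoding) is an illustrative assumption, not Unison's actual representation.

```python
def node_at(ast, path):
    """Follow a root-to-leaf path, given as a list of child indices,
    through a nested-list AST."""
    node = ast
    for index in path:
        node = node[index]
    return node

# A made-up AST for a lambda whose body is `x + y`.
ast = ["lam", ["x", "y"], ["+", "x", "y"]]
```

A comments file could then pair such paths with comment text, e.g. mapping the path `[2]` (the body node) to `"adds x to y"`.
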
## API documentation
Any hash in the codebase can have formal API documentation associated with it. This might include basic usage, free-text explanations, examples, links to further reading, and links to related hashes.
Probably some flavor of Markdown is ideal for API docs.

<!--
Sounds good.
99
99
100
100
What about things like examples and doctests?

Links to further reading - just use a section for this, with links in it.

### The Unison CLI and API docs

Ultimately we’ll want to have a more visual codebase editor (see e.g. Pharo Smalltalk), but for now we have the Unison CLI. So there ought to be a special syntax for indicating that you want to associate API docs with a definition when you add it to the codebase (with `update`). This syntax should be lightweight and easy to type.
I like that you can add API docs later to a definition.
For docbase documentation, nothing special is needed: just write a new docbase page that references existing definitions. Unison can surface these "tracebacks" automatically.

There should be some syntax to exclude a code block from this processing.

Alternatively, we could have special syntax to indicate that something should be parsed as a Unison name. E.g.

```
{|
Usage: `@foo x y` adds `x` to `y`.
|}
```
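
A hypothetical scanner for that `@name` convention, so the names can later be resolved to hashes and hyperlinked. The exact grammar for names (dotted segments of letters and digits) is an assumption made for illustration.

```python
import re

# Assumed name grammar: dotted segments, each starting with a letter,
# e.g. @foo or @List.sort. A trailing period is treated as punctuation.
NAME = re.compile(r"@([A-Za-z][A-Za-z0-9]*(?:\.[A-Za-z][A-Za-z0-9]*)*)")

def unison_names(doc):
    """Collect the @-marked Unison names mentioned in a doc string."""
    return NAME.findall(doc)
```
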

Note that author name, time stamp, etc., can be inferred from the codebase.

<!-- Like that other metadata is just known to Unison and can be displayed or not. -->
## Docbase/Wiki
Separately from API documentation, it would be good to be able to write tutorials or long-form explanations of Unison libraries, with links into the codebase API docs.
We’d need to write a tool that can process e.g. GitHub-flavoured Markdown together with a Unison codebase. The markdown format would have Unison-specific extensions to allow hyperlinking Unison hashes as well as Tut-style evaluation of examples.

Processing has to have two distinct phases, authoring and rendering.

**Authoring**: you write the markdown document and use Unison human-readable names in your code. When you add your document to the docbase, all the names get replaced with Unison hashes before being stored.

**Rendering**: A document stored in the docbase could then be rendered as e.g. HTML (or Markdown) where Unison hashes are turned back into human-readable names from the codebase, and hyperlinked to the API documentation for the hashes.
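
Those two phases can be sketched with an assumed name-to-hash table and a `#`-prefixed hash syntax (both illustrative, not Unison's real formats). Storing hashes means a later rename only changes what rendering produces, never the stored document.

```python
import re

name_to_hash = {"List.sort": "#a1b2c3"}                 # assumed codebase mapping
hash_to_name = {h: n for n, h in name_to_hash.items()}  # current names

def author(doc):
    # Authoring: replace known names with hashes before storage.
    return re.sub(r"[A-Za-z][A-Za-z0-9.]*",
                  lambda m: name_to_hash.get(m.group(0), m.group(0)), doc)

def render(doc):
    # Rendering: map hashes back to whatever the names are *now*.
    return re.sub(r"#[0-9a-f]+",
                  lambda m: hash_to_name.get(m.group(0), m.group(0)), doc)

stored = author("Call List.sort on the input.")
```
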

<!--
How is this stored? Maybe docs are first-class, just like any other definition. If I'm documenting `foo`, some of its dependents could be documentation values.
Will need a metadata system to be able to pick out docs for a definition; otherwise no changes to the codebase format.
-->

> Did you consider just keying off the type of the watch, like if it's of type `Test.Status`, assume it's a test?

Yes we did, but we decided being explicit was better. Also, by communicating your intent up front, you can get better feedback from the tool ("er, looks like this isn't a test, here's how you can make it one") vs silently ignoring the thing the user thought was a test and just not adding it to the branch.

On `update`, these `test>` watches are added to the codebase. Watch expressions marked as `test>` are also added to the namespace of the branch and given some autogenerated unique name (perhaps just computed from the hash of the test itself), unless the watch expression picks a name, as in `test> test.sortEx1 = ...`. The user is told these names on `update` and can always rename them later if they like. Don't forget that in the event of a test failure, Unison can also show you the full source of the failed watch expression. Also note that the `Passed` and `Failed` cases might include the name of the "scope" of the test or other relevant info, so I'm not sure how important these names will be in practice.
There's a directory, `tests/`, containing files of the form `<hashXYZ>.ub`. The `hashXYZ` is a reference to the source of the original watch expression (in this case, the `Test.equal (sort [3,1,2]) [1,2,3]`), and the `.ub` file itself is a serialized `Test.Status`. We can ask if a branch is passing just by taking the intersection of the hashes in the branch with the hashes in this directory and seeing if all the `Test.Status` values for the branch are `Passed`. Notice this doesn't involve running any of the tests!
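
The passing check described above needs no test execution. Here is a sketch, with the on-disk `tests/<hash>.ub` statuses modeled as a plain dict (an assumption for illustration):

```python
def branch_passing(branch_hashes, cached_statuses):
    """cached_statuses stands in for the tests/ directory: a map from a
    test's source hash to its serialized Test.Status ('Passed'/'Failed').
    A branch passes if every cached status for its hashes is Passed."""
    relevant = set(branch_hashes) & set(cached_statuses)
    return all(cached_statuses[h] == "Passed" for h in relevant)

cached = {"hash1": "Passed", "hash2": "Failed"}
```
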
### Implementation notes and remarks

We will need the list of watches in `UnisonFile` to include extra information: what kind of watch expression is it? A test or a regular watch? We'll then need to make use of this information on `update`. And we might want to expose other commands for rerunning tests anyway.

Aside: I kinda like the "trust but occasionally reverify" model for this kind of caching. So every once in a while, pick a random test to rerun and make sure it checks out. With statistics, over time, it becomes exceedingly likely that the cache is good and any somehow incorrect results will be caught. Pessimistically rerunning all the tests, all the time, is Right Out. :)
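
That reverification policy might look like the following sketch; the 5% spot-check probability and the cache shape are assumptions for illustration, not a committed design.

```python
import random

def cached_status(test_hash, cache, rerun, p=0.05, rng=random.random):
    """Return the cached status, but with probability `p` rerun the test
    and check the cache against the fresh result."""
    status = cache[test_hash]
    if rng() < p:                       # occasional spot check
        fresh = rerun(test_hash)
        if fresh != status:
            raise RuntimeError(
                f"stale cache for {test_hash}: {status!r} vs {fresh!r}")
    return status
```
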