When you use Rust, it is sometimes outright preposterous how much knowledge of the language, and how much programming ingenuity and curiosity, you need in order to accomplish the most trivial things. When you feel particularly desperate, you go to [rust/issues] and search for a solution to your problem. Suddenly, you find an issue explaining that it is theoretically impossible to design your API in this way, owing to some subtle language bug. The issue is <span style="background-color: rgb(35, 134, 54); color: white; display: inline-block; padding: 5px 12px; border-radius: 28px; font-size: 16px; font-family: sans-serif;"><svg style="vertical-align: middle; margin-bottom: 3px;" height="16" class="octicon octicon-issue-opened" viewBox="0 0 16 16" version="1.1" width="16" aria-hidden="true"><path fill="#FFFFFF" d="M8 9.5a1.5 1.5 0 100-3 1.5 1.5 0 000 3z"></path><path fill="#FFFFFF" fill-rule="evenodd" d="M8 0a8 8 0 100 16A8 8 0 008 0zM1.5 8a6.5 6.5 0 1113 0 6.5 6.5 0 01-13 0z"></path></svg> Open</span> and dated Apr 5, 2017.
When I was a novice in Rust, I used to think that references are simpler than smart pointers. Now I use `Rc`/`Arc` almost everywhere that lifetimes cause too much pain and performance is not a big deal. Believe it or not, all of the aforementioned problems were caused by that single lifetime `'a` in `type Handler`.
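As a minimal sketch (the `Handler` alias and its signature here are illustrative, not the actual type from my code), compare a lifetime-parameterized handler alias with an `Rc`-based one:

```rust
use std::rc::Rc;

// Hypothetical handler type, for illustration only. With a lifetime
// parameter, every type that stores a handler must carry `'a` too,
// and the annotation spreads virally through the whole API:
#[allow(dead_code)]
type HandlerRef<'a> = &'a dyn Fn(&str) -> String;

// With `Rc`, ownership is shared at runtime, and the lifetime
// parameter disappears from every downstream signature:
type Handler = Rc<dyn Fn(&str) -> String>;

fn make_handler() -> Handler {
    Rc::new(|msg: &str| format!("got: {}", msg))
}

fn main() {
    let h = make_handler();
    let h2 = Rc::clone(&h); // cheap shared ownership, no `'a` anywhere
    assert_eq!(h("hello"), "got: hello");
    assert_eq!(h2("world"), "got: world");
}
```

The price is a reference count bump on each clone, which is exactly the "performance is not a big deal" trade-off mentioned above.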
If you still want to create a PL of the future, I wish you good luck and strong mental health. You are endlessly courageous and hopelessly romantic.
content/sat-supercompilation.md
Let the task for a hypothetical supercompiler be the expression `add(S(Z), S(S(Z)))` together with the definition of `add` above. In this case, the work of a supercompiler is as simple as sequentially reducing the initial expression to the target expression `S(S(S(Z)))` according to the rules of `add`:
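As a sketch, assuming a Rust rendering of the SLL program (the enum and helper names are mine), the two rules of `add` and the reduction of the task expression look like this:

```rust
// Peano naturals as a Rust enum; `add` mirrors the two rewrite rules:
//   add(Z, y)    -> y
//   add(S(x), y) -> S(add(x, y))
#[derive(Debug, PartialEq)]
enum Nat {
    Z,
    S(Box<Nat>),
}

use Nat::*;

fn s(n: Nat) -> Nat {
    S(Box::new(n))
}

fn add(x: Nat, y: Nat) -> Nat {
    match x {
        Z => y,                  // first rule of `add`
        S(x1) => s(add(*x1, y)), // second rule of `add`
    }
}

fn main() {
    // add(S(Z), S(S(Z))) reduces to S(S(S(Z))), i.e. 1 + 2 = 3:
    assert_eq!(add(s(Z), s(s(Z))), s(s(s(Z))));
}
```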
However, to think of a supercompiler as a mere expression evaluator is a grave mistake. Let us consider what happens when it encounters a _variable_ that does not stand for some concrete expression. For example, let the task be `add(S(S(Z)), b)` with the same definition of `add`, where `b` is understood as "any" expression:
Now consider what happens if there is a need to _pattern-match_ on an unknown variable. In this case, we cannot just proceed with "direct" computation, since there are several possible forms the variable may take. Suppose that the task is `add(a, 2)`[^naturals] with the same function `add`. What a supercompiler does is _analyze_ the expression `add(a, 2)` according to all the possibilities for `a`, which are either `Z` or `S(v1)`, where `v1` is some fresh variable identifier. The situation looks like this:
A supercompiler has built an (incomplete) _process tree_ that describes the execution of `add(a, 2)` in a general sense. In the first branch, `Z` is substituted for `a` according to the first rule of `add`; in the second branch, `S(v1)` is substituted for `a` according to the second rule of `add`. The two resulting nodes are labelled with the expressions obtained by reducing the parent expression under the corresponding substitution.
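A rough sketch of this case analysis, using a toy expression syntax of my own (not a real supercompiler's IR): driving `add(a, 2)` on the unknown `a` yields two labelled branches.

```rust
// Toy symbolic expressions; the names are illustrative.
#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Var(String),             // an unknown, like `a` or `v1`
    Ctor(String, Vec<Expr>), // Z, S(...), ...
    Add(Box<Expr>, Box<Expr>),
}

use Expr::*;

// One driving step for add(x, y): a known constructor reduces
// directly; an unknown variable forces a case analysis on its
// possible shapes, Z or S(v1). Each result is (substitution, node).
fn drive_add(x: &Expr, y: &Expr) -> Vec<(String, Expr)> {
    match x {
        Ctor(c, _) if c.as_str() == "Z" => vec![(String::new(), y.clone())],
        Ctor(c, args) if c.as_str() == "S" => vec![(
            String::new(),
            Ctor("S".into(), vec![Add(Box::new(args[0].clone()), Box::new(y.clone()))]),
        )],
        Var(v) => vec![
            // branch 1: a := Z, so add(Z, 2) reduces to 2
            (format!("{}=Z", v), y.clone()),
            // branch 2: a := S(v1), so add(S(v1), 2) reduces to S(add(v1, 2))
            (
                format!("{}=S(v1)", v),
                Ctor("S".into(), vec![Add(Box::new(Var("v1".into())), Box::new(y.clone()))]),
            ),
        ],
        _ => vec![],
    }
}

fn main() {
    let two = Ctor("S".into(), vec![Ctor("S".into(), vec![Ctor("Z".into(), vec![])])]);
    let branches = drive_add(&Var("a".into()), &two);
    assert_eq!(branches.len(), 2);
    assert_eq!(branches[0].0, "a=Z");
    assert_eq!(branches[1].0, "a=S(v1)");
}
```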
However, supercompilation is not complete yet: there is still a node labelled `S(add(v1, 2))`. A supercompiler decides to _decompose_ it, meaning to move `add(v1, 2)` out of `S(...)`, in the following way:
After that, if we proceed with supercompiling `add(v1, 2)`, we will eventually arrive at the initial expression `add(a, 2)`. This is because the expressions `add(v1, 2)` and `add(a, 2)` are _alpha equivalent_, meaning that they only differ in the names of variables. A supercompiler should be smart enough to detect this situation of alpha equivalence and, instead of continuing infinite supercompilation, just draw a back arrow from `add(v1, 2)` to the initial node as depicted below:
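A sketch of such a check, in my own toy syntax (a real supercompiler works on its actual IR): two expressions are alpha equivalent when they match up to a consistent, one-to-one renaming of variables.

```rust
use std::collections::HashMap;

// Toy expression syntax; `Call` covers both constructors and
// function applications, which is enough for this check.
#[derive(Debug, Clone)]
enum Expr {
    Var(String),
    Call(String, Vec<Expr>),
}

// `map` accumulates the renaming from variables of `a` to
// variables of `b`; it must be a bijection.
fn alpha_eq(a: &Expr, b: &Expr, map: &mut HashMap<String, String>) -> bool {
    match (a, b) {
        (Expr::Var(x), Expr::Var(y)) => match map.get(x) {
            Some(mapped) => mapped == y,
            None => {
                // keep the renaming injective: `y` must not be taken yet
                if map.values().any(|v| v == y) {
                    return false;
                }
                map.insert(x.clone(), y.clone());
                true
            }
        },
        (Expr::Call(f, xs), Expr::Call(g, ys)) => {
            f == g
                && xs.len() == ys.len()
                && xs.iter().zip(ys).all(|(x, y)| alpha_eq(x, y, map))
        }
        _ => false,
    }
}

fn main() {
    let add = |x: &str| {
        Expr::Call("add".into(), vec![Expr::Var(x.into()), Expr::Call("2".into(), vec![])])
    };
    // add(v1, 2) and add(a, 2) differ only in variable names:
    assert!(alpha_eq(&add("v1"), &add("a"), &mut HashMap::new()));

    // f(a, a) and f(a, b) are NOT alpha equivalent: `a` cannot
    // consistently rename to both `a` and `b`.
    let aa = Expr::Call("f".into(), vec![Expr::Var("a".into()), Expr::Var("a".into())]);
    let ab = Expr::Call("f".into(), vec![Expr::Var("a".into()), Expr::Var("b".into())]);
    assert!(!alpha_eq(&aa, &ab, &mut HashMap::new()));
}
```

On detecting this, the supercompiler folds: it draws the back arrow instead of expanding the node further.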
A similar example can be found in _"Rethinking Supercompilation"_ by Neil Mitchell [^rethinking-supercomp] and in [^supercompiler-concept] (section 6, _"Examples of supercompilation"_).
Then `OR(x, OR(y, OR(NOT z, F)))` would correspond to "x OR y OR NOT z":
We have replaced the lower `if x` node with its first child `F` because `x` was already assigned `T` on this path. Since there are no `T` leaves in the resulting tree, it is correct to say that the formula is unsatisfiable: whatever value `x` takes, we arrive at `F`.
One more example is `AND(OR(x, F), AND(OR(x, OR(y, F)), AND(OR(NOT x, F), T)))`, which is equivalent to "x AND (x OR y) AND NOT x":
The general observation is that, after encoding a CNF formula as an if-tree and removing dead paths from it, if there is at least one `T` leaf, the initial formula is satisfiable; otherwise, the formula is unsatisfiable because there is no path from the root that will take us to `T`. Think about it for a moment.
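These steps can be sketched in Rust (my own toy encoding, not the SLL programs from later in the post): an if-tree, path-sensitive pruning of repeated tests, and the `T`-leaf check.

```rust
use std::collections::HashMap;

// An if-tree: If(x, t, f) takes branch `t` when x=T and `f` when x=F.
#[derive(Debug, Clone, PartialEq)]
enum Tree {
    T,
    F,
    If(String, Box<Tree>, Box<Tree>),
}

// Remove dead paths: a repeated `if x` collapses to the child
// selected by the value `x` was already assigned on this path.
fn prune(t: &Tree, env: &mut HashMap<String, bool>) -> Tree {
    match t {
        Tree::T => Tree::T,
        Tree::F => Tree::F,
        Tree::If(x, tt, ff) => match env.get(x).copied() {
            Some(true) => prune(tt, env),  // x already T on this path
            Some(false) => prune(ff, env), // x already F on this path
            None => {
                env.insert(x.clone(), true);
                let tt2 = prune(tt, env);
                env.insert(x.clone(), false);
                let ff2 = prune(ff, env);
                env.remove(x);
                Tree::If(x.clone(), Box::new(tt2), Box::new(ff2))
            }
        },
    }
}

// Satisfiable iff some path still reaches a T leaf.
fn has_t_leaf(t: &Tree) -> bool {
    match t {
        Tree::T => true,
        Tree::F => false,
        Tree::If(_, a, b) => has_t_leaf(a) || has_t_leaf(b),
    }
}

fn main() {
    // "x AND NOT x": the outer `if x` checks x; its T branch then
    // checks NOT x via an inner `if x` with branches (F, T).
    let tree = Tree::If(
        "x".into(),
        Box::new(Tree::If("x".into(), Box::new(Tree::F), Box::new(Tree::T))),
        Box::new(Tree::F),
    );
    let pruned = prune(&tree, &mut HashMap::new());
    // The repeated `if x` collapsed to F, and no T leaf remains:
    assert_eq!(
        pruned,
        Tree::If("x".into(), Box::new(Tree::F), Box::new(Tree::F))
    );
    assert!(!has_t_leaf(&pruned)); // unsatisfiable
}
```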
In this post, we only deal with positive supercompilation. Consider the schematic representation of the CNF formula “x AND NOT x” again:
Imagine that `if`, `T`, and `F` are SLL constructors, with `if` holding three arguments: a variable and two branches; the variable is assumed to be `T` in the first branch and `F` in the second. If we analyze the uppermost `if x`, we will get the following "process tree" [^if-process-tree]:
Supercompilation acted as a dead code eliminator! This is because `x=T` was propagated to the first branch of the uppermost `if x`, resulting in the elimination of the branch `T` of the innermost `if x`. The second uppermost branch remains unchanged.
content/whats-the-point-of-the-c-preprocessor-actually.md
When I started designing Metalang99, I was aware of how metaprogramming can go insane. _Metalang99 is an attempt to make it less insane_. With some unhealthy curiosity, you might accidentally call Satan, and he will kindly produce gigabytes of error messages for you, dear. Not kidding, I experienced it on my own:
In the above error, I asked the compiler to show a full backtrace of macro expansions. Most of the time, it is just a senseless bedsheet of macro definitions, so I always turn it off with `-ftrack-macro-expansion=0` (GCC) or `-fmacro-backtrace-limit=1` (Clang).
Bad news: it is impossible to handle all kinds of errors in macros gracefully. But we do not need to handle _all_ of them; it would be sufficient to handle _most_ of them. Now I shall convince you that even Rust, a language that sells itself on comprehensible errors, sometimes produces complete nonsense:
People in the programming language design community strive to make their languages more expressive, with a strong type system, mainly to increase ergonomics by avoiding code duplication in final software; however, the more expressive their languages become, the more abruptly duplication penetrates the language itself.
The purpose of this writeup is only to convey the intuition behind the statics-dynamics biformity and not to provide a formal proof -- for the latter, please refer to an awesome library called [`type-operators`] (by the same person who implemented Smallfuck on types). In essence, it is an algorithmic macro eDSL that boils down to type-level manipulation with traits: you can define algebraic data types and perform data manipulations on them similar to how you normally do in Rust, but in the end, the whole code will dwell on the type-level. For more details, see the [translation rules](https://github.com/sdleffler/type-operators-rs/blob/master/src/lib.rs) and an [excellent guide](https://github.com/sdleffler/type-operators-rs/blob/master/README.md) by the same author. Another noteworthy project is [Fortraith], which is a "compile-time compiler that compiles Forth to compile-time trait expressions":
It is woeful to say, but it seems that an "expressive" PL nowadays means "Hey there, I have seriously messed up with the number of features, but that is fine!"
I already anticipate the question: what is the problem with implementing `printf` with macros? After all, [`println!`] works just fine in Rust. The problem is macros themselves. Think for yourself: why does a programming language need heavy-duty macros? Because we may want to extend it. Why may we want to extend it? Because the language does not fit our needs: we cannot express something using regular linguistic abstractions, and so we decide to extend the language with ad-hoc meta-abstractions. In the main section, I argued why this approach sucks: a macro system has no clue about the language being manipulated; in fact, procedural macros in Rust are just a fancy name for the [M4 preprocessor]. You guys integrated M4 into your language. Of course, this is [better than external M4], but it is nevertheless a method of the 20th century; proc. macros cannot even manipulate an [_abstract_ syntax tree], because [`syn::Item`], a common structure used to write proc. macros, is really a [_concrete_ syntax tree], or "parse tree". On the other hand, types are a natural part of a host language, and this is why, if we can express a programmatic abstraction using types, we _reuse_ linguistic abstractions instead of resorting to ad-hoc machinery. Ideally, a programming language should have either no macros or only a lightweight form of syntax rewriting rules (like Scheme's [`extend-syntax`] or Idris's [syntax extensions]), in order to keep the language consistent and well-suited to its expected tasks.
Everything is good except that Zig is a systems language. On [their official website], Zig is described as a "general-purpose programming language", but I can hardly agree with this statement. Yes, you can write virtually any software in Zig, but should you? My experience in maintaining high-level code in Rust and C99 says **NO**. The first reason is safety: if you make a systems language safe, you will make programmers deal with the borrow checker and ownership (or equivalent) issues that have absolutely nothing to do with business logic (believe me, I know the pain); otherwise, if you choose C-style manual memory management, you will make programmers debug their code for long hours in the hope that `-fsanitize=address` shows something meaningful. Moreover, if you want to build new abstractions atop pointers, you will end up with `&str`, `AsRef<str>`, `Borrow<str>`, `Box<str>`, and the like. Come on, I just want a UTF-8 string; most of the time, I do not really care which of those alternatives it is.
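As a small illustration (toy functions, assuming nothing beyond the standard library), the same "give me a string" intent already has several spellings in Rust, each with its own ownership story:

```rust
// Three ways to accept "a string", each with different
// ownership/borrowing behaviour at the call site:
fn len_ref(s: &str) -> usize {
    s.len() // plain borrow
}

fn len_asref(s: impl AsRef<str>) -> usize {
    s.as_ref().len() // generic over &str, String, Box<str>, ...
}

fn len_boxed(s: Box<str>) -> usize {
    s.len() // owned, immutable, no spare capacity
}

fn main() {
    let owned = String::from("hello");
    assert_eq!(len_ref(&owned), 5);
    assert_eq!(len_asref("hello"), 5);
    assert_eq!(len_asref(owned.clone()), 5);
    assert_eq!(len_boxed(owned.into_boxed_str()), 5);
}
```

Each variant is justifiable in isolation; the complaint above is about having to choose among all of them just to pass some text around.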