A friend at work today was telling me about his wife's C++ class, wherein the students were asked to predict the behaviour of the following snippet.

```
x = 10;
y = ++x + --x;
```

For simplicity, let's assume the type of both `x` and `y` is `int`.

```
int x, y;
```

We can inspect the final state of `x` and `y` with a simple `printf` call.

```
printf("x = %d, y = %d\n", x, y);
```

Now, let's see how different compilers handle our program.

```
#include <stdio.h>

int main(int argc, char **argv)
{
    int x, y;

    x = 10;

    y = ++x + --x;

    printf("x = %d, y = %d\n", x, y);
}
```

At first glance, I would expect the right-hand side of the
initialization of `y` to evaluate to `11 + 10`, i.e. `21`.

```
gcc (GCC) 7.3.0

incdec.c: In function ‘main’:
incdec.c:9:9: warning: operation on ‘x’ may be undefined [-Wsequence-point]
   y = ++x + --x;
       ^~~

x = 10, y = 20
```

Unsurprisingly, GCC computes the final value of `x` to be `10`, the
same as its initial value, after being incremented and
decremented. It's markedly **less** intuitive to me that the final value
of `y` is `20`.

Clang, on the other hand, agrees with me, i.e. computes the final
value of `y` to be `21`.

```
clang version 5.0.1 (tags/RELEASE_501/final)

incdec.c:9:9: warning: multiple unsequenced modifications to 'x' [-Wunsequenced]
   y = ++x + --x;
       ^     ~~
1 warning generated.

x = 10, y = 21
```

Both compilers warn about unsequenced modifications and the resultant undefined behaviour. Digging around a bit leads to some fascinating discussions. The GCC team maintains a list of further reading, including a few formal models and analyses of sequence points in C. Further exploration is left as an exercise for the reader.

**tl;dr** (from https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html)

> It is not specified when between sequence points modifications to the values of objects take effect.
>
> The standard is worded confusingly, therefore there is some debate over the precise meaning of the sequence point rules in subtle cases.

First off, I think Idris is pretty great. Over the past several months, I've
been thoroughly enjoying reading and working through *Type-Driven Development
with Idris*, as well as chatting with the community on IRC (`#idris` on freenode).

For every MEAP book, Manning provides a forum. To be honest, though I've
partaken in quite a few MEAPs, I've never really participated in the fora. In
the *Type-Driven Development with Idris* forum, however, I've made a whopping
**two** posts so far. One was a clarifying question about a possible error in the
text and the other was an answer to a fellow reader's question. My answer seemed
like a good example of how Idris works, so I thought I'd turn it into a blog
post. Here goes.

In this example, we're trying to prove that addition of natural numbers is commutative, i.e. \[\forall n,m \in \mathbb{N}\ (n + m \equiv m + n)\] … which we can express in Idris as follows:

```
myPlusCommutative : (n, m : Nat) -> n + m = m + n
```

The type signature for `myPlusCommutative` can be read as something like, "For
all natural numbers `n` and `m`, `n` plus `m` is exactly equivalent to `m` plus
`n`."

Over-simplifying the Curry-Howard correspondence, if we can write a total function that satisfies the type signature, then we've effectively written a proof for the corresponding universal quantification.

Idris has some great interactive editing tools, so I'll try to show those off a
bit here too. For a start, we can call `idris-add-clause` (`M-RET d` in
Spacemacs) to get:

```
myPlusCommutative : (n, m : Nat) -> n + m = m + n
myPlusCommutative n m = ?myPlusCommutative_rhs
```

The prefix `?` signifies a hole and in this case, the type of that hole is:

```
  n : Nat
  m : Nat
--------------------------------------
myPlusCommutative_rhs : plus n m = plus m n
```

Next, we can case split (`idris-case-split`, `M-RET c`) on `n`:

```
myPlusCommutative : (n, m : Nat) -> n + m = m + n
myPlusCommutative Z m = ?myPlusCommutative_rhs_1
myPlusCommutative (S k) m = ?myPlusCommutative_rhs_2
```

For `?myPlusCommutative_rhs_1` we need to prove that \(m = m + 0\).

```
myPlusCommutative_rhs_1 : m = plus m 0
```

Fortunately, the `Prelude` has a proof that's almost exactly what we want:

```
plusZeroRightNeutral : (left : Nat) -> left + 0 = left
```

Since we need to swap the left- and right-hand sides, we can use `sym`:

```
sym : (left = right) -> right = left
```

So the complete definition for the *base case* of `myPlusCommutative` is:

```
myPlusCommutative Z m = sym (plusZeroRightNeutral m)
```

Next, we've got to tackle the *inductive step*. Since we know we'll need to
reduce to the *base case*, let's define a local variable `inductiveHypothesis`:

```
myPlusCommutative (S k) m =
  let inductiveHypothesis = myPlusCommutative k m in
      rewrite inductiveHypothesis in
              ?myPlusCommutative_rhs_2
```

Examining the type of `?myPlusCommutative_rhs_2`:

```
  k : Nat
  m : Nat
  inductiveHypothesis : plus k m = plus m k
  _rewrite_rule : plus k m = plus m k
--------------------------------------
myPlusCommutative_rhs_2 : S (plus m k) = plus m (S k)
```

… we see we need to prove the successor of \(m\) plus \(k\) is exactly equivalent to \(m\) plus the successor of \(k\), i.e. \[\forall k,m \in \mathbb{N}\ (S(m + k) \equiv m + S(k))\]

Prelude to the rescue, yet again:

```
plusSuccRightSucc : (left : Nat) -> (right : Nat) -> S (left + right) = left + S right
```

Now we can `rewrite` the hole using `plusSuccRightSucc` with `m` as `left` and
`k` as `right`:

```
myPlusCommutative (S k) m =
  let inductiveHypothesis = myPlusCommutative k m in
      rewrite inductiveHypothesis in
              rewrite plusSuccRightSucc m k in
                      ?myPlusCommutative_rhs_2
```

Eliding the repetitive bits, we get:

```
myPlusCommutative_rhs_2 : plus m (S k) = plus m (S k)
```

… which looks like our old friend, `Refl`:

```
Refl : x = x
```

At this point, Idris can fill in the hole for us, via `idris-proof-search` (`M-RET p`):

```
myPlusCommutative (S k) m =
  let inductiveHypothesis = myPlusCommutative k m in
      rewrite inductiveHypothesis in
              rewrite plusSuccRightSucc m k in
                      Refl
```

Tidying up a bit for my taste, we arrive at the complete (and total) definition:

```
||| Addition of natural numbers is commutative.
total
myPlusCommutative : (n, m : Nat) -> n + m = m + n
myPlusCommutative Z m = sym (plusZeroRightNeutral m)
myPlusCommutative (S k) m =
  rewrite myPlusCommutative k m in
          plusSuccRightSucc m k
```

\(\square\)

Back in December of 2014, I set up a blog using Ghost and Buster. That was all fine and good, but left quite a bit to be desired. I'm not a Python user and the whole workflow was a bit too clunky for my taste.

In July of 2015, I decided to migrate to Jekyll. It's pretty widely used and the
integration with GitHub Pages promised to be smooth and easy. In practice, I
found its workflow to be similarly annoying. Worst of all, it made me install,
write and use Ruby^{1}. What's more, GitHub seems to have a predilection for
breaking things so often and dramatically, that dragon was born.

Learning Haskell has been a fantastic voyage. Although I've not yet used it for any "real" projects, I'm sold on its expressivity and general mind-expanding-ness. I had seen Hakyll pop up on my radar a few times and considered migrating to it for quite a while. Last night, I finally made the leap. The experience was so great, I can't believe I didn't do it sooner.

From now on^{2}, my blog will be powered by Hakyll. You can find my site.hs,
which will undoubtedly grow and change over time, on GitHub.

Good riddance.

^{1} I'm a long-time Ruby hater. #sorrynotsorry

^{2} … at least until dragon matures or I find an even better alternative.

What follows is an LFE translation of Roberto Aloi's *Tracing Erlang Functions*.

Tracing LFE functions for debugging purposes is quite simple.

Let's say you have the following module and want to trace one of its functions.

```
(defmodule maths
  (export (sum 2)
          (diff 2)))

(defun sum (a b)
  (+ a b))

(defun diff (a b)
  (- a b))
```

Before we get started, make sure you compile the `maths` module:

```
(c "/path/to/maths.lfe")
```

Just start the tracer:

```
(dbg:tracer)
#(ok <0.46.0>)
```

Tell the tracer you are interested in all calls for all processes:

```
(dbg:p 'all 'c)
#(ok (#(matched nonode@nohost 26)))
```

Finally, tell it you want to trace the function, `sum`, from the `maths` module:

```
(dbg:tpl 'maths 'sum [])
#(ok (#(matched nonode@nohost 1)))
```

Now, try to call the function, as usual. The tracer is active!

```
(maths:sum 2 3)
5
(<0.29.0>) call maths:sum(2,3)
```

To trace all functions from the `maths` module:

```
(dbg:tpl 'maths [])
```

To trace the return value for a given function:

```
(dbg:tpl 'maths 'sum (match-spec ([_] (return_trace))))
#(ok (#(matched nonode@nohost 1) #(saved 1)))
```

```
(maths:sum 19 23)
42
(<0.56.0>) call maths:sum(19,23)
(<0.56.0>) returned from maths:sum/2 -> 42
```

To stop the trace:

```
(dbg:stop)
ok
```

Since much of my time these days is spent `(or 'writing 'developing 'evangelizing)`
LFE, it should come as no surprise that I use the rebar3
compiler plugin extensively. In doing so, I noticed it always recompiles every
`.lfe` file it finds, irrespective of the staleness of the corresponding `.beam`
file. Having compiled a fair amount of vanilla Erlang via `rebar3`, I knew it
didn't have to be that way.

To further my quest, tristan__ pointed me to run/7, specifically its use of
`[check_last_mod]`. Before I forked lfe-rebar3/compile, it was using run/4,
which does **NOT** make use of `[check_last_mod]`, hence the unnecessary
recompilation.

lfe-rebar3/compile/src/lr3_comp.erl:74

```
rebar_base_compiler:run(Opts, [], Files, DoCompile).
```

After some experimentation, and `_checkouts` symlinking tricks, I came up with an
appropriate `run/7` incantation and made a pull request.

quasiquoting/lfe-compile/src/lr3_comp.erl:37-39

```
rebar_base_compiler:run(Config, FirstFiles,
                        SourceDir, ".lfe",
                        TargetDir, ".beam",
                        fun compile/3).
```

With those changes, `rebar3 lfe compile` will now skip unmodified `.lfe`
files. Not only is this The Right Thing™ to do, but it should be a time saver in
the long run.

Here's an example of the difference when compiling Lodox…

… using `run/4` via 0.2.1:

```
$ time rebar3 lfe compile
~~> Compiling ./src/lodox-html-writer.lfe ...
~~> Compiling ./src/lodox-p.lfe ...
~~> Compiling ./src/lodox-parse.lfe ...
~~> Compiling ./src/lodox-util.lfe ...
~~> Compiling ./src/lodox.lfe ...
~~> Compiling ./src/pandoc.lfe ...
~~> Compiling ./test/unit-lodox-tests.lfe ...
        1.39 real         1.14 user         0.24 sys
```

… using `run/7` via 0.3.0:

```
$ time rebar3 lfe compile
        0.94 real         0.73 user         0.20 sys
```

A little more :cow::poop: microbenchmarking on a proprietary work project:

```
# run/4
        3.58 real         3.02 user         0.58 sys

# run/7
        1.83 real         1.46 user         0.35 sys
```

```
(→ This (is (why (→ I (love Lisp Flavoured Erlang)))))
```

```
(progn
  (include-lib "clj/include/compose.lfe") ; Clojure threading macros
  (-> (lodox-parse:docs #"lodox")         ; lodox docs map, generated by lodox
      (mref 'modules)                     ; lodox modules (Maclisp)
      (cadddr)                            ; 4th module =:= lodox-parse (CL)
      (mref 'exports)                     ; lodox-parse exports (Maclisp)
      (hd)                                ; first function =:= docs/1 (Erlang)
      (mref 'doc)                         ; docstring of docs/1
      (list_to_binary)                    ; string->binary
      (Elixir.Markdown:to_html            ; Elixir app wrapping C lib
       '[#(fenced_code true)])))          ; as in GitHub Flavored Markdown
```

Gott nytt år! (Happy New Year!)

While writing an LFE solution for Day 6 of Advent of Code, I found myself
wanting to write `parse_instruction/1` like this:
```
parse_instruction("toggle " ++ Rest)   -> toggle(parse_coordinates(Rest));
parse_instruction("turn on " ++ Rest)  -> turn_on(parse_coordinates(Rest));
parse_instruction("turn off " ++ Rest) -> turn_off(parse_coordinates(Rest)).

parse_coordinates(String) ->
    {ok,[X0,Y0,X1,Y1],[]} = io_lib:fread("~d,~d through ~d,~d", String),
    {{X0,Y0},{X1,Y1}}.

toggle({{X0,Y0},{X1,Y1}})   -> undefined.
turn_on({{X0,Y0},{X1,Y1}})  -> undefined.
turn_off({{X0,Y0},{X1,Y1}}) -> undefined.
```

But the literal LFE translation doesn't work as desired.

```
(defun parse-instruction
  ([(++ "turn off " rest)]
   ...))
```

Instead, invocation of a `defun` of that form throws a `function_clause` error.

```
> (defun f ([(++ "prefix" suffix)] suffix))
f
> (f "prefixsuffix")
exception error: function_clause
```

After this discovery, I took to #erlang-lisp and tried to figure out why.
Discussing the issue with `@rvirding` for a few minutes, we decided adding `++*`
and having patterns like `(++* "prefix" suffix)` expand to nested `cons`-es was
a solid approach.

N.B. In v0.10.1, `exp_append/1` had the following clause, commented out.

```
%% Cases with lists of numbers (strings).
[[N|Ns]|Es] when is_number(N) -> [cons,N,['++',Ns|Es]];
```

Rather than take the overly complicated approach of counting and limiting the
number of expanded `cons`-es and bottoming out to a call to `erlang:++`, we
decided to keep it simple and just let `++*` patterns do their own thing. For
example, a pattern like `(++* "ab" tail)` effectively expands to
`(cons 97 (cons 98 tail))`.

The solution we came up with is as follows:

```
%% exp_predef(...) -> ...;
exp_predef(['++*'|Abody], _, St) ->
    Exp = exp_prefix(Abody),
    {yes,Exp,St};
%% exp_predef(...) -> ....

exp_prefix([[N|Ns]|Es]) when is_number(N) -> [cons,N,['++*',Ns|Es]];
exp_prefix([[]|Es]) -> ['++*'|Es];
exp_prefix(Args) -> exp_append(Args).
```

Now in the develop branch, you can do the following:

```
> (defun f ([(++* "prefix" suffix)] suffix))
f
> (f "prefixsuffix")
"suffix"
```

or even:

```
> (defun f ([(++* "p" "r" "e" "f" "i" "x" suffix)] suffix))
f
> (f "prefixsuffix")
"suffix"
```

Using a fresh install of iTerm2, I found that `⌥⌫` wasn't working as
expected. Within minutes, I noticed a **significant** decrease in productivity,
since (apparently) I use `werase` all the time.

Without further ado, here's the time-saving fix I settled on: in iTerm2's key
preferences, map `⌥⌫` to *Send Hex Code* `0x17`, i.e. `^W`, which the terminal
treats as `werase`.

Shout out to Key Codes (and Homebrew) for making it easy to find the appropriate hex code.

```
brew cask install key-codes
```

Just as you'd expect from Erlang or another Lisp, in LFE, lists are composed of
*cons cells* and when you use pattern matching to bind the head and tail of a
singleton list, `head` holds the single element and `tail` is the empty list.

```
> (let ((`(,head . ,tail) '(a-single-element)))
    (lfe_io:format '"Head: ~w~nTail: ~w~n" `(,head ,tail)))
Head: a-single-element
Tail: ()
ok
```

We can confirm this by checking that a *cons cell* of the atom
`a-single-element` and the empty list is exactly equal to a singleton list of
the same atom.

```
> (=:= (cons 'a-single-element '()) '(a-single-element))
true
```

So, lately I've been getting increasingly into Lisp Flavoured Erlang (LFE). If you haven't tried it yet, or even if you have, check out the Exercism.io track I helped organize. My latest endeavour is porting Robert Levy's swiss-arrows from Clojure to LFE. It's been going well so far, despite having to rename it to pynchon, since Erlang wasn't down with the hyphenated name and a few people on the LFE Google group suggested it.

Without further ado, here's a contrived example:

```
(-<> "testing"
     (-!<>> (string:substr <> 1 4)
            (lists:duplicate 3)
            (compose #'list/1 #'lists:flatten/1)
            (lfe_io:format '"non-updating: ~p\n"))
     (string:substr 5)
     (++ "winn" <>))
```

**Note**: `#'compose/2` comes from Duncan McGreggor's clj, Clojure functions and
macros for LFE.

The wild-looking form above expands to something more like:

```
(-<> (progn (lfe_io:format '"non-updating: ~p\n"
                           (list (lists:flatten
                                  (lists:duplicate 3
                                                   (string:substr "testing" 1 4)))))
            "testing")
     (string:substr 5)
     (++ "winn" <>))
```

After that, it becomes apparent the "return track" is rather simple:

```
(++ "winn" (string:substr "testing" 5))
```

```
> (-<> "testing"
       (-!<>> (string:substr <> 1 4)
              (lists:duplicate 3)
              (compose #'list/1 #'lists:flatten/1)
              (lfe_io:format '"non-updating: ~p\n"))
       (string:substr 5)
       (++ "winn" <>))
non-updating: "testtesttest" ; printed
"winning"                    ; returned
```

```
Show: Clojure Java REPL Tooling Duplicates All  (65 frames hidden)

2. Unhandled clojure.lang.Compiler$CompilerException
1. Caused by java.lang.RuntimeException
   Unable to resolve symbol: dad in this context

   Util.java:  221  clojure.lang.Util/runtimeException
```

Whoa, that's heavy, man.
