rebol2>> 1 + 2 * 3
== 9
If we are going to measure complexity vs. simplicity, 9 is a more complex answer than 7. The reason it is more complex is that it means there are two distinct evaluation modes:
- In "NOT running an infix function" mode, an evaluator step is willing to do lookahead. That's the mode 1 is evaluated in above... it isn't running infix yet, so it's willing to "see the +".
- In "running an infix function" mode, it forgoes looking ahead. That's the mode 2 is evaluated in above, so it does NOT look ahead to "see the *".
This introduces a flag (or some "awareness") into EVAL:STEP (the internal version of the evaluator used to fulfill arguments) to request running an evaluation in "don't look ahead" mode.
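To make the two modes concrete, here is a rough sketch in Python of how such a flag could work. This is a hypothetical model, not the actual EVAL:STEP code: `eval_step`, `INFIX`, and `PREFIX` are invented names, and everything is simplified to arity-2 functions over a flat token list. The key point is that the right argument of an infix function is fulfilled with lookahead suppressed, while everything else folds greedily left-to-right.

```python
# Hypothetical sketch of an evaluator with a "don't look ahead" flag.
# Not the real EVAL:STEP -- just a model of the two modes described above.

INFIX = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
PREFIX = {'add': lambda a, b: a + b, 'multiply': lambda a, b: a * b}

def eval_step(tokens, i, lookahead=True):
    """Evaluate one expression starting at tokens[i]; return (value, next_i)."""
    tok = tokens[i]
    if tok in PREFIX:
        # Prefix argument fulfillment runs in ordinary (lookahead) mode,
        # which is why the 2 in "add 1 2 * 3" gets picked up by *.
        a, i = eval_step(tokens, i + 1)
        b, i = eval_step(tokens, i)
        value = PREFIX[tok](a, b)
    else:
        value, i = tok, i + 1

    # "NOT running an infix function" mode: look ahead and fold left.
    while lookahead and i < len(tokens) and tokens[i] in INFIX:
        op = INFIX[tokens[i]]
        # The right argument is gathered in "running infix" mode,
        # with lookahead suppressed -- so it does NOT see a following op.
        right, i = eval_step(tokens, i + 1, lookahead=False)
        value = op(value, right)
    return value, i

print(eval_step([1, '+', 2, '*', 3], 0)[0])        # 9, like rebol2's 1 + 2 * 3
print(eval_step(['add', 1, 2, '*', 3], 0)[0])      # 7
print(eval_step([1, '+', 'multiply', 2, 3], 0)[0]) # 7
```

Note that the different answers fall directly out of which mode each argument slot is fulfilled in, which is exactly the irregularity being discussed.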
Good And Bad: Alternative Semantics
There's a good side and a bad side to this, in that it means you get different results from the infix vs. the prefix forms:
rebol2>> 1 + 2 * 3
== 9
rebol2>> add 1 2 * 3
== 7
rebol2>> 1 + multiply 2 3
== 7
It's good if you like variety. It's bad if you think behaving the same under substitution is desirable.
There are cases in Ren-C where this has to be overruled, e.g.
10 = length of block
Although OF is infix, we don't want that interpreted as:
(10 = length) of block
The reason the current implementation doesn't get bitten by this is that literal-left arguments are handled at a different point in the evaluation. LENGTH is a WORD! taken literally and not evaluated, so the "lookahead suppression" flag isn't heeded in the code that does literal lookback. But while this works around the rule for this case, it makes the rule seem even more suspect.
If we always saw 10 = ... as meaning that what follows is evaluated as if there were nothing on its left, that would be a simpler model--both mentally and in the evaluator--than having to worry about exceptions.
Theoretical Benefit: It Makes Stacks Shallower
The idea is that it "folds" values on the left, which means you have a shallower stack if you write something like:
1 + 2 + 3 + 4 + 5 + 6
This does 1 + 2 and gets 3, then does 3 + 3 and gets 6, then 6 + 4 and gets 10, and so on.
Whereas if you just allow it to recurse, then 1 + will stay on the stack waiting while 2 + is evaluated, and that in turn will run 3 +, and so on.
But there's more than one way to deal with such things. I've proposed things like allowing + to switch modes when nothing is on the left, so you can write:
(+ 1 2 3 4 5 6)
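As a rough sketch (in Python, with invented names), both the left-fold behavior and the proposed variadic prefix form keep the stack shallow, because each is just an accumulator loop rather than a chain of pending recursive calls:

```python
from functools import reduce

# Left-fold: 1 + 2 + 3 + ... evaluated iteratively with one accumulator,
# so the stack depth stays constant (what infix "folding" buys you).
def fold_plus(values):
    total = values[0]
    for v in values[1:]:       # 1+2=3, 3+3=6, 6+4=10, ...
        total = total + v
    return total

# The proposed prefix form (+ 1 2 3 4 5 6) gets the same shallow stack
# without needing infix at all: a variadic PLUS is just a reduce.
def plus(*args):
    return reduce(lambda a, b: a + b, args)

print(fold_plus([1, 2, 3, 4, 5, 6]))   # 21
print(plus(1, 2, 3, 4, 5, 6))          # 21
```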
In practice, I don't think making long strings of infix more efficient through shallower stacks is a good argument.
Ren-C's Infix Makes The Exception More Complicated
The idea of an "INFIX" function is simply any function that gets its first argument from the left. It doesn't have to take just one argument on the right; it can take any number (including zero, making it effectively postfix).
Hence if you have an arity-3 infix function called infix-three, what should it do if it sees:
1 infix-three 2 + 3 4 * 5
How should that be interpreted? Does the "don't look ahead" kick in immediately while gathering the first argument, causing an error?
(1 infix-three 2) + 3 4 * 5 ; too few arguments
Or should the "don't look ahead rule" only apply to the last argument?
(1 infix-three (2 + 3) 4) * 5
Or does having more than one argument mean the rule isn't applied at all?
(1 infix-three (2 + 3) (4 * 5))
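Just to make the three candidate groupings concrete, here is a Python sketch where a hypothetical infix_three simply sums its three arguments. (The post's INFIX-THREE has no defined behavior; the summing body is an assumption, chosen only to show that the groupings genuinely produce different results.)

```python
# Hypothetical arity-3 infix, assumed here to just sum its arguments,
# purely to make the three candidate groupings produce comparable numbers.
def infix_three(a, b, c):
    return a + b + c

# Grouping 1: (1 infix-three 2) + 3 4 * 5 -- errors, too few arguments.
try:
    infix_three(1, 2)
except TypeError:
    pass                                    # fails as the post predicts

# Grouping 2: (1 infix-three (2 + 3) 4) * 5 -- no-lookahead only on the last arg.
interp_2 = infix_three(1, 2 + 3, 4) * 5     # (1 + 5 + 4) * 5 = 50

# Grouping 3: (1 infix-three (2 + 3) (4 * 5)) -- no rule applied at all.
interp_3 = infix_three(1, 2 + 3, 4 * 5)     # 1 + 5 + 20 = 26
```

The fact that a reasonable-sounding rule yields three incompatible answers is itself a point against the rule.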
There Are More Challenges With Reevaluation
I ran into an assertion failure involving the "don't look ahead" flag with a demo of an INLINER:
plus-two: infix inliner [left] [spread compose [(left) + 2]]
1 plus-two = 3
So the concept of this inliner is to react as infix, and rewrite the code so you get:
1 + 2 = 3
But rewriting after you've already dispatched an infix function leads to the question of "what mode are you in now?" Are you in the looking-ahead mode, or the not-looking-ahead mode?
The answer that makes this work is clearly "it should be in looking-ahead mode", but the assertion failure points back to the question of why we are doing this at all.
And we have to write 3 = add 1 2 and not add 1 2 = 3 (which would be interpreted as add 1 (2 = 3)). So what is the great value in being able to say 1 + 2 = 3 instead of 3 = 1 + 2?
Is It Worth It?
I feel like examples like length of really do show where "being less irregular" helps the language.
Cognitively, I think if we could say that add 1 2 * 3 and 1 + 2 * 3 behaved identically, there is value in that.
Does anyone want to speak up in favor of "irregular infix"?