While working with things like ANY and ALL, I realized that these constructs need to decay things like PACK! in order to LOGICALLY test them.
But they were also returning that decayed result:
    >> all [1 2 pack [null 20]]
    == ~null~  ; antiform  <-- not all the conditions were true

    >> all [1 2 pack [20 null]]
    == 20  ; <-- did we actually need to *return* a decayed result?
Wouldn't it be more useful if ALL saved the PACK! before decaying it for the test, and returned that?
    >> all [1 2 pack [20 null]]
    == ~('20 ~null~)~  ; antiform (pack!)
If you can't do that, then it's not meaningful to use ANY or ALL with a SET-BLOCK!:
    [x y]: all [... whatever]  ; if ALL always decays packs, can't work...
You would always have to write the pattern:
    [x y]: unlift all [... lift (whatever)]
That's no fun.

## So ANY and ALL Started Returning The Pre-Tested Value

Should loops follow suit? For example:
    >> for-each 'x [1 2 3] [print "Looping" pack [x * 10, x * 20]]
    Looping
    Looping
    Looping
    == ~('30 '60)~  ; antiform (pack!)
It seems better. But it got me thinking about the style of writing loop-like constructs that do something like:
    result: ()  ; start result off as VOID!
    while [...] [
        ...
        ^result: something something
    ]
    return ^result
You might ask: "what if this glossed over a FAILURE!... or a PACK! containing a FAILURE!... on some intermediate loop iteration?" The offending `^result` could get overwritten, and you'd never find out about it.

And I would have said: "that's exactly why `(^result: ...)` doesn't suppress such states, and the expression still evaluates to the offending value." You'd have to consciously write `(ignore ^result: ...)` to overlook a bad state.
...but then... what if somebody does THIS:
    while [...] [
        ...
        if veto? ^result: something something [
            return null
        ]
    ]
Uh-oh. The VETO? test has effectively become an IGNORE in disguise: if the result was a FAILURE!, it just gets wiped when the loop goes through another iteration, and you never find out about it.
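To make the hazard concrete, here's a minimal Python model (the `Failure` class and `veto` function are hypothetical stand-ins for FAILURE! and VETO?, not Ren-C's actual machinery) showing how a test that merely inspects a failure lets the next iteration silently overwrite it:

```python
class Failure:
    """Hypothetical stand-in for a FAILURE! antiform stored in a variable."""
    def __init__(self, message):
        self.message = message

def veto(value):
    # Checks *only* for the veto signal; any other Failure tests false,
    # and nothing forces the caller to deal with it afterward.
    return isinstance(value, Failure) and value.message == "veto"

result = None
for item in [10, Failure("Demo Error"), 30]:
    result = item        # the second iteration stores the failure...
    if veto(result):     # ...the test says "not a veto", so we carry on...
        break

# ...and the third iteration overwrites it.  The failure is simply gone.
print(result)  # → 30
```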
## Things Like ANY and ALL Dodge This Due To Their Test
You can't LOGICALLY test something that isn't stable. So that forces constructs like ANY and ALL to guarantee that the values they return could be decayed if you wanted them to be.

But some constructs just pipe values along without testing them. And there's an inherent risk in letting people store FAILURE! states in variables without forcing some kind of "conscious acknowledgment" before discarding them.
I thought that the propagation out of SET would cover it, but there may be too many ways to sink that signal besides IGNORE.
## Conscious Ignoring: `ignore $var`
One line of thinking would be that if you're going to overwrite a variable containing a FAILURE!-like state, you have to prove you've handled that state.
    >> veto? ^x: fail "Demo Error"
    == ~null~  ; antiform (logic!)

    >> x: 10
    ** PANIC: Overwriting unhandled error in X: "Demo Error"
You'd have to do something like:
    >> veto? ^x: fail "Demo Error"
    == ~null~  ; antiform (logic!)

    >> ignore $x
    == ~,~  ; antiform (ghost!)

    >> ^x
    == ~,~  ; antiform (ghost!)

    >> x: 10
    == 10
Maybe this seems a bit paranoid, but I don't think it should be easy to lose FAILURE! states.
As far as I can imagine at the moment, requiring the IGNORE step before overwriting failures wouldn't break any currently well-formed code. The only thing I can see it doing is helping stop bugs.
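As a sketch of what that rule would mean mechanically, here's a small Python model (the `Slot` class, its `set`/`ignore` methods, and the panic message are illustrative assumptions, not Ren-C code): a variable slot refuses to overwrite an unacknowledged failure until you consciously ignore it.

```python
class Failure:
    """Hypothetical stand-in for a FAILURE! antiform."""
    def __init__(self, message):
        self.message = message

class Slot:
    """A variable slot that won't silently drop an unhandled Failure."""
    def __init__(self):
        self._value = None
        self._unhandled = False

    def set(self, value):
        if self._unhandled:
            raise RuntimeError(
                f"PANIC: Overwriting unhandled error: {self._value.message}"
            )
        self._value = value
        self._unhandled = isinstance(value, Failure)

    def ignore(self):
        # The conscious acknowledgment: discard the failure state on purpose.
        self._value = None
        self._unhandled = False

    def get(self):
        return self._value

x = Slot()
x.set(Failure("Demo Error"))

overwrite_blocked = False
try:
    x.set(10)            # would lose the failure -> refused
except RuntimeError:
    overwrite_blocked = True

x.ignore()               # the conscious IGNORE step
x.set(10)                # now the overwrite is fine
```

Note the design choice this models: merely *reading* or *testing* the failure doesn't clear the unhandled flag; only the explicit `ignore` does, which is exactly the "IGNORE before overwrite" requirement.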