Beating REPEND: A New Parameter Convention?

When you do `append a reduce b`, the REDUCE generates a new series... let's call it `rb`. Then `rb` is spliced into `a`. And then `rb` needs to be GC'd.

The idea behind `repend a b` is that you never make `rb`. Instead, expressions are evaluated one by one and put onto `a` as you go. The savings are twofold... reduced memory overhead, and reduced tax on the GC by not making extra series nodes.
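To make the difference concrete, here's a rough Python analogy (hypothetical, not Rebol code): `append a reduce b` is like building an intermediate list and splicing it in, while `repend a b` is like appending each evaluated result directly.

```python
# Python analogy of the two strategies.  `exprs` stands in for the block of
# unevaluated expressions; each entry is a zero-argument callable.

def append_reduce(a, exprs):
    rb = [e() for e in exprs]   # intermediate series "rb" is allocated...
    a.extend(rb)                # ...spliced into a...
    return a                    # ...and then rb becomes garbage to collect

def repend(a, exprs):
    for e in exprs:             # evaluate one at a time,
        a.append(e())           # appending directly -- no intermediate list
    return a

print(append_reduce([1], [lambda: 1000 + 20, lambda: 300 + 4]))  # [1, 1020, 304]
print(repend([1], [lambda: 1000 + 20, lambda: 300 + 4]))         # [1, 1020, 304]
```

Both produce the same result; the difference is purely in the intermediate allocation.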

That might sound like a great savings, but here is a heated debate in Red about the questionable benefit of REPEND (as well as /INTO):

`repend` vs `append reduce` differences and regression · Issue #3340 · red/red · GitHub

I guess I'm halfway on DocKimbel's side there... in that if REPEND isn't showing a benefit, it's more likely a bug in REPEND's implementation than a sign that the idea doesn't represent a savings.

But I hate the word REPEND. Things like REMOLD are double monstrous, and REFORM? Give me a break. These make a terrible impression.

More generally, I don't like the idea that every function would have to come in two flavors and create anxiety on the part of the caller as to if they're using the optimized one or not. I'd like any optimization to be more "under the hood" so the caller doesn't have to fret about it.


Could An Unstable Antiform Solve It?

Let's imagine something like antiform GET-BLOCK!s would reduce... and were--in fact--the result of REDUCE:

>> lift reduce [1000 + 20, 300 + 4]
== ~:[1000 + 20, 300 + 4]~

I used a LIFT there because I'm presuming the console could be one of the things that forces reduction, so that if you don't use a LIFT you see the reduced result.

>> reduce [1000 + 20, 300 + 4]
== [1020 304]

So REDUCE would be a very cheap operation with a fixed cost, regardless of how big the block you pass in is. (I've thought these might be "intrinsics", and not even create frames).

Then APPEND, INSERT, CHANGE, etc. can accept the antiform, and blend in the reduction with their new series creation. You can thus avoid the creation of large intermediate series.

And... parameter fulfillment for any function that doesn't understand the convention can just reduce it during argument fulfillment, passing the function a normal block.
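A minimal sketch of this deferred-REDUCE idea in Python terms (hypothetical names, not actual Ren-C internals): REDUCE returns a cheap lazy wrapper at fixed cost, APPEND consumes it expression-by-expression, and functions that don't understand the convention force it into a plain block during argument fulfillment.

```python
class Reduces:
    """Stand-in for the 'unstable antiform' of a block to be reduced."""
    def __init__(self, exprs):
        self.exprs = exprs          # unevaluated expressions; no evaluation yet

    def force(self):
        # "Decay": evaluate everything into a normal block (here, a list)
        return [e() for e in self.exprs]

def reduce_lazy(exprs):
    return Reduces(exprs)           # fixed cost regardless of block size

def append(series, value):
    if isinstance(value, Reduces):
        for e in value.exprs:       # blend reduction with the append,
            series.append(e())      # avoiding a large intermediate series
    else:
        series.append(value)
    return series

def takes_plain_block(block):
    # A function WITHOUT the convention: argument fulfillment forces decay,
    # so the function body only ever sees a normal block.
    if isinstance(block, Reduces):
        block = block.force()
    return block

print(append([1], reduce_lazy([lambda: 1000 + 20, lambda: 300 + 4])))  # [1, 1020, 304]
print(takes_plain_block(reduce_lazy([lambda: 1000 + 20, lambda: 300 + 4])))  # [1020, 304]
```

The design point this models: the caller writes the same thing either way, and whether the optimization kicks in depends on whether the receiving function opts into the convention.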

The only downside I can think of is that if something ^METAs the REDUCE and pokes it off somewhere, it could wind up performing the operation at a later time than you would think.
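That hazard can be shown with a tiny self-contained Python sketch (again an analogy, not Ren-C): if the unevaluated expressions are stashed instead of consumed, their side effects fire whenever they're finally forced.

```python
log = []

def noisy():
    log.append("side effect!")  # observable side effect of evaluation
    return 1000 + 20

# Stand-in for a stashed, deferred reduce: nothing has run yet...
deferred = [noisy]
assert log == []

# ...until someone forces it, possibly much later than the caller expected.
values = [e() for e in deferred]
print(values)   # [1020]
print(log)      # ["side effect!"]
```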

It also gives a function more information than you might intend to: if a function has this parameter convention, it receives the pre-reduced expressions rather than just the resulting values.

If these kinds of things are problems, we could call it REDUCES and say it is used by the optimization-minded, vs. trying to make it the pervasive default of REDUCE. People who disagree could say `reduce: :reduces` and see if everything runs the same...

Hmmm... it seems that deferring the REDUCE is generally only useful if you splice, so maybe it should be an isotopic GET-GROUP!?

Anyway... this is a kind of out-there idea. It seems to point a bit to my generic LAZY ideas pertaining to antiform OBJECT!, that were maybe more trouble than they were worth...

UPDATE: Just in general this kind of thing has problems, because it gives side effects to decay; I've wondered if decay should be allowed to have any side effects. This is a big design point.
