Design for IMPORT

So almost all languages are now at the point where modules are insulated from each other... and when a module imports definitions, those imports apply only to it and not to other places.

The way some import functions work actually forces you to give a new namespace to the imported definitions. For instance, in Node.js:

const some_lib = require('./some-library')

Then if you want to access any of the functions, you have to say some_lib.an_exported_api(...).

Of course, if you didn't want to qualify each call with the library name, you could make local definitions:

const some_lib = require('./some-library')
const an_exported_api = some_lib.an_exported_api

In the special case that you only want exactly one function, you can skip the variable for the library entirely:

const an_exported_api = require('./some-library').an_exported_api

But usually you want more than just one definition. So JavaScript has evolved an import statement:

import defaultExport from "module-name";
import * as name from "module-name";
import { export1 } from "module-name";
import { export1 as alias1 } from "module-name";
import { export1 , export2 } from "module-name";
import { foo , bar } from "module-name/path/to/specific/un-exported/file";
import { export1 , export2 as alias2 , [...] } from "module-name";
import defaultExport, { export1 [ , [...] ] } from "module-name";
import defaultExport, * as name from "module-name";
import "module-name";
var promise = import("module-name");

So What Should Our IMPORT Look Like?

Rather than thinking of IMPORT as something that takes a bunch of refinements, I'm wondering if we should think of it as single arity. If your intent for specifying what to import is simple enough to express in a FILE! or URL! or TAG!, then do that... otherwise, dialect a block:

; import from a file
import %../modules/module-name.r

; import from url
import http://example.com/module-name.r

; import from package registry
import <json>

; import a particular registered version (maybe allow PATH!?)
;
import [<json> 1.2.0]
import <json>/1.2.0

; import a specific git hash
; (Note: short hashes are 7 hex digits long, an odd count that is
; illegal as a BINARY! literal, so that has made me think that we
; should be using ISSUE!/TOKEN! for Git hashes)
;
import [<json> #ccb4914]

Notice how I'm leaning toward using a BLOCK! vs trying to use refinements. I think this is better than trying to say IMPORT/VERSION/ITEMS or similar.

So when you have a complex list of specific things to import as well as their aliases, that just folds into the block:

import [
    <json> #ccb4914
    some-function
    aliased-name: some-other-function
]

We can't really use any WORD!s in this dialect, like as or from or ->, because those could be the words of what you are importing. :-/

So that's why I went with this idea of using SET-WORD!s.

But That's Actually... Kind of Ugly

Looking at the IMPORT statement taking a block like that doesn't feel great.

import [
    <lib1> #ccb4914
    some-function
    aliased-name: some-other-function
]
import [
    http://example.com/some-library.r #1c37714
    some-function2
    aliased-name2: some-other-function2
]

Even just breaking the library out as a first argument improves it:

import <lib1> [
    #ccb4914
    some-function
    aliased-name: some-other-function
]
import http://example.com/some-library.r [
    #1c37714
    some-function2
    aliased-name2: some-other-function2
]

I feel like having the library name next to the IMPORT statement is important. And I don't think you should have to use a refinement to get the block.

This makes me feel that a single-argument IMPORT is probably something to be avoided. This combines with my remarks on prescriptivism regarding wildcard imports.

Detecting Assignment

The question of being able to tell whether there's an output is something I think we should certainly not shy away from in designing features. JavaScript does it.

We could make an IMPORT that can tell when you've assigned it:

zip-stuff: import <ZipModule>

Such a form could then avoid consuming a BLOCK! after it naming individual things to import, since you have named an aggregator:

import <ZipModule> [zip unzip]

But that would be foiled by the idea of putting more than just word mappings in the block (version info, etc.). This would suggest winding up with refinements, like /VERSION, that would get harder to read:

import/version <SomeModule> [
     blah-blah,
     long-list
] 1.0.4

The clunkiness of that is what made me favor using the block as a dialect, which gives more control over the ordering and presentation.

Anyway, point is that detecting a variable on the left to assign to is an interesting idea but might be hard to exploit to make the simple arity-1 case work without a block.

Rethinking Prejudices About "Inline Dialecting"

Historically it was frowned upon to try to use literal searching for keywords to make things look "prettier", because it contaminated the function interface.

An early example was the avoidance of IF...ELSE in favor of EITHER, because there wasn't a way for ELSE to make sense.

There's a good infix logic to them now, and ELSE is a function in its own right. But what if it weren't, and someone just wanted to write a variadic that looked and saw the word "ELSE" and decided to read another argument? Is that bad?

I ask because it seems that a Redbol IMPORT has a tough time looking as nice if it can't pull from the playbook of "recognize when the next word at the callsite is FROM". If everything has to be in a block or cued from refinements, you get a sort of stark regularity that can be a bit of a turn off.

It just seems a bit like restrictive thinking to say you can start a BLOCK! and then do any dialect you want, but then forbid "inline dialecting" where you recognize keywords without introducing a nesting level.

Just something extra to think about while considering how IMPORT might be within striking distance of being as good as JavaScript (!)


Well, variadics have been added for a reason.
So the question is: should Ren-C restrict itself to some sort of basic feature set? Or should it be a first user and poster child of all the available features?


An argument for arity-1 IMPORT could be the suggestion that you could IMPORT any old OBJECT!, and it would act as if you'd declared that object's fields as top-level variables.

That could be another construct... like WITH. But there's something kind of interesting about IMPORT... which makes me wonder, if you want to load a module as an object, whether you might just use another word for that, e.g.

mod: load %some-module.reb

You get back to the "loading multiple copies" question, of whether LOAD should cache if the thing it's loading is a module. And then you have to worry about all the questions of parameter and dialect duplication (what about version numbers, etc.?)

But maybe it could be worked out that IMPORT just propagates options off of LOAD, and:

 import %some-module.reb
 => 
 import load %some-module.reb

And IMPORT would inherit all the refinements. Perhaps the specification of narrowing down what you imported could be done like:

import @[zip unzip] %some-module.reb

The reason I suggest an @ parameter instead of a plain block is that you might want to say:

 import (expression producing thing to import)
 import (expression producing import list) expression producing thing to import

So using import @[zip unzip] %some-module.reb gives you the option of instead saying:

 import @(import-list) %some-module.reb

But this is all based on me being afraid of the traditional-looking:

 import/partial %some-module.reb [zip unzip]

Because it feels like it makes an ugly first impression, and it gets worse with

 import/partial/version %some-module.reb [zip unzip] 3.0.3

We'd certainly be better off with:

 import [%some-module.reb 3.0.3, zip unzip]

The point is to demonstrate something that people would actually want to use. It's important to be self-aware when the very first impressions made at the top of a file are bad.

I think the first lines of the file need to be as good as they can.

So the idea of a construct that behaves differently based on whether there's something to its left was lost when <skip>-able parameters were killed off.

This means today when you write something like:

vm: import %ws-runtime.reb

...you do get the module that was imported in the vm: variable, but it also brings all the exports into the local space as words. It had no detection to tell it not to.

I'll re-emphasize that modern languages are stressing the idea that you have to namespace things in this way.

If you want to bring all of those exports into "scope", that could be done with the USE meaning I've been suggesting:

use import %ws-runtime.reb

But that's kind of an all-or-nothing, where you'd imagine wanting to be able to name just some things to use:

use:some [foo bar] import %ws-runtime.reb

Something like use:some [foo bar] import seems to drift a bit from that.
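For comparison, the all-or-nothing versus selective distinction has a rough JavaScript analogue, sketched here with a plain object standing in for the imported module (the foo/bar/baz exports are made up, and this is an analogy, not the proposed Ren-C semantics):

```javascript
// A plain object standing in for the exports of %ws-runtime.reb
// (foo/bar/baz are hypothetical names).
const ns = {
  foo: () => 'foo!',
  bar: () => 'bar!',
  baz: () => 'baz!',
};

// Analogue of "use import ...": all-or-nothing, every export
// lands in the enclosing scope.
Object.assign(globalThis, ns);
console.log(baz()); // prints "baz!" (all three are now visible)

// Analogue of "use:some [foo bar] import ...": name just the
// ones you want.
const { foo, bar } = ns;
console.log(foo(), bar()); // prints "foo! bar!"
```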

Losing the detection-of-assignment-on-left does set this back a bit...and isn't something I realized was going away when <skip> was tossed. But there were good reasons for killing that off, and those good reasons don't go away just because it helped one usage.

Making USE arity-2 could avoid the use:some refinement, looking more like:

use <*> import %ws-runtime.reb
use [foo bar] import %ws-runtime.reb
vm: import %ws-runtime.reb

What to say for "opt all in" could be something else, maybe #. Or maybe it would look nicer as a specialization?

use # import %ws-runtime.reb
use-all import %ws-runtime.reb

I dunno. Anyway, the generic USE construct would be something you could apply to any object, to bring its definitions into "scope" for word lookup.

It's probably better to go this direction than have IMPORT use a <skip> parameter.

Anyway, the design grind goes on.


I like the idea of separating USE and IMPORT like that. It feels like a very clean design to me.

As for ‘opt all in’, I like the syntax use <*> object or use # object. I don’t feel like a specialisation or refinement is necessary. There’s probably further dialecting possibilities for the first argument of USE, if we just think about what people would want to do.


Getting to be around the time where I have to do something about this. (I need the form of import that gives back a module without pulling it into scope...)


I can't shake the feeling that when you have a list of configuration things that is possibly infinite, you don't want that potentially infinite list as the first argument.

Looking at this flipped (and switching back to historical .r)

use import %ws-runtime.r <*> 
use import %ws-runtime.r [foo bar] 
vm: import %ws-runtime.r

Now we can ask: how would USE interpret a filename if not as a module? Could the import be implied?

use %ws-runtime.r <*> 
use %ws-runtime.r [foo bar] 
vm: import %ws-runtime.r

Bringing me back to: does the arity-2 nature of USE convey the grouping well enough? If not, we're back to a single arity default:

use %ws-runtime.r 
use [%ws-runtime.r foo bar]
vm: import %ws-runtime.r

Sidenote here, what (if anything) is the result of a USE operation? It's importing symbols into a context, so maybe the aggregate context? Seems to make the most sense, and it's something it has on hand.

let a: 10
let b: 20
aggregate: use make object! [c: 30 d: 4] 
let e: 30

>> aggregate.a
== 10

>> aggregate.c
== 30

>> aggregate.e
** Error e is not in there

Seems vaguely sane. So in this world, use %foo.r would be a synonym for use import %foo.r.
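The layering in that sketch can be mimicked in JavaScript with object spread, where later layers win (again just an analogy, not the proposed USE semantics):

```javascript
// Outer scope definitions, then a "used" object layered on top.
const outer = { a: 10, b: 20 };
const used = { c: 30, d: 4 };

// The aggregate sees both layers; a later layer's fields would
// shadow an earlier layer's fields of the same name.
const aggregate = { ...outer, ...used };

console.log(aggregate.a);      // prints 10
console.log(aggregate.c);      // prints 30
console.log('e' in aggregate); // prints false (e was defined later)
```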

It's possible--perhaps--that USE could be an arity-1 equivalent of BIND.

So for example, you might want to import a module's definitions so just one function uses it. You don't want to do that every time you run the function, so:

something: func [...spec...] (use %some-module.r [...body...])
something-else: ... ; no %some-module.r influence, GROUP! ended

The use is injecting that module's influence in the evaluation stream, so when the body block evaluates it gets that binding wave added on. But perhaps you could also write that as:

something: func [...spec...] bind %some-module.r [...body...]
something-else: ... ; no %some-module.r influence, BIND arity-2

So you wouldn't need to throw up a GROUP! or something to stop the USE propagation from spreading. It seems logical that anything you might want to do one way, you might want to do the other way.

Sounds nice on paper. :page_with_curl: But there's a big wide open design space there about how to subset a binding operation to just the things you want.

use [
    %some-module.r [x y z]
    %some-other-module.r [a b ccc: c]  ; alias c as ccc?
]

And maybe not all the options have to go on all the things. The above could cover the common IMPORT case, but perhaps you could break it apart:

some-mod: import [%some-module.r <lots> /of #crazy op.t:i/o:n.s]

use [
    (some-mod) [x y z]
    %some-other-module.r [a b ccc: c]
]

It's starting to look palatable... especially if USE and BIND can be unified.

What About Old Rebol2 USE?

What old Rebol2 USE used to do was effectively EVAL BIND:

x: "outside"
use [x] [  ; old meaning of USE
     x: "inside"
     print [x]  ; prints inside
]
print [x]  ; prints outside

This form of USE tended to be used to do some prep work and return blocks, e.g. a BIND operation that wanted to do some additional work:

make-thing: func [...] [
    return use [x] [
        x: 1 + 2  ; concept is there's setup code, so you can't just bind
        [some block referring to x]
    ]
]

Things are a lot different today, and you can accomplish this with LET and other constructs.

    return eval bind [x] [
        x: 1 + 2
        [some block referring to x]
    ]

    return (
        let x: 1 + 2
        [some block referring to x]
    )

    return (
        use {x: 1 + 2}  ; FENCE! behavior not decided, but probably this works
        [some block referring to x]
    )

I think the idea of being able to slipstream bindings into the current evaluation stream is a higher calling for USE, and I don't think this arity-2-evaluating form is a big priority.