People suggest this from time to time. One of the good reasons not to do it is that adding refinements to functions can break previously working code.
foo bar /refine1 baz /refine2 mumble
Let's say you start out with a situation where foo is an arity-1 function with two refinements that take arguments, /refine1 and /refine2, while bar, baz, and mumble are all arity-0 functions that take no refinements.
Then one day, someone comes along and adds a /refine1 refinement to bar that takes no argument (or takes one). And someone adds a /refine2 refinement to baz (or to bar...) that takes an argument (or doesn't). Etc.
You wind up in a situation where existing callsites can be invalidated by adding refinements to functions. It's not robust, compared with:
foo/refine1/refine2 bar baz mumble
That form only suffers callsite breakage if you add required arguments to the functions; adding new optional refinements is safe.
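To make the failure mode concrete, here is a hypothetical sketch of how the same source text could silently change meaning (the function names are the placeholders from above, and the "after" parse is speculative, since this feature doesn't exist):

```
; Original reading: bar, baz, and mumble are arity-0, so both
; refinements and their arguments belong to foo:
;
;     foo bar /refine1 baz /refine2 mumble
;     ; => foo (bar), with foo's /refine1 = baz, /refine2 = mumble
;
; Now bar gains a /refine1 of its own that takes an argument.
; The identical source could bind /refine1 to bar instead,
; stealing baz as its argument:
;
;     foo (bar /refine1 baz) /refine2 mumble
;     ; => a different call, with no error to warn you
```

No code was edited at the callsite, yet the callsite's meaning changed.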
In practice, there are lots of other reasons to prefer the idea that a single function call can be "configured" in a single block-item's worth of space. In Ren-C in particular, you can GET that function as a specialization via what's called Refinement Promotion. (Note: at time of writing this has problems when refinements have arguments... though some headway is being made on fixing that.)
>> ap: get $ first [append "abc" "d"]
== \~&[frame! [series value part dup line]]~\ ; antiform (action!)
>> apd: get $ first [append:dup "abc" "d" 2]
== \~&[frame! [series value part line]]~\ ; antiform (action!)
>> ap [a b c] spread [d e]
== [a b c d e]
>> apd [a b c] spread [d e] 2
== [a b c d e d e]
This is very helpful when building higher-level dialects that want to incorporate function calls: you can GET a configured call (unrefined or refined) as a black box.
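As a hypothetical sketch of what that enables (the command blocks and the processing loop here are invented for illustration, not an existing dialect):

```
; A dialect could accept blocks whose first item is any callable,
; refined or not, and treat each one uniformly via GET:
;
;     commands: [
;         [append:dup ...]     ; refined call, configured at the callsite
;         [reverse ...]        ; plain call
;     ]
;
;     for-each 'cmd commands [
;         action: get $ first cmd   ; Refinement Promotion does the work
;         ...                       ; apply action to the dialect's data
;     ]
;
; The dialect never needs to know which refinements were used; the
; promoted specialization presents one flat parameter list.
```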