Submitted by: fork

Given that you can do:

You should be able to do:

As error message shows, you currently can't.

Imported from: CureCode [ Version: r3 master Type: Wish Platform: All Category: Native Reproduce: Always Fixed-in: none ]
Imported from: metaeducation#2081

Comments:

Submitted by: BrianH
REDUCE/into and COMPOSE/into are primarily meant for efficiency, by making intermediate blocks less necessary when doing incremental block construction. However, due to implementation constraints they only reduce the overhead of allocating intermediate blocks in the heap, which reduces GC pressure: they use the stack as the intermediate block, and simply don't allocate another one in the heap.
Nonetheless, getting rid of this second intermediate block may increase efficiency in the operations that are equivalent to the INSERT something-stringish REDUCE-or-COMPOSE block code pattern. As long as we can do this efficiently, it shouldn't break the model to allow it. And it would allow us to make other mezzanine functions more efficient or flexible as well, like (RE)JOIN or REPEND. It probably won't help much with the other */into functions since they're using chained inserts, but we'll see.
We would need to figure out a way to share code with the existing INSERT action, to make it able to take its value parameter from the stack. It's a reshuffle, but it could help increase Rebol's efficiency and flexibility.
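For reference, a minimal sketch of the two patterns in current R3 (names are illustrative; the string-target form of /into is exactly what this wish asks for):

>> buf: make string! 16
>> head insert buf reduce [10 + 20 30 + 40]  ; REDUCE allocates an intermediate block on the heap
== "3070"

>> out: make block! 4
>> head reduce/into [10 + 20 30 + 40] out    ; values are built on the stack, no heap intermediate
== [30 70]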
Hostilefork commented:
Resolved in Ren-C, because it has eliminated the /INTO feature (explanation here). Closing this as it was my wish in the first place; I retract it.

@hostilefork Being able to reuse series is a very important optimization. As /into was added for efficiency, instead of removing it, it would be better to make it even more powerful. I mean, using modified examples from your explanation: the current (not Ren-C) behaviour requires reassigning the target on every pass, while it would be better if the reducing /into did not just return the position at the tail of the insertion, but also set the position in the target word, so one could write just:
>> data: make block! 2000000 delta-time [loop 1000000 [reduce/into [10 + 20 30 + 40] data]]
== ??? (presumably a better result, because the `data:` reassignment is eliminated)
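For comparison, a sketch of what the current behaviour requires: a `data:` reassignment on every pass, since /into only returns the position after the insert and does not update the word itself:

>> data: make block! 2000000
>> delta-time [loop 1000000 [data: reduce/into [10 + 20 30 + 40] data]]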
But as it was your wish... feel free to have it closed ;-)
Btw... the difference with append in my R3:
>> data: make block! 2000000 delta-time [loop 1000000 [append data reduce [10 + 20 30 + 40]]]
== 0:00:00.355581
and in Ren-C (version: 2.102.0.3.1 build: 21-Apr-2018/7:48:18)
>> data: make block! 2000000 delta-time [loop 1000000 [append data reduce [10 + 20 30 + 40]]]
== 0:00:00.569506
PS: I'm not for optimizations in all cases, and I agree that sometimes it's better to keep the code readable. Still, why stop yourself from writing more optimal code?
Hostilefork commented on Jun 27, 2018:
> But as it was your wish... feel free to have it closed ;-)
You may feel free to reopen it (or open a new issue) if it's something you are passionate about and want. But there's enough to worry about without worrying about my own wishes that are no longer my wishes.
> Still, why stop yourself from writing more optimal code?
I explicitly said that going around looking for cases where it makes things faster is not the point.
It's good to look for optimizations, but not at the cost of ruining the system or creating problems with it for eternity. Getting rid of this particular virus and finding performance benefits another way is the right answer.
What if series allowed their data to be broken up, so REDUCE segments could go directly into (possibly slightly too large) memory segments? Those segments could form linked lists, and could be enumerated fine that way... but the linked lists would be flattened out on demand when operations that required flattening came along?
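A toy illustration of that idea, purely hypothetical, using Rebol objects in place of the internal C structures (`seg1`, `seg2`, and `flatten-segs` are made-up names):

; each hypothetical segment holds a chunk of reduced values,
; plus a link to the next segment (none marks the end of the chain)
seg2: make object! [data: [30 70] next-seg: none]
seg1: make object! [data: [10 20] next-seg: seg2]

; on-demand flattening: walk the chain, splicing each chunk into one ordinary block
flatten-segs: func [seg [object! none!] /local out][
    out: copy []
    while [seg][
        append out seg/data
        seg: seg/next-seg
    ]
    out
]

flatten-segs seg1  ;== [10 20 30 70]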
Ren-C has an improvement that makes it orders of magnitude faster in the creation of derived objects. It came from deep thinking and problem-solving with non-trivial designs. Giving up on looking for fundamentally better designs and doing hacks like /INTO, at the cost of the language's present and future, is a sign of "we couldn't think of anything good"--and it lets the whole thing slide into gibberish and mediocrity.
Hostilefork added the Ren.dismissed label on Jun 27, 2018