Attendees:
Name | Abbreviation | Organization |
---|---|---|
Waldemar Horwat | WH | Invited Expert |
Jack Works | JWK | Sujitech |
Nicolò Ribaudo | NRO | Igalia |
James M Snell | JLS | Cloudflare |
Dmitry Makhnev | DJM | JetBrains |
Gus Caplan | GCL | Deno Land |
Jordan Harband | JHD | HeroDevs |
Sergey Rubanov | SRV | Invited Expert |
Michael Saboff | MLS | Apple |
Samina Husain | SHN | Ecma International |
Chris de Almeida | CDA | IBM |
Keith Miller | KM | Apple |
Istvan Sebestyen | IS | Ecma |
Jesse Alama | JMN | Igalia |
Eemeli Aro | EAO | Mozilla |
Ron Buckton | RBN | Microsoft |
Daniel Minor | DLM | Mozilla |
Presenter: Guy Bedford (GB)
GB: So we are starting with import sync today. To give the background on this proposal: there was recently a PR on the Node.js project, #55730, for an `import.meta.require` function inside of ES modules. The only runtime today that supports a syntactic `require` inside of ES modules is Bun, and what makes this possible is that we have synchronous require of ES modules inside of Node.js, so this seemed like a useful feature for users to have. On further investigation and discussion, we were able to determine that, since you can import CommonJS modules and require ES modules, the only feature that was really wanted here was to synchronously obtain a module. So this could be thought of, in some sense, as the last feature of CommonJS that Node.js is struggling with. I wanted to bring it to the committee out of this discussion to make sure we're having the discussion in TC39, because there is clear demand for some kind of synchronous ability to get access to a module. And there is also a risk that if TC39 were to consistently ignore this demand, platforms could work around it and potentially create new importers with semantics that differ from the ones defined in TC39.
GB: So what are the use cases here? A very simple one is when you want to get access to the Node.js builtin modules, fs or any of those. Node.js added an interface for getting the builtin modules to solve this use case, so that is clearly not the particular use case being tackled here, although a cross-platform version of it may be. There is also synchronous conditional loading, and then getting dependencies that have already been loaded: if a dependency has already been imported, it is available synchronously, if you had the ability to check for it. So kind of like traditional `registry.get` use cases. And then there is the all-in sync executor use case, where there could be benefit in having a sync executor when we do module virtualization, and also for module instances, module expressions, and module declarations.
GB: And what is different about this conversation today versus in the past? One of the big changes that happened recently in Node, with the require refactoring, is that module resolution is fully synchronous. This is now pretty well set in stone, and it is a recent development: until recently, Node.js had an async hooks pipeline, with the ability to run asynchronous resolvers on a separate thread, and various other asynchronous hooks. All of that has been made, or is in the process of being made, fully synchronous. In addition, browsers implemented fully synchronous resolvers, which means we can do the resolve part of an import sync fully synchronously; we know that in reality all of the platforms today use synchronous resolution, and that was never a given. That's one of the changes. Another change worth bringing up is that a sync import was never a possible discussion before, because it went against so many of the fundamental semantics of the module system shared between browsers and Node.js, and because of the difficulty of bridging the module system between those very different environments. But now that the baseline asynchronous behaviors are fully baked, reliable, and implemented, and, for example, the Node.js module story has come together, it is possible to consider synchronous additions that don't sacrifice the core semantics but can layer on top at this point. And `import defer` has actually already done that work: the semantics that import defer defines are in many ways exactly the semantics you would want for synchronous execution. And then when we think about what we want for virtualization use cases: dynamic import is the thing we always talk about as the executor in virtualization and compartments, and that would require virtualization to be async. Having a synchronous form of import, a synchronous virtualization, could be useful.
GB: The design I'm proposing here is just that, a proposal: if someone thinks there is a better design (and I try to discuss a few), we shouldn't move forward with this design, but this is the design I'm bringing forward for discussion today, which is just an `import.sync`. It would not be a phase; sync is not a phase. The semantics would be roughly what `import defer` has as of today. You would do the synchronous resolution, throwing any resolution errors; if there is already a module in the registry, providing that; if not, doing host loading. There is a question here about whether host loading should be included in import sync: should we actually do the creation, compilation, and instantiation of the module inside of import sync? That is obviously something that browsers won't be able to do, which creates a divergence in module behavior between browsers and Node, where Node can do the full pipeline and browsers can't, and it would be up to bundlers to bridge that. Effectively, browsers could throw a "not available synchronously" error where Node.js could succeed, or TLA could cause a throw before completion, et cetera. It would be this new error, "not available synchronously", or some kind of error like that. It is very similar to what Node.js does with require: when you require something that uses TLA, it will start trying to load it, give you this error, maybe leave some stuff partially loaded, and tell you to use the async import instead. We could have some kind of host error, or maybe a fully TC39-defined error; we would decide what error it is.
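As a sketch of how the proposed usage might look (entirely hypothetical: `import.sync` is not implemented anywhere, and the error shape is one of the open questions above):

```js
// Hypothetical usage sketch of the proposed import.sync (not implemented
// in any engine; the API name and error behavior are illustrative only).
let fs;
try {
  // Resolution is synchronous on all hosts; if the module is already in
  // the registry (or the host can load it synchronously), this succeeds.
  fs = import.sync("node:fs");
} catch (err) {
  // "Not available synchronously": e.g. the module uses top-level await,
  // or the host (a browser) cannot perform synchronous loading.
  fs = (await import("node:fs")).default;
}
```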
GB: And then to explain how this could be useful in module expressions and declarations: instead of using an async function to get an instance for a module expression, if you have a module expression available synchronously, there is no reason you couldn't synchronously evaluate the module. With module expressions you have everything available in the synchronous context, so maybe that justifies having a synchronous executor. For module declarations, with the dependency graph of the module declaration, you could synchronously execute the modules as long as they don't have top-level await or third-party dependencies. What if they do have external dependencies? Consider trying `import.sync` on a module declaration with external dependencies: in this example, an outer import gets the dependency into the registry and through execution, so when the module tries to load the same import specifier string, it is already in the registry and available. That can actually work: if you just bubble up all of the string specifiers to the outer scope and know they are executed, that will be fine. There is a nice interaction with `import defer`: if you have deferred loading of something, it will be ready, unless it is in a cycle, as NRO reminds me. Import defer readiness is exactly import sync readiness. The question then is whether it would be worth considering, for the import defer proposal, some kind of namespace-free defer, because in this example we never use the deferred namespace; we want to be able to access it through other means. For example, it could have been in the nested module declaration. So with the namespace-free defer you guarantee the semantics, you guarantee all the work has been done before you get here and import. And here is the example with module declarations: you would execute `name` and `lib` together late in the synchronous executor, and the defer would have done all the upfront async loading.
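A hedged sketch of that interaction, combining a namespace-free defer with a synchronous executor (all of this syntax is speculative and spans several separate, unfinished proposals):

```js
// Speculative sketch only: module declarations, namespace-free defer, and
// import.sync are all proposals, none of them final.
import defer "./lib.js"; // namespace-free: async loading happens up front

module counter {
  import { helper } from "./lib.js"; // already loaded by the outer defer
  export const start = helper(0);
}

function initSync() {
  // Everything is already in the registry, so a synchronous executor works:
  const lib = import.sync("./lib.js");
  const c = import.sync(counter);
  return c.start + lib.helper(1);
}
```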
GB: That's the semantics for `import.sync`. To consider some alternatives of what else could be done: for a registry getter, you could have just a plain `registry.get` with an `import.meta.resolve`. In general, the registry probably belongs contextually, so you probably want it to be `import.registry` or some kind of local thing, and then you probably do want a `resolveGet`. So my point here is that the ergonomic API you want for registry lookups is something that does the resolution and is able to check the registry. Just thinking about different APIs that could be possible, this ends up with a semantic that is very similar to `import.sync`. But overall, the alternative design space seems to include, firstly: do we want this divergence between Node and browsers, where Node could maybe do more loading than browsers can? Or do we try to be stricter, and say these are very strict semantics where you can only import something with very specific availability in the registry, or do something like registry capabilities? I generally think the use cases here get most interesting when we think about the interactions with module expressions, module declarations, and virtualization, where registry APIs might not be suitable. But registry API exploration could be in the space of alternatives as well.
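For comparison, the registry-getter alternative might look something like this (entirely hypothetical API names: `import.registry` and `resolveGet` exist in no specification):

```js
// Hypothetical registry-getter alternative (no such API exists): resolution
// plus a registry lookup, returning undefined when the module has not been
// loaded yet rather than attempting any synchronous loading.
const ns = import.registry.resolveGet("./feature.js");
if (ns) {
  ns.enableFeature();
} else {
  // Not loaded yet; fall back to the async path.
  (await import("./feature.js")).enableFeature();
}
```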
GB: And then, across the use cases, for other alternatives to sync module executors: maybe you have a `.deferredNS` property on the module source, or something like that on the instance; maybe it is some kind of function on the module source or instance. Optional dependencies might have other solutions, like conditionally calling `import.meta.resolve`, or weak imports. We have the builtin getter in Node, and sync conditional execution is kind of solved by import defer already. But these could be worth having discussions around.
GB: And the risk is: would import sync be something that pollutes applications, with people starting to create much less analyzable applications, or applications that have different semantics between browsers and Node? Bundlers could still analyze it and make it work, but that support doesn't exist in the ecosystem today. If we had this from day one, it would have been a more tempting proposal. But today it seems like it would be hard for this to prove itself more ergonomic than the static import system we have in place.
GB: And then deadlocks: a cycle of import syncs is a deadlock. This is already effectively possible with import defer, I believe, so that's a risk. And then I mentioned this browser-server divergence, which comes down to the question: do we want to say that all modules that are import synced must already be present in the registry in some shape or form, or do we allow import sync to do the loading and instantiation? There might be ways to define it to more closely match the defer model. Other, weaker import semantics could be possible to explore. I will just end there, and hopefully we can get a few minutes for the next presentation. I will go to the queue.
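The cycle risk described above can be illustrated with a minimal two-module example (hypothetical syntax; whether this deadlocks or throws is exactly the open question):

```js
// a.js -- hypothetical: each module synchronously demands the other
import.sync("./b.js");
export const a = 1;

// b.js
import.sync("./a.js"); // cycle: must either deadlock or throw
export const b = 2;
```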
NRO: With a cycle of executions, import defer would throw instead of deadlocking.
GB: Thanks. We would probably have the same behavior, that makes sense.
ACE: On the relation to import defer: while import defer does define some aspects of this, it is crucially different in that it splits up the explicitness of when the async work can happen and when the sync work can happen. It puts things in the spec that allow theoretically synchronous execution, but the developer intention is not that it must be synchronous. It allows browsers to load things asynchronously and allows top-level await to still happen. I see the relation, but I don't think import defer gives this a free pass to just naturally follow on. There is still such a fundamental difference between the two.
GB: If I can follow up on that briefly: there are a bunch of loading phases, and maybe it would help to write down the phases of module loading and mark which belong to defer, which belong to asynchronous import, and which belong to import sync. I think there is a lot of crossover, insofar as when you evaluate an import defer, you are doing execution, which is exactly what we want to do for import sync; and insofar as there might be a model of `import.sync` that we want to specify that is no weaker than what defer uses. We could even say that we have exactly the same semantics in a strict version of this implementation, and then there is the question of how much it should be weakened.
ACE: So if `import.sync` is just executing the evaluation phase, the last part of import defer, that's only going to work when the host can call the load callback synchronously, or it's already in the –
GB: It can't call the load callback.
ACE: So then we're saying people need to ensure something else has added the module to the registry for this to work. A big issue we had trying to modernize our module system at Bloomberg, to be in a position to use import defer, was stopping code from making assumptions about what other modules are being loaded. There would be comments saying "this is safe because we know someone else has already loaded this". We have been trying to make everything static; we have environment-based lookups, load this on the server and this in tests, and we're trying to make all of those things statically analyzable and limit the things that rely on interaction at a distance.
GB: So within this proposal, there is a kind of gradient from the very strict version that matches defer-level strictness, and I don't think we would get stricter than what defer is today, to maybe some slight weakening, where we could say we are going to permit some host loading. But I think the strict definition that matches defer, insofar as it would ban any host loading, is very much a viable implementation, and that could be supported within the proposal as currently proposed.
SYG: So I am concerned about the complexity of all the ESM stuff; adding sync back in particular concerns me. We spent a lot of effort with TLA to move the infrastructure in the spec and in implementations to everything being async. Adding another sync path that threads through everything makes me very unhappy. Also, from the browser perspective, the divergence problem concerns me. If we diverge, that seems bad; if we don't diverge, the value of the proposal seems much less motivated to me, from the browser perspective certainly. If Node is disallowed from synchronously loading, then why would Node want this, is my question.
GB: That's a good question. Just to be clear, I personally have no desire to see import sync today. I am not looking to progress this proposal unless others want to progress it. I am presenting it because it is something that people are doing, something with a demonstrated use case, and because of this risk: if we don't show exploration of the space to solve use cases for users and demonstrate we're interested in having those discussions, and instead shut them down, we run the risks I mentioned.
SYG: Okay. Well, then let my crankiness be noted in the notes.
GB: Your crankiness is noted.
CDA: Let the record show that SYG is cranky. Thank you.
MM: Make sure to coordinate with XS importNow.
GB: That’s definitely been an input into the design. We’ll follow up with some discussions.
GB: So I’m not asking for Stage 1. What I’m asking for is if anyone thinks I should ask for stage 1 or if anyone thinks I should not ask for Stage 1?
MM: I think you should ask for Stage 1. Stage 1 is weak enough in terms of what it implies with regard to committee commitment and signal, and the issues you've raised are perfectly reasonable to explore in a Stage 1 exploration. I would support Stage 1.
JSL: Just to the point: I'm not particularly happy with having a sync option for import either. But I would also be very unhappy if Node were to go off and do this on their own and no one else follows suit. If this is going to exist in the ecosystem, I would rather it be part of the standard, from the point of view of someone who has to make their runtime compatible with Node and other runtimes. I don't want to be chasing incompatible, nonstandard extensions to stay compatible with the ecosystem. So while I absolutely sympathize with the concern about adding sync back into this picture in the standard, I would rather it be done here than in Node, if that makes sense.
DE: I'm not sure whether this should go to Stage 1. This is very different from the design of ES modules generally, and maybe we should be hesitant before giving this sort of positive signal to this direction. But I'm not blocking it.
JHD: I mean, there are a lot of different desires around this stuff. The desire expressed on the Node PR, as I understand it, is something about being able to statically determine import-like points where new code is brought in. There are some folks who want synchronous imports, some folks who want to be able to import JS without the type boilerplate, and some people who want the CJS algorithm in Node. A lot of the use cases for sync imports that I see are ones that the conditional imports proposal from many years ago, conditional static imports, might have addressed, and it's worth looking into that. Simply being able to put a static import in a non-top-level position, allowing it to appear in blocks or ifs or things like that, would I believe provide the same amount of staticness and the same amount of, what's the word I'm looking for, apparently synchronous imports. And it may drastically reduce the desire for a synchronous import. So I do think it is worth going to Stage 1; I think it's worth exploring all these possibilities. But the reality is that there might be some use cases that we can't solve in ESM because of the decisions made ten years ago, including doing it all asynchronously. If we can solve them, it is definitely better to solve them in TC39 than to have every individual engine or implementation make their own version. So I do think it's worth continuing the discussion, if only to avoid that risk.
CDA: Noting we are almost out of time, SYG.
SYG: Just to respond to JSL's point earlier: if the thing we standardize is not good enough for the sync use cases on the server side that motivated the server runtimes to come up with their own nonstandard solutions in the first place, we will just have another standardized thing that people don't use, and they will continue to use the nonstandard thing. I don't think it's a silver bullet to standardize something if it's actually not good enough. We have to be pretty sure it's good enough to replace the nonstandard solutions, and I don't see a path right now to that. So while I don't block Stage 1, because exploration is what is needed here, I don't see a path currently to Stage 2. I want to be very clear about that.
JSL: Just quickly responding, yeah, I agree. I agree with that. I just think it is something that we need to discuss. I don’t see a path to Stage 2 right this moment either. But let’s at least have it on the agenda for discussion, then.
CDA: Okay. You also had some voices of support for asking for Stage 1 from KKL and JWK, which I assume translates to support for Stage 1 if you're asking for it.
GB: I'm going to suggest a framing, then, in that case: what if we say that there is empathy for the use cases in this space, but there is certainly not agreement on the shape of the solution, and so this specific proposal for `import.sync` is not the thing being proposed for Stage 1? What if it were instead Stage 1 for something in this space? And maybe I actually update the proposal to remove the exact API shape and say this is still an exploration.
MM: With Stage 1, the general thing is that there is a problem statement, which is really what you are asking Stage 1 for. The concreteness of having some sketch of a possible API is always appreciated, but the thing Stage 1 is about is the explicit problem statement, and I think this is a fine problem statement to explore in Stage 1.
GB: So to be clear, what we are asking for Stage 1 on, in that case, is not the proposed design, because that is not a Stage 1 decision, but exploring the sync import use cases, including optional dependencies, synchronous executors, conditional loading, and builtin modules, as the problem statement. And under that framing, DE, would you be comfortable with Stage 1?
NRO: Dan is no longer in the meeting. But I will note that he did say he would not block Stage 1.
GB: In that case, I would like to ask for Stage 1.
NRO: Stage 1 is about the problem not the solution, anyway.
CDA: As MM said, it is not the strongest signal to actually land something in the language. Do we have consensus for Stage 1? I think you had support from JWK, KKL, and MM. Any other voices of support, or does anyone object to advancing to Stage 1? Hearing nothing and seeing nothing, you have Stage 1. Congratulations. We are a little bit past time. Do you want to dictate a key points summary for the notes?
- Presented a number of use cases where synchronous access to modules and their execution could be valuable
- While there were some reservations over exact semantics, there was overall interest from the committee in exploring the problem space under a Stage 1 process
- Stage 1 was requested and obtained
Presented a number of use cases where synchronous access to modules and their execution could be valuable and would like to explore the problem space of these under a Stage 1 process. There were reservations about the import sync design, but we are going to explore the solution space further.
Presenter: Guy Bedford (GB)
GB: So in the last meeting, we presented an update on the source phase imports proposal. I will go through a very quick recap of where the proposal is today. This is a follow-on to the import source phase syntax proposal, which defined an abstract module source representation for host-defined source phases but did not provide a module source for JavaScript modules. So this proposal extends the previous source phase proposal to define in ECMA-262 a representation for a JS module source that represents a JavaScript source text module, and that also forms the primitive for module expressions and module declarations.
GB: The feature is needed in order to fulfill the primitives required by module declarations and expressions, dynamic import of sources, and host postMessage, as module harmony requirements. We are motivating this proposal on the new Worker() construction use case. So the motivating use case, the one the spec will immediately be able to satisfy, is the ability to instantiate a worker directly from a source phase import. This provides tooling benefits and ergonomic benefits for users, and enables portable worker instantiation across platforms. For the module expressions use cases, we are going to be supporting taking module expressions and posting them to other environments: these are object values that go into dynamic import and support serialization and deserialization. The other update we have from the last meeting is that we formerly had syntax analysis functions, the imports and exports functions and a top-level-await property; they were on AbstractModuleSource and not on ModuleSource. These have since been removed, because they were a secondary use case of the proposal and not part of the primary motivation. To be clear, this still remains a future goal; they were just not suitable in this position. Instead we focus on the module source primitive for the specification, and these will likely come back in virtualization proposals in the future.
GB: So when we got Stage 2, we identified certain questions we would need to answer before we could seek stage advancement, these four questions. The big one for worker instantiation is: can we actually do this across the different specifications? The source phase has implications in WebAssembly and HTML, and there is collaboration that has to happen between standards. Can we do that? Do those behaviors work across the specifications? We also identified early on that this module source, as specified in the source phase proposal, implies that you would have an immutable source record backing it, generalizing the concept of a module, which in turn requires generalizing the concept of the key to align with this. I got my numbers out of order here: I previously had number 4 higher up and switched it around. The concept of a compiled module record is number 4, and number 2 is whether the concept of generalized keying can work with that, thinking about the problem of keying and whether there should be some kind of compiled backing record. So number 2 is keying, number 4 is spec refactoring, and number 3 is: how does dynamic import behave for module sources across different contexts, including across compartments, across realms, and through serialization? These were all individually big problems for us to investigate, and we spent a lot of time in the module harmony meetings working through these requirements. So I will give an update on each of these. For the cross-specification behaviors, we presented at the HTML WHATNOT meeting on the 10th of October, explaining that this proposal was at Stage 2 in TC39 and specifies this new source object; because source phase imports have already been merged into HTML, there was awareness of the source phase. We presented this new Worker use case, its semantics, and the transfer semantics involved. There was genuine interest in the proposal, and no negative concerns were raised. It was not an explicit signal of intent or interest, but it was certainly a very positive experience, if that makes sense. So, based on that, I put together a very draft HTML PR to work through some of the initial semantics and prove out the cross-spec behaviors, and we worked through this. There are still some outstanding questions that we might well defer initially: we might say that SharedWorker and worklets are unsupported, and we will probably default to strict settings for the cross-origin instantiation, COOP integration, and CSP integration. And then there is another question on the HTML side about setting import maps for workers, which comes up with resolution and the idea that there is a rough isomorphism for modules in different agents, which only works if you have the same resolution. One of the things we are looking at there is import maps having good defaults in worker instantiation, so that this worker instantiation would clone the import map of the parent context to give a best-effort match of resolution across contexts.
GB: So this is the draft; there is no official HTML PR right now. As a Stage 2 specification, we would like to seek Stage 2.7 to be able to put up the HTML PR and move that into a spec and implementation process. In addition, I presented a variation of these slides to the WebAssembly CG yesterday and gave another update on the implications for the WebAssembly integration. Again, the overall feedback was interest, and no negative concerns were raised. The second investigation was module keying.
GB: So I want to go through the semantics of how module keying works; it is kind of the key semantic when you support dynamic import of these module sources. How does this keying model work? This is something we spent a significant amount of discussion time exploring, and something we gave an update on at the last meeting. The semantics we converged on: here is an example of the module registry on the left. There is the key and the instance, and note that the source is part of the key. The key consists of the URL and attributes, and also the actual module source aspect, the compiled source text, and the instance is the thing you look up against those. So what happens when you import a source? If there is not an existing entry in the registry, the source carries both the source and its underlying key, which is the URL and attributes. So when you import a source, it gets injected into the registry with that key and source, gets instantiated against that key, and you get back that instance. If I later import the string key with the matching attributes, I will also get back that same instance corresponding to that same source. What happens if I import a source that is a different compiled source? Say you transferred it from another agent, and there was a file change in the meantime, so you had different responses on the network. That source from the other agent has the same URL key and attributes as source C, but it is a different module source. This is one of the primary requirements we identified for importing sources: the module keying behavior is that if you import a source, you should always get an instance of the source that you imported. We discussed lots of variations here, at the last meeting as well; this is the semantic that we feel is crucial to maintain for this model to make sense.
So what happens when that URL is already in the registry with a different source? We add another entry into the registry against the new source and create a new instance for it: you get a new instance for the new source. So you get registry sharing only insofar as both the URL key and the source match, the two aspects of the source key. And then the other case is when you have an eval-like module. Think of evaluating a string containing a module expression, or, in WebAssembly, calling `WebAssembly.compile` on some random bytes. These are not strongly linked to an original source URL key: they have a base URL, but the source you have came from its module constructor. When you construct a module from sources, they are eval sources; they just have a unique ID. Such a module has a source and the unique ID, and when you import it, the URL key aspect is not a full URL key, because there is just a base URL; in this case the key is actually this unique eval key combined with the source. So if you structured clone these things, they do re-instance, because that key gets regenerated.
GB: To summarize, the primary module key consists of the URL key and attributes, or the unique eval ID for unrooted modules. When we extend this model to module declarations and expressions, their key is parent-relative: basically the parent plus an offset, or something like that. In addition, there is the secondary key, which is the module source; it contains the exact immutable source contents. We need to be able to do comparisons of module sources, so we define a new concrete method, "module sources equal", which can compare the module sources of two module records. We distinguish sources that are rooted to a key, the source phase records, from those that are not, the eval-ish things with the unique eval ID. And as I mentioned, we define equality because you could have the case where we loaded two modules that have different sources with the same underlying URL key, and we need to be able to detect that and add another entry in the registry. If they have the same keys, they coalesce; if they have separate keys, they are separate entries in the registry. So that is module keying.
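The dual-key model described above can be simulated in a few lines of ordinary JavaScript (purely an illustration of the described semantics, not spec text; all names here are invented):

```javascript
// Toy registry: the primary key is the URL (attributes omitted here),
// the secondary key is the exact module source. Importing a source with a
// matching URL but different source text creates a second registry entry.
class ToyRegistry {
  constructor() {
    this.entries = []; // { urlKey, source, instance }
  }
  importSource(urlKey, source) {
    let entry = this.entries.find(
      (e) => e.urlKey === urlKey && e.source === source
    );
    if (!entry) {
      // New (urlKey, source) pair: instantiate and record it.
      entry = { urlKey, source, instance: { for: source } };
      this.entries.push(entry);
    }
    return entry.instance;
  }
  importString(urlKey) {
    // A string import matches on the primary (URL) key alone.
    const entry = this.entries.find((e) => e.urlKey === urlKey);
    return entry ? entry.instance : undefined;
  }
}

const registry = new ToyRegistry();
const a1 = registry.importSource("https://site/app.js", "export const v = 1");
const a2 = registry.importSource("https://site/app.js", "export const v = 1");
const b = registry.importSource("https://site/app.js", "export const v = 2");

console.log(a1 === a2); // true: same URL key, same source
console.log(a1 === b);  // false: same URL key, changed source
console.log(registry.importString("https://site/app.js") === a1); // true
```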
GB: The next investigation is what happens when you move module sources between agents. If you have different types of module sources and you transfer them between agents and dynamically import them, what behaviors do we get? This directly follows from the keying semantics described in the previous section. Here I have three types of modules: a module source that’s rooted to its absolute URL key, a local module declaration that’s contained in a parent module, and an eval module created by eval (which could also be created by the module constructor). When I postMessage across, I send two copies of every module, so I have two variations of each, serialized and deserialized twice. Because we do the serialization and deserialization twice, the module source object itself is unique for each structured clone operation; that’s not the level at which identity exists. Instead, when you import these objects, that’s where the keying identity comes into play. So the rooted source module carries its URL key and source text, and when posted twice, the URL key and source text will match for dynamic imports, so we get the same instance and namespace. Similarly, having done that, even though this module source was not previously present in the registry, then just as with the previous keying demonstration, if I import the URL key as a string, I’m going to get the same source. We maintain identity for module expressions and module declarations based on the parent, equivalently, provided the parent is itself rooted. The eval-ish modules, on the other hand: every time you transfer them, that eval key effectively gets refreshed — it is regenerated on every structured clone. There’s no concept of a global key; it’s all just serialization. So they aren’t equal. These are the proposed semantics, and this is what is written up in the spec and host invariants.
To explain the structured clone behavior: you get the same behaviors there. The module import and the string import will give the same instance even though the objects aren’t the same. You have module declaration identity, and the eval-ish modules get a fresh eval ID created on serialization and deserialization, so they become unique instances.
GB: The implication for WebAssembly is that it also gets the same behaviors. We don’t have module declarations or module expressions for WASM today — the module-linking proposal that fed into the component model does support nested modules, so you could have something similar there — but in view of that not existing yet, we can describe this in terms of the source phase for WASM: WebAssembly.compile
is eval-ish. When you post these things, you get the same behavior: source modules for WebAssembly match the canonical instance, and referring to them with string imports also matches the cross-agent instance. One of the hard things here is that you can already compile WebAssembly modules and post them between agents today; we have to be compatible with that semantic as well, which we are. And if you have two agents that have different sources — say in one agent a module URL key had the source foo = bar, and in the other agent that same URL happened to get a different source — and you post them both into agent 3: if you import the one from agent 1 first, foo = bar becomes your instance under that source, and it will have equality. But we don’t get coalescing across the differing sources, and the second source module gets a different instance. That follows from the core semantic: you get the source you import, not a different source. So that’s the core principle for import of a source — it must provide canonical instances for the source provided. We updated the spec to allow an equality operation over sources via the module source record and the ModuleSourcesEqual concrete method, and updated import to run through the HostLoadImportedModule machinery so it can perform registry injection; when a record already exists, we coalesce on equality, and an import of a source must return an instance of that same source — an extension of the existing invariants such that the same instance for a given source must be returned every time. If you transfer a module source out of an iframe, so that you have a module source from a different realm, today this will throw an error. It’s a one-line specification; we weren’t sure if we should support it. This is purely a technical, editorial question rather than an architectural one — we can remove those lines or keep them. It seemed more conservative to add them initially.
We could always remove them, making what is an error not an error. So this is something that could be discussed further as well, but it seemed better to err on the side of caution pending further discussion. The last investigation for Stage 2 was the refactoring of the source record. Today, while we talk about sources and instances, everything is just a ModuleRecord. We talk about a module importing a source, the registry being keyed by the source, matching to the instance, and creating an instance against that key, but in reality everything is just the ModuleRecord. In the registry you have ModuleRecords against the URL keys, and when you import a module source, it just points to its ModuleRecord. So the question here is: should there be a refactoring to split up the source and instance? Should we be doing that? What happens when you import a source that points to a ModuleRecord? Well, we don’t inject the instance that came with it; we inject the source, and the instance gets injected because it effectively already existed in the registry. The ModuleRecord already represents the registry entry: if you have the ModuleRecord, you already have the registry entry. If you have only the source object, you have the constraint that you must not rely on the instance data — that’s the constraint on the source data. So, for example — sorry, here, this should already be in the registry — if you import a ModuleRecord that happens to have another instance on it, you’re going to get the canonical registry instance for that source, not the one that happens to be on the ModuleRecord. So in the current spec design, we do actually specify these almost ghost instances that are unused, where you’ll still just get that instance 3.
So every time you structured clone a module source, you create a new ModuleRecord that has this sort of floating instance, but you still converge on the registry instance. This is the question of spec reality versus spec fiction, and an important part of the discussion. The argument is that we maintain equivalence with the spec fiction because the import of a source always yields the same canonical registry instance for that key and source. The only way to obtain an instance is through canonicalization; only the canonical instances are reachable, and the ghost instances are fully inaccessible. That’s an invariant that we maintain. This applies to ModuleRecords and Abstract Module Records: if we split them up, we would split them all down the middle. But because we don’t have multi-instancing today, and there’s only one canonical instance per source identity, we can maintain the invariants on the current ModuleRecords to specify the necessary behaviors. Only when we get to multi-instancing, or module instance primitives with compartments, do we need to start separating these things.
GB: Since the key model is always consistent with the source–instance separation, the argument that we are making is that right now, today, it would be an increase in spec complexity to make this refactoring. So, yeah, those are our Stage 2 updates. For Stage 2.7, we have reviews from Nicolò and Chris, and we also heard from the editors; in that review process, some things came up. Kevin had a –
GB: So KG brought up a good point about a possible refactoring of GetModuleSource. Initially, the previous proposal — source phase imports — only supported WebAssembly source imports. We weren’t defining WebAssembly ModuleRecords in ECMA-262, so we used a Get concrete method to allow hosts to define their ModuleSource. But now, with ESM phase imports, we do this with internal slots, and we could ensure, with a spec note that hosts must maintain, that it’s always the same object. Now that we have the field defined, we could actually go back to the source phase imports spec and upstream this new ModuleSource internal slot as an alternative to the concrete method. That would basically mean eagerly populating the ModuleSource JavaScript objects for all ModuleRecords, even ones whose source imports are never used — which we would then expect hosts not to actually do, for performance: they shouldn’t allocate objects that aren’t used, since maybe less than 10% of modules would ever expose their sources, so we would expect them to do it lazily. But it might be simpler spec-wise to just define it as an object field. So that’s something I didn’t want to change upstream in source phase imports in this presentation, but something to continue, to determine whether it’s a suitable refactoring.
GB: So I want to take a break there. And open up to discussion on the design of the proposal and any questions?
JHD: Yeah. You mentioned something about landing the HTML PR — why is Stage 2.7 a blocker? I put in an HTML PR for Error.isError
at Stage 2; I marked it as a draft.
GB: That's great to hear. Yeah, I think it would help a lot. It isn’t just HTML but also WebAssembly as an integration, and spec and implementation work do go naturally together. I also think it’s worth asking whether reviews on HTML are generally reviews to land and implement a feature, or reviews of an implementation. I feel that if we want to see this feature shipping early next year, so that we can start to move forward with module declarations and module expressions late next year, then obtaining Stage 2.7 now instead of in February will allow us to see module expressions and declarations by 2026. And, you know, there are still two stages left for those as well. So I think it’s interesting to hear what the requirements for Stage 2.7 are, in terms of what the standards processes are for Stage 2.7 in this context, and I think that’s a really interesting discussion. Maybe we can move more of that to the last discussion topic on this.
DM: Yeah, I am happy to postpone this to another time as well. But similar to what Jordan was saying, I am wondering — there’s an ambiguity about what stage we want to resolve cross-specification issues at. After the ShadowRealm discussion earlier this week, it sounds like we want this resolved before Stage 3, which means Stage 2.7 is a perfectly fine time to do that. It would be nice, as a committee, to make that part of our process so that we can remove this ambiguity in the future.
SYG: I prefer, for proposals where the majority of the proposal depends on another spec like HTML, that the PRs there be at a more equivalent stage of advancement, moving in lock step. I don’t like the pattern of wanting to advance to a more mature stage in order to convince the other body to, like, look at it for real. I think it’s fine to tell the other body that their interest in it should be independently derived, and that that will feed back into 2.7 or 3. I don’t see why any particular standards body needs to move ahead of the other one. If HTML doesn’t have interest, that should directly feed back into stage advancement considerations here.
GB: Just to follow up on that, I have written an HTML spec. Out of respect for the HTML authors, I did not post it up, because I didn’t personally feel it was appropriate — HTML does not have a stage process like TC39 does.
SYG: I think it’s fine for you to say that, like, there’s no more concerns on the TC39 side, aside from HTML folks being okay with this. And if that is—if HTML says, there’s no concerns from our side, aside from TC39 being okay with this, then we are both fine to advance.
GB: that’s not the situation we are in here.
SYG: I see. Okay. That sounds fine here.
GB: so HTML did mention in the whatnot meeting that there could be a concept of an explicit signal of intent from HTML. And that there could be some process around this in the WHATWG meta issue. That could be something that TC39 could explore in this space for future proposals. We did not obtain that official intent because it’s never been done before. But it’s worth mentioning in this context.
SYG: yeah. For the future, that would be a clear signal.
NRO: Yeah. Well, it’s already been said, but the problem is that asking web folks to review an integration proposal at Stage 2 usually does not work, because proposals can still significantly change at Stage 2, so they usually prefer to wait. I was not aware of this official-signal idea in any formal way, but we should really work out something like that for some of our proposals.
MM: Okay. So I have a minor question on the slides as presented so far. But first, an orientation question: the discussion we are having right now is an intermediate step; is that correct? We are still going to have a chance to deal with—
GB: I will follow up with a compartments deep dive and then a process discussion before the advancement.
MM: great. I will postpone all of my major issues until then
GB: Great. KKL, could I ask the same of you as well?
MM: Okay. Just the minor issue — which, you know, if you want to postpone also till then, that might be appropriate –
GB: If it’s about the design as presented so far, this is the time for that design discussion.
MM: Okay. So — you talked about coalescing, and that also was a phrase that Kris Kowal used in our private discussion last evening. In both cases, it confuses me. Maybe it’s just a terminology issue. But coalescing, to me, sounds like there are two things that already exist separately and are then made to be the same. And from everything I understood both last night and today, we are not talking about coalescing, if that’s what coalescing means; we are talking about using key information to look up an entry in a table and find an existing value, rather than creating a new value. There were never two separate values that are then retroactively made into one.
GB: Okay. I would update this slide to demonstrate coalescing, but maybe I can just make some edits here. In this case, you have two separate agents that have a ModuleSource for the same URL, but with different underlying contents. When I transfer the first module, I get an instance of that source. When I transfer the second one, I get an instance of the second source, and in this case there is no coalescing. If we instead had the same SourceText — if both contained the SourceText foo = bar — then the SourceText equality is, as you say, part of the key, so you would get the same instance, and this is what we mean by coalescing. Even though they are completely separate objects, structured cloned — serialized and deserialized — their identity coalesces. Strictly speaking, it is just a key lookup. But I have been using the term coalescing for the source because the source is a secondary key, not a primary key. So it’s secondary-key coalescing.
MM: what do you mean, when you say secondary key? I think maybe I still don’t understand that. The pair together is the key.
GB: yes. From a lookup perspective, you would look up the string key, and then you would check if the canonical source matches your canonical source ID. It’s like a primary and a secondary key.
MM: is it –
GB: maybe it’s a terminology thing. You say the lookup is for that key
MM: okay. As long as it’s consistent with saying, lookup is for that key, the issue of how do you break up the overall key lookup seems like an implementation concern, not a semantic one
GB: That might be a case of spec fiction versus spec reality. The spec model is that you are effectively looking up the compound key. But because one part of the key is defined in HTML and the other part is defined in ECMA-262, it does end up being a two-part process.
MM: I see. Okay. That was clarifying. I think I can postpone everything else I am concerned about.
GB: let’s follow up in the—after the compartments discussion. Were there any other questions on the queue?
KKL: I just wanted to throw in that I propose the word coalesce might be the source of the confusion. I think, yeah. A way to describe this is that in transferring a ModuleSource from one agent to the other, the identities of the ModuleSource objects diverge, and when you import them, the identity of the corresponding ModuleInstance converges. Is that a good way to describe it?
MM: not to me. It makes me even more confused.
KKL: okay. Maybe—better luck next time then. Pray continue
GB: I am sure there will be more on that topic in the compartments discussion, so we can have a more in-depth discussion shortly.
GB: All right. So thank you, Chris, for the review, and thank you for getting it in swiftly; I appreciate you having taken the time yesterday. In that review, what came up was that there are a lot of compartments interactions here that have not been fleshed out by this proposal, and so what I am going to attempt here is a rough working-through of what those compartments interactions might look like, for the sake of the compartments folks, so they can feel comfortable with the proposal.
GB: Please do interject if you want to clarify, or if I am going off track from compartments as it works or is intended to work. For folks not actively interested in compartments, this will be far too much detail, so my apologies in advance.
GB: Consider the compartments model today, before the ability to import ModuleSources. Compartments moved to a model of module hooks and module instantiation, and that is compatible with source phase imports, insofar as a source can be instantiated with import hooks: you construct an instance with an import hook, and the instance pulls its resolutions from that hook. By instantiating instances that have hooks, the hooks model lets you virtualize the module system. In this example, if I want b.js to resolve to a specific instance, I can implement that hook, and dynamic import is used as the compartment executor to execute the virtualization. So in this example, we have a static import of the local b.js and a dynamic import of it. Because module resolution is reified per ModuleInstance, the resolution hook only runs once: it runs once for the static import, and the dynamic import gets the same thing. The idempotence property of modules — that the import of the same specifier should return the same thing — is maintained through the design.
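The per-instance hook memoization described above can be sketched as a toy model. This is an assumed shape for illustration only, not the compartments API: `makeInstance` and its `import` method are hypothetical names.

```javascript
// Toy sketch of per-instance import-hook memoization: resolution for a given
// specifier runs once per instance, so a static import and a later dynamic
// import of the same specifier yield the same module instance.
let hookCalls = 0;

function makeInstance(importHook) {
  const resolved = new Map(); // specifier -> instance, memoized per instance
  return {
    import(specifier) {
      if (!resolved.has(specifier)) {
        resolved.set(specifier, importHook(specifier));
      }
      return resolved.get(specifier);
    },
  };
}

const bInstance = { name: "b.js namespace" };
const parent = makeInstance((specifier) => {
  hookCalls++;
  if (specifier === "./b.js") return bInstance;
  throw new Error(`unresolved: ${specifier}`);
});

const first = parent.import("./b.js");  // stands in for the static import
const second = parent.import("./b.js"); // stands in for the dynamic import
console.log(first === second); // true — idempotent per instance
console.log(hookCalls);        // 1 — the hook ran only once
```

Note this only models local idempotence: nothing stops the hook from returning different instances for "./b.js" and "././b.js", which is the global-idempotence caveat discussed next.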
GB: It’s worth noting that we –
MM: I’m sorry, before you advance, can you go back to that slide and just stay on it for just a second. So I can observe something.
KKL: Mark, the parent instance argument is irrelevant. It doesn’t exist in the proposal, but it also isn’t germane.
GB: My apologies if my imagined version of compartments differs from the actual compartments proposal. I hope what I have written adapts to the compartment model.
KKL: I think so.
MM: okay. Okay. I am fine. Go ahead.
GB: So what is proposed is a local idempotence, not a global idempotence. Because the URL key is not defined in ECMA-262, we have no way of establishing key equality, and so it’s possible to break global key idempotence easily: you could return a different instance for a “././b.js” versus a “./b.js”, and you would actually get a different instance. So you can violate global key idempotence. It’s worth noting this is an edge case of the model, and it is quite similar to the one that dynamic import of source imports also exposes.
GB: So what happens when we introduce the ability to import a ModuleSource? We talk about looking up in the registry the canonical instance for a source key, but sources can exist across compartments — you can pass sources around — so how do we define the canonical instance? The only way to do this is to introduce a compartment key that is associated with the instance doing the import.
GB: So here is the concept of multi-compartment registries, where you have two separate registries with canonical instances. The instance has a home reference on it: C1 for the first compartment registry and C2 for the second. When you import a source — in the spec fiction here — you put that source inside the registry and create a canonical instance in that registry for that source, and you get back the canonical instance of that source in that registry. In the other compartment, you get that compartment’s version of the canonical instance. Sources have canonical instances per compartment.
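The per-compartment registry idea above can be sketched as follows. The `Compartment` class and `importSource` method here are toy stand-ins for discussion, not the compartments proposal API.

```javascript
// Toy model of per-compartment canonical instancing: importing the same
// source in two compartments yields one canonical instance per compartment,
// each carrying a "home" reference to its compartment.
class Compartment {
  constructor(name) {
    this.name = name;
    this.registry = new Map(); // source -> canonical instance (this compartment)
  }
  importSource(source) {
    let instance = this.registry.get(source);
    if (!instance) {
      instance = { source, compartment: this }; // home compartment reference
      this.registry.set(source, instance);
    }
    return instance;
  }
}

const source = { text: "export const x = 1;" }; // a shared source object
const c1 = new Compartment("C1");
const c2 = new Compartment("C2");

const i1a = c1.importSource(source);
const i1b = c1.importSource(source); // canonical within C1
const i2 = c2.importSource(source);  // separate canonical within C2

console.log(i1a === i1b);           // true — one canonical instance per compartment
console.log(i1a === i2);            // false — different compartment, different instance
console.log(i2.compartment.name);   // "C2" — instances stay in their home compartment
```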
GB: To illustrate that, what we would have to do is create a ModuleInstance with some kind of compartment key. In this case, we go back to our compartment constructor that defines the hooks, and pass that key into the ModuleInstance constructor. When we do that, the instances are associated with the compartments and able to maintain the relation that when you call this importB function, which imports a source — that is, its own source — it gets back the same canonical instance for that compartment. So we are able to create by design the spec relation that the import of a source for a key is the canonical instance for that key, which will be the same as the one you would import normally through the module system. So we maintain the new invariants introduced by the source phase imports through the compartment key. There are some questions about how exactly canonical instances should be defined for compartments, and this is something that could be explored more in the compartment design process. But to widen the field here: canonical instances could be set once per registry. It could be part of the constructor — does the constructor immediately set the canonical instance, via some kind of canonical: true option, meaning this is going to be a canonical instance in the registry, versus a non-canonical, separate instance that exists outside the normal canonicalization process? Or it could be an operation directly on the compartment, where you can create a source–instance relationship, and if you do it twice for the same source, it throws because it’s already been done.
MM: I’m sorry, I don’t understand non-canonical. The model is that the things we’re talking about have a key that’s looked up by the compound-key equality you talked about, and the per-compartment registry, if you will, has a single value per key, which says that there is a per-compartment canonical instance for that key. First of all, does that correspond to what you call canonical, and second, what is the use case for non-canonical? Why is the concept there?
GB: When you construct a ModuleInstance, you can have multiple instances for the same source, which are non-canonical. The canonical one says: when I do an import of this source, this is the one I usually want. But you can create other module instances against the same source that have different resolution behaviors within the same compartment. If we want to allow multi-instancing within the same compartment, we need to distinguish canonical versus non-canonical, and the distinguisher is whether it’s in the registry. There could be a compartments model that doesn’t allow multi-instancing, if we deprecate the ModuleInstance constructor.
MM: I see. I see. It’s the coexistence of the ModuleInstance constructor and compartments that creates the question. Yes. Okay. Good. I understand. Thank you.
GB: Great. Or there could be a special canonical hook: if you bring up a source in a compartment that the compartment has never seen before, a canonical hook could run against the source, which, based on the invariant, should return an instance that is an instance of that source, or throw. Alternatively, make it automatic — which is what you would expect: when you do the first load of an instance for a source, it creates the canonical instance automatically. So there is a little bit of design space there. Another point worth noting: the spec reality, where we have this ModuleRecord that is both the source and an instance, already solves the canonicalization, because the ghost instance can just be adopted if the compartment matches, and if not, we create a new instance. So you need a compartment field on the ModuleRecord that you would carefully check when doing this adoption.
GB: So what does that look like in practice? Say we created a compartment — I am calling this the ghost record design, following the last bullet point, where canonicalization happens automatically. When you import the instance, that instance is now in the registry, because it was the first instance seen for the source, and it’s now canonical for dynamic import. Now, if you have this importExternal function, which takes external sources, and we pass in source B — sorry, this is source A, apologies — so if it takes an external source and we pass in source B, it adopts the source and creates a new canonical instance for it. If we later do an import of the string key, we get its canonical instance, which will be that B instance. If we pass in source A, it’s the same instance for A. We have single canonical instancing here, so there are effectively no separate instances in this model. Furthermore, the import hook would not necessarily need to be called, because when you import B, there is potentially already an entry at the key for that ModuleSource. So there are some questions about resolve and import, and that aligns with how much to think about the local versus global invariants; there are still some questions there. But overall, the model seems to support these canonicalization features.
MM: Can I repeat that back in my own words? If you’re importing from a source, then the full specifier plus the SourceText itself is the key. But then, if in that context you import the full specifier as a string, there is no SourceText yet to compare. So the two design choices would be: you go out to the network and fetch the source in order to have the SourceText to complete the key lookup, which is unpleasant; or — and this is where your primary-key notion becomes a relevant part of the definition — you say, okay, I have already got the primary key, the full specifier, so rather than go to the network, I am going to assume that the SourceText I would get is the one I have already gotten, and proceed under that assumption. Is that a correct restatement of what’s implied?
GB: That's the model. The details, and the hook design, are the open question. The way this is presented here, I don’t think it’s clear that the import hook would never necessarily be called, because you want to normalize the specifier, and you could have alternative normalizations. Effectively, if there were a way to pass a normalized URL as the full URL, and you say that’s the thing, then that’s this model. Because we don’t have a model for URL resolution — for the resolution of the non-source part of the key — this statement is not necessarily true in this framing. Sorry, I should correct this slide; it was late. But yes, the model you described is correct, and there is definitely some design space there.
MM: okay. Good. I think I understand.
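MM's restatement can be sketched as a toy: once a source has been imported under a URL key, a later string import of that URL reuses the registry entry rather than fetching. This is a simplified single-source-per-URL model with illustrative names (`importSource`, `importString`), not the proposal API.

```javascript
// Toy sketch: string import assumes the SourceText already held under the
// primary (URL) key, rather than going back to the network.
let fetches = 0;
const registry = new Map(); // urlKey -> { sourceText, instance }

function importSource(urlKey, sourceText) {
  let entry = registry.get(urlKey);
  if (!entry || entry.sourceText !== sourceText) {
    entry = { sourceText, instance: { sourceText } };
    registry.set(urlKey, entry);
  }
  return entry.instance;
}

function importString(urlKey, fetchSource) {
  let entry = registry.get(urlKey);
  if (!entry) {
    // Only on a registry miss do we go to the network.
    fetches++;
    const sourceText = fetchSource(urlKey);
    entry = { sourceText, instance: { sourceText } };
    registry.set(urlKey, entry);
  }
  return entry.instance;
}

const viaSource = importSource("https://example.com/a.js", "export {};");
const viaString = importString("https://example.com/a.js", () => {
  throw new Error("should not fetch: primary key already present");
});
console.log(viaSource === viaString); // true — primary key matched, no fetch
console.log(fetches);                 // 0
```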
GB: Canonical instances map one-to-one with their compartment: the instance is associated with the compartment, and the compartment keys the instance. If you go to another compartment and import an instance, it will drive the dependencies and give you the instance fully loaded. If you were to go inside another compartment and import an instance that belongs to a different compartment, you are not going to start populating that compartment’s registry. Obviously, in this design you could throw and say this is not allowed — you should only import things in your own compartment. If you do want cross-compartment loading, that could be an option, where it will drive the other compartment’s loading to completion. The point being that only sources are shared between compartments; instances don’t transfer between compartments. They stay in their home compartment, associated with their own compartment.
GB: To try to summarize: we do require a compartment identity for the source-keyed canonical instancing model. There are questions of spec fiction versus spec reality that need to be carefully considered, but both worlds are very much intact, and the combined ModuleRecord — source and instance together — actually helps us write the spec text in most cases, apart from the ghost instances on cloned sources, which are never accessible in the non-multi-instancing world. As long as we maintain the invariants needed for the separation, keeping the spec as simple as it can be until it needs to be more complex is better, so that we don’t try to refactor before we have all the design constraints in place. I have tried my best to explore the interactions as much as we could, but there’s some design work still to go. Overall, there’s clearly less work for compartments with the ModuleSource defined and all the transfer and import semantics worked out.
GB: so yeah. I am happy to have a discussion on compartments at this point, if you would like to. Kris Kowal?
NRO: Yeah. So, a question about the table that you showed, with the compartment key. We don’t have compartments today, but we have realms, and they share similarities in that they are separate contexts that can share objects, as with web frames. Even though we don’t have compartments today, do we need to add the realm as a third entry of the compound key? I guess also for workers — though with compartments it’s a more granular division.
GB: I have also imagined that the—since we need a compartment field, on the ModuleInstance, that that would point to its realm. And so you could maintain that anyway. But I haven’t thought about that.
MM: I agree.
NRO: Okay. I think the answer to me is, yes. Because the map is already per realm. But I am not 100% sure about it.
KKL: Yeah. For one, thank you for framing the conversation in terms of compartments; it’s been helpful for those of us who have invested in them. And apologies to those not invested in compartments — I wanted to draw you in anyway. If we go back to the slide that illustrates the ModuleInstance constructor, with the compartment property in its handler: Guy is using compartment as a placeholder word, but this is more fundamental than compartment. It is an abstraction that lives beneath the layer of compartments and that I think is well motivated for other reasons. Specifically, Nicolò has pointed out that in the shared structs proposal there would be an intersection between shared structs and multi-instantiation of modules and compartments and such. If you had multiple instances of the same source that contained shared structs, there is a relationship between the instance and which set of prototypes of those shared structs you get access to, and this already exists within multiple realms of the same agent; all that compartments add is another level of indirection between the execution context and its associated realm, effectively. So the key in this registry would move from being keyed on the realm to what I am going to call, for the purposes of this conversation, the cohort. That is to say, within a cohort, you are going to get a single registry of ModuleInstances and also a registry of shared struct prototypes, and these can be the same concept at this level. And again, apologies — there are a bunch of complications that Guy proposes that I do not think will survive to the final design.
I think that, in the end, the implication of the proposal Guy is advancing, ESM source phase imports, landing ahead of module harmony is, for one, that there will likely be a simplification: the Module constructor with its hooks, as you see in the ModuleInstance constructor here, will probably need to simplify down to just being an options bag on the ModuleSource constructor in a future proposal. That is because the model this proposal establishes is one where there is only a reified ModuleSource that directly addresses immutable source, and its hooks and semantics attach to the current realm through its association with a particular registry. And yeah, I think this simplifies in time. I wanted to make sure my fellow delegates are aware that that is an implication of this proposal advancing ahead of the fullness of module harmony.
NRO: So earlier, when Mark restated to Guy that we have two choices, either go to the network to check if the source is the same or not, when we define the canonical instance we actually don't have a choice between the two options. It is already the case that dynamic import will not go to the network. So that behavior is already settled.
GB: So if you had a ModuleInstance constructor, I guess the open question there is whether that instance has been injected into the registry to block further fetching, or whether the instance exists outside of the registry in a sense.
KKL: To follow up on that, I believe the implication is that the only way to have an entry in the registry is to dynamically import the thing.
GB: That's very much the model that we're working toward.
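The model KKL and GB agree on here, that only dynamic import populates the registry, can be sketched as a toy in-memory registry. All names (`FakeRegistry`, `dynamicImport`) are hypothetical stand-ins for the spec machinery, not real API.

```javascript
// Hypothetical sketch: dynamic import is the only operation that
// creates registry entries; instances made any other way stay outside it.
class FakeRegistry {
  #map = new Map(); // specifier -> canonical instance

  dynamicImport(specifier) {
    // Repeated imports of the same specifier return one canonical instance.
    if (!this.#map.has(specifier)) {
      this.#map.set(specifier, { specifier });
    }
    return this.#map.get(specifier);
  }

  has(specifier) {
    return this.#map.has(specifier);
  }
}

const registry = new FakeRegistry();
const a1 = registry.dynamicImport("./a.js");
const a2 = registry.dynamicImport("./a.js");
console.log(a1 === a2); // true: canonicalized by the registry

// An instance constructed directly never enters the registry.
const direct = { specifier: "./b.js" };
console.log(registry.has(direct.specifier)); // false: never dynamically imported
```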
MM: So, several things. First, some further questions about key equality. When you talked about the evaled module expression, something surprised me in what you were saying about transmission of the ModuleSource: that it loses its identity, that basically a new identity is regenerated on each deserialization. I can understand why that might arise as a constraint of serializing data, if you don't want to imagine that you can create unique unforgeable identities in data. But other than that, it seems to conflict with the goal, as I understood it from our private discussion last night, that the ModuleSource has a transmissible identity, where the key lookup equality is preserved across transmission. So if the same originating eval is transmitted to the same destination multiple times through multiple paths, once it has arrived, it is equal to itself. Is that still desired, and is it the constraints of deserialization that caused you to give up on that?
GB: You could in theory define a unique cross-agent keying and have some kind of relation like that. I think there are a lot of benefits to making sure we don't introduce new side tables, and so the most straightforward behavior was to just have structuredClone serialization and deserialization rekey evaluated sources. I would be interested to hear if there are other use cases for maintaining the key; it's not something I have heard of as a desirable property. But yeah, it was more a case of implementing the most reasonable design as opposed to trying to introduce a new type of side table for a new use case.
MM: I mean, the idea that it is a key locally, such that importing the same evaluated thing multiple times gives you back the same instance by key equality, but if you emit it through multiple paths and receive it from each path and use each as an import, you get different instances: that seems like a weird, incoherent intermediate case. If you want to regenerate the key on every transmission, you should just have it not be canonicalized in the first place, so that every time you import it, even locally, you get a unique instance.
GB: There are benefits for the local case, because when using a ModuleSource constructor in the same agent, or same compartment, there are benefits to being able to treat it as a normal cross-compartment key. It's only lost in transfer because it's a local key, not a cross-agent key. Early on we ruled out the idea of having key synchronization between agents, to remove a lot of complexity, so putting that back on the cards would have to be motivated carefully. It's not out of scope; it's something that could be considered. But it's not something that has been strongly motivated to date, and there are definitely some high bars for chasing a whole new type of synchronization.
MM: To restate in my own words, to see if we are in sync here: in the abstract it would be desirable to say that a ModuleSource has a transmissible identity that preserves key equality, but doing that cross-agent has complexity costs that are just not worth paying, so as an expedient matter we are not going to maintain key equality across agents for evaluated ModuleSources.
GB: Yeah. And to be clear, with evaluated ModuleSources on the web, at least, you would have to have an eval CSP policy enabled. In general the rooted sources, as we call them, are the much more recommended path, so that hosts control the security of sources.
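The identity behavior MM and GB settle on, local key equality that is not preserved across serialization, can be sketched as follows. This is a toy model under stated assumptions: `importSource` and `fakeTransfer` are hypothetical names, and `fakeTransfer` merely imitates the structuredClone property that the payload survives while object identity does not.

```javascript
// Hypothetical sketch: an evaluated source object is a local registry key,
// but each "deserialization" produces a fresh identity and thus a fresh instance.
const instances = new Map(); // source object identity -> instance

function importSource(source) {
  if (!instances.has(source)) instances.set(source, { from: source });
  return instances.get(source);
}

function fakeTransfer(source) {
  // Like structuredClone: the data round-trips, the identity is regenerated.
  return { text: source.text };
}

const source = { text: "export const x = 1;" };
console.log(importSource(source) === importSource(source)); // true: local key equality

const copyA = fakeTransfer(source);
const copyB = fakeTransfer(source);
console.log(importSource(copyA) === importSource(copyB)); // false: new identity per transfer
```

This is the "weird intermediate case" MM describes: canonical locally, regenerated across transmission, accepted here as an expedient to avoid cross-agent key synchronization.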
MM: Good, I understand that. I want to make an observation about the spec complexity. I am satisfied that, as far as I can tell, you have succeeded at specifying an observable semantics that is consistent with the spec refactoring that we are postponing. That was an issue that came up hard last night, and I am satisfied, and very glad, that you did that. The statement that the spec would be more complicated on the other side of the refactoring, I don't believe, and that's based on a previous exercise I did exploring what the refactoring would look like. But I do believe, supporting the same end conclusion, that the effort to do the refactoring is a complicated effort. It is not that the landing point on the other side is more complicated; it is that getting there is quite a lot of work. Postponing it as long as we want, while maintaining observable equivalence, is fine; I won't object to that. I wanted to register that I don't believe the resulting refactored spec would actually be more complicated. I think it's actually simpler.
NRO: I agree with everything that MM said. The refactoring makes everything much easier, I think, to read. But it’s a lot of work.
MM: So since you are asking for 2.7, I will state my position: I very much want to see this go forward to 2.7. You have successfully dealt with all of the things that were red flags to me yesterday, so congratulations. I am on the edge of approving it for 2.7, but I don't want to do that today, simply because of the size of the surface area of new issues to think about, and my uncertainty and fear that I am missing something. If I had had more time to think about the new issues raised by the changes since what I understood last night, my level of fear might be reduced to the point that I would approve today. But I just think we need to postpone, and as we discussed privately last night, we are going to continue to discuss this in the TG3 meeting, which meets weekly, between now and next plenary, and I expect to be fine with 2.7 as a result of those discussions.
GB: I am just going to quickly run through the last two slides and then make the formal request for Stage 2.7, as opposed to taking this as an immediate blocker, if you would be okay with that. I will jump to the very end.
MM: sure.
GB: So what we're looking to achieve in the next steps is: as soon as the import attributes specification lands, we will land the source phase PR, which this specification is based on. Source phase imports are now shipping in V8, soon to be implemented in Node.js and Deno, after which that proposal could seek Stage 4. To keep the module harmony momentum going, the goal is to have this proposal follow closely so we can unlock module expressions and module declarations next year. If we can achieve 2.7, the downstream HTML and Wasm specification updates can move forward, and we will come back for a Stage 3 request before landing the HTML PR. So the HTML PR would not land before we seek Stage 3, and we would not regress the Wasm integration either without first getting to Stage 3 at TC39 and having everything presented at both groups.
GB: To give a very brief overview of the spec: it's a very small amount of spec text on dynamic import and a couple of invariants on HostLoadImportedModule. So it's not a large surface area of change to the spec. But being able to achieve Stage 2.7 would allow us to move forward with further investment in the proposal. I would therefore like to formally request Stage 2.7.
MM: So thank you for those clarifications. I am still going to object, but if you get consensus for 2.7 right now, could we agree to a process where I reserve my approval but could approve before the next plenary? In which case, if we get conditional approval now, then at the point where I am comfortable approving, you can announce 2.7. Is that a conditional stage advancement that we could agree to?
CDA: that seems a little bit awkward.
MM: okay.
CDA: yeah. If you have blocking concerns now,
MM: It's simply my degree of uncertainty, and the fact that 2.7 is a green light to implementers to proceed to implement. Long experience on the committee says that once there are entrenched investments by implementers, if there's a mistake from my point of view that needs to be corrected, especially if the people who have invested in implementations don't particularly care about the consequences of that mistake, the friction in getting the mistake corrected is much higher once they have been given the green light to implement. So the time to correct those things is before 2.7.
CDA: okay. There’s a couple of comments on the queue. NRO?
NRO: Yeah. Just that if Mark's thinking more about this implies some tweak is needed, even if it's just some integration detail, it should probably be re-presented rather than advancing under the conditional approval. What I am saying is that the condition should be: this is fine only if it ends up with Mark saying, okay, everything is fine. If Mark requests tweaks to the whole picture, it should probably be brought back for clarity, and then you can assume a quick approval next time. But it should be presented with the tweaks.
MM: it makes sense to me
DE: I agree with what Nicolo said: if there are any changes, bring it back to committee for review. We have done lots and lots of these conditional advancements in the past, based on someone needing a bit more time for review, including with Mark in particular, but also with other reviewers. So I think it makes sense to do that here. We definitely need to work out all of the observable semantics before Stage 2.7; if we are not sure, we need to be, and this conditional is a way to do that. At the same time, I want to make a slight correction about whether this is a signal to implement. The reason we separated Stage 2.7 from Stage 3 is that we want there to be tests present, to save implementers' time. It's optional to implement after Stage 3, and implementation sometimes happens before 2.7 for prototypes, but I wouldn't consider Stage 2.7 to be the signal to implement. That's it. So I support conditional consensus on 2.7, conditional on the proposal staying as it is and Mark asynchronously signing off on it.
GB: I would be happy to engage in meetings on a conditional progression, under the understanding described by both Nicolo and Dan.
CDA: okay. do we have support for 2.7?
NRO: +1. If I can add, I was in the same boat as Mark; it took me a while to understand that the spec text matches the implementation model.
CDA: okay. Other voices of support for 2.7? I think Dan was a + 1, if I understand correctly.
CDA: aside from Mark’s concerns and for the review, do we have any voices of objection to advancing this to Stage 2.7 at this time? Any dissenting opinions are welcome as well, even if they are non-blocking. All right.
JHD: I would just ask that there be a specific issue where MM can comment when he has granted his approval, so that we all have a place to follow and be notified when the condition is met.
CDA: GB, would you create an issue in the proposal repo for the conditional 2.7 advancement? A home for those concerns and the follow-up approval from MM. That would be great.
CDA: okay. You have 2.7 conditional.
(earlier mid-summary):
GB: So to summarize: we provided a ModuleSource intrinsic. The spec text is complete and has all necessary reviews, with a possibility of editorial refactoring. We have investigated all of the Stage 2 concerns: cross-specification work, and defining keying based on the source record concepts, including identifying the necessary refactoring for the compartment specification. The semantics have been presented at both WHATNOT and the Wasm CG without any concerns raised.
GB: We presented the proposal semantics, including an update on the Stage 2 questions: cross-specification work, ModuleSource keying and equality, the behavior of dynamic import across different agents, and also compartment interactions and the refactoring implications for future ModuleSource records and compartments.
We obtained Stage 2.7 based on conditional approval from Mark, pending further interrogation of the semantics through meetings at TG3 before the next plenary.