
105th TC39 Meeting | 6th December 2024


Attendees:

Name Abbreviation Organization
Waldemar Horwat WH Invited Expert
Jesse Alama JMN Igalia
Istvan Sebestyen IS Ecma
Gus Caplan GCL Deno Land
Dmitry Makhnev DJM JetBrains
Andreu Botella ABO Igalia
Keith Miller KM Apple
Eemeli Aro EAO Mozilla
Richard Gibson RGN Agoric
Ron Buckton RBN Microsoft
Jirka Marsik JMK Oracle
Jack Works JWK Sujitech
Samina Husain SHN Ecma International
Daniel Minor DLM Mozilla

Vision for numeric types in ECMAScript

Presenter: Shane F. Carr (SFC)

SFC: Hello everyone. You can see my slides. So a little preface for this presentation: we’ve been going back and forth for a while now on different number-related proposals, and it concerns me that we haven’t taken a holistic view of how numbers work in ECMAScript and how we want them to work moving forward. We’ve been narrowing in on solving this little problem here and that little problem there. So my goal for this presentation is to have a discussion about how we want numbers to work in ECMAScript in general; that can give us a framework so that when we work on the other proposals, we can see how they fit in with the big picture. That is the goal of this presentation.

SFC: So here is what I have on the agenda. First I want to talk about what we currently have. Then I want to talk about problems that I’ve heard delegates wish to solve; in the process of making this presentation, I spoke with a number of other delegates and synthesized these into five unique problem spaces. The third item is possible ways to solve the problems, and the last one is opinions of delegates — not just Shane’s opinions, but opinions of several delegates.

SFC: Starting with background on what we currently have. We have two numeric types: Number and BigInt. Number has been around for a long time: it is approximately IEEE 64-bit floating point, and it does funny things with NaN and Infinities. I have this little line saying the domain is real numbers, to distinguish it from BigInt, where the domain is integers. One thing that’s different about BigInt is unlimited significant digits, whereas Number has only what fits in an IEEE 64-bit float; but BigInt only covers the domain of integers. Let’s talk a bit about Number.

SFC: So hopefully people are familiar with this. What is 0.1? 0.1 in memory is represented like this, as an IEEE 64-bit floating point number. The bits of 0.1 break down into the sign, exponent, and mantissa, and the full-precision value is actually 0.1 followed by a bunch of zeros; after you get past about 15 significant digits you get other digits, and it always ends with a 625 because it’s a base-2 number. So is 0.1 really 0.1? I think this is a question that has confused me and confused a lot of other people when I talk to them about it.
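This is easy to see in today's JavaScript: the shortest round-trip string for the double nearest 0.1 is "0.1", but asking for more digits reveals the underlying binary value.

```javascript
// The double nearest to 0.1 prints as "0.1" (its shortest round-trip
// decimal), but its full binary value carries many more digits:
String(0.1);       // "0.1"
(0.1).toFixed(20); // "0.10000000000000000555"
```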

SFC: So is 0.1 actually 0.1? Really interesting question. Because the IEEE floating point numbers are discrete points on the number line, right? And every particular value of an IEEE float can be represented in one of two ways: it has a binary representation in memory on one hand, but also a shortest round-trip decimal. There are a lot of algorithms — every engine ships an algorithm for computing the shortest round-trip decimal. There is a unique representation of 0.1 as an IEEE floating point number. So if you have those bits I showed on the previous page, that is 0.1. So, yes, it is. But it’s also not. It depends on how you interpret it. If you interpret it as decimal, it’s 0.1; if you interpret it as binary, it’s the other thing. That’s the important distinction to draw. When you do arithmetic, you always do it in binary space: arithmetic uses the binary representation of the number. This is why you get things like this: 0.1 plus 0.2 is not equal to 0.3, it’s equal to the binary floating point number that is one tick above 0.3, which I have here on the screen (0.30000000000000004). That’s how binary floats work, and that’s why they do what they do, right?
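The behavior described here is directly observable:

```javascript
// Arithmetic happens on the binary representations, so the sum lands one
// tick above 0.3, and the shortest round-trip string exposes it:
0.1 + 0.2 === 0.3;                 // false
0.1 + 0.2 === 0.30000000000000004; // true
String(0.1 + 0.2);                 // "0.30000000000000004"
```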

SFC: So with that little bit of background, I’m going to go into problems. But I’m going to open up the queue first to see if there’s anything on it. Doesn’t look like it, so I will keep going along and talk about the problem space. What I did is synthesize down to five core problems that I see in terms of things the language doesn’t currently do — issues that we would like to be able to solve. Problem 1 is arithmetic on decimal values. I synthesized this from the readme file of the decimal proposal to try to summarize what I see as the use case of that proposal. When you’re doing financial calculations, like calculating sales tax, you want that done in decimal space, not in floating point space. There are specific rules that have to apply, and those rules are based on arithmetic as you learned it in second grade, which is in decimal space, not floating point space. That’s something we don’t currently have the ability to do in the language: there is no built-in mechanism to compute 0.1 plus 0.2 equals 0.3. That’s one missing feature, arithmetic on decimal values. The second missing feature is representing the precision of numbers. There’s a thread I wrote on the decimal proposal repo explaining this idea. Depending on how a number is written, it may be spoken differently.

SFC: Therefore it affects how you internationalize it. You say “1 star” because that’s singular. But with “1.0 stars”, the zero at the end triggers the plural form, even in English — and it’s interesting that this shows up in English, which has fewer grammatical plural rules than languages such as Polish, Arabic, and Russian. The fact that it shows up in English means this is a very common, widespread problem. So why do we care about representing precision? For Intl, with these different ways of writing a number: when we format the number, we want to know what we’re formatting, and to decouple as much as possible the internationalization step from the representation step. I have a long post on GitHub if you want to dive more into this topic.
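The pluralization difference is observable with Intl.PluralRules today: passing number-formatting options makes the trailing zero visible to plural selection.

```javascript
// In English (per CLDR), "1" selects the "one" category, but "1.0" —
// one visible fraction digit — selects "other":
new Intl.PluralRules("en").select(1);                               // "one"
new Intl.PluralRules("en", { minimumFractionDigits: 1 }).select(1); // "other"
```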

SFC: Two is, we want to interop with systems that retain precision. Among IEEE decimal systems and others, most retain the trailing zeros. I have done the analysis on GitHub, looking at what languages like Java do; they retain the zeros, and to fully round trip we need that capability. The third is finance and scientific computing. Some other people posted on the issue noting that trailing zeros are important for exactly the financial calculations that the decimal proposal is aiming to solve. I make a note here that the IEEE reckoning of precision is primarily focused on the financial use case, and scientific precision could have different ways of being represented. And then four is possibly HTML input elements. So that’s problem space two: we want to represent the precision of numbers. There are a lot of use cases for this, and it’s not something that we should leave out.

SFC: The third problem is representing more significant digits. The Number type is limited to 15.95 decimal digits on average. That means 15 decimal digits is safe to assume; 15.95 is the average, which is enough for a lot of cases, but not every case. For example, large financial transactions — things on the order of bitcoins — could exceed that limit. Interoperability with decimal128 is also an issue here, because a system like Python or Java that uses decimal128 may have more than 15 significant digits, and you may want to interoperate with it. And the third is big data and scientific computation. From time to time when I’m training my machine learning models, I run into this issue where I have two weights that are very close to each other, I take the difference, and all of a sudden I’m down to three significant digits, which is not always helpful. There are definitely use cases in that area. So that’s problem space 3.
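The digit limit is easy to demonstrate:

```javascript
// The 16th significant digit of a Number is already unreliable — this odd
// integer literal silently rounds to the nearest representable double:
9007199254740993 === 9007199254740992;   // true
// BigInt keeps every digit, but only covers integers:
9007199254740993n === 9007199254740992n; // false
```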

SFC: Problem space 4 is unergonomic behavior. I could have put a few more examples on this slide, but we should have a numbers framework that just works. We want to make sure that programmers can avoid footguns like 0.1 + 0.2 — something that works for users and doesn’t invite the mistakes that are easy to make today.

SFC: Problem 5 is associating a dimension with a number. For example, we want to be able to capture not only the point on the number line, but also the unit being represented — for example, dollars or meters. Why do we need this? Because in Intl.MessageFormat, Intl.PluralRules, and so forth, this is something we want as part of the data model; it also feeds into the unit conversion part of the Measure proposal and avoids a certain class of programming errors. After my talk, EAO will go into more detail to justify this problem in case people are not convinced it needs to be solved in the language; EAO has an excellent slide deck in the next time slot covering the motivation behind problem number 5.

SFC: I see JHD has questions. Before I get to those, I will go ahead of the next section of the slides. I think they might be answered there.

SFC: So a non-issue — and I want to emphasize this, because I think it has been a point of confusion — is being able to represent decimal values. As I showed earlier in the deck, as long as you take your IEEE binary floating point number and say “I’m going to interpret this as a decimal”, you can represent decimals exactly. 0.1 does triple-equal 0.1: if both are created the same way and normalized the correct way, they will equal each other. That is actually a correct representation of 0.1. So representing decimal values is something we can already do in the language. It’s not necessarily type safe — that goes into problem 4 — and maybe not ergonomic, but it can be done. The problems we often see arise when we normalize numbers. We don’t have decimal arithmetic, but we are able to represent decimals if you interpret the number in the correct way.

SFC: I’m going to go over some solutions now. The solutions are not in any particular order; I put them in this order to most easily explain the different aspects of these different types of solutions. When I say solutions, I mean ways that all the different problems we’re trying to solve can fit together in one cohesive package for developers.

SFC: So solution 1 is the Measure proposal, which BAN presented at the last plenary and EAO will describe more today. It’s a number plus a precision plus a dimension. The number is currently a JavaScript Number, a point on the number line; it could also possibly support current and future numeric types like BigInt. Precision is the number of significant digits, and dimension is the unit. So this solves the precision problem and the dimension problem. It’s possible that decimal math could be included via prototype functions, and that you could support more digits via string decimals: if the number field is abstract, we could add functionality saying that if the number is a string, we do decimal math on it, and the string becomes the type in which you encode the arbitrary-precision decimal value — without exposing it directly, it would live inside this wrapper. Measure could be an all-in-one solution that represents all these things. Dimension could be null if you just want to represent a decimal value without any unit attached; set dimension to null, that’s fine. Otherwise you have one package that has all these features and solves all the problems — except not necessarily ergonomics, because it doesn’t give a direct way to do 0.1 + 0.2 as a primitive.

SFC: The next type of solution is decimal128 with precision. IEEE decimal128 is an encoding over 128 bits that is able to represent numbers with quanta and precision — quanta and cohorts. JMN talked about this in previous plenary meetings. If we add such a type in ECMAScript, we could add one that is fully conformant with IEEE. Measure would then no longer need precision; decimal carries the precision, so we solve the precision problem. One concern I heard when discussing this with folks is precision propagation. IEEE gives a very specific algorithm for, when you multiply two numbers together, how you calculate the output and how many trailing zeros it has. That algorithm is sometimes surprising in how it behaves; I’m told it’s based on a certain set of rules for financial calculations, but it’s not necessarily a generally applicable algorithm. Another concern is the equality operators. If you have a decimal value of 2.5m and another of 2.50m, you want them to be equal because they represent the same point on the number line, but the representation in memory is different because they have different precisions. Do you include the precision as part of the equality operation or not? There’s been some debate about that, and it causes concerns especially when we look at what the behavior would be with primitive values, because that’s much more constrained: if decimal is an object, we can have two equality functions, equals and totalEquals — that’s what Python does. When it’s a primitive, we don’t have that luxury.
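A minimal sketch of the two equality notions, using an invented {coefficient, exponent} encoding of a decimal for illustration (2.5 as 25×10⁻¹, 2.50 as 250×10⁻²); the names and shapes here are assumptions, not the proposal's API.

```javascript
// Hypothetical decimals as {coefficient, exponent}: 2.5 = 25×10^-1,
// 2.50 = 250×10^-2 — the same point on the number line, different cohort.
const d = (coefficient, exponent) => ({ coefficient, exponent });

// Value equality: scale both to the smaller exponent and compare.
function valueEquals(a, b) {
  const e = Math.min(a.exponent, b.exponent);
  return a.coefficient * 10n ** BigInt(a.exponent - e) ===
         b.coefficient * 10n ** BigInt(b.exponent - e);
}
// Total equality: same cohort member, precision included.
function totalEquals(a, b) {
  return a.coefficient === b.coefficient && a.exponent === b.exponent;
}

valueEquals(d(25n, -1), d(250n, -2)); // true
totalEquals(d(25n, -1), d(250n, -2)); // false
```

An object type can expose both functions side by side; a primitive with `===` has to pick one, which is the dilemma SFC describes.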

SFC: Solution 3 is decimal128 without precision. This basically means that within the decimal128 space, we only include the numbers that don’t have trailing zeros; the ones that do have trailing zeros we just don’t expose from JavaScript. If you have a decimal128 value with trailing zeros, that is not something you’re able to represent as a decimal128 in ECMAScript. The main benefit I’ve heard is that it’s potentially better for a future primitive decimal, because it makes the equality operators behave the way certain delegates expect, which is nice. A concern I have is that the unused bit patterns are wasteful, because IEEE gives us a framework to represent precision in the same bits the decimal is represented in. Overall, of all the bit patterns that could represent a decimal128, about 10% have trailing zeros; among numbers with fewer than 20 significant digits, a common use case, over 90% of them can be represented with trailing zeros. We lose the ability to represent those values if we have this limitation. Storing precision separately is possible, but it doesn’t work as well with arithmetic operations and so forth. The other concern is that it’s not interoperable with decimal128 systems where precision is part of the data model, as other languages support it, and we lose the ability to have interop. This is the concern I raised in Tokyo when this was presented.

SFC: Solution 4 is DecimalMeasure. This is a new one I’m throwing out there to put in the field of possible approaches. The idea is that we take Measure, but instead of wrapping a number plus a precision, it wraps a decimal with precision, and associates that with a dimension. This could have decimal semantics, while a future primitive decimal can still be its own type — I want to emphasize that. The types could be composed: DecimalMeasure could coexist with a fully normalized primitive decimal, because they are different enough types that they can live in the same universe. There is also an alternative, i18n-focused DecimalMeasure: one way to think about Measure is that it’s just an input type for Intl operations; the other is as a general-purpose type with other operations on it. DecimalMeasure could take either shape.

SFC: Solution 5 is Number.prototype. I want to talk more about this one; I posted about it in the decimal repository. Since Number is able to represent a decimal value but you can’t operate on it as a decimal — that’s the main footgun — decimalAdd could be a prototype function, defined to say: if you have 0.1 and 0.2, you add them up as if they were decimals and get a decimal on the other side, as a Number. I hope that makes sense to people. There are a couple of ways this could be exposed to developers. It could be exposed with a new operator — since these are already primitives, we can spec out an operator. Another is JSSugar or TypeScript: TypeScript could introduce a type called decimal number or something like that, and in TypeScript land, if you use the plus operator on a decimal number, it gets compiled to JavaScript as a.decimalAdd(b). This is a nice way for JS0 and JSSugar to work together: you have the sugar layer and the built-in layer, and it’s a minimal change on the built-in layer. It gives TypeScript and JSSugar the ability to do something on the user-facing layer of the API by exposing this primitive operation called decimalAdd. I’ll keep going through.
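A rough userland sketch of what a decimalAdd operation could mean — the name and semantics are hypothetical, not spec text: reinterpret each Number via its shortest round-trip decimal string, add exactly in decimal space using BigInt, then convert back.

```javascript
// Hypothetical sketch of decimalAdd: add the *decimal interpretations* of
// two Numbers exactly, then return the nearest Number. Handles plain
// decimal strings only — exponent notation like "1e-7" is out of scope.
function decimalAdd(a, b) {
  const parse = (n) => {
    const [int, frac = ""] = String(n).split(".");
    return { digits: BigInt(int + frac), scale: frac.length };
  };
  const x = parse(a), y = parse(b);
  const scale = Math.max(x.scale, y.scale);
  // Align both operands to the same number of fraction digits, then add.
  const sum = x.digits * 10n ** BigInt(scale - x.scale) +
              y.digits * 10n ** BigInt(scale - y.scale);
  return Number(sum) / 10 ** scale;
}

decimalAdd(0.1, 0.2) === 0.3; // true — unlike 0.1 + 0.2
```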

SFC: Solution 6. There we go. This is one I brought up; I haven’t gotten a clear signal from any engines yet — I sent some inquiries and haven’t got an answer on whether it’s feasible — but it’s an interesting idea. We have the existing BigInt type. What we could do in principle — again, I don’t know if this is feasible — is add a field to it for a scale, and the scale would let it represent a decimal value. Existing BigInts would work exactly the way they do today: if you construct them, they’re fine; if you compare them, they’re fine; everything works as expected. However, you would be able to construct a BigInt with a scale, and if you do that, what you get is a decimal BigInt. There are some questions here about what you do with the slash operator: if you have two BigInts and divide them, that would have to maintain existing behavior, so we would probably have to add another operator that does a decimal divide, for example. Another concern is evaluating the risk of changing BigInt’s domain: for example, if there’s a program that assumes a BigInt is an integer — maybe it indexes into an array with one — and now it’s not an integer anymore, could that be a problem? And of course feasibility. It’s a solution I want to throw out there; I haven’t seen anyone give a definitive answer that it is not feasible, and I think it’s an interesting avenue to explore. The benefit is that it gives us a primitive right out of the gate, because the primitive is already there. So that is solution 6.
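To make the slash-operator concern concrete — BigInt division truncates today, so a scaled decimal BigInt would need a separate divide taking an explicit result scale. A hedged sketch; decimalDivide and its signature are invented for illustration.

```javascript
// Today's BigInt division truncates toward zero:
1n / 3n; // 0n

// Hypothetical decimal divide: carry a/b out to `scale` fraction digits,
// returning the scaled coefficient (so 3333n at scale 4 means 0.3333).
function decimalDivide(a, b, scale) {
  return (a * 10n ** BigInt(scale)) / b;
}
decimalDivide(1n, 3n, 4); // 3333n, i.e. 0.3333
```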

SFC: Now I will go through some opinion slides, and after these we’ll open up the discussion. I’m glad I booked an hour for this, because I think we might need it. So, my opinion — I tried to make the slides as neutral as possible, though some of my biases may have slipped in a little. My opinion is that we should leverage IEEE to represent precision, because IEEE gives us a way to do it that is very well defined and matches how other languages solve the problem. I also think we should leave the door open for a primitive decimal, but not design around a primitive decimal today; it’s something to leave the door open for in the future. We should design a good object type for dealing with these numbers, because that’s what developers will have today and for probably the next decade or so — and even in a world with a primitive decimal, developers will still be using objects. If we introduce a type that makes it harder to add an object decimal later, that’s a problem. So I think we should focus on building a good object interface for decimals. My third point is that DecimalMeasure seems like it could be a decent solution that solves most of the problems in one package and leaves the door open, so I wanted to float it as a possible approach. The main pushback I’ve heard is that it scope-creeps the Measure proposal and merges problems that might be better solved separately.

SFC: I asked NRO for an opinion and this is what he said. He pointed to Temporal — I’m also a co-champion of Temporal. There, we designed seven different types with different data models: there’s PlainTime and ZonedDateTime and Instant, right? Each is its own little universe, and when you’re inside one of those types, no matter what you do with it, it will always be well defined. I think it’s cool that we did that with Temporal, and maybe there’s an opportunity to do the same with numbers. NRO, I don’t know if you wanted to add anything to that.

NRO: I think you represented it well. What I like about Temporal is that we have types for all the sub-slices of the whole data model: you don’t have to worry about things you don’t need. You can have a PlainDate and not worry about the time zone. Also, if somewhere we expect a ZonedDateTime, it’s easy to check that we got one and not something else; you don’t risk using the wrong thing, because we have a good runtime type system there. So if we’re going to have different types of numbers — many more variations — I would hope we go in some direction like that, where, for example, I don’t have to worry about a dimension if I don’t care about that, I get a number with a dimension and not one without, and I don’t accidentally use the wrong operations — binary versus decimal.

SFC: Then I will move on to JHD’s opinion — and again, once we get through the slides we can go to the discussion; I want to focus on the opinion slides right now. I did SFC and NRO, and now JHD has his turn. After talking with JHD, we established that a primitive decimal is a really good long-term solution, because it solves the ergonomic problems and some of what NRO was talking about with the type system — you know what you get in and what you get out. But a solution that solves only a subset of the problems, without a clear path forward, puts us in a worse position with respect to the long-term solution. Imagine we adopt a solution today that does a little bit but not all the stuff. Then, in a world where we can add a decimal primitive, it’s now harder, because we have this new type we have to interop with. If it wasn’t there, we could add the primitive cleanly — we can stay where we are now and add a clean decimal primitive and everyone is happy. A partial solution muddies the water. JHD, I don’t know if you want to add anything.

JHD: This is a good summary. I also spoke a bit during JMN’s presentation in the previous plenary about my wider vision. I have some more specific comments but can wait for the queue.

SFC: Cool, thank you. Then I have EAO: not all these problems need to be solved in the standard library. The i18n problems could be solved with a thin Measure protocol — with these precision, dimension, and string-decimal fields. Do we need a type that solves all the problems? Maybe we just need to solve the one concrete use case we really have today, which is how we interop with, for example, MessageFormat. If we design the protocol, we don’t need to muddy the waters with decimal; we can leave that open to solve in the future. We don’t have to think about solving all the problems now, but we do need to solve the Measure problems now, because interop with native types — with primordial types — needs a protocol to read the data from. Maybe we should focus on that problem space. EAO, I don’t know if you had anything to add.

EAO: I have half an hour to continue on this topic later. Nothing more at this time.

SFC: Okay. Thank you. I threw in this extra slide yesterday, thinking a little more about what NRO and others had said: there’s a composition here — three things that could be layered on top of each other. You have the normalized decimal128, the full decimal128 that has the cohorts in it, and then your Measure, which also has a dimension in it. Thinking about how the types compose, this could be one framework we could use. I have no more comment other than showing this slide; it’s just a brainstorm. Thank you so much for hanging with me through the presentation. I think we have half an hour to continue with the queue, and there’s quite a queue to discuss — I’m happy that people are interested in this subject. So with that, CDA, looks like it’s back to you.

JHD: So, like the second or third slide — I’m not aware of any system where 1 star and 1.0 stars would mean different things. Every star system I know of that’s not talking about stellar phenomena is in increments of either one star at a time or half a star at a time. Anything more granular than that gets hairy as a visual representation. Can you elaborate on when those are different?

SFC: So I think you’re talking about the Problem 2 slide. I posted a lengthy essay on GitHub, and I think you read it before. Basically, my evidence that 1 and 1.0 are different things is the fact that they produce different pluralizations. Even if they represent the same point on the number line, they need to be handled differently in software, because one has no precision and the other has some precision. The fact that they need to be treated differently in software means we need a way to represent that.

JHD: Thank you. That was good. And then my next queue item: when you say precision, I feel like that word is used to describe two things. One of them — I think the previous slide, if I remember correctly… maybe a different slide I was thinking of. Anyway, one of them is supporting enough decimal places to do math. So when you said — thank you, this one, Problem 3 — if you have a 20-decimal-digit precision number, you need to be able to do math with it. But the second bucket is from science class and such, where you actually care about the underlying precision of the numbers you’re using and combining. I don’t know how to differentiate those two, but I think it’s important to figure out which of them — or both — we’re talking about when we talk about precision. Personally I find that the first bucket, just supporting very fractional numbers, is very important; that is something that needs ergonomics and accuracy and perhaps deserves primitive support. The second bucket is important, but perhaps could be satisfied by a userland or API-only solution.

SFC: Cool. Just to clarify what I mean in this presentation: I used the word “precision” for trailing zeros, and “significant digits” to refer to the count that is sometimes called precision in other contexts. I tried to use the language that way, and I think I was consistent in this presentation about which words represent which things. WH, looks like you’re next.

WH: I have some questions about the bit pattern concerns on the slides. Why do you care about bit patterns of numbers?

SFC: Why do I care about bit patterns? I can say why I care. In the all-in-one Measure type, we have an interesting issue: we have a number, which is 64 bits — or, in a future with a normalized primitive decimal, 128 bits — as one chunk in memory. Then all of a sudden we have these precision and dimension fields. Dimension is probably a pointer or an enum, more likely a pointer to a string value or something like that. And then we have this extra precision value, which — what is it? It’s a big bucket of things: it could be a number of significant digits, it could be a number of fraction digits, it could be error bars, for example. On the one hand that’s cool, we have the flexibility; on the other hand it’s a big muddy, murky space. IEEE, with the bits, gives us a way to represent that compactly. We can eliminate the extra fields from the Measure type and pack it all into the 128 bits of the decimal type. Engines don’t have to worry about supporting an extra field, we don’t have to figure out what the extra field does, and we leverage the existing machinery IEEE has already given us. Does that answer your question?

WH: I don’t understand the concerns about wasted bit patterns. Using Decimal128 to just represent points on the number line representable in Decimal128 requires 128 bits, so there are no wasted bits in the representation. If you count the number of possible values, there are 340 undecillion possible 128-bit patterns, out of which there are 221 undecillion possible points on the number line. You can represent those in 128 bits; you cannot represent those in 127 bits if you want a fixed-width number type. As far as wasted bit patterns go, the bigger source of waste is actually the base-1000 representation Decimal128 uses. There are Decimal128 values that have thousands of possible bit patterns all representing the same number, due to the base-1000 representation where each group of three digits uses 10 bits. So it seems a bit in the weeds to worry about Decimal128 bit-pattern efficiency, and I’m not sure why that should have any effect on our proposals.

WH: The other thing I’d like to note is that on a later slide you discuss the BigDecimal proposal, calling it BigInt. That has issues which have been well-discussed which are not on the slide. When reviving proposals like that it would be good to replicate the main concerns about them on the slide.

SFC: For the second point — I did a little bit of looking around, but I didn’t find that. If this has been discussed, I would definitely like to read more about it.

WH: We spent many hours on this. The primary concern is runaway precision with multiplication.

SFC: Cool. I would like to read more about that. And regarding your first question about wasted bit patterns: another thing I didn’t put in this deck, which is maybe worth mentioning, is that if we’re going to have 128 bits and we’re not going to represent precision, we can actually get a little more out of it with binary float128 — IEEE does define binary128. If the whole plan is to not represent precision, we could use binary128. Decimal128 is not as efficient for general computation — I will not be doing machine learning with decimal128; I might use it for things where I really care about precision, like financial calculations, but I won’t do big data with decimal128. Binary128 is another option: if raw numeric capacity is really the thing we care about, it’s the more efficient choice anyway.

WH: For machine learning you want the least possible width because it’s faster.

SFC: You want the least possible width that gives you correct results. And 64 bit is usually enough for that.

WH: We could debate what “correct” means. Anyway, we’re going off into the weeds. Let’s move on.

SFC: NRO is next with a comment about this one.

NRO: It’s more about JS sugar than numbers. When we talk about JS sugar, we always dream about what tools could do but are not actually able to do. I see RBN on the queue and won’t speak for TypeScript, but for every tool other than TypeScript, any kind of type-directed compilation that affects runtime behavior is a non-starter, and I suspect the same is true for TypeScript; RBN can speak to that.

RBN: I concur with NRO on this. TypeScript’s position is to not do type-directed emit unless we are able to statically determine that syntax can only be used a certain way; otherwise we would not be able to transpile it. Something like ~+, where ~+ is always transpiled to a call like the thing on the left dot decimal-add, or something like that: yes, that’s feasible. That’s something we can always do regardless of what the input value is. If it’s something like transpiling plus, we could only do that if we transpiled plus for everything, which would slow down everything, so we would not be transpiling plus. That is not something that we would be able to do.

SFC: Sort of going on that point, then: even if you don’t transpile plus, is there still the possibility of writing a lint rule, in ESLint or a TypeScript lint, for when you use plus on the decimal type and maybe meant to use ~+?

RBN: That’s something that is essentially feasible, though it’s not going to catch everything. If we know the type is the decimal type, that is something you could be warned about.

SFC: Okay. Thanks for that comment. Looks like EAO is next on the queue.

EAO: Just continuing on this same slide, hopefully a quick question: given that we have the Math.sumPrecise proposal currently at Stage 3, I’m wondering, doesn’t that actually provide a solution for the use cases that something like decimal add or ~+ would be serving? And then the concerns here going further from that would be ergonomic concerns, improvements beyond what Math.sumPrecise already does?

SFC: I don’t know if KG is on the call and could make a comment about that. I think WH is on the queue.

WH: Math.sumPrecise gives you precise binary addition. Math.sumPrecise of a set of numbers will always be equal to the mathematical sum of the numbers rounded to the nearest representable IEEE double value. When adding two numbers this is always the same thing that the built-in + operator does. When adding 0.1 and 0.2 Math.sumPrecise by definition will likewise produce 0.30000000000000004 because that’s the nearest representable IEEE double.

SFC: Just to echo that. I tried out the Math.sumPrecise polyfill and it had that behavior. So unfortunately that proposal doesn’t solve this problem. It has to be another proposal.

KG: I was on mute. You can’t solve that problem as long as you’re using Numbers, because the Number 0.2 is not the decimal number 0.2. It’s the floating point number. Something more complicated than that.
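The behavior WH and KG describe can be checked directly with plain Numbers (the example below does not require Math.sumPrecise itself, which may not be available yet in engines):

```javascript
// The Number literals 0.1 and 0.2 denote binary floating-point values,
// not the decimal quantities 0.1 and 0.2. Their exact mathematical sum,
// rounded once to the nearest representable IEEE double, is the value
// printed as 0.30000000000000004. Math.sumPrecise([0.1, 0.2]) is
// specified to produce the same value, since for two summands it
// matches the built-in + operator.
const sum = 0.1 + 0.2;
console.log(sum === 0.3);                 // false
console.log(sum === 0.30000000000000004); // true
```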

SFC: Looks like MM is on the queue.

MM: Yes. So let me start by asking you a rhetorical question. If I ask you to write down two-thirds to four significant digits of precision, what would you write down?

SFC: Two-thirds to four significant digits of precision? This is a little mental exercise?

MM: Yeah.

SFC: Well, I mean, I would like to—I would have to be able to know what rounding mode that we’re discussing. Maybe I think—if we’re assuming like half-even rounding, like, that would mean it would go—the last digit would round to a seven.

MM: How many sixes would you write down before you wrote down the seven?

SFC: That would be 0.6667. That’s my mental model.

MM: Okay. Good. Thank you. So the question was rhetorical. The larger point I’m making is that there are many different notions of precision, and I find that the one bundled into IEEE decimal 128 is not any of them in a coherent manner. In particular, the notion of precision that you’re emphasizing when you talk about “1.0 stars” is a display notion of precision that is usually static. It is usually not a degree of precision that is data dependent; it is set for all the data flowing through a given call site, or all the data flowing through a given parameterized system, and is parameterized more statically than individual units of data. I will note that in the example I just posed, that is not what IEEE will render for two-thirds no matter what the non-normalization is, because it’s not an issue of trailing zeros; it’s a question of the overall total digits of rendering. If you’re in a context where what you want to see is numbers rendered to four digits of precision, and there are many such static contexts, then rendering two-thirds with all possible sixes followed by the trailing seven is what you get directly out of IEEE, and not what you want when you’re trying to use precision to control a display. The other notion of precision that I think is coherent is something to capture the notion of error bars. There are many different ways to do this, many different theories of that. There are statistical error bars, where you’re trying to propagate through one standard deviation of error under some statistical independence assumptions, and then there are lower-bound and upper-bound approaches, trying to propagate through worst-case error bars. So you agreed with the point that the scientific notion of precision, which is intended to take error bars into account, is certainly not what IEEE is doing. I don’t see any theory of what IEEE is doing that actually meets any use case well. So I’ll let that be my first question, and then I’ll put myself back on the queue.

SFC: I can respond to that a little bit. So first of all, as I think I mentioned, this also came up in JHD’s point: the word precision has multiple meanings in different contexts, which is a little bit unfortunate. In this presentation, when I say the word precision, I’m referring to precision as needed in the context of Intl.NumberFormat, and talking about it in terms of the number of trailing zeros. That’s different from significant digits, which represents precision in terms of how many digits of a number you are able to represent. So trailing zeros versus the total number of digits that can be represented.

MM: So in the Intl.DisplayFormat, if you’ve got two-thirds, and the display format is suggesting four digits of precision, how would the Intl number rendering render the two-thirds value?

SFC: So currently Intl.NumberFormat has the ability to encode rounding options in the options bag, and that’s a utility –

MM: I’m not that concerned about whether the last digit is six or seven. I’m concerned about how many sixes are displayed before the last digit.

SFC: So it depends. Intl.NumberFormat allows you to configure whether you want to round to a number of fraction digits or a number of significant digits. If you choose four significant digits, that’s what I said earlier, which is 0.6667.
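The rounding SFC describes can be sketched with Intl.NumberFormat’s existing significant-digit options, applying MM’s two-thirds exercise:

```javascript
// Rounding 2/3 to four significant digits. With the default rounding
// mode, the fifth digit (a 6) rounds the last kept digit up,
// so the result is "0.6667", as in MM's mental exercise.
const nf = new Intl.NumberFormat("en-US", {
  minimumSignificantDigits: 4,
  maximumSignificantDigits: 4,
});
console.log(nf.format(2 / 3)); // "0.6667"
```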

MM: So does Intl.NumberFormat actually have any need for the display format that comes bundled with the IEEE definition of IEEE 128?

SFC: Yeah, okay. I can definitely answer that question. I have a little bit of a thread about this on GitHub. This idea of being able to fully decouple display from the quantity being displayed helps us fix bugs in how we, for example, interoperate between PluralRules and NumberFormat; it allows us to more correctly express numbers in Intl.NumberFormat, and potentially to interoperate better with HTML input elements. As we’ve been working on these Intl APIs, the more we make them focused on how to internationalize the number, on how to take the data and put it in a form that can be displayed, and the more we can decouple those two things, the more problems it tends to solve. That’s the idea for why having precision in the data model, as opposed to it being only a formatting option, is a desirable outcome. Obviously it would remain a formatting option, because it currently is. But it would be nice to be able to put it in the data model.

MM: I’m sorry. I didn’t understand how you got through the first part of what you just said to the second part.

SFC: Maybe NRO can give an example. He’s on the queue.

NRO: I can give an example here. In Intl currently, when you want to, in this case for example, display “1.0 stars”, you have two different Intl functions. One gets the Number 1 and converts it to the string “1.0”. The other gets the Number 1 and gives you back the string “stars”. And you need to make sure to configure these two functions the same way: to tell both functions that the number will have one digit after the dot, so that they are coherent and don’t give you the string “1” and the string “stars”, or the string “star” and the string “1.0”. Right now, given that these settings are not saved together with the number, you need to make sure to pass coherent settings to all the functions, while having this encoded in the number itself means that you don’t risk accidentally getting the various functions out of sync.
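The hazard NRO describes can be sketched with today’s APIs; the digit options must be duplicated by hand on both functions:

```javascript
// Formatting "1.0 stars": NumberFormat renders the digits, PluralRules
// picks the plural category, and both must be given the same options.
const digits = { minimumFractionDigits: 1, maximumFractionDigits: 1 };
const nf = new Intl.NumberFormat("en-US", digits);
const pr = new Intl.PluralRules("en-US", digits);
console.log(nf.format(1)); // "1.0"
console.log(pr.select(1)); // "other", so the message uses "stars"
// Forgetting the options on one side yields the mismatch NRO warns about:
console.log(new Intl.PluralRules("en-US").select(1)); // "one", i.e. "1.0 star"
```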

MM: So if the actual underlying number was 1.1111 and you’re rendering it in a context where you wanted one digit of precision, it would be rendered as “1”; and when it’s rendered as “1” it would still be singular. And rendering it as “1” is not a rendering that IEEE provides you, because the IEEE degree of freedom is only trailing zeros; it’s not overall precision of display. So I just don’t find dynamically tracking trailing zeros, as the degree of freedom carried dynamically in the data, to be coherent. It doesn’t match any use case that I can imagine.

NRO: Yes, I agree with you here. What is important for the Intl as presented is to have the number together with a number of trailing zeros. But it’s not really necessary for it to track this number of zeros across operations. You usually would want to just set the precision after you’re done with your computation.

MM: But when do you care about number of trailing zeros as opposed to just number of significant digits?

SFC: I mean, I think number of significant digits could be one way of representing the number. I think in many cases that is the thing that Intl would need, and that can include trailing zeros. If you say, well, I want to render this Number 1 with two significant digits, that’s something that can be encoded in the data model, and IEEE gives us a mechanism for encoding it in the data model. To finish my point, I think you’re discussing the first concern on this slide here, which is that the way IEEE propagates precision across operations is kind of unexpected in certain situations. And that’s not necessarily the problem that Intl needs solved. Intl just needs precision in the data model; Intl doesn’t care how it’s propagated.

MM: Intl doesn’t need trailing zeros. Intl needs total number of digits, whether the digits omitted are zeros or not. So if I was in a context showing something to three significant digits and the actual number was one, I would expect “1.00” to be displayed. The trailing zeros come from the display format at the point of display. It’s static. It’s not data dependent. It’s not carried with the data. I still have not heard a use case where what’s dynamically carried with the data is only the number of trailing zeros rather than the number of digits to show.

SFC: Yeah, I understand your point. But I want to make sure we get through the—we’re pretty close to time. If—if Nicolo, if I can jump ahead to NRO. If you can make your last little comment.

NRO: Yeah. We would also like to hear from JMN, but I was trying to encourage other people to give their opinion here. We have heard from a few people today, and these same people were already discussing all this a few weeks ago in other meetings. It would be great if the rest of the committee also expressed their opinions or feelings.

SFC: And yeah. JMN, you said in the queue that you like 3, 2 and 6 in that order. Is there anything else you wanted to add to that? Or elaborate on why.

JMN: Yeah. I think 3 is the state of affairs today. 2 is what we had, I think, one or two iterations before that. 6 is interesting because it is a kind of path to being a primitive today. But as WH said, there are some big concerns about that, with values getting extremely big very quickly. But maybe just a general point, why would I prefer these three things? It’s because, to my mind, they clearly separate the measure idea from the decimal proposal, which I understand to be something focussed on numbers. We can debate whether that’s mathematical values or things with some precision on them or not. But it’s still—at least as far as I understand it—somewhat separable from the measure idea, which is a nice, I think, independently-motivated proposal. So that’s why I would list those things in that order. This is fantastic. Thank you for organizing the presentation.

Speaker's Summary of Key Points

SFC: The goal is to take a holistic approach to how we want numbers, precision, measures, and dimensions to interoperate, to give ECMAScript developers a cohesive, well-designed architecture. I went over several of the different problem spaces, as well as some of the different possible solutions. We had some good discussion regarding what should be represented in the type system, and some good discussion about what precision is and the different ways to represent it. The next action items are for the number-related champions to continue to iterate on this and come up with an architecture that solves all of the problems in a clean and future-proof way.

SFC: Does that sound about right, NRO, JMN, et cetera?

NRO: Yeah.

CDA: Okay. Thank you, SFC.

Measure Stage 1 update

Presenter: Eemeli Aro (EAO)

EAO: This was supposed to be BAN presenting, but as he’s on medical leave I’ve stepped in. I needed to put the presentation together yesterday, so apologies for rough edges and so on.

EAO: This is something like a continuation of the previous discussion, but looking at the—maybe not how to define a number part of this. Measure as a proposal is providing a way to separate the “what” and the “how” when we are formatting numbers. This statement is carrying a lot of weight. So in the “what” here we have, for example, a number and units; of meters, kilograms, or any other things that are being measured, US dollars could be one. And then separately, “how” are we formatting these things. I will get to why that’s an issue we would maybe want to address in the next slide.

EAO: The Measure proposal is also talking about supporting mixed unit formatting, such as rather than formatting “3.5 feet”, providing a way of formatting that value as “3 feet, 6 inches”. And then, the third sort of basket of problems, shall we say, that we are looking to solve is providing unit conversion capability in ECMA262.

EAO: To some extent, all of these are coming from desires and needs identified in other discussions and proposals, such as the Smart Units proposal, Decimal to some extent, and Intl.MessageFormat. Measure is one possible way of looking at the space of problems we have here that we would like to solve.

EAO: A lot of what is going to continue from here is based around the proposed solution of adding Measure as a new primordial object and specifically, one that would be accepted by Intl.NumberFormat as a formattable value.

EAO: That part is, in fact, the—the key of what makes this something that, I think, we ought to be defining in the spec. And that’s coming from the way that we do number formatting. Along with the other formatting operations in Intl, we have a two-phase process here. First, we have a constructor. And in the constructor, we set a bag of options that are defining how the constructor instance ought to be formatting. And then later on, once or multiple times, the formatted value is given in a format() method on this instance that we’ve created.

EAO: So what this means is that as it’s currently set up, if we want to format currencies, for example, we need to create a separate Intl.NumberFormat instance for every currency that we would like to format in, even if the other aspects of how we are formatting currencies, or values with units, or values with precision, would otherwise stay the same. And this ends up mixing what we are formatting with the options for how we are looking to format it. Specifically, as alluded to by SFC in the previous presentation, this becomes a problem if we consider for instance the Intl.MessageFormat proposal, where the MessageFormat 2 specification has almost a requirement to support something like a currency or a unit as a concrete thing that can be formatted. The sample code here shows how this could likely look if Intl.MessageFormat advances in the spec. We have the pattern of a message, which includes a placeholder cost formatted as a currency, and then we have something like a Measure that we can pass in as the cost. That Measure then carries with it the currency (a unit could work there as well, when doing unit formatting), giving us a value that can be passed through the message and formatted in a way that ensures that a translator does not “translate” the value and localize it, which could entirely change the meaning of what is being formatted. This is largely the problem we are looking to solve.
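The per-currency instance problem EAO describes can be seen with today’s API; only the “what” (the currency) changes between the two instances, yet the “how” must be restated:

```javascript
// The two-phase pattern: options are fixed at construction, and the
// value arrives later via format(), possibly many times.
const eur = new Intl.NumberFormat("en-US", { style: "currency", currency: "EUR" });
console.log(eur.format(42)); // "€42.00"

// Formatting USD requires a whole new instance, even though only the
// currency changed, not any of the "how to format" options:
const usd = new Intl.NumberFormat("en-US", { style: "currency", currency: "USD" });
console.log(usd.format(42)); // "$42.00"
```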

EAO: The strawman proposal, in a little bit more detail, allows for operations like this: we can create a new Measure, for example starting from 180 centimetres. Then we convert to a unit here defined as foot-and-inch. And this is what we allow to be passed to a NumberFormat instance, which gives us output that says “5 ft, 11 in”, in this case. I am omitting some discussion about how exactly precision works. That is something we can consider, I think, separately; I would rather not spend time on that topic because it’s a big topic that could swallow up the discussion completely.
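Measure itself is only a strawman, but the conversion arithmetic behind the “180 cm becomes 5 ft, 11 in” example can be sketched in plain JavaScript (the helper name and rounding choice below are hypothetical, not part of the proposal):

```javascript
// Convert centimetres to a mixed foot-and-inch value, rounding the
// inches as a formatter presumably would. 1 inch is exactly 2.54 cm.
function cmToFootAndInch(cm) {
  const totalInches = cm / 2.54;
  let feet = Math.floor(totalInches / 12);
  let inches = Math.round(totalInches - feet * 12);
  if (inches === 12) { feet += 1; inches = 0; } // carry after rounding
  return { feet, inches };
}
console.log(cmToFootAndInch(180)); // { feet: 5, inches: 11 }
```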

EAO: One further example of what we may consider to be in scope for Measure is this conversion to a locale where we could be defining, for example, a usage for the value. So here, in this example, we’re starting from the same starting point of having a measure of 180 centimetres, and then converting that to en-US, American person-height usage. And then, getting my height as a new measure instance. And this, then, effectively becoming foot-and-inch, which can, then, be formatted as previously, and we end up with “5 ft, 11 in”.

EAO: As might be obvious here, this is a proposal that is to a large extent coming from an internationalization and ECMA-402 interest; that is effectively why it exists. We do have an interest in 402, looking forward, in particular for number formatting, in enabling something like “usage” to be accounted for, because it becomes very convenient to be able to format values and localize them in this way.

EAO: But at the same time, we are very concerned about the same sort of issues that, for example, the Stable Formatting proposal considers, where if we were to introduce any capability of having an input like 180 centimetres and having output coming out of that is “5 ft, 11 in”, we end up in a situation where JavaScript developers will absolutely figure out a way of getting a “5 ft, 11 in”, even if that is only available through a complicated sequence of formatting to parts and parsing the output from there. So we are looking to ensure, in part, that this sort of capability is provided without needing to do convoluted work and abusing Intl, in order to get at the final result.

EAO: At the last meeting, BAN presented some of the aspects of this as well, such as how we would allow the myHeight instance here, for example, to be able to output the “5 ft, 11 in” values that would also be used for the formatting.

EAO: It’s maybe relevant also here to note that there’s a whole bunch of things that this proposal is not about. It’s not about unit formatting, because this is already a thing that we can do with Intl.NumberFormat. It’s already supported for an explicit list of units that we say must be supported and you can’t go beyond that.

EAO: And furthermore, it’s not even about localized unit formatting, because that is already a thing. The slide shows a Finnish formatting of the feet unit; note in particular that this already handles some amount of pluralization, “1 jalka”, “3,5 jalkaa”, where the unit accounts for the value being formatted. And this is also not about formatting numbers with an arbitrary count of digits, because we have that too: the input given to NumberFormat gets converted internally to an Intl mathematical value that, if I remember right, has effectively arbitrary precision. Furthermore, even though we talk about currencies, we are not talking about or even considering allowing currency conversion to happen within Measure. And we’re not talking, within at least the scope of the Measure proposal, about considering Measure as a primitive or otherwise allowing for operator overloading with it.
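The localized unit formatting EAO notes as already existing can be demonstrated with today’s Intl.NumberFormat; per the slide, Finnish output is “1 jalka” and “3,5 jalkaa” (exact whitespace may vary by ICU version):

```javascript
// Finnish ("fi") formatting of the "foot" unit, including
// pluralization handled by the locale data.
const fi = new Intl.NumberFormat("fi", {
  style: "unit",
  unit: "foot",
  unitDisplay: "long",
});
console.log(fi.format(1));   // expected per the slide: "1 jalka"
console.log(fi.format(3.5)); // expected per the slide: "3,5 jalkaa"
```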

EAO: But then we do have some open questions, and this is the part of the proposal where I would be interested in input and comments from TG1. One aspect is that this proposal can be done with a very, very minimal amount of additional data payload, because we already have these units, and we don’t necessarily need to go beyond them. But we could: there are a bunch of units that it might be interesting to support formatting or conversions for, but these would then carry additional data requirements. Should we or should we not do that? Whether that would be interesting to do, or whether there is a hard line here, would be very interesting to hear.

EAO: Then there are also the conversions that account for the locale and value-specific usage preferences; that’s the second example I showed. It would be very interesting to hear whether this should be considered as part of the initial proposal or as a possible later addition. These are conversions like I mentioned earlier, converting a height to a person-height for other locales. And it’s important to note that the conversion also needs to account for the value of the number that we’re formatting. For example, if I remember right, the CLDR data commonly used for this says that if a person’s age is less than 2 ½ years, then you end up including months in the output, but over 2 ½ years only years are shown. So the usage depends on the value and the locale.

EAO: And this data for this is very small. Like, compressed, if you look at the CLDR data, we are taking maybe 2, 3, 4 kilobytes for this sort of capability. This is not a lot that is being asked for potentially.

EAO: Also under consideration is whether Measure should support addition, multiplication, division, and other operators on the value. Given that we already consider and do want to support conversion to some extent, should we allow for operations that potentially would even transform the base unit of what is being worked on?

EAO: So a lot of this is driven by this one big question, which I would appreciate input in, should we really care about anything beyond specifically formatting and conversion? Those are the requirements that this proposal at a minimum needs. But whether we should go beyond them is something that could be done, but it doesn’t need to be done. And knowing whether to—whether measure ought to go beyond is going to drive quite a bit of the considerations for how we structure it and about how we allow for it or not, something like a usage parameter, and how it interacts with the other parts. So this is where I would be very interested to hear if there’s anything in the queue or other comments or criticisms to address here.

CDA: WH?

WH: So … the answer to the question you have posed all depends on handling of precision, which you didn’t cover in the presentation. Because I think that’s the long pole in the tent here. Treatment of precision becomes important for doing arithmetic. And treatment of precision also becomes important when doing conversions. So do you want to do the precision-handling work in one place or do you want to do it in two places, and have them potentially get out of sync?

EAO: I would say the precision question depends on this question that’s on the slide currently. Because if we were only caring about formatting and conversion, we can consider precision only from these points of view. However, if we also want to support, for example, operations on the value, explicitly, as a part of Measure, then precision, as you mentioned, needs to be accounted for more widely. This is why I am asking this question, because it needs to be answered first before we get into the depths of how do we handle precision.

WH: You skipped over the precision part of the presentation. I can’t give you the answer until you present that.

EAO: What I mean is that we do not have a ready answer for how exactly the precision ought to work, because we can define it in multiple ways, and this question in particular is a fundamental one that ought to be answered first before we figure out: given these are the use cases and needs that we are trying to address, what do we do about precision here?

WH: Well, that’s the opposite of what my point is. We need to understand what’s involved in handling of precision here. And it’s hard to answer this question without a good understanding of the precision aspects of conversion.

EAO: Okay.

WH: What I am asking for is either a presentation or some kind of discussion of what are the considerations dealing with precision. And that would be helpful to decide whether we should care only about conversion and reinvent the wheel for doing arithmetic, or whether it’s better to consider them both at the same time.

EAO: That does seem like a topic for consideration later.

MM: Yeah. My question is related, I suppose. Given that Measure includes some notion of precision, even without pinning down what it is; given that the current IEEE floating point Numbers and the current BigInts don’t carry a distinct notion of precision, they just identify a point on the number line; and given that the number field of a Measure would also be able to carry regular IEEE floating point Numbers and BigInts and add some notion of precision in this Measure wrapper: SFC had raised the idea of somehow using the trailing zeros that are dynamically carried by a decimal number as the precision of a Measure. And that confuses me on two grounds. This question is sort of across both presentations taken together, so I consider it a question for both of you. It confuses me because, on one hand, Measure would already need to carry its own precision in order to deal with floating point Numbers and BigInts, so whatever theory of precision it carries would seem to apply to decimals as well. And for the theory of precision that you might think to carry in Measure: is there any use case for which that theory would be one that only tracks trailing zeros, as opposed to tracking trailing digits?

EAO: So I would say that if we consider precision as primarily a utility for formatting a Measure, and also for directing what might happen during conversion, then it becomes sufficient, for instance, for the precision to be retained within a Measure instance as an integer number of fraction digits of the value that is being formatted. And we could theoretically, with this sort of approach, even require precision to be included as a parameter when conversion is happening, so that we are completely externalizing what happens to precision when converting, say, from centimetres to inches, or doing other operations like this. Does this possibly answer your question?

MM: I think so. Let me restate and see if you agree with my restatement: there is no anticipated use case for which the notion of precision that Measure would carry dynamically would be trailing zeros; the closest is trailing digits. Two-thirds rendered to four digits is 0.6667 or something. And, therefore, there is no theory of precision that Measure would want for which, if the number is a decimal, it could just delegate that notion of precision to the dynamic precision information that decimal numbers carry.

EAO: Probably yes. Because we will absolutely need to support Numbers, and Numbers do not carry their own precision, so the precision will need to be somewhere, or the Number will need to be converted into a Decimal, and converting the Number into a Decimal only for it to later be converted into an Intl mathematical value seems a bit too convoluted.

MM: There are two grounds: one is, as you said, that the precision has to be in the Measure because it applies to Numbers and BigInts. And the second ground, which the second part of my question focused on, is that none of the theories of precision that one would think to build into Measure is something that keeps track only of trailing zeros, rather than trailing digits.

EAO: I would agree with that.

MM: Okay. Thank you.

CDA: SFC?

SFC: Yeah. I think I have the next two items on the queue. First, about precision, trailing zeros versus trailing digits: I don’t necessarily understand why those two concepts are distinct. For example, let’s say you have 2.500, which is also 2.5 with four significant digits. The only difference is how you represent it in the data model, but the data model is able to represent both. It’s the same concept, right? The concept of this number, 2.500. Both are able to express it.
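The equivalence SFC points at, and the divergence MM presses on, can both be seen with existing Number methods:

```javascript
// "2.500" described two ways: three fraction digits, or four
// significant digits. For 2.5 they yield the same rendered quantity.
console.log((2.5).toFixed(3));     // "2.500"
console.log((2.5).toPrecision(4)); // "2.500"

// For a value like 2/3 the two notions diverge from "trailing zeros":
// the trailing digits carry information rather than being zeros.
console.log((2 / 3).toPrecision(4)); // "0.6667"
```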

MM: Yeah. And so I agree with that. And I agree that you can get there by saying either three trailing fraction digits or four significant digits; there are several different ways to do it. But none of them, trailing digits versus total significant digits, none of the coherent choices you could make, nothing you would lift into Measure or use as a substitute for the precision carried by Measure, would be the number of trailing zeros, as opposed to trailing digits or total digits.

SFC: I still don’t understand because number of trailing zeros is also a coherent model. As is the number of fraction digits or number of significant digits.

MM: Give me a use case for which number of trailing zeros as opposed to number of trailing digits is useful.

SFC: They represent the same thing in the model.

MM: I didn’t understand that.

CDA: I want to interject because we only have a couple of minutes left.

MM: I think we can probably further investigate this off-line.

SFC: My initial reaction, MM: as far as I can tell, as I said, the thing we want to represent is 2.500, and at the end of the day, being able to represent that quantity is what we care about.

MM: The context in which you want to represent 2.500, which in which the underlying number is two-thirds, you want to represent all the 6s you can.

SFC: I think I see. I mean, we wouldn’t represent two-thirds, because two-thirds is neither a decimal nor a binary floating point value.

MM: I think that misses the point.

CDA: We do need to move on. SFC, do you want to very briefly touch on your last topic?

SFC: I think a lot of questions that EAO is asking have to do with the scope question that was the topic of my discussion. So I feel like we should continue to have these discussions and, you know, decide what the scope is going to be and that will drive a lot of these decisions and answer a lot of the questions from EAO’s presentation.

CDA: All right. EAO, would you like to dictate key points/summary for the notes?

Speaker's Summary of Key Points

EAO: The rationale and use cases for the Measure proposal were presented along with a strawman solution. The proposed extent of the scope of the proposal was also presented, along with some open questions about that scope. No clear opinions were expressed by the committee on the questions presented, but a further discussion on the representation and handling of precision, in particular, was requested.

Continuation: Error Stacks Structure for Stage 2

Presenter: Jordan Harband (JHD)

JHD: Okay. All right. So I don’t remember where we were at the end. I think it was DLM’s comment was the last one.

JHD: So, just my understanding of the pushback from Mozilla, in particular MAG and DLM: I believe it is that this seems like too much, too big, not well motivated as a big proposal, and maybe we could split it up. I think that in general that is a good principle to apply, a good way to interrogate proposals. This proposal contains three separable pieces, I guess. One is the normative optional accessor, which we could ship and say: great, that accessor is great, it produces a host-defined string, cool. The problem that solves is the one that isn’t actually very convincing anymore: great, we have specified it, and I guess it prevents someone from having their own property. It’s not no value, but it’s not a lot of value, not enough to be a whole proposal; that’s almost small enough to be a needs-consensus PR.

JHD: And then the next piece would be the static method that gets you the stack string, wherever it lives. The benefit there is that, with the combination of the first one and that one, the stack string can now be retrieved in a way that is compatible with the desires of hardened JavaScript; there’s a brand check included in the method. That could be done even in an environment where the stack accessor is not available, and then it can be denied in a way that is compatible with the needs of hardened JavaScript. So there is some value to be had there as well. Typically the desires of hardened JavaScript have been enough to motivate design changes, but I also haven’t seen a lot of enthusiasm from the committee as a whole for building things just for that purpose. I am not trying to say we shouldn’t, but I am concerned that perhaps that wouldn’t be seen as enough value to be a proposal. And then the third piece is the bulk of this proposal: the getStack static method which gives you the structured data. This is the one that developers want; nobody wants to work with a string. And that’s where I think the majority of the value comes. But that isn’t very useful unless it is tied together with the contents of the string, so that you can be confident they represent each other in some way. So I don’t think that the structured data can happen in the absence of at least specifying the structure of the string, in the way that this proposal does for the accessor. I suppose we could omit the get-stack-string method, but if you are already building the structured metadata, already ensuring the string complies with that structure and schema, and already shipping the accessor, I would be surprised if someone thought it was a lot of extra work to add the static method that’s basically doing the same thing the accessor is doing. I can separate it, but that feels like a bunch of overhead in a process that won’t add any value and won’t result in a different outcome, assuming all three eventually make it.

JHD: So I would love to hear some more evaluation about the value of splitting them up and where the difficulty lies around, like, implementing this and so on. So let’s go to the queue.

DLM: Yeah. My topic is not addressing what you asked about; I don’t know if you want to follow up on that later. Basically, after the conversation the other day, I went back to the meeting notes from the last time this was brought to committee, in 2019, and that helped clarify my thinking a little bit about my concerns. At that time, Adam and Domenic expressed concerns about exposing the structure, in order to get access to frames, without standardizing the contents of the frames. I believe that would start exposing a bunch of things that are non-interoperable between the different engines. The other thing that really stuck out was that the SpiderMonkey team had, by 2019, already tried to align our stack format with V8 and found it wasn’t possible: we were breaking internal code and extensions, and breaking code on the web. So to tie those together: unless we can standardize not just a schema but the actual contents, this is going to introduce more interoperability trouble and cause more problems than it solves. The concerns raised the last time this came to committee are still valid, I share them, and I don’t think there has really been any change since then. I am not hearing any evidence that anything around those concerns has changed in the intervening time.

JHD: I don’t think it’s necessarily clear that it’s a valuable or desirable goal to make the stack trace contents actually be the same across browsers. It seems nice in theory, but I don’t know if it makes much of a difference; anything working with stacks is already doing some stuff to work around the differences across browsers. So I am not convinced by one of the concerns you stated, which was stated back then as well: that it would expose information that would entrench interoperability differences or create compatibility problems down the road. The people already doing this stuff, Sentry and so on, have already built that, and they are working with it already. So making their job easier by encoding some of this stuff in the standard doesn’t strike me as something that makes compatibility problems worse. It would prevent engines from deviating further in some ways and not in others, which seems like it reduces compatibility problems.

DLM: So I think what this will actually do is make it easier for people to start inspecting stack frames. This is actually going to increase the usage of this kind of code, which means we expose these differences to a broader audience. That a few specialized people are doing this and working around it doesn’t convince me it’s a good idea to expose this to everyone on the web.

JHD: Okay. So I understand your position better. Thank you.

DLM: Thank you. And I sympathize; I understand why people would want this. It’s not that I think it’s a bad idea in itself. It’s just that I am completely unconvinced that, without standardizing the contents, exposing this more easily is going to make the world better for anyone.

SYG: I agree with Mozilla’s concerns here. To put another way how we think this does not help the interop story: we have one point of non-interop today, the whole of the stack machinery. You have to wholesale do browser sniffing and decide what to do. It’s unlikely we can unship that; it’s beyond unlikely. We can’t just unship that. If we standardize a new thing, there are two concerns. One is a footgun concern: it looks like it’s interoperable, but it’s not; the contents are not. We got into that last time. You still have to do the browser sniffing and deal with the contents. The net result is another point of non-interop: alongside the existing non-standard stack machinery, there is now going to be a new thing that we will also have to maintain forever, that is not interoperable and unlikely to ever be. That’s a net increase in the non-interop surface, and I am not interested in that.

JHD: Just to clarify: are your concerns here, and DLM’s, primarily about the structured form? If I did the three pieces I discussed, the first two don’t deal with the structure; do those same concerns apply to the first two?

JHD: Number 1 was the normative optional accessor, which is what you already have in theory. Number 2 is the static method that gets you the string, and 3 is the structure. The concerns we just talked about, yours and DLM’s, are about the structure part and not about the other two?

SYG: Right, my concern was about the structure part. But I don’t see the value in the first two.

JHD: Got it. Okay. So those concerns don’t apply, but you don’t see the value. Just clarifying. Thank you.

MM: Yeah. So given what SYG just said, I am going to combine this with the other thing that I put on the queue, because they both address the degree of the interop concerns: the first by being more ambitious, and the second by being less ambitious. First, the more ambitious one. A possible compromise, still short of trying to fully specify the stack (which I don’t think will ever get the engines to agree, especially since some of the engines do things like tail call optimization and others don’t; I can’t imagine that’s going to be surmountable in terms of what stack traces are produced). The ambitious compromise would be that any stack frame might be omitted, but any stack frame that is present reflects reality. So once again, an empty stack would still be conformant, but a stack that simply claims there’s some function on the call stack that has nothing to do with any valid interpretation of the actual call stack could be considered non-conformant. That would be very ambitious; I am not hopeful we can get agreement on that. I am offering it in response to the idea that the structured stack trace is only something that might be agreeable if we go beyond –

SYG: I’m sorry. Could you repeat the last—like, 45 seconds? There was an earthquake and I zoned out.

MM: Sure… glad you’re still there. There’s been concern that just standardizing the schema without standardizing the content would be not very useful. I think it would still be useful, but I am offering the ambitious compromise as one of the two compromises I am suggesting today. The ambitious compromise is that we go beyond just the schema to say that any frame might be omitted, but any frame that is represented must be truthful, must be accurate. So, for example, you can’t produce a structured stack trace that claims there’s a function on the call stack that, by no semantic interpretation of the call stack, is actually on the call stack. That would, I think, be something more than schema that would be useful, and potentially in the realm of what engines could agree on. But let me stipulate that I find it unlikely we would actually get engines to agree even on that, because of lots of internal ways they might be optimizing code or stacks or whatever. And that’s the part that covers everything you might have missed. Now, new material: the less ambitious compromise I am going to suggest is Jordan’s number 1. I agree with Jordan’s statement of the value of each of his three breakdowns, except that I want to say that number 1 by itself would be hugely useful to us. Number 1 by itself is just the normative optional accessor, and it doesn’t even need to be normative optional, since it would be conformant for it to return the empty string; if you want to censor it, we provide a substitute accessor that returns the empty string, which is conformant without resting on the normative option. The thing about standardizing the accessor as the source of the stack property is that it would address what is currently a very painful situation for us. Mozilla’s SpiderMonkey already conforms: the stack property is located on the prototype as an inherited accessor.
And Moddable’s XS conforms to it as well. Our shim basically tries, as much as possible, to turn JavaScript platforms into ones in which the stack is an inherited accessor. The two pain points for us: on JSC (Safari), there’s a stack data property on error instances that is produced before we can intervene. We don’t have any hook to intervene and, therefore, no hook to be able to censor information about the call stack. That revelation, the spooky action at a distance of seeing what should be encapsulated information about the call stack above you, is something we do not have a way to censor on JSC. And the much more telling mistake that V8 made, and it is a mistake from our point of view (we had a long discussion about this on GitHub threads, public and private, with Shu), is that V8 recently, without realizing the damage it would cause, added an own accessor property to error instances, where all of the own accessors have the same get—sorry: that’s probably the same…

SYG: Yeah. It’s a tsunami from the earthquake

MM: Sorry about that. And as I was saying: V8 put an own accessor property on error instances where the getters and the setters are the same function across instances, and therefore the per-instance information they must be accessing is hidden internal state. So it would have been, and this was agreed to on the thread, and would still be, easy for V8 to change that to be an inherited accessor; it’s simply the case that right now there’s no basis for motivating V8 to make the change.

MM: If it were an inherited accessor across all engines, then it would give us a way to censor the visibility of the stack without virtualizing it; and virtualizing it, in the absence of the other parts of this proposal, would still perhaps involve a lot of sniffing and platform-specific stuff. The major need is the censoring. Because right now on V8, they have created not just an unpluggable communications channel for data: the accessor properties allow the communication of object references through the hidden internal state, because the setter is honoured and it does not require the argument to be a string. So that’s a capability leak that we cannot plug, because of this set of decisions that V8 made. And it would be easy for V8 to change to this common behavior if we could agree to that. So if part 1 of this is something the committee could agree to, I would be very happy to separate it out, try to push that through to consensus, and let the remainder remain in a distinct proposal.
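A minimal sketch of the censoring problem MM describes, assuming a hardened-JavaScript shim that replaces a prototype-level `stack` accessor. The shim itself is hypothetical; the observable point is that on V8 the engine stamps a `stack` property directly onto each error instance, which shadows the prototype and bypasses any prototype-level censor:

```javascript
// Hypothetical censoring shim: if `stack` were an inherited accessor on
// Error.prototype (as in SpiderMonkey), one replacement would censor all
// errors at once.
Object.defineProperty(Error.prototype, "stack", {
  get() { return ""; }, // returning the empty string is still conformant
  set(_v) {},           // swallow writes so no data channel remains
  configurable: true,
});

// On V8, however, each new error carries its own `stack` property, which
// shadows the prototype accessor and so escapes the censor above:
const err = new Error("boom");
console.log(Object.getOwnPropertyNames(err).includes("stack")); // true on V8
```

This is why MM wants V8 to move the accessor to `Error.prototype`: the shim would then have a single place to intervene.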

CDA: Noting we have less than 10 minutes for this topic.

DLM: I have two quick replies to what MM said. First of all, I wanted to clarify our position about a schema without specified contents: we are not saying it’s not useful, we are saying it’s harmful, because we’re concerned about interop problems. On the other point, we would be happy to see some specification of the accessor, because this is causing web compat problems for us.

SYG: So, to MM: it sounds like you would specifically like V8 to change our existing non-standard API, which we have discussed. I would like to point out that this is not a direct outcome of standardizing a new thing. If you standardize something like this stack getter, a very likely outcome is that we have both that and our thing. It’s not that you standardize a thing that kind of sort of overlaps with a non-standard thing and we then unship ours. These are independent outcomes.

MM: My understanding from the GitHub threads that you and I engaged in, both public and private, is that if there were an accessor property on Error.prototype, inherited by error instances, there would be no reason for new Error instances created and thrown by the engine to carry own stack accessor properties that simply have the same getter and setter, because the ones they would inherit would access the same internal state.

SYG: That’s correct. But to get to that place, the investigation needed is: what is the risk of doing that? It’s not just standard versus non-standard.

SYG: It is just independent of whether it is a standard thing.

MM: Certainly, any change to a browser-specific API in order to conform with cross-browser agreement is a danger to that browser and the users of that browser. Yes, I will acknowledge that, and yes, for this to make it to Stage 3 would certainly require buying in to at least doing the experiment and seeing if there’s any interop risk. Note, by the way, the security problem that we’re concerned with here only has to do with the pre-endowment of the own stack accessor on platform-generated errors. It has nothing to do with whether capturing a stack trace stamps own stack properties onto errors and non-errors, because we can censor captureStackTrace. It’s only the pre-endowed accessor.

JHD: To clarify: in general, correct, standardizing a thing cannot force an engine to unship a non-standard thing, and the rubric is based on many things, breakage among them, not simply the fact of being standard or not. In this specific case, it’s likely that if we shipped an Error.prototype accessor, V8 would make the change, but that’s not a guarantee. Is that accurate?

MM: That’s correct, and that kind of investigation is appropriate to happen at least during Stage 2, if not later. It’s implementer feedback. It’s one that might involve the same kind of counters that you have done for fixed versus non-extensible. It’s an investigation to see what the –

SYG: Let me be frank. We haven’t done this investigation because we don’t think it’s high priority. And you don’t get to force it to be high priority by making it a proposal.

MM: Okay. I understand that. Would there be an objection to this piece, sectioned off from the error stacks proposal, proceeding through the early stages of the process, so that we can continue this discussion and possibly cajole V8 into trying the experiment?

SYG: Are you asking if this part is being split off to continue the discussion?

MM: Yes.

SYG: I do not object to it being split off.

JHD: Okay. So, to summarize what I have heard, so that I can update the proposal with the current status: there remain concerns that any form of standardizing the schema that does not account for the contents (whether it standardizes them is not the issue, but it must account for those issues) is something Mozilla and V8, at least, consider harmful, even though a lot of other folks think it would be useful; that’s the constraint there. And there is intrinsic value, it seems, in shipping the stack accessor by itself, where the only requirement is that it return a string. So what I am suggesting happen, and I will talk it over with MM, is that I rename the current proposal to be about the structure, and then make a new proposal that is just the stack accessor and try to advance that, and figure out what to do with the structure separately. Does that seem like a viable plan for now? Or does anyone have a reason why that’s not a viable plan for now?

JHD: Feel free to reach out, outside of plenary. I just wanted to get the opportunity to get in the notes, if anybody has a reaction.

MM: Obviously, I support that plan, and I would volunteer to be a cochampion on both.

JHD: Okay. Well, then, I will plan to come back at a future meeting, request Stage 1 or beyond for accessor, and I will update the README of the current proposal to indicate what those concerns are, and how we might need to address them and proceed from there.

Continuation: import defer updates

Presenter: Nicolò Ribaudo (NRO)

NRO: Okay. Yeah. Hello, everybody. We are continuing the discussion started on Tuesday about import defer. On Tuesday we had different proposed changes, and there was one we didn’t conclude on. Just to recap what the proposal currently does: there are some evaluation triggers when touching the deferred modules. Whenever you perform a get operation, modulo symbols, it will trigger evaluation. This means that operations like `"foo" in namespace` do not trigger evaluation, because they don’t go through the Get internal method of the deferred namespace object. Operations like Object.keys trigger evaluation, specifically because Object.keys calls Get when there is some key. And operations like Object.getOwnPropertyNames do not trigger evaluation, because they don’t trigger Get. There are other ways to query objects; there are a bunch of internal object methods.
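A sketch of the trigger behavior NRO describes. `import defer` is a proposal and is not yet shipped in engines, so this is illustrative only:

```javascript
import defer * as ns from "./mod.js"; // module linked, but not yet evaluated

"foo" in ns;                    // [[HasProperty]]: does NOT trigger evaluation
Object.getOwnPropertyNames(ns); // [[OwnPropertyKeys]]: does NOT trigger evaluation
Object.keys(ns);                // reads keys via Get: DOES trigger evaluation
ns.foo;                         // [[Get]]: DOES trigger evaluation

// Proposed change: all of the above export queries would trigger evaluation.
```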

NRO: The proposed change is to align all of these things and make all of them always trigger evaluation, so that the rule would become: when you try to get some information about the exports of the module, you are triggering evaluation. There are some arguments in favour of and against the change. The argument in favour is that this change would simplify what tools have to implement, making it possible for tools to implement the semantics of the proposal. The reason I am saying this is because, beside native browser support, a lot of the time ESM gets transpiled or bundled before running in browser environments. If at some point we have the module declarations proposal, bundlers could emit it, and so use ESM as implemented by the browser. The argument against this change is that it removes an ability we are giving to JavaScript users right now with the proposal: to list the exports of a module without triggering evaluation. This change is entirely driven by the needs of tools, and not by any spec constraint or any constraint coming from JavaScript engines.

NRO: And the counterpoint to that argument is that, well, we could still introduce a way to get the list of exports of a module, though tools would probably have needed to implement it in some different way. That capability was part of ESM phase imports, where we have the static import capabilities; it has now been split out and deferred until we continue with the other virtualization proposals, but it could still come in the future.

NRO: So we ended the discussion last time with these arguments, and at the time I asked for a temperature check. If anybody has further thoughts, beyond the ones the four people in the queue expressed, you are welcome to get in the queue. Otherwise, I would ask CDA to prepare the poll with this question: how do you feel about this change? Specifically, about changing the evaluation trigger to be whenever you are querying the exports of a module, so getting the list of exports or checking whether an export exists. My personal preference is to make this change, but let’s have the poll.

CDA: All right. Nothing on—MM supports. Nothing else on the queue. So for temperature check: in order to participate, you need to have TCQ open before we bring up the interface. Once it’s up, if you join after, you will not see it. So if you have—if everybody—if you don’t have TCQ open, please open it up. I will give you 10 or 15 seconds. Or shout out if you need more time to open it up. Otherwise… All right. We will bring up the temp check.

NRO: Okay. So I think some people are actually missing, because I know at least GB, who is unconvinced, would have voted. But even considering that, I think these results are giving me a direction. Is GB on the call?

AKI: Point of order: do you have to have the TCQ window active, in addition to having it open? Because I think my tab was in the background and the temperature check never showed up.

CDA: Yeah, it depends on your browser. If your tab was inactive for long enough and the browser does any form of memory optimization, then that would have prevented it from coming up. Do you want to see the results?

CDA: 3 strong positive. 9 positive. 3 following. 1 indifferent. And everything else is zeros.

NRO: Okay. So I would like official consensus for this change. Given that GB is not here, I want to read a message that GB sent to me: “I want to be sure and clear about decisions made, as long as we are clear in making these tradeoffs, the committee can decide to make them, but let’s have a discussion openly.” And the previous slide about the tradeoffs was reviewed by GB. So I am just going to assume that GB would have been fine with the conclusion, given the temperature poll and ask, does anybody object to making this change?

CDA: Nothing on the queue.

NRO: Okay. Thank you. Then, we have consensus.

Speaker's Summary of Key Points

NRO: The summary for the notes, including the discussion from Tuesday, is that we presented four changes to the proposal. The first, the same one we concluded today, was about changing when evaluation of the deferred module happens: it now happens not only when we read the value of the exports, but also when we query the exports of the module. This change got consensus. The second change was in response to a problem with the dynamic form of import defer and the behavior of promises, where reading `then` would trigger execution. The change was to make sure that deferred module namespaces never have a `then` property, regardless of what the module exports, so that promise resolution does not read the contents of the module. That change also got consensus. The third change was changing the value of the toStringTag symbol for deferred module namespaces from "Module" to "Deferred Module"; that change also got consensus. The fourth change was adding a symbol-keyed evaluate property to deferred module namespaces, controlling whether reading properties from it would trigger execution or not. Given that the feedback, while generally supportive of the idea, was not supportive of this shape, and especially given that the stabilize proposal is in a very similar area, I did not ask for consensus on this change. The first three changes are in and the fourth one is not. And I think this is it.
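The `then` change summarized above can be illustrated with a plain thenable standing in for a deferred namespace. When a promise is resolved with an object, the resolution procedure reads that object's `then` property; if a deferred namespace exposed `then` through its Get internal method, that read alone would trigger module evaluation:

```javascript
// A plain object standing in for a deferred module namespace (hypothetical):
let thenRead = false;
const fakeNamespace = {
  get then() {
    thenRead = true;  // promise resolution reads `then` on the resolved value
    return undefined; // undefined: resolve normally with the object itself
  },
};

// Resolving a promise with the object triggers the `then` lookup:
Promise.resolve(fakeNamespace);
console.log(thenRead); // true

// Hence the change: deferred namespaces never have a `then` property, so
// `await import.defer(...)` cannot accidentally evaluate the module.
```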

Adjournment

CDA: With that, that is the end of this meeting! Thanks to everyone, big special thanks to everyone who helped with the notes.

AKI: Don’t forget, if you want a hat for your contributions to note-taking, you need to make sure to contact me, so I know to make it.

MM: I need reviewers for Immutable ArrayBuffers, which got to Stage 2. SYG and WH, I think you had, privately or in a previous meeting, expressed interest in being reviewers?

SYG: I will confirm; I will review.

JHD: I am happy to also review it.

WH: Yes.

MM: Excellent. So I have got three reviewers. Thank you very much.

CDA: Great. We did get reviewers for upsert/map-emplace. DLM?

DLM: That’s correct.

CDA: Okay. I just got paranoid about any other ones we missed. Okay. Great. Thanks, everyone.