
Conversation

@titzer (Contributor) commented Sep 22, 2025

As it turns out, there weren't many (any?) existing tests that use a memory larger than 4GB. This adds a test for a 4GB + 1 page memory and its out-of-bounds conditions.
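For context, here is a minimal sketch of the kind of assertions such a test involves. This is a hypothetical module, not the actual contents of this PR: a 64-bit memory of 4GiB plus one page (65537 pages), an access just above the 4GiB boundary, and an access one byte past the end.

```wast
;; Hypothetical sketch, not the test added by this PR.
;; 65537 pages = 4GiB + 64KiB of 64-bit-indexed memory.
(module
  (memory i64 65537)
  (func (export "store") (param i64 i64)
    (i64.store (local.get 0) (local.get 1)))
  (func (export "load") (param i64) (result i64)
    (i64.load (local.get 0)))
)

;; Addresses above 4GiB are in bounds and retain stored values.
(assert_return (invoke "store" (i64.const 0x1_0000_0000) (i64.const 42)))
(assert_return (invoke "load" (i64.const 0x1_0000_0000)) (i64.const 42))

;; The first address past the end (4GiB + 64KiB) must trap.
(assert_trap (invoke "load" (i64.const 0x1_0001_0000)) "out of bounds memory access")
```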

@rossberg (Member)

The problem with such a test is that engines can legally fail it, due to (legal) implementation or resource limits. And most likely, such engines (will) exist, even when they allow memory64 syntactically — e.g. consider embedded. So far, we have only put tests in here that we expect all implementations to handle(*). And we don't have infrastructure to make the distinction either.

So I believe adding such a test requires some basic discussion first, about how to deal with "optional" tests or how to enable expressing them such that failing instantiation is a non-failing outcome. Maybe the upcoming F2F is a good opportunity?

(*) That is, we have stayed clear of "reasonable" limits in the test suite. In fact, we have even changed tests when it turned out that they hit the limits of some implementation. This happened at least once, with a JVM-based implementation that couldn't handle functions larger than 64K.

@titzer (Contributor, Author) commented Sep 23, 2025

> The problem with such a test is that engines can legally fail it, due to (legal) implementation or resource limits. And most likely, such engines (will) exist, even when they allow memory64 syntactically — e.g. consider embedded. So far, we have only put tests in here that we expect all implementations to handle(*). And we don't have infrastructure to make the distinction either.
>
> So I believe adding such a test requires some basic discussion first, about how to deal with "optional" tests or how to enable expressing them such that failing instantiation is a non-failing outcome. Maybe the upcoming F2F is a good opportunity?
>
> (*) That is, we have stayed clear of "reasonable" limits in the test suite. In fact, we have even changed tests when it turned out that they hit the limits of some implementation. This happened at least once, with a JVM-based implementation that couldn't handle functions larger than 64K.

I understand this perspective, but I'd rather have tests that are normative and can fail only due to resource exhaustion than avoid such tests altogether. For example, I was a little shocked that currently an implementation can pass the memory64 test suite without implementing 64-bit bounds checks at all. While we can add some tests for 64-bit bounds checks that don't use very large memories (see the sketch below), testing that memory above 4GB is actually accessible and works properly is something every engine will need to do. Without such tests, engines will write their own, with varying coverage and almost certainly bugs.
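To illustrate the point about small-memory bounds-check tests, here is a hedged sketch (not part of this PR): with a one-page 64-bit memory, an address just above 2^32 must trap, but an engine that silently truncates addresses to 32 bits would treat it as offset 1 and succeed, so such tests need to probe the high address bits explicitly.

```wast
;; Hedged sketch: a one-page (64KiB) 64-bit-indexed memory.
(module
  (memory i64 1)
  (func (export "load8") (param i64) (result i32)
    (i32.load8_u (local.get 0)))
)

;; In bounds: offset 1 of a zero-initialized memory.
(assert_return (invoke "load8" (i64.const 1)) (i32.const 0))

;; Must trap; an engine that truncates the address to 32 bits would
;; wrongly see offset 1 and return 0 instead of trapping.
(assert_trap (invoke "load8" (i64.const 0x1_0000_0001)) "out of bounds memory access")
```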

That said, fast 64-bit bounds checks are actually quite tricky and a subject of current research. Considering they are critical for security, I say we test the heck out of them :-)

@eqrion (Contributor) commented Sep 24, 2025

When SpiderMonkey imports the spec tests, we can patch them to insert guards around things that could potentially fail. So far we've not had to do that for the memory64 spec tests. For our own hand-written tests, we have code to run them only on test runners with enough memory and to be tolerant of failures. If we had a spec-test mechanism to annotate a test as using a lot of memory, we could transparently apply that here.
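As a purely hypothetical illustration of such an annotation (no such directive exists in the spec test format today), a harness-readable marker could let failed instantiation be treated as a skip rather than a failure:

```wast
;; Hypothetical annotation, not an existing spec-test directive:
;; a harness could skip (rather than fail) this module if the
;; required allocation cannot be satisfied.
(;@ requires-memory 4GiB+ ;)
(module
  (memory i64 65537))
```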
