Commit 9b6267f: Several WG applications edits
1 parent: bbaff9c

File tree

1 file changed: +17, -24 lines


documents/wg-application.md

@@ -33,7 +33,10 @@ We plan to solve the "documentation debt" with a broad range of tutorials, examp
 
 The WG is not focused on promoting or developing "standard" frameworks.
 Instead, we want to provide basic and reliable support of the feature and inspire the community to start using it.
-This WG is only about CUDA support - other GPGPU targets are out-of-scope. Our focus is on making the current CUDA target more reliable. Everything that goes beyond that (e.g. higher-level CUDA libraries, CUDA frameworks, etc.) is also out-of-scope.
+
+This WG is only about CUDA support - other GPGPU targets are out-of-scope.
+Our focus is on making the current CUDA target more reliable.
+Everything that goes beyond that (e.g. higher-level CUDA libraries, CUDA frameworks, etc.) is also out-of-scope.
 
 ## Is your WG long-running or temporary?
 
@@ -45,56 +48,46 @@ to support other GPGPU platforms or to create higher-level frameworks to improve
 ## What is your long-term vision?
 
 Having a reliable and safe CUDA development experience is our ultimate goal.
-This should include:
-
-* Getting `nvptx64-nvidia-cuda` to a [Tier 2](https://forge.rust-lang.org/platform-support.html) support state.
-* Test infrastructure on a real hardware and running the tests during Rust CI process.
-* Rich documentation, references, tutorials and examples.
-* A broad set of new compile errors, warnings and lints for SIMT execution model to avoid pitfalls and ensure code soundness.
+To get there, we will first need to achieve all the milestones in our [Roadmap].
 
5553
## How do you expect the relationship to the project be?
5654

57-
The WG will be responsible for CUDA related issues and user requests.
58-
This includes [already reported issues](https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Aissue+label%3AO-NVPTX) and those that will be opened due to the work made by the WG.
59-
An important aspect is that the WG will take care of the state of the `rust-ptx-linker` and will do its best to avoid blocking someone else's work (like it, unfortunately, happened in [rust#59752](https://github.com/rust-lang/rust/pull/59752)).
55+
The Working Group is mainly going to create various RFCs and send PRs to Rust components.
56+
57+
An important aspect of the WG is active participation in discussions related to the NVPTX target and involved tools.
58+
This also includes [already reported issues](https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Aissue+label%3AO-NVPTX) and those that will be opened due to the work made by the WG.
6059

6160
### How do you want to establish accountability?
6261

63-
For that purpose, we will publish an agenda and made decisions from our meetings.
64-
Once we achieve important milestones we plan to make public announces.
62+
Currently, as an unofficial WG, we work completely in the open and we intend to go on in the same way.
63+
Most of discussions happen either in out github issues, or in our public [Zulip], so that they be read by anyone.
64+
65+
Once we will start having regular meetings, we would publish agenda and summary.
6566

6667
## Which other WGs do you expect to have close contact with?
6768

6869
We would like to cooperate with *Language Team* in discussions about safety in SIMT code.
6970
Additionally, it would be very important to discuss with *Infra team* a strategy of reliable deployment of `rust-ptx-linker` (likely as a `rustup` component).
7071

71-
On the other hand, proposed [Machine Learning WG](https://internals.rust-lang.org/t/enabling-the-formation-of-new-working-groups/10218/11) could leverage CUDA accelerated computing capabilities, so we can discuss their use cases.
72+
On the other hand, proposed [Machine Learning WG](https://internals.rust-lang.org/t/enabling-the-formation-of-new-working-groups/10218/11) could leverage CUDA accelerated computing capabilities, so we can collaborate on their use cases.
7273

7374
## What are your short-term goals?
7475

75-
Mainly, short-term goals are about evaluating and discussing how can we achieve long-term goals:
76-
77-
* Make a poll about community experiences and use cases for CUDA.
78-
* Deploy the `rust-ptx-linker` as a `rustup` component.
79-
* Collect soundness and safety issues that can happen in SIMT execution model.
80-
* Decide testing approach on a real hardware.
76+
Mainly, short-term goals are defined in our [Roadmap] for an MVP.
77+
Additionally, we plan to make a poll about use cases, expectations, and community awareness that it is already possible to experiment with CUDA on Nightly Rust today.
8178

8279
## Who is the initial leadership?
8380

8481
> TBD...
8582
8683
## How do you intend to make your work accessible to outsiders?
8784

-Excessive learning materials and retrospective about made decisions should help to get more people involved in either discussions or further experimenting.
-
-> TBD... something else?
+Blog posts, announcements, and retrospectives on the decisions we make should help get more people involved in discussions and further experimentation.
 
 ## Everything that is already decided upon
 
 We work in the open, see our [Github].
 
-> TBD... would it make sense to move to a `rust-lang` Zulip server?
-
 ## Preferred way of contact/discussion
 
 [Github] issues or [Zulip].
