# Rayon

[![Rayon crate](https://img.shields.io/crates/v/rayon.svg)](https://crates.io/crates/rayon)
[![Rayon documentation](https://docs.rs/rayon/badge.svg)](https://docs.rs/rayon)
![minimum rustc 1.56](https://img.shields.io/badge/rustc-1.56+-red.svg)
[![build status](https://github.com/rayon-rs/rayon/workflows/master/badge.svg)](https://github.com/rayon-rs/rayon/actions)
[![Join the chat at https://gitter.im/rayon-rs/Lobby](https://badges.gitter.im/rayon-rs/Lobby.svg)](https://gitter.im/rayon-rs/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Rayon is a data-parallelism library for Rust. It is extremely
lightweight and makes it easy to convert a sequential computation into
a parallel one. It also guarantees data-race freedom. (You may also
enjoy [this blog post][blog] about Rayon, which gives more background
and details about how it works, or [this video][video], from the Rust
Belt Rust conference.) Rayon is
[available on crates.io](https://crates.io/crates/rayon), and
[API Documentation is available on docs.rs](https://docs.rs/rayon/).

[blog]: https://smallcultfollowing.com/babysteps/blog/2015/12/18/rayon-data-parallelism-in-rust/
[video]: https://www.youtube.com/watch?v=gof_OEv71Aw

## Parallel iterators and more

Rayon makes it drop-dead simple to convert sequential iterators into
parallel ones: usually, you just change your `foo.iter()` call into
`foo.par_iter()`, and Rayon does the rest:

```rust
use rayon::prelude::*;
fn sum_of_squares(input: &[i32]) -> i32 {
    input.par_iter() // <-- just change that!
         .map(|&i| i * i)
         .sum()
}
```

[Parallel iterators] take care of deciding how to divide your data
into tasks; they dynamically adapt for maximum performance. If you
need more flexibility than that, Rayon also offers the [join] and
[scope] functions, which let you create parallel tasks on your own
(see the sketch below). For even more control, you can create
[custom threadpools] rather than using Rayon's default, global
threadpool.

[Parallel iterators]: https://docs.rs/rayon/*/rayon/iter/index.html
[join]: https://docs.rs/rayon/*/rayon/fn.join.html
[scope]: https://docs.rs/rayon/*/rayon/fn.scope.html
[custom threadpools]: https://docs.rs/rayon/*/rayon/struct.ThreadPool.html
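
If you need direct control over task creation, a recursive
divide-and-conquer function is the classic shape for [join]. The
sketch below is illustrative, not Rayon's own example: the function
name and the 1024-element sequential cutoff are arbitrary choices.

```rust
use rayon::join;

// Hypothetical divide-and-conquer sum. The two closures passed to
// `join` may run in parallel; `join` returns both results as a tuple
// once both have finished.
fn parallel_sum(slice: &[i32]) -> i32 {
    if slice.len() <= 1024 {
        // Small inputs: a plain sequential sum, assumed cheaper than
        // splitting into further tasks.
        return slice.iter().sum();
    }
    let (left, right) = slice.split_at(slice.len() / 2);
    let (a, b) = join(|| parallel_sum(left), || parallel_sum(right));
    a + b
}
```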

## No data races

You may have heard that parallel execution can produce all kinds of
crazy bugs. Well, rest easy. Rayon's APIs all guarantee **data-race
freedom**, which generally rules out most parallel bugs (though not
all). In other words, **if your code compiles**, it typically does the
same thing it did before.

For the most part, parallel iterators in particular are guaranteed to
produce the same results as their sequential counterparts. One caveat:
if your iterator has side effects (for example, sending messages to
other threads through a [Rust channel] or writing to disk), those side
effects may occur in a different order, as the sketch below
illustrates. Note also that, in some cases, parallel iterators offer
alternative versions of the sequential iterator methods that can have
higher performance.

[Rust channel]: https://doc.rust-lang.org/std/sync/mpsc/fn.channel.html
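
To make that caveat concrete, here is a small sketch (the range and
the squaring are arbitrary): each item's result is sent through a
channel as a side effect, so the receive order may differ from the
sequential `0..5` order, even though the set of values is the same.

```rust
use rayon::prelude::*;
use std::sync::mpsc::channel;

fn main() {
    let (sender, receiver) = channel();
    // `for_each_with` gives each Rayon task its own clone of `sender`.
    (0..5).into_par_iter().for_each_with(sender, |s, x| {
        s.send(x * x).unwrap();
    });
    // All senders are dropped when `for_each_with` returns, so this
    // collect terminates; the order of the squares is unspecified.
    let squares: Vec<i32> = receiver.iter().collect();
    println!("{:?}", squares);
}
```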

## Using Rayon

[Rayon is available on crates.io](https://crates.io/crates/rayon). The
recommended way to use it is to add a line to your `Cargo.toml` such
as:

```toml
[dependencies]
rayon = "1.5"
```

To use the Parallel Iterator APIs, a number of traits have to be in
scope. The easiest way to bring those things into scope is to use the
[Rayon prelude](https://docs.rs/rayon/*/rayon/prelude/index.html).  In
each module where you would like to use the parallel iterator APIs,
just add:

```rust
use rayon::prelude::*;
```
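
With the prelude in scope, parallel methods become available on
standard types. For instance (a minimal sketch; the data is
arbitrary), slices gain `par_sort` via the prelude's
`ParallelSliceMut` trait:

```rust
use rayon::prelude::*;

fn main() {
    // Sort a reversed vector in parallel.
    let mut v: Vec<i32> = (0..1_000).rev().collect();
    v.par_sort();
    assert!(v.windows(2).all(|w| w[0] <= w[1]));
}
```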

Rayon currently requires `rustc 1.56.0` or greater.

### Usage with WebAssembly

Rayon can work on the Web via WebAssembly, but requires an adapter
and some project configuration to account for differences between
WebAssembly threads and threads on other platforms.

Check out [wasm-bindgen-rayon](https://github.com/GoogleChromeLabs/wasm-bindgen-rayon)
docs for more details.

## Contribution

Rayon is an open source project! If you'd like to contribute to Rayon,
check out
[the list of "help wanted" issues](https://github.com/rayon-rs/rayon/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22).
These are all (or should be) issues that are suitable for getting
started, and they generally include a detailed set of instructions for
what to do. Please ask questions if anything is unclear! Also, check
out the
[Guide to Development](https://github.com/rayon-rs/rayon/wiki/Guide-to-Development)
page on the wiki. Note that all code submitted in PRs to Rayon is
assumed to
[be licensed under Rayon's dual MIT/Apache2 licensing](https://github.com/rayon-rs/rayon/blob/master/README.md#license).

## Quick demo

To see Rayon in action, check out the `rayon-demo` directory, which
includes a number of demos of code using Rayon. For example, run this
command to get a visualization of an nbody simulation. To see the
effect of using Rayon, press `s` to run sequentially and `p` to run in
parallel.

```text
> cd rayon-demo
> cargo run --release -- nbody visualize
```

For more information on demos, try:

```text
> cd rayon-demo
> cargo run --release -- --help
```

## Other questions?

See [the Rayon FAQ][faq].

[faq]: https://github.com/rayon-rs/rayon/blob/master/FAQ.md

## License

Rayon is distributed under the terms of both the MIT license and the
Apache License (Version 2.0). See [LICENSE-APACHE](LICENSE-APACHE) and
[LICENSE-MIT](LICENSE-MIT) for details. Opening a pull request is
assumed to signal agreement with these licensing terms.