# Standard Tests Development Guide
Standard test cases are found in [aristaproto/tests/inputs](inputs), where each subdirectory represents a test case that is verified in isolation.
```
inputs/
  bool/
  double/
  int32/
  ...
```
## Test case directory structure
Each test case has a `<name>.proto` file with a message called `Test`, and optionally a matching `.json` file and a custom test called `test_*.py`.
```bash
bool/
  bool.proto
  bool.json     # optional
  test_bool.py  # optional
```
### proto
`<name>.proto` — *The protobuf message to test*
```protobuf
syntax = "proto3";
message Test {
bool value = 1;
}
```
You can add multiple `.proto` files to the test case, as long as one file matches the directory name.
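For example, a test case split across two files might be laid out like this (the second file name is hypothetical; only the file matching the directory name is required):
```bash
import_example/
  import_example.proto  # matches the directory name and defines `message Test`
  helpers.proto         # extra definitions imported by import_example.proto
```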
### json
`<name>.json` — *Test-data to validate the message with*
```json
{
"value": true
}
```
### pytest
`test_<name>.py` — *Custom test to validate specific aspects of the generated class*
```python
from tests.output_aristaproto.bool.bool import Test


def test_value():
    message = Test()
    assert not message.value, "Boolean is False by default"
```
## Standard tests
The following tests are automatically executed for all cases:
- [x] Can the generated python code be imported?
- [x] Can the generated message class be instantiated?
- [x] Is the generated code compatible with Google's `grpc_tools.protoc` implementation?
  - _when a `.json` file is present_ (see the sketch below)
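Roughly, the JSON compatibility check behaves like the following sketch. The import path and file paths are illustrative, and it assumes the generated classes expose betterproto-style `from_dict()`/`to_dict()` helpers; the real logic lives in `test_inputs.py` and also compares against the reference implementation.
```python
import json

# Illustrative import; the real test discovers every case dynamically.
from tests.output_aristaproto.bool.bool import Test


def test_json_sample_roundtrip():
    # Load the sample data shipped next to the .proto file.
    with open("tests/inputs/bool/bool.json") as f:
        sample = json.load(f)

    # Parse the sample into the generated class and serialize it back;
    # the round trip should be lossless.
    message = Test().from_dict(sample)
    assert message.to_dict() == sample
```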
## Running the tests
- `pipenv run generate`

  This generates:
  - `aristaproto/tests/output_aristaproto` — *the plugin-generated python classes*
  - `aristaproto/tests/output_reference` — *reference implementation classes*

- `pipenv run test`
## Intentionally Failing tests
The standard test suite includes tests that are expected to fail. These document known bugs and missing features that are intended to be corrected in the future.
When running `pytest`, they show up as `x` or `X` in the test results.
```
aristaproto/tests/test_inputs.py ..x...x..x...x.X........xx........x.....x.......x.xx....x...................... [ 84%]
```
- `.` — PASSED
- `x` — XFAIL: expected failure
- `X` — XPASS: expected failure, but still passed
Test cases marked for expected failure are declared in [inputs/config.py](inputs/config.py).
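The declaration is essentially a collection of test case directory names. The excerpt below is a hypothetical sketch, assuming a betterproto-style `xfail` set; the entry names are not the actual contents of the file.
```python
# inputs/config.py (illustrative excerpt; entry names are hypothetical)
xfail = {
    "some_unimplemented_feature",  # remove once the feature is supported
    "known_serialization_bug",     # tracked bug, remove when fixed
}
```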