...
Starting from the worst case in the table above, an increasing number of data points is now moved to a slower sampling rate, while the remaining data points keep their original rate:
Sampling rate [ms] / Number of data points moved to this rate | 0 | 1000 | 2000 | 3000 | 4000 | 5000 |
---|---|---|---|---|---|---|
1000 | ~67% | ~65% | ~64% | ~62% | ~60% | ~59% |
2000 | ~67% | ~64% | ~62% | ~59% | ~58% | ~56% |
5000 | ~67% | ~64% | ~61% | ~59% | ~56% | ~54% |
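The mechanism behind the falling CPU load can be sketched in code. The following Python snippet is purely illustrative and not taken from the measured configuration: `read_value`, the group sizes, and the simple scheduler are all assumptions. It shows how every data point assigned to a slower group contributes fewer acquisition calls per second.

```python
import time

# Hypothetical sketch: distribute data points across sampling-rate groups.
# Slower groups (larger interval) contribute fewer reads per second,
# which is what lowers the CPU load in the table above.

def read_value(point_id):
    """Placeholder for the actual acquisition call (driver read, etc.)."""
    return 0

def aggregate_reads_per_second(groups):
    """groups: {sampling_rate_ms: number_of_data_points}"""
    return sum(count * 1000.0 / rate_ms for rate_ms, count in groups.items())

def poll_loop(groups, duration_s=5.0):
    """Very simple scheduler: poll each group when its interval has elapsed."""
    next_due = {rate_ms: 0.0 for rate_ms in groups}
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        now = time.monotonic()
        for rate_ms, count in groups.items():
            if now >= next_due[rate_ms]:
                for point_id in range(count):        # read every point in the group
                    read_value((rate_ms, point_id))
                next_due[rate_ms] = now + rate_ms / 1000.0
        time.sleep(0.01)                             # avoid busy-waiting

if __name__ == "__main__":
    # Example: 1000 points kept at 500 ms, 2000 points moved to 2000 ms.
    groups = {500: 1000, 2000: 2000}
    print(f"~{aggregate_reads_per_second(groups):.0f} reads/s")
    poll_loop(groups, duration_s=2.0)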
Note |
---|
A mixed operation of 1000 variables at each of the sampling rates 500, 1000, 2000, 3000, and 4000 milliseconds resulted in a CPU load of ~59%. From this, the second conclusion can be drawn: |
...
Here too, the data points are recorded both without and with the values changing.
The sampling rate is again 500 milliseconds.
Number of data points | CPU load without value change | CPU load with value change |
---|---|---|
1000 | ~10% | ~11% |
2000 | ~10% | ~12% |
3000 | ~11% | ~13% |
4000 | ~11% | ~14% |
5000 | ~12% | ~14% |
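A plausible explanation for the gap between the two columns is that a value change triggers additional processing (updating the cache, time-stamping, forwarding), while an unchanged value can be discarded early. The following sketch only illustrates this assumption; `forward` and the `last_values` cache are hypothetical placeholders, not part of the measured system.

```python
# Hypothetical sketch of on-change filtering ("report by exception"):
# unchanged samples take the cheap path, changed samples cause extra work,
# which matches the slightly higher load in the "with value change" column.

last_values = {}

def forward(point_id, value):
    """Placeholder for the expensive part: time-stamping, buffering, sending."""
    pass

def process_sample(point_id, value):
    if last_values.get(point_id) == value:
        return False                  # cheap path: value unchanged, nothing to do
    last_values[point_id] = value     # value changed: update cache and forward
    forward(point_id, value)
    return True

if __name__ == "__main__":
    print(process_sample("Tank1.Level", 42))   # True: new value, extra work
    print(process_sample("Tank1.Level", 42))   # False: unchanged, cheap path
```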
Tip |
---|
Even though combining data points into groups/arrays means more engineering work in the actual project, this investment should be made for larger plants. |
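One reason such grouping helps is that a single array read replaces many individual reads, so per-request overhead is paid once per group instead of once per data point. The sketch below illustrates the idea under that assumption; `read_scalar`, `read_array`, and the tag names are hypothetical placeholders for the real acquisition interface.

```python
# Hypothetical comparison: polling N individual data points vs. one array
# that contains the same N values. The per-request overhead is paid N times
# in the first case and once in the second, which is why grouping can pay off
# for larger plants despite the extra engineering effort.

def read_scalar(point_id):
    """Placeholder: one acquisition request per data point."""
    return 0

def read_array(array_id, length):
    """Placeholder: one acquisition request returning all values of a group."""
    return [0] * length

def poll_individually(point_ids):
    return {pid: read_scalar(pid) for pid in point_ids}     # N requests

def poll_grouped(array_id, point_ids):
    values = read_array(array_id, len(point_ids))           # 1 request
    return dict(zip(point_ids, values))

if __name__ == "__main__":
    points = [f"Motor_{i}.Speed" for i in range(1000)]
    poll_individually(points)             # 1000 requests per sampling cycle
    poll_grouped("MotorSpeeds", points)   # 1 request per sampling cycle
```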
...