
Torperf



Hi Sebastian, hi Roger,

Sebastian reminded me yesterday that we should talk about Torperf at
some point. You're right. There are a lot of open questions. Here are
some of them:

- What shall we do with the Torperf run on ferrinii that fetches a 50
KiB file every minute and notes the path? I think Sebastian wanted this
run to validate whether our assumptions about Tor's path selection are
correct. Do you still want that data? Shall I keep it running? (There's
a rough sketch of what such a run boils down to after this list.)

- Do we want to keep the #1919 Torperf runs running or migrate them to
some other VM (one with enough memory)? What do we expect to learn from
keeping or migrating them that we didn't learn from the first week or
two? Alternatively, we could write up the results as a PDF report and
put it on metrics.tpo/papers.html.

- Shall we "upgrade" the Torperfs on moria/torperf.tpo/siv to write
their paths to disk? Sebastian, did you finish the script to combine
.data and .extradata files (see the merge sketch after this list for
what I imagine it does)? And can you push your .extradata code to the
main repository? While upgrading the Torperf scripts, should we also
upgrade the Tor clients? Last time I checked, siv was running
0.2.1.24-dev, torperf was running 0.2.2.10-alpha-dev, and moria was
running 0.2.2.8-alpha-dev.

- What graphs do we want to put on the metrics website? Right now we
have the daily median and interquartile range by file size and data
source on metrics.tpo/performance.html. We could add a similar graph
combining all data sources, a graph with all individual data points
instead of aggregates, the ECDFs for all sources and file sizes, and a
graph of the number or fraction of failed/timed-out runs. These new
graphs would require us to add the raw Torperf measurements to the
database and write procedures to turn them into materialized views (a
sketch of the per-day aggregation follows after this list). While
doing so, we should also add the path to the database schema. Is there
anything we want to evaluate based on the path once it's in the
database?
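
Regarding the ferrinii run: here's a minimal sketch of what such a run
boils down to, assuming the PySocks module for the SOCKS handshake.
Host, file, port, and interval are placeholders, and noting the circuit
path would additionally require querying Tor's control port, which I
leave out here:

    #!/usr/bin/env python
    # Sketch of a timed fetch through a local Tor client's SOCKS port.
    # Placeholder host/path/port/interval; not our actual setup.
    import time
    import socks  # PySocks

    def timed_fetch(host, path, socks_port=9050):
        start = time.time()
        s = socks.socksocket()
        s.set_proxy(socks.SOCKS5, "127.0.0.1", socks_port)
        s.connect((host, 80))
        request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
        s.sendall(request.encode())
        received = 0
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            received += len(chunk)
        s.close()
        return start, time.time() - start, received

    if __name__ == "__main__":
        while True:
            started, elapsed, nbytes = timed_fetch("example.com",
                                                   "/50kib-file")
            print("%.2f %.2f %d" % (started, elapsed, nbytes))
            time.sleep(60)  # one fetch per minute, as on ferrinii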
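
Regarding combining .data and .extradata: a rough sketch of what I
imagine the combining script does, under the assumption that lines in
both files start with the request start timestamp; the actual field
layout may well differ:

    #!/usr/bin/env python
    # Sketch: merge .data and .extradata lines by their first field
    # (assumed to be the request start timestamp).
    import sys

    def read_keyed(filename):
        keyed = {}
        for line in open(filename):
            parts = line.strip().split()
            if parts:
                keyed[parts[0]] = parts[1:]
        return keyed

    def merge(data_file, extradata_file):
        data = read_keyed(data_file)
        extra = read_keyed(extradata_file)
        for key in sorted(data):
            # Keep measurements that lack path info, marking the gap.
            path = extra.get(key, ["(no-path)"])
            print(" ".join([key] + data[key] + path))

    if __name__ == "__main__":
        merge(sys.argv[1], sys.argv[2])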
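
And regarding the graphs: a sketch of the per-day aggregation that the
materialized views would have to do, on made-up (timestamp, completion
time) pairs; the quartile computation is deliberately crude:

    #!/usr/bin/env python
    # Sketch of the per-day aggregation behind the median/interquartile
    # graphs: group completion times by day, then take the quartiles.
    import time

    def quartiles(values):
        values = sorted(values)
        def q(p):
            return values[int(p * (len(values) - 1))]
        return q(0.25), q(0.5), q(0.75)

    def daily_stats(measurements):
        by_day = {}
        for timestamp, completion_seconds in measurements:
            day = time.strftime("%Y-%m-%d", time.gmtime(timestamp))
            by_day.setdefault(day, []).append(completion_seconds)
        for day in sorted(by_day):
            q1, med, q3 = quartiles(by_day[day])
            print("%s q1=%.2f median=%.2f q3=%.2f n=%d" %
                  (day, q1, med, q3, len(by_day[day])))

    if __name__ == "__main__":
        # Made-up sample data for illustration.
        daily_stats([(1273000000, 4.2), (1273000060, 5.1),
                     (1273086400, 3.8), (1273086460, 7.9)])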

Best,
--Karsten