Re: [tor-bugs] #13616 [Onionoo]: define jmeter testcase(s) and ant task(s)
#13616: define jmeter testcase(s) and ant task(s)
-----------------------------+-------------------------------
Reporter: iwakeh | Owner: iwakeh
Type: enhancement | Status: needs_information
Priority: major | Milestone:
Component: Onionoo | Version:
Resolution: | Keywords:
Actual Points: | Parent ID: #13080
Points: |
-----------------------------+-------------------------------
Comment (by karsten):
Replying to [comment:7 iwakeh]:
> Concerning the test cases I would add more data, e.g. measure many
> requests for different fingerprints or IP addresses (or ... or ...) in
> order to avoid a bias when measuring the new retrieval methods.
Agreed. While this is not necessary for measuring the current
implementation that stores all relevant search data in memory, it would be
very useful for testing any database-based solutions.
I wonder if we can use `out/summary` as input to automatically generate as
many query samples as we need.
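One way to do that could be to scan `out/summary` for fingerprints and turn
a random subset into query URLs. A minimal sketch; the `"f":"<40 hex
digits>"` field name and the `/summary?lookup=` URL pattern are assumptions
for illustration, not a statement of the actual summary line format:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Generates sample lookup queries from out/summary lines (sketch). */
public class SampleQueryGenerator {

  /* Assumed format: each summary line contains "f":"<40 hex chars>". */
  private static final Pattern FINGERPRINT =
      Pattern.compile("\"f\":\"([0-9A-F]{40})\"");

  /** Extracts fingerprints, shuffles them with a fixed seed for
   * reproducibility, and returns up to count query strings. */
  public static List<String> sampleQueries(List<String> summaryLines,
      int count, long seed) {
    List<String> fingerprints = new ArrayList<>();
    for (String line : summaryLines) {
      Matcher m = FINGERPRINT.matcher(line);
      if (m.find()) {
        fingerprints.add(m.group(1));
      }
    }
    Collections.shuffle(fingerprints, new Random(seed));
    List<String> queries = new ArrayList<>();
    for (int i = 0; i < Math.min(count, fingerprints.size()); i++) {
      queries.add("/summary?lookup=" + fingerprints.get(i));
    }
    return queries;
  }
}
```

The fixed seed keeps benchmark runs comparable while still spreading
queries over many different relays.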
> JMeter's scope is concurrent stress testing of the entire web
application.
> If this is not intended at all, we should close this (#13616) issue.
Well, my impression was that we'll want something simpler for this
specific case.
Let's close this ticket as soon as we have spawned new tickets, okay?
> I think 'response preparation performance measuring' should be a new
> issue.
>
> For a database benchmark I would suggest measuring data preparation
> using a simple benchmarking class that calls the code responsible for
> preparing a response directly, without any network or web app in between.
>
> This benchmarking class could be in the testing package. An ant task
> could be added for performing these benchmark tests, thus ensuring
> later on that certain changes don't degrade performance.
Agreed on all the above. Basically, that would be a performance test of
`RequestHandler`. But I guess we'd want to use what's in the local
`out/summary` to populate the node index, rather than putting in some
samples as we do for unit tests. And we might want to find a different
place for these test classes than `src/test/java/` in order not to
conflict with unit tests.
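For illustration, the direct-call benchmark could be as small as a helper
that warms up the JIT and then times repeated calls to whatever prepares a
response, with no network or servlet container in between. The
`RequestHandler` wiring itself is left out here because its exact API isn't
quoted in this ticket; the helper below only shows the measurement shape:

```java
/** Minimal direct-call benchmark helper (sketch). */
public class MicroBenchmark {

  /** Runs task warmupRuns times unmeasured (to let JIT compilation
   * settle), then returns the average wall-clock milliseconds over
   * measuredRuns timed executions. */
  public static double averageMillis(Runnable task, int warmupRuns,
      int measuredRuns) {
    for (int i = 0; i < warmupRuns; i++) {
      task.run();
    }
    long start = System.nanoTime();
    for (int i = 0; i < measuredRuns; i++) {
      task.run();
    }
    return (System.nanoTime() - start) / 1e6 / measuredRuns;
  }
}
```

A performance test class would wrap a `RequestHandler` invocation in the
`Runnable` and feed it the sample queries generated from `out/summary`.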
> It might even be good to prepare a measurement class for JSON parsing
> and preparation itself, in order to evaluate Gson replacements?
That could be useful, though it's quite specific. It assumes that Gson
is the performance bottleneck, but if we figure out it's not, we might
not even learn about performance problems located nearby. I think I'd
rather start one layer above that, and if we identify a bottleneck there
that could be related to Gson, I'd want to try replacing it and see if
that improves the layer above.
For example, we rely on Gson being fast when responding to a request for
details documents with the fields parameter set. It might be useful to
write a performance test for `ResponseBuilder` and see if those requests
stand out a lot.
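As a rough sketch of what such a test would exercise: the fields parameter
boils down to keeping only the requested keys of a details document before
serializing it. The map-based stand-in below avoids a Gson dependency so
the sketch stays self-contained; the field names are illustrative, not the
exact `ResponseBuilder` logic:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/** Stand-in for the fields-parameter hot path (sketch). */
public class FieldsFilter {

  /** Returns a copy of document containing only the requested fields,
   * preserving the order in which fields were requested. */
  public static Map<String, Object> filter(Map<String, Object> document,
      Set<String> requestedFields) {
    Map<String, Object> filtered = new LinkedHashMap<>();
    for (String field : requestedFields) {
      if (document.containsKey(field)) {
        filtered.put(field, document.get(field));
      }
    }
    return filtered;
  }
}
```

A `ResponseBuilder` performance test would time this step plus the actual
Gson serialization over many details documents and compare it to responses
without the fields parameter.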
Another place where we use Gson is as part of the hourly cronjob, though
performance is less critical there. But maybe we could write a similar
performance test class for `DocumentStore`, and then we can not only
evaluate Gson replacements but also new designs where we replace file
system storage with database storage.
> What do you think?
I think that this is all very useful stuff, but also that I need help with
this. I'm counting four small or mid-sized projects here:
- Make room for performance tests somewhere in `src/` and write a
separate Ant task to run them.
- Take an `out/summary` file as input and generate good sample requests
for a `RequestHandler` performance test class. Also write that test
class.
- Write a performance test class for `ResponseBuilder`, probably
requiring a successful run of the hourly updater to populate the `out/`
directory.
- Write another performance test class for `DocumentStore` that takes a
populated `status/` and `out/` directory as input and performs a random
series of listing, retrieving, removing, and storing documents. Ideally,
the test class would make sure that the contents in both directories are
still the same after running the test.
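The before/after check in that last item could work by fingerprinting every
file under `status/` and `out/` and comparing the two snapshots. A sketch,
with the class name and any assumptions about directory layout purely
illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/** Snapshots directory contents for before/after comparison (sketch). */
public class DirectorySnapshot {

  /** Maps each regular file's relative path to a SHA-256 hex digest of
   * its contents; equal maps mean the directory is unchanged. */
  public static Map<String, String> snapshot(Path root) throws IOException {
    Map<String, String> hashes = new TreeMap<>();
    List<Path> files;
    try (Stream<Path> s = Files.walk(root)) {
      files = s.filter(Files::isRegularFile).sorted()
          .collect(Collectors.toList());
    }
    for (Path p : files) {
      hashes.put(root.relativize(p).toString(),
          sha256(Files.readAllBytes(p)));
    }
    return hashes;
  }

  private static String sha256(byte[] data) {
    try {
      StringBuilder sb = new StringBuilder();
      for (byte b : MessageDigest.getInstance("SHA-256").digest(data)) {
        sb.append(String.format("%02x", b));
      }
      return sb.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e);  // SHA-256 is always available
    }
  }
}
```

The test class would take a snapshot of both directories, run the random
series of `DocumentStore` operations, restore anything it removed, and
assert that a second snapshot equals the first.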
Plenty of stuff to do here. Want to help get this started?
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/13616#comment:9>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs