We're using argon2-jvm to hash passwords with Argon2id on our Java TCP server. Because its `Argon2` instance is thread-safe, we plan to create a single instance for the lifetime of our app and have each request handler call it whenever necessary (e.g. for new registrations and user log-ins).
We've fine-tuned our single Argon2id instance so that hashing and verifying a password each take roughly 1 second on a single thread, using the approach from this answer:
- Use the maximum number of threads we can (number of CPUs x 2 in our case).
- Use the maximum amount of memory we can.
- Tweak the number of iterations so the run does not exceed our target max time (1 second in our case).
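The iteration-tweaking step can be automated with argon2-jvm's `Argon2Helper.findIterations`, which (if I recall its signature correctly) raises the iteration count until a hash exceeds the target time. The memory figure below is a placeholder:

```java
import de.mkammerer.argon2.Argon2;
import de.mkammerer.argon2.Argon2Factory;
import de.mkammerer.argon2.Argon2Helper;

public class TuneArgon2 {
    // Returns the highest iteration count that keeps one hash under ~1000 ms.
    static int findTunedIterations() {
        Argon2 argon2 = Argon2Factory.create(Argon2Factory.Argon2Types.ARGON2id);

        // Max threads: number of CPUs x 2, as described above.
        int parallelism = Runtime.getRuntime().availableProcessors() * 2;

        // Placeholder memory budget in KiB -- use the real maximum available.
        int memoryKib = 256 * 1024; // 256 MiB

        return Argon2Helper.findIterations(argon2, 1000, memoryKib, parallelism);
    }

    public static void main(String[] args) {
        System.out.println("iterations = " + findTunedIterations());
    }
}
```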
However, when the number of threads (concurrent TCP requests) accessing our Argon2id instance increases (e.g. multiple users registering and logging in at once), its execution time increases as well.
Our plan now is to reconfigure our Argon2id instance so that hashing and verifying still take roughly 1 second, but under our expected maximum number of concurrent registrations and log-ins (e.g. 500 TCP requests) rather than on a single thread.
Our concern is that, if we do this, our hashes might not be secure enough, because each individual request won't get as much work as it should (e.g. a hash that takes about 1 second at max capacity might take only 0.25 seconds when it's the only request in flight).
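The arithmetic behind that worry can be made concrete. With hypothetical numbers (a 4 GiB memory budget and 500 concurrent hashes, neither taken from our real setup), splitting the budget evenly leaves each hash far less memory than the single-thread configuration gets:

```java
public class CapacityMath {
    // Memory each concurrent hash can use if the total budget is split evenly.
    static long perHashKib(long memoryBudgetKib, int peakConcurrent) {
        return memoryBudgetKib / peakConcurrent;
    }

    public static void main(String[] args) {
        long budgetKib = 4L * 1024 * 1024; // hypothetical 4 GiB budget
        int peak = 500;                    // hypothetical max concurrent hashes

        // Each hash gets roughly 8 MiB instead of the full budget.
        System.out.println("per-hash memory = " + perHashKib(budgetKib, peak) + " KiB");
        System.out.println("single-request memory = " + budgetKib + " KiB");
    }
}
```

The same division applies to CPU time, which is why a configuration tuned for peak load underworks every request that arrives when the server is idle.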
We feel that configuring Argon2id for max capacity will undershoot a secure configuration for each individual request. Is this how it's supposed to be done? Or should we stick with the configuration that takes 1 second on a single thread but longer under concurrency (we fear this might take too long when there are many requests)?