Hi everyone,
I’m new to TeNPy and have a question about core usage for large bond dimension calculations. What is the optimal number of cores in such cases? Is it generally true that using more cores results in faster computations?
From my experience with other DMRG codes, the optimal number of cores for large bond dimensions tends to be around 5–6. Increasing the core count beyond this often slows down calculations.
Cheers,
Zach
Optimal Number of Cores for Large Bond Dimension Calculations
Re: Optimal Number of Cores for Large Bond Dimension Calculations
Hi Zach,
As with any computation, it is a good idea to benchmark your particular code on your target hardware with at least one representative set of parameters. This will tell you at what point adding more cores stops paying off.
For example, in my past benchmarks on the cluster, I found that the speedup levels off beyond about 8–16 cores. However, since this depends heavily on your compute jobs and hardware, there is no general answer to this question.
Once you have determined the optimal number of cores for your case, it should also hold for other similar jobs on the same hardware. Note that larger jobs might scale more efficiently than your smaller tests.
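In TeNPy, most of the heavy lifting happens inside BLAS calls, whose thread count is usually set through environment variables such as OMP_NUM_THREADS or MKL_NUM_THREADS before Python starts; check the TeNPy documentation for your setup. As a rough illustration of the benchmarking methodology (not of TeNPy itself), here is a stdlib-only Python sketch that times a placeholder CPU-bound workload at several worker counts so you can see where the speedup levels off. The function `work` and the task counts are made-up stand-ins; in practice you would time a short DMRG run at your target bond dimension instead.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def work(_):
    # Dummy CPU-bound task standing in for one tensor contraction.
    s = 0
    for i in range(200_000):
        s += i * i
    return s

def bench(n_workers, n_tasks=16):
    """Wall-clock time to finish n_tasks using n_workers processes."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        list(ex.map(work, range(n_tasks)))
    return time.perf_counter() - start

if __name__ == "__main__":
    base = bench(1)
    for n in (1, 2, 4, 8):
        t = bench(n)
        # Speedup typically saturates once n exceeds the useful parallelism.
        print(f"{n:2d} workers: {t:.3f} s  (speedup {base / t:.1f}x)")
```

Plotting time (or speedup) against the worker count makes the saturation point obvious; whatever count you settle on this way is the one to request in your batch scripts for similar jobs.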
Hope this helps,
Bart