Convergence issues in XXZ model with increasing bond dimension

rdilip
Posts: 4
Joined: 03 Dec 2019, 12:42

Convergence issues in XXZ model with increasing bond dimension

Post by rdilip »

I noticed some odd behavior when I tried to simulate the XXZ model with tenpy. For reference, the Hamiltonian I am using is

\(H = -J\sum_{i=1}^L (S^x_i S^x_{i+1} + S^y_i S^y_{i+1} + v S^z_i S^z_{i+1})\)

When I sweep through v at a low bond dimension (plotting the correlation length), I see good convergence and a reasonable phase diagram. When I increase the bond dimension, however, many points do not appear to converge (see the attached image). I have increased the maximum number of sweeps without too much luck -- on one occasion a point that had not previously converged appeared to converge, but this isn't very reproducible. I also tried seeding subsequent runs with a nearby point that had converged, but this did not have any effect.

These runs lead to a warning:

Code: Select all

RuntimeWarning: divide by zero encountered in reciprocal
  S = S**form_diff
I am pretty sure this is connected to the convergence issues, because it does not appear for the low bond dimension runs, but I could not figure out what causes it or how to fix it. The only thing I could think of that would mess up convergence at higher bond dimension is that I am somehow taking inverses of very small numbers, but I'm not sure where this would happen or how to fix it, since I'm already setting svd_min in the dmrg_params. Is this something fixable, or would I need to try something like VUMPS instead of DMRG for this type of model? I know that other people have simulated the XXZ model with DMRG before, so it seems like the convergence issues should be resolvable.
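For what it's worth, the warning itself is easy to reproduce with plain numpy whenever a singular value is exactly (numerically) zero, which is why I suspect the inverses. This is just my own illustration with made-up numbers, not tenpy code:

Python: Select all

import numpy as np

S = np.array([0.7, 1e-16, 0.0])  # hypothetical singular values on one bond; the last is numerically zero
form_diff = -1                   # taking an inverse, as in the line quoted in the warning
S_inv = S**form_diff             # numpy dispatches x**-1 to reciprocal -> "divide by zero encountered in reciprocal"
print(S_inv)                     # roughly [1.43e+00, 1.00e+16, inf]
For reference, here is the full script I am running: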

Python: Select all

from tenpy.networks.mps import MPS
from tenpy.algorithms import dmrg
from tenpy.models.xxz_chain import XXZChain

def run_simulation(v, chi_max=200, psi=None):
    L = 2       # two-site unit cell for infinite DMRG
    J = 1.0
    model_params = dict(L=L,
                        conserve='best',
                        Jxx=-J,
                        Jz=-J * v,
                        hz=0,
                        bc_MPS="infinite",
                        verbose=1)

    M = XXZChain(model_params)
    pstate = ["up", "down"]     # Neel product state as the initial guess
    if psi is None:
        psi = MPS.from_product_state(M.lat.mps_sites(), pstate, M.lat.bc_MPS)
    dmrg_params = {"lanczos_params": {"N_min": 5, "N_max": 20, "reortho": True, "N_cache": 22},
                   "mixer": True,
                   "chi_list": {0: 9, 10: 19, 20: 39, 30: 69, 40: chi_max},  # ramp up the bond dimension
                   "trunc_params": {"svd_min": 1.e-14},
                   "mixer_params": {"amplitude": 1.e-3, "decay": 1.1, "disable_after": 100},
                   "max_sweeps": 40000,
                   "verbose": 1,
                   "max_E_err": 1.e-14,
                   "max_S_err": 1.e-14,
                   "active_sites": 2}

    info = dmrg.run(psi, M, dmrg_params)
    return psi.correlation_length()
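I then sweep over v roughly like this (simplified; the values are just illustrative):

Python: Select all

import numpy as np

vs = np.linspace(-1.5, 1.5, 31)                              # range of v I scan, illustrative
corr_lengths = [run_simulation(v, chi_max=200) for v in vs]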
Attachments: xxz.png
Johannes
Site Admin
Posts: 457
Joined: 21 Jul 2018, 12:52
Location: TU Munich

Re: Convergence issues in XXZ model with increasing bond dimension

Post by Johannes »

The model is gapless in the range -1 < V < 1, so this is definitely a difficult region.

As far as I know, VUMPS might indeed be better suited for that, feel free to implement it ;-)

Which version of tenpy are you using? In particular, does your version include the fix of Issue #95 (v0.5.0 or later)?
The issue appearing there might also be the problem appearing here.
Can you please try to disable the update of the environment with the DMRG parameter "update_env":0 and see whether it helps?
Checking it for a few points of V should be enough...
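For the test, you can just merge it into the dmrg_params dict from your snippet, roughly like this:

Python: Select all

dmrg_params["update_env"] = 0         # disable the additional environment sweeps
info = dmrg.run(psi, M, dmrg_params)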

Moreover, re-using the previous state is definitely something that should help here. However, to make sure you actually re-use the environment, you should keep the mixer disabled. Basically, what you want to do is what the example under examples/advanced/xxz_corr_length.py does.
Does that one actually still work? I haven't been running it lately...
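Schematically, the idea is the following (adapted to your run_simulation rather than copied from the example script, so take it just as a rough sketch):

Python: Select all

from tenpy.networks.mps import MPS
from tenpy.algorithms import dmrg
from tenpy.models.xxz_chain import XXZChain

# sketch: scan v and re-use the previous ground state as the initial guess for the next point
psi = None
results = {}
for v in [0.4, 0.5, 0.6]:                # illustrative values
    M = XXZChain(dict(Jxx=-1.0, Jz=-v, hz=0., bc_MPS="infinite"))
    if psi is None:
        psi = MPS.from_product_state(M.lat.mps_sites(), ["up", "down"], M.lat.bc_MPS)
    dmrg_params = {"mixer": False,       # keep the mixer disabled when re-using the state
                   "trunc_params": {"chi_max": 200, "svd_min": 1.e-14}}
    dmrg.run(psi, M, dmrg_params)        # psi is updated in place
    results[v] = psi.correlation_length()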
rdilip
Posts: 4
Joined: 03 Dec 2019, 12:42

Re: Convergence issues in XXZ model with increasing bond dimension

Post by rdilip »

It appears to have been the same issue brought up in Issue #95. Using an updated version of tenpy makes the convergence much better. (Just to be certain: your suggestion of update_env = 0 was to diagnose the issue, not something one should do in general, right? As I understand it, with update_env = 0 we are not really doing iDMRG.)

To make sure I understand this correctly -- update_env updates the environment and enlarges the unit cell, which leads to a blow-up of the DMRG error over time. How exactly does the fix in version 0.5.0 work? I see that you added the from_S method to construct the truncation error from the sum of the squares of the discarded singular values -- is the fix just to continually check the truncation error and change the precision to 1.e-30 if it drops too quickly to 1.e-15?
Johannes
Site Admin
Posts: 457
Joined: 21 Jul 2018, 12:52
Location: TU Munich

Re: Convergence issues in XXZ model with increasing bond dimension

Post by Johannes »

No, you got that wrong.
A value of update_env=0 does not imply that you're not doing iDMRG; the DMRG engine will still grow the environments on the left and right by one unit cell in each sweep.

A value update_env > 0 tells the DMRG engine to do the specified number of environment sweeps without the Lanczos optimization.
In other words, it sweeps through the unit cell like in usual iDMRG (which grows the environments), but instead of finding the ground state of the effective Hamiltonian in each update, it simply takes the initial guess for the wave function. This is done under the assumption that the guess is close to the actual ground state, with the goal of converging the environment more quickly. Moreover, it should also reduce the norm error and bring you back into canonical form.
However, as we have seen in Issue #95, this can also backfire if the initial guess for the wave function is not quite accurate, with the error blowing up exponentially with the number of environment sweeps performed.
There are several reasons why the initial guess might not be quite accurate:
  1. One needs to take the inverse of singular values. If those are very small, the inverse can blow up initially small errors.
  2. The guess is obtained under the assumption of translation invariance with respect to the unit cell, but the effective Hamiltonian lives on a large yet finite system.
  3. There might be a bug in the function(s) obtaining the initial guess.
Point 1 is what I tried to resolve in Issue #95, as there was a bug that caused Lanczos to stop too early, leading to the small errors which blew up.
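To illustrate the mechanism of point 1 with plain numpy (completely made-up numbers, just to show how a tiny error on a small singular value gets amplified by the inverse):

Python: Select all

import numpy as np

S = np.array([0.8, 1e-8])                  # singular values; the second one is very small
err = 1e-12                                # tiny absolute error on the singular values
S_perturbed = S + err
print(np.abs(1. / S_perturbed - 1. / S))   # roughly [1.6e-12, 1e+4]: the error on the inverse of the small value explodes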

Point 2 is what might cause the problem in your case (I still see some remnants of it with the newest version). In particular, since the system is critical/gapless, the correlation length is large and true translation invariance is hard to reach.
There are also systems where DMRG wants to spontaneously break the translation invariance. A sign for such a case is that it "jumps" between different solutions. As far as I know, there is no general way to rule that out; in such a case one can just systematically try to increase the unit cell size. (For example even if you might expect a Neel-pattern for the magnetization with a period of 2 sites, DMRG might converge better if you choose your unit cell to be 4 or 6 sites.)
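To make that concrete, a sketch with the parameter names from your snippet (a 4-site unit cell, illustrative value for v):

Python: Select all

from tenpy.networks.mps import MPS
from tenpy.models.xxz_chain import XXZChain

v = 0.5                                                   # illustrative value
model_params = dict(Jxx=-1.0, Jz=-v, hz=0.,
                    bc_MPS="infinite", L=4)                # 4-site unit cell instead of 2
M = XXZChain(model_params)
pstate = ["up", "down", "up", "down"]                      # Neel pattern repeated over the larger unit cell
psi = MPS.from_product_state(M.lat.mps_sites(), pstate, M.lat.bc_MPS)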

Regarding point 3: I've been looking for such a bug when trying to resolve Issue #95, and didn't find one. I hope there is none :D

To conclude and clearly answer your question:
It is certainly fine to use update_env=0; you should still converge to the correct ground state of the infinite system.
In fact, I'm not sure I would still recommend using a non-zero update_env after finding Issue #95.
You certainly should be careful with too large values of update_env.

On the other hand, if you don't use update_env, keep in mind that it is harder to achieve translation invariance.