Hello,

I am trying to read your implementation of iDMRG and got confused about how you do sweeps and update left and right environments. For a unit cell of 4 spins, let's say, I can see for one round the code uses updating rules like (true,true)X2, (true,false)X2, (true,true)X2, (false, true)X2. Could you explain why this is the right choice? It doesn't look like the "standard" way of implementing iDMRG by inserting one unit cell in each environments.

Thanks a lot!

## iDMRG sweep schedule

### Re: iDMRG sweep schedule

You're referring to the `update_LP_RP` defined in `get_sweep_schedule` for `n=2`, i.e. a two-site update. As you can see from the `i0s` and `move_right` variables, we do 2L updates in total, each updating 2 sites, in the order `(0, 1), ..., (L-2, L-1), (L-1, L), (L, L+1), (L-1, L), ..., (1, 2)`. Note that this goes a little further than you might have naively expected: each bond is updated twice, including the bond (0, 1) == (L, L+1). This detail is a bit different from the toycode, which only goes up to (L-1, L).
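That schedule is easy to reproduce. Here is a minimal sketch (my own reconstruction for illustration, not a copy of TeNPy's actual `get_sweep_schedule`) that generates the bond order together with the `update_LP_RP` flags from your question, for a unit cell of L sites:

```python
def infinite_sweep_schedule(L):
    """Sketch of a two-site iDMRG sweep schedule for a unit cell of L sites.

    Returns (i0s, move_right, update_LP_RP), where each update acts on the
    two sites (i0, i0 + 1). This is a reconstruction for illustration,
    not TeNPy's actual implementation.
    """
    # right-moving updates on bonds (0, 1), ..., (L-1, L), (L, L+1),
    # then left-moving back down to (1, 2): 2L two-site updates in total
    i0s = list(range(0, L)) + list(range(L, 0, -1))
    move_right = [True] * L + [False] * L
    # grow both environments on the first two updates of each direction,
    # otherwise only the environment in the direction we are moving
    update_LP_RP = ([(True, True)] * 2 + [(True, False)] * (L - 2)
                    + [(True, True)] * 2 + [(False, True)] * (L - 2))
    return i0s, move_right, update_LP_RP


i0s, move_right, update_LP_RP = infinite_sweep_schedule(4)
print(i0s)           # [0, 1, 2, 3, 4, 3, 2, 1]
print(update_LP_RP)  # (True, True) x2, (True, False) x2,
                     # (True, True) x2, (False, True) x2
```

For L=4 this reproduces exactly the pattern you quoted: (True, True)×2, (True, False)×2, (True, True)×2, (False, True)×2.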

The `update_LP_RP` flags are "just" an optimization, trying to minimize the number of environment updates actually calculated and the number of environments kept in cache/memory. If you set all of them to True (which is what we do in the toy code), you still get the same results, but you do some extra work: for finite DMRG, whenever we move to the right, we don't need the right environment any more, because we will re-build it with an updated state when moving to the left again. For this reason, we neither need to calculate the new right environment nor keep it in memory.

The same is true for the infinite case, except for the first two updates (2 because we have two-site DMRG): we will use the right environments from the (0, 1) and (1, 2) updates (updated to be to the left of sites 1 and 2, i.e. to the right of sites 0 and 1, respectively) for updating the bonds (L-1, L) and (L, L+1). Note that this is the step where the right environment grows by one unit cell!

Setting `update_LP_RP` to False for the other right-moves ensures that we don't do extra work and don't keep a total of 2L environments, but only a little more than L - if I remember correctly, we only need L+2 environments in cache at once. (This is important because these environments are the biggest tensors for cylinder DMRG, where the MPO bond dimension easily exceeds the physical dimension.)
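As a quick consistency check on that L+2 number: counting the True flags in the pattern from the original question, i.e. (True, True)×2, (True, False)×(L-2), (True, True)×2, (False, True)×(L-2), gives exactly L+2 left-environment updates and L+2 right-environment updates per sweep, instead of 2L each if all flags were True:

```python
L = 4  # unit cell size from the original question
update_LP_RP = ([(True, True)] * 2 + [(True, False)] * (L - 2)
                + [(True, True)] * 2 + [(False, True)] * (L - 2))

n_LP = sum(lp for lp, rp in update_LP_RP)  # left-environment updates
n_RP = sum(rp for lp, rp in update_LP_RP)  # right-environment updates
print(n_LP, n_RP)  # 6 6, i.e. L + 2 each, instead of 2L = 8 each
```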

Overall, both the left and right environments grow by one unit cell for each call of `sweep()`. You can see this from the growing "age" in the log: the "age" is just the total number of sites involved in the left environment, the right environment, and your current MPS unit cell, so it grows L -> 3L -> 5L -> 7L -> ... with the sweeps. Keeping track of the age is important to correctly extract the energy density from the total energy - really, what we use is \(\frac{E(s) - E(s-1)}{age(s) - age(s-1)}\), where s is the sweep index.

### Re: iDMRG sweep schedule

Thanks for the nice explanation! Now I roughly understand how iDMRG is implemented in TeNPy. I have two further questions:

1. For iDMRG, after 10 sweeps (which is the default value of `N_sweeps_check`) with optimization, 5 more sweeps without optimization are performed. Since each sweep enlarges the system by 2L sites, why do we need these sweeps without optimization?

2. For the energy density, is it ok to use E(s)/age(s) instead of (E(s)-E(s-1))/(age(s)-age(s-1)), and how do they compare?


### Re: iDMRG sweep schedule

1) These additional sweeps are meant to (i) bring the MPS into canonical form and (ii) converge the environments. Effectively, they consist mostly of SVDs and environment updates, which are usually cheap compared to the Lanczos optimization, so it costs little to converge things a bit further, even if we don't converge them fully. The idea is that the correlation length of the MPS might be larger than your MPS unit cell, so you can add a few more unit cells into the environment with "cheap" non-optimizing sweeps before attempting the next optimization of the MPS itself.

It's only a heuristic, though, since we keep the number of environment sweeps fixed instead of adjusting it dynamically until the environments are fully converged. If you did that, it would be roughly equivalent to the "VUMPS" optimization.

2) You should use (E(s)-E(s-1))/(age(s)-age(s-1)), since this converges much faster (often exponentially with sweeps!) than E(s)/age(s) (converging as 1/s) when your state still changes.

Say you already did 100 sweeps with a 10-site unit cell: your state was far off the true ground state for the first 90 sweeps, but then found the right minimum and converged in the last 10 sweeps. (E(s)-E(s-1))/(age(s)-age(s-1)) will reflect this and give you the energy density of the final guess from the last 10 sweeps, while plain E(s)/age(s) still carries the memory of the 90% "bad" energy density far above the ground state.
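The difference is easy to see numerically. A toy calculation (made-up numbers, not actual DMRG output) mimicking exactly this scenario, where the state adds a "bad" energy density of -0.3 per site for the first 90 sweeps and the true density of -0.5 for the last 10:

```python
L = 10
n_sweeps = 100
ages = [(2 * s + 1) * L for s in range(n_sweeps + 1)]  # L, 3L, 5L, ...

# total energy: each sweep adds 2L sites at the current energy density
E = [-0.3 * ages[0]]
for s in range(1, n_sweeps + 1):
    density = -0.3 if s <= 90 else -0.5  # state converges after sweep 90
    E.append(E[-1] + density * (ages[s] - ages[s - 1]))

naive = E[-1] / ages[-1]                         # E(s)/age(s)
diff = (E[-1] - E[-2]) / (ages[-1] - ages[-2])   # difference quotient
print(f"E(s)/age(s)  = {naive:.4f}")  # ~ -0.32, still far above -0.5
print(f"dE/d(age)    = {diff:.4f}")   # -0.5000, the converged density
```

With the difference quotient, the estimate jumps to the correct density as soon as the state (and environments) stop changing, while the naive ratio only approaches it as 1/s.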
