DMRG Calculation for Large Systems


Post by Ayldz03 »

Hello there,

To preface, I would like to mention that I am very new to TeNPy, so I am sure there might be a simple fix to my problem. I am trying to simulate a 2D square-lattice Heisenberg model (https://en.wikipedia.org/wiki/Quantum_Heisenberg_model) to calculate its ground state and ground-state energy. I had initially set up some code using Scipy sparse matrices to do this calculation, but my computer's memory could not handle anything beyond a 4x4 lattice. I created the following model to simulate the system using TeNPy. This model agreed with my Scipy results up to a 4x4 lattice, and DMRG worked up to a 7x7 lattice. When I moved to an 8x8 lattice, however, DMRG took a very long time to run.

Python:

from tenpy.models.model import CouplingMPOModel
from tenpy.models.lattice import Square
from tenpy.networks.site import SpinHalfSite
from tenpy.networks.mps import MPS
from tenpy.algorithms import dmrg

bc_xx = 'open'; bc_yy = 'open'; bc_MPSS = 'finite'  # boundary conditions
# coupling_axis = 'Sz' # adds exchange interaction along Sz axis

lx = 8
ly = 8
tot_sites = lx * ly  # total number of lattice sites

# Create antiferromagnetic initial state
product_state = []
for i in range(ly):
    for j in range(lx):
        if (i + j) % 2 == 0:
            product_state.append('up')
        else:
            product_state.append('down')

jz = 1.0
h = 1.0
jx = 1.0

class MyModel(CouplingMPOModel):
    def init_sites(self, model_params):
        conserve = model_params.get('conserve', None)
        site = SpinHalfSite(conserve=conserve)
        return site

    def init_lattice(self, model_params):
        Lx = model_params.get('Lx', 2)
        Ly = model_params.get('Ly', 2)
        bc_x = model_params.get('bc_x', bc_xx)
        bc_y = model_params.get('bc_y', bc_yy)
        bc_MPS = model_params.get('bc_MPS', bc_MPSS)
        lattice = Square(Lx, Ly, site=self.init_sites(model_params), bc=[bc_x, bc_y], bc_MPS=bc_MPS)
        return lattice

    def init_terms(self, model_params):
        Jz = model_params.get('Jz', 1.0)
        Jx = model_params.get('Jx', 1.0)
        Hz = model_params.get('Hz', 1.0)

        # onsite field; u runs over the unit cell (a single site for the Square lattice),
        # so this adds Hz*Sz on every site without relying on the global tot_sites
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(Hz, u, 'Sz')

        # nearest-neighbor Heisenberg couplings: Jz Sz.Sz + (Jx/2)(Sp.Sm + Sm.Sp)
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(Jz, u1, 'Sz', u2, 'Sz', dx)
            self.add_coupling(Jx / 2, u1, 'Sp', u2, 'Sm', dx)
            self.add_coupling(Jx / 2, u1, 'Sm', u2, 'Sp', dx)


dmrg_params = {
    'mixer': True,  
    'trunc_params': {
        'chi_max': 100,
        'svd_min': 1.e-10,
    },
    'max_E_err': 1.e-3,
}

model_params = {
    'conserve': None,
    'Lx': lx,
    'Ly': ly,
    'Jz': jz,
    'Jx': jx,
    'Hz': h,
    'bc_MPS': bc_MPSS,
    'bc_x': bc_xx,
    'bc_y': bc_yy,
}
    
M = MyModel(model_params)
lattice = M.lat  # the model already built the lattice; reuse it instead of creating a second one
psi = MPS.from_product_state(lattice.mps_sites(), product_state, bc=lattice.bc_MPS)

eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
E, psi = eng.run()  # the main work; modifies psi in place and returns (energy, state)
I tried to change the DMRG parameters to accommodate the larger system, but it still took 30+ minutes (after which I stopped the simulation). I thought I could overcome this by calculating the ground state of a 4x4 lattice using DMRG and taking a Kronecker product of 4 copies of the output "psi" to provide as an initial state for the 8x8 lattice, which would hopefully allow the DMRG to converge faster. However, I was not sure if the kron function (https://tenpy.readthedocs.io/en/latest/ ... .kron.html) of TeNPy allows this to be done. I am also not even sure if the "psi" output of the DMRG could be put into a Kronecker product.

As mentioned, I am very new to TeNPy and tensor networks in general, so I might be misunderstanding how DMRG works or trying to do something not feasible. Any help or guidance would be greatly appreciated.

Best,
Onat

Re: DMRG Calculation for Large Systems

Post by Johannes »

To put this a bit into perspective, 30 minutes of runtime is often still considered very moderate. State-of-the-art DMRG calculations for publications in journals often run for several days to weeks on high-performance-computing cluster nodes with >30 cores (considerably more powerful than a laptop...).

Also, it's important to understand how DMRG should scale in 2D. What we do is "snake" or wind a 1D MPS through the 2D lattice, by convention along the y-direction first; see also the lattice intro. This choice is the reason why we expect DMRG to scale exponentially with Ly, while it scales only linearly with Lx: cutting the MPS on a central bond cuts the system into a left and a right half along x, but each half stretches along the full Ly.
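
If you want to see this winding explicitly, you can look at the lattice's order attribute, which lists the lattice indices (x, y, u) in the order the MPS visits them. A quick sketch:

Python:

from tenpy.models.lattice import Square
from tenpy.networks.site import SpinHalfSite

lat = Square(4, 4, SpinHalfSite(conserve=None), bc=['open', 'open'], bc_MPS='finite')
print(lat.order[:8])  # rows are (x, y, u); the y index increases fastest, i.e. the MPS winds along y first
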
To see where this exponential scaling comes from, recall that the area law of entanglement implies \(S \propto \mathrm{length~of~the~cut} \propto L_y\). Plugging this into the bound \(S \leq \log \chi\) implies that the bond dimension should grow at least as \(\chi \geq \exp(c L_y)\) for some constant \(c\).

In other words, each time you increase Ly by just one, you should multiply chi_max by a (roughly constant) factor if you want to keep the truncation error small, and the `chi_max=100` might simply not be enough to converge for Ly=8. Of course, since DMRG scales as \(\mathcal{O}(\chi^3)\), this gets more and more computationally costly.
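
In practice, you don't have to start at the final bond dimension right away: the DMRG engine's `chi_list` option ramps up chi_max over the sweeps, so the early sweeps stay cheap. The numbers below are only illustrative, not tuned for your model:

Python:

dmrg_params = {
    'mixer': True,
    'trunc_params': {
        'chi_max': 800,  # final bond dimension; grow this with Ly
        'svd_min': 1.e-10,
    },
    'chi_list': {0: 100, 4: 200, 8: 400, 12: 800},  # sweep number -> chi_max
    'max_E_err': 1.e-6,
}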

The idea with the Kronecker product is nice, but you need to be a bit careful about how you "stitch" the individual states together: if you patch four smaller 4x4 lattices together, the MPS would wind within each patch first, while for the whole 8x8 system it winds in the y-direction first.
That being said, if you're careful about how the MPS winds through the 2D system, you can stitch individual parts together using from_product_mps_covering. I'd consider this advanced usage, though.
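
To see the mismatch concretely, you can translate the lattice coordinates of a single 4x4 patch into MPS indices of the full 8x8 lattice with lat2mps_idx; the patch choice here is just for illustration:

Python:

from tenpy.models.lattice import Square
from tenpy.networks.site import SpinHalfSite

lat = Square(8, 8, SpinHalfSite(conserve=None), bc=['open', 'open'], bc_MPS='finite')
# MPS indices of the lower-left 4x4 patch (x, y in 0..3, unit-cell index u = 0):
patch = [lat.lat2mps_idx([x, y, 0]) for x in range(4) for y in range(4)]
print(patch)  # not one contiguous block of 16 sites -> a plain Kronecker product won't line up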