I am currently working on finite-temperature simulations using the purification approach in TeNPy. I have tried both the applyMPO and TEBD algorithms to perform imaginary time evolution on a purified MPS, and I have some questions.
1) I didn't quite understand the following line, used for the 'II'-order approximation with applyMPO in the purification.py example:
Python:
Us = [M.H_MPO.make_U(-d * dt, approx) for d in [0.5 + 0.5j, 0.5 - 0.5j]]
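If I understand the idea correctly, the two complex conjugate step sizes d = 0.5 ± 0.5j are chosen so that the product of the two first-order factors matches exp(-dt*H) to one order higher than a single real step. Here is a small stand-alone NumPy toy check I wrote myself (not TeNPy code) of that algebra:

```python
import numpy as np

# Toy check (my own sketch, not TeNPy code): with d1 = 0.5+0.5j and
# d2 = 0.5-0.5j, we have d1 + d2 = 1 and d1*d2 = 0.5, so
# (1 - d1*dt*H)(1 - d2*dt*H) = 1 - dt*H + 0.5*dt^2*H^2,
# i.e. exp(-dt*H) up to an O(dt^3) error per step.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2  # a random Hermitian "Hamiltonian"

def exact_exp(H, dt):
    """exp(-dt*H) via eigendecomposition (H is Hermitian)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-dt * w)) @ V.conj().T

def complex_split(H, dt):
    """Product of the first-order factors with the two complex step sizes."""
    out = np.eye(len(H), dtype=complex)
    for d in [0.5 + 0.5j, 0.5 - 0.5j]:
        out = (np.eye(len(H)) - d * dt * H) @ out
    return out

# halving dt should reduce the error by roughly 2^3 = 8,
# consistent with a third-order error per step
errs = [np.linalg.norm(complex_split(H, dt) - exact_exp(H, dt))
        for dt in (0.1, 0.05)]
print(errs[0] / errs[1])  # should be close to 8
```

Is this the right way to read that line, i.e. the two factors together form one second-order step of size dt?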
2) As reported in a previous post:
viewtopic.php?p=577&hilit=how+to+define ... emble#p577
I noticed that the TEBD algorithm has problems with particle-number conservation when starting from from_infiniteT_canonical, while with the PurificationApplyMPO algorithm everything looks fine. Have you figured out what the source of this O(dt) error could be? Would you suggest using only the applyMPO method if I am interested in conserving certain quantities?
3) I tried to insert the options 'chi_list' and 'min_sweeps' into the options for PurificationApplyMPO simulations, but the simulation finishes after a maximum of only 3 steps, and I don't understand why. Did I do something wrong, or are these options not implemented yet?
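For reference, the options I passed look roughly like the sketch below (the chi values and sweep numbers are just placeholders, and I am assuming from the docs that 'chi_list' maps a sweep number to a chi_max):

```python
# roughly what I passed to PurificationApplyMPO (values are placeholders):
options = {
    'trunc_params': {'chi_max': 100, 'svd_min': 1.e-10},
    'chi_list': {0: 32, 5: 64, 10: 100},  # sweep number -> chi_max (my reading)
    'min_sweeps': 15,  # I expected at least 15 sweeps before convergence checks
}
```

Is this the intended way to pass these options to the engine?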
4) I also tried some iMPS simulations. Using TEBD, I get the following error:
Python:
NotImplementedError: Use DMRG instead...
Thank you in advance for your help.