There is a good reason for this, since it's exponentially expensive in the number of sites \(L\) in the segment, both in memory \(\mathcal{O}(d^{2L})\) and CPU time \(\mathcal{O}(d^{3L})\)! For a local dimension \(d=2\), each additional site quadruples the memory and increases the CPU time eightfold.
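To get a feeling for how fast this blows up, here is a small back-of-the-envelope sketch (plain Python, not a TeNPy call; the numbers are purely illustrative) estimating the memory needed just to store the dense reduced density matrix of an \(L\)-site segment:

```python
# Rough estimate of the O(d^{2L}) memory for the dense reduced density
# matrix of an L-site segment; d is the local Hilbert space dimension.
d = 2                    # e.g. spin-1/2
bytes_per_entry = 16     # complex128
for L in range(4, 21, 4):
    dim = d**L           # rho is a (d^L x d^L) matrix
    mem_gb = dim**2 * bytes_per_entry / 1e9
    print(f"L = {L:2d}: rho is {dim} x {dim}, ~ {mem_gb:.3g} GB")
```

Already at \(L = 16\) (for \(d = 2\)) the density matrix alone takes roughly 70 GB, before any of the actual contractions.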
If you're reaching that limit, it might be better to use the
entanglement_entropy_segment2 method instead, which does the calculation in a slightly different way. Note that this is still very expensive, scaling as \(\mathcal{O}(\chi^6 d^{3 n_{intern}})\) (compared to the usual \(\mathcal{O}(\chi^3)\) of standard DMRG/MPS algorithms), where \(n_{intern}\) is the number of legs between the start and end of the segment that are not included in it.
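As a rough usage sketch, the two calls look something like below. Caveat: this is a hedged example, not taken from the docs; in particular the argument conventions (offsets relative to a first site for entanglement_entropy_segment vs. what I assume to be absolute site indices for entanglement_entropy_segment2) should be double-checked against the documentation of your TeNPy version.

```python
from tenpy.networks.mps import MPS
from tenpy.networks.site import SpinHalfSite

L = 10
sites = [SpinHalfSite(conserve=None)] * L
# simple product state, just to have something to call the methods on
# (its segment entropies are of course all zero)
psi = MPS.from_product_state(sites, ["up", "down"] * (L // 2), bc="finite")

# entanglement_entropy_segment: builds the reduced density matrix explicitly,
# so it is limited to short segments; `segment` lists offsets relative to each
# first site, and one entropy per choice of first site is returned.
S1 = psi.entanglement_entropy_segment(segment=[0, 1, 2])

# entanglement_entropy_segment2: the alternative contraction mentioned above,
# scaling as O(chi^6 d^{3 n_intern}); here site 5 would be an "internal" leg
# that is not part of the segment, i.e. n_intern = 1 (assumed convention).
S2 = psi.entanglement_entropy_segment2(segment=[2, 3, 4, 6])
print(S1, S2)
```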
If you really need larger segments to include just those extra two sites, you're of course free to get a local copy of the source code on your machine, modify the corresponding threshold in
https://github.com/tenpy/tenpy/blob/a3b ... s.py#L2866
and install TeNPy locally from source.