That's the same limitation with everything. Sooner or later you run out of resources to remember everything you've already seen, which is what you'd need in order to store only the changes.
Bit of an apples and oranges thing here. Stream compression is designed to run in a single pass, with a limited buffer, over a stream. What you're talking about is a completely different thing: storing a baseline data set and then writing only the changes against it.
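For a feel of the difference, here's a minimal sketch of the baseline-plus-changes idea using zlib's preset-dictionary support (the `zdict` parameter, in Python's stdlib since 3.3). Both sides have to already hold the baseline; only the small delta-like blob crosses the wire. The baseline content and function names are just illustrative:

```python
import zlib

baseline = b"config: production\nhosts: web01 web02 web03\n" * 10

def compress_against_baseline(data: bytes) -> bytes:
    # The compressor treats the baseline as history it can
    # back-reference, so near-duplicates of it shrink dramatically.
    co = zlib.compressobj(zdict=baseline)
    return co.compress(data) + co.flush()

def decompress_against_baseline(blob: bytes) -> bytes:
    # The receiver must supply the identical baseline dictionary.
    do = zlib.decompressobj(zdict=baseline)
    return do.decompress(blob)

updated = baseline.replace(b"web03", b"web04")
blob = compress_against_baseline(updated)
assert decompress_against_baseline(blob) == updated
print(len(updated), "->", len(blob))   # hundreds of bytes -> a few dozen
```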
Can get some mileage from doing both: run a deduplication pass first, then run conventional compression over the data that couldn't be deduplicated, plus the pointers.
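Roughly this shape, as a toy sketch: fixed 4 KiB chunks hashed for dedup, repeats replaced by pointers to the first occurrence, then zlib over the pointer stream and the unique payloads. Chunk size, the pointer encoding, and the header are all made-up choices:

```python
import hashlib
import zlib

CHUNK = 4096

def dedup_then_compress(data: bytes) -> bytes:
    seen = {}        # sha256 digest -> index of the unique chunk
    unique = []      # unique chunk payloads, in first-seen order
    pointers = []    # one index per input chunk
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen[digest] = len(unique)
            unique.append(chunk)
        pointers.append(seen[digest])
    # Conventional compression mops up what dedup couldn't:
    # the unique payloads and the pointer stream itself.
    header = len(pointers).to_bytes(4, "big")
    ptr_bytes = b"".join(i.to_bytes(4, "big") for i in pointers)
    return zlib.compress(header + ptr_bytes + b"".join(unique))

sample = (b"A" * CHUNK + b"B" * CHUNK) * 100   # heavy repetition
print(len(sample), "->", len(dedup_then_compress(sample)))
```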
What you're talking about, though, is basically what LZ etc. already kinda do, within their window. A single-pass compressor can't write deltas against a baseline it has never seen, for obvious reasons. Deduplication is the best it can do, and combined with LZ or similar it produces about the best results you'll get.
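You can see the window limit directly (assuming default zlib, which has a 32 KiB window): a repeat that sits within the window compresses away, the same repeat pushed 64 KiB downstream doesn't.

```python
import os
import zlib

block = os.urandom(1024)                   # incompressible on its own
near = block + block                       # repeat inside the window
far = block + os.urandom(65536) + block    # repeat outside the window

print("near repeat:", len(zlib.compress(near)))  # roughly one block
print("far repeat: ", len(zlib.compress(far)))   # all three pieces, nearly raw
```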
This is the kind of format used in various storage arrays: store the deltas, then, when transferring data to a DR array, possibly run further compression on the streams. WAN optimisation kit does something similar, deduplicating data streams in real time, replacing its store of data to deduplicate against in LRU fashion, and using stream compression to shrink the data that can't be deduplicated (it's the first pass and the remote side still needs it), alongside LZ on the output after deduplication.
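A toy sketch of that sender-side pattern, under assumptions I'm making up (SHA-256 chunk identities, a 2**16-entry LRU cap, one-byte frame tags; a real box negotiates all of this and the receiver mirrors the store):

```python
import hashlib
import zlib
from collections import OrderedDict

MAX_ENTRIES = 2 ** 16

class DedupSender:
    def __init__(self):
        self.store = OrderedDict()   # digest -> None, kept in LRU order

    def send_chunk(self, chunk: bytes) -> bytes:
        digest = hashlib.sha256(chunk).digest()
        if digest in self.store:
            self.store.move_to_end(digest)     # refresh LRU position
            return b"R" + digest               # 33-byte reference token
        self.store[digest] = None
        if len(self.store) > MAX_ENTRIES:
            self.store.popitem(last=False)     # evict least recently seen
        return b"D" + zlib.compress(chunk)     # literal: stream-compressed

sender = DedupSender()
for chunk in (b"x" * 4096, b"y" * 4096, b"x" * 4096):
    print(len(sender.send_chunk(chunk)))   # literal, literal, 33-byte ref
```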