Perhaps if 7LM is reading he could comment since I believe the OSI is 'his area'?
Oooh err, is this where I'm exposed as a fake?
Somebody once said 'better to remain quiet and appear foolish, than to speak out and remove all doubt'.
I must admit I'd been tempted to chip in to this thread, but there are a few flaws and gaps in my knowledge. In my defense, I'd just stress that TCP/IP isn't OSI and so doesn't always follow the 7 layer model. Please be gentle with me if I've got anything wrong...
Basically, I agree with kitz. The modem(/router) is layer 1 (PHY). The data link layer, which needs to be closely coupled with PHY, is usually responsible for error detection (using CRC). In principle (but see note * below) it's not really accurate to say that the modem 'knows that a packet has been received but corrupted'. All the modem's data link knows (based on a bad CRC) is that some data has been received which was so badly corrupted that it must be regarded as garbage. Being garbage, no assumptions can be made about what the data was intended to contain. Since the modem doesn't know, at that level, what data was corrupted, it can't ask for it to be resent.
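To illustrate the 'detection but no recovery' point, here's a toy Python sketch using CRC-32 (the framing here is purely illustrative, not DSL's actual frame format): a bad CRC tells the receiver the data is garbage, but nothing about what the payload should have been.

```python
# Toy sketch: CRC lets a receiver DETECT corruption, but a bad CRC says
# nothing about what was corrupted -- the payload must be discarded.
import zlib

def frame(payload: bytes) -> bytes:
    # Append a CRC-32 of the payload (illustrative framing only).
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(received: bytes):
    payload, crc = received[:-4], int.from_bytes(received[-4:], "big")
    if zlib.crc32(payload) == crc:
        return payload   # data is (very probably) intact
    return None          # garbage: discard; this layer can't say what it was

good = frame(b"hello world")
bad = bytearray(good)
bad[3] ^= 0xFF           # flip some bits 'in transit'

print(check(good))       # b'hello world'
print(check(bytes(bad))) # None -- detected, but unrecoverable at this layer
```

Note that the receiver can't even trust the received CRC field itself; all it knows is that payload and CRC no longer agree.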
ARQ comes in higher up the stack, and is facilitated by inserting sequence numbers in the protocol headers, and requiring the receiver to acknowledge receipt. The sender runs a timer and if a given sequence number isn't acknowledged in good time, it gets resent. Additionally, if a packet has been lost, the receiver will see a 'gap' in the sequence numbers, and a request can be made for retransmission of the missing packet number. ARQ can be used either to recover data that has been lost due to corruption (CRC errors), or for data that was discarded owing to flow control or congestion somewhere between the sender and receiver.
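The sequence-number/acknowledgement/gap-detection mechanics above can be sketched in a few lines of Python. This is a minimal illustration of the general ARQ idea, not any real protocol (TCP's actual ARQ is byte-oriented and rather more involved):

```python
# Minimal ARQ sketch: sender numbers packets and keeps unacknowledged
# copies; receiver spots gaps in the sequence numbers and can ask for a
# specific missing packet to be retransmitted.

class Sender:
    def __init__(self):
        self.next_seq = 0
        self.unacked = {}            # seq -> payload, kept until acknowledged

    def send(self, payload):
        pkt = (self.next_seq, payload)
        self.unacked[self.next_seq] = payload
        self.next_seq += 1
        return pkt

    def ack(self, seq):
        self.unacked.pop(seq, None)  # receiver confirmed; free the stored copy

    def retransmit(self, seq):
        return (seq, self.unacked[seq])  # resend on timeout or explicit request

class Receiver:
    def __init__(self):
        self.expected = 0
        self.missing = set()

    def receive(self, pkt):
        seq, payload = pkt
        if seq > self.expected:      # gap: expected..seq-1 never arrived
            self.missing.update(range(self.expected, seq))
        self.missing.discard(seq)
        self.expected = max(self.expected, seq + 1)
        return seq                   # acknowledge this sequence number

s, r = Sender(), Receiver()
p0, p1, p2 = s.send(b"a"), s.send(b"b"), s.send(b"c")
s.ack(r.receive(p0))
s.ack(r.receive(p2))                 # p1 lost in transit -> gap detected
print(r.missing)                     # {1}
s.ack(r.receive(s.retransmit(1)))    # resend from the kept copy
print(r.missing, s.unacked)          # set() {}
```

(A real sender would also run the retransmission timer mentioned above; that's omitted here to keep the sketch short.)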
Various protocol layers provide some degree of ARQ. Layer 2 sometimes does provide error recovery as well as detection, where recovery from data corruption errors is a priority. The layer 2 (connectionless LLC) used in a typical ethernet IP stack does not. AFAIK, ARQ in such an IP stack is left to TCP (which is layer 4-ish, as per Kitz's diagram).
There are also some situations where data is retransmitted pre-emptively. For telecomms earth-to-satellite links, the propagation delay for data acknowledgement can be quite long (several hundred ms) so, when the link is idle, rather than send nothing at all, the sender may continuously retransmit data it's already sent just in case it's needed.
ARQ does have some trade-offs. The sender needs to keep a copy of the data until the receiver has acknowledged it, and that can require a larger memory footprint. The acknowledgement timers can consume CPU resource, or need a faster CPU. Additionally, the sender has a finite 'window' of sequence numbers that it can send before acknowledgement is needed. When the window is full, the sender stops sending until some acknowledgements are received, which can lead to periods of inactivity while the acknowledgement timer expires. The sequence numbers, and acknowledgement packets, also consume extra bits in the data stream, and so have an adverse effect on usable data rates.
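The window limitation can be put as a back-of-envelope formula: with a finite window, the sender can only have window-bytes worth of data in flight per round trip, so usable throughput is capped at window / RTT whatever the raw link speed. A quick sketch (the 64 KiB figure is the classic TCP window limit without window scaling, and the 600 ms RTT is just an illustrative satellite-style figure):

```python
# Rough sketch of the window/latency trade-off: at most window_bytes can be
# in flight per round trip, so rate <= window_bytes * 8 / rtt, regardless
# of how fast the underlying link is.
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s

# 64 KiB window, 600 ms round trip (illustrative figures):
print(max_throughput_bps(64 * 1024, 0.6))   # ~874000 bit/s, however fast the link
```

This is one reason long-delay links (like the satellite case above) need either large windows or tricks such as pre-emptive retransmission.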
In the case of DSL routers, I believe (not entirely sure of this) G.992 sets a target BER (Bit Error rate) of 1 in 10^7 at an SNRM of 0dB (that's sort of the definition of 'SNRM'). It's possible that the designers have deemed that, by meeting that error rate, error recovery at L2 isn't justified, which would be OK as the occasional error will be picked up by TCP's ARQ. As for the ATM side of things I don't really know enough to pass comment I'm afraid, though there may be relevant factors.
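To put that 1-in-10^7 target BER in perspective, here's a back-of-envelope calculation (the 8 Mbit/s line rate is purely my assumed example figure, not anything from G.992):

```python
# What a target BER of 1 in 10^7 means in practice.
line_rate = 8_000_000             # bits per second -- assumed example figure
ber = 1e-7                        # target bit error rate at 0dB SNRM
errors_per_second = line_rate * ber
print(errors_per_second)          # 0.8 -- i.e. roughly one bit error per second
```

At that sort of rate you can see why the designers might leave recovery to TCP: the occasional corrupted frame costs one retransmission higher up, which is cheap compared with running full ARQ at layer 2.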
* Note: One thing I'm not sure about is whether DSL superframes contain any kind of sequence numbers. If they do, then as soon as the next superframe is received the router would see a gap in the sequence numbers and so would in fact know which superframe has been corrupted?
edited (twice, I'm beginning to wish I'd never started this) for typos and clarifications