Yes, traffic at peak times can, and often does, exceed the capacity of the network at various bottlenecks. This causes packet delay (latency) and at worst, total packet loss.
In practical terms, if packets arrive at a router at a rate faster than they can be serviced, those packets will be queued in a RAM buffer to await servicing. This introduces the delay (aka latency). If the packet arrival rate continues to exceed the service rate of the router or packet switch, the buffer will eventually fill up, and arriving packets will be lost.
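The fill-then-drop behaviour described above can be sketched with a toy simulation. The buffer size and the per-tick arrival/service rates below are made-up illustrative figures, not values from any real router:

```python
from collections import deque

BUFFER_SIZE = 50        # hypothetical buffer capacity, in packets
ARRIVALS_PER_TICK = 12  # arrival rate deliberately exceeds...
SERVICED_PER_TICK = 10  # ...the service rate, so the queue grows

buffer = deque()
dropped = 0

for tick in range(100):
    # Packets arrive; once the buffer is full, new arrivals are lost.
    for _ in range(ARRIVALS_PER_TICK):
        if len(buffer) < BUFFER_SIZE:
            buffer.append(tick)
        else:
            dropped += 1
    # The router services as many queued packets as it can this tick.
    for _ in range(min(SERVICED_PER_TICK, len(buffer))):
        buffer.popleft()

print(f"queue depth: {len(buffer)}, packets dropped: {dropped}")
```

While the queue is filling, packets are merely delayed; once it is full, the excess arrival rate translates directly into packet loss, tick after tick.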
You might ask why the packet buffers aren't bigger, or why the telcos do not provision more switching capacity. The answer, as ever, is cost. The dual-ported SRAM used for buffering in a router is the most expensive component. And so "statistical multiplexing" plays a part in network service provisioning, just as it always did in the days of circuit-switched networks:
Picture two neighbouring towns: Alphaville and Betastadt, each with 10,000 households. If every household in Alphaville decides to simultaneously telephone a (different) household in Betastadt, then the telecom capacity linking the two towns needs to support 10,000 simultaneous calls, to avoid call blocking.
Obviously that scenario never happens, and the bean-counters took advantage of that.
Around 100 years ago, a Danish engineer called A. K. Erlang formally modelled telephone call arrival rates, call blocking rates, and so on, and this work led to the discipline of queuing theory.
The bean-counters at the telcos soon worked out that they could save huge sums of money by only provisioning capacity for, say, 500 simultaneous calls between Alphaville and Betastadt, since statistically a call arrival rate higher than that occurs only very rarely.
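Erlang's blocking model can be computed directly. The sketch below uses the standard Erlang B recurrence; the offered load of 400 erlangs is an invented figure for the Alphaville/Betastadt example, purely to show how dramatically blocking drops once you provision a modest margin above the offered load:

```python
def erlang_b(offered_load, circuits):
    """Erlang B blocking probability, via the standard recurrence:
    B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Hypothetical: 10,000 households offering ~400 erlangs of traffic.
print(erlang_b(400, 500))  # 500 circuits: blocking is negligible
print(erlang_b(400, 420))  # tighter provisioning: noticeably more blocking
```

The point the bean-counters exploited: you need nowhere near 10,000 circuits, because the blocking probability falls off very steeply once capacity comfortably exceeds the statistically expected load.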
Exactly the same principle is used with packet-switched networks today. However, things have moved on insofar as we also use traffic prioritisation and shaping to further reduce the cost of the switching plant, and to improve Quality of Experience (QOE).
For some network services, e.g. HTTP, packet delay is largely irrelevant. Moderate latency in HTTP traffic doesn't affect our Quality of Experience: a latency of 100 ms in the delivery of the packets holding a web page doesn't spoil the surfer's experience. However, QOE for VOIP and other 'real time' traffic is very sensitive to latency. That's why traffic-shaping plays an important role in service provisioning. Various forms of prioritisation are applied at the packet queues, based on each packet's service type.
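A minimal sketch of that per-service prioritisation, using a strict-priority queue (a simplification: real routers use schemes like weighted fair queuing, and the service names and priority values here are invented for illustration):

```python
import heapq
import itertools

# Hypothetical priority map: lower number = serviced first.
PRIORITY = {"voip": 0, "http": 1}

counter = itertools.count()  # tie-breaker keeps FIFO order within a class
queue = []

def enqueue(service, packet):
    heapq.heappush(queue, (PRIORITY[service], next(counter), packet))

def dequeue():
    return heapq.heappop(queue)[2]

# HTTP packets arrive first, but a later VoIP packet jumps the queue:
enqueue("http", "web-page-chunk-1")
enqueue("http", "web-page-chunk-2")
enqueue("voip", "audio-frame-1")

print(dequeue())  # the VoIP frame is serviced first
```

The HTTP packets wait a little longer, which the surfer never notices, while the latency-sensitive VoIP frame gets through promptly.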
You are perhaps asking this from a gamer's perspective?
cheers, a