Just a question on what I suppose you would call "networking theory": how is throughput determined? For example, my connection to my router is 100 Mbit up and down, but my router's connection to the internet is only 1.5/0.5 Mbit. How does the router control the rate of transmission from my machine to match (or at least approximate) its own speed?

Even simpler: how does a connection know that it has reached a saturation point somewhere along the link between sender and receiver? Is that handled at the hardware level (along the lines of MACs), or by something higher, such as TCP?
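To illustrate what I mean, here's a toy sketch of how I picture it (the numbers are just my setup, and the idea that end-to-end throughput is capped by the slowest hop is my assumption, which is partly what I'm asking about):

```python
# Toy model of my setup: each hop has a capacity in Mbit/s,
# and my assumption is that the end-to-end rate can't exceed
# the slowest link on the path.
links_mbit = {
    "pc_to_router_down": 100.0,
    "pc_to_router_up": 100.0,
    "router_to_isp_down": 1.5,
    "router_to_isp_up": 0.5,
}

def bottleneck(path):
    """Max sustainable rate over a path = minimum link capacity on it."""
    return min(links_mbit[hop] for hop in path)

print(bottleneck(["pc_to_router_up", "router_to_isp_up"]))      # upload: 0.5
print(bottleneck(["pc_to_router_down", "router_to_isp_down"]))  # download: 1.5
```

So the math of the cap seems obvious; what I don't understand is the mechanism by which my machine ever *learns* that the 0.5 Mbit link exists, since it only ever talks to the router at 100 Mbit.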