I think we're on the same page. My point was to push back on the notion that it's neater to run individual cables from each server to a central rack, rather than having a top-of-rack switch with 1 or 2 uplinks.
My company does this in a small DC with 9 or 10 racks full of servers, each with a patch panel at the top and forty-some Cat5 runs over to the "network rack." It's an absolute nightmare if you have to plug in a new device or trace a cable. Yes, it works; no, it's not worth the downtime for us to change it.
OP asked how to neatly manage cables between a server rack and a network rack, and the cleanest way to do that is a top-of-rack switch with uplinks to a distro switch.
My $0.02
I agree that a top-of-rack switch with uplinks to distro is much neater cabling-wise than patch panels, and over time it can be cheaper. My company has 2 not-so-small data centers, and up until about 3 months ago there were patch panels above every rack with home runs back to the access switches. To give an idea of how many cables: in the data center I work in, we have 10 6509s (single sup, loaded with 6748 cards) at the data center access layer. That's almost 4000 connections, plus about ~100 HP c-Class enclosures with 4 3120s per enclosure on top of that.

We recently started having fiber run to all the racks (Panduit fiber cassettes and MTP cables), and we're deploying N2K FEXes instead of adding 6509s and copper. The fiber up to the N5Ks is a lot neater and less expensive than running 24-96 copper runs to every rack. Now there are 2-16 fiber cables per rack rather than anywhere from 24-96 copper patch cables going into every rack, depending on density/port requirements. Having seen the difference in a large environment, I definitely think that if there's some density in the rack, it's much neater with a top-of-rack switch.
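As a rough sanity check on the "almost 4000 connections" figure, here's a back-of-the-envelope sketch. The slot math is an assumption on my part (a single-sup 6509 leaving 8 slots for 48-port 6748 line cards), not something stated in the post:

```python
# Back-of-the-envelope port count for the setup described above.
# Assumption: a 6509 has 9 slots; with a single supervisor, that
# leaves 8 slots for WS-X6748 line cards at 48 ports each.

CHASSIS = 10
LINECARDS_PER_CHASSIS = 8   # 9 slots minus 1 supervisor (assumed)
PORTS_PER_LINECARD = 48     # 6748 = 48-port card

copper_ports = CHASSIS * LINECARDS_PER_CHASSIS * PORTS_PER_LINECARD
print(copper_ports)  # 3840, i.e. "almost 4000 connections"

# Per-rack cabling ranges quoted in the post, before vs after:
copper_runs_per_rack = (24, 96)  # home-run copper patch cables
fiber_runs_per_rack = (2, 16)    # fiber trunks up to the FEXes
```

Even at the low end of the ranges, that's roughly an order of magnitude fewer cables entering each rack.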