Google Beaten To The Punch by AT&T On Super-Fast Broadband

I live 4 miles from a small town in Kentucky. The town (Bedford) has broadband, but I can't get it. I use Exede satellite for internet access, so no online gaming for me, and data is limited to 15 GB per month. I live about 30 miles from Louisville, and they have Google Fiber. Any idea when this might change? Yeah, I know, Kentucky... Oh, by the way, I called the local cable company and they said they would be happy to run the cable to my street for $8,000.


Not that it's any consolation to you, but Louisville does not have Google Fiber; we are only being evaluated as a potential fiber city (Louisville – Google Fiber). Time Warner Cable is in the process of bumping everyone up from a max speed of 50 Mbps, but they still haven't finished the process.
 
By the way, anyone who chooses Verizon/Comcast/AT&T over Google Fiber is a fool. I don't care if they offered 5 Gbps connectivity for the same price as Google's 1 Gbps; the speed is useless if all they give a damn about is the speed of the connection on their own INTRAnet. I want to access the outside world without problems, and that means making sure the interconnects are constantly upgraded and not oversubscribed, and that when companies like Netflix want to place their own streaming servers closer to a home network, so that less data has to travel as far, the ISP allows it. It's about more than just the top-end speed.

I STILL get throttled on YouTube sometimes. What a goddamn joke.

You are incorrect on a few of your points there. There is always going to be oversubscription; providers would be crazy not to do it. That Google Fiber you are talking about is oversubscribed at the last mile to the customer. It is GPON technology, which means 2.5 Gbps down / 1.25 Gbps up that can, in theory, be split to serve up to 144 people; most deployments split 1:32. So if everyone were online maxing out their connection, at best you'd be looking at about 78 Mbps down / 39 Mbps up. The fact that they turn everyone up to 1 Gbps and people aren't screaming about speed means one of two things: either past a certain point people stop checking their speed and don't realize they're only getting a few hundred Mbps at times, or less; or on average people aren't maxing out their connections, so you never notice the potential bottleneck shared with your neighbors.

Let's say they were using Active Ethernet instead of GPON, meaning everyone gets a full 1 Gbps to the box serving their area. Knowing the equipment they use, one box could support 504 customers with every slot populated with Active Ethernet cards. That would mean you'd need 504 Gbps to the box to avoid oversubscribing, but at best you could only feed 40 Gbps to the unit, so you'd be oversubscribing there. Remove a few cards (each serving 24 people) and you could free up some capacity to the box, but I don't think it would ever be possible to feed a full 1:1 gigabit per customer to the box with no oversubscription. And even if it were, you'd next move to the core, where they would need 1 Gbps for every customer, meaning terabits per second of capacity to accommodate everyone.
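The GPON split math above is easy to check. This sketch just redoes the post's arithmetic (a 2.5 Gbps / 1.25 Gbps PON shared across a 1:32 split); the constants come straight from the figures in the comment.

```python
# Worst-case per-subscriber GPON bandwidth: the shared PON capacity
# divided by the split ratio, i.e. every subscriber maxed out at once.
GPON_DOWN_MBPS = 2500   # 2.5 Gbps downstream, shared
GPON_UP_MBPS = 1250     # 1.25 Gbps upstream, shared
SPLIT = 32              # common 1:32 split

worst_case_down = GPON_DOWN_MBPS / SPLIT
worst_case_up = GPON_UP_MBPS / SPLIT

print(f"Worst case: {worst_case_down:.0f} Mbps down / {worst_case_up:.0f} Mbps up")
# → Worst case: 78 Mbps down / 39 Mbps up
```

In practice subscribers rarely all peak at once, which is exactly why the oversubscription usually goes unnoticed.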

The problem isn't oversubscription per se; as I said, it has to be done and is always done. The problem is how you do it. The "correct" way is to monitor usage within your network and out of your network to ensure you can always meet demand. I know one company that, whenever peak usage hits 80% of current capacity on a backhaul or a connection to another provider, doubles that capacity. Doing something like that, your customers won't notice the effects of oversubscription, and you're still not paying another company for extreme amounts of bandwidth that goes unused. Because not everyone is going to max out their connection 100% of the time, companies don't need to worry about being able to provide 100% to everyone; they only really need to worry about what their customers actually use.

As for your YouTube problem, that is partly on Google itself. They set limits on any non-Google-Fiber customer because they want to be pricks like that. It's the same reason I can't have a YouTube app on my Windows Phone: Google kept revoking Microsoft's API access and refused to let them have a Windows Phone app. Unless they have changed it recently, you used to have a 10 Mbps cap to Google if you were not on Google Fiber. That isn't your ISP; that is Google. I was on our core switch and couldn't get faster than 10 Mbps trying to stream a 4K video, though granted that was about two years ago and I haven't tried since. That said, a lot of places still need to work on increasing interconnect capacity in general.
 
or on average people aren't maxing out their connections, so you never notice the potential bottleneck shared with your neighbors.
That's the thing about having a 1 Gbps connection: when you can download a 10 GB game in less than five minutes, your connection goes back to idling while you install and play said game. Having most of your customers' connections idle most of the time must be pretty beneficial for the backend infrastructure.
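That download-time claim checks out with simple unit conversion (bytes to bits, then divide by line rate; protocol overhead ignored):

```python
# Time to download a 10 GB game over a 1 Gbps link, ignoring overhead.
size_gb = 10      # gigabytes
link_gbps = 1     # gigabits per second

seconds = size_gb * 8 / link_gbps   # 8 bits per byte
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min)")
# → 80 s (~1.3 min)
```

So the link sits idle the vast majority of the time even for a heavy gamer, which is what makes statistical oversubscription work.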
 


My mention of oversubscription was not to say that Google has to offer enough bandwidth to give every user 1 Gbps at all times. It was more like what you described, where when the bandwidth to a node or through an interconnect reaches 80% capacity, or some other target, upgrades are made to keep it from causing problems. That was part of the issue between Netflix and companies like Comcast: some people blamed Netflix for using Cogent, which was routing over an oversubscribed pathway, but Comcast should have gotten off its ass and upgraded the interconnect into its network.

The same thing happened between Level 3 and Verizon:

Verizon’s Accidental Mea Culpa - Beyond Bandwidth

Look at this chart:

[Chart: Level 3 / Verizon interconnect utilization (lvltvzw-1024x351.jpg)]



This is the kind of garbage I am talking about. It's not about everyone on Google Fiber maxing out 1 Gbps at once; it's about not having situations like the above, where the interconnect is CLEARLY saturated at peak, where the outside companies have already upgraded their end, but the ASSHOLE ISPs like Verizon or Comcast or whoever else REFUSE to do their job because they want to double dip and extract revenue from companies like Netflix, or Google with YouTube, or whoever else.

WTF am I paying the ISP for if they are not spending some of that revenue to make sure the pathways into their network aren't congested? That is why I said I don't give a shit about a 5 Gbps top speed on a Comcast INTRANET. I want to access things outside the network, and I don't trust these companies to work in good faith and do their job, in part because of what they have ALREADY done in the past.



As for Google restricting the connection speed from YouTube themselves, maybe that's an issue; I'm not sure. But there must be ways to solve that by working with Google rather than being stubborn. One of Netflix's solutions to sending all their video data across interconnects into an ISP is to place their own streaming servers within the ISP's network, closer to its customers. From there, less bandwidth is needed to transfer content from outside the network, because more of it is stored locally and can be served with fewer issues. Some ISPs like Google Fiber allow this, but not Comcast; they want the revenue from a direct connection. That is less efficient in terms of how far the traffic has to travel, but efficiency was never the point.
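The benefit of those in-network caching servers is straightforward to quantify: only cache misses have to cross the interconnect. The demand and hit-rate figures below are hypothetical, purely for illustration.

```python
# Illustrative: interconnect traffic saved by an in-network video cache.
# Requests served from the local cache never cross the interconnect;
# only the misses do.
def interconnect_gbps(total_video_gbps: float, cache_hit_rate: float) -> float:
    """Bandwidth still crossing the interconnect, given a cache hit rate."""
    return total_video_gbps * (1 - cache_hit_rate)

# Hypothetical numbers: 100 Gbps of video demand, 75% served locally.
print(interconnect_gbps(100, 0.75))  # → 25.0 (Gbps crossing the interconnect)
```

That reduction is exactly what an ISP gives up when it refuses to host the caches and insists on paid direct interconnection instead.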


Again, I do not trust the intentions of many of these ISPs. All I want from an ISP is reliable service, fast speeds, and a proactive attitude toward making sure my connectivity to the outside world is solid. Too many of them hold my interests hostage because they are trying to shake down the high-bandwidth services I use for access onto their network. Eff that and eff them.
 


Only the big guys can put Netflix gear in their NOCs. You have to have over 10 Gbps of traffic going just to them before they will give you a caching server, plus you pay for a 10 Gbps fiber connection direct to them. That doesn't stop you from peering with them at the carrier hotel to avoid the rest of the internet, though it isn't the same as having a caching server. That is what we do: since they wouldn't give us a caching server (we're a small telco without enough traffic to get up to 10 Gbps), we just peer with them to send our traffic direct and bypass our upstream ISP.

I 100% agree that they should spend money on upgrading their backend. That is what we do. We had three 1 Gbps links going to two different major cities; when the main two started hitting about 70%, we upgraded to two 10 Gbps links. We peer with Netflix, Google, and a few others to keep our traffic going through as few hops as possible. Our customers should never see a bottleneck caused by our backbone, as I monitor it and make damn sure I am not overloading any link between any boxes that I can help. We have some legacy gear I can't help, which I am replacing as quickly as I can get manpower dedicated to it. But I work for a company that gives a shit about the quality of service we offer, and that is why we have 20 cities begging us to come and do fiber for them: the local ILEC doesn't give a damn, but they see what we do for our customers in our area and want the same for themselves.
 
300 Mbps down, 100 Mbps up: $7/month
1 Gbps down, 200 Mbps up: $12/month

Beat this!

I live in Romania, though. The downside is that you only get those speeds in the big cities; in the countryside, the 300 Mbps download tends to be more like 20 Mbps for the same price. Which is still far better value than in the US and Western Europe.

Doesn't sound that bad; too bad you have to live in Romania, though.
 