Forget The Cloud, Fog Computing Is The Next Big Thing

HardOCP News

First there was "the cloud," now we have "fog computing." What's next? Steam? Wait, that's already taken. Quick, we need to come up with another buzz word for the next big thing before someone else does!

Turns out, the cloud is not all it’s been hyped to be. Enterprise CIOs are coming to realize that many of the services and apps and much of the data their users rely on for critical decision-making are better suited closer to the edge – on-premise or in smaller enterprise data centers. Say hello to the next big thing: fog computing.
 
There was a reason Mainframes died....

Yes, the reason was that you could get better cost/performance with a standalone PC. Now the mainframe is alive again (not just "the cloud", but also on-premise VM clusters, Citrix, Terminal Server, etc.).
We are again at a point where it's cheaper/better to have a single machine (or cluster) do the work of thousands of workstations.
 
There was a reason Mainframes died....

There were multiple reasons. Not always good ones, but always reasons...

Where I worked they killed the mainframe because they wanted a "modern" application. Well, their mainframe app had sub-second form-to-form times and the $20M+ Windows replacement didn't (ETA: and users constantly complain about performance). But that shit is modern now!
 
Mhm. Until one of your servers goes down and your devops guy (if you have one) is out on vacation.

I can pretty much guarantee you self-managed hardware IS NOT going to make a comeback.
 
Mhm. Until one of your servers goes down and your devops guy (if you have one) is out on vacation.

I can pretty much guarantee you self-managed hardware IS NOT going to make a comeback.

Actually, it will, as businesses are starting to realize not having control over their own data is a major legal and PR nightmare waiting to happen. That's why most in my field are going back to centralized servers.
 
I like Mist as the next term of choice.

But private clouds are nothing new, so I don't understand the interest.

And staff holidays aren't a problem: you outsource the staff but not the servers.
 
There were multiple reasons. Not always good ones, but always reasons...

Where I worked they killed the mainframe because they wanted a "modern" application. Well, their mainframe app had sub-second form-to-form times and the $20M+ Windows replacement didn't (ETA: and users constantly complain about performance). But that shit is modern now!

That is likely due to storage configuration. Mainframes would keep more data in memory to get that "sub-second form to form" time, but that costs indexing stability and makes it more likely to suffer data damage if the power goes out. The way you fix that with "modern" SQL server systems is to get tiered FC storage with flash caching.
 
This is just common sense. Mission-critical data? If so, it needs to be on site.

Actually, it will, as businesses are starting to realize not having control over their own data is a major legal and PR nightmare waiting to happen. That's why most in my field are going back to centralized servers.

I push ALL of my clients to have their production data on site as well as an offsite backup. My employer does corporate payroll and tax filings, and the following happens at least twice a week. I get clients all the time with the same process: "Oh, we have an absolute emergency problem. We need you to take a look at it today to get this fixed!"

Me - "I need to pull information from your SQL database to troubleshoot the problem. Who should I reach out to in your IT department?"

Client - "Oh, we don't have an IT department. We use a third party company. They're really hard to get in touch with, but usually call back within 48 hours. We use their cloud services to host our data."

I then jump through hoops trying to contact this "company". Come to find out, it's one guy operating out of a retail front who never answers his phone. I can't tell companies to fire these people when it happens, but I always outline what a huge PITA the process was and where the delays came from. Sometimes a nice federally mandated late-filing penalty gets these people to open their eyes.
 
Dude, I've been saying 'fog computer' for the last four years. As in: We're in the fog. We're lower to the ground and we don't know where we're going.
 
I stop listening when someone says "the cloud"...two hosts replicating to a physical backup offsite with monthly backups again taken offsite. Works well for me!
 
Actually, it will, as businesses are starting to realize not having control over their own data is a major legal and PR nightmare waiting to happen. That's why most in my field are going back to centralized servers.

I stop listening when someone says "the cloud"...two hosts replicating to a physical backup offsite with monthly backups again taken offsite. Works well for me!

Those are trivial use cases. Try managing architectures of hundreds of servers, with distributed databases, large data compute clusters, high-availability message queues, process orchestration, etc., and you'll understand why AWS's cloud, Google's cloud, Microsoft's cloud, and the rest of them continue to grow exponentially and are not going anywhere anytime soon.
 
"It doesn't matter what we sell, our product is the stock price.' - New CEO of PiedPiper

/SiliconValley reference
 
The author is basically arguing that the demand for server side resources in the client-server model is going to decrease because there's going to be so many clients in the future and there's going to be so much demand that all the resources are just going to have to be local. Yeah, that's going to happen.

That doesn't have much to do with "clouds" and "fogs."
 
That is likely due to storage configuration. Mainframes would keep more data in memory to get that "sub-second form to form" time, but that costs indexing stability and makes it more likely to suffer data damage if the power goes out. The way you fix that with "modern" SQL server systems is to get tiered FC storage with flash caching.

The SAN was benchmarked at 1M IOPS on a 1TB volume. All-SSD RAID 10.

A lot of it is due to HTTP response sizes being huge in comparison. A web page is hundreds of times the size of a mainframe screen. Even taking jQuery alone, by the time you've gzipped it and zopflinated it, it's 12 times the size of a full mainframe screen. We bumped the circuit speed to the endpoints, but you only have so much budget. For one computer out in BFE, that guy is going to go from a terminal on 56K frame relay to a PC on a 768Kbps MPLS drop, and it's going to piss him off.
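The bandwidth argument above can be sketched with back-of-the-envelope numbers. The 1,920-byte screen is standard 3270 geometry (24 rows x 80 columns); the ~30 KB gzipped page weight is an assumption for illustration, not a figure from the post:

```python
# Rough serialization-time comparison: green-screen vs. web page.
SCREEN_BYTES = 24 * 80   # one full 3270 terminal screen = 1920 bytes
PAGE_BYTES = 30 * 1024   # assumed gzipped page weight incl. framework JS

def transfer_ms(payload_bytes: int, link_bps: int) -> float:
    """Serialization time only; ignores latency, TCP handshakes, and overhead."""
    return payload_bytes * 8 / link_bps * 1000

# Old setup: terminal screen over 56 kbps frame relay (~274 ms)
print(f"screen over 56K frame relay: {transfer_ms(SCREEN_BYTES, 56_000):.0f} ms")
# New setup: web page over 768 kbps MPLS (~320 ms)
print(f"page over 768K MPLS:         {transfer_ms(PAGE_BYTES, 768_000):.0f} ms")
```

Under these assumptions, the upgraded link actually pushes each interaction slower than the old terminal did, which is the complaint being described.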
 
Haze computing. The management is in a pure Haze when they don't realize how much money they are wasting ;)
 
There was a reason Mainframes died....

So far, the computing industry and many of its technologies and trends tend to be somewhat cyclical in nature. The mainframes used to be far more powerful and capable than anything desktop wise ever could be. Then we reached a point where the personal computer surged forward, our user experience changed and connectivity back to the mainframe / server became too slow to handle what became the accepted / expected desktop computing experience. Personal computers became far more capable and cheap enough to handle anything the end user could need. At that point servers became file and print nodes rather than processing nodes. Now we have scalable architecture and cheap, yet capable desktop / client nodes. The user experience that bloomed in the mid-late 1990's can be achieved on cheap mobile platforms. It used to be that the graphic interfaces required local computing power, now they are all web based.

Eventually we may see the pendulum swing the other way where the computing experience and what's done on the user side would be too demanding for the scalable processing nodes and virtual machines we have today. Probably not because the processing power isn't there, but because the interconnects to the servers are too slow to handle the experience in real time. I don't think this will happen until the user experience and how we use computers fundamentally changes dramatically. This may not happen for a decade or more, but I think the time will come where computing may be done through a VR type interface, thought interface or something entirely out of Science Fiction at present. When that happens we may see the decentralization of the personal computing experience once again.
 
Everyone in computing knows that the cloud is just a server somewhere that holds data. What's fog? What bullshit are they coming up with next?
 
Fog Computing uses computer power on the premises instead of in a remote datacenter. So basically The Next Big Thing is the Thing We Had Before The Cloud, i.e. about 3-6 years ago.

This invention is genius.

Trendy buzzword "Fog" being the invention, of course.

Because nothing of technological sophistication needs to be invented, and people who make decisions based on the trendiness of the buzzwords used will be pleased, and approve much spending.
 
Paid by and for all of the companies losing swaths of money to the big three (really the big one) in cloud computing.
 
To me, it sounds like things have gone full circle. We had dumb terminals hooking up to centralized servers (mainframes), then we had everything running on the local computer, now we are getting back to having terminals (through our web browser) hooking up to centralized servers. And, I think that a lot of companies will find a way to get everything through a web browser rather than an actual on-computer application.

My eye doctor, for instance, migrated two or three years ago to a browser-based medical system. There is an on-site system to handle the data (speed) and an off-site backup (redundancy).
 
Also, aren't Content Distribution Networks a form of "fog computing"? After all, Netflix does offer servers to ISPs so that content is delivered with lower latency, at no cost. Verizon doesn't like that, though.
 
Yes, the reason was that you could get better cost/performance with a standalone PC. Now the mainframe is alive again (not just "the cloud", but also on-premise VM clusters, Citrix, Terminal Server, etc.).
We are again at a point where it's cheaper/better to have a single machine (or cluster) do the work of thousands of workstations.

Yep, there is a certain cost argument for a "big iron" VMware cluster in CERTAIN environments. But EVERYONE (I personally service K-12 school networks) is yapping about the cloud. It's a storm cloud! This is what happens when your tax dollars hire yuppies who know buzzwords instead of someone who has had their boots on the ground :mad:
 
Pot Smoke Computing...

Not sure what we're doing. But we have the munchies!
 
Also, aren't Content Distribution Networks a form of "fog computing"? After all, Netflix does offer servers to ISPs so that content is delivered with lower latency, at no cost. Verizon doesn't like that, though.

Well, to the question, no, a caching device isn't a CDN, but you're right in that Netflix does offer those caching devices. Verizon and Comcast didn't like L3 shoving CDN traffic down what was supposed to be a settlement-free link, and then L3 asking for more free giggitybytes to support their CDN revenue model.
 