How Heartbleed Broke the Internet - And Why It Can Happen Again

HardOCP News

[H] News
This is kind of depressing when you think about it. Let's hope this doesn't happen again. :eek:

It’s no surprise that a small bug would cause such huge problems. What’s amazing, however, is that the code that contained this bug was written by a team of four coders that has only one person contributing to it full-time. And yet Henson’s situation isn’t an unusual one. It points to a much larger problem with the design of the internet. Some of its most important pieces are controlled by just a handful of people, many of whom aren’t paid well — or aren’t paid at all.
 
Do we have any security experts on here?

Curious why this stuff happens over and over. Is the technology just always vulnerable, or is it just laziness/cheapness in implementing protection of this data?

It seems that NOTHING is secure. I think that we are going to have to have better ways to clean up after an attack than to prevent one. Basically, if someone uses your identity/financial info, put EVEN MORE responsibility on the bank/company that accepted it.

Just my opinion.
 
Brings up the old debate that the open-source dev model is, in the end, not good practice for large(r) projects.

People who get paid are simply more productive, period.
 
The only way to guarantee data safety is to have a completely isolated, unplugged system, which defeats the purpose of the internet. IMHO security breaches will never go away, but will only cause people to be more careful about the amount of information they're willing to entrust to the internet. No matter how careful a designer is, they can't think of every angle that an ill-intentioned coder will use to attack the system.
 
Do we have any security experts on here?

Curious why this stuff happens over and over. Is the technology just always vulnerable, or is it just laziness/cheapness in implementing protection of this data?

It seems that NOTHING is secure. I think that we are going to have to have better ways to clean up after an attack than to prevent one. Basically, if someone uses your identity/financial info, put EVEN MORE responsibility on the bank/company that accepted it.

Just my opinion.

It is because people keep writing high-level software in unsafe languages like C. C has no protection against things like array overflows (basically, if you allocate an array of 10 elements but then write 11 items into it, it will overflow into neighboring memory, whereas with more modern programming languages you'll get an error), casting a variable to the wrong type, or missing bounds checks, so it is very easy to inadvertently introduce security holes.

In the case of OpenSSL, they are pulling data from memory directly; the heartbeat command allows you to specify a length in bytes, and they don't check that length against the actual length of the record, so if you specify a length greater than the record's actual length, it will start pulling data from adjacent areas of memory. This bug would not have happened in a safer managed language, because such languages have checks in place to prevent these sorts of things (you'd get an exception instead, so the most an attacker could do is crash the server rather than steal data).

One of the biggest flaws in the open-source world is the tendency to view C as the One True Language (TM), and so people will sit and write entire desktop applications in C (not even C++) even though it takes 10 times longer than it would in something like Java or C# and opens you up to more bugs and security holes. C and C++ have their uses (particularly for embedded systems and operating system programming), but high-level software and, especially, internet-enabled software in 2014 should not be written in C. There are better languages like Object Pascal (particularly Free Pascal) that give you most of the power of C while coming within 5-10% of its performance and being much safer.

I would also point out that OpenSSL is poorly written:
https://www.peereboom.us/assl/assl/html/openssl.html (The site uses a self-signed certificate so your browser will complain).
 
People like backdoors in some shape or form in their code and yes there will always be vulnerabilities.
 
It is because people keep writing high-level software in unsafe languages like C. C has no protection against things like array overflows (basically, if you allocate an array of 10 elements but then write 11 items into it, it will overflow into neighboring memory, whereas with more modern programming languages you'll get an error), casting a variable to the wrong type, or missing bounds checks, so it is very easy to inadvertently introduce security holes.

In the case of OpenSSL, they are pulling data from memory directly; the heartbeat command allows you to specify a length in bytes, and they don't check that length against the actual length of the record, so if you specify a length greater than the record's actual length, it will start pulling data from adjacent areas of memory. This bug would not have happened in a safer managed language, because such languages have checks in place to prevent these sorts of things (you'd get an exception instead, so the most an attacker could do is crash the server rather than steal data).

One of the biggest flaws in the open-source world is the tendency to view C as the One True Language (TM), and so people will sit and write entire desktop applications in C (not even C++) even though it takes 10 times longer than it would in something like Java or C# and opens you up to more bugs and security holes. C and C++ have their uses (particularly for embedded systems and operating system programming), but high-level software and, especially, internet-enabled software in 2014 should not be written in C. There are better languages like Object Pascal (particularly Free Pascal) that give you most of the power of C while coming within 5-10% of its performance and being much safer.

I would also point out that OpenSSL is poorly written:
https://www.peereboom.us/assl/assl/html/openssl.html (The site uses a self-signed certificate so your browser will complain).

It doesn't matter what language your code is written in; the more complex the application is, the greater the chance it'll contain vulnerabilities. It might not be a buffer overrun, but it could be something else.
 
It doesn't matter what language your code is written in; the more complex the application is, the greater the chance it'll contain vulnerabilities. It might not be a buffer overrun, but it could be something else.

But you are a lot less likely to have vulnerabilities in a safer language because stupidities cause exceptions or compile errors rather than buffer overflows or variable truncation.
 
Apparently, OpenSSL also shims malloc, so it's no wonder the damn thing is full of holes. Seriously, with a PhD in mathematics as one of the project leaders, you'd think they could at least write better code.
 
How can you blame it on the language? It's just a tool. It's the implementation that matters, much like the saying "guns don't kill people, people kill people." And the reason for using C is that you have far greater control of the memory itself, and oftentimes people can do amazing things with that.

If you have seen the code, it only took them a few lines to fix it. It was as simple as checking that the response length is the same as the request length; it has nothing to do with memory management. If they want production quality, they need to put more resources into it, and even then you can't catch everything. Computers are programmed by people, and "to err is human."
 
How can you blame it on the language? It's just a tool. It's the implementation that matters, much like the saying "guns don't kill people, people kill people." And the reason for using C is that you have far greater control of the memory itself, and oftentimes people can do amazing things with that.

If you have seen the code, it only took them a few lines to fix it. It was as simple as checking that the response length is the same as the request length; it has nothing to do with memory management. If they want production quality, they need to put more resources into it, and even then you can't catch everything. Computers are programmed by people, and "to err is human."

But most of the time, you don't need that level of control over memory. Something like Free Pascal gives you the best of both worlds; you can avoid pointers when you don't need them and use pointers and advanced memory management when you do.

malloc has everything to do with this bug. Most operating systems' malloc implementations have checks in place that are designed to catch potential security holes. By shimming malloc, they bypassed the very checks that are designed to catch these types of bugs.

See : http://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse
See : http://article.gmane.org/gmane.os.openbsd.misc/211963
 
It is because people keep writing high-level software in unsafe languages like C. C has no protection against things like array overflows (basically, if you allocate an array of 10 elements but then write 11 items into it, it will overflow into neighboring memory, whereas with more modern programming languages you'll get an error), casting a variable to the wrong type, or missing bounds checks, so it is very easy to inadvertently introduce security holes.

In the case of OpenSSL, they are pulling data from memory directly; the heartbeat command allows you to specify a length in bytes, and they don't check that length against the actual length of the record, so if you specify a length greater than the record's actual length, it will start pulling data from adjacent areas of memory. This bug would not have happened in a safer managed language, because such languages have checks in place to prevent these sorts of things (you'd get an exception instead, so the most an attacker could do is crash the server rather than steal data).

One of the biggest flaws in the open-source world is the tendency to view C as the One True Language (TM), and so people will sit and write entire desktop applications in C (not even C++) even though it takes 10 times longer than it would in something like Java or C# and opens you up to more bugs and security holes. C and C++ have their uses (particularly for embedded systems and operating system programming), but high-level software and, especially, internet-enabled software in 2014 should not be written in C. There are better languages like Object Pascal (particularly Free Pascal) that give you most of the power of C while coming within 5-10% of its performance and being much safer.

I would also point out that OpenSSL is poorly written:
https://www.peereboom.us/assl/assl/html/openssl.html (The site uses a self-signed certificate so your browser will complain).

I'll agree with you on the lack of bounds checking in C causing problems.
I'll also agree with you on C# being much quicker to write a desktop application in: writing a full desktop application in pure C in this day and age is sort of nuts.

That said, the background bounds checking of many newer languages does cause a performance hit -- and it is also harder to verify for security purposes because there is NOT a 1:1 mapping of the functions from the high level language to assembly.

Personally, something on the order of OpenSSL *should* be written in hand-optimized C or a similar language, for efficiency reasons and for verification purposes. With something like OpenSSL, the sheer number of times it gets called per second on a high-end, heavily loaded web server easily justifies the need for tight, efficient code -- and when you are dealing with something as heavily used as Gmail, even a slightly lower degree of optimization could easily equate to needing hundreds of extra servers. Speaking from 30 years of personal experience, 9 times out of 10, I can hand-optimize C code in ways that generate faster, smaller code than the compiler's optimizer does. It is also possible to use certain tricks to enforce hard bounds checking in code like this (i.e. you can code it so that an overflow or underflow would actually trigger a seg fault or similar hardware exception). However, hand-tweaking code in this manner is SLOW and generally requires compiling with optimization completely disabled (otherwise things actually get worse). The problem is one of coding time vs. server time vs. maintainability -- and, in the case of something as heavily used as OpenSSL, optimizing for efficiency is probably worth it -- but it requires heavy review of the code to check for security holes.

Herein lies one of Open Source's greatest assets and greatest flaws: with something like OpenSSL, the code is there for all to see and it CAN be hand reviewed and verified for security purposes (And, for something like OpenSSL, I would trust an Open Source program to be secure and not have back doors more than I would a commercial, closed source product that could not be reviewed). The problem is that no one apparently actually IS reviewing the code before it is put into use -- primarily because no one is paying the developers to do it, so it just gets done by people in their spare time. Which emphasizes the issue: most of these people are unpaid and are doing it as a hobby, and, while writing code is "fun" (for some of us), debugging it, validating it, and doing full regression testing is decidedly NOT fun -- hence, it doesn't always get done. Also, in the case of OpenSSL, the code just wasn't that well written or optimized to begin with (what is also appalling is that so many people have been using it for such a critical task without verifying it or improving it).

Basically, for something as important as OpenSSL, there needs to be a *sponsored* community effort funded by people like Google, etc. with paid, professional programmers writing, optimizing, and validating the code -- but keeping it as Open Source so that no one is able to sneak in back doors and vulnerabilities. In other words, for something like this that is so intrinsically linked to the security of the internet, more oversight, organization and funding are needed. Simply put, someone needs to be PAID to closely monitor and control code this important, the question is who??? (i.e. "Who watches the Watchmen?").

The entire problem
 
People write a lot of FOSS apps in C because the compiler is free for almost every OS and because that's what the underlying system is written in on most of the systems it will be deployed on. Are there better choices? Probably, but it depends on your view and your experience. Would I say C# is better than C? Not a chance, but it's what we use at work because we write apps for Windows systems and the tools we need are there. I've also coded in Java (and more recently Android), and the developer tools aren't quite as nice. Most FOSS solutions aren't written in C because it's "The One True Language" but because there is a reason to use it.
 
That means nothing; it's still up to the implementer to choose the language, design the program, and implement it. They just did it wrong, and I'm pretty sure you just proved my point. See (http://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse):

This bug would have been utterly trivial to detect when introduced had the OpenSSL developers bothered testing with a normal malloc

So it's not the language; it was the shimming of malloc. And who shimmed it? The developers.
 
It is because people keep writing high-level software in unsafe languages like C. C has no protection against things like array overflows (basically, if you allocate an array of 10 elements but then write 11 items into it, it will overflow into neighboring memory, whereas with more modern programming languages you'll get an error), casting a variable to the wrong type, or missing bounds checks, so it is very easy to inadvertently introduce security holes.

In the case of OpenSSL, they are pulling data from memory directly; the heartbeat command allows you to specify a length in bytes, and they don't check that length against the actual length of the record, so if you specify a length greater than the record's actual length, it will start pulling data from adjacent areas of memory. This bug would not have happened in a safer managed language, because such languages have checks in place to prevent these sorts of things (you'd get an exception instead, so the most an attacker could do is crash the server rather than steal data).

One of the biggest flaws in the open-source world is the tendency to view C as the One True Language (TM), and so people will sit and write entire desktop applications in C (not even C++) even though it takes 10 times longer than it would in something like Java or C# and opens you up to more bugs and security holes. C and C++ have their uses (particularly for embedded systems and operating system programming), but high-level software and, especially, internet-enabled software in 2014 should not be written in C. There are better languages like Object Pascal (particularly Free Pascal) that give you most of the power of C while coming within 5-10% of its performance and being much safer.

I would also point out that OpenSSL is poorly written:
https://www.peereboom.us/assl/assl/html/openssl.html (The site uses a self-signed certificate so your browser will complain).

I'm sorry, but I'm afraid you are no expert on the subject. First, C is NOT a high-level language; it is a low-level language. The best example is the fact that it allows and requires you to allocate memory yourself.

Your proposition of using higher-level languages for tasks such as OpenSSL is totally ridiculous. Data encryption is a compute-intensive task, and using higher-level languages will lead to poorly performing systems. You may not mind if YOUR computer spends a little bit more time processing your bank transaction. But on the server side, having a component that runs slower has an impact on capacity planning and thus will considerably affect the cost and complexity of the solution that needs to be put in place.

The point that the author of the article tries to make is valid. Proper financing would allow code revision (audits) to be performed. This would considerably reduce the risk of introducing a faulty piece of code in production code.

Another point we need to think about is that we rely on open-source programs assuming that, since it's open source, people have looked at the code, and if there was an issue with it, it would have been identified pretty quickly. Obviously this is not the case. I won't repeat all the arguments from the article since everyone can read it.

As a rule of thumb, any code that is part of the operating system or that is compute-intensive should be developed in a lower-level, more efficient language by more experienced programmers. AND every piece of code that goes into production should be reviewed by peers. And the number of auditors should be proportional to the importance of the program. OpenSSL's code should not be audited by only 4 or 5 programmers.
 
http://www.pcpro.co.uk/news/388162/heartbleed-coder-bug-in-openssl-was-an-honest-mistake

"I was working on improving OpenSSL and submitted numerous bug fixes and added new features," Seggelmann told the Sydney Morning Herald. "In one of the new features, unfortunately, I missed validating a variable containing a length."
His work was reviewed, but the reviewer also missed the error, and it was included in the released version of OpenSSL.


Isn't the point of using a common language such as C its portability and compatibility, plus widespread usage by other programmers for easy development and porting?

The same can almost be said of why game developers continue to use DirectX over OpenGL: the API is more common among games, easier to port, and easier to program and develop for. But using something common over something barely used means you run into issues and flaws. In the case of DirectX, you run into CPU overhead, as we've seen in comparisons between Mantle and DirectX, and you limit your audience to only Windows-based machines, especially with the newer versions requiring Windows 7 and higher. Why use something like Pascal over C? Or why use C# over C? Or why use .NET over other frameworks?

Programmers would rather use something they're more comfortable with as well, and something that can still be understood by the good majority of programmers out there. Then again, majority is not always a good thing.
 
I'll agree with you on the lack of bounds checking in C causing problems.
I'll also agree with you on C# being much quicker to write a desktop application in: writing a full desktop application in pure C in this day and age is sort of nuts.

That said, the background bounds checking of many newer languages does cause a performance hit -- and it is also harder to verify for security purposes because there is NOT a 1:1 mapping of the functions from the high level language to assembly.

Personally, something on the order of OpenSSL *should* be written in hand-optimized C or a similar language, for efficiency reasons and for verification purposes. With something like OpenSSL, the sheer number of times it gets called per second on a high-end, heavily loaded web server easily justifies the need for tight, efficient code -- and when you are dealing with something as heavily used as Gmail, even a slightly lower degree of optimization could easily equate to needing hundreds of extra servers. Speaking from 30 years of personal experience, 9 times out of 10, I can hand-optimize C code in ways that generate faster, smaller code than the compiler's optimizer does. It is also possible to use certain tricks to enforce hard bounds checking in code like this (i.e. you can code it so that an overflow or underflow would actually trigger a seg fault or similar hardware exception). However, hand-tweaking code in this manner is SLOW and generally requires compiling with optimization completely disabled (otherwise things actually get worse). The problem is one of coding time vs. server time vs. maintainability -- and, in the case of something as heavily used as OpenSSL, optimizing for efficiency is probably worth it -- but it requires heavy review of the code to check for security holes.

Herein lies one of Open Source's greatest assets and greatest flaws: with something like OpenSSL, the code is there for all to see and it CAN be hand reviewed and verified for security purposes (And, for something like OpenSSL, I would trust an Open Source program to be secure and not have back doors more than I would a commercial, closed source product that could not be reviewed). The problem is that no one apparently actually IS reviewing the code before it is put into use -- primarily because no one is paying the developers to do it, so it just gets done by people in their spare time. Which emphasizes the issue: most of these people are unpaid and are doing it as a hobby, and, while writing code is "fun" (for some of us), debugging it, validating it, and doing full regression testing is decidedly NOT fun -- hence, it doesn't always get done. Also, in the case of OpenSSL, the code just wasn't that well written or optimized to begin with (what is also appalling is that so many people have been using it for such a critical task without verifying it or improving it).

Basically, for something as important as OpenSSL, there needs to be a *sponsored* community effort funded by people like Google, etc. with paid, professional programmers writing, optimizing, and validating the code -- but keeping it as Open Source so that no one is able to sneak in back doors and vulnerabilities. In other words, for something like this that is so intrinsically linked to the security of the internet, more oversight, organization and funding are needed. Simply put, someone needs to be PAID to closely monitor and control code this important, the question is who??? (i.e. "Who watches the Watchmen?").

The entire problem

Security should take priority over performance. That being said, there are other safer languages that are native code such as Pascal that can come within an acceptable range of C performance.

Not being paid is no excuse. When you advertise your software as being secure, you have a responsibility to make sure it is secure whether you are paid or not. If you can't commit to that, then don't mislead people and put a disclaimer on your website. The mistakes in OpenSSL are those of an amateur.
 
I'm sorry, but I'm afraid you are no expert on the subject. First, C is NOT a high-level language; it is a low-level language. The best example is the fact that it allows and requires you to allocate memory yourself.

Your proposition of using higher-level languages for tasks such as OpenSSL is totally ridiculous. Data encryption is a compute-intensive task, and using higher-level languages will lead to poorly performing systems. You may not mind if YOUR computer spends a little bit more time processing your bank transaction. But on the server side, having a component that runs slower has an impact on capacity planning and thus will considerably affect the cost and complexity of the solution that needs to be put in place.

The point that the author of the article tries to make is valid. Proper financing would allow code revision (audits) to be performed. This would considerably reduce the risk of introducing a faulty piece of code in production code.

Another point we need to think about is that we rely on open-source programs assuming that, since it's open source, people have looked at the code, and if there was an issue with it, it would have been identified pretty quickly. Obviously this is not the case. I won't repeat all the arguments from the article since everyone can read it.

As a rule of thumb, any code that is part of the operating system or that is compute-intensive should be developed in a lower-level, more efficient language by more experienced programmers. AND every piece of code that goes into production should be reviewed by peers. And the number of auditors should be proportional to the importance of the program. OpenSSL's code should not be audited by only 4 or 5 programmers.

I never claimed that C was a high-level language. I said high-level software, not language. As in, people are writing high-level software in a low-level language.

I'm pretty sure the banks would prefer to sacrifice a bit of speed if they knew it meant their customers' data wouldn't get stolen (since the banks are the ones that take the hit in any kind of fraud due to fraud protection). It is cheaper to upgrade your hardware than to deal with the fallout of your customers' money getting stolen because you wanted that 10% performance boost. Machine time is cheaper than programmer time.
 
Security should take priority over performance. That being said, there are other safer languages that are native code such as Pascal that can come within an acceptable range of C performance.

Not being paid is no excuse. When you advertise your software as being secure, you have a responsibility to make sure it is secure whether you are paid or not. If you can't commit to that, then don't mislead people and put a disclaimer on your website. The mistakes in OpenSSL are those of an amateur.

Actually, they are the mistakes of humans. Up until recently (when I took a different job in the company), I was paid to write applications for E9-1-1 call centers. We made mistakes even though some of us have been writing programs for the company for almost a decade. We are all humans and make mistakes.
 
Security should take priority over performance. That being said, there are other safer languages that are native code such as Pascal that can come within an acceptable range of C performance.

Not being paid is no excuse. When you advertise your software as being secure, you have a responsibility to make sure it is secure whether you are paid or not. If you can't commit to that, then don't mislead people and put a disclaimer on your website. The mistakes in OpenSSL are those of an amateur.

Yes, the mistakes in OpenSSL are those of an amateur (as opposed to a PROFESSIONAL who is getting PAID to do the task).

And, while not getting paid is not an excuse for poor coding, it is an excuse for poor validation.

The point here being -- this was a FREE, OPEN SOURCE project; no one said that all the major companies out there had to use it.

When it was being used for things as major as Google and Yahoo mail, what there is no excuse for is the companies USING it (and basing their entire security infrastructure on it) not validating it initially and not continuing to validate it as new changes were submitted.
 
Security should take priority over performance. That being said, there are other safer languages that are native code such as Pascal that can come within an acceptable range of C performance.

Not being paid is no excuse. When you advertise your software as being secure, you have a responsibility to make sure it is secure whether you are paid or not. If you can't commit to that, then don't mislead people and put a disclaimer on your website. The mistakes in OpenSSL are those of an amateur.

Nothing is 100% secure, and nowhere does it say it's 100% secure, so if you ever believed that, you're going to love what I've got to sell you.
 
The only way to guarantee data safety is to have a completely isolated, unplugged system, which defeats the purpose of the internet. IMHO security breaches will never go away, but will only cause people to be more careful about the amount of information they're willing to entrust to the internet. No matter how careful a designer is, they can't think of every angle that an ill-intentioned coder will use to attack the system.

This. NOTHING is totally secure when someone can sit around all day and try to break into it (or just run a script 24/7 until they get something). Even the toughest safe can be cracked when you aren't worried about a guard catching you. And that is basically what the internet is, a safe with no guards. So it doesn't matter if the software is Open Source created by unpaid volunteers or something created in house by Microsoft, given enough time someone somewhere will find a hole and exploit it.
 
Do we have any security experts on here?

Curious why this stuff happens over and over. Is the technology just always vulnerable, or is it just laziness/cheapness in implementing protection of this data?

Yes, I know of at least a few. This bug stems from systemic flaws in the way that OpenSSL is written and left untested. While he can be opinionated and frustrates some people, Theo de Raadt made a great point in this post. You can find the analysis by Ted Unangst here and here. The short version is that there are ways to detect an issue like this, as well as mitigations in security-conscious memory allocation systems, yet OpenSSL disables or circumvents all of them through its custom memory management. Mitigations include using a memory allocator that places a buffer so it ends at a page boundary, which crashes the program if it tries to read beyond the page.

A lot of why this happens over and over is that there is a lot of area to test and verify. Tools make it easier, but have to be configured and pointed at the right area of code. Especially in systems that have evolved over time, there are a lot of connecting pieces that are difficult to adequately validate. That doesn't lessen the mistakes, but hopefully helps you understand why such code can get into the base.
 
This. NOTHING is totally secure when someone can sit around all day and try to break into it (or just run a script 24/7 until they get something). Even the toughest safe can be cracked when you aren't worried about a guard catching you. And that is basically what the internet is, a safe with no guards. So it doesn't matter if the software is Open Source created by unpaid volunteers or something created in house by Microsoft, given enough time someone somewhere will find a hole and exploit it.

I agree with your analogy but not the final premise.

It IS possible to write code without security holes -- given enough time, money, effort, and talent (none of which are usually available in large enough quantities).

Given enough time, someone can crack a safe through a brute force attack.
Similarly, someone can break a SPECIFIC username/password combination or security key through a brute-force attack, given enough time to do it.
However, with a sufficiently large key, a full-coverage brute-force attack should take longer than the amount of time the universe has EXISTED.

The point is that boundary applications like OpenSSL (your "guardians at the gate") need to be small, efficient, robust code that is able to be rigorously verified to make SURE that vulnerabilities such as this one do not exist.

Of course, this gets into the entire issue of code validation and what is required to prove a program is correct and doesn't have flaws. Years ago, I got into an argument with one of my college professors who was pushing code validation as the end-all of everything. He had written books and papers arguing that the code used for basically everything needed to be proved mathematically correct and that, after it was, it would have no bugs, should never need fixing, and would have no security holes.

So, I then wrote two fairly short programs that I was able to prove were 100% correct based on his proof models and showed them to him. I then had him attempt to run them -- they both crashed. One was written specifically to trigger a known error in the version of the compiler he was using, and the other was written to trip over a specific errata in the processor in his machine. At which point I basically told him that, until he could validate all the tools he was using and THE UNDERLYING HARDWARE as well, his theories were pretty much useless.

Let's just say he wasn't happy with me (but at least I did it outside class and proved my point -- a friend of mine decided to call him on it during class, in the middle of one of his lectures, and basically publicly called him an idiot to his face -- which probably was not the best of ideas considering he was also head of the department!).

However, the professor's point actually has merit, but so does mine. We DO need code validation and to prove certain pieces of software as close to 100% correct and near bulletproof as possible. Unfortunately, unless you can validate everything else underneath you (the compiler, the libraries, the OS, the hardware, etc.), this is impossible.

So...the best answer is for a group of companies and individuals to create the equivalent of the OpenSSL layer as an Open Source HARDWARE project at the actual gate level, FULLY validate it, and then allow it to be incorporated freely into anyone else's hardware designs going forward.
 
If anyone is still confused about how Heartbleed works and what all this talk about bounds is, xkcd has made an exquisitely clear description that almost anyone can understand:

http://xkcd.com/1354/
 
The original article is right! We need to design everything like the healthcare website.. billions of dollars and hundreds of coders make everything better. :)

Basically, if you've got good folks trying to do a good job you'll get good work most of the time. But there'll still be slip-ups. *shrug*
 
The original article is right! We need to design everything like the healthcare website.. billions of dollars and hundreds of coders make everything better. :)

Basically, if you've got good folks trying to do a good job you'll get good work most of the time. But there'll still be slip-ups. *shrug*

Yep -- too many cooks spoil the broth.

We need a few good coders or designers at the helm, but then we need hopefully hundreds or thousands of people actually validating the operation and reviewing the code for errors.

The blame for Heartbleed is not with the original coder, it is with the reviewers for letting it get checked into the mainline without proper review, and with the entire community and companies using it for not properly checking the new version before switching over to it.

Open Source is great because you CAN check the code -- but for companies NOT to check the code and to just assume that the most critical pieces of security software will "just work" is utterly moronic.
 
Or it could be they're all NSA and it was a tactical plan to slip the bug into the code.

/end conspiracy theorist
 
The code was open. If some company like Google didn't bother to check it just to be safe, well... you can't ask for more from a team that works for free and still manages to deliver useful software.
 
oops meant to quote demicatz

Yeah, they shimmed it because they could not get it to stop pinging with bugs. Essentially they took the battery out of the smoke detector; now the building is on fire and everyone is sleeping.
 
You can put checks into C or C++ to make sure that you are not going beyond the bounds of your array. Shitty coding is the fault of the programmers, not the language.
 
Yes, the mistakes in OpenSSL are those of an amateur (as opposed to a PROFESSIONAL who is getting PAID to do the task).

And, while not getting paid is not an excuse for poor coding, it is an excuse for poor validation.

The point here being -- this was a FREE, OPEN SOURCE project; no one said that all the major companies out there had to use it.

When it was being used for things as major as Google and Yahoo mail, there is no excuse for the companies USING it (and basing their entire security infrastructure on it) not validating it initially and not continuing to validate it as new changes were submitted.

But but but..... open source is the best. Fuck people that get paid for programming and have closed standards. Everything should be free. Free as in cost, and free as in open for the world to use and see. That makes it 100% secure as the entire world then is looking at it for bugs. Right?
 
All the big sites that used this need to be whipped. They were doing shit on the cheap, subsidizing their profits by leveraging some "free" guys work for fun and profit.

Everything goes sideways and now they cry. Google, Apple Microsoft have thousands of programmers and $billions, do it yourself or FUND the projects that do it so it gets done right.
 
All the big sites that used this need to be whipped. They were doing shit on the cheap, subsidizing their profits by leveraging some "free" guys work for fun and profit.

Everything goes sideways and now they cry. Google, Apple Microsoft have thousands of programmers and $billions, do it yourself or FUND the projects that do it so it gets done right.

I will admit I haven't looked through every site listing everyone affected, but the one list I did look at had Microsoft sites listed as being OK. So I don't think they were using this but were using their own SSL software. Google, Yahoo and Facebook were the 3 I noticed on the list.
 
What are the alternatives to OpenSSL?

I saw PolarSSL, but it's 99 Euros a month for a basic license.
 
What are the alternatives to OpenSSL?

I saw PolarSSL, but it's 99 Euros a month for a basic license.

from the Wikipedia page for SSL

Libraries

Main article: Comparison of TLS implementations

Most SSL and TLS programming libraries are free and open source software.
* Botan, a BSD-licensed cryptographic library written in C++.
* Microsoft Windows includes an implementation of SSL and TLS as part of its Secure Channel package.
* OS X includes an implementation of SSL and TLS as part of its Secure Transport package.
* Delphi programmers may use a library called Indy which utilizes OpenSSL.
* OpenSSL: a free implementation (BSD license with some extensions)
* GnuTLS: a free implementation (LGPL licensed)
* cryptlib: a portable open source cryptography library (includes TLS/SSL implementation)
* JSSE: a Java implementation included in the Java Runtime Environment supports TLS 1.1 and 1.2 from Java 7, although is disabled by default for client, and enabled by default for server.[61] Java 8 supports TLS 1.1 and 1.2 enabled on both the client and server by default.[62]
* MatrixSSL: a dual licensed implementation
* Network Security Services (NSS): FIPS 140 validated open source library
* PolarSSL: A tiny SSL library implementation for embedded devices that is designed for ease of use
* CyaSSL: Embedded SSL/TLS Library with a strong focus on speed and size.

Yeah, some may cost you money, but that might also mean they have a more in-depth QA team checking them out.
 
All the big sites that used this need to be whipped. They were doing shit on the cheap, subsidizing their profits by leveraging some "free" guys work for fun and profit.

Everything goes sideways and now they cry. Google, Apple Microsoft have thousands of programmers and $billions, do it yourself or FUND the projects that do it so it gets done right.

Oh good god NO! NO NO NO NO NO!

Rolling your own SSL instead of using something like OpenSSL is easily the *STUPIDEST* decision any engineer could possibly make.

Yes Heartbleed affected lots of companies, but any home grown solution would be so, *sooooo* much more vulnerable and buggy. Heartbleed also got fixed damn fast after it was discovered, at least by the white hats. You cannot say the same for any home rolled solution.

Yes, the mistakes in OpenSSL are those of an amateur (as opposed to a PROFESSIONAL who is getting PAID to do the task).

Being paid does not make you a better programmer.

If anything the "amateurs" that work on OpenSSL are better coders than the vast majority of paid programmers.

When it was being used for things as major as Google and Yahoo mail, what there is no excuse for is the companies USING it (and basing their entire security infrastructure on it) not validating it initially and not continuing to validate it as new changes were submitted.

They were. Google Security engineer Neel Mehta is one of the finders of this bug. Codenomicon independently found the bug as well, another user of OpenSSL that was auditing it.
 
It is because people keep writing high-level software in unsafe languages like C.

There is no language that will protect you from security bugs. *NONE*. This particular issue would not have happened in some other languages, true, but you cannot claim that proves it should not have been written in C. OpenSSL *needs* the speed of C. This cannot be overstated. Your claim that security trumps performance is bullshit; it's never that clear cut. If OpenSSL were 2x slower, then HTTPS would not be the default for things like Google Search. And that would make everyone *LESS* secure, not more.

And there are many hundreds of thousands of programs written in higher level languages that are still vulnerable to high-damage bugs, be it SQL injection, XSRF attacks, XXE attacks, etc...

One of the biggest flaws in the open-source world is the tendency to view C as the One True Language (TM)

If you are writing a library C pretty much is the One True Language. It can be used by any other language and does not bring along a runtime with it. Also every OS in use can run C code "out of the box". There are very, VERY good technical reasons why major open-source libraries use C, and it has nothing to do with your random rantings.
 