cageymaru

A security researcher from Tenable Research discovered a hardcoded backdoor in version 3.1.190 of the PremiSys IDenticard system that "allows attackers to add new users to the badge system, modify existing users, delete users, assign permission, and pretty much any other administrative function." Security researcher James Sebree says an IgnoreAuthentication() function in the standard, run-of-the-mill authentication routine enables the hardcoded backdoor. He discovered it by reverse engineering the PremiSys .NET application with JetBrains' free dotPeek utility. Tenable Research attempted unsuccessfully to contact the vendor before going public, and even disclosed the findings to CERT, who were likewise unable to reach the vendor. After 90 days had passed, Tenable Research publicly disclosed the security backdoors in the PremiSys IDenticard system. IDenticard users include Fortune 500 companies, K-12 schools, colleges and universities, medical centers, factories, and local, state, and federal government agencies and offices.

User credentials and other sensitive information are stored with a known-weak scheme: Base64-encoded MD5 hashes of salt + password.
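For concreteness, that storage scheme amounts to something like the following sketch (the exact salt/password concatenation order is an assumption):

```python
import base64
import hashlib

def stored_credential(salt: str, password: str) -> str:
    # Base64-encoded MD5 of salt + password, as described above.
    # MD5 is extremely fast, so anyone who obtains the database can
    # brute-force these hashes offline at enormous guess rates.
    digest = hashlib.md5((salt + password).encode()).digest()
    return base64.b64encode(digest).decode()
```

Salting stops precomputed rainbow tables, but it does nothing about MD5's speed; a deliberately slow scheme such as bcrypt or PBKDF2 is the standard fix.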

Identicard backups are stored in an "idbak" format, which appears to be a password-protected zip file. The password to unzip the contents is hardcoded into the application ("ID3nt1card") and is not configurable by the end user, which limits the ability to adequately protect content stored in backups. An attacker with access to these backups could obtain the potentially sensitive information inside them, and could also arbitrarily modify their contents, which could affect a future restore.
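Because the password is fixed and now public, reading one of these backups is trivial. A hedged sketch, assuming a standard zip container (the internal file layout of an .idbak is not documented):

```python
import zipfile

HARDCODED_PW = b"ID3nt1card"  # the non-configurable password noted above

def read_idbak(src) -> dict:
    # src: a path or file-like object. An .idbak backup is reportedly
    # just a password-protected zip, so anyone holding the file plus
    # the known password gets everything in it.
    with zipfile.ZipFile(src) as zf:
        return {name: zf.read(name, pwd=HARDCODED_PW) for name in zf.namelist()}
```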

The Identicard service installs with a default database username and password of "PremisysUsr" / "ID3nt1card". Instructions are provided for meeting password standards when domain policy requires more than 10 characters: the password becomes simply "ID3nt1cardID3nt1card". Users are unable to change these passwords without vendor intervention.
 
I've dealt with a few of these solutions, and I can tell you I have yet to be impressed with the security of these "security systems". Not just backdoors, but unencrypted databases, databases and credentials modified as work-arounds, and a lack of strong crypto, to name a few. These require stringent vetting before being brought into an environment.
 
Those are really unacceptable security practices from a vendor that sells a security technology. What the fuck.


The problem is that you are talking about backup data. On the one hand, such data needs to be kept for a very long time, meaning retention of old passwords becomes a serious issue: either everyone will know them or no one will.

Next, backup data is usually stored offline on tape. When this is the case, any compromise of the tapes means physical access, and what do we all know is the result when an attacker has physical access?

As with all issues related to vulnerabilities, there remain risks. Risks must be measured, vetted, and weighed against the costs or losses of a compromise. It's easy to identify a vulnerability and point out all the nasty things that could happen, but it's an entirely different thing to look at exactly how such a thing could actually happen and exactly what the actual damages would be if it came to pass.

For instance, take backup data that is on tape, stored physically secure and off site. An actor gains access to the data and can manipulate what's on the tape, but how can he inject his changes into the production environment? He can use the information to create fake IDs, but how frequently are important format parameters of the ID cards changed? ("All of this year's ID cards have a different background and color, which isn't part of the backed-up data.") There can be several reasons that actually doing something with the data is far less of a true risk than it seems.
 
A security researcher from Tenable Research discovered a hardcoded backdoor in version 3.1.190 of the PremiSys IDenticard system that "allows attackers to add new users to the badge system, modify existing users, delete users, assign permission, and pretty much any other administrative function." Security researcher James Sebree says an IgnoreAuthentication() function in the standard, run-of-the-mill authentication routine enables the hardcoded backdoor. He discovered it by reverse engineering the PremiSys .NET application with JetBrains' free dotPeek utility. Tenable Research attempted unsuccessfully to contact the vendor before going public, and even disclosed the findings to CERT, who were likewise unable to reach the vendor. After 90 days had passed, Tenable Research publicly disclosed the security backdoors in the PremiSys IDenticard system. IDenticard users include Fortune 500 companies, K-12 schools, colleges and universities, medical centers, factories, and local, state, and federal government agencies and offices.

User credentials and other sensitive information are stored with a known-weak scheme: Base64-encoded MD5 hashes of salt + password.

Identicard backups are stored in an "idbak" format, which appears to be a password-protected zip file. The password to unzip the contents is hardcoded into the application ("ID3nt1card") and is not configurable by the end user, which limits the ability to adequately protect content stored in backups. An attacker with access to these backups could obtain the potentially sensitive information inside them, and could also arbitrarily modify their contents, which could affect a future restore.

The Identicard service installs with a default database username and password of "PremisysUsr" / "ID3nt1card". Instructions are provided for meeting password standards when domain policy requires more than 10 characters: the password becomes simply "ID3nt1cardID3nt1card". Users are unable to change these passwords without vendor intervention.

Thank you for posting this article. You potentially just saved my company a lot of money, because I had no idea about it prior to reading it. One of my company's locations uses this system. We have about 250 locations statewide, using a myriad of solutions. This, however, may have earned me some brownie points with the boss for bringing it to their attention. I really love you guys!
 
This stuff really shouldn't surprise anyone who has worked in IT for any length of time. If I had a dollar for every time I saw an application where everyone connected to the database as SA, or where a dedicated access group was assigned the sysadmin role at the server level in addition to dbo on the application's database, I would have retired 10 years ago and be living like a king. The sheer stupidity of some of these application designers is mind-boggling. Why hack the server when any legitimate user already has full rights to it all? Hardcoded passwords? See it all the time. And it's so unnecessary: the user can still have a single-sign-on experience without having to key in extra passwords, so there is no reason whatsoever to hardcode the password. That puts the onus on the customer to implement a secure password policy, but at least it CAN be secured. Indeed, stupid is as stupid does. I'm sure some of our clients use this system.
 
OH HEY, JUST TELL THEM TO LENGTHEN THE ADMIN PASSWORD AND YOU'LL BE FINE, RIGHT NONAME? HAHAHA:LOL::LOL:
 
I used to work for a major US software company whose product was used by many large companies. We had "security" - the code was written in C. It took the original characters and used >> to shift the bits to the right. To "decrypt" the data, it used << to shift back to the left. The number of bits that could be shifted was limited so the output stayed within the ASCII table (so a regular fprintf() could write the data out to disk).
A bank discovered our "security" on their own - needless to say, they were not very happy with the situation. Our company was aware of the problem before this happened - I just don't think they thought anyone would notice. The company ended up buying another company and replaced the homegrown crap. My point: the company I worked for didn't give a crap about the problem until a customer found out.
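For anyone curious, the scheme described amounts to roughly the following sketch (the shift amount is a guess). Note that a plain right shift isn't even reversible: any character with an odd code comes back wrong:

```python
def weak_encrypt(text: str, shift: int = 1) -> str:
    # ">>" each character's code, as the old C code reportedly did
    return "".join(chr(ord(c) >> shift) for c in text)

def weak_decrypt(data: str, shift: int = 1) -> str:
    # "<<" shifts back, but any bits dropped by ">>" are gone for good
    return "".join(chr(ord(c) << shift) for c in data)
```

Even ignoring the lost bits, anyone with a copy of the binary (or a few samples of output) can undo this instantly; it's obfuscation, not encryption.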
 
I used to work for a major US software company whose product was used by many large companies. We had "security" - the code was written in C. It took the original characters and used >> to shift the bits to the right. To "decrypt" the data, it used << to shift back to the left. The number of bits that could be shifted was limited so the output stayed within the ASCII table (so a regular fprintf() could write the data out to disk).
A bank discovered our "security" on their own - needless to say, they were not very happy with the situation. Our company was aware of the problem before this happened - I just don't think they thought anyone would notice. The company ended up buying another company and replaced the homegrown crap. My point: the company I worked for didn't give a crap about the problem until a customer found out.
So true. I worked for a company that sold card-based access control systems, and they had pretty much the same attitude. I found many bugs, some serious, but they weren't particularly concerned unless it was something a customer complained about. And even then, the standard response was to blame the customer. They would be told they must have installed something wrong or done something wrong operationally - anything but admit that there were bugs in the software. Most of the time when I fixed these bugs, the company would just quietly slip them into the next release without any kind of disclosure. If the customer asked what was in the new version, they would just say "performance enhancements" - which wasn't always a lie, because many times I was really fixing design problems and my code was almost always more efficient (the original coder was an amateur), but it was still pretty unethical in my mind. Even though I fixed a lot of bugs and made numerous improvements to the code, by the time I couldn't stand the toxic work environment anymore and left, there were still more known bugs, and the communication between the various boxes was still unencrypted. I would not consider any of the cheaper systems sold by the smaller companies to be secure. When it comes to security systems, don't skimp on the budget.
 
The problem is that you are talking about backup data. On the one hand, such data needs to be kept for a very long time, meaning retention of old passwords becomes a serious issue: either everyone will know them or no one will.

Next, backup data is usually stored offline on tape. When this is the case, any compromise of the tapes means physical access, and what do we all know is the result when an attacker has physical access?

As with all issues related to vulnerabilities, there remain risks. Risks must be measured, vetted, and weighed against the costs or losses of a compromise. It's easy to identify a vulnerability and point out all the nasty things that could happen, but it's an entirely different thing to look at exactly how such a thing could actually happen and exactly what the actual damages would be if it came to pass.

For instance, take backup data that is on tape, stored physically secure and off site. An actor gains access to the data and can manipulate what's on the tape, but how can he inject his changes into the production environment? He can use the information to create fake IDs, but how frequently are important format parameters of the ID cards changed? ("All of this year's ID cards have a different background and color, which isn't part of the backed-up data.") There can be several reasons that actually doing something with the data is far less of a true risk than it seems.


I'm not sure what you mean about backup data. This was a hardcoded bypass in the auth method that skips the real validation entirely - enter the hardcoded admin username and its hardcoded password and you're in..... I work at a software development company, and something like this would get you fired. We have yearly security training for anyone who touches code, so NO one has an excuse for shit like this. I even get annoyed when they hardcode shit like this in dev environments for 'testing', because they will eventually forget about it and push the code through the environments. Also a reason to hook vulnerability scanning up to your git repos - it will flag/fail builds if they try to push code this shitty.

private static bool IgnoreAuthentication(string username, string password)
{
    if (username == "IISAdminUsr")
        return password == "Badge1";
    return false;
}
 
I'm not sure what you mean about backup data. This was a hardcoded bypass in the auth method that skips the real validation entirely - enter the hardcoded admin username and its hardcoded password and you're in..... I work at a software development company, and something like this would get you fired. We have yearly security training for anyone who touches code, so NO one has an excuse for shit like this. I even get annoyed when they hardcode shit like this in dev environments for 'testing', because they will eventually forget about it and push the code through the environments. Also a reason to hook vulnerability scanning up to your git repos - it will flag/fail builds if they try to push code this shitty.

private static bool IgnoreAuthentication(string username, string password)
{
    if (username == "IISAdminUsr")
        return password == "Badge1";
    return false;
}

I thought it was clear enough when you read it:

Identicard backups are stored in an "idbak" format, which appears to be a password-protected zip file. The password to unzip the contents is hardcoded into the application ("ID3nt1card") and is not configurable by the end user, which limits the ability to adequately protect content stored in backups. An attacker with access to these backups could obtain the potentially sensitive information inside them, and could also arbitrarily modify their contents, which could affect a future restore.
 
I thought it was clear enough when you read it:

While that is a risk, it requires physical access to the box to get the backup..... The critical vuln is the code block I pasted, which will log you in with the hardcoded admin username and password, regardless of how any real accounts are configured. Then you get admin access to the console remotely. That's some of the shittiest coding I've seen recently, and it gave us all a good laugh here. Plus, we use Tenable products, so it's nice to see articles from their security team.
 
While that is a risk, it requires physical access to the box to get the backup..... The critical vuln is the code block I pasted, which will log you in with the hardcoded admin username and password, regardless of how any real accounts are configured. Then you get admin access to the console remotely. That's some of the shittiest coding I've seen recently, and it gave us all a good laugh here. Plus, we use Tenable products, so it's nice to see articles from their security team.


So if I understand you correctly, as well as this report, the researcher discovered the vulnerability by working with backup data and realized that it pointed to the same vulnerability from the backed up system itself. The backups were showing him that the production systems are vulnerable?
 
So if I understand you correctly, as well as this report, the researcher discovered the vulnerability by working with backup data and realized that it pointed to the same vulnerability from the backed up system itself. The backups were showing him that the production systems are vulnerable?

Are we reading the same article? I'm going to the link from the [H] news page, which takes you to '...medium[dot]com/tenable-.....' The only reference to the backup DB is the image of the threat-model overview. In the article, he talks about decompiling the .NET code to find that shitty authentication-bypass method. A search for 'backup', or even for the hardcoded password you listed, turns up nothing on that page.


*edit* Found the block you mentioned, and none of that is in the main article they link to.... I always try to read the source, since little excerpts like that tend to leave a lot out. In this instance, the excerpt references something that isn't in the article..... Now I'm confused. But from a security standpoint, the hardcoded console password is a much bigger vulnerability. Combined with their shitty backup encryption and hardcoded password, I'm sure they are on everyone's shit list now, as they are completely incompetent.
 
Are we reading the same article? I'm going to the link from the [H] news page, which takes you to '...medium[dot]com/tenable-.....' The only reference to the backup DB is the image of the threat-model overview. In the article, he talks about decompiling the .NET code to find that shitty authentication-bypass method. A search for 'backup', or even for the hardcoded password you listed, turns up nothing on that page.


*edit* Found the block you mentioned, and none of that is in the main article they link to.... I always try to read the source, since little excerpts like that tend to leave a lot out. In this instance, the excerpt references something that isn't in the article..... Now I'm confused. But from a security standpoint, the hardcoded console password is a much bigger vulnerability. Combined with their shitty backup encryption and hardcoded password, I'm sure they are on everyone's shit list now, as they are completely incompetent.

I quoted right from cageymaru's news post. I can't get to the linked article. US Army thought control blocking websites, etc.
 
I quoted right from cageymaru's news post. I can't get to the linked article. US Army thought control blocking websites, etc.

Well, that explains our different arguments. I don't think I've ever seen a summary that contained completely different information than the article itself.... Not even sure where they got that summary information, since it's not in their source.
 