Panera Bread Leaks Millions of Customer Records

I'm pretty sure (not a lawyer, so I reserve the right to be wrong) that just acknowledging "status," like patient (or account holder for Panera), along with PII is enough to qualify as PHI. It just has to relate to the provision of, or payment for, past, present, or future services... being a patient is related to the provision of healthcare services.
Well, any identifiable information plus healthcare-related status = PHI under HIPAA. Panera is not a healthcare company (a "Covered Entity"), so HIPAA does not apply in this case. So, to trigger any state breach laws, you need to check certain boxes. In almost all states, the kind of information here doesn't check those boxes, because it is either public or cannot be used by a hacker to do anything.
That's the problem, we (as a society) haven't recognized the value of this data yet. I do think this is starting to change, but we've got a ways to go.
There's a good argument to be made here. The problem is how far do we go? I don't think we should go as far as the EU, where the privacy of personal information is a "fundamental right," as that makes many modern internet services nearly impossible. But I think breach laws as they stand do a pretty good job of delineating how much and what kind of data will result in a legal "breach."
Fines can be civil penalties too; criminal penalties like jail time are only warranted in cases of criminal negligence, and the burden of proof is understandably much higher. As far as WHO - it depends. There's plenty of precedent regarding criminal negligence; this isn't exactly breaking new ground. Small companies? Tough shit. Do we give small companies a pass on environmental regulations?
I'll just leave this here:
https://www.justice.gov/usao-edtx/pr/former-hospital-employee-sentenced-hipaa-violations
OK, so that's not a breach where a hacker took information. This guy worked for a covered entity and intentionally took PHI for personal gain. That's a WHOLE different scenario, and one where I can agree jail is deserved. Just as, if the hacker in a HIPAA breach is found, he should go to jail too. But in neither case should the company be held responsible... or unrelated C-suite individuals.
 
Wait, why?
A bad guy hacked their systems and took customer data. Why should the victim company be fined?


Disclaimer: I work as a data breach counsel for companies that experience data breaches. As a result, I see how these things go down on a daily basis, and how despite entirely reasonable efforts to protect data, bad guys can always get in if they want to.

Because they stored it in plain text, meaning they took exactly zero precautions.
Because they took 8 months to notify.
Because they kept data they didn't need for that long.
Because they were using that data to make money while making zero effort to secure it, all while feeding customers bullshit about why they wanted the data in the first place (oh, join our rewards club and get a free sandwich for every $100 you spend, while we make far more off selling your info).

Would you like me to continue? These companies want your data so they can use it to make more money, then want to play fast and loose with security... You are damn right they need to have the shit fined out of them when they get caught with their pants down.
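For what "taking precautions" could even minimally mean here, a hedged sketch (names, field layout, and parameters all invented for illustration, standard library only): store a salted hash of an identifier instead of the raw value, so a leaked table doesn't read like a phone book.

```python
import hashlib
import os

def pseudonymize(email: str, salt: bytes) -> str:
    """Salted, deliberately slow hash of an identifier; the raw email never hits the table."""
    return hashlib.pbkdf2_hmac("sha256", email.lower().encode(), salt, 100_000).hex()

# The salt would live apart from the data (e.g. in a secrets manager), so a dump
# of the table alone is not enough to reverse the identifiers.
salt = os.urandom(16)
record = {"customer": pseudonymize("jane@example.com", salt), "visits": 12}
```

Hashing only pseudonymizes; fields a business needs to read back (home addresses, say) would need encryption at rest instead. But either one is a step above plain text on a web-facing system.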
 
Because they stored it in plain text, meaning they took exactly zero precautions.
Because they took 8 months to notify. (Under state law they really didn't have to, except for like one state, and we don't know if they did or did not.)
Because they kept data they didn't need for that long. (It seems like the kind of data they do need in relation to their loyalty program.)
Because they were using that data to make money while making zero effort to secure it, all while feeding customers bullshit about why they wanted the data in the first place (oh, join our rewards club and get a free sandwich for every $100 you spend, while we make far more off selling your info). (Fair, but disagreeing with their loyalty program isn't a reason the company should be fined.)

Would you like me to continue? These companies want your data so they can use it to make more money, then want to play fast and loose with security... You are damn right they need to have the shit fined out of them when they get caught with their pants down.
But again, where do you draw the line between doing too little and doing enough? It's a moving target, and shouldn't companies be able to take measures relative to the kind of data? Why spend millions to protect public data?

Of course, as I indicated above, they should not have been storing this data in plain text on a web-facing system. That's just dumb. But the potential harm from the release of this data is infinitesimal.
 
But again, where do you draw the line between doing too little and doing enough? It's a moving target, and shouldn't companies be able to take measures relative to the kind of data? Why spend millions to protect public data?

Of course, as I indicated above, they should not have been storing this data in plain text on a web-facing system. That's just dumb. But the potential harm from the release of this data is infinitesimal.

Why do you keep ignoring the plain text part? You're not a pro lol. That's called negligence.
 
Well, any identifiable information plus healthcare-related status = PHI under HIPAA. Panera is not a healthcare company (a "Covered Entity"), so HIPAA does not apply in this case. So, to trigger any state breach laws, you need to check certain boxes. In almost all states, the kind of information here doesn't check those boxes, because it is either public or cannot be used by a hacker to do anything.
I think you missed my point earlier - I believe that any company storing personal info (yes, even name, email, and address) should be held to the same standards a covered entity is held to under HIPAA regulations. I realize these regulations don't exist; I think they should.
There's a good argument to be made here. The problem is how far do we go? I don't think we should go as far as the EU, where the privacy of personal information is a "fundamental right," as that makes many modern internet services nearly impossible. But I think breach laws as they stand do a pretty good job of delineating how much and what kind of data will result in a legal "breach."
While I agree the internet as it exists right now isn't terribly compatible with the privacy laws the EU has enacted, I don't think that's an entirely bad thing - the way the internet treats personal data right now is inexcusable. Take a moment and go look at the level of detail in the personal data Google collects on an Android user. I'm not saying they shouldn't be allowed to collect all that data, but if they do, they need to be held accountable for it. To be clear, I don't like the way the EU is trying to purge data that's already released, but I think it's a great idea to give individuals the ability to control what a company they interact with is allowed to store, and how that data can be used or shared. I'll use an analogy:
If I'm walking down the sidewalk yelling into my phone about a doctor's bill, and someone records a video of it, I don't think I should have the ability to have that video "purged". BUT - if I sign up for a rewards card at my local grocery store I think I SHOULD have the ability to not only cancel the rewards account, but also force the company to remove (or at least de-identify to the standards laid out by HIPAA) any information they gathered related to that account. In addition, prior to the creation of my account I should be informed of WHAT data they're collecting, and any other parties with access to the data. Increasing the scope of that data collection or the access to the data should require my specific authorization - not some stupid terms and conditions update or EULA.
OK, so that's not a breach where a hacker took information. This guy worked for a covered entity and intentionally took PHI for personal gain. That's a WHOLE different scenario, and one where I can agree jail is deserved. Just as, if the hacker in a HIPAA breach is found, he should go to jail too. But in neither case should the company be held responsible... or unrelated C-suite individuals.
If the company that employed the guy knew what was going on and didn't take immediate action, I'd hold them partially responsible. Let's say a CTO was informed that a vulnerability existed, that it was likely to be exploited, and that if exploited it would create a high likelihood of harm to anyone impacted by a breach. The CTO discusses this with other executives at the company (for a nice open-and-shut case, let's assume there are some memos on the subject), and they decide to do nothing while publicly denying the vulnerability. When some hacker exploits the vulnerability to great effect... I'd hold the hacker AND the company responsible there, wouldn't you?!

Like I said, criminal negligence isn't something new; there's plenty of precedent in other industries.
 
Maybe it's time to enact something like the GDPR here. Security breaches leaking personal data where negligence on the part of the company was involved subjects them to a fine equal to 2-4% of their annual global revenue. Let's get some teeth into penalties.
 
I think you missed my point earlier - I believe that any company storing personal info (yes, even name, email, and address) should be held to the same standards a covered entity is held to under HIPAA regulations. I realize these regulations don't exist; I think they should.
Got it.
That said, they already do. They are just state laws, not federal ones.


While I agree the internet as it exists right now isn't terribly compatible with the privacy laws the EU has enacted, I don't think that's an entirely bad thing - the way the internet treats personal data right now is inexcusable. Take a moment and go look at the level of detail in the personal data Google collects on an Android user. I'm not saying they shouldn't be allowed to collect all that data, but if they do, they need to be held accountable for it. To be clear, I don't like the way the EU is trying to purge data that's already released, but I think it's a great idea to give individuals the ability to control what a company they interact with is allowed to store, and how that data can be used or shared. I'll use an analogy:
If I'm walking down the sidewalk yelling into my phone about a doctor's bill, and someone records a video of it, I don't think I should have the ability to have that video "purged". BUT - if I sign up for a rewards card at my local grocery store I think I SHOULD have the ability to not only cancel the rewards account, but also force the company to remove (or at least de-identify to the standards laid out by HIPAA) any information they gathered related to that account. In addition, prior to the creation of my account I should be informed of WHAT data they're collecting, and any other parties with access to the data. Increasing the scope of that data collection or the access to the data should require my specific authorization - not some stupid terms and conditions update or EULA.
This I generally agree with. The problem I see is this: for many applications, it's just too arduous. Take a rewards card. If all they collect is your name, address, and purchasing information, does a long, complicated EULA/privacy release/consent form really make much sense? I mean, who cares who gets that data? It's either all public or not very useful (purchasing info).

But once you get into things like SSN, CC# and CVV, etc., I think we're on the same page. I would just want any law that lays out such standards to have a very clear, bright line. While I make my living in the grey areas, they're a PITA for businesses to comply with.


If the company that employed the guy knew what was going on and didn't take immediate action, I'd hold them partially responsible. Let's say a CTO was informed that a vulnerability existed, that it was likely to be exploited, and that if exploited it would create a high likelihood of harm to anyone impacted by a breach. The CTO discusses this with other executives at the company (for a nice open-and-shut case, let's assume there are some memos on the subject), and they decide to do nothing while publicly denying the vulnerability. When some hacker exploits the vulnerability to great effect... I'd hold the hacker AND the company responsible there, wouldn't you?!

Like I said, criminal negligence isn't something new; there's plenty of precedent in other industries.
To answer your question: not unless someone was actually harmed by the breach. See, any negligence claim requires a duty to the harmed person, a breach of that duty, AND some kind of harm caused, at least in part, by that breach of duty. How can you link a particular harm to a particular data breach? There's no such thing as damages awarded based on potential harms, at least not alone.
 
This I generally agree with. The problem I see is this: for many applications, it's just too arduous. Take a rewards card. If all they collect is your name, address, and purchasing information, does a long, complicated EULA/privacy release/consent form really make much sense? I mean, who cares who gets that data? It's either all public or not very useful (purchasing info).

But once you get into things like SSN, CC# and CVV, etc., I think we're on the same page. I would just want any law that lays out such standards to have a very clear, bright line. While I make my living in the grey areas, they're a PITA for businesses to comply with.

I'll skip your other points for now, because I think you've missed the power of something as simple as purchasing history tied to a name. Thanks to machine learning (it's always been possible, just too time-consuming and expensive), the level of detail that can be extrapolated from what you buy (how much and how often) and where you buy it is staggering. Here's an article on the subject that's nearly 5 years old - just imagine how much more sophisticated the data collection and analysis have gotten... and remember, this is just grocery stores: Link

Think about how much more information is available from Google, "connected" cars, IoT devices (like a Nest thermostat), cell phones, "smart" TVs, etc. Individual breaches might not represent as much harm as something like the Equifax breach, but lots of breaches of smaller data sets combined (I'd argue) represent the potential for more damage than a single large, high-profile one. And more to my point - no breach even needs to occur. Right now, do YOU know who Samsung (or whoever) is sharing your TV viewing habits and IP address with? What about your schedule, temperature preferences, and how long your heating system needs to run to achieve said preferences, stored by Nest? I'm pretty sure you could figure out the size of your home (and take an educated guess about your income level), your work schedule, your hobbies and interests, probably your age group, gender, and political affiliation to boot... just with those two data sets combined. Toss in your name and email and now we can target YOU specifically... either with lame ads on the "legitimate" side, or something more malicious.

Start combining even more sources of data and you have a level of intimacy most people wouldn't want (and aren't even remotely aware of) - and at present it's damn near unregulated. Google by itself knows WAY too much (imo) about people who use multiple services. They know when you've got a job interview scheduled on your Google calendar, when you leave for it, when you arrive, how long you stay, and where you go afterwards. They know your favorite pubs, who you talk/text/email with; they probably know what kind of mood you're in today.

Society needs to have a real, honest discussion about this before it gets "too big to fail." Data is quickly becoming our most valuable commodity, and it feels like most people don't have even a basic understanding of what's going on, or what's at stake.
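The "combine two innocuous data sets" worry above can be sketched in a few lines. Everything here is invented for illustration - the services, the email, the fields, and the naive inference rules:

```python
# Hypothetical "anonymous-ish" exports from two unrelated services,
# each keyed (carelessly) by the same account email.
tv_habits = {"jane@example.com": {"news_hours": 1, "late_night_hours": 4}}
thermostat = {"jane@example.com": {"away_hours": (9, 18), "sqft_estimate": 2400}}

profile = {}
for email in tv_habits.keys() & thermostat.keys():  # join on the shared key
    profile[email] = {**tv_habits[email], **thermostat[email]}
    # Naive inferences neither data set supports on its own:
    profile[email]["likely_employed_daytime"] = thermostat[email]["away_hours"] == (9, 18)
    profile[email]["likely_night_owl"] = tv_habits[email]["late_night_hours"] > 2
```

The point isn't that these toy rules are accurate; it's that the join itself is trivial once any shared identifier (email, IP, device ID) links the sets.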
 
The lack of federal oversight of PII is why a lot of states are starting their own legislation and penalties around it. Some states have very strict rules, and the fines are pretty hefty.
 
I imagine, in your line of work, you assess their situation, show them what happened, and make recommendations on how to shore up their security so it doesn't happen again? If so, that sounds a lot like: if the company had allocated proper resources to security and infrastructure the first go-around, your assistance wouldn't have been required. Nothing in security is impenetrable, but a lot of breaches can be avoided by following industry standards.
But there are serious costs when it fails.

Not really the case at all. My engagements are usually very quick. USUALLY, unless it's a MASSIVE corporation with multiple servers at issue (as in more than 50), I've got them set up to notify impacted individuals within 30-45 days. GENERALLY it looks like this:
1. I take the first phone call from the client, wherein I get a breakdown of what they think has happened based on internal IT review. Internal IT is good as a first responder, but they are entirely incapable of doing forensic work. I don't care if you're a Fortune 100 company.
2. Within 24 hours we have had a scoping call with an appropriate forensic vendor and local IT. This is usually where we form a good suspicion as to what happened, and about 80% of the time the forensic review is there to validate those guesses and provide detailed vector data. 80% of the time it's an issue of PEBCAK. These days, Office 365 phishing is the #1 attack vector; it usually leads to a mailbox download, compromise spreading through secondary spear-phishing attacks, or an RDP compromise.
3. Within 3 weeks of the first call, we know what happened, how it happened, and generally what was accessed. We have also begun or completed the process of determining whose data was accessed. (This is where the difference between 30 and 45 days comes in: if the data is well structured (databases, Excel spreadsheets, etc.), it's quick; if it is unstructured (emails, PDFs, etc.), the personal data extraction and identification requires manual review, which takes time.) We are also setting up a company to do notification services (mailing, credit monitoring, ID restoration services) and setting up a call center.
4. All that stuff is mailed and the call center is set up. I begin handling media inquiries and the periodic attorney letter.
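The structured-vs-unstructured gap in step 3 can be illustrated with a toy pass (column names, records, and the email pattern are all hypothetical): a structured export yields identifiers mechanically, one per known column, while free text only yields candidates that still need a human to attribute.

```python
import csv
import io
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

# Structured export: fixed columns, one record per row, so extraction is mechanical.
structured = io.StringIO("name,email\nJane Doe,jane@example.com\nJohn Roe,john@example.com\n")
found = [row["email"] for row in csv.DictReader(structured) if EMAIL.fullmatch(row["email"])]

# Unstructured text: the regex surfaces candidates, but deciding whose data it is
# (and whether it is sensitive) still takes a manual pass over each document.
blob = "Per our call, Jane's results went to jane@example.com on Tuesday."
candidates = EMAIL.findall(blob)
```

That manual pass, multiplied across thousands of emails and PDFs, is roughly where the extra couple of weeks go.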

I'm glad you enjoy your job enough to bloat all the steps you take in your line of work but as I said, you assess the situation, show them what happened, and then make recommendations on how to shore up their security. I never said, "Make hardware/software recommendations to improve security" or, "Companies don't spend enough on security hardware/software". It's a well known fact in the IT industry that our weakest link is the end user and the most dangerous link is the lazy administrator.
 
To add, we have a meeting next week with ForeScout and will have an engineer who's working the Atlanta breach present to discuss how they were hacked. I fully expect it to turn out that a lazy admin got phished.
 
I'm glad you enjoy your job enough to bloat all the steps you take in your line of work but as I said, you assess the situation, show them what happened, and then make recommendations on how to shore up their security. I never said, "Make hardware/software recommendations to improve security" or, "Companies don't spend enough on security hardware/software". It's a well known fact in the IT industry that our weakest link is the end user and the most dangerous link is the lazy administrator.
Bloat?
I'm an attorney. What do you think it is I do? I don't fix tech issues. I wouldn't know a packet sniffer from a firewall. We help the client get external forensic experts to do that stuff. I just translate for the business people.
 