Rap News 15
- At October 31, 2012
- By Josh More
- In Psychology
- 0
In case there is anyone reading me who doesn’t also read Bruce, watch this video:
Hoaxicane Sandy
- At October 30, 2012
- By Josh More
- In Business Security, Psychology
- 5
It’s that time again.
Whenever a major media event happens (like hurricane Sandy), we are inundated with news. Sometimes that news is useful, but often it merely exists to create FUD… Fear, Uncertainty and Doubt. While I have not personally seen any malware campaigns capitalizing on the event yet, it is inevitable. The pattern is generally as follows:
- Event hits the news as media outlets try to one-up one another to get the word out.
- People spread the warnings, making them just a little bit worse each time they are copied.
- Other people create hoaxes to ride the wave of popularity.
- Still other people create custom hoaxes to exploit the disaster financially.
A few minutes ago, at least in my little corner of the Internet, we hit stage 3 where this image was posted:
( From here. )
Now, as someone who plays with photography, I was a bit suspicious, but as a security person, I can actually prove some things here.
The first tool I want to discuss is FotoForensics. Check out their analysis.
See how the Statue of Liberty and the land on which she stands are much brighter than the background? That indicates that one image has been pasted on top of another, so we know it’s fake.
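If you want to try this kind of error level analysis yourself rather than relying on FotoForensics, here is a minimal sketch using Python and the Pillow library. The filenames are placeholders, and this is a rough approximation of the technique, not FotoForensics’ exact algorithm:

```python
# Rough error level analysis (ELA): re-save the image as a JPEG at a known
# quality and amplify the difference. Regions pasted in from another source
# tend to show a different error level than the rest of the picture.
from io import BytesIO
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=20):
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality...
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # ...then amplify the difference between the two versions.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

error_level_analysis("suspicious_photo.jpg").save("ela_result.png")
```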
Sometimes, though, this trick doesn’t work. If someone is making a good hoax, they can change the error levels to prevent easy detection. That’s where our next tool comes in. TinEye is awesome.
Look what happens when I do a reverse image search on the suspicious file: here. (TinEye results expire after 72 hours, so if you’re slow to read this, just paste the URL of the photo into their search box.)
TinEye, by default, is going to try to find the best match. But that’s not what we want. We want the original. Luckily, when people make hoaxes, they usually shrink the image to make it harder to find the signatures of a hoax. So we just click to sort by size and there we have what is likely the original:
ETA: Original can be found in this set by Mike Hollingshead.
Then it lists a bunch of sites that have stolen this image and used it without credit. (That’s a different post.) You can then click on the “Compare” link for the likely original and see what they did. By flipping between the versions, you can see that they added the Statue of Liberty, the water and the boat, shrunk the image and made it darker… ’cause darker is scarier, apparently.
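If you’d rather not eyeball the comparison, a perceptual hash can give you a rough numeric answer. Here’s a small sketch using the imagehash package (the filenames are placeholders): a small Hamming distance between the hashes suggests one image is a resized or recolored copy of the other, while heavier compositing pushes the distance up.

```python
# Compare two images with a perceptual hash (pip install pillow imagehash).
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("hoax_candidate.jpg"))
original = imagehash.phash(Image.open("likely_original.jpg"))

# Subtracting two hashes gives the Hamming distance: 0 means visually
# identical, small values suggest a resize/recolor, larger values mean
# the content has diverged.
print("distance:", suspect - original)
```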
The important thing to realize here is that the attacker is trying to manipulate you. By spreading fear, they are making you more susceptible to future attacks. By taking advantage of your uncertainty and doubt, they put you in a position where you will do unwise things to gain an element of certainty in your life. Does this matter that much with an image hoax? Probably not. But when you start getting emails exhorting you to “click here” to help victims of the hurricane, it will matter a whole lot more.
Uncertainty and doubt can work against you, but they can also work for you. When the attacks come… likely in a few hours… approach them with suspicion. If you’re in the path of the storm, trust the names you recognize, like Google and The National Weather Service. If you’re not in the path of the storm and want to send aid, go with The Red Cross. If anyone you don’t know asks for your money or your clicks, ask yourself what they have to gain.
How do you respond when a moose is on the loose?
What would you do if you discovered that attackers had taken over your server and were in the process of stealing all your data?
What would you do if law enforcement came to your place of work and demanded all of your computers as part of an investigation?
What would you do if a tornado hit your building and spread all of your computers across a mile-wide radius?
If you are like most organizations, you don’t have a plan for everything. You can think of security (in an over-simplified way) as having three areas of control: Detective, Preventative and Reactive. We tend to start with Detective. When antivirus was new, it just alerted you when you had a problem. As the technology improved, it became preventative and would stop bad applications from running. Most security technology, in fact, has followed this pattern. Intrusion Detection moved to Intrusion Prevention. Patch Detection moved to Patch Management. Log Analysis moved to full-fledged SIEM systems.
However, this progression ignores a very powerful tool. As an example, here’s a video:
What would you do if you woke up one morning to find a moose in your swingset? Odds are you’d either deal with it yourself or call someone to deal with it for you. Response is key. When things happen, whether it involves an attacker taking over a system, an external agency taking your stuff or a natural disaster, reacting to the situation is important. You can either do it in an ad hoc way, or try to plan everything out.
In general, organizations that trust their people just let their people do what they need to do. Organizations that do not trust their people invest in planning and procedures. What’s interesting is that both methods work… though not always particularly well. Sometimes people hide behind policy and avoid doing the right thing. Sometimes people hide behind uncertainty and avoid doing the right thing.
The problem here is that “right” and “wrong” are not always clear cut. Consider recent occurrences involving United Airlines, Penn State and FedEx. A reasonable response to events like these would be to say “we can’t trust our people” and to address the issue by creating policies.
But, for an even more horrifying view of the world, check out this Google News search on “followed policy.” A wider search shows that following policy has resulted in death, brain death and murder suspects being released.
So it would seem that this is a “damned if you do, damned if you don’t” situation, right?
It turns out to err is human… but human error can happen whether or not we are constrained by policy. Using policy to prevent bad things from happening requires not only that you have people who will always follow the policy, but also that you have policies that are 100% correct and written by people who can see the future. Perhaps a better approach would be to use policies as guides that people can refer to when they’re confused. Then, build a culture around the fact that it’s okay to make mistakes so long as you’re willing to apologize, attempt to fix things and learn from your error.
Not everything can be avoided. Sometimes you just have to deal.
More on the moose is here.
This article was originally published on RJS Smart Security.
Employee security awareness: it’s not about “should” or “shouldn’t.”
- At July 25, 2012
- By Josh More
- In Business Security
- 0
If there’s one myth in the footwear industry that just won’t die, it’s that everyone should have a pair of shoes. You can see the reasoning behind it, of course. We’ve all heard about the kid that ran around barefoot, stepped on a nail and had to get incredibly painful tetanus shots.
But do accidents like this prove that shoes are a must, or is it just the opposite? If people everywhere can get foot injuries with or without shoes, doesn’t that suggest that shoes really aren’t all that important?
One of the best examples ever of the limitations of shoes is Abebe Bikila, who won the 1960 Summer Olympics marathon without any shoes at all.
Fundamentally, what society is saying when demanding that people wear shoes is “it’s not our fault” if people take risks – like not wearing shoes – and get injured. But this is false. An individual has no control over where they put their feet and they don’t have the ability to recognize hazards like broken glass, nails or poisonous vipers. After all, is the average person really a match for a vicious snake? Blaming poisonings on a lack of shoes is misguided – particularly given the stabby nature of snake fangs.
I’ll admit, it’s hard to find statistical evidence that supports this point of view. Not surprisingly, shoe manufacturers don’t share data on how protective their products truly are …
That’s probably enough of that nonsense.
In case you didn’t know, this post is in response to Dave Aitel’s recent article at CSO. While I am hardly one to defend the status quo, there are two logical fallacies at play here. The first is binary thinking … effectively saying “if a defense isn’t 100% effective, it’s not worth doing.” The second is the flaw of hand-picking anecdotes to support your premise.
This is regrettable because the bulleted advice on page two of Aitel’s article is good, if somewhat standard. It’s just that this advice shouldn’t replace employee training (“wasting time on employee training”); it should be done in addition to it.
To drastically over-simplify, security involves identifying what you need to protect and then protecting it. In a global security market (which we’ve matured into), you have a second rule … identify what you want and attack until you get it. These two rules play against one another, with both the attackers and defenders constantly increasing their capabilities until a defender somewhere gets compromised or an attacker gets sloppy, caught and removed from the game.
Then, you repeat the cycle ad infinitum.
In a world that operates this way, the weakest entity is going to be the first out, on either side. And, since security is multidimensional, it will be the first entity with weak enough security along any dimension … technology, process or people. By removing your focus entirely from awareness training to focus on technology and process, you defend only part of your organization. By focusing strictly on network-based defenses, you open a massive hole for non-network attacks.
As soon as it becomes easiest for an attacker to bribe an internal employee to sell them data, they will. As soon as it becomes easiest for them to bluff their way through a job interview to steal data, they will. As soon as it becomes easiest to put on a uniform to steal equipment, they will.
The attacker’s game is “whatever works,” and if we only focus on what is easiest for us to do, we open up doors for attacks.
So … stop spending money on awareness if you want … but only do so if you have taken a good view of your entire organization and have identified areas where those resources are better spent. Be aware, though, that just as we lack solid statistics on how bad awareness is as a defensive layer, we also lack solid statistics on how good it is. For every story I can tell on how I’ve found a person not doing what they should in an organization, I have one that talks about how good they are.
If you want contrarian advice, avoid the kind that is expressed as binaries. Consider the following:
- Does password rotation cause more trouble than it’s worth? If users are selecting bad passwords because they have to change them often, maybe it’s time to stop doing that.
- If you have security alerts that are being ignored by your people, your systems probably aren’t being maintained properly. As soon as you stop maintaining your systems, they shift from being assets to liabilities. Think about fixing them … or getting rid of them.
- Are your people overly constrained? If you have customer service employees following scripts, you’ve basically turned them into technology. Turns out that we have technology in the first place because people are bad at that sort of thing. Ponder that.
- Is a data breach all that bad? In some industries, sure … but if it were universal, it seems as though there’d be a lot more companies going out of business. Think about what a breach would really mean and how you’d handle it. Odds are, you’re far weaker in response capabilities than you are on defense. Instead of shifting defense dollars from people to technology, maybe you need to invest somewhere else entirely.
Basically, the core lesson here is “think before you spend.” Don’t blindly follow the advice of anyone (including me). Assess your environment, consider your goals and the events that could prevent you from achieving them. Then, and only then, look at how you choose to use your resources.
(This post was originally published at RJS Smart Security.)
LinkedIn Password Leak – Whose Interests Are Being Served?
- At June 07, 2012
- By Josh More
- In Business Security
- 0
As I’m sure most of you have heard, there is a LinkedIn password breach going on. As breaches continue to happen, they seem to move faster and faster. Within 24 hours of the breach occurring, 60% of over six million passwords were cracked. Since people are also reading blogs more quickly these days, I’ll leap straight into what you need to do. Then, if you’re still interested, keep reading for a bit of analysis.
- Change your LinkedIn password to something random, long and complex… at least 20 characters.
- Do not use this password anywhere else.
- If you don’t remember these sorts of passwords easily, use a tool like KeePass, LastPass or 1Password.
- If you are responsible for the security of others, get them to change their passwords too.
That’s it.
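If “random, long and complex” sounds like a chore, let the computer do it and paste the result into your password manager. Here’s a minimal sketch using Python’s standard secrets module:

```python
# Generate a random 24-character password from letters, digits and punctuation.
import secrets
import string

def random_password(length=24):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```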
Now, let’s look at what happened. First of all, a set of six million hashed passwords appeared within the attacker community, along with a request for help in cracking them. The passwords are referred to as unsalted SHA1. This means that, while the passwords were hashed using a reasonable algorithm, they were not salted, which makes them much easier to crack and explains the speed with which they were broken.
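To see why the missing salt matters, here’s an illustrative sketch (not LinkedIn’s actual code, and note that even salted SHA1 is far weaker than a purpose-built scheme like bcrypt or scrypt). Without a salt, every user who picked the same password gets the same hash, so one precomputed table or wordlist run cracks them all at once; a per-user salt forces attackers to work on each account separately.

```python
# Illustrative only (not LinkedIn's actual code).
import hashlib
import os

password = "linkedin2012"

# Unsalted: the same password always produces the same hash, so a single
# precomputed (rainbow) table or wordlist run covers every user at once.
unsalted = hashlib.sha1(password.encode()).hexdigest()

# Salted: a random per-user value makes every stored hash unique, so each
# account has to be attacked individually.
salt = os.urandom(16)
salted = hashlib.sha1(salt + password.encode()).hexdigest()

print("unsalted:", unsalted)
print("salted:  ", salt.hex(), salted)
```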
The passwords were posted without email addresses. However, it is not reasonable to assume that malicious attackers would ask for help cracking passwords that they couldn’t use, so it is very likely that they have this information. They may well also have a pile of passwords that were NOT posted because they had already cracked those passwords. So, understanding these facets of the attacker community, let’s look at LinkedIn’s response.
- Members that have accounts associated with the compromised passwords will notice that their LinkedIn account password is no longer valid.
- These members will also receive an email from LinkedIn with instructions on how to reset their passwords. There will not be any links in this email. Once you follow this step and request password assistance, then you will receive an email from LinkedIn with a password reset link.
- These affected members will receive a second email from our Customer Support team providing a bit more context on this situation and why they are being asked to change their passwords.
On the face of it, this is reasonable. After all, if LinkedIn sent you an email with a password reset link, it’d look a whole lot like a fraudulent email with a password-stealing link. So, props to LinkedIn for thinking this through.
However, there is still the matter of trust.
See, the key to this whole response is “Members that have accounts associated with the compromised passwords”. This concerns me as it implies that LinkedIn pulled hashed passwords from their database and compared them to the PUBLIC breach data. This will necessarily miss any accounts that the attackers have not released. This could be accounts with simple passwords or particularly sensitive accounts. Suppose they filtered out all accounts that started with “ceo@” or “president@”. Intelligent criminals would want to keep those sorts of accounts to themselves, even if they took a while longer to crack.
One of the core rules of dealing with a data breach is that if you don’t know how it happened and can’t prove that it only affected a limited number of accounts, you must assume that all accounts are compromised. In this case, a better security response would be to put information about the breach on the front page. At this time, there’s nothing there. Once I log in, though, there is a tiny link under “LinkedIn Today” that references an article on CNN about the breach. Basically, there is nothing prominent or official other than their blog… which you must already be following to notice.
The response that I would like to see would involve the following pieces:
- Information as to what happened and what LinkedIn is doing to prevent a recurrence.
- Information about how to select a good password and change it on the system.
- This information sent out via email, posted on the blog and highlighted after logging in to the system.
Instead, the best we get is this advice, which is inadequate. Let’s pick this apart. The original is in italics. My commentary will be in bold.
Changing Your Password:
- Never change your password by following a link in an email that you did not request, since those links might be compromised and redirect you to the wrong place.
- I agree with this.
- You can change your password from the LinkedIn Settings page.
- If your account has been compromised, you should be locked out and unable to access the Settings page. They should direct people to the next bullet instead.
- If you don’t remember your password, you can get password help by clicking on the Forgot password? link on the Sign in page.
- This is good, as it requires any password to involve an out-of-band mechanism like access to your email account.
- In order for passwords to be effective, you should aim to update your online account passwords every few months or at least once a quarter.
- Bad bad bad! Needing to change passwords frequently implies poor security on the part of the administrators. If they are monitoring their systems and capable of knowing when an event occurs, they will tell you when to change your password. People that are forced to frequently change passwords tend to select weaker passwords and use them on more sites. This means that if ANY site is breached, ALL accounts are placed at risk. This is probably the worst advice they give.
Creating a Strong Password:
- Variety – Don’t use the same password on all the sites you visit.
- Good. Also, don’t use the same base. For example, if you pick “password123” as a base, and your LinkedIn password was “password123LI”, it’s not a big stretch to “password123FB” for Facebook or “password123WF” for Wells Fargo.
- Don’t use a word from the dictionary.
- I think we put too much emphasis on this. The fact is that the dictionaries we use in the security world are very different from your average Merriam-Webster’s or OED.
- Length – Select strong passwords that can’t easily be guessed with 10 or more characters.
- I think that 10 is too short. I say 20 above. Most of mine are over 30. The longer the password, the more time you have to deal with resets in the event of a breach. (A rough bit of arithmetic after this list shows just how much length matters.)
- Think of a meaningful phrase, song or quote and turn it into a complex password using the first letter of each word.
- Passphrases are good… completely random strings are better. I like to use passphrases to access my password wallets, and the wallets to store the real passwords.
- Complexity – Randomly add capital letters, punctuation or symbols.
- I agree with the general intent here, but humans are bad at randomness. Let a computer generate your passwords and you’ll be a lot better off.
- Substitute numbers for letters that look similar (for example, substitute “0” for “o” or “3” for “E”).
- Bad advice. Most attacker dictionaries include these substitutions so it only makes things more difficult for you.
- Never give your password to others or write it down.
- Well, never give your password to others anyway. If you can’t remember a good password, write it down. Just store the paper in a secure place… like a safe. Better yet, store it in a password wallet system that keeps the datafile in a digital “safe”, properly encrypted and away from prying eyes.
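To put some rough numbers behind the length advice above: each additional random character multiplies the search space, so going from 10 to 20 characters doesn’t double an attacker’s work, it raises it by roughly twenty orders of magnitude. A quick back-of-the-envelope sketch (assuming truly random characters drawn from letters, digits and punctuation; human-chosen passwords have far less entropy per character):

```python
# Back-of-the-envelope search-space sizes for truly random passwords.
import string

alphabet_size = len(string.ascii_letters + string.digits + string.punctuation)  # 94

for length in (10, 20, 30):
    guesses = alphabet_size ** length
    print(f"{length} random characters: about 10^{len(str(guesses)) - 1} possibilities")
```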
A few other account security and privacy best practices to keep in mind are:
- Sign out of your account after you use a publicly shared computer.
- You know what would be better? “Don’t sign into your account from a public computer.”
- Manage your account information and privacy settings from the Profile and Account sections of your Settings page.
- If you understand the privacy settings in each social media system you use, give yourself a gold star. Might be better if sites like LinkedIn had secure defaults and users could choose to weaken them.
- Keep your antivirus software up to date.
- Yes, because of all the LinkedIn viruses we see running amok. This is like a car company issuing a brake recall with the advice “remember to only drive on roads”. The truth is that anti-malware systems are needed because our operating system and application vendors have failed in their jobs. It’s not LinkedIn’s fault, but the advice doesn’t really belong here either.
- Don’t put your email address, address or phone number in your profile’s Summary.
- Really? I mean, REALLY? Isn’t the whole point of LinkedIn to share your contact information with others? Hmm… perhaps LinkedIn’s stock does better if people only contact one another through LinkedIn’s “mail” system. Then again, perhaps more people would use that system if it worked more reliably. Perhaps I’m editorializing a bit more than I should be. ;)
- Only connect to people you know and trust.
- This is interesting advice, given that many people use LinkedIn to meet new people and get new opportunities. LinkedIn offers very little to people that would actually follow this rule, as if you already know and trust someone, you already have their contact information. LinkedIn never really took off as a content platform like MySpace, Facebook or even Google+. Everyone knows that no one is going to follow this advice. Besides, the greater risk here is leaking your personal information to someone you “know and trust” whose account has been compromised. This is a case for a security tradeoff and careful consideration of what you share. A blind prohibition is not useful.
- Report any privacy issues to Customer Service.
- Here’s a bit of advice. Only refer people to your customer service when you know it’s good. Just sayin’.
Basically, what we have here is a situation where LinkedIn has strong incentives to downplay the issue. They look bad already, so the smaller and less significant the breach, the less immediate damage they face. They also very much do not want the world to seriously weigh the risks of sharing their personal information via the Internet. After all, the entire business model of social media is riskier than we’d like to think. The sooner everyone figures this out, the less money the owners make and the more people in the industry lose their jobs.
This is in direct conflict with what the users (or product) of LinkedIn need. We need to be able to trust the people we give our information to. We need to know that they are doing what they should, investing in good technology, people and processes, and being forthright with us as to what is going on. We need a partner that communicates with us with our own needs in mind, not just their own.
When one person is best served with honesty and the person they are talking to is best served by lying, there are going to be problems. Consider this in the wake of any breach, whatever side you land on. The long term future of any relationship in conflict is less than rosy.