un-excogitate.org
what was I thinking? (Christian Frichot's ad-lib on security and what-not)

Secure software is difficult. Threats are evolving and expanding, and compromises occur at every layer of the chain, from OS vulnerabilities to application vulnerabilities to transport vulnerabilities to end-point ownership and people hacking. The efforts of so many different groups are proof that it’s all gone awry. OWASP, SANS/MITRE and the Rugged Manifesto are a few prime examples, and even the message they carry now seems ancient – attackers aren’t targeting your OS or services anymore, they’re after your applications, be they custom or not. They’re after your end-users.

Building secure software isn’t difficult just because of the changing threat agents, though; it’s difficult because building good software of any calibre is difficult, from grass-roots web development firms to the internally resourced development teams of large organisations to the large independent software vendors. It’s difficult because developers and architects may not get on (here are 5 possible reasons), and it’s difficult because even software developers hate software. This last point was highlighted succinctly by Jeff Atwood in his post on why Nobody hates software more than software developers:

“In short, I hate software — most of all and especially my own — because I know how hard it is to get it right. It may sound strange, but it’s a natural and healthy attitude for a software developer. It’s a bond, a rite of passage that you’ll find all competent programmers share.

In fact, I think you can tell a competent software developer from an incompetent one with a single interview question:
What’s the worst code you’ve seen recently?

If their answer isn’t immediately and without any hesitation these two words:
My own.

Then you should end the interview immediately. Sorry, pal. You don’t hate software enough yet. Maybe in a few more years. If you keep at it.”

Jeff wasn’t the only person to talk about the difficulties of software development; in fact, on the very same day, J. Timothy King was also writing about the 10 things he hates about software development, including:

“2. Snotty developers. I must confess to going through a snotty phase myself. It’s part of growing up. You have to live as a snot-nosed, young-whippersnapper, green-behind-the-ears code slinger before you can mature into a wizened Yoda figure. In fact, part of me still may be a snotty developer. I’m definitely snotty when it comes to my own work, because I don’t want anyone telling me how it should be done, as long as I achieve the intended results. But as someone who’s been doing this shtick for 20-something years, I’ve grown weary of junior colleagues telling me I don’t know what I’m talking about. And when something doesn’t work out as well as they thought it should, they persistently maintain that it had nothing to do with them, despite the fact that they had ignored every piece of advice I gave them. There’s only one sure-fire remedy I know of for this problem, and that is to insist on a higher rate of pay. People may balk at paying you through the nose, but when they have to—especially managers—they not only accept your advice, they seek it out, because for the money they’re paying you, they expect you to solve their problems.”

The whole soft-skills-versus-hard-skills debate has been a core issue for quite some time, and it’s one that is unlikely to simply end. There will always be arrogant developers out there, just as there will be arrogant managers or security people. But if your developers are behaving like a bunch of prima donnas, you have an underlying problem that needs attention before you have any chance of addressing the security and risk concerns. (As much as I’d like to agree with the sentiment of Jeff Luckett in the previously linked article to just “Fire ‘em“, I’m unsure that’s actually the best solution.)

There are, of course, some other alternatives for dealing with the symptom of “The Know-it-all“:

“This symptom is a manifestation of Arrogance. Arrogance is a defence against vulnerability and insecurity, often learned in childhood when parents constantly criticise a child for not being good enough. The person is so afraid of being seen as unworthy or incompetent, that they immediately throw up a defensive shield against any possible attack. This defence protects them for a while, but everyone else sees that it is false.

In the end, they lose credibility and respect — the thing they fear most.”

  1. When you see someone go into attack mode or excess defensiveness, recognize that it is useless to argue with them.
  2. Realize that the person is feeling very insecure at that time.
  3. Don’t continue to push them because they will only get worse.
  4. If the symptoms only seem to occur when the person is under stress, wait until another time to pursue the discussion.
  5. If they are always overly defensive or always attacking others, you may need to find another person to work with who does not have the same problem.
  6. Keep your own sense of self-confidence and don’t allow yourself to be verbally abused.
  7. If the difficult person is your boss, reconsider whether it’s time to find a job elsewhere.


I think there’s something fundamentally wrong when your biggest fear at the end of a risk assessment isn’t so much that you’ve got a “critical*” finding, but that you don’t know how to tell management. It’s an interesting phenomenon, and I believe most information security people run into it all the time. What compounds it, and makes me completely gob-smacked, is when the discussion turns to ways you can downgrade the finding.

Say what?

And don’t try to pretend you haven’t been privy to these discussions; we’ve all seen it or heard of it happening. “What if we only account for a small population of users? What if we nudge up the value of our controls? What if…” What they’re basically asking is, “What if we just change some of these values and downplay what we as a group agree is the risk?”

The good news is, the probability of the risk having been exaggerated in the first place is often quite high *phew* – so perhaps this “base-lining” is useful?

This is one of the reasons why I’m a fan of FAIR (Factor Analysis of Information Risk); it makes it easy to:

  1. Reduce the probability of exaggerated risk statements in the first place – or at least make it more difficult for them to make it through to the end; and
  2. Eliminate the possibility of even arriving at a “critical*” finding without making statements about the frequency of loss events and the probable loss, as opposed to the worst-case loss we bang on about all the time, which leads us to the sky-is-falling situation (there’s a rough sketch of this framing after the note below).

*Nb: “Critical” adj. Whatever-the-hell you want it to mean.
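For the curious, here’s a back-of-the-envelope Monte Carlo sketch of that FAIR-style framing. It’s illustrative only – the triangular distributions and dollar figures are invented, and a real FAIR analysis decomposes loss event frequency and magnitude much further – but it shows how quickly “critical” turns into a defensible statement about probable annualised loss.

```python
# Illustrative only: a crude Monte Carlo take on the FAIR framing, not the full
# FAIR taxonomy. All distributions and dollar figures below are invented.
import random

def simulate_annual_loss(trials: int = 10_000) -> list:
    losses = []
    for _ in range(trials):
        # Loss Event Frequency: events per year, elicited from the people
        # closest to the system (minimum, maximum, most likely).
        lef = random.triangular(low=0.1, high=4.0, mode=0.5)
        # Probable loss magnitude per event, in dollars -- not the worst case.
        magnitude = random.triangular(low=5_000, high=250_000, mode=20_000)
        losses.append(lef * magnitude)
    return sorted(losses)

losses = simulate_annual_loss()
print(f"median annualised loss exposure : ${losses[len(losses) // 2]:>12,.0f}")
print(f"90th percentile                 : ${losses[int(len(losses) * 0.9)]:>12,.0f}")
```

The useful part isn’t the numbers themselves; it’s that every input has to be written down and defended, which is exactly where the exaggerated findings tend to fall over.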

Image thanks to: http://www.flickr.com/photos/pinksherbet/3484925590/



Let’s state some facts:

  1. Most of your appliances (Firewalls, ID(P)Ses, Proxies, Email Gateways, Storage Devices, etc) have web interfaces for management
  2. Most vendors recommend that these web interfaces should not be accessible to the public (except those vendors that provide their interfaces over the Internet in some form of *aaS)
  3. All modern browsers provide a function to store your passwords

Now let’s make some assumptions:

  1. Many admins are lazy (or just not aware of the risks of these types of interfaces and auto password fields)
  2. Most developers developing these backend web management interfaces are NOT accounting for external threat agents (i.e. – the only people who can access this interface are internal resources)
  3. Many developers are not mitigating against common web attack vectors due to the above

Result?

I believe that most appliances are vulnerable to common Cross Site Request Forgery (CSRF – Yeah, It Still Works) attacks. I don’t mean they’re only partially vulnerable because they’ve implemented basic (and known to be ineffective) referrer checking; I mean they’re probably not even doing the simple stuff, like ensuring that parameters are only accepted via POST requests as opposed to GET requests. I believe this so much that I’ve even offered pints* to those people finding interfaces without these weaknesses.

We’ve done test after test of appliance interfaces, and it’s not even a surprise any more when you find non-idempotent GET methods that simply require an appropriate “Authorization” header to perform functions such as adding a new admin user, resetting the device to factory defaults, or simply shutting down the system. More often than not you don’t even need to lure an administrator into clicking anything; you can just include these GET requests in a bunch of webpages or emails (or RSS feeds) under the clever disguise of an <img> tag (a contrived example follows).
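To make that concrete, here’s a contrived sketch of the sort of thing we keep finding. The host, endpoint, parameters and credentials are all made up – no particular vendor is being picked on – but the shape is depressingly familiar: a state-changing GET that needs nothing more than the credentials the browser will happily supply on its own.

```python
# Hypothetical example -- the host, path, parameters and credentials are invented.
import requests

BASE = "https://192.0.2.10"  # an appliance management interface on the internal network

# What a tester finds: a non-idempotent GET that only needs an Authorization header.
resp = requests.get(
    f"{BASE}/admin/users",
    params={"action": "add", "user": "backdoor", "pass": "hunter2", "role": "admin"},
    auth=("admin", "admin"),  # Basic credentials the browser will cache and replay
    verify=False,             # self-signed certificates are the norm on appliances
)
print(resp.status_code)

# What an attacker then plants in a web page, email or RSS feed. The victim's browser
# fetches the "image" and silently attaches the cached credentials or session cookie:
payload = (
    '<img src="https://192.0.2.10/admin/users'
    '?action=add&user=backdoor&pass=hunter2&role=admin">'
)
```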

So come on, appliance vendors, pick up your game. Stop trying to imagine that there is a ‘gator-filled moat between the administrators accessing your products and the nasty web. The browser is the OS, and the people managing your appliances have Twitter and Facebook and God-knows-what open on different tabs. Look, we’ve made it easy for you – just have a read of the OWASP Cross Site Request Forgery Cheat Sheet. Even a little double-serving of Cookies can help (nom nom). Better yet, if you’re building a web management interface for your appliance, utilise pre-built security controls such as OWASP’s Enterprise Security API (ESAPI) – this library even comes with FREE anti-CSRF methods. Amazing!
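If you want a feel for what the cheat sheet is asking for, here’s a minimal, framework-agnostic sketch of the synchronizer token idea (the function names are mine, not ESAPI’s): refuse state changes over GET, and require a per-session secret that the <img>-tag attacker can’t know.

```python
# A minimal sketch of the synchronizer token pattern; function names are illustrative.
# Real interfaces should use their framework's built-in CSRF protection, or a library
# such as ESAPI, rather than rolling their own.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a per-session token, store it server-side, embed it in every form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def state_change_allowed(method: str, session: dict, submitted_token: str) -> bool:
    """Reject anything non-POST, and anything without the matching token."""
    if method.upper() != "POST":  # non-idempotent actions must never ride on GET
        return False
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)

# Usage sketch
session = {}
form_token = issue_csrf_token(session)                       # rendered into the admin form
assert state_change_allowed("POST", session, form_token)     # legitimate submission
assert not state_change_allowed("GET", session, form_token)  # the <img> trick fails
assert not state_change_allowed("POST", session, "guessed")  # a forged token fails
```

The “double-serving of Cookies” above is the double-submit variant of the same idea: the token lives in a cookie and must be echoed back in the request body, so a cross-site <img> fetch can never supply the matching pair.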

*Nb: You have to come to Perth to collect :)

(This interface goes from Shotgun to Hoover! – Which do you want?)



I’ve had the opportunity to digest a couple of good reads over the past week. First up was Charles Leadbeater’s Cloud Culture: The Future of Global Cultural Relations, and if you’re at all interested in emerging technology and the way it’s impacting (global) society then this is a must-read. I really liked the style, and the 81 pages just flew by (maybe the formatting?). Some interesting pointers that stuck with me (nothing really new, but worth paraphrasing nonetheless):

  • The future will be of many clouds. This can only be achieved by embracing an open source approach to technology and information.
  • For all the benefits that we’re starting to perceive in this new open communication platform, there are still powers working their tentacles to slow it down – authoritarian governments, for example. Thai authorities “have used crowdsourcing to uncover the addresses of websites making comments critical of the Royal family..“. To a different degree, perhaps, there’s our own government here in Australia and its unremitting push on Internet filtering.
  • “Cloud culture” will enhance the creativity of people, giving them new methods to collaborate, but this can only continue as long as we don’t make it too restrictive to share and work on material.

Of course, this could’ve just been written as the “Internet” culture, but it carries more weight when it focuses on the collaborative nature of how the Internet looks these days.

Secondly, I had a chance to read something a little more local. The team over at KPMG have released their December 2009 Fraud Barometer and, similar to the above, there’s nothing entirely earth-shattering in it, but sometimes it’s useful to cite local reports when trying to “scare” people about their control environment. And by scare, I mean reinforce your fantastic risk assessments on your projects and other important information assets. I also found it interesting to see the number of frauds committed against government, considering those frauds don’t appear to involve that much money compared to, say, finance or commercial companies.

So the prize for “no-surprise-graph-most-useful-to-reinforce-or-scare” goes to Figure 6, Frauds by perpetrator. Management sit towards the bottom in number of frauds, but they’re responsible for the largest amount of money defrauded. On the opposite side of the table is the massive number of frauds perpetrated by employees, and how little they defrauded. This makes sense, of course: management have access to more resources and there are fewer of them than normal employees. Pretty graph, anyway.

Enjoy!



There’s been a lot of talk in the media recently about different ways in which companies can deal with the Global Financial Crisis (GFC). Redundancies, capping recruitment, capping pay, or perhaps promoting the four-day week. This last option has been getting quite a lot of press here in Australia and the rest of the world. So does this work with info sec staff? I think it depends on the role and the current number of people filling that role.

Let’s start with analysts/engineers/first-line responders, the guys in the trenches who turn the gears and make sure our security technology is working and monitored. Depending on the business environment, this is perhaps one of those roles that can’t easily be sliced down to 4 days. The conflict here is that if your business requires 24/7 monitoring, or at least 24/7 on-call, you can’t easily make people redundant (how will you cover all your hours?) or ask people to work 4 days (to cover the same hours you will require more staff, which negates the point of the four-day week). The risk here is that your monitoring area misses incidents, or makes mistakes managing your security infrastructure, because of time constraints.

What if you employ security auditors or security testers? These types of roles, similar to security architects and designers, will often see their workload ebb and flow depending on the number of projects. If the current GFC hasn’t impacted your project portfolio then your testers would probably still be required full-time, but whether they’re required for a full 5 days would depend entirely on how many projects there are and the degree of detail required for each. The issue here is that with your testers only working 4 days, either the time dedicated to testing or the time they can spend keeping abreast of testing techniques may be impacted. This is, of course, assuming that you employ these resources, as opposed to using consultants. These types of roles could probably pull back to four days without too much detriment, and knowing the types of people who perform these roles, it’s not as if they would simply fill their 5th day with vegging out – they’d still be monitoring their feed readers and working on tools or other security-related projects. The risk of your testers either not keeping on top of industry trends, or not focusing enough time on testing your systems, is that vulnerabilities may make their way into production.

Employees involved with security architecture and design fall into the same area as your security testers. Depending on the project load, these roles could potentially work only 4 days, but would have to watch the time spent on enhancing their knowledge and not fall behind the game. As with the testers, the risk of not giving your designers enough time is that systems may be designed to a lesser degree of quality, or may take longer to complete.

Your information risk specialists or analysts, perhaps not technical resources but sitting within a consulting or shared-service type of area, will potentially have a difficult time pulling back to four days. Core responsibilities here are advising the business on business and technical risks in the context of business-as-usual activities or changes and projects. Whilst a potentially dwindling number of projects may suggest that these roles can easily go to four days, with the business trying to do the same amount of work as before, but with less time, it may have a tendency to look at riskier solutions. If this is the case, now is possibly the time for your risk specialists to refocus their efforts to ensure that risks are being considered and reviewed appropriately. The risk here, of course, is that activities get performed without the appropriate rigour being applied and “slip past the keeper” into production.

If your environment is fortunate enough to have dedicated security policy roles, it would be fairly safe to say that the impact to the organisation wouldn’t be too great if they were asked to work 4 days. This assumes that their primary responsibilities are the maintenance and review of policy and standards documentation. Depending on the size of your policy/standards set, the risk in reducing time in this area is that some of your documentation may fall behind.

Security managers fall into a 50/50 bucket, depending entirely on the amount of people management they are currently doing and how much time they spend on the strategy of the security organisation (or whether they leave that to the architects). If your security managers’ core responsibilities include just these two activities, then perhaps shifting to a four-day week wouldn’t be too difficult. I’ve seen, in a number of situations, consultants acting as security managers at multiple locations. One of the key risks I see here is the impact of security management decisions not happening as fast as they should. Of course, there are ways to deal with this situation.

Are any of you security professionals currently working four days? I would like to hear your feedback!


