un-excogitate.org
what was I thinking? (Christian Frichot's ad-lib on security and what-not)

Secure software is difficult. Threats are evolving and expanding, and compromises occur at every layer of the chain, from OS vulnerabilities to application vulnerabilities to transport vulnerabilities to end-point ownership and people hacking. The efforts of so many different groups are proof that it’s all gone awry. OWASP, SANS/MITRE and the Rugged Manifesto are a few prime examples, and even the message they carry now seems ancient: attackers aren’t targeting your OS or services anymore, they’re after your applications, custom or otherwise. They’re after your end-users.

Building secure software isn’t difficult just because of the changing threat agents, though; it’s difficult because building good software of any calibre is difficult, from grass-roots web development firms to the internally resourced development teams of large organisations to the large independent software developers. It’s difficult because developers and architects may not get along (here are 5 possible reasons), and it’s difficult because even software developers hate software. This last point was highlighted succinctly by Jeff Atwood in his post on why nobody hates software more than software developers:

“In short, I hate software — most of all and especially my own — because I know how hard it is to get it right. It may sound strange, but it’s a natural and healthy attitude for a software developer. It’s a bond, a rite of passage that you’ll find all competent programmers share.

In fact, I think you can tell a competent software developer from an incompetent one with a single interview question:
What’s the worst code you’ve seen recently?

If their answer isn’t immediately and without any hesitation these two words:
My own.

Then you should end the interview immediately. Sorry, pal. You don’t hate software enough yet. Maybe in a few more years. If you keep at it.”

Jeff wasn’t the only person talking about the difficulties of software development; in fact, on the very same day, J. Timothy King was writing about the 10 things he hates about software development, including:

“2. Snotty developers. I must confess to going through a snotty phase myself. It’s part of growing up. You have to live as a snot-nosed, young-whippersnapper, green-behind-the-ears code slinger before you can mature into a wizened Yoda figure. In fact, part of me still may be a snotty developer. I’m definitely snotty when it comes to my own work, because I don’t want anyone telling me how it should be done, as long as I achieve the intended results. But as someone who’s been doing this shtick for 20-something years, I’ve grown weary of junior colleagues telling me I don’t know what I’m talking about. And when something doesn’t work out as well as they thought it should, they persistently maintain that it had nothing to do with them, despite the fact that they had ignored every piece of advice I gave them. There’s only one sure-fire remedy I know of for this problem, and that is to insist on a higher rate of pay. People may balk at paying you through the nose, but when they have to—especially managers—they not only accept your advice, they seek it out, because for the money they’re paying you, they expect you to solve their problems.”

The whole soft-skills-versus-hard-skills debate has been a core issue for quite some time, and it’s one that is unlikely to simply end. There will always be arrogant developers out there, just as there will be arrogant managers or security people. But if your developers are behaving like a bunch of prima donnas, you have an underlying problem that needs attention before you have any chance of addressing your security and risk concerns. (As much as I’d like to agree with the sentiment of Jeff Luckett in the previously linked article to just “Fire ’em”, I’m not sure that’s actually the best solution.)

There are, of course, some other alternatives for dealing with the symptom of “The Know-it-all”:

“This symptom is a manifestation of Arrogance. Arrogance is a defence against vulnerability and insecurity, often learned in childhood when parents constantly criticise a child for not being good enough. The person is so afraid of being seen as unworthy or incompetent, that they immediately throw up a defensive shield against any possible attack. This defence protects them for a while, but everyone else sees that it is false.

In the end, they lose credibility and respect — the thing they fear most.”

  1. When you see someone go into attack mode or excess defensiveness, recognize that it is useless to argue with them.
  2. Realize that the person is feeling very insecure at that time.
  3. Don’t continue to push them because they will only get worse.
  4. If the symptoms only seem to occur when the person is under stress, wait until another time to pursue the discussion.
  5. If they are always overly defensive or always attacking others, you may need to find another person to work with who does not have the same problem.
  6. Keep your own sense of self-confidence and don’t allow yourself to be verbally abused.
  7. If the difficult person is your boss, reconsider whether it’s time to find a job elsewhere.


Last week I was fortunate to be able to attend ECU’s SecAU Congress of 2010. As with most academic conferences it was mostly filled with research papers and future project concept presentations, but there were a few outstanding talks that really stuck with me. With 4 streams running over 3 days, though, I certainly found myself struggling to pay attention towards the end.

There were a few presentations from Deakin University: one on information disclosure during a Victorian Police case, and a second on the impact of microblogging in the corporate environment. Both touched on Gen Y and how the upcoming generation has a different take on information handling. A particular throwaway comment from the presenter (a self-proclaimed baby boomer? Or Gen X?) that caught my attention was that his generation was used to “need to know” as a core information paradigm, whilst the up-comers take a “need to share” approach to handling information.

The panel on the first day on Stuxnet was nothing too ground-breaking, and the comments were mixed: it was a near-miss, it was an impactful event, it’s a sign of things to come, we’ve been talking about these sorts of things for a while, etc. They did discuss the number of people believed to have been involved in creating the malware and the expected cost, which was quite interesting, but nothing that hadn’t, I believe, already been researched by the AV companies.

Mike Johnstone from ECU gave an interesting presentation on the MS threat modelling process. The core premise was around the use of DFDs, and how they’re quite an old-style software engineering design tool. He instead created a mapping from DFD elements to the more common swim-lane or ERD diagrams to see if MS’s processes would still apply. I think the concepts did align somewhat, in particular with swim-lane diagrams where different lanes defined trust boundaries of some form. It’ll be interesting to see if Mike has any luck getting these concepts to work with MS’s Threat Modelling Tool.
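For reference, the MS process enumerates threats per DFD element type using STRIDE, which is what any swim-lane remapping would need to preserve. Here’s a minimal sketch of that per-element idea (my own illustration, not Mike’s mapping; the element-to-category table is the commonly published one):

```python
# A minimal sketch of Microsoft's STRIDE-per-element idea: each DFD
# element type only attracts a subset of the six STRIDE categories.
# The table below is the commonly published mapping; illustrative only.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# DFD element type -> applicable STRIDE category codes
# (data stores also attract "R" when they hold audit logs)
THREATS_PER_ELEMENT = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TID",
    "data_flow": "TID",
}

def enumerate_threats(elements):
    """Yield (element name, threat) pairs for (name, type) tuples."""
    for name, element_type in elements:
        for code in THREATS_PER_ELEMENT[element_type]:
            yield name, STRIDE[code]

if __name__ == "__main__":
    # A toy diagram; remapping from swim-lanes would mean deriving the
    # element types and trust boundaries from the lanes first.
    diagram = [
        ("Customer", "external_entity"),
        ("Web App", "process"),
        ("Orders DB", "data_store"),
        ("HTTP request", "data_flow"),
    ]
    for name, threat in enumerate_threats(diagram):
        print(f"{name}: {threat}")
```

Seen this way, Mike’s exercise is less about the diagrams themselves and more about whether lanes can stand in for element types and trust boundaries before a table like this is applied.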

Andy Jones from Khalifa Uni in UAE demonstrated a POC forensic registry analysis tool that looked like it had great potential. Small, lightweight, focusing on doing one thing well.

Damian Schofield’s talk on the use of 3D rendering, graphics and games engines for presenting evidence in a court of law was REALLY interesting. Damian gave the most entertaining, and therefore (I found) the most enjoyable, presentation of the conference. Some interesting concepts that I jotted down:

  • Forensic evidence is complex. The more complex it is, the more difficult it is to understand.
  • Crux of a case is clear presentation. Presentation of technical issues is the most important thing.
  • The public have a very high graphic literacy, but due to the “uncanny valley” you often have to combine abstract and realistic graphics.
  • The CSI effect is difficult to manage: everyone’s a forensic scientist, everyone believes that DNA is useful and accurate in all cases, and juries expect fancy graphics.

Damian worked on a project called JIVE (Juries and Interactive Visual Evidence), trying to measure the impact of these evidence sources on hypothetical court cases. They used real judges and barristers on fake cases, and the results seemed to indicate that interactive, visual evidence had a relatively modest impact overall.

Murray Brand from ECU gave a few talks on malware and the methods of deception and obfuscation these tools use. His research indicates that the presence of anti-examination features is itself indicative of malware, and would be a useful technique for malware detection.
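That observation lends itself to a simple heuristic. As a rough sketch of the idea (my own illustration, not Murray’s tooling), you could scan a Windows binary’s import table for well-known anti-debugging APIs; pefile is a real library and the API names are real, but the detection logic here is deliberately naive:

```python
# A naive sketch of "anti-examination features as a malware indicator":
# flag PE files whose import table references well-known anti-debugging
# APIs. Requires the real `pefile` library (pip install pefile); the
# API list and pass/fail logic are illustrative only.
import sys
import pefile

ANTI_DEBUG_APIS = {
    b"IsDebuggerPresent",
    b"CheckRemoteDebuggerPresent",
    b"NtQueryInformationProcess",
    b"OutputDebugStringA",
}

def anti_debug_imports(path):
    """Return the set of anti-debugging APIs imported by the PE at `path`."""
    pe = pefile.PE(path, fast_load=True)
    pe.parse_data_directories()  # populate DIRECTORY_ENTRY_IMPORT
    found = set()
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in entry.imports:
            if imp.name and imp.name in ANTI_DEBUG_APIS:
                found.add(imp.name.decode())
    return found

if __name__ == "__main__":
    hits = anti_debug_imports(sys.argv[1])
    if hits:
        print("Anti-examination indicators:", ", ".join(sorted(hits)))
    else:
        print("No obvious anti-debugging imports found.")
```

Of course a packed sample will hide its imports, but that evasiveness is arguably part of the same signal Murray is describing.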

The final day had a presentation on CCTV in London, and then a panel on the effectiveness of CCTV in increasing public safety. The presentation was somewhat interesting, mainly for the scale and confusion of CCTV in London. Because CCTV operations are run on a regional basis without any sort of standardisation, law enforcement don’t even know how many cameras there are; the current estimate is around 4.2 million. Another thing he mentioned was that during the London bombings there were about 25 staff available, but only about 8 who were fluent with the new digital CCTV systems being rolled out, whilst the officers and their superiors all believed that the resources were infinite.

The problem with both the presenter and the panel was that all the participants (except for the academics) were pro-CCTV, almost in a completely blind fashion. No one could really say statistically that cameras have helped, but they all “felt” that the perception of CCTV certainly helped. I wanted to ask their thoughts on whether, if it’s largely a perception issue, a better risk-based (and possibly more cost-effective) approach would be to intersperse fake cameras in the mix.

The only relief came when one of the panelists from Deakin discussed his experience in Melbourne. The local media had made a kerfuffle when the train station had some CCTV installed. He then went out to dinner, left his car there and caught a cab home. In the morning he went to get the car and found it vandalised. When he visited the CCTV office, the single officer on duty mentioned he would not have time to review the 20 or so hours of footage to find the culprits. They were never found. (And some of the other panelists were surprised… what?)



I think there’s something fundamentally wrong when your biggest fear at the end of a risk assessment isn’t so much that you’ve got a “critical*” finding, but that you don’t know how to tell management. It’s an interesting phenomenon, and I believe most information security people run into it all the time. What compounds it, and leaves me completely gob-smacked, is when the discussion turns to ways you can downgrade the finding.

Say what?

And don’t try to pretend you haven’t been privy to these discussions. We’ve all seen it, or heard of it happening. “What if we only account for a small population of users? What if we nudge up the value of our controls? What if…” What they’re basically asking is: “What if we just change some of these values and downplay what we as a group agree is the risk?”

The good news is, the probability of the risk having been exaggerated in the first place is often quite high *phew* – so perhaps this “base-lining” is useful?

This is one of the reasons why I’m a fan of FAIR; it makes it easy to:

  1. Reduce the probability of exaggerated risk statements in the first place, or at least make it more difficult for them to make it through to the end; and
  2. Ensure you can’t even arrive at a “critical*” finding without putting statements around the frequency of loss events and the probable loss, as opposed to the worst-case loss we bang on about all the time, which leads to the sky-is-falling situation (there’s a rough sketch of this after the footnote).

*Nb: “Critical” adj. Whatever-the-hell you want it to mean.
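As a rough illustration of that second point (a minimal sketch of my own, not the FAIR standard, and every number in it is invented), a risk statement can be forced to carry loss event frequency and loss magnitude estimates, with the finding expressed as a simulated annualised loss distribution rather than a single worst-case figure:

```python
# A minimal FAIR-flavoured Monte Carlo sketch (illustrative only, not
# the FAIR standard): the risk statement must supply (min, most likely,
# max) estimates for Loss Event Frequency (events/year) and Loss
# Magnitude ($/event); the output is an annualised loss distribution
# instead of a single worst-case number.
import random

def tri(estimate):
    """Sample a (min, most_likely, max) triple.

    Note random.triangular's argument order is (low, high, mode).
    """
    low, mode, high = estimate
    return random.triangular(low, high, mode)

def simulate(lef, lm, trials=10_000):
    """Return `trials` simulated annualised losses."""
    return [tri(lef) * tri(lm) for _ in range(trials)]

if __name__ == "__main__":
    # Invented estimates: 0.1-2 loss events/year, $10k-$250k per event.
    losses = sorted(simulate(lef=(0.1, 0.5, 2.0),
                             lm=(10_000, 40_000, 250_000)))
    mean = sum(losses) / len(losses)
    p90 = losses[int(0.9 * len(losses))]
    print(f"Probable (mean) annualised loss: ${mean:,.0f}")
    print(f"90th percentile annualised loss: ${p90:,.0f}")
```

Even a toy simulation like this makes downgrading harder to do quietly: to move the finding you have to visibly change the frequency or magnitude estimates, not just nudge an opaque score.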

Image thanks to: http://www.flickr.com/photos/pinksherbet/3484925590/



I’ve had the opportunity to digest a couple of good reads over the past week. First up was Charles Leadbeater’s Cloud Culture: The Future of Global Cultural Relations, and if you’re at all interested in emerging technology and the way it’s impacting (global) society then this is a must-read. I really liked the style, and the 81 pages just flew by (maybe the formatting?). Some interesting points that stuck with me (nothing really new, but worth paraphrasing nonetheless):

  • The future will be one of many clouds. This can only be achieved by embracing an open source approach to technology and information.
  • For all the benefits that we’re starting to perceive in this new open communication platform, there are still powers working their tentacles to slow it down: authoritarian governments, for example. Thai authorities “have used crowdsourcing to uncover the addresses of websites making comments critical of the Royal family..“. Maybe, to a different degree, our own government here in Australia and its unremitting push on Internet filtering.
  • “Cloud culture” will enhance the creativity of people, giving them new methods to collaborate, but this can only continue as long as we don’t make it too restrictive to share and work on material.

Of course, this could’ve just been written as the “Internet” culture, but it carries more weight when it focuses on the collaborative nature of how the Internet looks these days.

Secondly, I had a chance to read something a little more local. The team over at KPMG have released their December 2009 Fraud Barometer, and, similar to the above, it’s nothing entirely earth-shattering, but sometimes it’s useful to cite local reports when trying to “scare” people about their control environment. And by scare, I mean reinforce your fantastic risk assessments on your projects and other important information assets. I also found it interesting to see the number of frauds committed against government, considering government doesn’t appear to lose that much money to fraud compared to, say, finance or commercial companies.

So the prize for “no-surprise graph most useful to reinforce or scare” goes to Figure 6, Frauds by perpetrator. Management sit towards the bottom by number of frauds, yet they’re responsible for the largest amount of money defrauded. On the opposite side of the table is the massive number of frauds perpetrated by employees and how little they defrauded. This makes sense, of course: management have access to more resources, and there are fewer of them than there are normal employees. Pretty anyway.

Enjoy!



I’m surprised it took ISACA (or ISC², or maybe FAIR) this long to create an information risk certification. The first question we asked when we saw this was “well, what about all the other risk certifications, how is this different?” I immediately responded that those other certifications and qualifications have been around for a long time and the disciplines they are based on are mature, whilst information risk is still in its infancy. In addition, most of the existing certifications are based on financial risk.

Current tweets on the topic don’t appear positive, and until ISACA release some more information (or any information), I would tend to agree. Thinking about how such a certification might make an impact within my workplace, my mind drew blanks. Will it make the people who perform risk assessments any better at it? Probably not. Will it increase their accuracy? I don’t think so. Would it make the people receiving the outputs of these risk assessments trust those outputs more? Probably not.

It wasn’t until I got home, started thinking about this post and re-read the material that I realised the certification appears more control-based than risk-based. (Emphasis mine.)

The Certified in Risk and Information Systems Control™ certification (CRISC™, pronounced “see-risk”) is intended to recognize a wide range of professionals for their knowledge of enterprise risk and their ability to design, implement, monitor, and maintain IS controls to mitigate such risk.

I think this highlights some of the core issues with the certification. Knowledge of enterprise risk is something that is refined with time and experience; it’s a complex and almost completely people- and process-driven exercise. A certification will not help the people side of this exercise: you can’t get experience through a certification.

An IS risk certification’s strength therefore relies on its ability to bolster the process, not the person, yet most of the current wording indicates that the certification is about designing, implementing and monitoring controls. To me this sounds like a mashup of a security architecture certification (SABSA perhaps?) and security operations certifications (with a splash of GIAC).

Regardless of all this, I think there will be a flurry of activity within the industry around April, when ISACA open the certification up to the grandfathering program. I mean, if you already do this in your job, why not acquire the cert without having to sit an exam? We all have the experience; we’ve been doing risk assessments since we started to walk, followed swiftly by advising the business why it shouldn’t do stupid thing X. If we can’t actually get more objective with our assessments, at least the certification will give the appearance of being more objective. Win win!


