The aftermath of the WannaCry ransomware attack continued to dominate the headlines this past week, with the conversation ranging from who should be held responsible to what can be done to stop future attacks. There were more than enough opinion pieces to get through, and I tend to favor thoughtful analysis over impulsive fear-mongering. A couple of pieces I came across fell into the former camp.
Dennis Fisher of On the Wire points out that while we expected something like this, we're really only at the beginning of a trend, as the attacks will only get better with each passing attempt. While this certainly paints a grim picture, it does raise an interesting thought: if we get to a point where we can expect attacks like this, we should be able to focus our energy on mitigation, instead of the usual mudslinging response that only favors the attackers by wasting valuable time once an attack is in the wild. Imagine if every time there was a fire, the fire department spent a day arguing over how it started instead of putting it out. Counterproductive much?
In another solid article, Ben Thompson of Stratechery argues that it's the business model of software that's to blame, not Microsoft, the government, or the users. Software is never perfect and is always evolving, which is misaligned with one-time licensing fees. His argument makes a strong case for SaaS, since vendors can continually update their software without forcing action on customers.
As you know from last week’s newsletter, I gave a BeyondCorp talk at the Rocky Mountain InfoSec Conference. I already published the slides, but for those interested in the accompanying commentary, I’ll be recording a live webinar next week that will be very similar. You can reserve your spot here.
Here are a few additional things that caught my eye this past week.
Orgs overwhelmed by vulnerabilities, alerts, report says [SC Magazine]
Alert fatigue has hit anyone who has ever been on-call – security and operations teams alike. A report from Bay Dynamics shows just how dire the situation is, with nearly 80% of respondents feeling overwhelmed by the sheer volume of threat alerts. I've seen some pretty amazing tools out there that can help filter out what's really important, but they still work in a reactive manner. We may never fully solve alert fatigue, but the more proactive our systems can be, the better.
In Search of an Rx for Enterprise Security Fatigue [DarkReading]
Speaking of fatigue, the pressure on companies to protect their systems and data keeps rising, as every week seems to bring a new high-profile attack or breach. Many are feeling the fatigue of keeping up with the latest vulnerabilities and vendor solutions, as both seem to grow in parallel. Here Rick Orloff says that it's important for companies to first and foremost collect as much data as they can, so they have a clear understanding of the environment they need to protect. Only then can a company implement the proper controls.
Diversify Your Cloud Portfolio or Bank on Failure [Built to Adapt]
As more enterprises plan their cloud deployments, Josh McKenty of Pivotal believes it makes the most sense to spread workloads across multiple providers, selecting the best services for the job. I generally agree with this approach, although there is a benefit in being able to tie events and services together in a cohesive manner. The key is finding the right abstractions that get the most out of the underlying platform while still enabling flexibility and portability.
One Standard to Rule Them All: A Common Language for the Cloud’s Identity Management Crisis [The New Stack]
One challenge of a multi-cloud deployment is dealing with the various identity and access management specifications. There is no single standard, nor will there be one any time soon in my opinion. Here Jackson Shaw makes the case for a single standard across cloud applications – his example being SCIM, an open standard and REST API that defines a schema and protocol for managing identities.
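To make that a little more concrete, here's a minimal sketch in Python of what provisioning a user over SCIM 2.0 looks like. The /Users endpoint, the application/scim+json content type, and the schema URN come from the SCIM spec (RFC 7643/7644); the base URL and bearer token are hypothetical placeholders, not any particular vendor's API.

    # Minimal sketch: create a user via a SCIM 2.0 /Users endpoint.
    # BASE_URL and TOKEN are hypothetical placeholders.
    import json
    import urllib.request

    BASE_URL = "https://example.com/scim/v2"   # hypothetical SCIM service
    TOKEN = "REPLACE_WITH_TOKEN"               # hypothetical credential

    new_user = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "jdoe",
        "name": {"givenName": "Jane", "familyName": "Doe"},
        "emails": [{"value": "jdoe@example.com", "primary": True}],
        "active": True,
    }

    req = urllib.request.Request(
        f"{BASE_URL}/Users",
        data=json.dumps(new_user).encode("utf-8"),
        headers={
            "Content-Type": "application/scim+json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )

    with urllib.request.urlopen(req) as resp:
        created = json.loads(resp.read())
        print(created.get("id"))  # server-assigned identifier for the new user

The appeal of a common standard like this is that the same payload and endpoint shape would work against any compliant provider, which is exactly the portability problem a multi-cloud deployment runs into.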
HTTPS on Stack Overflow: The End of a Long Road [Nick Craver]
A long but fascinating read on what it took to get Stack Overflow to HTTPS. Nick Craver goes into great detail on all of the technical considerations and lessons learned along the way. Well worth the read.
That does it for this week. Check back this time next week for another set of relevant news, articles, and events. Cheers,
Ivan at ScaleFT