Cloud-based directory services – a panacea?

I was talking to someone at InfoSec a few weeks ago about cloud-based directory services. We were discussing some of the challenges associated with Identity and Access Management and whether those would be more or less prevalent using a cloud-based solution. They said that the great thing about having a cloud-based directory services solution is that it’s a clean environment and hence would not suffer from ‘legacy’ issues such as inappropriate access rights or rights accumulated over time.

So is a cloud-based directory services solution a panacea for IAM? Let’s look at some of the challenges:

  • Multiple entitlement stores – at idax we think it is important to have a consolidated view of user entitlements, so we welcome the idea of bringing together federated access rights from modern cloud services into a centralised repository. idax supports one or many stores and has helped clients rationalise their disparate entitlement stores into a single view, so a single store fits well with our vision.
  • New joiners & movers – we often find organisations who still grant access to new starters based on the access rights of someone they will be working with rather than based on the role they will be doing.  We also find a correlation between the amount of time a person has been at an organisation and the number of access rights they have which suggests they have accumulated rights over time which should have been revoked.  This problem will not go away with a cloud based solution, although clearly migrating to a suite of new cloud based services may provide an opportunity to clean up some of the legacy  entitlements. idax allows you to identify which access rights a person should have when they join or move within an organisation.  Many of these decisions can be automated with no need for manual approval.  idax then integrates with your existing provisioning solution, or has built in workflow to track any manual provisioning which may need to take place.
  • Role-based access – organisations have long struggled with role-based access rights. As the number of people, applications and access rights increases, the problem gets exponentially more difficult. We think this is likely to continue with cloud-based solutions, as the problem of figuring out what access a particular person should have does not get any easier. idax looks at the existing access rights within an organisation and establishes profiles to determine who should have access to what (a simplified sketch of this kind of profiling follows the list). Furthermore, we do it right out of the box; there is no need for a large analysis exercise to establish profiles and set up rules, and typically, once the data is loaded, idax can get answers in hours rather than months.
  • Principle of least privilege – due to some of the challenges outlined above, the principle of least privilege has also historically been a difficult thing to achieve in practice. Again, we believe that in a cloud-based environment the same challenges will not only persist, but the risks of ignoring them will be exacerbated. One of the great advantages infrastructure as a service and software as a service bring is that it becomes much easier for organisations to provide access to their systems from different devices and locations. That very flexibility means that organisations need to be much more confident that people only have access to the systems they need in order to do their job.
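
To make the profiling idea concrete, here is a minimal sketch of peer-group profiling over a hypothetical (user, department, entitlement) extract. The data, field names and thresholds are illustrative assumptions, not idax’s actual model.

```python
from collections import defaultdict

# Hypothetical extract of (user, department, entitlement) grants.
# Data, names and thresholds are illustrative only.
grants = [
    ("alice", "finance", "ledger_read"), ("alice", "finance", "ledger_write"),
    ("bob",   "finance", "ledger_read"), ("bob",   "finance", "ledger_write"),
    ("carol", "finance", "ledger_read"), ("carol", "finance", "prod_db_admin"),
    ("dave",  "finance", "ledger_read"), ("erin",  "finance", "ledger_read"),
]

def build_profiles(grants, common=0.8, rare=0.25):
    """Derive a per-department entitlement profile and flag unusually rare grants."""
    dept_users, holders = defaultdict(set), defaultdict(set)
    for user, dept, ent in grants:
        dept_users[dept].add(user)
        holders[(dept, ent)].add(user)

    profiles, outliers = defaultdict(set), []
    for (dept, ent), users in holders.items():
        share = len(users) / len(dept_users[dept])
        if share >= common:
            profiles[dept].add(ent)        # candidate for automatic grant on join/move
        elif share <= rare:
            outliers += [(u, dept, ent) for u in users]  # candidate for review
    return dict(profiles), outliers

profiles, outliers = build_profiles(grants)
print(profiles)   # {'finance': {'ledger_read'}}
print(outliers)   # [('carol', 'finance', 'prod_db_admin')]
```

In a joiner or mover scenario, the profile would drive the automatic grants while the outliers feed the review queue; a real implementation obviously needs richer peer groups than a single department attribute.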

In summary, we think cloud-based directory services are an excellent tool for helping manage entitlements in a cloud-based application architecture. However, after a brief respite due largely to moving to new applications and decommissioning old ones, organisations will find the challenges of identity and access management do not get any easier. Further, because of the increase in the number of end-points where a piece of software can be used, the challenges become even more important to solve.

At idax we believe identity analytics is the way forward.  If you would like to learn more, please get in touch.

Machine Learning – is Amazon the answer?

A really great insight from Travis Greene on the launch of the AWS Machine Learning cloud service and the impact it may have on IT security.

I agree wholeheartedly with his commentary, but here are a few additional insights from work with our clients in the identity and access management space and the world of analytics. @idaxsoftware.

Data Theft, Breaches – IAM

Data theft, breaches and what that has to do with IAM

Mark Ward from the BBC has published an interesting article concerning data theft and breaches; the PwC report it references also has some useful data on insider threats and the part that access control has to play.

Controlling insider staff access is unsexy, but absolutely critical. As with the leak of celebrity images from iCloud (see our article on the Naked Ladies, which is giving us some interesting hits on our analytics!), I would always favour internal theft over external hack as an explanation.

It never fails to astonish us how big companies struggle with this. Of course, millions of access points need an analytic, big-data, Identity and Access Management approach, because all the evidence suggests that just getting managers to work harder doesn’t work. At idax we’ve been preaching this approach for years now and are building a case history of dramatic governance improvements. The evidence suggests that managers supported by analytics are clearly the way forward for IAM.

What the PwC report seems to suggest is that expecting your managers to spend their time – a scarce and expensive resource at the best of times – regularly reviewing the access rights of their staff may not actually be protecting you.

Our experience is that, with the proliferation of technology – mobile, unstructured data, Active Directory – managers are rarely qualified to conduct full reviews and are too busy doing their “real” job; after all, they will generally have no incentive, time or point of reference to do the job justice.

Yes, a system of regular departmental reviews used to be enough for the Auditors, but increasingly they are also questioning the value of a process that seems to deliver more audit points than control.

The answer is one we’ve been promoting at idax for some time now:

  • Use analytics to understand the geography of access – who has access to what.
  • Use those same techniques to identify the access rights that present a low risk to the organisation for lower-priority reviews.
  • Support reviews of high risk items with contextual risk analysis that gives managers a sporting chance of making a good decision.

If this can also be coupled with decision support in real time, at the point in the process at which access rights are granted, you can make a real contribution to reducing risk across the organisation rather than just ticking boxes.
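
As a purely illustrative sketch of that prioritisation, assume each grant already carries a peer-rarity score and a system-sensitivity weight (hypothetical inputs, however they are produced); the thresholds below are made up.

```python
# Toy prioritisation of access reviews; field names, weights and thresholds
# are assumptions for illustration, not a product specification.

def review_priority(rarity: float, sensitivity: float) -> str:
    """Combine how unusual a grant is among peers with how sensitive the system is."""
    risk = rarity * sensitivity          # both inputs assumed to lie in [0, 1]
    if risk < 0.1:
        return "low: certify in bulk, deprioritise"
    if risk < 0.5:
        return "medium: include in periodic manager review"
    return "high: targeted review with contextual evidence"

grants = [
    {"user": "alice", "entitlement": "payroll_admin", "rarity": 0.9, "sensitivity": 0.9},
    {"user": "bob",   "entitlement": "wiki_read",     "rarity": 0.2, "sensitivity": 0.1},
]

for g in grants:
    print(g["user"], g["entitlement"], "->", review_priority(g["rarity"], g["sensitivity"]))
# alice payroll_admin -> high: targeted review with contextual evidence
# bob wiki_read -> low: certify in bulk, deprioritise
```

The same score can just as easily be evaluated at the moment a grant is requested, which is the real-time decision support described above.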

Predictive Analytics: JPMorgan

Predictive Analytics: JPMorgan rolls out a program to identify rogue employees before they go astray.

A really interesting article on Bloomberg about how JPMorgan are using predictive analytics to identify outliers in their trading organisation:

“JPMorgan Chase & Co., which has racked up more than $36 billion in legal bills since the financial crisis, is rolling out a program to identify rogue employees before they go astray”

It looks like they’re taking multiple inputs, comparing them to known patterns of rogue activity, back-testing to determine what activity is significant and then using that to predict future behaviour. At idax we’re working with a number of the big banks, and I know of at least two other organisations that are doing something very similar, so no surprises there given the size of the fines.

However, this raises a couple of questions in my mind. Firstly, to what degree are these tests successful? We all know that the problem tends to be false positives – you can find the outliers, but you find ten times as many other people too. Which begs the question: is the major benefit of these exercises PR, or can you really find the bad guys? I guess we’ll find out.

But secondly, I’m guessing that they’re using a learning algorithm that improves over time as you add more data and more business intelligence, referred to as “supervised” learning. It’s very effective but tends to be quite high maintenance. What we use at idax is “unsupervised” learning, where you need no business knowledge and no back-testing data. The advantages are: it’s much quicker to set up – hours rather than weeks; you get results straight away; there’s loads of high-quality, actionable information; and, for access control, it doesn’t cost the earth.
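
To make the distinction concrete, here is a generic example of unsupervised outlier detection on a toy user-by-entitlement matrix using scikit-learn’s IsolationForest. It illustrates the general approach only – it is not the specific algorithm idax uses – and the data is invented.

```python
# Generic unsupervised outlier detection on access data (requires numpy and scikit-learn).
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy user-by-entitlement matrix: rows are users, columns are entitlements (1 = granted).
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],   # an unusual combination of rights
])

model = IsolationForest(n_estimators=100, random_state=0)
model.fit(X)                          # no labels and no back-testing data required
scores = model.decision_function(X)   # lower score = more anomalous
print(scores.argmin())                # index of the most unusual user; the last row stands out
```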

NB: Bloomberg get a 9/10 from me on the article. Minus 1 for mentioning “Minority Report”. Can we please move beyond Tom Cruise?

http://www.bloomberg.com/news/articles/2015-04-08/jpmorgan-algorithm-knows-you-re-a-rogue-employee-before-you-do

Cost and Scale of Data Breaches Increase

Great article here by Tara Seals in Infosecurity Magazine. Number 9 on her list – “Cost and Scale of Data Breaches” – is a much underrated risk and is indeed set to increase dramatically:

As cyber-criminals get smarter and the pace of communications accelerates, organizations are being forced to continually adapt and rapidly respond to a shifting threat landscape. The Information Security Forum (ISF) is taking a view to 24 months out, predicting that ever-faster internet speeds, tech rejectionists and even human death will all be hallmarks of the future security reality.

Threat Horizon 2017, the latest in a series of the ISF’s annual Threat Horizon reports, identifies nine specific emergent threats that encapsulate the imminent dangers that the ISF considers the most prominent. They all have the capacity to transmit their impact through cyber-space at break-neck speeds, particularly as the use of the internet spreads beyond the estimated 50 percent of the literate population who are already connected, the organization noted in its report.

The threats are:

  1. Increased Connectivity Speeds Present Issues in Organizational Response Time
  2. Criminal Organizations Become More Structured and Sophisticated
  3. Widespread Social Unrest Breaks Out, Led by ‘Tech Rejectionists’
  4. Dependence on Critical Infrastructure Becomes Dangerous
  5. Malicious Agents Weaponize Systemic Vulnerabilities
  6. Legacy Technology Crumbles
  7. Disruption to Digital Systems Leads to Verifiable Human Deaths
  8. Global Consolidation of Organizations Endangers Competition and Security
  9. Cost and Scale of Data Breaches Increases Dramatically

“The pace and scale of information security threats continues to accelerate, endangering the integrity and reputation of trusted organizations,” said Steve Durbin, managing director of the ISF. “Although cyber-space offers opportunities for leading organizations, this environment is uncertain and potentially dangerous.” He added, “We predict that many organizations will struggle to cope as the pace of change intensifies.  Consequently, at least until a conscious decision is taken to the contrary, these nine threats should appear on the radar of every organization.”

For instance, regarding the first point, it’s clear that reasonably-priced gigabit connectivity will become widely available to supply the growing demands of devices and users, signifying a dramatic leap forward, increasing both data volume and velocity. In an interview, Durbin laid out some of the risk scenarios for super-charged connectivity.

“As billions of devices are connected, there will be more data that must be managed,” he explained. “Conventional malicious use will increase rapidly, resulting in cascading failures between sectors. This will enable new and previously impracticable avenues for destructive activity online, increasing financial and reputational liabilities and overwhelming traditional defenses. When combined with the steady growth of processing power and storage, this increased connectivity will allow malicious actors to launch new attacks that will be both lucrative and difficult to detect. Businesses will struggle to keep up with these attacks.”

Also, as connectivity gets faster and more mission-critical functions are moved online and to the cloud, ISF predicts that the disruption of digital systems in transport and medical services will lead to verifiable deaths. Organizations should thus assess the exposure to and liabilities of cyber-physical systems, and revise corporate communication and crisis response mechanisms accordingly.

Related to the hyperconnectivity issue, increasing network scale, helped along by global consolidation, presents another emerging threat. As the pending Comcast-Time Warner Cable and AT&T-DirecTV mega-mergers demonstrate, broadband companies are interested in getting larger. Companies of all sizes will have fewer options for connectivity, which could give network operators undue influence (and create a known number of “super-vectors” for criminals to attack).

To address this threat, organizations need to first identify and assess risks related to dependence on the suppliers for which there are few alternatives; engage in dialogue and exchange information with governments to assess the extent to which markets remain either competitive or closed; invest in expanding and diversifying the suppliers of critical services; and, where diversification proves difficult, focus instead on embedding resilience in information security strategies.

“Network operators should be ever-mindful of the challenges that consolidation brings to the industry, and should proactively engage in dialogue with governments and regulators whilst continuing to operate in a transparent fashion with customers,” Durbin said. “This will be challenging and may bring them into conflict with government security agencies, as we have seen with Apple and Google, in terms of providing access to government agencies to core products, but will be essential as they are a provider of core infrastructure service which continues to grow in importance. Maintaining an objective stance will be difficult, but essential, to preserve the trust of the end user.”

Despite lightning-fast broadband, the report predicts that “tech rejectionists” will disrupt local economies in response to record levels of socio-economic inequality, leading to widespread, global, social unrest.

“Discontent will be driven by uncertainty and confusion and inflamed by job losses and displacement due to globalization and automation,” Durbin said. “Rejectionists will dismiss the benefits of technology-enabled globalization, pointing instead at the social and economic costs shouldered by those who are not among the economic elite. The resulting chaos will disrupt businesses and supply chains, and force countries to reconsider the balance between technological progress and long-established social and economic equilibriums.”

The future of data science

Thought-provoking words from Hilary Mason about the future of data science

Hilary Mason is an important person in the world of data science, so her words are always worth listening to. This interview has some particularly thought-provoking ideas: https://lnkd.in/dHUzJ7B

As she rightly says “Things that maybe 10 or 15 years ago we could only talk about in a theoretical sense are now commodities that we take completely for granted. Hadoop existed, but was still extremely hard to use at that point. Now it’s something where I hit a couple buttons and a cloud spins up for me and does my calculations and it’s really lovely.”

My view is that it’s a lot more recently than 10 years ago that the data science toolkit has really entered the realms of the possible. Hand in hand with that is the fact that the majority of corporate technologists are unaware of how far data science has come, and frankly disbelieving about what is now possible.

At idax, we perform data science on identity and access management data, using unsupervised learning techniques to determine whether internal staff’s access rights are appropriate. As a result, we tend to perform analytics on reasonably large data sets, with hundreds of thousands of accounts and millions of permissions.

But the main observation from our clients is that, for the non-data-scientist, there’s still a lot of catching up to do. Of course, they love the results. Being able to dynamically determine a risk rating for all staff, with no additional business knowledge being input, is a huge benefit.

But their general unfamiliarity with the techniques means that, firstly, they can’t quite believe that their corporate entitlements database can be analysed in real time on a machine no bigger than a high-end gaming laptop; secondly, that by using in-memory databases and algorithm optimisation we can provide them with results across the whole domain in seconds and minutes rather than hours; and lastly, that the dirtier the data, the better the results.
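
As a rough back-of-envelope sketch of the “fits on a laptop” point, a sparse user-by-entitlement matrix at roughly the scale described above takes only tens of megabytes; the dimensions below are made up for illustration.

```python
# Illustrative sizing only: a sparse entitlements matrix at this scale
# fits comfortably in memory on an ordinary laptop.
import numpy as np
from scipy import sparse

n_users, n_entitlements, n_grants = 300_000, 50_000, 5_000_000   # assumed dimensions

rng = np.random.default_rng(0)
rows = rng.integers(0, n_users, n_grants)
cols = rng.integers(0, n_entitlements, n_grants)
data = np.ones(n_grants, dtype=np.int8)

# CSR storage keeps only the non-zero grants, not the full 300k x 50k grid.
X = sparse.csr_matrix((data, (rows, cols)), shape=(n_users, n_entitlements))
size_mb = (X.data.nbytes + X.indices.nbytes + X.indptr.nbytes) / 1e6
print(f"~{size_mb:.0f} MB in memory")   # a few tens of MB, so whole-domain scoring can run in seconds
```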

As Mason says: “A lot of people seem to think that data science is just a process of adding up a bunch of data and looking at the results, but that’s actually not at all what the process is. To do this well, you’re really trying to understand something nuanced about the real world, you have some incredibly messy data at hand that might be able to inform you about something, and you’re trying to use mathematics to build a model that connects the two.”

Identity Management Analytics: What access?

What do you need access to? Some weeks ago, I was discussing identity management analytics with a friend. He doesn’t work in IT, but he’s really bright and has held some pretty senior positions along the way. “Why don’t you just ask people what they need to have access to?” he said. Spluttering over my curry, I trotted out the usual: it’s more complicated than that; you can do that in small companies, but not in big ones; what about if people lie? But in reality, current processes and controls don’t really work that well, tools are woefully inadequate, and I’m sure many managers just ask their staff “so what do you need to have access to?”. On the basis that this isn’t a great solution, what is the radical, game-changing answer?

In the last two years the reporting of data loss, regulatory breaches and rogue trader activity has grown significantly. As a result, firms are stepping up their efforts to protect data and resources. But as boards and risk committees sign off ever-increasing budgets, what they may not realise is that they are sanctioning an over-reliance on manual processes, external auditors and consultants, and embedding the shortcomings of manual processes into the organisation. Staff at the coal face are overrun trying to interpret the information they already have whilst new data arrives daily. And the one thing that’s certain is that working harder is not going to solve anything.

Though organisations are different distances along the journey, most fall into three categories. Reactors respond to immediate threats but don’t manage risk at a strategic level. Guardians improve processes and have more pervasive control systems, but the costs outweigh any perceived risk benefit, and real risks go unaddressed. Only Leaders have the deep analysis coupled with the right tools to manage risk in a cost effective way. Ensuring that firms avoid this cycle of audit, remediation and control failure is critical if firms are to gain real confidence that assets are protected.

So what should firms do if they want to be leaders? The first thing is to really understand what assets staff have access to and identify control failures and potential regulatory failures before they happen. To do this you need to have tools that analyse:

  • Identity and role: which systems users have access to, and how this matches their peers.
  • Control and Process: Identifying gaps and how efficiency can reduce cost and risk.
  • Usage: The context in which staff access systems and how that changes risk.
  • Location: How usage of systems and data align with regulation and legislation.

When a firm can do that in a repeatable, sustainable, automated and predictive way it is on the way to really having confidence that it has control over what access it is handing out and the risks that poses.
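
One way to picture how those four lenses could roll up into a single, repeatable risk view is sketched below; the weights and field names are hypothetical and exist only to illustrate the shape of such a score.

```python
# Toy composite of the four lenses above; weights and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRisk:
    peer_mismatch: float   # identity & role: how far the grant deviates from peers
    control_gap: float     # control & process: missing approvals, stale certifications
    unusual_usage: float   # usage: out-of-hours access, or access never actually used
    location_risk: float   # location: use from jurisdictions with conflicting rules

    def score(self, weights=(0.4, 0.2, 0.2, 0.2)) -> float:
        parts = (self.peer_mismatch, self.control_gap, self.unusual_usage, self.location_risk)
        return sum(w * p for w, p in zip(weights, parts))

print(round(AccessRisk(0.9, 0.5, 0.1, 0.0).score(), 2))   # 0.48 -> worth a targeted review
```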

Risk Management: Mo Farah wasn’t a Poisson

As we continue to bask in the post-Olympic glow of national achievement and the “2012 effect”, it seems strange to remember the dim, dark days at the start of the games when Team GB went a whole three days without winning a gold medal. As the press shrieked that we were heading for disaster, unable to meet our target of 20 golds despite massive investment, I asked myself what parallels there were with risk management, and what really were Mo Farah’s chances?

Well, as we all now know, actually pretty good. Of course, only an idiot would assume that winning 29 gold medals over 16 days should equate to two every day with Sundays off, but even so, how likely should a medal-less day be? Well, if you assume a Poisson distribution – commonly used for estimating event frequency – and take an average of 1.8 golds a day, the chance of a day with no golds is 16%. The chances of a super Saturday were actually 7%.
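
For anyone who wants to check the arithmetic, a minimal Poisson calculation with an average of 1.8 golds a day reproduces the 16% figure; the model choice simply follows the text above.

```python
# Poisson check: P(k golds in a day) = exp(-lam) * lam**k / k!
from math import exp, factorial

lam = 1.8   # average golds per day, roughly 29 golds over 16 days

def p(k: int) -> float:
    return exp(-lam) * lam**k / factorial(k)

for k in range(7):
    print(k, f"{p(k):.3f}")
# 0 0.165   <- the roughly 16% chance of a gold-less day quoted above
# 1 0.298
# 2 0.268
# 3 0.161
# 4 0.072
# 5 0.026
# 6 0.008
```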

The bad news is that, as you can see from the chart, the Poisson doesn’t quite fit what actually happened. The good news is that a day without any golds was actually more likely, at 38% of all Olympic days. The least likely of the daily totals below five golds was a single-gold day, which only happened once. The last day of the boxing, since you ask.

So why does any of this matter? Because it shows we are very bad at estimating how frequently things happen, even when it’s quite straightforward. We assume that events are evenly distributed and get confused when they’re not. Not much of a problem with gold medals, but quite a big problem when you’re trying to understand access rights, detect fraud, and regulate access to our highly valuable systems and data. And that goes double for those trying to write the regulation.

We assume that because failures are relatively unlikely, they are also uniformly infrequent. Having spent the best part of a decade working on access control, risk and regulation, it’s clear to me that an approach that defines controls by exception management – otherwise known as “the boss checks my work” – will perform splendidly with the “frequent but not disastrous” but does nothing to stop the “very infrequent but quite awful”.

So a strange lesson from the Olympics is that risk management and regulation are going to consistently fail until we stop managing with our intuition, educate ourselves about understanding big data and start really using automatic analysis to predict and analyse.

So next time you ask yourself how to protect yourself from those with inappropriate access to systems and data, think automation and analysis, and that way maybe the Olympic legacy can be more robust access security as well as more kids playing sport.