
Predictive Analytics: JPMorgan


Predictive Analytics: JPMorgan rolls out a program to identify rogue employees before they go astray.

A really interesting article on Bloomberg about how JPMorgan are using predictive analytics to identify outliers in their trading organisation:

“JPMorgan Chase & Co., which has racked up more than $36 billion in legal bills since the financial crisis, is rolling out a program to identify rogue employees before they go astray”

It looks like they’re taking multiple inputs, comparing them to known patterns of rogue activity, back-testing to determine which activity is significant, and then using that to predict future behaviour. At idax we’re working with a number of the big banks and I know of at least two other organisations that are doing something very similar, so no surprises there given the size of the fines.

However, this raises a couple of questions in my mind. Firstly, to what degree are these tests successful? We all know that the problem tends to be false positives – you can find the outliers, but you find ten times as many other people too. Which raises the question: is the major benefit of these exercises PR, or can you really find the bad guys? I guess we’ll find out.
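To see why false positives dominate, here’s a back-of-the-envelope illustration – every number is invented for the example and has nothing to do with JPMorgan’s actual model – of how the base rate swamps even a good detector:

```python
# Illustrative base-rate arithmetic: when genuine rogues are rare,
# even an accurate detector produces mostly false positives.
# All numbers are made up for the example.
employees = 10_000            # size of the trading organisation
rogue_rate = 0.001            # 1 in 1,000 staff are genuinely rogue
sensitivity = 0.95            # the detector flags 95% of real rogues
false_positive_rate = 0.05    # and wrongly flags 5% of honest staff

rogues = employees * rogue_rate                            # 10 real rogues
true_flags = rogues * sensitivity                          # ~9.5 caught
false_flags = (employees - rogues) * false_positive_rate   # ~500 honest staff flagged

precision = true_flags / (true_flags + false_flags)
print(f"{true_flags + false_flags:.0f} people flagged; {precision:.0%} genuinely rogue")
# ~509 flagged, of whom roughly 2% are the real thing
```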

But secondly, I’m guessing that they’re using a learning algorithm that improves over time as you add more data and more business intelligence – what’s referred to as “supervised” learning. It’s very effective but tends to be quite high maintenance. What we use at idax is “unsupervised” learning, where you need no business knowledge and no back-testing data. The advantages: it’s much quicker to set up – hours rather than weeks; you get results straight away; there’s loads of high-quality, actionable information; and, for access control, it doesn’t cost the earth.
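For a feel of the difference, here is a minimal unsupervised sketch – my illustration using an off-the-shelf algorithm (scikit-learn’s IsolationForest), not the idax method: no labelled rogue examples, no back-testing, just the population defining its own normal.

```python
# Minimal unsupervised outlier detection sketch: the model learns what
# "normal" looks like from the population itself and scores everyone
# against it, with no labelled examples of past rogue behaviour.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-employee features (trade volume, after-hours logins,
# systems touched). Real inputs would come from entitlement and activity
# data; random numbers stand in here.
X = rng.normal(size=(1_000, 3))
X[:5] += 4  # plant a handful of synthetic outliers

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower = more anomalous
print(np.argsort(scores)[:10])       # the ten most unusual employees
```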

NB: Bloomberg get a 9/10 from me for the article. Minus 1 for mentioning “Minority Report”. Can we please move beyond Tom Cruise?

http://www.bloomberg.com/news/articles/2015-04-08/jpmorgan-algorithm-knows-you-re-a-rogue-employee-before-you-do

Cost and Scale of Data Breaches Increase

Great article here by Tara Seals in Infosecurity Magazine. Number 9 on her list – “Cost and Scale of Data Breaches” – is a much underrated risk and is indeed set to increase dramatically:

As cyber-criminals get smarter and the pace of communications accelerates, organizations are being forced to continually adapt and rapidly respond to a shifting threat landscape. The Information Security Forum (ISF) is taking a view to 24 months out, predicting that ever-faster internet speeds, tech rejectionists and even human death will all be hallmarks of the future security reality.

Threat Horizon 2017, the latest in a series of the ISF’s annual Threat Horizon reports, identifies nine specific emergent threats that encapsulate the imminent dangers that the ISF considers the most prominent. They all have the capacity to transmit their impact through cyber-space at break-neck speeds, particularly as the use of the internet spreads beyond the estimated 50 percent of the literate population who are already connected, the organization noted in its report.

The threats are:

  1. Increased Connectivity Speeds Present Issues in Organizational Response Time
  2. Criminal Organizations Become More Structured and Sophisticated
  3. Widespread Social Unrest Breaks Out, Led by ‘Tech Rejectionists’
  4. Dependence on Critical Infrastructure Becomes Dangerous
  5. Malicious Agents Weaponize Systemic Vulnerabilities
  6. Legacy Technology Crumbles
  7. Disruption to Digital Systems Leads to Verifiable Human Deaths
  8. Global Consolidation of Organizations Endangers Competition and Security
  9. Cost and Scale of Data Breaches Increases Dramatically

“The pace and scale of information security threats continues to accelerate, endangering the integrity and reputation of trusted organizations,” said Steve Durbin, managing director of the ISF. “Although cyber-space offers opportunities for leading organizations, this environment is uncertain and potentially dangerous.” He added, “We predict that many organizations will struggle to cope as the pace of change intensifies.  Consequently, at least until a conscious decision is taken to the contrary, these nine threats should appear on the radar of every organization.”

For instance, regarding the first point, it’s clear that reasonably-priced gigabit connectivity will become widely available to supply the growing demands of devices and users, signifying a dramatic leap forward, increasing both data volume and velocity. In an interview, Durbin laid out some of the risk scenarios for super-charged connectivity.

“As billions of devices are connected, there will be more data that must be managed,” he explained. “Conventional malicious use will increase rapidly, resulting in cascading failures between sectors. This will enable new and previously impracticable avenues for destructive activity online, increasing financial and reputational liabilities and overwhelming traditional defenses. When combined with the steady growth of processing power and storage, this increased connectivity will allow malicious actors to launch new attacks that will be both lucrative and difficult to detect. Businesses will struggle to keep up with these attacks.”

Also, as connectivity gets faster and more mission-critical functions are moved online and to the cloud, ISF predicts that the disruption of digital systems in transport and medical services will lead to verifiable deaths. Organizations should thus assess the exposure to and liabilities of cyber-physical systems, and revise corporate communication and crisis response mechanisms accordingly.

Related to the hyperconnectivity issue, increasing network scale, helped along by global consolidation, presents another emerging threat. As the pending Comcast-Time Warner Cable and AT&T-DirecTV mega-mergers demonstrate, broadband companies are interested in getting larger. Companies of all sizes will have fewer options for connectivity, which could give network operators undue influence (and create a known number of “super-vectors” for criminals to attack).

To address this threat, organizations need to first identify and assess risks related to dependence on the suppliers for which there are few alternatives; engage in dialogue and exchange information with governments to assess the extent to which markets remain either competitive or closed; invest in expanding and diversifying the suppliers of critical services; and, where diversification proves difficult, focus instead on embedding resilience in information security strategies.

“Network operators should be ever-mindful of the challenges that consolidation brings to the industry, and should proactively engage in dialogue with governments and regulators whilst continuing to operate in a transparent fashion with customers,” Durbin said. “This will be challenging and may bring them into conflict with government security agencies, as we have seen with Apple and Google, in terms of providing access to government agencies to core products, but will be essential as they are a provider of core infrastructure service which continues to grow in importance. Maintaining an objective stance will be difficult, but essential, to preserve the trust of the end user.”

Despite lightning-fast broadband, the report predicts that “tech rejectionists” will disrupt local economies in response to record levels of socio-economic inequality, leading to widespread, global, social unrest.

“Discontent will be driven by uncertainty and confusion and inflamed by job losses and displacement due to globalization and automation,” Durbin said. “Rejectionists will dismiss the benefits of technology-enabled globalization, pointing instead at the social and economic costs shouldered by those who are not among the economic elite. The resulting chaos will disrupt businesses and supply chains, and force countries to reconsider the balance between technological progress and long-established social and economic equilibriums.”

The future of data science

Thought-provoking words from Hilary Mason about the future of data science

Hilary Mason is an important person in the world of data science, and so her words are always worth listening to. This interview has some particularly thought-provoking ideas: https://lnkd.in/dHUzJ7B


As she rightly says “Things that maybe 10 or 15 years ago we could only talk about in a theoretical sense are now commodities that we take completely for granted. Hadoop existed, but was still extremely hard to use at that point. Now it’s something where I hit a couple buttons and a cloud spins up for me and does my calculations and it’s really lovely.”

My view is that it’s much more recently than 10 years ago that the data science toolkit really entered the realm of the possible. Hand in hand with that goes the fact that the majority of corporate technologists are unaware of how far data science has come, and frankly disbelieving of what is now possible.

At Idax, we perform data science on identity and access management data, using unsupervised learning techniques to determine whether internal staff’s access rights are appropriate. As a result we tend to perform analytics on reasonably large data sets with hundreds of thousands of accounts and millions of permissions.

But the main observation from our clients is that, for the non-data-scientist, there’s still a lot of catching up to do. Of course, they love the results. Being able to dynamically determine a risk rating for all staff, with no additional business knowledge as input, is a huge benefit.

But their general unfamiliarity with the techniques means that, firstly, they can’t quite believe that their corporate entitlements database can be analysed in real time on a machine no bigger than a high-end gaming laptop; secondly, that by using in-memory databases and algorithm optimisation we can provide them with results across the whole domain in seconds and minutes rather than hours; and lastly, that the dirtier the data, the better the results.
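That scale claim is less surprising than it sounds. As a rough sketch (my own illustration, not the idax engine), an entitlements database is just a large, very sparse user-by-permission matrix: a few hundred thousand accounts with a few million grants is tens of megabytes in memory, and a truncated SVD – one way to extract the dominant “role” structure and score unusual access – runs on a laptop:

```python
# Sketch: entitlements as a sparse in-memory matrix, scored by how well each
# user's access fits the dominant "role" structure. Shapes are illustrative.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
n_users, n_perms, n_grants = 200_000, 50_000, 5_000_000
rows = rng.integers(0, n_users, n_grants)
cols = rng.integers(0, n_perms, n_grants)
A = csr_matrix((np.ones(n_grants), (rows, cols)), shape=(n_users, n_perms))

U, s, Vt = svds(A, k=20)          # truncated SVD: the top 20 latent "roles"

# Residual per user: how much access lies outside the role structure.
# ||A_i - proj(A_i)||^2 = ||A_i||^2 - ||U_i s||^2, no dense matrix needed.
row_sq = np.asarray(A.multiply(A).sum(axis=1)).ravel()
proj_sq = ((U * s) ** 2).sum(axis=1)
outlier_score = row_sq - proj_sq
print(np.argsort(outlier_score)[-10:])   # the ten most unusual accounts
```

The point of the sparse representation is that memory and runtime scale with the number of grants, not with users times permissions – which is why the whole domain fits on one machine.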

As Mason says: “A lot of people seem to think that data science is just a process of adding up a bunch of data and looking at the results, but that’s actually not at all what the process is. To do this well, you’re really trying to understand something nuanced about the real world, you have some incredibly messy data at hand that might be able to inform you about something, and you’re trying to use mathematics to build a model that connects the two.”

Identity Management Analytics: What access?

What do you need access to? Some weeks ago, I was discussing identity management analytics with a friend. He doesn’t work in IT, but he’s really bright and has held some pretty senior positions along the way. “Why don’t you just ask people what they need to have access to?” he said. Spluttering over my curry, I trotted out the usual: it’s more complicated than that; you can do that in small companies, but not in big ones; what if people lie? But in reality, current processes and controls don’t really work that well, tools are woefully inadequate, and I’m sure many managers just ask their staff “so what do you need to have access to?”. On the basis that this isn’t a great solution, what is the radical, game-changing answer?

In the last two years the reporting of data loss, regulatory breaches and rogue-trader activity has grown significantly. As a result, firms are stepping up their efforts to protect data and resources. But as boards and risk committees sign off ever-increasing budgets, what they may not realize is that they are sanctioning an over-reliance on manual processes, external auditors and consultants, and embedding the shortcomings of those manual processes into the organisation. Staff at the coal face are overrun trying to interpret the information they already have, whilst new data arrives daily. And the one thing that’s certain is that working harder is not going to solve anything.

Though organisations are different distances along the journey, most fall into three categories. Reactors respond to immediate threats but don’t manage risk at a strategic level. Guardians improve processes and have more pervasive control systems, but the costs outweigh any perceived risk benefit, and real risks go unaddressed. Only Leaders have the deep analysis coupled with the right tools to manage risk in a cost-effective way. Escaping this cycle of audit, remediation and control failure is critical if firms are to gain real confidence that assets are protected.

So what should firms do if they want to be Leaders? The first thing is to really understand what assets staff have access to, and to identify control failures and potential regulatory breaches before they happen. To do this you need tools that analyse:

  • Identity and role: What systems users have access to, and how this matches their peers (see the sketch below).
  • Control and Process: Identifying gaps and how efficiency can reduce cost and risk.
  • Usage: The context in which staff access systems and how that changes risk.
  • Location: How usage of systems and data aligns with regulation and legislation.

When a firm can do that in a repeatable, sustainable, automated and predictive way, it is well on the way to real confidence that it has control over what access it is handing out and the risks that poses.
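For the first item on that list, the core computation is easy to state even if doing it at scale is not: compare each person’s entitlements with those of their peers and flag the poor fits. A toy illustration (hypothetical data, not the idax implementation) using Jaccard similarity:

```python
# Toy peer-group comparison: flag users whose entitlements diverge from
# colleagues in the same department. Data is invented for illustration.
from collections import defaultdict

entitlements = {           # user -> set of permissions (hypothetical)
    "alice": {"trading_app", "crm", "shared_drive"},
    "bob":   {"trading_app", "crm"},
    "carol": {"trading_app", "crm", "payroll_admin", "prod_db"},
}
department = {"alice": "sales", "bob": "sales", "carol": "sales"}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

peers = defaultdict(list)
for user, dept in department.items():
    peers[dept].append(user)

# A user's peer score: average similarity to everyone else in their department.
for user, perms in entitlements.items():
    others = [u for u in peers[department[user]] if u != user]
    score = sum(jaccard(perms, entitlements[u]) for u in others) / len(others)
    print(f"{user}: peer similarity {score:.2f}")
# carol scores lowest: payroll_admin and prod_db look nothing like her peers.
```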

Risk Management: Mo Farah wasn’t a Poisson

As we continue to bask in the post-Olympic glow of national achievement and the “2012 effect”, it seems strange to remember the dim dark days at the start of the Games, when Team GB went a whole three days without winning a gold medal. As the press shrieked that we were heading for disaster, unable to meet our target of 20 golds despite massive investment, I asked myself what parallels there were with risk management – and what really were Mo Farah’s chances?

Well, as we all now know, actually pretty good. Of course only an idiot would assume that winning 29 golds over 16 days should equate to two every day with Sundays off, but even so, how likely should a medal-less day be? Well, if you assume a Poisson distribution – commonly used for estimating event frequency – and take an average of 1.8 golds a day, the chance of a day with no golds is 16%. The chances of a super Saturday were actually 7%.
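The arithmetic is easy to check (a quick sketch; the 7% “super Saturday” figure depends on how many golds in a day you count as “super”):

```python
# Poisson sanity check on the gold-medal figures: 29 golds in 16 days.
from math import exp, factorial

lam = 29 / 16                                  # ~1.8 golds per day

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

for k in range(7):
    print(f"P({k} golds in a day) = {poisson_pmf(k, lam):.1%}")
# P(0 golds) comes out at ~16%, as quoted above.
```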

The bad news is that, as you can see from the chart, the Poisson doesn’t quite fit what actually happened. The good news is that a day without any golds was actually more likely, at 38% of all Olympic days. The least likely (below 5%) was a single-gold day, which only happened once. The last day of the boxing, since you ask.

So why does any of this matter? Because it shows we are very bad at estimating how frequently things happen, even when it’s quite straightforward. We assume that events are evenly distributed and get confused when they’re not. Not much of a problem with gold medals, but quite a big problem when you’re trying to understand access rights, detect fraud and regulate access to our highly valuable systems and data. And that goes double for those trying to write the regulation.

We assume that because failures are relatively unlikely they are also uniformly infrequent. Having spent the best part of a decade working on access control, risk and regulation, it’s clear to me that an approach that defines controls by exception management – otherwise known as “the boss checks my work” – will perform splendidly against the “frequent but not disastrous” but does nothing to stop the “very infrequent but quite awful”.

So a strange lesson from the Olympics is that risk management and regulation are going to fail consistently until we stop managing by intuition, educate ourselves about big data, and start really using automatic analysis to predict and analyse.

So next time you ask yourself how you can protect yourself from those with inappropriate access to systems and data, think automation and analysis. That way, maybe the Olympic legacy can be more robust access security as well as more kids playing sport.

Famous film stars, data breaches

Famous film stars, data breaches and why CEOs should be worried

So the latest not-so-surprising story concerning data breaches is that, in addition to containing pictures of ladies in underwear and pictures of famous film stars, the internet also contains pictures of famous film stars in their underwear.

http://www.bbc.co.uk/news/technology-29237469


I don’t mean to trivialise the impact of private pictures splashed all over the web. It’s clearly unpleasant, morally indefensible and probably illegal, but plenty of others have discussed the data breaches themselves at length. At Idax we are more interested in the lessons to be learned about the breaches of internal security rather than speculating on external threats.

When the story broke, commentators focused on the “how”. The favoured theory was an evil genius who hacked into the main iCloud computer. Presumably someone halfway between Kim Dotcom and Ernst Stavro Blofeld working from an evil lair in a hollowed out volcano. I have little experience of evil hacker geniuses, but if they exist, I suspect they are more motivated to steal credit card details from the many than private pictures from the few.

The second theory was that our protagonists had guessed or otherwise obtained the email addresses and passwords for iCloud accounts – a “phishing” attack. Given that a lot of celebrity details are in the public domain and most people are chronically bad at setting passwords, this is pretty credible. Spoiler alert: when asked for your date of birth, you don’t have to use your real date of birth – the one that’s also on your Facebook page.

But let’s suppose for a moment that there was no evil genius and no phishing attack. How else might the caper have been done? Simple as it may sound, I’d get myself a job as an iCloud database administrator and then wait until I could steal the pictures.

Now I have no inside knowledge of what goes on at Apple and my approach may sound too obvious. Apple may be the exemplar of corporate governance and security as they are in many other things. But at Idax our experience is that the corporation is nowhere near as secure as your CEO would like to think, and data breaches mostly occur when staff routinely have access to resources that have nothing to do with their job and are either historical or just plain wrong. In a corporation of any size keeping track of access rights is a major headache.

In this context coercion, collusion and avarice are great motivators, especially when the disgruntled developer routinely has uncontrolled access to production data.

So, we may never find out how the images got onto the web, and only a cynic would point out that it’s in everyone’s interest to perpetuate the story of the complex con rather than the corporate cock-up. But clearly protecting your corporate data from both internal and external threats has to be a priority for all organisations.

I’ll leave you with a last thought. Under EU data protection legislation a company can be fined up to 10% of global revenue for losing personal data. So if it’s conceivable that you might lose all your customer files because a laptop was inadvertently left on a train or a DBA sent a file to his home email, maybe you should take a closer look at your internal identity management.

Mark Rodbert is CEO of Idax Software, the identity management analytics company.

The Outlier Risk under their noses

It seems funny at first, but it isn’t really – this could have been really nasty indeed. Of course, if they’d been using idax they’d have spotted him in a heartbeat.

This article from “Here In The City” tells the tale of how the simplest of actions can completely bypass complex security.

The guy just turned up on the trading floor one day.

Bloomberg News reports that KK Ho appeared out of nowhere last year on the Royal Bank of Scotland’s London trading floor.

He had freshly printed business cards identifying himself as a bond salesman. He met with customers and impressed executives in internal meetings with his talk about rich clients he knew, according to two people familiar with the matter.

Then, just as suddenly, he was out the door several months later after bank managers began asking questions about him, questions that led to the realization that he wasn’t a bond salesman after all, the people said.

RBS informed regulators but otherwise kept the matter private, according to another person familiar with the matter. Some details became public when another bank executive mentioned it in a complaint he filed over an employment dispute.

Ho had been a manager in RBS property services facing a layoff, and was given a desk to help him find a new position, according to the three people, who asked not to be identified because they weren’t authorized to discuss it. Ho wasn’t assigned to a team, had no manager and wasn’t authorized to sell securities or meet clients.