"Many companies view being hacked as some kind of stigma on their reputation"

An interview with an ethical hacker

Ahead of his appearance at this year's European Communication Summit (June 13-14 2018), we spoke to hacker-turned-consultant Dr. Karsten Nohl about fighting cyber-crime in a time of GDPR, the inherent risks of legacy systems, and how far you should trust your smartphone.


You've been described as “an ethical hacker”. What does that mean?

All hackers share a curiosity to explore technology, but an ethical hacker sees certain boundaries where that curiosity needs to stop, where somebody else's privacy starts, whereas an unethical hacker would step over that boundary, and would perhaps also step over boundaries where actual financial rewards are to be had. So an ethical hacker lives out that passion for technical exploitation without harming anybody else's privacy or finances.

Which psychological characteristics make up a hacker - whether ethical or unethical - and make them so curious about this world?

I don't know what makes us curious, but that's what we share: curiosity about technical things. We look at machinery or some piece of software and we don't just accept that it does a certain function; we want to understand why, and perhaps even how we can modify it to fulfil some other function or to enhance it. Hackers have been around ever since machinery was invented, and somebody who a generation or two ago would have tinkered with a car is today tinkering with a computerized system. It's the same mind-set: you want to understand something to the point where you can then modify it to better meet your needs.

Security Research Labs advises companies about computer security. What are the most common fears that a big Fortune 500 company has about security, and are these the right things to be worried about?

Many companies view being hacked as a stigma on their reputation. That's one of the main motivations to prevent hacking. It's not so much what you lose through a hacking incident, be it money or proprietary information; it's more that the fact of being hacked at all is like a disease that nobody wants to admit having. This attitude is to a large extent irrational if the result of the hacking is acceptable - if only very little money is lost, or only data that isn't so expensive to generate. Of course, media and public attention drive this stigmatization. Companies have to convince themselves that getting hacked in minor incidents once in a while is just part of being in the internet game and can't be prevented, other than by cutting yourself off from innovation and the internet.

Under GDPR, companies could face fines of up to 4% of their annual turnover for non-compliance. Have you seen an increase in activity now that people are aware of this fine coming up?

GDPR has definitely brought pressure, not so much to make things harder to hack but much more to be able to claim innocence in case something does happen. The penalties are quite large and nobody knows how the regulator is going to choose whether or not to penalise, so everybody is now getting ready to argue to the regulator that it wasn't their fault. I've not seen a surge in activity to prevent hacking - perhaps even a slight damping down of activity to find past hacks, because if you're not obliged to report something you don't know about, why start looking?

Is part of the problem that what organizations are doing now to create value is exactly what's making being hacked more likely? Continuous innovation, new business models, global expansion, mergers and acquisitions - all of these are designed to share information, not protect it.

The number of things that can get hacked grows continuously, and that automatically leads to more hacking, even if the technology becomes more secure than it was before. Take another developed technology: automobiles. A lot of people died over the years because of cars, and today we put that down to bad driving, drunk driving, insufficient regulation, not enough speed limits. But in the first decades the number of car deaths grew not because people drank more when driving but purely because there was more driving - the amount of driving, cars, roads, everything grew exponentially.

The same is true right now for technology. And by and large that's a good thing: unlike in car crashes, people don't die through hacking, most of the time. They suffer in other ways but, just like with cars, the benefits outweigh the disadvantages by a large margin.

Is the transformation of the entire world onto the cloud opening up a whole new dimension of being hacked?

It does and it doesn't; there are two contrary forces at work here. Obviously, as you put IT onto the cloud there are more companies that can access it - at the very least, the cloud provider, who wasn't in the equation before you went on the cloud. So there's at least one more company that can now access your data if they really wanted to.

But at the same time you are handing over important maintenance work to professionals. These cloud providers are, by and large, much better at managing infrastructures than individual companies are. So, yes, you're sharing responsibility for your data now, but in a lot of cases you're sharing it with a company that does a better job than you ever did. It’s not clear whether the sum of those two is negative or positive.

Which industry is investing most heavily in cyber security?

There are two reasons why a company would invest in cybersecurity. One is that they're losing money to criminals and would rather invest that money in security products. That's one driver, shared by everybody who has high-value electronic transaction streams that can be influenced. The other reason is regulation. GDPR is one example, but of course different regulators, per industry and per country, had already regulated this area well ahead of GDPR.

Banks happen to share both drivers - if they don't invest, they lose money and they get in trouble with the regulator - so banks definitely invest the most in total. But since some of that spending is purely for regulatory reporting reasons, they are usually matched in security effectiveness by the technology companies, who may invest a little less in total but are much more focused on actual hacking defence, and so get greater benefits from it.

And which industries are falling behind?

The opposite of the above: companies where the regulator doesn't say this is necessary and which aren't feeling the pain of hacking. One industry to single out is logistics. They are highly computerized, but nobody knows how to monetize those data streams yet. You break into a bank, you get money; you break into a shipping company, what are you going to do with that data? But logistics companies often become casualties of hacking attacks that weren't really targeted at them but that destroy their data or cause harm in other ways.

One of the main victims of last year's NotPetya attack was the shipping company Maersk, which lost a self-declared 300-plus million dollars because its computer systems were unavailable for weeks. That's because shipping companies are neither regulated nor usually financially incentivized to invest in security. After this incident, Maersk now understands the financial repercussions of being among the least secured industries. But I'm sure many other logistics companies are still far behind.

You mentioned criminal hackers. What's the difference in approach between hackers from a criminal background and state-sponsored hackers?

Of course, both of them conduct something criminal, at least in the jurisdiction in which it happens. But the first has the goal of immediate monetary reward, and it is rather difficult to hide the tracks - if you take money, somebody else misses that money, so there's no hiding a financial theft.

Whereas state-sponsored spying usually tries to be clandestine. If I copy your data, you still have it and you may never know. The biggest distinguishing factor is that the amount of criminal hacking for financial gains is obvious, whereas the magnitude of state-sponsored espionage hacking is completely unknown.

A lot of your work shines a light on weaknesses in networks and technology. To what extent are these weaknesses known to the engineers and producers of this technology, and are they guilty of deliberately trying to hide these oversights?

What often happens, especially in mobile networks, is that vulnerabilities become known only once technologies are already standardized and deployed. So the engineers who originally designed and built the networks didn't intentionally take this risk; they just didn't know about the attacks. But because these systems are so large and so rigid, it's hard to upgrade them just to fix security issues.

Security issues tend to die out organically as one network technology is completely replaced by another. But in a mobile network that can take ages. So 2G technology is still running in parallel to 3G and 4G, and it may well run in parallel to 5G, except in a very few countries. For example, Singapore decided to switch off 2G completely, so they run 3G, 4G and soon 5G. But that's still three technology generations - it's like having three different computers from three different decades on your desk. In no other technology do we have such long-lived weaknesses.

Is that what is meant by “legacy systems”?

‘Legacy’ is old technology, but it's also old ideas in new technology. Typically, some parts of a new technology are standardized to reach a consensus on current best practice, but other parts of the same systems are left unspecified as far as global standards are concerned. For those, most companies just look at what they did in the last generation and keep using that same thing until somebody steps in and says "this is now too old for us". So even newer systems are usually a mix of new, recently researched ideas and old ideas that perhaps should never have been used in the first place and are definitely outdated by current standards.

How secure is email today?

People have worked for many years to convince us that email is insecure. But at the same time engineers have worked during those same years to make email more secure. It’s definitely better than its reputation right now. It doesn't traverse the internet unencrypted anymore, at least in most cases, so if somebody is tapping an undersea cable between England and the US, or whatever we think the NSA is doing, the chances are they can't read the email from just that cable.
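
To make that transit-encryption point concrete, here is a minimal sketch of how one can ask a mail server whether it offers STARTTLS, a common mechanism for encrypting mail between servers; the hostname is an illustrative placeholder, not a real target:

```python
import smtplib

# Connect to a mail server and ask, via the ESMTP handshake, whether it
# offers STARTTLS, i.e. opportunistic encryption of mail in transit.
# "mail.example.com" is a placeholder.
with smtplib.SMTP("mail.example.com", 25, timeout=10) as smtp:
    smtp.ehlo()
    if smtp.has_extn("starttls"):
        print("STARTTLS offered: mail to this server can be encrypted in transit")
    else:
        print("No STARTTLS: mail would cross the wire in cleartext")
```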

But email is still an open door for many attacks, because the authenticity of an email is very hard to verify. People usually just read a name and believe that the name and the email address behind it are legitimate. So email as a communication method has improved, but the way people use email has not. If I send an email to somebody spoofing the email address of their boss and ask them to open a file, chances are that some people will open it, because email itself does not show the authenticity of that communication.
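
A small sketch of why that spoofing works, using made-up addresses: the From header of a message is plain, unauthenticated text, and the message format itself contains nothing that proves who wrote it.

```python
from email.message import EmailMessage

# The From header is freely chosen by whoever composes the message; nothing
# in the message format verifies it. All addresses here are made up.
msg = EmailMessage()
msg["From"] = "The Boss <boss@example.com>"  # forged, yet perfectly valid
msg["To"] = "employee@example.com"
msg["Subject"] = "Urgent: please open the attached file"
msg.set_content("See the attachment.")

print(msg)  # a well-formed message claiming to come from the boss
```

Defences such as SPF, DKIM and DMARC bolt authenticity checks onto email after the fact, but they only help where the receiving side actually enforces them.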

Is one brand of smartphone more secure than another brand?

I would not say that one brand is better than another, but if we truly look at hacking resistance there are certainly different design choices, and, as with everything else in life, there's a trade-off between freedom and security. Apple heavily restricts your freedom over what you can do on your phone and who can programme what kinds of functions, and that comes with a security gain. Whereas Android allows developers and users to explore the technology a lot more widely, but that freedom comes with security trade-offs.

How hackable is encryption as a security method?

Encryption today is generally believed to be unhackable mathematically: you need the right key to open the lock. However, most hacks simply involve asking the user for that key, and users happily hand it over. One very popular hacking method for at least the last ten years is sending somebody an email saying something like “your password will expire soon, please click here, type in your old password and type in your new password”. So you're giving me your password, and I can use that to decrypt all your data. Encryption is only as secure as the weakest individual who holds the key to the data.
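
A toy sketch of that asymmetry, using the third-party Python cryptography package (illustrative only): without the key the ciphertext is mathematically opaque, but whoever obtains the key needs no cryptanalysis at all.

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# Encrypt a secret: without the key, the token below is mathematically opaque.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"quarterly figures")

# Whoever obtains the key - say, through a phishing mail - recovers the
# plaintext with a single call.
print(Fernet(key).decrypt(token))  # b'quarterly figures'
```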

My new iPhone comes with my first ever fingerprint-scanner. Can I trust it?

Fingerprints are generally less secure than a really good password. But at the same time they're more secure than almost everybody's actual passwords, because most people don't choose very good ones. The problem with fingerprints is that we leave them everywhere - on glossy surfaces, even on the same phone that the fingerprint is supposed to protect. Fingerprints are everywhere, so, with some luck and patience, a hacker who stole the device could recover the fingerprint and use it to unlock that very device.

The same cannot be said for your password. If the password is very secure, there is no way to get hold of it other than asking you or observing you while you type it in somewhere. But most people's passwords are not like that - most people use their date of birth, for instance. That means the fingerprint is a security upgrade for almost everybody who previously used a not-so-good password.
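
A back-of-envelope calculation makes that gap concrete (the figures are illustrative assumptions): a date of birth is drawn from only about 36,500 plausible values, while even a modest random password is drawn from an astronomically larger space.

```python
import math

# Rough entropy estimates (illustrative assumptions, not a formal model).
dob_bits = math.log2(100 * 365)     # ~15 bits: any day in a 100-year span
random_bits = 10 * math.log2(62)    # ~59 bits: 10 random alphanumeric chars

print(f"date-of-birth password: ~{dob_bits:.0f} bits of entropy")
print(f"random 10-char password: ~{random_bits:.0f} bits of entropy")
```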

At the heart of corporate communications is establishing trust with stakeholders. How do you advise a corporate communicator on establishing trust even in a context of inevitable hacking?

First, it's important to quantify the damage of hacking in some plausible way. A statement like “let's prevent hacking at all cost” can never be rational; nothing is ever “at all cost”. There must be an acceptable amount of damage, and you should only prevent something if the damage it would cause outweighs the cost of preventing it. And once you're over that initial hurdle of getting people to quantify hacking, the next most important step is to expect the hacking and prepare for it. The more you prepare, the more calmly and the better informed you will be able to handle a hacking situation and ensure it doesn't become a crisis.
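
One way to sketch that quantification, with invented numbers: compare the annual cost of a countermeasure against the expected annual loss it would avert.

```python
# Back-of-envelope risk quantification (all figures are invented).
incident_probability = 0.05     # estimated chance of the incident per year
incident_damage = 2_000_000     # estimated cost if it happens
prevention_cost = 250_000       # annual cost of the countermeasure

expected_loss = incident_probability * incident_damage  # 100,000 per year
# Rational rule: pay for prevention only if it costs less than the loss
# it is expected to avert; otherwise accept the risk.
print("invest" if prevention_cost < expected_loss else "accept the risk")
```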

Usually the hacking crises that stigmatise the reputation of a company are the fault of communication during the crisis, not so much of the technical circumstances that led to it. Many large companies get hacked on a regular basis; they catch the hackers early enough to prevent major damage, then they calmly inform affected customers, calmly work with the regulators, and calmly improve the technology so that the same kind of hacking does not happen again - all while expecting some other kind of hacking to still affect them. That is the only way to approach the topic of hacking in a way that still allows you to innovate, because any prevention “at all cost” includes an implicit assumption that you don't want to touch technology ever again, keeping yourself away from any future, untested technology.

Karsten Nohl

Dr. Karsten Nohl is a cryptographer and one of the most renowned white hats, with a focus on critical infrastructures. Karsten has spoken widely about security gaps since 2006. His team has uncovered flaws in mobile communication, payment, and other widely used infrastructures. In his work as Interim CISO at Jio and at Axiata, Karsten engineered security to protect 400 million subscribers.