CISA’s Bob Lord coined the term ‘hacklore’ for cybersecurity folklore: the stories we tell ourselves and others about the nature of technological risks and ways to avoid them that are grounded in fear rather than fact, rumour rather than evidence, antiquity rather than the present day. I see this everywhere, even in high-end corporate ‘cyber awareness’ programmes: beware of charging your phone from a public USB socket, beware of accepting browser cookies, beware of updating devices on untrusted networks, and so on. Of these stories Bob says “people get hacked every second, just not that way”. Perhaps the risks used to be greater and have since been mitigated with better technology; perhaps they were never that great to begin with, but we told the story because it was easy or it aligned with a bias somewhere along the line. Bob has explored this topic at length, highlighting the danger of filling the limited cybersecurity memory buffers of non-techy folk with dross that doesn’t actually help them get or stay secure. I’d like to contribute my opinion on the flip side of this conversation: the effect on us techy folk, the industry insiders.

Public Wifi Perils?

An oft-cited nugget of hacklore is the danger of public wifi: that using it will ipso facto compromise your device and online accounts. Fifteen years ago that would certainly have been a possibility; back then there was a good chance that some traffic would be sent in the clear, perhaps even application data such as session cookies that could be pilfered and replayed by an attacker to compromise an account. That’s a far cry from the post-Snowden, TLS-everywhere world we now inhabit. These days it’s exceedingly rare to find anything that isn’t encrypted by default, and many technologies and standards have advanced to wrap security around our online interactions regardless of whether we’re at home, in the office or on the roam. To name a few: strict transport security prevents the removal of encryption, certificate pinning hardcodes trusted encryption certificates, encrypted client hello (the successor to encrypted server name indication) conceals the websites being visited, and the combo of DNS over HTTPS and DNSSEC provides end-to-end authenticated name resolution that’s highly resistant to spoofing.
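To make the first of those protections concrete: a site opts into strict transport security with nothing more than a response header, which the browser then remembers. Here’s a minimal, purely illustrative Python sketch of parsing a Strict-Transport-Security header the way a client would:

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header into its directives.
    Valueless directives (includeSubDomains, preload) map to True."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, value = part.partition("=")
            directives[name.strip().lower()] = value.strip().strip('"')
        else:
            directives[part.lower()] = True
    return directives

print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
# {'max-age': '31536000', 'includesubdomains': True, 'preload': True}
```

Once a browser has cached that directive, it simply refuses to talk plain HTTP to the site for the next max-age seconds, so a hotspot operator can’t strip the encryption away.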

In fact this all works so well and is so airtight that in a corporate or education context where we have a legal duty to log or filter Internet use, we have to break all of this encryption to do so. This involves using administrative privileges to install a trusted certificate onto every device that forces it to accept a ‘man in the middle’ that is privy to its unencrypted conversation with the outside world.
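That substitution is also why interception is detectable in principle: the middlebox has to present its own certificate, so the key the client receives no longer matches the server’s real one. A rough sketch of the pinning check an app might perform, using placeholder key material rather than real certificates:

```python
import hashlib

def connection_is_intercepted(leaf_spki_der: bytes, pinned_sha256: set[str]) -> bool:
    """Compare the SHA-256 of the public key actually received against a
    set of pinned hashes. A mismatch is exactly what a client sees when a
    middlebox terminates TLS and re-signs traffic with the CA certificate
    installed on the device."""
    return hashlib.sha256(leaf_spki_der).hexdigest() not in pinned_sha256

# Illustrative only: real pins would be derived from the server's actual key.
pins = {hashlib.sha256(b"the-real-server-key").hexdigest()}
print(connection_is_intercepted(b"the-real-server-key", pins))   # False
print(connection_is_intercepted(b"middlebox-forged-key", pins))  # True
```

This is why pinned mobile apps often simply stop working behind corporate inspection proxies: the forged chain fails the pin check even though the device ‘trusts’ the corporate CA.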

A recent academic paper entitled “Blind Trust: Raising awareness of the dangers of using unsecured public wifi networks” claimed that after running several free wireless networks, the researchers then “informed the users about their leaked credentials and private data”. This sounds pretty damning right up to the point where they explain that “we implemented the interception of HTTPS traffic through the use of custom certificate authority (CA) certificates”. Well of course they did! Without installing their custom certificate to break encryption, no private data on their test wifi could have been exposed. After all, if there were another way of accessing private data it would already be in use for traffic inspection in corporate and education networks worldwide (and probably mitigated in the next version of TLS). This is in no way a reflection of real-world risk.

To close the case on this, a recent blog entitled “How to hack wifi” does a historical review and highlights that even a decade ago, when encryption was much less prevalent, attackers already weren’t bothering to “observe or affect the web traffic of their targets”; moreover, wifi is generally an inefficient attack vector, as it just yields “mountains of worthless junk data belonging to random unknown strangers”.

Wifi security is a moot point; recent Microsoft issues notwithstanding, we should be able to assume it’s compromised because security is wrapped around all of the data we send and receive through it (and that’s without using a VPN to add an extra layer).

The Evil Twin

In my adopted home town of Perth, Western Australia, a man was recently charged by the Australian Federal Police with “creation of evil twin wifi networks to access personal data”. The AFP started investigating “when an airline reported concerns about a suspicious WiFi network identified by its employees during a domestic flight” and they found that “when people tried to connect their devices to the free WiFi networks, they were taken to a fake webpage requiring them to sign in using their email or social media logins. Those details were then allegedly saved to the man’s devices [and] could be used to access more personal information, including a victim’s online communications, stored images and videos or bank details”.

Reading between the lines, the guy was using a malicious captive portal and presenting it to users who connected to his wifi, offering them a ‘sign in with your social media account’ option. He would then save the password and reuse it to access the individual’s account.
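Notably, operating systems already detect this kind of interception: they probe a known URL that should return a fixed, empty response (HTTP 204), and treat anything else as a sign of a captive portal. A hypothetical classifier for such a probe response might look like this:

```python
# A sketch of the connectivity-check technique OSes use to spot captive
# portals: probe a known endpoint that normally returns HTTP 204 with an
# empty body; a redirect or substituted page means the network intercepted
# the request. The expected status is illustrative of Android-style checks.
EXPECTED_STATUS = 204

def looks_like_captive_portal(status: int, body: bytes) -> bool:
    """Return True if the probe response suggests a portal intercepted it."""
    if status in (301, 302, 303, 307, 308):
        return True  # redirected, most likely to a sign-in page
    return not (status == EXPECTED_STATUS and body == b"")

print(looks_like_captive_portal(302, b""))  # True
print(looks_like_captive_portal(204, b""))  # False
```

The point being: the moment of connecting to hostile wifi is already instrumented; the weak link was the fake sign-in page users were handed afterwards.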

This was all over the news and the media was quick to call upon experienced voices in cyber for interviews on the subject. They said things like “avoiding public wifi is my top advice”, “people don’t know how dangerous this can be”, “puts financial data at risk” and so on. Without exception they portrayed public wifi as highly risky and said it should be avoided if at all possible.

Passing The Buck

So what’s wrong with this response? Well, firstly this widespread public messaging creates a huge amount of unfounded fear. The AFP themselves stated that “anyone who connected to free WiFi networks in airport precincts and on domestic flights is recommended to change their passwords and report any suspicious activity on their accounts”. What, all of them? Can you imagine being someone who used airport wifi for a financial transaction and then receiving this message? I’d be terrified that my bank accounts were hacked. Yet this concern is likely unfounded, since breaking TLS for every client session is as big a leap in complexity from running a password-stealing captive portal as building a jet engine is from banging rocks together to make fire.

More importantly, when we believe the hacklore we shunt the locus of responsibility onto the user. If we can simply shake our heads and declare all public wifi to be inherently, irredeemably insecure, we’re absolved from asking further questions about how we got into this mess and how we could get out of it, such as:

  1. Why are passkeys floundering? Is it true that we missed our golden chance to eliminate passwords? If so, too bad: nothing better is coming, so how do we make phishing-resistant MFA a no-brainer for the world at large? If we had a critical mass of folk using it, dumb password stealers wouldn’t even be worth wasting time on.
  2. Is a wifi captive portal page so different from a phishing email or any number of other ways we can be tricked or socially engineered into landing on a page that’s maliciously harvesting credentials? Surely we should be addressing the root cause rather than treating wifi as an exception?
  3. Why aren’t we applying zero trust principles to this? Why can’t we safely assume that every network we’re connected to is spying on us, trying to hack us, or both? How can we make it easy to be verifiably secure and obvious when that isn’t the case? Do browsers bear some responsibility for this?
  4. Why are we advising the hard things like “use a reputable VPN” (hey AFP, consider how much there is to unpack in those four words for a non-technical person!) rather than the easy ones like “make sure you’ve done your updates before you connect” (which didn’t get a mention at all)?
  5. Isn’t OpenID Connect due some blame? It’s certainly easier to log in with an existing account by clicking a button than it is to create an account for each app or service individually – but this has normalised the ‘log in with Facebook’ button without a mechanism for validating its legitimacy. It’s also not widely known how much information OIDC hands over to the identity provider, particularly when the identity of the relying party reveals sensitive information about the user. Privacy-preserving models have been suggested for years but there doesn’t seem to be much will to implement them. Should we be doing more to promote awareness that this is a convenience / privacy tradeoff?
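On that last point, the leak to the identity provider is structural: the very first request in an OIDC login identifies the relying party. A sketch of building that front-channel authorization request, with illustrative endpoint and client_id values, shows everything the IdP learns before the user has even typed a password:

```python
from urllib.parse import urlencode

def build_oidc_auth_url(authorize_endpoint: str, client_id: str,
                        redirect_uri: str, state: str, nonce: str) -> str:
    """Assemble an OIDC authorization-code request URL. Every parameter
    here travels to the identity provider, so the IdP always learns which
    relying party (and hence which app or site) the user is signing into."""
    params = {
        "response_type": "code",
        "client_id": client_id,        # identifies the relying party to the IdP
        "redirect_uri": redirect_uri,  # reveals the RP's domain
        "scope": "openid profile email",
        "state": state,
        "nonce": nonce,
    }
    return authorize_endpoint + "?" + urlencode(params)

# Hypothetical values for illustration only.
print(build_oidc_auth_url("https://idp.example/authorize", "my-app",
                          "https://rp.example/cb", "abc123", "n0nc3"))
```

If the client_id or redirect_uri belongs to, say, a health or support service, the IdP learns something sensitive about the user merely from the login attempt – which is exactly the convenience / privacy tradeoff described above.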

All of this is entirely my own opinion; if any readers have their own thoughts, please comment below.


