Category Archives: InfoSec

DNS Rebinding Attacks — How do they work?

DNS rebinding attacks seem rather trivial to exploit once you get someone to click your link or visit a site carrying your banner ad, but the behavior required of the authoritative DNS server is what I want to understand. Let's take a look:

It seems like it would require an unusually coordinated effort on the part of the initial malicious website and the corresponding authoritative DNS server. 

How does the victim make a request to malicious dot com, receive a DNS answer with the IP address of malicious dot com, then send out a refresh request for malicious dot com because of the very low TTL and get a response containing the attack payload? — This needs to be figured out…

It seems like the attack vector requires victim interaction with an initial HTTP request through social engineering, a specially configured DNS server, and advance knowledge of the victim's internal network.

How are these things managed?

The victim's browser makes the initial request that serves up malicious dot com — and the page can embed a JavaScript scanner to do discovery, per this website here:

http://rebind.network/rebind/index.html, courtesy of Brannon Dorsey, whose site scans for some well-known IoT devices. The way this is accomplished is also very interesting: the browser collects information about your network hosts through a side-channel discovery technique, finding out whether something lives at a particular address by making a request to it and gauging the response time. No content is disclosed at this stage — the Same-Origin Policy prevents that, unless the target site has CORS enabled.
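The same timing idea can be sketched outside the browser in Python. This is my own illustration, not Dorsey's code; the host, port, and function name are all hypothetical:

```python
import socket
import time

def probe(host: str, port: int, timeout: float = 1.0):
    """Time a TCP connection attempt to guess whether a host is present.

    A fast refusal (RST) suggests a live host with that port closed;
    waiting out the timeout suggests nothing is at that address (or a
    filter dropped the packets). Host/port values here are illustrative.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            verdict = "open"                    # something answered
    except ConnectionRefusedError:
        verdict = "host up, port closed"        # quick RST came back
    except OSError:
        verdict = "no response"                 # timeout or unreachable
    return verdict, time.monotonic() - start

# Probe a port on the local machine (discard port, usually closed)
verdict, elapsed = probe("127.0.0.1", 9)
```

The browser version works the same way in spirit: it cannot read the response because of the Same-Origin Policy, but it can still observe how long the request took.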

So, we have established the method of ingress and discovery, but not yet how to tool your DNS server to serve up the locally addressed DNS answer that carries the attack into the victim's network. How is this done?

And we found something awesome! Here is a complete framework for setting up this attack:

— https://kalilinuxtutorials.com/singularity-dns-rebinding-attack/

Okay… pay dirt. This is what I was looking for. When you implement this framework, you are deploying a dynamic DNS service that can be configured around a host and timing strategy you designate. This is how you get the DNS server to play ball when it comes time to poison the victim's DNS cache by serving a subsequent answer that points into their own network.

When the service is launched, you get full access to DNS twisting functionality that you can see in the instructions to set it up:

Launch the Singularity binary (singularity-server) with the -h parameter to see its options.

DNSRebindStrategy string : Specify how to respond to DNS queries from a victim client. The supported strategies are:

  • DNSRebindFromQueryRoundRobin
  • DNSRebindFromQueryFirstThenSecond (default)
  • DNSRebindFromQueryRandom
  • DNSRebindFromQueryMultiA (requires Linux iptables)

This is the gearbox for how the DNS cache on the victim host is poisoned when it reaches out for another resolution of the FQDN after the 10-second TTL has expired. Brilliant.
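As a conceptual sketch (my own illustration of the "first then second" idea, not Singularity's actual code), the strategy can be modeled in a few lines of Python. The IPs, TTL, and function name are all hypothetical:

```python
from collections import defaultdict

# Answer the first query for a name with the attacker's public address
# (short TTL), and every later query with an address inside the victim's
# network. All values below are illustrative.
ATTACKER_IP = "203.0.113.10"   # public-facing server (TEST-NET-3 range)
TARGET_IP = "192.168.1.1"      # internal address to rebind to
TTL_SECONDS = 10               # the short TTL discussed above

_seen = defaultdict(int)

def resolve(qname: str):
    """Return (ip, ttl) for a rebinding name, first-then-second style."""
    _seen[qname] += 1
    ip = ATTACKER_IP if _seen[qname] == 1 else TARGET_IP
    return ip, TTL_SECONDS

first = resolve("payload.malicious.example")   # serves the page
second = resolve("payload.malicious.example")  # rebinds into the LAN
```

Once the browser re-resolves after the TTL expires, its same-origin checks still see the original hostname, but the requests now land on the internal target.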

 

Studying TLS 1.3 — Part 1: The New Handshake

The new TLS 1.3 protocol as seen in RFC 8446 can be described very well by looking at the new handshake:

Per RFC 8446, the handshake breaks into three phases:

Key Exchange: Establish shared keying material and select the
cryptographic parameters. Everything after this phase is
encrypted. ( Spooky! )

Server Parameters: Establish other handshake parameters
(whether the client is authenticated, application-layer protocol
support, etc.).

Authentication: Authenticate the server (and, optionally, the
client) and provide key confirmation and handshake integrity.

And that's it…

I would say that it's quite an improvement over TLS 1.2. Let's take this apart and look at the differences between the constituent elements of the client hello.

TLS 1.2 Client Hello: 

The first TLS message to touch the server from the client is called the hello message (ClientHello). It consists of everything you may need to start an encrypted conversation:

A preferred protocol version designation 
A GMT time variable
A 28 byte randomly generated structure
A session ID 
A list of supported cipher suites 
A list of supported compression methods
Possible Extensions information 
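The field list above can be sketched as raw bytes with Python's struct module. This is a rough illustration of the ClientHello body only: record framing and extensions are omitted, and the cipher suite ID is just an example, not a recommendation:

```python
import os
import struct
import time

def client_hello_body() -> bytes:
    """Pack the TLS 1.2 ClientHello body fields listed above (sketch only)."""
    version = struct.pack("!BB", 3, 3)              # TLS 1.2 = 0x0303
    gmt_time = struct.pack("!I", int(time.time()))  # 4-byte GMT unix time
    random28 = os.urandom(28)                       # 28 random bytes
    session_id = b"\x00"                            # empty session ID (len 0)
    suites = struct.pack("!H", 2) + b"\x00\x2f"     # one 2-byte suite (example)
    compression = b"\x01\x00"                       # one method: null
    return version + gmt_time + random28 + session_id + suites + compression

body = client_hello_body()
```

Laying the fields out by hand like this makes the RFC's struct definitions much easier to follow.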

Getting to this level of detail is only possible by going to the RFC pages ( Request for Comments — it's like the internet's Rosetta Stone ) and reviewing the C-style struct definitions and their component expansions that the IETF explains pretty well. It's important to learn the C programming language — one reason in particular is that sometimes all the documentation you will have on something is an implementation written in C. And the C language is so universal that it actually has an ANSI standard, the same way we have a standard for how long a meter is or how much a kilogram weighs — pretty cool, huh? You can read more about it on the ANSI website or Wikipedia.

So, speaking of implementations — this is now getting to the bigger picture for this read. It's critical to remember that as we learn about TLS and compare 1.2 to 1.3, we are talking about standards. Let's take a look at the TLS 1.3 client hello:

TLS 1.3 Client Hello: 

A preferred protocol version designation for TLS 1.2 Backward Compatibility 
A 32 byte randomly generated structure
A legacy session ID – another BC component for TLS 1.2 
A list of supported cipher suites, now trimmed to AEAD cipher + hash pairs
Legacy compression methods – BC for TLS 1.2, otherwise one byte set to zero
Extension information

The modification to the TLS protocol shows up right away in the hello message, which has already gone through the trouble of sending key shares for the groups it guesses the server is likely to support. This is the basis for the improvement model revealing itself in the first hello message, leading to a more performant interaction. Remember, there are only two more flights until a symmetrically encrypted link is established and communication begins.
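You can poke at the new handshake from Python's standard library, assuming Python 3.7+ built against OpenSSL 1.1.1 or newer. A quick sketch, not a full client:

```python
import ssl

# Require the TLS 1.3 handshake on any connection made with this context.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# On a real connection you would then confirm the negotiated version:
#   with ctx.wrap_socket(sock, server_hostname=host) as ssock:
#       assert ssock.version() == "TLSv1.3"
```

Handshakes against servers that only speak TLS 1.2 will simply fail with this context, which makes it a handy way to test what a given endpoint actually supports.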

Below is an excerpt from a paper published by the University of Oxford on the formal analysis of the TLS 1.3 standard using the Tamarin prover, by Cas Cremers et al.:

The default mode of TLS 1.3 allows for ephemeral Diffie–Hellman (DH) keys to be established either over a finite field or using elliptic curves. In an initial (EC)DHE handshake, as depicted in Figure 1, the client sends a ClientHello message containing a random nonce, i.e. a freshly generated random value, and a list of symmetric algorithms. The client also sends a set of DH key shares and the associated groups, KeyShare, and potentially some other extensions.

This is figure 1, taken straight from the paper in question.

The magic of this improvement lies in the assumption of a usable key-exchange method: the client sends an ECDHE KeyShare in the first request along with its very small list of supported ciphers. This results in a one-round-trip handshake instead of the two full round trips (or more) that TLS 1.2 needed. Massive improvement.

If you want to check out what I mean by spooky, I recommend watching a very succinct and clean review of Elliptic Curve Crypto with a good explanation of the math behind the Diffie-Hellman key exchange by Will Raedy: Link

On a final note before we move on… The important thing that I want to remember here is that regardless of this new standard, and regardless of someone saying:

"I'm using TLS 1.3",

you need to look a little deeper than that pretty little fractionated numeric value. When you look a little closer, you will find that TLS is a design standard, much like ISO 27001 is a standard. Standards have versions, and they also have variability in how they are implemented.

Go here: https://tls.ctf.network and watch what happens!

At the time of this writing, there is a script on the page that tests which version you are using. Visiting it with Chrome Version 68.0.3440.106 (Official Build) (64-bit), I got the following message:

You are connecting with TLSv1.3 Draft 23.

And… whoah… Yesterday Firefox was connecting to said page and the message came back with Draft 28. Now the page won't even render… strange. Firefox Quantum version 61.0.2 seems to have a problem, as I'm getting a single line of text in the response from the ctf domain that says:

You are connecting with TLSv1.2, you can enable TLS 1.3 using the latest Chrome Canary (chrome://flags/#ssl-version-max) or Firefox Nightly.

Firefox seems to be having strange difficulties of late…

Anyway… Let's turn our attention over to the GitHub repo that tracks all the mainstream TLS implementation projects here.

There are about 20 implementations there, each specifying the draft of TLS 1.3 from the IETF that it uses as its frame of reference. One in particular you may have heard of is BoringSSL, which was spawned by a fork from Google back in mid-2014 when they decided to sunset their use of OpenSSL. You can see it is written in C, works with two drafts, and directly references RFC 8446 ( don't you just love those? :)

Key takeaways:

  • TLS 1.3 is ready for action ( symmetrically encrypted communication ) within one round trip between the client and server, where TLS 1.2 needed two full round trips before application data could flow.
  • TLS 1.3 accomplishes this feat by making a key-sharing assumption: the client speculatively sends Diffie-Hellman key shares along with its supported cipher suites.
  • The cipher suite options make this really exciting because instead of the hundreds registered for earlier versions, there are five.

Some things we will look into in our part 2 article:

P-256
X25519
HelloRetryRequest
resumption
0-RTT
KeyUpdate

Spooky! 🙂

TLS 1.3 is here. Why? — Ya suspect!

I was reading up on the new spec for TLS 1.3 and I caught a snippet of text that stuck out.

“TLS 1.3 also enables forward secrecy by default which means that the compromise of long term secrets used in the protocol does not allow the decryption of data communicated while those long term secrets were in use. As a result, current communications will remain secure even if future communications are compromised.”

Current communications will remain secure? I would sure hope so! What are they talking about? Why would we need to worry about current traffic remaining secure in the future?

I smell something cool about to be learnt here. Put your thinking gloves on and keep your ears to the grindstone cuz’ here we go!  (“Ya Suspect!”)

Let's see if we can take this apart, as it is alluding to several things:

Forward Secrecy: ( or Perfect Forward Secrecy ) Using an ephemeral shared secret for encrypting payloads, such that an attacker cannot reuse it to attack other transmissions.
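To make "ephemeral shared secret" concrete, here is a toy Diffie-Hellman run in Python using the classic textbook group (p = 23, g = 5). This is deliberately tiny and insecure, purely to show that each session derives a fresh secret:

```python
import secrets

# Toy Diffie-Hellman group from the classic textbook example.
# Real deployments use 2048-bit groups or elliptic curves.
P, G = 23, 5

def dh_session() -> int:
    """One key agreement with fresh (ephemeral) private keys on both sides."""
    a = secrets.randbelow(P - 2) + 1   # client's ephemeral private key
    b = secrets.randbelow(P - 2) + 1   # server's ephemeral private key
    A, B = pow(G, a, P), pow(G, b, P)  # public values exchanged in the clear
    client_secret = pow(B, a, P)
    server_secret = pow(A, b, P)
    assert client_secret == server_secret  # both sides agree
    return client_secret

# Two sessions, two independently derived secrets: compromising the keys
# of one session tells an attacker nothing about the other.
s1, s2 = dh_session(), dh_session()
```

The private exponents are thrown away after each session, which is exactly what makes the secrecy "forward": there is nothing left on disk for an attacker to steal later.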

Long-Term Secrets: I know what they mean when they say this, but I'd like to come up with a precise definition of a long-term secret… we'll come back to this one.

Current Communications Remaining Secure: This is the part that really got me. Why would someone have to worry about this? This is absolutely savage. What is going on here? Well… here is what I can deduce: if future communications are compromised, that implies that someone has gotten hold of a key to decrypt an intercepted payload and done just that… but the need to protect encrypted payloads transmitted in the past also implies that THEY HAVE THE TRAFFIC.

Are you kidding me? They are on your network running a rogue service with access to an outbound port. Let's get serious about what this means. Attack surface is everything; if you do not have control of what is running in your network, you have bigger problems than TLS 1.3 will address. This is making me too upset and I need to go calm down. While I'm counting to ten, let's have you go back to defining the meaning of Long-Term Secrets.

The Slack Developer Blog remarked about the usage of a Long Term Secret when they introduced Request Signing and Mutual TLS Verification for their API integrations:

Both Request Signing and Mutual TLS allow verification without sending a secret along with every request, preventing the exposure of long-term secrets — even in the presence of someone intercepting a request or TLS-terminating infrastructure that should not have access to the secret.

Seeing as they used to issue a predetermined secret over HTTP, I can see how they made the move to protect their integrations from spoofed requests, but this still doesn't give us a complete picture of this secret and what makes it long term. Let's keep digging…

And… yay! It looks like I found something perfect ( ha ) for describing this kind of secret… I'm looking forward ( okay, I'll stop ) to seeing where it leads. John Deters scribbled out some prose on the Cryptography Stack Exchange which encapsulates this nicely:

A long-term key is one that is deliberately stored somewhere, either on a computer disk, flash memory, or even printed on paper. The key is intended to be used at multiple points in time, such as “I will use this key to encrypt this secret file today, and use it again to decrypt my secret next week.” A long term key can be used for any purpose, including stored information as well as transient communications.

So, what does this point us to? Certificates! Private keys! The most protected elements of the cryptography process. Now, as my intolerance for ambiguous verbiage has made itself known to all on this soul-searching quest to find meaning ( for a word :), the beauty of the analyst's quest for exacting definitions is on display. Sometimes you just have to go hunt some things down and make the kill… the best part of which is all the other little mysteries and rabbit holes that make themselves known. If we read further down the page, we see John continue with ( may I call you John? ) a specific point of interest:

* Note that an attacker can certainly record the session key exchange, and if they can break the key exchange cryptography they can decrypt the session key.

“Do you know what this means?” “It means that this… damn… thing… doesn’t work at all!”

Key Exchange Cryptography?! What does that mean!!? How exciting… a new snake pit for us to jump into. We will talk about that next time… but for now, the moral of this story has really been the underlying reason Forward Secrecy matters: the fact that you may have someone on your network or on your workstation running a tcpdump of every network request in sight. It sort of parallels the focus on output sanitization for mitigating your XSS issues.

Take care. And remember: Ya suspect!

Reviewing the improvements to Burp Suite – The Crawler + Basic Setup

The Crawler – Still arachnid based, never fear.

I have always wondered how application crawlers were going to deal with infinitely crawl-able application spaces that self-enumerate your mapping efforts into oblivion. Now that Burp's new crawler can map a given application through induction, I think it will greatly speed up the process and present data in a much more understandable way. I will post more once I have tried it out.

But first!… I think that Haddix published a great starter intro to Burp Suite in case you haven’t used it before. Check it out here.

Great things to remember that Haddix points out:

  • Setting up multiple profiles in Chrome to keep Burp slender and fit
  • A VPN is useful for masking your IP in case your testing traffic gets you banned
  • Getting your Burp cert installed in the keychain on your Mac for Chrome can be tricky. Un-tricky it here.

 

 

DNS Security Distillation

Why is DNS security important? 

DNS is the lifeblood of the internet and a fundamental service that most of us take for granted. It is a highly specialized yet simple protocol that renders the vastly diverse resources that abound on the internet available to human users via a written linguistic abstraction. You interact with it every day when you type google.com, but you know this already, so I will spare you the remedial math. Because DNS functions via a mechanism that is invisible to us, it is easily manipulated without our knowledge. This article will demonstrate the methods and goals of bad actors that target the DNS attack surface.

In a nutshell ( or an oyster shell, for that matter ), DNS security is the discipline of preventing redirection attacks, or having your traffic routed to a place you don't want to go. It also involves insulating your DNS servers from being unwittingly conscripted into DDoS attacks.

For the most part, these kinds of attacks are discussed in highly abstracted terms that fail to describe the method of engagement, i.e., where the attacker has to be and what he has to have in order to execute the attack — (read: gripe) I find the omission of this detail to be a failing of most security articles — more on that later…

DNS Cache Poisoning: In its most exciting detail, cache poisoning is a form of DNS spoofing that occurs when forged DNS entries are injected into a resolver's cache and subsequently served as legitimate answers to querying hosts. Imagine opening up your phone's address book and tapping someone's name, just to have your phone call a different number!

Now unimagine it. After researching this for a while, it appears that this kind of attack ( the on-LAN, brute-force method of pushing answers at a DNS server after making a request and relying on a race condition ) has been largely mitigated by using a randomized source port along with a randomized QueryID.
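A back-of-the-envelope sketch in Python shows why that mitigation works (the usable-port figure below is a rough assumption):

```python
# Blind cache poisoning means forging an answer that matches both the
# pending query's 16-bit QueryID and its UDP source port.
query_id_space = 2 ** 16            # QueryID is a 16-bit field
port_space = 2 ** 16 - 1024         # rough count of usable source ports
guesses_needed = query_id_space * port_space

# With only the QueryID randomized, one in ~65,000 forged answers wins
# the race; randomizing the port as well pushes the combined space past
# four billion possibilities.
```

That extra factor of ~65,000 is the difference between a race an attacker can realistically win and one they usually cannot, at least for this blind off-path variant.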

2. Examples of being pwned via DNS: cache poisoning, reflection attacks
3. Describe malware's DNS necessity
4. How to address DNS security issues
5. What is all the fuss about 1.1.1.1?