Monthly Archives: July 2018

Backend Development With Flask

Context

I distinctly remember how much I liked writing backends (well, I tried). I also remember thinking, I will never become a frontend engineer. Ever. Like many software developers in my circle, I hated writing CSS, and JS hadn't clicked until then. And these were the days before I knew anything about automated deployments. Deployment, for me, was spinning up a DigitalOcean droplet, installing everything manually, setting up the database (and copy-pasting the DB credentials into code), and then running the development server and keeping it running. Stop laughing, please.

Motivations

So I decided to pick up some backend skills again. The main reason was that I'll need a full-time job soon, and it would be much better if I could write the full stack of the web instead of just the frontend. Secondly, knowing the full stack is a superpower I wouldn't want to be without. It comes in incredibly handy when working on personal projects that use the web as an enabler. Thirdly, I wanted to refresh my Python skills. I liked Python a lot back in the day, but I had lost touch over the last couple of years. So with those goals, I started with Python and Flask.

Python & Flask

I liked Flask, but I'm still struggling with some basics, especially working with app instances for writing tests. Yes, there are a few differences working with the backend this time versus three years ago. I'm following (or trying to follow) best practices and writing tests for the code, and I have a nice CI pipeline that starts with testing the code and ends with deploying the app on Heroku. Most importantly, I have an excellent mentor for my backend adventures who's a badass Python programmer.
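Since app instances for tests are what tripped me up, here's a minimal sketch of the app-factory pattern that makes them manageable (the route and names are illustrative, and it assumes Flask is installed):

```python
# A minimal app-factory sketch (illustrative names; assumes Flask is installed).
# Creating the app inside a function, rather than at module level, lets each
# test build its own isolated instance with its own config.
from flask import Flask

def create_app(testing=False):
    app = Flask(__name__)
    app.config["TESTING"] = testing

    @app.route("/ping")
    def ping():
        return "pong"

    return app

# In a test, build a fresh app and use its test client:
app = create_app(testing=True)
client = app.test_client()
response = client.get("/ping")
print(response.get_data(as_text=True))  # pong
```

The nice part is that nothing is global: a test can create as many independent app instances as it wants without them stepping on each other.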

Flask is fun to work with. The framework is minimal, kinda like React. There's a lot of support online, and there are great plugins already available for most common functionality. Databases are one of my weakest points in web engineering, and I'm trying to experiment with a lot of things on the model layer with SQLAlchemy and a Postgres backend. One more novelty for me was asynchronous programming. In JavaScript, you had to beg for things to be synchronous. But here you face a different problem: if something is slow, the entire thread is blocked. To take care of things that are slow, say sending an email, one could use Celery with RabbitMQ as the broker. All of this is given to us ready-made by Heroku. So no more manual DevOps work, and fewer variables to worry about.

The other motive was to learn quality Python 3. In Python, you have one pythonic and many non-pythonic ways of doing things. There's no point in writing Python like C. I wish to learn the philosophy of the language so that I can make the right choices when deciding how to solve a problem. There's nothing like writing clean and elegant code that others can appreciate. In the last week, I also got exposed to a lot of different data structures that the Python standard library provides for specific use cases. In the right scenario, using an appropriate data structure can be the best optimization you can make to your code. I am looking forward to getting a hold of this as well.
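As a taste of those standard-library data structures, here are a few from `collections` that keep coming up (the examples are my own):

```python
# A few standard-library data structures that often beat hand-rolled code.
from collections import Counter, defaultdict, deque

# Counter: frequency counting in one line instead of a manual dict loop.
votes = Counter(["flask", "django", "flask", "flask"])
print(votes.most_common(1))  # [('flask', 3)]

# defaultdict: grouping without checking for missing keys first.
by_initial = defaultdict(list)
for name in ["alice", "bob", "anna"]:
    by_initial[name[0]].append(name)
print(dict(by_initial))  # {'a': ['alice', 'anna'], 'b': ['bob']}

# deque: O(1) appends and pops at both ends, ideal for queues.
queue = deque(["first", "second"])
queue.append("third")
print(queue.popleft())  # first
```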

Lastly, one thing that I never thought I'd do, but I'm doing, is trying to learn object-oriented programming. I had done some OO Python in the past, but it never clicked. Later, with JS, it was all about functional programming. Now, I've rediscovered OO programming and wish to relearn it, apply it and try to make it click. I like the contrasts between the two paradigms, and it could not be better explained than in this StackOverflow answer.

  • Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
  • Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations to existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.
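That contrast can be sketched in a few lines of Python (a toy Shape example of my own making):

```python
# Toy illustration of the two axes of extension from the quote above.
import math

# OO style: a fixed set of operations (area); easy to add new *things*.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:  # adding a new thing touches no existing code
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# Functional style: a fixed set of things; easy to add new *operations*.
def perimeter(shape):  # a new function; the existing classes are left alone
    if isinstance(shape, Circle):
        return 2 * math.pi * shape.r
    return 4 * shape.s

print(round(Circle(1).area(), 2))  # 3.14
print(perimeter(Square(2)))        # 8
```

Adding `Triangle` costs nothing in the OO style but forces a change to every function in the functional style, and vice versa for adding `perimeter`: that's exactly the trade-off the quote describes.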

Overall, I feel I'm becoming a little (very little indeed) more mature with programming. Instead of sticking to paradigms and trying to defend the one that I'm most comfortable with, I'm trying to see why those paradigms exist and what problems they help me solve. And since Python supports both object-oriented and functional programming, it will be fun to work on any such problems.

I'll write some technical articles on the subject in the near future, when I feel confident enough. Just wanted to give you an update on what's happening on my front in this one. Hope you found it useful. I'll leave you with an interesting video about 'Duck typing and asking forgiveness, not permission', which is a design pattern in Python. Thank you for reading.

ELI5 – How HTTPS Works

Let’s start with some basics. Just like when you want to talk to another person, you talk in a language that both of you understand, every system that wants to talk to another system needs to talk in a commonly understood language. Technically, we call it a protocol. HTTPS is a protocol, and so is English. Every protocol is designed with some goals in mind. For real-world languages, the goals are simple. They are usually communication, literature and so on. With computer protocols, the goals have to be more stringent. Usually, different computer protocols have very different purposes. For example, File Transfer Protocol (FTP) was (and still is) widely used for transferring files, Secure Shell (SSH) is used for remote administration and so on.

Note that we're only talking about application layer protocols in the Internet Protocol Suite. Once the appropriate protocol in the application layer creates a packet for transmission, it is encapsulated in many coverings, one by one, by all the layers beneath it. Each layer attaches its own header to the message, which then becomes the message for the next layer to attach its header to. The reverse of this process happens on the recipient's end. It is easier to imagine this process as peeling the layers of an onion.
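The onion analogy can be sketched in a few lines (the headers here are made-up strings, not real protocol formats):

```python
# A toy model of layered encapsulation: each layer prepends its header on
# the way down, and the receiver peels them off in reverse order.
headers = ["TCP|", "IP|", "ETH|"]  # transport, then network, then link

def encapsulate(payload):
    message = payload
    for header in headers:            # each layer wraps the previous message
        message = header + message
    return message

def decapsulate(message):
    for header in reversed(headers):  # peel the onion, outermost layer first
        assert message.startswith(header)
        message = message[len(header):]
    return message

wire = encapsulate("GET / HTTP/1.1")
print(wire)               # ETH|IP|TCP|GET / HTTP/1.1
print(decapsulate(wire))  # GET / HTTP/1.1
```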

So having that set, we'll start our discussion of HTTPS. HTTPS, or HTTP Secure, is an application layer protocol that encrypts HTTP traffic using TLS (Transport Layer Security) or its predecessor, SSL. The underlying application doesn't have to worry about HTTP versus HTTPS, and once the initial handshake is done, for the most part it is just an HTTP connection, one that runs over a secure tunnel. I've been a frontend engineer and I've never written any specific HTTPS code, ever. That's the magic of TLS.

What’s TLS?

So HTTP that is encrypted using TLS is HTTPS. Got it. But what about TLS then? For starters, TLS is a hybrid cryptosystem. It uses multiple cryptographic primitives underneath its hood to achieve its goals.

Aside on cryptographic primitives: Cryptographic primitives, like symmetric encryption, block ciphers and so on are designed by experts who know what they’re doing. The role of protocol implementers is to take these primitives and combine them in useful ways to accomplish certain goals.

TLS uses symmetric key encryption, asymmetric key encryption, and (sometimes) a message authentication code to establish an encrypted bidirectional data tunnel and transfer encrypted bits. We'll explore how each primitive is used to attain some goal in a bit. With these primitives, particularly with the public key infrastructure (PKI), TLS establishes the identity of one or both parties involved in a communication (your browser and the web server in most cases). Then, a key is derived at both ends using another primitive, Diffie-Hellman or RSA, which are asymmetric key algorithms. Once derived, this key can be used as the session key in a symmetric key algorithm like AES. If an authenticated encryption mode (such as GCM) is not used, then a MAC algorithm (such as HMAC) might also be needed. Also, a hashing algorithm (such as SHA-256) is used to authenticate the initial handshake (and as a PRF if HMAC is used). Let's follow a typical HTTPS handshake and see what we learn during it.

In the beginning…

In the beginning, there was no connection. You open your browser and type in nagekar.com. The following things will happen, more or less in that order.

  • Your browser sends a DNS resolution request for nagekar.com.
  • Your router (or any DNS resolution service) provides you with the IP address of the host.
  • Now the three-way TCP handshake that we studied in our networking classes happens (SYN -> SYN/ACK -> ACK).
  • After establishing a TCP connection, your browser makes a request to 104.28.11.84 for host nagekar.com. The server responds with a 301 Moved Permanently as my website is only accessible over HTTPS and with the WWW subdomain.
  • Now starts the TLS handshake. First, the client sends a client hello. It contains the following important pieces of data:
    • A random 28 byte string (later used for establishing session key).
    • Session ID (used for resuming a previously established session and avoiding the entire handshake altogether, 0 here because no previous sessions found).
    • Cipher suites supported by the client in order of preference.
    • Server name (this enables the server to identify which site’s certificate to offer to the client in case multiple websites are hosted from a single IP address, as in the case with most small/medium websites).
  • Then server sends a server hello which has the following important pieces of data:
    • Another random 28 byte string (later used for establishing the session key).
    • Cipher suite selected by the server (in our case, the server selected TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, which was our most preferred cipher suite).
  • At this point, both client and server have the necessary information to establish an encrypted tunnel, but one important detail is missing. Neither party has verified the identity of the other (and if this is not done, it really defeats the purpose of whatever came before, since an active man-in-the-middle adversary could easily break the scheme). This is done in the certificate message. In most cases, only the client verifies the identity of the server. Here's how it looks:
  • And this is exactly what you see when you click on the green padlock icon in your address bar and try to see more information about the certificate offered by the website.
  • At this point, the server hello is done. It is indicated in the message that the server won’t be asking the client for a certificate.
  • The server sends its half of the Diffie-Hellman key exchange in a separate Server Key Exchange message. Following this, the client sends the other half. After that, the client sends a Change Cipher Spec message, which means any subsequent message from the client will be encrypted with the schemes just negotiated. Lastly, the client sends its first encrypted message, an encrypted handshake.
  • Along similar lines, the server issues the client a Session Ticket, which the client can then use to resume connections and not go through the entire Diffie-Hellman procedure again (although it is valid only for 18 hours in our case). The server sends a Change Cipher Spec message, indicating that no more plaintext messages will be sent by the server. Lastly, the server sends its first encrypted message, an encrypted handshake, just like the client.
  • That’s it. We have established a secure connection to a computer on the other side of the planet and verified its identity. Magic!

Crypto Primitives

Let's discuss which goal of cryptography is achieved by which part of this entire handshake. Remember the cipher suite that the server chose? It was TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256.

ECDHE: Elliptic Curve Diffie-Hellman Ephemeral, as we saw, is used to establish a shared secret session key from the random values our client and the server exchanged (over an insecure channel). It is a key exchange algorithm.
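To see the idea behind the key exchange, here's a toy finite-field Diffie-Hellman in Python (TLS here used elliptic curves, and real parameters are far larger; these numbers are illustrative and NOT secure):

```python
# A toy Diffie-Hellman exchange: both sides derive the same secret even
# though only g^a and g^b ever cross the wire.
import secrets

p = 0xFFFFFFFB  # a public prime modulus (toy-sized; real DH uses 2048+ bits)
g = 5           # a public base

a = secrets.randbelow(p - 2) + 1  # client's private value, never sent
b = secrets.randbelow(p - 2) + 1  # server's private value, never sent

A = pow(g, a, p)  # client sends this in the clear
B = pow(g, b, p)  # server sends this in the clear

# Both sides arrive at the same shared secret without ever transmitting it:
client_secret = pow(B, a, p)  # (g^b)^a mod p
server_secret = pow(A, b, p)  # (g^a)^b mod p
assert client_secret == server_secret
```

An eavesdropper sees `p`, `g`, `A` and `B`, but recovering `a` or `b` from them is the discrete logarithm problem, which is believed to be hard.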

ECDSA: the Elliptic Curve Digital Signature Algorithm is used to verify the public key supplied by the server, in our case nagekar.com, issued for Cloudflare by COMODO.

AES 128 with GCM: AES is a block cipher. Being a symmetric key encryption algorithm, it is much faster than the asymmetric key ones, and hence used for encryption of all the data after the initial handshake is done. 128 is the size of the key in bits, which is sufficiently secure. GCM stands for Galois/Counter Mode, which is an encryption mode used with AES to provide authentication and integrity along with the regular confidentiality.
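Here's what using AES-128-GCM looks like with the `cryptography` package (assumed installed; the plaintext and the idea of flipping a bit are illustrative):

```python
# AES-128-GCM with the `cryptography` package: GCM gives confidentiality
# plus an authentication tag, so tampered ciphertext fails to decrypt
# instead of silently producing garbage.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must be unique per key

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'attack at dawn'

# Flipping a single bit makes decryption raise InvalidTag:
tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except Exception as exc:
    print(type(exc).__name__)  # InvalidTag
```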

SHA256: Since we're using AES with GCM, we won't be using this hash function for message authentication. However, since TLS 1.2, SHA-256 is used as a PRF. It is also used to verify that the content exchanged during the handshake was not tampered with.

Security Considerations

About trust: As you might have noticed, all the above steps were essentially so that two random computers can come up with a shared secret session key. The other part of this is Certificate Authorities. Why did we trust the certificate that the server sent us? Because it was signed by someone whom we trusted. At the end of it all, you still have to implicitly trust someone to verify identities. In this case, we're trusting COMODO to have signed only one certificate for the domain in question.

About browsers and updates: If you look at the version of TLS that we used, it is 1.2, which is not the latest. The cipher suite is also not the best we could've got. Why did that happen? Simple: because we were using an outdated browser which didn't support the strongest cipher suites and the latest version of TLS. Since that was a test machine, it doesn't matter a lot. On any up-to-date browser, this is what you should see.

About cryptographic primitives: We saw some of the best understood crypto primitives being used in the handshake. This is a pattern you'll see often while reading about cryptology. It is a sin to implement your own crypto, especially the primitives. Use a library that implements these primitives, or better yet, the entire cryptosystem.

About mathematics: The reason we think the above scheme is secure, that no data is leaked even though the key was generated from information sent in the clear, is that some of these cryptographic primitives are built on hard problems in mathematics. For example, since mathematicians believe that modular exponentiation is easy to compute but discrete logarithms are hard to calculate, we say that Diffie-Hellman (which makes use of discrete logarithms) is secure. Similarly with RSA: mathematicians believe that factoring a product of two large primes is a hard problem, hence RSA is considered secure as long as the numbers are large enough. Of course, a mathematical proof is not always available. For example, AES is considered secure, but there's no proof that it is. We think it must be secure because the brightest minds in cryptology have spent thousands of man hours trying to break the encryption algorithm and they haven't succeeded (yet?).

In Closing

As you can guess, a lot of important details are skipped in this article. There are two reasons for that. 1. I lack the necessary knowledge to simplify the deeper parts and 2. It would be boring to read if the post felt like a spec. If you wish to read more, refer to the list of references below this section.

References

Thank you for reading!

ELI5 – Deterministic Encryption

Suppose you have a database full of confidential information such as emails of users. As a responsible sysadmin, you’d not let such data exist in plaintext in your systems and therefore you decide to encrypt everything. But now, the application needs a searching functionality where users can see their emails in the system.

Essentially, you need to run a where email = ' ' query on the encrypted database, get all the rows that match, decrypt them and send the decrypted data to the application layer. With traditional encryption modes like CBC or modern authenticated encryption modes like GCM, this would be impossible (or extremely inefficient). This is where deterministic encryption comes into the picture.

Deterministic Encryption

Deterministic encryption is nothing fancy. Instead of using modes like CBC and CTR, where each block of the ciphertext depends on the previous block or a message counter, in deterministic encryption the data can be imagined as encrypted in ECB mode, or with the IV kept constant. No nonce is involved. Basically, a plaintext message M will always map to the same ciphertext C in a given deterministic encryption scheme under a particular key K.

Once the data is encrypted into ciphertext, it is sorted and stored. Now, when a search term comes up, it is encrypted using the same key, and the database is queried for this ciphertext, which returns all the rows that match. The application can then decrypt these rows with the given key. The search takes logarithmic time (since, for the database, this is a normal text search) and the database never sees any data in plaintext.
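Here's a small sketch of that search flow. As a stand-in for a real deterministic cipher, I'm using a keyed HMAC to derive a deterministic token per value (a "blind index"); unlike real deterministic encryption it can't be decrypted back, but the lookup mechanics are the same:

```python
# Equality search over "encrypted" values: the database only ever sees
# deterministic tokens, never plaintext emails. (HMAC is a stand-in for a
# deterministic cipher here; a real scheme would also store a decryptable
# ciphertext alongside the token.)
import hmac
import hashlib

key = b"a-secret-key-known-only-to-the-app"

def token(plaintext):
    # same plaintext + same key -> same token, so equality is searchable
    return hmac.new(key, plaintext.encode(), hashlib.sha256).hexdigest()

# "Database": rows indexed by token instead of by plaintext email.
rows = {token(email): row_id
        for row_id, email in enumerate(["a@x.com", "b@y.com", "c@z.com"])}

# Search: tokenize the query the same way and look it up.
print(rows[token("b@y.com")])  # 1

# The flip side: identical plaintexts produce identical tokens,
# which is exactly the equality leak discussed below.
print(token("a@x.com") == token("a@x.com"))  # True
```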

Even with all of this, deterministic encryption faces all the issues that plague encryption modes like ECB. Namely, if two plaintexts are the same, their ciphertexts will be equivalent as well, thus leaking information to an adversary. Formally, we say that deterministic encryption can never be semantically secure under a chosen plaintext attack. That doesn't diminish its value when we have to deal with searchable encrypted datasets, though.


From Wikipedia's block cipher modes of operation page

This means that deterministic encryption cannot (or rather should not) be used when the message space M is small. It should only be used on values such as email addresses or usernames, which are designed to be unique in a database.

Further reading:
https://crypto.stackexchange.com/questions/6755/security-of-deterministic-encryption-scheme
https://en.wikipedia.org/wiki/Deterministic_encryption

Thank you for reading.

ELI5 – Format Preserving Encryption

Most block ciphers work on the bytes and bits of data. It doesn’t matter to them if the data is a video, a phone number, a credit card number or a meme of Patrick Star. And that’s good. It means that a generic block cipher can handle almost all the traffic encryption over TLS while we’re busy browsing Reddit with YouTube playing music in another tab. And since the underlying network handles binary data just as well and efficiently, no one complains.

And it is generally a bad practice to write domain-specific encryption algorithms. That is the reason you’ve never heard of AES Image Encryptors. But sometimes, for very specific use cases, it becomes necessary to have something like that. Something like an AES for images, so to speak.

The Problem

What if a legacy application needs to integrate encryption of some of its data without changing any of the existing data structures? AES works on 128-bit blocks of data, DES on 64-bit blocks, so neither will work for phone numbers or credit card numbers. At least, not without changing the underlying data structures required to store the encrypted data. And suppose we cannot change the underlying architecture for various reasons, one of them simply being that we do not control some of the machines passing our data. Yes, that's where we need format preserving encryption (FPE).

Format Preserving Encryption

While researching this topic, I came across a beautiful construct by cryptographers John Black and Phillip Rogaway. The construct is simple, and the best part is that it builds on an ordinary block cipher (for small domains like ours, one first builds a small pseudo-random permutation from the block cipher, for example with a Feistel network, over a domain just larger than the message space), thus inheriting all the goodies of the underlying block cipher. Let's look at a brief working of this method.

Let the message space be M. In the case of phone numbers, that's from 0 to 9,999,999,999 (that's for India, and while the actual message space is much smaller than that, there's no harm in assuming the entire range). The number of bits required to store this information is log2(10^10) ≈ 33.2, so 34 bits. We can thus fit the ciphertext in 34 bits, assuming no padding or integrity checks. Now imagine two sets, X and Y, with X a superset of Y. In this construct, X represents the set of all possible ciphertexts that we can get by encrypting each Mi with our cipher. Y represents the set of allowed ciphertexts, that is, ciphertexts less than or equal to the maximum value of our message space M (which is 9,999,999,999 in our example).

Now, when you encrypt a phone number with our small cipher, there's a good probability that the value will be less than or equal to the maximum phone number (10 digits, fitting in 34 bits). If that's the case, that's our answer. If not, encrypt this ciphertext and check again. Continue this until you reach a number that fits in 10 decimal digits.

Now while some of you might think (I certainly did) that this would result in a long loop, it does not (with high probability). This solution not only works but works efficiently: on average, the answer is found in about 2 iterations, with a better than 50% probability of finding it in each iteration. That's pretty cool if you'd ask me!
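Here's a toy cycle-walking sketch in Python. The "block cipher" is a tiny Feistel network built from HMAC-SHA256 (my own stand-in, not a vetted cipher); the part that matters is the loop that re-encrypts until the value lands inside the 10-digit space:

```python
# Toy cycle-walking FPE. The Feistel permutation covers a 34-bit domain
# (just larger than 10^10); encrypt() walks until the output is a valid
# 10-digit number. Illustrative only -- not a vetted cipher.
import hmac
import hashlib

KEY = b"toy-key"
HALF = 17                   # 34-bit block, split into two 17-bit halves
MASK = (1 << HALF) - 1
ROUNDS = 4
MAX_PHONE = 10**10          # allowed message space: 0 .. 9,999,999,999

def round_fn(r, half):
    digest = hmac.new(KEY, bytes([r]) + half.to_bytes(3, "big"),
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:3], "big") & MASK

def permute(x):             # a permutation over the full 34-bit domain
    left, right = x >> HALF, x & MASK
    for r in range(ROUNDS):
        left, right = right, left ^ round_fn(r, right)
    return (left << HALF) | right

def unpermute(x):           # Feistel rounds undone in reverse order
    left, right = x >> HALF, x & MASK
    for r in reversed(range(ROUNDS)):
        left, right = right ^ round_fn(r, left), left
    return (left << HALF) | right

def encrypt(phone):
    c = permute(phone)
    while c >= MAX_PHONE:   # cycle-walk until the value is in range
        c = permute(c)
    return c

def decrypt(ct):
    p = unpermute(ct)
    while p >= MAX_PHONE:   # walk back the same way
        p = unpermute(p)
    return p

ct = encrypt(9876543210)
print(0 <= ct < MAX_PHONE)        # True
print(decrypt(ct) == 9876543210)  # True
```

Decryption works because every intermediate value in the walk is out of range while the original plaintext is in range, so walking backwards stops exactly at the plaintext.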

In an ideal world, you’d want to rewrite the logic and underlying data structures such that native AES is possible. In this one, format-preserving encryption would work just fine. Thank you for reading.

ELI5 – Key Derivation Function

We've heard that AES and other block ciphers require specific key sizes: 128, 192 or 256 bits. But I don't ever remember having to calculate my password length based on the underlying key size. Never have I read on a website "passwords need to be 16 ASCII characters, 1 byte each, to make a total of 128 bits of key material". So what lies between me entering an arbitrarily sized password and the encryption algorithm receiving a nicely sized 128/256 bit key? Let's find that out in this ELI5.

Key Derivation Function

A Key Derivation Function (wait for it…) derives cryptographic key(s) from a password. Generally speaking, the passwords we humans come up with are something like "MyAwesomeDog007", which, while long and easy to remember, just doesn't have enough entropy for cryptographic applications. On the other hand, a derived key like "ml6xU*dwGS5rvE!dcIg6509w$$" (that's not a real key; a real key would in most cases be binary) is complex and entropy rich. This is the first purpose a KDF serves: to turn a password into uniform, random-looking key material suitable for use in other algorithms such as AES.

The second purpose that KDFs serve is that they make brute forcing infeasible. Due to the high computational costs of running a good KDF, brute forcing is typically not achievable for any half decent password. Of course, it won’t protect a user from a dictionary attack if she selects a password such as “password123”.

Working

A KDF takes an arbitrarily sized input that has low entropy (a user-supplied password, for example), runs some hash-based algorithms on it, and outputs a random-looking, fixed-size cryptographic key (which later becomes the input key to encryption and MACing algorithms). A KDF can be thought of as a pseudo-random function (PRF) which maps an input password to an output key. As a PRF, the input-output mappings should look completely random to an attacker, and in no circumstance should he be able to recover the original password from a cryptographic key (that is, the function should be one way). The high iteration count makes computing the KDF an expensive affair. This is acceptable for a legitimate user but will prevent brute forcing of the password.

Typically, key derivation functions employ keyed hash algorithms or HMAC. Cryptographic salt is used to prevent rainbow table attacks (precomputed hash lookups). The number of iterations (in the order of tens to hundreds of thousands) of the hash function is selected to slow down bruteforce attacks.

Implementations

A simple key derivation function is Password Based Key Derivation Function 2, PBKDF2. It takes as input a pseudo-random function (such as HMAC-SHA-256), the user-supplied password, a salt (64+ bits), the number of iterations and the desired length of the output key, and outputs a key of the specified length.
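PBKDF2 ships in Python's standard library, so a sketch is short (the parameter choices here are illustrative):

```python
# PBKDF2 from the standard library: password + salt + iteration count in,
# fixed-length key out. Parameter choices are illustrative.
import hashlib
import os

password = b"MyAwesomeDog007"
salt = os.urandom(16)      # random, stored alongside the derived key
iterations = 600_000       # tune upward as hardware gets faster

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(len(key))  # 32 -- ready for use as an AES-256 key

# Same inputs always give the same key; a different salt gives a different one.
again = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key == again)  # True
```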

Although PBKDF2 is still used and recommended, modern alternatives such as scrypt and Argon2 offer much better resistance to brute force.

ELI5 – Message Authentication Code

You need some urgent cash to buy today’s lunch. You throw a paper chit at your colleague, “Hey, I need you to transfer 100 bucks in my account number 10022, urgent”. Eve, a bad actor in your office, intercepts the chit, changes the 10022 to 10033, which is her account number, and forwards it to your friend. Your friend, intending to help you, transfers the amount and you both get duped!

The Problem

The above is not an overly rare event, far from it. Such attacks happen all the time on the internet, and the reason is the lack of (cryptographic) authenticity built into core internet protocols. We learned in Authenticated Encryption that confidentiality alone doesn't mean anything if the attacker can perform active attacks on your communication channel (just like Eve did). We need something better. We need MACs.

Message Authentication Code

As the name gives away, a MAC is an authentication code associated with a message which verifies the integrity of the message and, assuming that the key is only known to you and the message's sender, its authenticity. Just like with encryption, you give a MAC algorithm a message and a key, and it gives you a tag. This tag is unique to the message-key pair, and an attacker shouldn't be able to forge a valid tag for any message of his choice, even if he's given an arbitrarily large number of message-tag pairs to analyze.


From Wikipedia’s MAC page

In concept, a MAC is similar to a hash function: given an arbitrarily sized input, you get a fixed-size output (a digest), and this can be reproduced ('verified') on other machines as long as the same hash function is used. This is how your download manager ensures that the file it has downloaded from the internet is not broken: by calculating the hash digest and comparing it with the one the website claims. A MAC differs from a traditional hash function in that, along with a message input, it also takes a key, and as such, knowledge of the key as well as the underlying MAC algorithm is needed to verify (or create) a tag.

In fact, one of the most popular MAC algorithms is based on hash functions. The algorithm is called HMAC, for Hash-based Message Authentication Code. It works by hashing key material with the message while taking preventive measures against popular attacks on hash functions, such as length extension attacks. Any reasonable hash function can be used for MAC'ing, including SHA-1 and SHA-256 (MD5 isn't recommended).
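Sticking with the lunch-money example, here's HMAC from Python's standard library (the key and message are made up):

```python
# HMAC from the standard library: message + key in, tag out. Verification
# recomputes the tag and compares in constant time.
import hmac
import hashlib

key = b"shared-secret"  # known only to you and your colleague
message = b"transfer 100 bucks to account 10022"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(msg, received_tag):
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)  # constant-time compare

print(verify(message, tag))                                 # True
# Eve edits the account number, but can't forge a matching tag:
print(verify(b"transfer 100 bucks to account 10033", tag))  # False
```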

Encryption of the underlying data is not a prerequisite for using a MAC; MACs can be used irrespective of whether the data being MAC'd needs confidentiality or not. Use MACs whenever data integrity is needed. One caveat to look out for: MAC algorithms by themselves do not prevent replay attacks.

Aside on Replay attacks: A replay attack may happen when, say, you owe Eve some money. You send a note with Eve for your bank saying, “Please give Eve Rs.100 from my account, Signed: Bob”. Now there’s nothing preventing Eve from being greedy and using that same note again some days later. This is prevented in the real world by making cheques unique and one-time use only. Similarly, ciphertexts must embed information (such as packet number, timestamp, session counter etc) that will expire once received and not let Eve re-send it at a later time.

Thank you for reading.

ELI5 – Authenticated Encryption

The core goals of cryptography, and of any application of cryptography, are confidentiality, integrity, and authenticity. Let's begin with a short one-liner on each:

  • Confidentiality: No one should be able to read the contents of the message except the intended recipient.
  • Integrity: No one should be able to tamper with the message without being noticed.
  • Authenticity: The recipient should be able to confirm that the message indeed came from the sender.

There are other goals that we do not need to touch upon in this article, such as non-repudiation and plausible deniability.

The Problem

Now, the problem with using just an encryption algorithm like AES with a non-authenticating mode like CBC is that anyone can change the ciphertext during transmission. And while you might think, "but the modified ciphertext will, with high probability, decrypt to gibberish", that isn't the right argument, because the recipient has no way of knowing for sure, which is a problem, a huge one.

Secondly, there's also no way of knowing if the message was sent by the person you're expecting it from. It might have come from any middleman intercepting your network, and you wouldn't be able to tell the difference. For this reason, encryption without authentication and integrity defeats the purpose of encryption. An example of this in the real world is when you see an error such as the following:


https://support.mozilla.org/en-US/kb/what-does-your-connection-is-not-secure-mean

While this can mean that the encryption mode used by the website is weak, more often than not it means that the browser was able to establish a secure connection but the identity of the website could not be verified. Even if the connection is encrypted, not knowing whether the message really came from your intended recipient, or whether it has been tampered with, defeats the purpose of using cryptography.

Enter Authenticated Encryption

Authenticated encryption solves this problem by introducing authentication and integrity as freebies you get when you use an authenticated encryption mode along with a cipher such as AES. Examples of authenticated encryption modes include GCM and CCM. In fact, if you check the connection info of the site you're reading this on (click the green padlock icon and then select 'More Information' or something similar in Chrome and Firefox) and look at the technical details, you'll see something like this, depending on your browser.


Yes, I’m the most active visitor of my blog

Here, AES_128_GCM is used for symmetric encryption of the content you exchange with the server with AES providing confidentiality and GCM providing authentication and integrity. SHA256 is used to authenticate the initial handshake and as a pseudo-random function (PRF).

In a nutshell, these authenticated encryption modes usually take a message, encrypt it, then MAC the ciphertext (and IV), and append the MAC to the ciphertext. This is called Encrypt-then-MAC. Now if the ciphertext is changed, the MAC won't match, and the receiver can discard such messages without having to touch the contents of the ciphertext. There are variations of this method, namely MAC-then-Encrypt and MAC-and-Encrypt, each with its own trade-offs, although most experts recommend Encrypt-then-MAC.
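The Encrypt-then-MAC flow can be sketched like this. The "cipher" here is a toy XOR keystream derived from SHA-256 (illustrative only; use AES-GCM or a vetted library in practice); the structure is the point: encrypt, MAC the IV plus ciphertext, and verify the tag before decrypting anything:

```python
# A sketch of Encrypt-then-MAC composition with a toy stream cipher.
import hmac
import hashlib
import os

enc_key, mac_key = b"toy-enc-key", b"toy-mac-key"  # always separate keys

def keystream(iv, length):
    # hash(key | iv | counter) blocks concatenated -- a toy keystream
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(enc_key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(plaintext):
    iv = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(iv, len(plaintext))))
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return iv, ct, tag

def verify_and_decrypt(iv, ct, tag):
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("tag mismatch -- discard without decrypting")
    return bytes(c ^ k for c, k in zip(ct, keystream(iv, len(ct))))

iv, ct, tag = encrypt_then_mac(b"attack at dawn")
print(verify_and_decrypt(iv, ct, tag))  # b'attack at dawn'
```

Note that the receiver rejects a bad tag before ever running the decryption step, which is exactly the property that makes Encrypt-then-MAC the recommended composition.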



From wikipedia page on authenticated encryption. This is Encrypt-then-MAC

As you can imagine, this can be done manually (and until some years ago, it mostly was, by developers). But since it is easier (and much more secure) to standardize such modes and leave the secure implementation to the experts, these ready-made modes have seen wide adoption, and as you saw, you're currently using GCM to ensure the confidentiality, integrity, and authenticity of this very line. Thank you for reading!