XSS, CSRF (or XSRF) and SSRF are common vulnerabilities in modern web applications where an attacker tries to imitate either a legitimate client to an unsuspecting server or a legitimate server to another unsuspecting server. The basic underlying principle behind each of these attacks remains the same: performing actions on behalf of a legitimate entity. Let’s look at each of them in a bit more detail and learn how to protect our web applications against each of them.
XSS (Cross Site Scripting)
XSS or Cross Site Scripting occurs when an attacker manages to execute malicious script code in a victim’s browser, as the victim. Browsers store a lot of sensitive information, and some of it is used to identify a user on a website.
A script loaded from a website can access information stored on your browser through that website, which is how sessions work in your browser. That’s how Facebook or any other website knows to show you your personalized information and not someone else’s.
XSS occurs when an attacker gets control over the scripts running in your browser. If they can execute code, they can steal your login credentials or trick you into installing malware on your computer.
There are different kinds of XSS attacks and they depend on where the payload is stored.
Reflected XSS
A reflected XSS vulnerability occurs when a piece of data from the URL is reflected back into the page unsanitized, allowing script to be injected. This can be the result of a GET or a POST request, and it is especially severe with an unauthenticated GET request, since that URL can be shared on social media and anyone clicking on it gets compromised.
Remediation of reflected XSS – Sanitize or encode all user input before passing it back into the view.
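In practice you’d lean on your template engine’s auto-escaping or a library like DOMPurify, but the underlying idea is plain output encoding. A minimal sketch (escapeHtml, resultsHeading and query are just illustrative names):

function escapeHtml(input) {
  // encode the characters that have special meaning in HTML
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// reflecting a search query back into the page, encoded
resultsHeading.innerHTML = `Results for "${escapeHtml(query)}"`;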
Stored XSS
A stored XSS vulnerability occurs when a web application stores an XSS attack payload without sanitizing it and then displays it back to the same user or a different user. A notable recent example is British Airways website getting compromised and exposing sensitive data including credit card information of 380,000 transactions.
Remediation of stored XSS – It is the same as with reflected XSS: Sanitization of all user inputs before storing the data in the database.
DOM based XSS
Unlike reflected/stored XSS, DOM based XSS happens entirely on the client side. It can be the result of a user typing a string into an input field that gets parsed and executed as code. An attacker can trick a user into pasting a string into their browser that gets executed due to insecure parsing and compromises the user’s credentials.
Remediation against DOM based XSS – Display text as text, and nothing else. Instead of element.innerHTML, use element.innerText or element.textContent to ensure the data displayed back to a user is treated purely as text.
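The difference is easy to see with a hypothetical element and userInput variable:

// vulnerable: the string is parsed as HTML, so injected markup and scripts can run
element.innerHTML = userInput;

// safe: the string is rendered as plain text and never parsed as HTML
element.textContent = userInput;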
CSRF (Cross Site Request Forgery)
CSRF occurs when a malicious website makes a request to a legitimate server through an unsuspecting victim.
Web applications communicate with clients through HTTP requests. When a request is made, the browser attaches all the information it has stored for that website to the request, including login/authentication credentials stored in cookies.
If the web server doesn’t have protective measures, a request made through a legitimate website and one made through an attacker’s website look exactly the same (or can be forged to look the same). As a result, an attacker can make a request telling the victim’s bank to transfer $100 to the attacker’s account, and since the request is made from the victim’s browser and carries the victim’s cookies, the bank’s server will process it as a legitimate request.
Remediation of CSRF – CSRF can be prevented by requiring every unsafe request to carry a valid CSRF token that can only be found in the website’s own pages and changes on every use. Additionally, authentication/login cookies can be marked with the SameSite attribute, so that requests initiated from third party websites don’t carry the sensitive authentication cookies.
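Both measures are only a few lines in most frameworks. Here is a rough, self-contained sketch using Express (the route names and the in-memory token store are just for illustration; a real application would tie tokens to the user’s session and serve everything over HTTPS):

const express = require("express");
const crypto = require("crypto");

const app = express();
app.use(express.urlencoded({ extended: false }));

const csrfTokens = new Set(); // illustration only; tie tokens to the session in a real app

app.get("/transfer-form", (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  csrfTokens.add(token);
  // SameSite keeps third party sites from sending this cookie along with their requests
  res.cookie("session", "demo-session-id", { httpOnly: true, secure: true, sameSite: "lax" });
  res.send(`<form method="POST" action="/transfer">
    <input type="hidden" name="csrfToken" value="${token}">
    <button>Transfer</button>
  </form>`);
});

app.post("/transfer", (req, res) => {
  // reject any unsafe request that doesn't carry a token we issued
  if (!csrfTokens.delete(req.body.csrfToken)) {
    return res.status(403).send("Invalid CSRF token");
  }
  res.send("Transfer processed");
});

app.listen(3000);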
SSRF (Server Side Request Forgery)
SSRF is similar to CSRF, but instead of a compromised client making a request to an unsuspecting server, here a compromised server makes a request to itself or to another unsuspecting server.
Since a server might be a privileged node in the network, the attacker can make the server access and return sensitive information or perform privileged actions that the attacker’s account wouldn’t allow.
SSRF can also be used to trigger code execution in servers where the vulnerability can be exploited using the privileges of the server itself.
Remediation of SSRF – Any outgoing request needs to be explicitly allowed by the application, by maintaining an allowlist of domains and servers a given server may connect to. The scope of these requests should be kept as narrow as possible.
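A minimal sketch of such an allowlist check in Node 18+ or the browser (the hostnames are placeholders; a hardened version would also deal with redirects and DNS rebinding):

// only allow outgoing requests to hosts we explicitly trust
const ALLOWED_HOSTS = new Set(["api.partner.example.com", "images.example.com"]);

async function safeFetch(rawUrl) {
  const url = new URL(rawUrl);
  if (url.protocol !== "https:" || !ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`Outgoing request to ${url.hostname} is not allowed`);
  }
  return fetch(url);
}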
In conclusion
I hope that was an interesting quick read on some of the most common vulnerabilities in modern web applications. Injection and SSRF both feature in OWASP’s Top 10 for 2021, so it is definitely worth looking into them and protecting our web applications against these potential vulnerabilities.
ModSecurity is a web application firewall. It can protect your web application from the prying eyes of vulnerability scanners and attackers. It is extremely customizable, and when paired with OWASP’s Core Rule Set, covers quite a lot of web technologies and frameworks.
In this article, we’ll set up ModSecurity on an AWS EC2 Server running Nginx web server.
For this tutorial, we’re using AWS Lightsail’s Ubuntu image. Choose any instance size depending on your requirements. I’ll choose a $40/month instance with 8 GB RAM and 2 vCPUs just so that the compilation of ModSecurity is faster.
Once the instance is created, log into the instance with SSH and update the packages
$ sudo apt update && sudo apt upgrade -y
Install Nginx
$ sudo apt install nginx
Check which version of Nginx we got from our package manager. This will be needed when we compile against the Nginx source later.
$ nginx -v
I got the following output:
nginx version: nginx/1.18.0 (Ubuntu)
To make sure the webserver is successfully installed and running, simply visit the IP address of the server. It should look something similar to this:
Set up ModSecurity
First we’ll need to install compilation and other dependencies.
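On Ubuntu, a typical set looks something like this (the exact list can vary a bit between releases; the configure step below will complain if something is still missing):

$ sudo apt install -y git build-essential automake autoconf libtool pkg-config libpcre3-dev libxml2-dev libcurl4-openssl-dev libgeoip-dev liblmdb-dev libyajl-dev zlib1g-dev libssl-dev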
Next we’ll clone the ModSecurity repository into the /opt directory
$ cd /opt && sudo git clone --recursive https://github.com/SpiderLabs/ModSecurity && cd ModSecurity
Next we run the build script
$ sudo ./build.sh
Next we’ll run the configure script, which checks for all the dependencies needed for the compilation
$ sudo ./configure
It is possible that this command fails and reports dependencies that are still missing. You can simply google them with “install XYZ on Ubuntu” and run the configure command again. Ideally it will just exit without any errors. Next we start the actual compilation of ModSecurity
$ sudo make
A reason why I didn’t go with the smallest server was that this step is resource intensive and could take 15 minutes or more depending on your server’s CPU and memory.
If all went through, we can now install ModSecurity
$ sudo make install
If all went through without any errors, we have ModSecurity installed.
Set up ModSecurity <-> Nginx connector
We start off by downloading ModSecurity-Nginx and Nginx source code. Note that the version of Nginx in the next command must match the version installed on our system. For me, that’s 1.18.0 but it could be different for you.
$ cd /opt && sudo git clone https://github.com/SpiderLabs/ModSecurity-nginx.git
$ cd /opt && sudo wget http://nginx.org/download/nginx-1.18.0.tar.gz
Untar the Nginx source. Replace the Nginx version in the next command if needed.
$ sudo tar -xvf nginx-1.18.0.tar.gz
Next, we need to grab configure arguments. For that, run the nginx command with a capital ‘V’ flag.
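In my case that looked roughly like this (output trimmed). The last command below is what I then ran from inside the extracted Nginx source directory; note the extra --add-dynamic-module flag pointing at the connector we cloned:

$ nginx -V
nginx version: nginx/1.18.0 (Ubuntu)
configure arguments: --with-compat --with-http_ssl_module ... (a long list of flags)
$ cd /opt/nginx-1.18.0
$ sudo ./configure --with-compat --with-http_ssl_module ... --add-dynamic-module=/opt/ModSecurity-nginx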
Don’t copy the above command. You must use the configure arguments supplied by your installation of Nginx.
Next we build the modules
$ sudo make modules
This is a compilation step and may take a little while (a minute or so) to complete. The final step here is to copy the compiled modules to a place from where we can reference them from our Nginx config.
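Roughly, that means copying the .so into a modules directory, loading it from nginx.conf, and giving ModSecurity a configuration to read. The paths below are the ones I used; yours may differ:

$ sudo mkdir -p /etc/nginx/modules
$ sudo cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules/

Then load the module at the very top of /etc/nginx/nginx.conf:

load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;

Next, create a directory for the ModSecurity configuration and copy over the recommended config and the unicode mapping file from the source we compiled earlier:

$ sudo mkdir -p /etc/nginx/modsec
$ sudo cp /opt/ModSecurity/modsecurity.conf-recommended /etc/nginx/modsec/modsecurity.conf
$ sudo cp /opt/ModSecurity/unicode.mapping /etc/nginx/modsec/

Create /etc/nginx/modsec/main.conf with a single include for now:

Include /etc/nginx/modsec/modsecurity.conf

Finally, enable the connector in the server block you want protected (for example in /etc/nginx/sites-enabled/default):

modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;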
Set up OWASP Core Rule Set
OWASP’s Core Rule Set is a set of rules that covers most common frameworks and technologies, as well as signatures for common web application attack payloads. It is a good place to start if you don’t want to write custom rules for many common attacks.
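Installing it is mostly a matter of cloning the rules and including them from our main.conf (again, the paths are the ones I used):

$ cd /etc/nginx/modsec && sudo git clone https://github.com/coreruleset/coreruleset.git
$ sudo cp coreruleset/crs-setup.conf.example coreruleset/crs-setup.conf

Then add the rule set to /etc/nginx/modsec/main.conf below the existing include:

Include /etc/nginx/modsec/coreruleset/crs-setup.conf
Include /etc/nginx/modsec/coreruleset/rules/*.conf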
So far we’ve configured everything, but if we restart Nginx now it won’t filter attacks, only detect them, since the default operating mode of ModSecurity is to only log malicious requests. To change that, let’s open the file /etc/nginx/modsec/modsecurity.conf and change the line
SecRuleEngine DetectionOnly
to
SecRuleEngine On
For our changes to go live, we’ll need to restart Nginx.
$ sudo systemctl restart nginx
Let’s test our ModSecurity installation. Open your browser and send a sample payload in the GET parameter. It doesn’t have to be a real parameter, but just something that can trigger an XSS filter.
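Something like this should do (the parameter name doesn’t matter; the payload is what trips the rule), and Nginx should respond with a 403 Forbidden page instead of the usual content:

http://<your-server-ip>/?q=<script>alert(1)</script>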
That’s ideal. ModSecurity is working and blocking seemingly malicious requests to our web server. Any application that sits behind our web server is now protected against many generic web application attacks, including much of the OWASP Top 10, thanks to OWASP’s Core Rule Set.
In conclusion
It isn’t the most straightforward of installations, but it isn’t very difficult either. The hard part, however, starts now: getting rid of all the false positives and tweaking the installation so that it fits the needs of your specific web application. Depending on how complex an application you’re trying to protect, that can be fairly time consuming.
I’ll write a separate article in the future on how to tweak ModSecurity’s parameters and make it fit our needs.
That’s it for this article, thank you for reading!
WordPress has been powering my blog since the start of last year. In fact, migrating my Jekyll template to WordPress was one of the highlights of my new year 2021 and I’m very happy that I did, although I didn’t publish as much as I had hoped for. Fortunately, I’ve learned a lot more about WordPress over the course of a year than when I started. In this short primer, I hope to go into a bit more depth on how to securely run a self hosted WordPress website.
Prerequisites
Before we get started, there are a few things we need to have:
Self hosted WordPress installation with SSH access
1. Keep WordPress core, plugins and themes updated
I wish I could just sticky something like this on top of most of my articles, but most people trying to attack our websites don’t have the time or resources to develop and use 0days. They use existing exploits out in the wild, and some of these exploits can be months old, if not more. WordPress core and plugin authors can only do so much beyond promptly releasing patches for the security vulnerabilities they find.
So then it is up to us as site admins to make sure we patch as soon as is feasible. Having worked on many large codebases, I know automatic updating isn’t always possible or even desirable, but having an eye on the changelog can definitely help not get compromised.
I’d also recommend a web security helper plugin that sends alert emails when it detects outdated plugins / themes / core.
2. Fix file permissions
During development, many file and directory permissions are way too open, to make it easy to set up the website and all plugins. In production, however, the permissions can be dialed down a notch to prevent anyone with any access on the server from taking over the whole website.
Similarly, attackers typically upload PHP shells through the uploads functionality, and if code execution is disabled in the uploads directory, we make it much harder for this attack to succeed.
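As a rough baseline (assuming the site lives in /var/www/wordpress and the web server runs as www-data; adjust for your setup), something like this is a sensible starting point:

$ sudo chown -R www-data:www-data /var/www/wordpress
$ sudo find /var/www/wordpress -type d -exec chmod 755 {} \;
$ sudo find /var/www/wordpress -type f -exec chmod 644 {} \;
$ sudo chmod 600 /var/www/wordpress/wp-config.php

And on Nginx, a small location block can refuse to serve PHP from the uploads directory:

location ~* /wp-content/uploads/.*\.php$ {
    deny all;
}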
3. Enforce 2FA on administrator accounts
Administrator accounts have many powers on a WordPress website, and a compromised administrator account can lead to the uploading of PHP shell code, leading to command execution and server compromise.
To make sure admin accounts are extra secure, enforce 2FA on all administrator accounts. This can be done by any 2FA or login security plugin on the WordPress plugin store.
4. Set up auto banning of failed logins
Since WordPress doesn’t ship with any builtin way of auto-banning failed login attempts, we have to rely on plugins like WordFence. WordFence will need to be configured with options to block login attempts after a certain number of failed attempts.
WordFence can also help you disable execution in upload directories, block IP addresses making malicious requests and much more.
5. Enable regular backups
While we can take preventive measures against mishaps, we can never be sure. Hence it is imperative that the website is backed up regularly. Backing up can be done at multiple places. The database can be backed up separately from the static assets and files. There are many plugins, like WPVivid, that help you fine tune what gets backed up and where it gets stored. It is always nice if you can afford an external backup location, like AWS S3.
The hosting provider might also have ways of backing up the website. For example, AWS Lightsail has daily instance snapshots which backs up the entire disk.
6. Disable XML-RPC
If you don’t use plugins that rely on XML-RPC or the WordPress mobile app, it is wise to disable XML-RPC, which removes another attack surface widely abused by attackers. Many plugins allow disabling XML-RPC, including the aforementioned WordFence.
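If you’d rather not add a plugin just for this, a one-line filter in your theme’s functions.php (or a small must-use plugin) should also do the trick:

// tell WordPress to refuse all XML-RPC requests
add_filter( 'xmlrpc_enabled', '__return_false' );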
7. Disable file editing in WordPress admin
Disable editing of files from the WordPress admin, as that’s almost never a good idea, especially if you can achieve the same thing using more secure methods like SSH. To disable the file editor, simply add
define( 'DISALLOW_FILE_EDIT', true );
to your wp-config.php file.
8. Use a Web Application Firewall
A firewall plugin like Sucuri or WordFence can identify attack signatures and block malicious requests. Many also include IP address block lists that prevent known malicious IP addresses from reaching your WordPress website.
For more control, there’s ModSecurity. ModSecurity needs to be installed alongside the web server, and it can detect and block known attack signatures for not just WordPress but just about any popular web framework. It does require deeper technical know-how to set up and maintain, and a plugin might be a better approach for most people.
9. Don’t forget the usual web security measures
A WordPress website is, at the end of it all, a website. While there are WordPress specific ways of hardening a WordPress installation, there is also a whole plethora of best practices that apply to every website, including the WordPress ones.
Use HTTPS – SSL/TLS certificates are free, and usually come by default with many hosting providers and CDNs. Don’t forget to turn it on and enforce it in strict mode.
Use appropriate security headers – Headers tell the browser how to handle your website’s content. Many client side attacks can be mitigated by using the right set of headers (a minimal example follows after this list). A detailed list of useful headers can be found on OWASP’s website: https://owasp.org/www-project-secure-headers
Use CAPTCHA on login page – to prevent bot submissions and more sophisticated bruteforce attacks, enforce a CAPTCHA like reCaptcha on login page. WordFence supports this out of the box (needs an API key from Google).
Handle user input with care when using a custom theme – when using a custom theme that accepts user input in the form of query parameters to show filtered content, the regular best practices around user generated input have to be followed. Embedding user input in output can lead to Cross Site Scripting, while passing it straight to the database can lead to SQL Injection.
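As promised above, here is a minimal sketch of a few widely recommended headers in Nginx. The values are reasonable defaults rather than a prescription, and a Content-Security-Policy in particular needs to be tailored to your own site before you add it:

add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;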
In conclusion
I hope that was useful. If you have any questions around WordPress or suggestions to improve this article, feel free to reach out to me via email. Thank you for reading!
A long time ago I worked on a theme called Elementary for my Jekyll blog. The goal was simple, to create a website that just works, and works fast. In fact, I’ll just paste the line from the readme of the GitHub repository.
This is my personal blog’s Jekyll template that I’ve been optimizing for performance, accessibility, usability, readability and simplicity in general.
I personally do not approve of personal blogs bloated with hundreds of kilobytes of trackers and analytics code, and hence, this is an attempt at creating something that I’d be comfortable with using on my website.
The goal was accomplished. I managed to get a perfect score on many of the pages. But I wanted to write more and while on the go, and plaintext editing on phones is a pain. Then the other problem was to add it to git and push it. In short, working with a static blog from an Android phone wasn’t easy.
That’s when I moved to WordPress. I ported the theme to Elementary-WordPress, which is essentially the same theme but in a WordPress shell. It worked really well, but the problem was all the bloat that WordPress sends to the frontend. For a while I didn’t care enough. I was still serving a fast website, albeit with jQuery, emojis and other code that wasn’t getting used anywhere else.
Today, that changed. I finally took some time to optimize the website and got back my perfect 100/100 PageSpeed score. Here’s how I did it.
Remove jQuery
If your website isn’t ancient, there’s a good chance you’re not using jQuery. If some plugin you’re using depends on it, consider alternatives. Removing it will save you ~30KB and an HTTP request. Adding the following to the theme’s functions.php should do it.
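Something along these lines should work (hooked on wp_enqueue_scripts, which only runs on the public site):

// functions.php: stop loading WordPress's bundled jQuery on the frontend
add_action( 'wp_enqueue_scripts', function () {
    wp_deregister_script( 'jquery' );
}, 100 );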
Self-host your fonts
If you’re not super keen on using the smart browser detection functionality that Google Fonts offers and are happy only supporting modern browsers, simply downloading the font files and linking them with @font-face can save an additional DNS lookup and HTTP request.
Use font-display: optional property
I’m using the font-display: optional; CSS property in my @font-face rule and it pushed my PageSpeed score over the top. Essentially it prevents the CLS, or Cumulative Layout Shift, metric of Core Web Vitals from being affected by the page shifting around due to slow-loading font files.
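Together with self-hosting, the whole rule looks roughly like this (the font name and path are placeholders):

@font-face {
    font-family: "MyBlogFont";
    src: url("/fonts/myblogfont.woff2") format("woff2");
    font-display: optional;
}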
Cache rendered pages
Building pages to serve to users is expensive as it involves the database, but it isn’t something that needs to be done for every visitor viewing the same page. A plugin like W3 Total Cache coupled with a Memcached instance (which could be running on the same server as the website) enables caching of pages, among other resources, in memory, reducing the load on the server and improving performance for cache-hit pages.
Fix conflicting cache strategies
I’m using W3 Total Cache plugin that helps minify and cache CSS and JS files. But I wasn’t seeing any minification happening. Upon some reading, it turns out that CloudFlare’s minification conflicts with W3 Total Cache’s. Disabling it on CloudFlare’s side fixed the non-minification problem for me.
Use a CDN for asset delivery
Once the thing being delivered is optimized, it is a good idea to optimize the delivery pipeline as well. Since my server is in the same country as me, it is easy to make the mistake of thinking every visitor of the website sees a 50 millisecond time to connect to the server. The further a user is from the origin server, the longer it can take.
Hence, a global CDN like CloudFlare should be used, which can serve static content from the edge node physically closest to the visitor.
TODO: Inline all CSS and Javascript
The score doesn’t go beyond 100, but I’d still like to improve things further. For one, the little bit of CSS and JS that does exist doesn’t need two additional HTTP requests. Inlining that bit will mean that blog posts without an image, which for me are most of them, get served in only three HTTP requests: the document, the font file and the favicon. Pretty cool, huh?
Conclusion
I’m pretty pumped about the 100/100 score. WordPress has a reputation for being slow and bloated, but with some simple optimizations, it starts performing the way you’d expect some text on a page to perform.
We’re finally out of 2020, yaay! It has been, for lack of a better word, an interesting year. Not intending on becoming Abhi News Network, I’ll spare you from having to read about the events of the past year for the thousandth time. Like many people, I realized my full nerd potential and learned how to live indoors for weeks at a time. I also unlocked a new hobby, Chess. Some other things like traveling and in-person events definitely took a backseat but can’t do much about that.
This short post is about moving this blog back to WordPress. I say back, but the fact is that this website was never on WordPress. I started this blog on ghost.org back in early 2014, but had to quickly move it away from there in spite of absolutely loving Ghost (mostly because of the $5/month fees). Next up was Blogger before finally settling on GitHub Pages which, by the way, if you’re just starting out with blogging and can find your way around git on a terminal, you should give a try. Now, feeling the need for a much more elaborate CMS, I’ve migrated to WordPress running on AWS Lightsail. It does cost money, but this time I can afford it.
Before this blog existed, I used to write on WordPress on an older blog. That feels like an eternity ago, which it was in internet time. I used to write about latest smartphones and compare them against each other (nothing that actually needed to be done by hand, now that I think about it; 8mp vs 5mp camera, 1gb vs 2gb ram and so on). I would walk into Samsung stores and try to make ‘hands-on’ videos of their latest phones. I can’t imagine doing that today, mostly because of how much the smartphone industry has expanded since 2012-13. Also because it doesn’t interest me anymore.
With WordPress, I hope to be able to write on the go using nothing more than just a browser. “On the go” might take some more time to become a normal everyday phrase again, but when that happens, I’ll be ready with my Thinkpad and a backpack. To not need a text editor to write Markdown/HTML, terminal to commit and push, and to see previews without a developer server would be very liberating. I’m excited about this future.
I’ll end this article with a nice picture I took today. Hope you enjoy looking at it as much as I did looking at Stitch in my house today.
Today, we’re going to learn how to be a 10x anything. We’re going to do that by putting Slack on mute. But not the simple way. That works, but don’t expect 10x results. Also, this guide assumes that most of the distraction during work time comes from Slack. If that’s not the case for you, you might end up with ~4.5x results (scientific).
What we’ll essentially be doing is:
Set up Pi-hole on our home network. Pi-hole is a DNS based ad blocker which sinks requests if they’re for an ad network.
Use a Pomodoro app on KDE to trigger scripts when the focus session starts and ends. If you haven’t heard about the Pomodoro technique, read up more here: https://en.wikipedia.org/wiki/Pomodoro_Technique.
The scripts themselves will block any website / app that you don’t want distracting you when the focus time is on.
Pi-hole
Pi-hole is amazing! No, seriously. It blocks all ads and tracking, has a very good interface which supports custom rules for each device, groups and more. All of this is free, open source and pure. Did you star the repository already?
Pi-hole can not only block DNS queries for ad/tracking networks, but also anything else you ask it to. We’ll use it to block Slack on our network.
Raspberry Pi
We need something to run Pi-hole on, so grab a Pi Zero or a regular Raspberry Pi. A virtual machine or Docker container would work too, but then you need that machine running 24/7.
Pomodoro app that supports scripts
I’m sure your operating system of choice has a Pomodoro app made for it. The trick here is to find one that supports executing scripts on certain events like focus time start and end.
As you can see, Fokus on the KDE store supports this functionality and is perfect for me since I’m already on KDE.
Router with custom DNS setting
We need to change the DNS settings in our router and point them to our Raspberry Pi’s IP address. As a backup, we still keep 1.1.1.1. Most routers support this setting. If for some reason you’re not able to do it at the router level, you can still set a custom DNS server on each of your devices, which is a bit more work.
Ability to not overthink
Given how futile this whole exercise is, if you start asking yourself why you’re even doing any of this when you could just mute Slack, this exercise (or this blog) is not for you.
Tutorial (kinda)
The initial setup is very standard. Install Pi-hole on your Raspberry Pi by following their official guide. Get the Pomodoro app up and running.
Once that’s done, we create a directory on our computer and create two files in it: focus-start.sh and focus-end.sh and mark them executable
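Something like this (the directory name is entirely up to you):

$ mkdir -p ~/focus-scripts && cd ~/focus-scripts
$ touch focus-start.sh focus-end.sh
$ chmod +x focus-start.sh focus-end.sh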
As the file names suggest, the focus-start.sh will execute when our focus time starts and focus-end.sh executes when our focus time ends. It should look like this in Fokus, our pomodoro app.
So now that this is configured, we need to enable passwordless SSH access to our Raspberry Pi. I followed this guide here. What that enables is that just typing ssh pi@raspberrypi.local will log us into the Pi without a password.
Next, we open our files and add the commands to enable and disable Slack’s domain using a wildcard blacklist/whitelist entry.
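On my Pi-hole version the wildcard blacklist can be managed with the --wild flag, so the two scripts look roughly like this (double-check pihole --help for the exact flags on your version):

focus-start.sh:
#!/bin/bash
ssh pi@raspberrypi.local "pihole --wild slack.com"

focus-end.sh:
#!/bin/bash
ssh pi@raspberrypi.local "pihole --wild -d slack.com"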
The focus-end.sh command is only different in that it has the -d flag, which removes the wildcard domain from the blacklist. We can even try running that command in the terminal and verifying that it creates an entry in Pi-hole. Remember to substitute raspberrypi.local with your Raspberry Pi’s hostname / IP address.
That’s it. The other command works as expected too, removing slack.com’s wildcard entry at our discretion.
So, how does it (not) work?
Theoretically, we’re sinking any DNS query for slack.com and any of its subdomains, meaning the APIs won’t resolve once the blacklist entry has been created. But there are a couple of problems with our approach.
One, Slack’s messages are transferred over a WebSocket, and once that’s established (when you open the website / app), it doesn’t need DNS resolution to keep sending and receiving messages.
Two, even if it did, DNS queries are cached (by a variety of entities like browsers, operating systems etc.), so this isn’t fool proof and doesn’t start working the second it is turned on.
Third, many apps use their own DNS and have little regard for your home DNS settings. For example, I tried to block WhatsApp this way (using this list) and it just doesn’t work, at least on Android.
A complete fail then?
Not exactly. You can still block websites that you open in browsers, like reddit.com and youtube.com if you spend too much time on those like me. In any case, it is a fun way to learn about how web apps, DNS and ad blocking work and involves a lot of trial and error to get things to work. Oh and yes, we do have a network wide ad blocker which is what Pi-hole does best, so there’s also that.
I’ve long established that adding heavy duty analytics and tracking scripts to my blog pages isn’t the right thing to do. Personally, it is also a bit liberating to not know which article of mine is getting a lot of traffic and which isn’t, because then I’m not biased by what the internet is searching for and can write about pretty much anything that I feel like writing about. 10 programming languages you should learn in 2020 has exactly the same weight as Let me tell you a funny story from last night, so which one do you think I’d write about?
The analytics and tracking world has come a long way and is viewed very negatively in the light of recent internet incidents. But it started off very simple and had a very simple and non-malicious idea at its core: Getting to know your user better so that you can serve them better.
That thought made me search for a simple analytics solution that I could run on my blog for a couple of weeks and get enough insights to make informed decisions regarding the frontend design changes while not compromising on the privacy of the visitors. If I’m completely honest, I was also just curious to know these things with no agenda behind it.
I looked into Simple Analytics, a nice solution that does exactly what I needed (perhaps a bit more than that), but a little expensive for me at USD 19 a month. There are also self hosted analytics solutions like Plausible, but that was too much work for realizing this simple thought. So I decided to put something together quickly and the following is what I ended up implementing.
Client side JavaScript
On the client side, I needed to get the data that interested me. It was details like the browsers used by my visitors, platform, width of their screens etc. More technically, the user agent, platform, screen width, referrer and the current page’s url (although I don’t plan on using it for this article. Spoiler: One of my lowest effort articles is pulling more than half of all pageviews which is a bit saddening).
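The snippet I ended up with looks roughly like this (the endpoint URL and the field names are whatever you choose to collect):

// only collect anything if the visitor hasn't asked not to be tracked
if (navigator.doNotTrack !== "1") {
  const payload = {
    userAgent: navigator.userAgent,
    platform: navigator.platform,
    screenWidth: window.screen.width,
    referrer: document.referrer,
    url: window.location.href,
  };
  // sendBeacon POSTs the data without blocking the page
  navigator.sendBeacon("https://<your-analytics-endpoint>/handler", JSON.stringify(payload));
}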
There’s not much happening here. We just check whether the user prefers not to be tracked; otherwise we collect the desired data and POST it to our analytics endpoint using the navigator.sendBeacon API.
Server
We need to implement the endpoint that’s listening for the POST requests from our client browsers. I decided to go with Firebase’s functions for handling the request and Firebase’s realtime database to store the data.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const cors = require('cors')({ origin: true });

exports.handler = functions.https.onRequest(async (req, res) => {
  if (req.method === 'POST') {
    const snapshot = await admin.database().ref('/hit').push(JSON.parse(req.body));
    return cors(req, res, () => {
      res.json({ message: 'success' });
    });
  } else {
    res.json({ message: 'have a good day!' });
  }
});
Now this is super bad code for a variety of reasons, but it worked for my temporary needs. I deployed this, waited for a couple of weeks and had some data to answer some basic questions about my blog’s visitors.
Parsing data
So at this point I had let this code run long enough to have accumulated a couple of hundred entries. It was time to analyze. Firebase allows you to easily export the database in JSON format. Using some basic Python-fu, I created lists for each dimension and passed these lists to Python’s builtin collections.Counter (which is perfect since I’m only interested in aggregated stats), then took the top 5 most frequent items using the .most_common method. Finally, I plotted bar charts of these top 5 values for each dimension using Matplotlib to visualize the results.
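Roughly, the script looked like this (the export file name and field names depend on what you collected and how you exported it):

import json
from collections import Counter
import matplotlib.pyplot as plt

# Firebase's JSON export of the realtime database; entries were pushed under /hit
with open("export.json") as f:
    hits = list(json.load(f)["hit"].values())

for dimension in ("userAgent", "platform", "screenWidth", "referrer"):
    top5 = Counter(str(h.get(dimension)) for h in hits).most_common(5)
    labels, counts = zip(*top5)
    plt.figure()
    plt.bar(labels, counts)
    plt.title(f"Top 5: {dimension}")
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    plt.show()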
So that’s it for this little article. I’m happy with the outcome given how little effort went into this whole assignment. I hope you enjoyed reading it. As always, write me an email in case you have any comments!
I recently wrote a tutorial on getting started with web development. It was a frontend only (meaning covering only HTML, CSS and JavaScript), 5 day tutorial that covered very basic web development topics and concepts like HTML elements, CSS selectors and JavaScript language semantics. Along with learning the basics of frontend web development, the course takers built their very first website which was a simple, one page portfolio site listing their interests and some pictures.
I won’t go into the technicalities of the course itself, as that’s off topic. What I’d like to do in this little post is list some things I learned while writing the course, about writing the course and about writing your thoughts down in general.
If you’d like to test your understanding, try explaining it
“The person who says he knows what he thinks but cannot express it usually does not know what he thinks.”
— Mortimer Adler
The above quote from this interesting blog post about Feynman Technique on Farnam Street captures the gist of this learning. We’re good at convincing ourselves that we understand something when in reality we might not. It is similar to when we speak a language natively and are confident in our knowledge, but then when a language-learner asks us a simple question, we don’t have an explanation but rather know it ‘intuitively’.
As an example, when I was writing the chapter on JavaScript, I was tempted to write that arrow functions have replaced the ‘function()’ functions. I asked myself why I thought that was the case, and I didn’t have an answer. I had just ‘believed’ that to be the case.
Upon researching, I learned the differences in their workings and their use cases, and I came out of that a bit wiser than before. That was just one instance where I wrote something, then asked myself why I thought that was the case, and learned that that was in fact not the case.
The bottom line here is, if you want to learn something well or test your understanding, try explaining it. Interestingly, that’s just me rediscovering the Feynman technique.
Many obviously true beliefs that you hold probably aren’t true
This is an extension of the previous point. But with a different takeaway. Since we’ve now established that many of our beliefs are wrong, it is wise to not be too confident in them and always practice humility when it comes to your knowledge, and consequently, your opinions and worldview. In other words, while it is great to put in time and energy to learn something properly and have opinions about it, it is also important to be ready to accept that you could be wrong and change.
This point is best illustrated by the “Strong Opinions, Weakly Held” philosophy, best outlined by a little paragraph from this post.
A couple years ago, I was talking with the Institute’s Bob Johansen about wisdom, and he explained that – to deal with an uncertain future and still move forward – they advise people to have “strong opinions, which are weakly held.” They’ve been giving this advice for years, and I understand that it was first developed by Institute Director Paul Saffo. Bob explained that weak opinions are problematic because people aren’t inspired to develop the best arguments possible for them, or to put forth the energy required to test them. Bob explained that it was just as important, however, to not be too attached to what you believe because, otherwise, it undermines your ability to “see” and “hear” evidence that clashes with your opinions. This is what psychologists sometimes call the problem of “confirmation bias.”
While you’re at it, I recommend reading this article by Jeff Atwood where I first read it. There’s also this nice TED talk along the same lines.
Explaining is an art
It is not easy to explain why linear-gradient works the way it does. It is hard to explain parameters of a function to someone who has never written any code, and you cannot use words like parameters, arguments or function call before explaining them. In short, writing a beginners programming course is some work (who would’ve thought?!). Now I look back at all those books, articles and tutorials that taught me the basics of everything I’ve learned and realize how great it was to have had all of that top quality learning material for free on the internet.
My personal experience is that we don’t notice when something is very well written or explained, especially with technical writing and documentation. It feels very natural and in-flow. But try to remember the last badly written article that you read. It was exhausting, you had to re-read through paragraphs to make sense of the text and you probably didn’t even finish it. That’s why, the next time you read through something and don’t notice anything wrong, take a moment to appreciate the effort that might’ve gone into making it come across the way it does.
The role of an editor
I learned what an editor does while writing this course. I would typically submit a day’s work as a document, fairly confident that I had done a good job, only to find out the next day that the document had 200 new comments and edits. How was that even possible? I’m not a good writer, granted, but that many edits? I would genuinely fear submitting my work for editing, just like my younger self would fear exams for all the bad grades I could get.
But in this case, the editor is really there to make the text readable, to check that the sentences flow naturally and that there’s no discontinuity of thought (and, of course, to catch spelling and grammatical shortcomings in the text). All in all, after the edit the content doesn’t look anything like the initial draft I sent for editing. If you ever get the chance to have your work reviewed by an editor, don’t miss it. You’ll learn a lot.
Writing, for me, requires a lot more focus than programming
I realized how much more focus writing content needed compared to programming. I just could not do it in the office. Every 15 minutes I’d lose my train of thought to one distraction or another. I believe this could be just because I’m not used to the idea of writing content in the office, or writing professionally in general. I would end up taking work-from-home days to make sure I was making progress. I had not expected this to happen, especially after having been an amateur blogger for a while now. But there I was, trying to think of the next sentence while repeatedly reading the paragraphs above.
Having a continuous thought train for a multipart article is a lot more difficult than writing a one off article
So I feel quite comfortable writing something like this very post. It is not as long, and one can write the whole thing in a couple of sittings. There also aren’t a lot of different ways of presenting things here: just text followed by headings followed by more text. That’s quite a lot easier than writing a multi-day course, with each day three or four times as long as a typical blog post, with many screenshots, git commits, code snippets and, of course, text. It is important to keep track of all your resources, and any mistake you find later on means all the screenshots and git commits from that point on need to be updated, which is quite a hassle.
Have a plan or outline
To avoid finding technical faults or discontinuities in the text much later into writing the course, having a plan or an outline of the content is very important. Each chapter, and every topic in each chapter, should be outlined before even starting with the actual writing. Ideally, even the outline should be reviewed by someone who knows how learning works (yes, that’s an expertise), and the intended outcome should be communicated well in advance. You want to avoid making the course too difficult (and have people drop out after getting stuck) or too easy (and have people drop out after getting bored).
Know your audience’s technical competence
When you’re writing a beginner course in software development, you have to explain every bit of technicality, from creating a file with a special extension to what a git commit is. Any assumption you make regarding the ability of the reader to understand the course’s substance can backfire, resulting in many course takers abandoning the course midway or flooding the communication channels with their questions. To avoid this, it is important to know the technical competence of your target audience. You cannot cater to a wide range of expertise, and no matter what you write, some people are going to be left out. But that’s okay.
In closing
I’m glad I found this opportunity to do some professional writing. I learned some important aspects of writing, and I tried to share my experiences with you through this post. I hope you find something useful out of this. As always, thank you for reading!
As web developers we’ve used countless UI libraries that give us an interface for creating grids. Things have become much easier thanks to Grid and Flexbox in CSS, but it is still nice to have everything arranged in nice consistent classes for production use. Recently, I set out on a mini-mission to find a grid library for our new design system. In my search, I went through many libraries, their features and their source code. Most of them were 90% fit for my use case, but then I’d find some flaw that would put me off. This continued for a couple of hours, at the end of which I uttered the golden developer words “I know, I’ll just implement my own grid library”. The old folks here know what follows. They also know that, whether or not you realize this is a bad idea, the temptation to start working in an empty file without any dependencies is too real to just ignore.
Anyway, after spending some time baking my own grid library, and slowly realizing that in probably a week or so I’d end up with exactly the kind of code that I’d rejected earlier, given how similar everything was looking, I decided to go with something ready-made. But this little exercise of trying to make a grid library taught me a lot about how these libraries are implemented. I wish to share some of that excitement, and the learnings, with you in this article.
By the end of this article, I want you to have at least a basic understanding of how a grid library is written and be confident to dig into the source code in case you need to customize a library to fit your needs. Note that I’m using Flexbox. Bootstrap 3 used widths and floats, and Bootstrap 4 uses Flexbox. I believe you can also use CSS3 Grids. Use what feels natural to you.
What’s a grid system?
A grid means what you’d expect it to mean. It is a two dimensional cellular structure, like a cupboard or a chess board, and you can put/place stuff on/in it. In the context of web development and design, grids usually just define the vertical or columnar properties. The reason is that we do not want to design for horizontal scrolling, and assume that all navigation on a page will be based on vertical scrolling. As a result, whatever the screen width is, we take that as 100% and divide it into the number of columns that we prefer.
The first step is to set the number of columns that we’d like the designs to use. 12 is commonly used because with 12, we can divide the page into 1, 2, 3, 4, 6 and 12 parts (notice that those are just the divisors of 12). Then we set the gutter size. The gutter is the gap between two columns. Usually we set it to 1rem which, if we haven’t modified the root font size, should be 16px.
After that, we set breakpoints. Breakpoints are points where our UI changes in response to a change in the screen size. For modern websites, we have to cater to a wide variety of browsers and screen sizes, and defining a set of screen sizes is our first step towards it. For example
0-450px can be considered to be small mobiles
450px-750px can be large mobiles
750px-1000px can be tablets
1000px+ can be desktops
This is all very arbitrary, of course. We can set whatever sizes fit most of our users’ device types, or just copy what Bootstrap or other popular libraries do (in general, when in doubt, follow the standards).
Next is to actually implement designs that adhere to this grid system. Here we make an assumption that we have a UI hi-fi design mockup that was built using the same 12 column grid as base. For ease of implementation, we can define the CSS classes that we frequently use, which while not absolutely necessary, is helpful.
Basic CSS
While I mentioned that it isn’t absolutely necessary to have CSS classes that help us with grids, more often than not we’ll want to have some of them. The way we do it is we write classes for all the possible widths that a div might take on the screen, right from 1/12 to 12/12 (which is essentially 100% width).
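Abridged, that could look roughly like the following (the percentages are simply n/12):

.row {
  display: flex;
  flex-wrap: wrap;
}

.col-1  { width: 8.3333%; }
.col-2  { width: 16.6667%; }
.col-3  { width: 25%; }
.col-4  { width: 33.3333%; }
.col-6  { width: 50%; }
/* ...and so on for the remaining widths... */
.col-12 { width: 100%; }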
In this short snippet, a .row would be the wrapper class for one or more .col-X classes.
Styling our HTML
Consider the simple markup. This is a typical layout for a blog with a navigation section on the left, blog content in the center and a sidebar on the right with some widget.
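Something along these lines (the class names are the ones we’ll refer to below):

<div class="row">
  <nav class="navigation">...</nav>
  <article class="content">...</article>
  <aside class="blog-roll">...</aside>
</div>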
Now, if we want to split the page into three columns; left sidebar (25% or col-3 containing the “.navigation”), main content (50% or col-6 containing “.content”) and right sidebar (25% or col-3 containing “.blog-roll”), we just need to add the relevant classes to our markup
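Roughly like so (the surrounding markup stays the same):

<div class="row">
  <nav class="col-3 navigation">...</nav>
  <article class="col-6 content">...</article>
  <aside class="col-3 blog-roll">...</aside>
</div>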
It should give you a grid like the following.
Gutters
If you ignore the basic styling and borders, you’ll notice that there’s still no spacing between the columns (or gutters) here. We need spacing because otherwise we’ll have to add internal padding to all the columns, and we do not want content from one column sticking to the content of another. To have some of that, we define a gutter variable and add half-gutter-width padding on the right and left sides of our columns.
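In SCSS-flavoured form, that could look something like this:

:root {
  --gutter-half-width: 0.5rem;
}

.row {
  display: flex;
  flex-wrap: wrap;

  & > *[class^="col-"] {
    padding-left: var(--gutter-half-width);
    padding-right: var(--gutter-half-width);
  }
}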
Note that --gutter-half-width is just a variable in CSS that we access with the var() function. & > *[class^="col-"] selects all direct descendants of row class which have the col-* class set. :root selects the document root.
We don’t want gutters on the extreme left and extreme right of our grid. We compensate for that with negative margins on the .row class.
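Something like a couple of negative margins on the row takes care of that:

.row {
  margin-left: calc(-1 * var(--gutter-half-width));
  margin-right: calc(-1 * var(--gutter-half-width));
}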
If you were to re-run the CSS again, we’d have something like the following.
Mobile responsiveness
At this point, our grid system is ready. Ready for one screen size, that is. Let’s see what happens if we switch to mobile view?
I mean, it kind of does what we asked it to do, but if those columns were in fact two sidebars and one main content section, and this were displayed on a 320px screen, your users would be pretty irate (or worse, your website becomes popular on r/CrappyDesign). You don’t want that, do you?
To fix that, we’ll use the breakpoints that we discussed earlier. Essentially, we want three columns in a row on desktops (which we’ve already made), and a single full-width column per row on tablets and mobile phones. We start off by writing the media queries for each of our breakpoints.
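Using the breakpoints from earlier (small mobiles are the default, so they don’t need a query of their own):

/* large mobiles */
@media only screen and (min-width: 450px) { }

/* tablets */
@media only screen and (min-width: 750px) { }

/* desktops */
@media only screen and (min-width: 1000px) { }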
Next, we write the column CSS classes for each media query (same code, but we’ll rename them a bit so that we can tell which class is for which viewport). Notice the col-[viewport-name]-[column-width] format. You might recognize this from libraries like Bootstrap (for example, col-md-3 or col-xs-6).
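With sm/md/lg as the viewport names (pick whatever names you like), it looks roughly like this, abridged:

@media only screen and (min-width: 750px) {
  .col-md-3  { width: 25%; }
  .col-md-6  { width: 50%; }
  .col-md-12 { width: 100%; }
  /* ...and the rest of the widths... */
}

@media only screen and (min-width: 1000px) {
  .col-lg-3  { width: 25%; }
  .col-lg-6  { width: 50%; }
  .col-lg-12 { width: 100%; }
  /* ... */
}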
Now, we can edit our HTML to use these new classes. For each viewport, we add a class which tells the browser how wide it needs to be on that viewport. For example, our article section is 50% in width (or 6 out of 12 columns) on desktop, while 100% on tablet and mobile (or 12 out of 12 columns).
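For our blog layout from earlier, that could look like this:

<div class="row">
  <nav class="col-12 col-md-12 col-lg-3 navigation">...</nav>
  <article class="col-12 col-md-12 col-lg-6 content">...</article>
  <aside class="col-12 col-md-12 col-lg-3 blog-roll">...</aside>
</div>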
This will ensure our columns are 100% width on tablet and mobile screens. And because our classes are inside of media queries that only fire at the viewport width breakpoints, only the relevant class gets applied. As you can imagine, one can really fine tune how things look by selecting a higher number of breakpoints and designing for a more consistent UI across the spectrum of screen sizes.
Bonus – Hiding sections
Let’s say we decide that the blog-roll section isn’t super important on mobile screens, and should be removed. This is a common usecase; hiding specific blocks on specific viewports. And there’s a very easy way of doing this right in the grid system.
The little trick is to add hidden classes, just like our column classes, to our viewport media queries.
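For example, a class that hides an element only on small mobile screens (the hidden-sm name just mirrors our column naming):

@media only screen and (max-width: 450px) {
  .hidden-sm { display: none; }
}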
Now, to hide the blog-roll on mobile phones, we just add the hidden class to its classlist.
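Which, for our markup, is just:

<aside class="col-12 col-md-12 col-lg-3 hidden-sm blog-roll">...</aside>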
Code
I’ve posted truncated snippets above. The whole thing would look something like the following.
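Put together (and still abridged; the repeated column classes are elided), it is roughly:

:root {
  --gutter-half-width: 0.5rem;
}

.row {
  display: flex;
  flex-wrap: wrap;
  margin-left: calc(-1 * var(--gutter-half-width));
  margin-right: calc(-1 * var(--gutter-half-width));

  & > *[class^="col-"] {
    padding-left: var(--gutter-half-width);
    padding-right: var(--gutter-half-width);
  }
}

/* mobile-first base columns */
.col-1 { width: 8.3333%; }
/* ...col-2 through col-11... */
.col-12 { width: 100%; }

@media only screen and (max-width: 450px) {
  .hidden-sm { display: none; }
}

@media only screen and (min-width: 750px) {
  .col-md-1 { width: 8.3333%; }
  /* ...through... */
  .col-md-12 { width: 100%; }
}

@media only screen and (min-width: 1000px) {
  .col-lg-1 { width: 8.3333%; }
  /* ...through... */
  .col-lg-12 { width: 100%; }
}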
Possible extensions
The flexbox standard defines various ways of arranging things inside the container, both along the flex-direction axis (called the main axis) and the cross axis (the axis perpendicular to the main axis). If we were to extend this, we could define helper classes like .col-reverse (flex-direction: row-reverse) or .col-spread-out (justify-content: space-between) using these properties, depending on our use cases.
In closing
The flexbox standard is pretty elaborate, and extremely well documented. There are also good libraries built around Flexbox that provide what we just built, and much more, out of the box. In production, one should try to stick to libraries and not reinvent the wheel.
Having said that, it is also important to understand the libraries that we use. Now we know what actually happens when we use that familiar col-md-6 class from Bootstrap, and if need be, we won’t shy away from editing the source code to make the library fit our needs!
The best way to learn is to teach, they say. I totally agree, and that’s one reason I have so many articles on my blog explaining random topics. Part of the goal was always to understand the topic better myself.
I was reminded of it while I was preparing a workshop on introduction to web technologies for fellow Berliners who wish to get into tech. Then something interesting happened. I found the root of a problem that I had been struggling with for some time at work. I will jot down some notes on this entire experience, and try to tell you about the lessons learned.
Our Styleguide
We maintain a frontend styleguide (think: company-specific UI library). We have many HTML elements and CSS classes that make our text, buttons and cards look the way we want them to look across the webapps. There’s a little issue. Most textual styles are defined on HTML elements. So to get a large heading styled with our predefined font-size, font-family, color and a bunch of other styles, one would just use h1 (notice: no class needed).
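In CSS terms, the difference is between these two approaches (the property values are just placeholders):

/* what our styleguide does today: styles attached to the element itself */
h1 {
  font-size: 3rem;
  font-weight: 700;
}

/* the alternative: a class that can go on whichever element is semantically right */
.heading-1 {
  font-size: 3rem;
  font-weight: 700;
}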
This worked for us until now, and wherever a big heading was required, we would just throw in an h1. Similarly for paragraphs, lists and other textual elements. This made a lot of sense in the past as accessibility or SEO weren’t a concern. But things changed.
Multiple h1s
It all started when our contracted SEO agency told us that we had two h1s on a page. We looked at the page, and it made total sense. There were two ‘headings’ of super large size, and our styleguide’s h1 made total sense from a purely visual perspective. But the second heading was just a title in large font. It had no semantic significance. Now, our product’s main title and some random text have the same precedence on the page.
This is bad for SEO, no doubt, but this is also bad for screen readers and all mini browsers (like Apple Watch and the like) which rely on the HTML to convey semantic information, and not visual.
Thinking and problem solving
I went to our designers and had a discussion. I could just hack and overwrite the styles, but that wasn’t the point. I asked what could be done. We could change the design, create a new class that resembles heading-1s and then use that on a span, and so on. But we couldn’t figure out why we had run into this problem in the first place. Maybe we’re missing something, something obvious, we thought.
Conference on web basics
I attended a developer conference a couple of weeks ago. There we had the good fortune of listening to Håkon Wium Lie and Bruce Lawson. What struck me, apart from how much they cared about web standards and saving the web from the bloat hell that we’re hurtling towards, is how much one can accomplish just by sticking to web standards. One of the examples used was the Apple Watch, and the website in question was developed well before anyone imagined a browser on your wrist. If one just uses semantic HTML, one can be sure that their website will work on any device, whether already in existence or yet to arrive. And just like that, millions of well-designed websites started to work on the special Apple Watch browser.
This is important to note because there are usually multiple ways of doing something on the web. More often than not, there are a couple of right and many wrong ways. Part of our job as web developers is to ensure that our website isn’t just pretty visually but also correct semantically and structurally. This is to futureproof our creation.
The workshop
I was preparing for this workshop and thinking of various ways I could introduce web development to complete beginners. I referred to some nice articles and tried to understand the meaning of HTML and CSS myself. I tried to understand why reset.css and normalize.css are used, even though I’ve been using them for years. I came up with interesting analogies to explain the basic pillars of the web and as a result, improved my understanding of these constructs.
Lightbulb moment
After the workshop, when I went back to my codebase, I could see the problem staring right back at me. We had styled the HTML elements, and not created separate classes that we could then attach to our elements. This is the result of forgetting the basics and doing something the wrong way because it saves you from writing class="" for every HTML element, which to be fair, doesn’t seem that bad when you don’t differentiate between HTML and CSS and use a combination of the two to get the design right.
Conclusion
There are a couple of conclusions for me from this article. One is to learn and follow web standards. Semantic HTML is not at all hard, just some 120 tags in total. Then, understand what a markup language means, and how the semantics of a document are different from how it looks or works. Learn the rules of CSS selectors and how cascading works. Learn that HTML and CSS are declarative, and use them as much as possible. Only where it makes sense, introduce JavaScript. In general, keep abstractions to a minimum.