Tag Archives: linux

Experiences With Macbook And MacOS – Part 2

In part 1 of this post, I wrote about getting a new laptop from/for work. I also dug into the similarities and differences between my old Thinkpad T440 and my work computer, a Macbook Pro.

Now, after more than 6 months of using the Macbook as my primary computer, I've switched back to my Thinkpad. In this article, I'll get into what I missed from the Thinkpad (which was slightly different from what I thought I'd miss) and what the tipping point for the switch was. This is a personal account, and as such, is shaped by the way I perceive things. It would probably have been very different if I'd used a Macbook from an early age and made my first shift to a GNU/Linux distro 6 months ago.


You lose a lot of control over your system when you run a proprietary operating system (and most software on it). That feels obvious when I write it that way, but there are things we take for granted when using a GNU-based operating system that we only notice when we switch to something like MacOS. There are ways of customizing MacOS, no doubt, but they aren't nearly as powerful (or simple) as the ones available on most GNU/Linux distros.

It is worth noting that this kind of control isn't strictly needed; it is a preference. But if you do prefer to be able to tinker with the way your system looks, feels and works, then you'll find that preference better respected on this side of the fence.


No matter how hard I tried, I just couldn't get myself to like the gestures and animations in MacOS. My brain is stuck in the old-school menus of Windows XP/7 (and hence, Xfce). Also, having spent a lot of time customizing my status bar in Xfce, I couldn't help but feel under-equipped trying to do the same on MacOS. I'm not a huge fan of the bubbly semi-transparent UI scheme either.

New learnings

Depending on how you see it, this can be regarded as both good and bad. Most things just work out of the box in MacOS. It is like buying a fully furnished house. That frees you from having to think about things like setting up Wifi, Bluetooth, screen brightness and so on. All keys on the keyboard do what they say they do, sound works out of the box, and even if you break something, you can always take it to someone to get it fixed (I'm not sure exactly how that last one works, but I'm guessing there's some kind of support available to Macbook customers).

While all this sounds perfect (and it is, if that's what you need), most of what I have learnt about computers came from fixing stuff that I broke while tinkering with it. A couple of months of MacOS usage made me realize that I wasn't learning even a fraction of what I would have otherwise, had I been using a broken system, full of shortcomings. I'll let a quote from Moxie Marlinspike's The Worst summarize my thoughts.

…no matter how much research they do, a partisan of the best might not ever know as much about motorcycles as the partisan of the worst who takes a series of hare-brained cross-country motorcycle trips on a bike that barely runs, and ends up learning a ton about how to fix their constantly breaking bike along the way. The partisan of the best will likely never know as much about sailing as the partisan of the worst who gets the shitty boat without a working engine that they can immediately afford, and has no choice but to learn how to enter tight spaces and maneuver under sail.

Speed disappointments

So, in exchange for all of that, I assumed I'd be using a system that's super smooth and faster than anything I'd ever used. Nope. Not even that. It was just as fast as my old computer. I think newer CPUs, and hardware in general, are quite overrated for most people's use cases. If you know even a tiny bit of what you're doing (and aren't part of that minority of computer users that actually needs powerful hardware), you can make a 5-10 year old computer work more or less the same as any modern computer.


While I moved away from my Thinkpad, and by extension, the *nix community, it didn't move away from me. I was constantly reminded of how customizable my desktop could've been at r/unixporn, how laptops over 10 years old are still the daily drivers of many at r/thinkpad, how easy getting your operating system to do something for you used to be at StackOverflow, and what good documentation should look like at the ArchWiki. All in all, I missed the community.

Tipping point

While all of this was slowly dawning on me, a major turning point was hearing Håkon Wium Lie and Bruce Lawson at DevBreak two months ago. There, I got to know about web standards and where things are heading. It was fun to be reminded of the things that excited me about the web in the first place. Then, Bruno walked me through a project of his that used some of the newer web APIs. I was blown away, and honestly a little embarrassed to have forgotten the passion with which people talk about the web and engineering things on it. I just wanted to get back to my old world.

Final thoughts

So I'm back on my Thinkpad, running a fresh installation of Arch and i3, and writing this on the same. Trying to get the function keys working never gets old for me, and the joy of finding the solution on the internet, implementing it, getting it to work, and in the process actually understanding what happens when you press a function key is something you only get by doing it. This, and countless experiences like it, are what make it so much fun to be in the GNU/Linux ecosystem.

Thank you for reading!

Experiences With Macbook And MacOS

It had been quite some time since I last used a computer that would connect to Wifi and Bluetooth out of the box, without making me scream at the screen and rip some hair out. But that changed when I got my work laptop, a Macbook Pro. I was very excited to unbox my first ever Apple product, even though I was never keen on buying one myself (or could afford one, for that matter).

The two devices

There were some surprises, both pleasant and otherwise. This post is going to be about those: about how it felt switching from GNU/Linux based distros to MacOS, from a Thinkpad to a Macbook. Note that one is a four year old second-hand laptop, while the other is much more recent, so this isn't an entirely fair comparison for absolute things like specs. The Macbook also costs about five times what I paid for the Thinkpad. These are very personal experiences and hence biased opinions. YMMV.


Both are excellent machines running excellent operating systems. If you want to get some work done, you couldn't go wrong with either (it mostly depends on how familiar you are with each). Both are faster than anything I have used in the past. Software support is good on either. Both feel very durable (I can only vouch for the Thinkpad, but I've seen people use Macbooks for years too). And finally, both are considered 'work' laptops marketed towards professionals.

Where The Macbook Shines

1. Display

Compare the resolutions

The display is easily the best part of using a Macbook on a daily basis. Text is crisp, colors pop out of the screen and the resolution is out of this world. For comparison, the 27-inch monitor that I use as a secondary display has fewer pixels. Working on it is a joy, especially as a frontend developer.

2. Build Quality & Bulk

I used to think my Thinkpad was sleek and light, but the Mac is on another level. I can casually hold it in one hand and walk. There’s no flex anywhere, and the whole thing feels very solid and well built.

3. Trackpad

It is nice to get gesture-support out of the box for once. I tried doing it for the Thinkpad on Xfce, but that attempt failed miserably. The trackpad on the Macbook is huge. It has two levels of clicking for added functionality (I use the dictionary/reference look-up often). It supports many phone-touchscreen-like functions like pinch-to-zoom and is very refined.

4. Battery Life

Again something that I had never experienced before, a super long battery life. I have all the battery optimizations disabled and never stop the dev servers and IDEs, but I still easily get through half the day without having to connect the charger.

5. Speakers

In terms of absolute quality, I don’t know where the speakers on the Macbook Pro stand. For me, they’re hands down the best laptop speakers I have ever experienced. Loud and clear.

What I Miss From The Thinkpad

1. Ruggedness

While the Macbook is premium and rich, if I had to pick a more durable laptop, I’d pick the Thinkpad. I’d never use the Macbook as carelessly as I do my Thinkpad, especially considering the economic consequences.

2. Keyboard

I tried to get used to the new keyboard, and I did. But whenever I go back and use my Thinkpad, I immediately realize why it is called the best in the business. Perfect click-iness, key travel and key shape. Typing is a joy on the Thinkpad.

3. I/O

Nothing new here, but it sucks to need a dongle just to be able to connect a USB drive or read an SD card (yes, I still use those regularly). HDMI for secondary display? Need a fancy cable for that. VGA? You from the past, bro?

All I’m trying to say is, I’d rather have too many options at the cost of elegance than too few at the cost of functionality.

4. Operating System

It is hard to make an unbiased pick, but I'd still choose Arch and Xfce over MacOS. Many little things from years of using Linux distros have spoilt me. OS updates used to make me happy; not anymore, with the Mac constantly bugging me to reboot just to update the OS. I don't remember that from Arch. The AUR (Arch User Repository) has (almost) every piece of software you'd ever need, and I miss that.

It is also about the customizability an OS offers, and the community that surrounds the laptop and the operating system (have you checked out the ArchWiki yet?).

5. Repair Costs

Thankfully, I've never had to repair either of them (I wouldn't have to do that for the Macbook anyway, since it is a company device), but I felt the need to add this point here for fairness. Thinkpad parts are available in abundance on the internet, and you can do most repairs on your own if you know how to use a screwdriver. A quick search for a Thinkpad T440 motherboard on eBay pops up results in the $50-$150 range depending on the configuration. An equivalent for the Macbook goes for around $600-$800.

In closing

As you can tell, there's no clear winner here, even for me personally. I genuinely think Apple's hardware is top notch, and I now kind of understand why many developers use Macbooks.

On the other hand, my heart still lies in the simple plains of Xfce: the ease of everyday operations, the confidence to open the back cover and do minor repairs, and the joy of just understanding what's on the system. Of course, as things progress, maybe the Mac ways will become second nature to me and I'll have a better understanding of this new system, which is nice.

It will be interesting to see how my thoughts take shape from here. Cheers and thanks for reading.

OverTheWire Bandit 27-33 Write-up

The last part of the Bandit challenges was relatively easy, with most of the flags attainable with basic git knowledge, except for the last restricted-shell escape. Try them here: OverTheWire Bandit

Bandit 27-28

This is as simple as it can get at this stage. Just clone the repo and cat the README.md file. The flag is in plaintext.

Bandit 28-29

In this stage, if you cat the README.md file, you'll find xxxxxxx in place of the flag. If you do a git log, you'll see that the password was entered and then removed. Just check out the previous commit with git checkout {hash} and you'll have your flag in the README.md.
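The recovery can be reproduced locally with a throwaway repo; here is a minimal sketch (the repo layout, commit messages and fake password are made up for illustration, not the actual Bandit ones):

```shell
# build a tiny repo that mimics the challenge:
# a secret committed first, then overwritten by a later commit
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo "password: s3cr3t" > README.md
git add README.md && git commit -qm "add password"
echo "password: xxxxxxx" > README.md
git commit -qam "remove password"

# the old content is still in history
git log --oneline                    # shows both commits
git checkout -q HEAD~1 -- README.md  # or: git checkout {hash} to visit the commit
cat README.md                        # the secret is back
```

Note that git checkout HEAD~1 -- README.md restores just the file in place; checking out the hash itself (as in the challenge) detaches HEAD onto that commit, which works just as well for reading the flag.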

Bandit 29-30

There's no commit history this time, and the README.md file says "no password in production", which is a clue. Do a git branch -r and you'll see a development branch. Check it out (git checkout dev) and cat README.md on this branch to get the flag.
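Locally, the situation looks something like this (the branch name and fake password are illustrative):

```shell
# a repo where the secret only exists on a side branch
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo "no password in production" > README.md
git add README.md && git commit -qm "init"
git checkout -qb dev                  # the 'development' branch
echo "password: s3cr3t" > README.md
git commit -qam "dev notes"
git checkout -q -                     # back to the default branch

git branch -a      # lists all branches (use -r on a clone for remote-only ones)
git checkout -q dev
cat README.md      # the secret
```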

Bandit 30-31

No password in previous commits or branches here. But if you do a git tag, you’ll see a tag called “secret”. Do a git show secret and you have your flag.
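This one is also easy to replay locally; the trick is that an annotated tag carries its own message (the tag name matches the challenge, the password is made up):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo hello > file && git add file && git commit -qm "init"

# an annotated tag stores a message alongside the commit it points to
git tag -a secret -m "password: s3cr3t"
git tag           # lists: secret
git show secret   # prints the tag message first, then the tagged commit
```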

Bandit 31-32

Add and commit any random file, remove the wildcard entry from .gitignore and push to origin. The flag is in the output of the push.
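A local sketch of the same flow, using a bare repo as a stand-in for the challenge's remote (on the real server, a server-side hook prints the flag when you push; here the push just succeeds silently). As an aside, git add -f bypasses .gitignore without having to edit it:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git            # stand-in for the remote
git clone -q origin.git work && cd work
git config user.email you@example.com && git config user.name you
echo '*.txt' > .gitignore
git add .gitignore && git commit -qm "ignore txt files"

echo "anything" > key.txt
git add -f key.txt                       # -f overrides the .gitignore entry
git commit -qm "add key"
git push -q origin HEAD                  # the challenge's hook would run here
```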

Bandit 32-33

This is a restricted-shell escape challenge, and a very interesting one. I urge you to think of creative ways around it before looking at the solution.

The shell converts every command into uppercase before executing it, so ls becomes LS, cd becomes CD, and nothing works.

One way around this behavior is symlinking a helper binary to an all-caps name. I chose vim for the purpose, but cat, less or more, anything would've worked. Symlink the binary into your temp directory under some all-caps name.

$ ln -s /usr/bin/vim /tmp/mytempdir/VIM

Now, simply typing ./vim (which the shell uppercases to ./VIM, resolving the symlink) will launch vim, and you can then read the flag file with :r /etc/bandit_pass/bandit33 from inside it.

Thank you for reading

System Stability & Moving Back To XFCE

One thing that I really hate, and I don't use that word very often while describing my computer preferences, is system crashes. It's one of those things; just unacceptable to me. You're working on something important, and all of a sudden, the DE (Desktop Environment) decides that it needs to restart itself, and you lose all of your windows, terminals and most importantly, context. Coming back from there is a 15-minute process in itself: logging back in, starting the browser, IDE, terminals, entering virtual environments, running test servers and so on. As you can tell, it can escalate from slight inconvenience to very frustrating in little time.

When I got my new laptop back in May, I decided to switch away from XFCE. To be honest, I did try installing XFCE but couldn't, due to some issue starting the DE. Since this was a fancier laptop with better hardware, I assumed I could afford to run a somewhat heavier DE for a better user experience (and my colleagues' Macbooks constantly reminded me that I was using an ancient-looking DE). I did some research and was split between KDE and GNOME 3.

The initial impressions of GNOME 3 were not very convincing (not that this was the first time I had tried GNOME 3 anyway). I never liked the gesture-like way of accessing windows and the quick menu; I'm more of a click-click person. But I decided to stick with it and see how it went, customizing whatever I could. After that, things looked up for a while. The more I used GNOME, the more I started to appreciate it. I brought back the 'conventional' application menu, a quick access bar on the left side with an extension called Dash to Dock, Pomodoro, and a bunch of widgets for the top bar (which by default is mostly empty).

A few issues persisted from the beginning. The most important one was memory and CPU usage. I looked it up and concluded that it is a general problem, not just my laptop. The problem is not high usage of system resources per se (which can even be a good thing, if you trust the kernel). The problem is seeing gnome-shell constantly use one full CPU core while idling, and 500MB-1GB of memory just after startup. Due to this, I was constantly facing situations where RAM usage would go over 90% and the system would start to lag. This was serious, but it wasn't the worst part.

I could've lived with a (slightly) laggy system, one that lags while opening the app drawer, for example (tip: create shortcuts to all the apps you frequently use to avoid GNOME 3's app drawer altogether), but the DE would also crash all of a sudden, wasting my time re-spawning everything. It was especially bad when it happened during work hours. That was a deal breaker. I tried to debug it, but couldn't convince myself to spend more time on it, as it wasn't making a lot of sense. I installed XFCE, got it working, and it felt like coming back to my countryside home after a vacation in the city. Felt good.

In conclusion, I think I'm biased here. I had a preconceived notion about GNOME 3, and I might have fallen for it. Maybe GNOME 3 is objectively better at many things that my bias didn't let me see. Don't get me wrong: GNOME 3 is a wonderful DE, and for someone who values the bells and whistles that come with it (I had three and four finger gesture support for once in my life. Thanks, GNOME), I think it is a perfect choice. For me, however, system stability is far more important than any secondary convenience feature.

An interesting thing I noticed about Mac OS users is that they always suspend their machines, rarely shutting them down. I've wanted to do that forever: just close the lid and be done with it. I never could on my old laptop, because new issues would creep in after resuming from the suspended state (failure to connect to wifi, the display staying black, USB ports not working, to name a few). No such issue is present on my Thinkpad, and as a result, I suspend it between uses and at night. The system is rock solid, even under heavy load. As an enthusiast, it gives me a lot of pride to mention that my last shutdown was nearly ten days ago.

22:00:09 up 9 days, 11:17, 1 user, load average: 0.84, 0.58, 0.59

I’m sure a lot of you reading this can relate to the pride of showing off uptimes and talking about system stability, or the joy of keeping your car running in top notch condition after years with proper service and care! This is similar. Hope you found this interesting. Thank you for reading.

Encrypted Backups Using Duplicity & Google Cloud Storage

I came across a utility called Duplicity a couple of days ago and found it very useful. I'll use this little post to write about what it is and how it can be used to create encrypted backups of not-so-frequently accessed data on servers that you do not trust. If you only care about the tool, jump to the Usage section.

Use Case

I was working on a little project that involved setting up RAID 1 using some spare 500 gig drives and a Raspberry Pi as the RAID controller. My problem was that I only had two drives, say A and B, and I needed to back up the data on one of them (say A) before setting up the RAID, in case something broke. I tried to use a friend's external hard disk, but didn't succeed in getting it to work. Finally, I found another hard disk (say C) in the 'trash' laptop a friend of mine gave me. So now I could get the data off the primary disk (A) onto this new disk (C), and I did manage to succeed in that. But this exposed a deeper question: how am I supposed to take care of around 400 GB of backup data, and is having a single copy of it even safe? Of course not.

If you remember my NextCloud solution on DigitalOcean, that worked great as a lightweight cloud for easy backups of my current set of camera pictures, and maybe contacts and such. But when it comes to archiving data, DigitalOcean gets a bit too expensive (~USD 10 for 100GB and so on with their block storage). However, their object store was relatively cheap, costing around USD 5 for 250GB of data per month. I needed something like this.

The second problem was upload speed. I own a modest 2.5Mbps connection, so uploading the data over my regular internet link was out of the question. Since I had Google Peering, I could make use of that. I looked up Google Cloud's solutions and found Google Cloud Storage, an AWS S3-like object store. They also offered USD 300 in free credits for the first year, so I decided to test it out. It worked great, and I got transfer speeds in excess of 24Mbps, which is what Google Peering does for you.

The last hurdle was finding an automated script to manage the sync. I was already a huge fan of rsync and was searching for a way of doing incremental backups (although not strictly required for archiving data) with some sort of encryption as well. That is where Duplicity came in.

Enter Duplicity

Duplicity is a tool that uses the rsync algorithm to do incremental, bandwidth-efficient backups, compressed into tarballs and encrypted with GPG. The tool supports a wide range of servers and protocols, and will work with anything from AWS S3 and Google Drive/Cloud to WebDAV or plain SSH. In typical *nix fashion, it does one little thing well, and the use cases are limited only by your creativity.


Usage

The setup requires you to install the Google Cloud SDK and Python 2's Boto, which serves as an interface to Google Cloud Storage. Then go ahead and sign up with Google Cloud if you haven't already, and set up a new cloud storage project (say my_backup). Make sure you enable Interoperable Access in /project/settings/interoperability and create a new access-key/secret-key pair. You'll need to copy these into the ~/.boto config file under gs_access_key_id and gs_secret_access_key respectively.
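For reference, the relevant part of the ~/.boto file looks something like this (the key values below are placeholders, not real credentials):

```
[Credentials]
gs_access_key_id = GOOGXXXXXXXXXXXXXXXX
gs_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```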

The last step is to test drive the entire setup. Create a directory with a couple of sample files (probably a slightly large video file as well), and run duplicity with the following command.

$ duplicity ~/backup_directory gs://my_backup

Enter the password when prompted. If all goes well, you'll see some transfer stats after the command finishes (the time taken depends on your transfer speed), and in your GC console, under your storage project's browser, you'll see something like the following image (though of course not a hundred difftars like mine; I'm in the process of backing up my archive).

If you do, then congrats! Those are the encrypted tar volumes of your data. Now, just to be sure, make a little change in one of the files and run the command again. You'll see that this time it takes almost no time, and if you refresh the GC storage browser, you'll see a couple of new files over there. Those, I believe, are the diff files.

To recover the archived files, just reverse the command above, and you'll find all your files magically back in ~/backup_directory (after you enter the correct password, that is).

$ duplicity gs://my_backup ~/backup_directory

The next step after the initial backup would be to run this command every hour or so, so that regular backups of the directory keep happening. I haven't tested it yet, but a crontab entry like

0 * * * *  duplicity /home/abhishek/backup_directory gs://my_backup > /dev/null 2>&1

should do the trick.
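One thing to keep in mind: duplicity normally prompts for the GPG passphrase, which won't work under cron. It reads the PASSPHRASE environment variable instead, so the crontab could carry it as well (storing the passphrase in plaintext is a trade-off you have to accept for unattended runs; the paths below are just illustrative):

```
PASSPHRASE=my-backup-passphrase
0 * * * *  duplicity /home/abhishek/backup_directory gs://my_backup > /dev/null 2>&1
```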

If you found this article interesting, make sure you read more about Duplicity on their webpage and in the man pages. There are tonnes of config options. Thank you for reading!

Gentoo Experience

I started using GNU/Linux full time sometime in 2011. Before that, it was all tiny bits here and there, VirtualBox and stuff like that. But then I finally made up my mind to replace the Windows-based OS I was running with Linux. Trust me, it was a very scary decision at the time, and Backtrack 5 was my first full time Linux distro.

From there, it became a goal to try out every major distro. I used Backtrack till version 5r3, when they dropped support. Then I tried Ubuntu, since I had read Backtrack was Ubuntu-based. I switched to Kali Linux, for it was the latest. After that came CentOS 6; I remember using it because I had the same on my DigitalOcean VPS. VPS gone, it was time to switch. I tried Fedora, but GNOME 3 didn't appeal to me, and at this point I was still unaware of what a desktop environment was, or how to change one. Later I switched to Debian 7, which was when I fell in love with it. I attended the launch party of Debian 8 and started using it right away. Debian stable had some really outdated packages, and the GNOME environment didn't appeal to me either, so I installed Debian testing, running XFCE. It was the setup I always needed and is still my daily driver. In the meantime, I tried OpenSUSE too, but nothing great. As you can see, I don't really need a reason to switch distros.

What really provoked me was a sentence I read somewhere: 'if you have not used Slackware or Gentoo, you really have not used Linux'. That was 3 years ago. Since then, I had made multiple attempts at installing Slackware, Gentoo and Arch, but somehow I would mess something up and it would be a failure. You should note here that installing these systems is no easy task, especially when you don't understand the commands you are typing. This time was no different. I downloaded the ISO, which was surprisingly just ~250MB, burnt it to a disk and booted it up.

The scariest part of it all is that there is no 'installer'. You have to manually create the partitions, the swap and the bootloader. Then there is the phase of downloading the 'stage3' tarball to the new root and untarring it, chrooting into it, setting up a few things like the network, downloading the kernel source, compiling it and adding it to the /boot directory, and then the hardest stage: reboot. Hard emotionally, that is. After a minute, you are either greeted by the login prompt, or you realize that the past 8 hours of your life went in vain.

Luckily for me though, it booted up. It was after midnight when it did, and I had been installing it since 2pm. I'm a noob. It is not a lot different from other distros, if you know how to build software. There is this package manager called emerge which is helpful too (it downloads sources and builds them, resolving dependencies). I am currently running it from the command line, as I didn't set up Xorg during installation. It is pretty usable, and the best part is that you actually understand your system. I realized that this is a great way to actually learn GNU/Linux. The thing I have to live with is that since each and every installation compiles from source, it is damn slow: usually 15-30 minutes for most applications. The plus point is that since everything is built for my particular system, the executables are smaller than usual (and I read they are faster too). Cool, right?

I am looking forward to using it every day on my PC, at least for some time till I get really good at it. Maybe then I'll try doing the same on my laptop, which would be nice. For now, I'll have to figure out a way to install Xorg.

Debian 8 Jessie is here

The next major stable release of the Debian lineup, the 8th one, codenamed Jessie, was launched on the 25th of April last month. And like every Debian release, this one is packed with an awfully large package repo, meaning more free stuff to choose from, one of the fundamental reasons I use Debian.

The total package base, with this release, is over 43,000 packages, which is a lot. The key features added in this release make it one of the best Linux distros till date, imo. Not to mention, rock solid.

All the major desktop environments like GNOME, KDE, XFCE and LXDE are supported (no Unity, sad ;). Just select the preferred one during installation and the installer will download it for you. Alternatively, some ISOs are marked with the desktop environment they ship; if not specified explicitly, assume GNOME 3.

If you are already on Debian 7, to get Jessie, fully update the system:

sudo apt-get update

sudo apt-get upgrade

sudo apt-get dist-upgrade

Then just head to /etc/apt/sources.list and replace every occurrence of wheezy with jessie and then again:

sudo apt-get update

sudo apt-get upgrade

That would be it. I noticed that Jessie is super fast, and I intentionally did a minimal install. The bootup and shutdown speeds are very impressive. Here is an amateur video of me filming the shutdown time, LOL.

A note: if you go to the downloads page, there are multiple DVDs; for example, the amd64 complete set comes in around 10 DVDs, but you only need the first one. It contains all the installation stuff and most of the packages you'll ever use, including the desktop environments.

Downloads page: https://www.debian.org/CD/

A tip: you can add the DVD contents to your hard drive and set up an offline repository. Mount the contents of the DVD or copy them to a local folder.

# in case of mounting the iso

mount -t iso9660 -o loop /home/abhishek/Debian/debian-8.0.0-amd64-DVD-1.iso /mount_point

Or just plain copy the iso contents into the folder. Then add the following line at the top of /etc/apt/sources.list

deb file:///mount_point/ jessie main contrib

so that apt will check the local copy of a package first, for offline package installation.

That was it for this promotional article [ 😉 ] on Debian 8. Hope you like the new release and use it.

Thank you developers.

What is GNU/Linux

The following is a piece from my previous blog. I had written it in one of my diaries, and since that blog of mine is no more, I would like to publish it here. My views on GNU/Linux and why it is one of the most amazing things you will ever learn. Here it goes…

What is Linux?


In the early 1990s, Finnish computer science student Linus Torvalds began hacking on Minix, a small Unix-like operating system for PC then used in college OS courses. He decided to improve the main software component underlying Minix, called the kernel, by writing his own.
In late 1991, Torvalds published the first version of this kernel on the Internet and called it Linux, a mix of his own name and Minix.
When Torvalds published Linux, he used GNU's General Public License, which made the software free to use, copy and modify by anyone, provided that the copies and any variations are kept equally free. Torvalds also invited contributions from other programmers. Though these contributions came slowly at first, as the Internet evolved, thousands of hackers and programmers from around the globe contributed to his free software project.
The Linux software developed so quickly that today, Linux is a complete modern OS, which can be used by programmers and non-programmers alike.

What makes Linux so special?


The building blocks of the Linux OS are 'tools'. If you ask for a rough definition: 'A tool is a small piece of code that is designed to perform one and only one task with great precision'. That's it.
The entire Linux concept is based on these little pieces of code. Most operating systems (like Microsoft Windows) have large utilities called applications. These applications can perform a large number of functions or tasks; for example, word processors, presentation designers or web browsers. Along with their main tasks, these applications also perform side tasks like search, replace and spell checking, often found in all applications. The source code for these functions is stored separately for each application (each binary has its own copy of these instructions), hence taking up more space on disk as well as in memory.
These applications are often closed source, meaning they will do your job like magic, but you will never understand what is happening in the background (like what methods are implemented to search, or whether they could be revised to be more efficient). Hence programmers can never learn from, use or build anything on top of them. The end result of this approach is that the same functions inside all these different applications must be built by programmers from scratch, separately and independently each time: a setback to the progress of the society as a whole, and a waste of the countless man-hours and energy that programmers spend coding the same thing again and again.
This is where Linux is special. Most of the tools (or all, for that matter) are open source, so programmers can integrate them straight away, without much effort, to build something that takes the community as a whole forward at a faster pace. Also, most of the time you don't have to spend any time debugging the tools, because it is almost always the case that someone has used them in production before and rectified the bugs, as there always are some. That saves a lot of time for us as developers.
There is also this interesting functionality called 'pipes'. Pipes behave as you would expect from the name: a pipe feeds the output of one tool into the input of another. As you might have guessed already, this lets you create powerful tool chains that do multiple tasks in coordination and produce results no individual tool could have on its own. Just as an alloy can be stronger than the sum of its component metals, multiple tools can combine into something greater than the individual tools; this is the concept of synergy, the basic philosophy behind all GNU/Linux tools.
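A tiny example of what that chaining looks like in practice: four single-purpose tools combined to answer a question none of them can answer alone, here "which line occurs most often?" (the sample input is made up):

```shell
printf 'home\nabout\nhome\ncontact\nhome\n' |
  sort |       # group identical lines next to each other
  uniq -c |    # collapse each group into "count line"
  sort -rn |   # highest count first
  head -n 1    # keep only the winner: "3 home"
```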
These basic tools, improved over the decades, are crafted to do a particular task to perfection, and even if one doesn't fit your requirement, you can always grab the source, edit it accordingly and use it. If you want to keep up with the open source spirit, post it online so it saves some time for someone with the same problem. These small tools can be a powerful weapon in the hands of a Linux expert, or Wizard as we call them.
Note that when I use the term 'Linux', I mean GNU/Linux. The entire thing wouldn't be possible without either one: Linux is the kernel, and the rest of the OS is GNU. More information here: https://www.gnu.org/gnu/linux-and-gnu.html