Tag Archives: diy

Building vs Buying a NAS – A non-hardware nerd’s perspective

After years of contemplating building or buying a NAS, I finally decided to bite the bullet and got myself a Synology DS223j two-bay NAS as a Christmas present, along with two refurbished 16TB Seagate Exos hard drives.

Rewinding a bit

For most of my life, the only “backup” I had was the 500GB hard disk that came preinstalled in the Fujitsu A514, my first laptop. I had also bought a 120GB solid-state drive at the time and immediately swapped the hard disk for it. The hard disk went into an enclosure, and I used it to back up any important data as my laptop went through frequent cycles of OS reinstalls during my distro-hopping era.

Any time I wanted to back up my photos, they would become a folder on the hard disk and then be cleared from phone storage, since phones came with much smaller storage back then. I remember having 16GB of internal storage on my Galaxy Note and thinking I’d never fill it up no matter how many pictures I took.

I used Gmail, and the 15GB of storage the free account comes with was enough for email and the occasional document sharing via Google Drive. Life was good.

At some point I hooked this 500GB drive up to my Raspberry Pi running Nextcloud and even had my very first “NAS” setup.

In 2018, I had some INR 300 of Google Play credit that I used to buy some of the apps I wanted to support and use the pro versions of, like Nova Launcher, Tasker etc. Along with those, I got the 100GB Google storage plan (at INR 130 a month) and started backing up photos to Google Drive. It was an acceptable cost, as I was already working by then and had some money to spare for subscriptions.

I have kept that subscription ever since, and even upgraded it to 200GB after moving to Berlin.

Problem statement

The problem I set out to solve was getting rid of some of the cloud storage subscription fees I pay each month. I was paying duplicate fees between two Google accounts (and an iCloud account), so the plan would get rid of two of the three subscriptions, saving some 50 euros a year, while also letting me actually follow the 3-2-1 backup strategy for the important data I wouldn’t want to lose.

Requirements

Late last December I got a bit obsessed with watching computer build videos. It was fun to learn something new and, to be honest, it was also entertaining. From the videos, I learned some important considerations that I’d have to think through for myself before spending my first euro on this new project.

  1. Number of drive bays: While a NAS can just be a single drive attached to the network, models with two or more drive bays generally also offer redundancy (using RAID configurations). I knew I’d want RAID 1, so I’d need at least two bays.
  2. Idle power consumption: Power is expensive in Germany, and I’d want to stay away from old hardware that’s cheap to buy but very expensive to run. The last thing I’d want is to invest in a NAS and then not keep it running all the time because of the power bill.
  3. NAS application / operating system: Ideally, I’d like some bells and whistles like automatic backups to make my life a little easier, so I don’t have to do that manually. The popular NAS applications come with add-ons that cover more specific use cases. TrueNAS, Synology and others have public pages where one can check whether the application they need is available on the platform they’re choosing.
  4. Gigabit access speeds: I wanted the NAS to be accessible at the same speed as the rest of my home network, which is currently gigabit. Fortunately, that seems to be the minimum one gets in 2023.

Non-requirements

I also learned what other people prioritize in their NAS builds, but it didn’t resonate with me. More specifically, I couldn’t justify the price premium those features command. Namely,

  1. SSD cache: While there are many strategies for using an SSD for read and write caching, it primarily makes read/write operations feel faster for some use cases. I thought this was over-optimizing for my modest needs.
  2. CPU / RAM to run containers / VMs: I also found that folks like to run Docker containers or VMs on their NAS and hence need a more powerful CPU and extra RAM. This hadn’t even crossed my mind, so I stayed true to the “if I didn’t know it exists, I probably don’t need it yet” principle.
  3. Multi-gigabit or multiple LAN ports: Folks opt for 2.5 or even 10 gig LAN ports on their NAS for faster transfer speeds. Now, while I’d looove transfer rates above 120 MB/s, I’d need to upgrade my entire home network to multi-gigabit LAN and WLAN, and that’s expensive. It would also call for a beefier CPU to handle the transfers, which would increase the cost of the NAS hardware itself.
  4. Four or more drive bays: This was the one I was most unsure about. Most if not everyone on Reddit suggested getting at least a four-bay NAS and only populating a couple of the bays until the need arises. It sounds reasonable, but I have a feeling the need for four bays will never arise, and 16TB in RAID 1 is going to be enough for a very long time.

Building vs buying

The nerd in me really wanted to go down the route of buying all the parts and assembling everything myself. The videos I had watched made me confident enough that I could pull it off. Building a PC is fun and, truth be told, I’ve never actually built one for myself.

Having said that, I also recognized that I was mixing two things together: my requirement for a secure and reliable NAS for my critical data, and my desire to build a PC that I can experiment on. I’ve lost important data in the past and I didn’t want to risk it, especially not right after watching a bunch of videos and never actually having built a decent PC myself.

I decided to pick up a Synology two-bay NAS, the DS223j, with 1GB of RAM and a relatively weak but does-the-job quad-core Realtek CPU. The NAS cost 180 euros. To go with it, I got a couple of refurbished 16TB Seagate Exos drives at 180 euros each, for a total cost of 540 euros.

My rationale behind going down this path was as follows:

  1. I don’t trust my PC building and setup skills enough to offload all of my data to a self-built NAS and then cancel a couple of cloud subscriptions.
  2. At 4-5 watts idle (hibernation) and 16 watts under load, its power consumption is comparable to a relatively efficient mini PC.
  3. Hardware isn’t cheap in Germany, and actually saving money on it would require sourcing parts off AliExpress, which takes time and is often less than reliable.
  4. Building a PC and setting up the necessary software takes time. I could use that time on another project (home server blog post coming soon :D).
  5. Synology’s DSM (their proprietary operating system) comes with remote access via QuickConnect out of the box.

Tradeoffs I’ve ((sub)consciously) made

I feel like I’ve given up on some of my preferences by going down the Synology NAS route. While some of these tradeoffs were conscious, others will only show up once enough time has passed. In any case, I’m documenting some of them here.

  1. Open source and free software: Synology DSM only runs on Synology hardware, at least officially. That means I’m stuck using Synology software for as long as I’m using this hardware.
  2. Synology tax: Like I mentioned, buying an off-the-shelf NAS means paying for the company’s software and marketing budget on top of the actual hardware. That’s fair, of course. Just something to be aware of.
  3. NAS hardware: I’m sure my “16TB ought to be enough for me” assumption might not age well. I’ve also hit situations where the NAS really slowed down and reminded me that it is running a very under-powered CPU after all. Again, tradeoffs.
  4. Noise: Seagate Exos drives aren’t the most silent hard drives. My wooden floor made it much worse, and I ended up putting the NAS on a softer raised surface to absorb some of the vibrations. I don’t think this problem would have completely vanished had I chosen non-enterprise drives like the Seagate IronWolf, but it would’ve been better.
  5. Electricity costs: While my Synology NAS is efficient and idles at single-digit watts most of the time, it will still add 30-40 euros to my yearly electricity bill (a 10 W average drawn around the clock is roughly 88 kWh a year, at 40 cents per kWh).
  6. Breakeven cost: From a purely utility perspective, it will take some years of not paying for Google Drive and iCloud before the investment in a personal NAS breaks even (rough math below). This is especially true if I never end up filling the 16TB and regularly accessing those files. That isn’t factoring in the added electricity cost, nor can I really compare the reliability of Google Drive to a NAS set up at home. Overall, for most people who don’t need to store terabytes of data, the cloud is a very logical option.
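
Rough math on that breakeven point: the NAS and drives cost 540 euros in total, and dropping the two subscriptions saves roughly 50 euros a year, so even before electricity it is on the order of a decade before the hardware pays for itself.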

In closing

I hope that was informative in some way. I am looking forward to seeing how my investment turns out and if the list of tradeoffs grows further. I’ll leave you with some pictures of the NAS.

Fresh out of the box with its accessories
Initial setup in my “Lack rack”. It has since been replaced but wait for an update on that.

Thank you for reading!

Tesla Coil Speaker

A long, long time ago I ordered a Tesla coil speaker kit from Banggood in a moment of sheer impulse. I thought it would be a good DIY project to work on, and it was less than 4 euros at the time. But since there’s nothing like the India Direct Mail shipping here in Germany, the delivery took a month. I was very skeptical it would ever make it, given that it was a super-cheap, super-fragile little DIY kit. But it did.

Upon receiving it, I soldered the kit together with the soldering station I had received as a birthday present, and headed out to a hobby electronics store to get a variable power supply. It turned out that even the simplest ones cost more than EUR 40 in brick-and-mortar stores here in Berlin. I wasn’t willing to spend that much to power this little kit. I had to go back to Banggood.

Not wasting any more time, I ordered a 12-24V 4.5A supply from Banggood for less than EUR 10. Knowing well that it would take more than a month this time, on account of the holidays in between, I forgot about it. That was until today, when it arrived at my workplace. Full of excitement, I got home and powered this project up! Here’s a glimpse from the testing.

To go along with this project, I have an FM radio module that I’ll hook up to this Tesla coil speaker. Hopefully, after figuring out a way to cool the speaker, I’ll be able to run it for extended periods of time.

Thank you for reading!

Reusing An Old Laptop’s LCD Panel

A month ago, my manager from LaughGuru gave me his old non-functional laptop. It was a six-year-old Dell Inspiron 15R 5520 notebook with 2GB of RAM, a third-gen Intel i5 and a 500GB hard disk, and it weighed almost as much as Misty and my new ThinkPad combined. I couldn’t get the motherboard to boot up, so I decided to take it apart (and given the condition of the laptop, it made little sense to repair it).

I found some really useful things inside the laptop. The 2GB RAM stick and the 500GB hard disk now sit in another water-damaged laptop that a friend gave me a week ago, and they power it flawlessly. The CD drive will come in handy someday as an external disk drive. I planned to use the display as an external monitor for my laptop, and that turned out to be a DIY project in itself.

Enter LVDS Connectors

So I isolated the LCD and looked very carefully for any hints about the connector dangling from behind the panel. It had the word LVDS on it. Some Internet research later, things became clear. As alien as it sounds, LVDS, or Low-Voltage Differential Signaling, is a standard used for high-speed transfers at very low power. For me, it simply meant that there was no straightforward way of plugging the HDMI or VGA cable from my laptop into the bare LCD panel and using it.

Unfortunately, there’s also no easy way of using the laptop’s motherboard logic to drive the LCD panel. Searching for a solution made it clear that an LCD controller kit is what I needed. It is important to know the exact spec of the LCD panel, as each kit is only compatible with a small range of panels. That might be difficult if the LCD never worked while in your possession, as in my case, but a nice website called Panelook makes it easy to get the detailed spec of the panel from just its model number. The things to look at are the resolution and the backlight type. The resolution needs to be an exact match, and the backlight type determines whether you’ll need an inverter with your controller kit. Mine was a WLED panel, so no external power source or inverter was needed. The next step is searching for that model number on eBay and other local hobby sites. I found a nice kit on Banggood and decided to order it.

Putting It All Together

Interfacing was simple, and there are nice videos on the topic on YouTube. There’s the LVDS connector that goes into the LCD panel. Then there’s the controls board that plugs into the main board. The controls board also has an IR receiver for a remote, which can be used to adjust the brightness, contrast, sharpness etc. of the panel.

The board itself supports input through an AV connector, HDMI, and VGA. It runs on 12 volts at up to 4 amps, and luckily I had a 5 amp supply with me, so no extra expenditure there. I did have to borrow a VGA cable from a friend though, as my laptop only has VGA and no HDMI.

I used an old Tupperware tiffin box as the LCD’s stand, to keep the delicate LVDS cables safe and away from physical contact with anything. To avoid physical damage to the panel, I used the laptop’s stock display cover (the top half) as it was. The added benefit of the stock plastic cover was that I could drill holes and fix the controller board to the back of the panel, like an all-in-one PC (I could’ve literally made it an all-in-one PC by docking my Raspberry Pi back there as well, haha). Overall, I’m very happy with the result, and as I write this, I have my editor on the primary screen and the browser on the extended display. Dual monitors at home: achievement unlocked!




Hope you found this article useful. Thank you for reading!

Banggood’s India Direct Mail Shipping

Here’s some great news for all you hobby electronics enthusiasts who drool at the sight of cheap stuff on Banggood and AliExpress, but give up on buying once you remember how long shipping takes. I’m not a hobby electronics person, but after shopping three or four times in the past year through sites like AliExpress and Banggood, I was convinced that the wait is definitely not worth the discount, because the shipping time is usually around 40-50 days. Yes, it took nearly two months for my speaker amplifier to reach me. And some other orders never arrived at all.

But it seems like those days are in the past, because two weeks ago, when I reluctantly surfed Banggood for an LCD controller board that I couldn’t find on eBay or other Indian sites, I saw a shipping method called ‘India Direct Mail’ (not really new, it has been around since the last quarter of 2017), and it promised delivery in 8-16 days at almost no additional cost.

I found it hard to believe, but decided to take the risk and order, as I didn’t have much to lose (except some 1200 rupees). It turned out to be true. Today, I received my LCD controller board, just 12 days after ordering. I feel that’s reasonable, especially given that you won’t get the board for less than double that price here in India, if you can get it at all. This is great stuff and I’ll definitely be using it more in the future.

Encrypted Backups Using Duplicity & Google Cloud Storage

I came across a utility called Duplicity a couple of days ago and found it very useful. I’ll use this little post to write about what it is and how it can be used to create encrypted backups of not-so-frequently accessed data on servers that you do not trust. If you only care about the tool, jump to the Usage section.

Use Case

I was working on a little project that involved setting up RAID 1 using some spare 500 gig drives and a Raspberry Pi as the RAID controller. My problem was that I only had two drives, say A and B, and I needed to back up the data on one of them (say A) before setting up the RAID, in case something broke. I tried to use a friend’s external hard disk but couldn’t get it to work. Finally, I did find another hard disk (say C) in the ‘trash’ laptop a friend of mine gave me. So now I could get the data off the primary disk (A) onto this new disk (C), and I did manage to do that. But this exposed a deeper question: how am I supposed to take care of around 400 GB of backup data, and is having a single copy of it even safe? Of course not.

If you remember my NextCloud setup on DigitalOcean, that worked great as a lightweight cloud for easy backups of the current batch of camera pictures, and maybe contacts and such. But when it comes to archiving data, DigitalOcean gets a bit too expensive (~USD 10 for 100GB and so on with their block storage). Their object store, however, is relatively cheap and costs around USD 5 for 250GB of data per month. I needed something like that.

The second problem was upload speed. I have a modest 2.5Mbps connection, so uploading that much data over the open Internet was out of the question. But since I had Google Peering, I could make use of that. I looked up Google Cloud’s offerings and found Google Cloud Storage, which is an AWS-S3-like object store. They also offer USD 300 in free credits for the first year, so I decided to test it out. It worked great, and I got transfer speeds in excess of 24Mbps, which is what Google Peering gets me.

The last hurdle was finding an automated way to manage the sync. I was already a huge fan of rsync and was looking for a way to do incremental backups (although not strictly required for archiving) with some sort of encryption as well. That is where Duplicity came in.

Enter Duplicity

Duplicity is a tool that uses the rsync algorithm for incremental, bandwidth-efficient backups that are compressed into tarballs and encrypted with GPG. The tool supports a wide range of servers and protocols, and will work with anything from AWS S3 and Google Drive/Cloud to WebDAV or plain SSH. In typical *nix fashion, it does one little thing well, and the use cases are limited only by your creativity.

Usage

The setup requires you to install the Google Cloud SDK and Python 2’s Boto, which serves as the interface to Google Cloud Storage. Then go ahead, sign up with Google Cloud if you haven’t already, and set up a new cloud storage project and a bucket (say my_backup). Make sure you enable Interoperable Access in /project/settings/interoperability and create a new access-key/secret-key pair. You’ll need to copy these into the ~/.boto config file as gs_access_key_id and gs_secret_access_key respectively.
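
For reference, the relevant section of the ~/.boto file would look something like this (the keys below are placeholders, not real credentials):

    [Credentials]
    gs_access_key_id = GOOG1EXAMPLEACCESSKEY
    gs_secret_access_key = exampleSecretAccessKey1234567890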

The last step is to test-drive the entire setup. Create a directory with a couple of sample files (perhaps a slightly large video file as well), and run duplicity with the following command.

$ duplicity ~/backup_directory gs://my_backup

Enter a passphrase when prompted. If all goes well, you’ll see some transfer stats after the command finishes (the time taken depends on your transfer speed etc.), and in your GC console, under your storage project’s browser, you’ll see something like the following image (of course, not a hundred difftars like mine; I’m in the process of backing up my archive).

If you do, then congrats! Those are the encrypted tar volumes of your data. Now, just to be sure, make a little change in one of the files and run the command again. You’ll see that this time it takes almost no time at all, and if you refresh the GC Storage browser, you’ll see a couple of new files there. Those, I believe, are the diff volumes.



To recover the archived files, just reverse the command above, and all your files will magically appear in ~/backup_directory (after you enter the correct passphrase, that is).

$ duplicity gs://my_backup ~/backup_directory

The next step after the initial backup would be to run this command every hour or so, so that regular backups of the directory keep happening. I haven’t tested it yet, but something like

0 * * * *  duplicity /home/abhishek/backup_dir gs://backup_dir >> /dev/null

should do the trick.
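
One caveat I’d expect with cron: it can’t type the GPG passphrase for you, and duplicity reads it from the PASSPHRASE environment variable when run non-interactively. So the unattended version would probably need to look more like

0 * * * *  PASSPHRASE="my-secret-passphrase" duplicity /home/abhishek/backup_dir gs://backup_dir >> /dev/null 2>&1

with the obvious downside that the passphrase then sits in the crontab in plain text.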

If you found this article interesting, make sure you read up more about Duplicity on their webpage and in the man pages. There are tonnes of config options. Thank you for reading!

Tinkering With OBD-II Port

I’ve been seeing people hook up their computers to their cars for quite some time. It’s a common sight if you watch any motorsport event on television, where technicians are seen working on laptops connected via a cable to the car or bike. I found it quite fascinating. “What interesting tweaks must they be making to that machine with that computer!” I thought. The idea of tweaking a machine to improve its characteristics wasn’t new to me; overclocking is nothing new. But since it was always professionals I saw doing it, I assumed there was no way for such an interface to exist on our everyday road vehicles.

And I was wrong. I discovered that, by law, all cars are required to have a diagnostics port, called the On-Board Diagnostics port. The latest revision of that standard is v2, or OBD-II, and all cars manufactured after 1996 should have one. Also, the automotive YouTubers I followed would sometimes show various stats on screen, such as engine RPM, throttle position, boost pressure etc. That implied there exists a way to extract those stats from the vehicle’s ECU. Interesting. A quick Google search for “obd scanners” revealed that they’re not very expensive either (with cheap clones available for as low as INR 300, USD 5, or even less). After researching a bit, I learned that loads of data comes out of that little adapter, and that great Android applications (like Torque and DashCommand) exist which turn the data into beautiful dials and graphs (like the ones on the Nissan GTR ♥). I was awestruck. What more can a nerd ask for!

All this happened a couple of months ago. I knew I needed to get one of those. I waited a couple of months and finally ordered one earlier this month. The first challenge was finding the OBD port. Unlike in some other cars, Zacky’s OBD port is hidden behind the fuse box cover, and the adapter had to go in there. I managed to reach the port without opening the fuse box, and problem solved! I plugged in the adapter, paired it with my phone, and it started sending data. That was one of the best feelings ever!

Some of the data it sent that I found particularly interesting to read included:

  1. Boost pressure from the turbocharger
  2. Engine RPM
  3. Coolant temperature
  4. Engine load
  5. Error codes and provision to reset them
  6. Horsepower, torque, acceleration and other such “calculated” data, derived by combining the sensor data with the phone’s own sensors (like GPS and accelerometer) and known parameters (like vehicle weight, engine displacement etc.)
  7. and loads of other cool stuff

Note that the list of available sensors varies from manufacturer to manufacturer, so keep that in mind. But even with the most basic set, the experience is fun. It’s like opening Task Manager on your computer for the first time. Wow, so I can actually run this h4ck3r stuff, right?

Interesting Learnings

– Negative boost pressure: When you start the car and drive it normally, you’ll notice that the boost pressure gauge reads negative (technically not pressure but vacuum). Only when driving hard (shifting late, for example) will you notice the boost pressure rising. I thought it was erroneous data from the sensor, so I read up a bit. Turns out that at high RPM the turbo forces the air-fuel mixture into the cylinders. But what happens when the turbo is spinning too slowly to compress air? The engine simply works as a naturally aspirated one and sucks in air during the intake stroke. THAT sucking part explains the vacuum. Cool!

– Driving modes: Zacky features this thing called driving modes. Putting her in “Sport” makes the throttle more responsive but reduces fuel economy, while putting her in “Eco” does the exact opposite. Now, I could’ve told you that this isn’t just marketing, and if you test it out you can feel a noticeable difference, but that was all I knew. After driving for a while with the boost pressure gauge in front of me, I made a little observation: in normal drive mode, the turbo does not spool over 4-6 psi of boost. But as soon as I go “Sport”, the turbo goes well over 10 psi, even 12 if the sensor is to be believed, which is pretty fantastic.

– A better understanding of the relationship between torque and horsepower, and what each number actually implies. Yes, power is work done per unit time, but what does that actually feel like? Why do diesels have unremarkable horsepower figures despite having loads of torque? It gets really clear once you see the torque, the RPM and the (thus calculated) horsepower figures side by side.
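
(For the curious, the “thus calculated” part is just power = torque × angular speed. In the usual units that works out to horsepower ≈ torque in lb-ft × RPM / 5252, or kW ≈ torque in Nm × RPM / 9549, which is presumably what the app is doing with the two numbers it already has.)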

– Torque curve: So there’s this thing called the torque curve of an engine, which is just a plot of torque against RPM. For an IC engine, the torque is not flat (as it roughly is for electric motors) but a curve that peaks at some specific RPM (or RPM range, which is why a torque or horsepower figure is always accompanied by an RPM range) and tapers off at both ends. To get the maximum acceleration, you have to keep this curve in mind when changing gears.

Now show me some kode!

Yeah, right. So while I was at all of that, I thought: why not study the protocol itself and try writing a little script to pull the raw sensor data out, just for fun? Right, but how? This thing runs on Bluetooth, and how do you sniff that? Is there something like Wireshark for Bluetooth? Googling “Wireshark for Bluetooth” reveals that Wireshark is the Wireshark for Bluetooth. Damn!

But before Wireshark could sniff anything, I needed to get the thing connected to my laptop. That’s pretty straightforward. After having it bound to /dev/rfcomm0, fire up Wireshark and keep it listening on the Bluetooth interface.
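
In case you’re wondering what “having it bound to /dev/rfcomm0” looks like, it’s essentially binding the adapter’s Bluetooth MAC address to a serial device with rfcomm; the address and channel below are placeholders:

$ sudo rfcomm bind /dev/rfcomm0 AA:BB:CC:11:22:33 1

Once that’s bound, Wireshark should list a Bluetooth capture interface to listen on.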

Okay, pause. Here’s the funny part: the text above was written some four months ago. Back then, I had to do a lot of physical work to take my laptop out to Zacky and do all the research/coding from there. I remember going out at least three times but, for some weird reason, never bothered to finish writing this article. I’m putting it out right now so that I remember to write part II during the next weekend. Stay tuned.

Private Cloud Part 2 | Encrypted Storage With NextCloud

New cloud setup. YAAY! Self-hosted, encrypted and scalable. Plus, it comes with a nice web interface, native Linux and Android clients and its very own app store. I’ll first write about the setup itself, and then share some personal thoughts on the entire private cloud exercise.

Features Overview

The major components of the setup include the following

  • NextCloud 11 on Ubuntu, using DigitalOcean’s one-click installer on a 5 USD cloud VPS
  • DigitalOcean’s flexible block storage
  • Let’s Encrypt for free TLS
  • The NextCloud sync clients for Arch and Android, on desktop and phone respectively, for data sync
  • DavDroid for contacts and calendar sync on Android (via CalDAV/CardDAV)
  • Optional redundant backups and client-side encryption using GnuPG (see below)

Pros Vs Cons

So I now have a proper private cloud: self-hosted, synced across mobile and desktop (including contacts, messages and calendar), optionally client-side encrypted, and scalable (♥DigitalOcean♥). What’s amazing is that I never had a native Google Drive client on the desktop, but now I have a native NextCloud client, and it just works. And no, it isn’t all sunshine and rainbows. There are some serious trade-offs, which I should mention at this point to keep things fair.

  • No Google Peering, hence backing up media is going to be a struggle on slow connections
  • Google’s cloud is without a doubt more securely managed and reliable than my VPS
  • Integration with Android is not as seamless as it was with Google’s apps; sync is almost always delayed (by 10 minutes. Yes, I’m an impatient (read ‘spoiled’) Google user)
  • Server maintenance is now my responsibility. Not a huge deal, but something to keep in mind

Having said that, most of it is just a matter of getting familiar with the new set of tools in the arsenal. I’ve tried to keep most things minimal: use a few widely adopted technologies, keep them regularly updated, stick to best practices and disable any unwanted, potentially dangerous defaults. With that, the server is secure against most adversaries. Let’s first define what “secure” means in the current context using a threat model.

Threat Model

The only thing worse than no security is a false sense of security

Instead of securing everything in an ad hoc fashion, I’m using this explicitly defined threat model, which helps me prioritize which assets to secure and to what degree, and more importantly, which threats I’m NOT secure against.

  • Compromised end device (laptop): Since data is present unencrypted on my end, an adversary with access to my computer (via, say, an SSH backdoor) can easily get to all of my (unencrypted) data. The private keys cannot be compromised directly, as they are password protected, but a keylogger might sniff out my password, which could then be used to decrypt any encrypted data.
  • Compromised end device (mobile phone): Since data cannot be decrypted on the mobile, all encrypted data remains secure. Only the unencrypted files would be compromised. However, if an adversary gets access to my unlocked cell phone, securing cloud data would be the least of my worries.
  • Man in the middle (MITM): As long as Let’s Encrypt does its job, TLS should be enough to secure the data against most adversaries eavesdropping on my network. It would not protect me if Let’s Encrypt (or any other CA) were compromised and an adversary minted duplicate certificates for my domain to eavesdrop on the traffic, but the possibility of that is rare.
  • Server compromise: If the server is compromised through any server-side vulnerability (assume root access) and an attacker gets access to everything on it, all unencrypted files are compromised, which includes the contacts/calendar lists. Since the decryption key is never transmitted to the server, the encrypted files won’t be compromised.

Why Client Side Encryption

The entire exercise would look pretty pointless if I just took all my data from G Drive and pushed it to NextCloud. And from my previous cloud server attempt, I know how uncomfortable it is to have your data accessible from the network all the time. Those reasons were more than enough for me to go for an encrypted cloud solution. It would admittedly still look pointless if you asked me why I didn’t just encrypt the data and upload it to G Drive again. The answer is simply that I didn’t want to.

After some research (being a novice with security, that was a must), I came up with a list of guidelines to base my solution on.

  • Use symmetric key cryptography for file encryption, particularly AES-128
  • Either memorize the AES key, or use public key cryptography to store the file en/decryption key on disk (not sure which is the proper way of doing it, though I’ve asked the experts for help)

Encryption

There are a lot of tools one can use for data encryption. I used Gnu’s Privacy Guard (GnuPG, or simply GPG). It is anything but easy to use, but the nice part is that it just works, is extensively reviewed by experts and has been around since I was four years old. So, in theory:

  • Generate a public/private key pair in GPG
  • Generate a strong passphrase for the encryption, encrypt it using the public key you just generated, and store it locally someplace secure
  • Get a list of all files and directories under a specific folder using find (for one-time backups), or use rsync with a local sync copy (for incremental backups)
  • Iterate over the list (of all or changed files). If an item is a directory, create that directory; if it is a file, encrypt it and push it into that directory
  • After encryption, you’re left with either two or three directories: /original-dir, /remote-encrypted and, optionally, /local-unencrypted-sync
  • The additional (local sync) directory is useful when incremental backups are required: rsync uses it to keep track of changes and only (re)encrypts the files that have been added or changed since the last sync. Useful for setting up a cron job. At this point, you can safely delete the files in /original-dir
  • Decryption is just the opposite of this: you supply the location of your /remote-encrypted directory, and the script generates a new directory with the unencrypted content. A rough sketch of the encryption side follows below
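
In shell terms, the encryption side boils down to something like this minimal sketch (the paths and the key ID are placeholders, and this is not my actual script):

    #!/bin/bash
    # Minimal sketch of the encrypt-and-mirror idea described above.
    # SRC/DST paths and the GPG key ID are placeholders; adapt before use.
    SRC="$HOME/original-dir"
    DST="$HOME/remote-encrypted"
    KEYID="YOUR_KEY_ID"

    cd "$SRC" || exit 1

    # Recreate the directory tree on the encrypted side
    find . -type d | while read -r d; do
        mkdir -p "$DST/$d"
    done

    # Encrypt every file to the public key, mirroring its relative path
    find . -type f | while read -r f; do
        gpg --encrypt --recipient "$KEYID" --output "$DST/$f.gpg" "$f"
    done

Decryption is the same loop in reverse, with gpg --decrypt writing into a freshly created directory.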


Original directory


Encrypted backup directory

This does the job for now. Here’s the script that I’m currently using. I wanted to enable sync without the need for a helper directory, just like Git does (it stores the changes in a .git/ directory inside the same directory). I’ll update this post if I manage to get that done.

In Closing

Eighteen months ago, I wrote about creating a ‘cloud’ storage solution with the Raspberry Pi and the half-terabyte hard disk that I had with me. Although it worked well (now that I think about it, it wasn’t really a cloud, just storage attached to a computer accessible over the network. Wait, isn’t that a cloud? Damn these terms.), I was reluctant to keep my primary backup disk connected to the network all the time, powered by the tiny Pi, and hence I didn’t use it as much as I had expected. So what I did then was what any sane person would’ve done in the first place: connect the disk to the computer with a USB cable for file transfers and backups.

Earlier this year, I switched ISPs and got this new thing called Google Peering, which let me efficiently back up all my data to the real ‘cloud’ (Google Drive). That worked, and it was effortless and maintenance-free. And although Google doesn’t have a native Linux client yet, the web client was good enough for most things.

And that was the hardest thing to let go. Sync and automatic backups were, for me, the most useful part of having Google around. And while everything else was easy to replace, the convenience of Drive is something I’m still looking for in other open source solutions, something I even mentioned in my previous post on privacy.

So although I now have this good-enough cloud solution, it definitely isn’t for everyone. The logical solution for most people (and me) would be to encrypt the data and back it up to Google Drive, Dropbox or the like. I haven’t tried it, but Mega.nz gives 50GB of free-tier end-to-end encrypted storage. Ultimately, it makes much more sense to use a third-party provider than to do it all yourself, but then again, where’s the fun in that! Thank you for reading.

Fun With Infrared Motion Sensor

Very few of the projects I’ve done actually had any real-life purpose; the rest were just “Can I do it?” projects. Here’s one more for that little list. Last year, the fan switch in my room changed its position, from a spot within one arm’s distance behind my desktop computer table to a spot where I had to walk three steps, climb onto my bed and walk another three steps to reach. Too much effort for something that needs to be done tens of times a day. Naturally, I didn’t bother to turn the fan off whenever I left the room, and at times it stayed running long after I had left, until Mom or Dad noticed and turned it off, just before giving me a nice stare. Something had to be done.

I had this PIR module that I had bought a couple of years ago with a friend. A PIR (Passive Infrared) sensor triggers when a warm body, like a human being, comes into its range. I had to make use of it somehow to control the fan in the room. I had to Google just about everything for this project, and slowly discovered that you cannot just put AC mains and DC circuits on the same breadboard and expect them to be nice to each other. As a result, the relay was discovered. (The one I used is a more compact, cheaper version.)

So: a relay to act as a switch to turn the fan on and off, a PIR sensor to detect me entering the room, and something in between to interface the two. An Arduino should do it. Next was to code the Arduino so that, on receiving a True from the PIR, it would close its output, activating the relay to close the fan’s switch, and stay that way for a couple of minutes. I didn’t want it going on and off every 3 seconds, hence the timeout.

Next was mounting all of it together. A breadboard would’ve been unsafe, as per the Internet, hence I bought a perfboard for the purpose. I soldered everything to its place, and the final result was something like the following. Oh but first, some code.

The code which was shamelessly ripped off Arduino.cc

Here is how it looked [Heavy images. Patience!]



My desk that beautiful day!

Now Aditya would’ve told you that this project is what electronics people do when they’re like 5. Nevertheless, I was too proud of this. Not just because I could do it, but because I needed it.

So did it work? No. The range of the PIR sensor was a bit too small for the entire room, and I had to literally dance in front of it to trigger it. The solution? Multiple PIRs to cover the entire room. I also realized how easily I could add the tubelight to the same circuit: just add another relay, connect it to an LDR sensor and set it to trigger when the daylight falls below a certain threshold.

Suddenly, automating stuff in the room seemed a bit too simple, and it really is, even for an electronics novice like myself. The Internet gives you that power. As always, thank you for reading, and pardon any silly technical mistakes that I must’ve made in the post (even better, correct me ;).

Private Local Cloud Storage Using Raspberrypi – How To

Today we’ll see how you can home-brew a cloud storage solution (in the local sense) that is free to use, quite a bit faster than an Internet-based one, secure enough against intrusion from outside the network, and customizable.

But why go to such lengths when you can easily create a Google account and get 15 gigs of free storage? Well, first of all, the data we generate is increasing significantly each day. We have multiple devices with us, most of them with ~16-64 GB of storage, which is not at all enough. And while our notebooks are getting faster with solid state drives, those are still too costly to use for all of our needs, like storing tonnes of movies and music videos, that is, if you are still left with space after cramming your disk with camera pictures. If you opt for a premium account at Dropbox or Google Drive, it will easily cost you ~$100 a year, recurring, a cost which could instead get you a 2 TB WD external hard disk.

Then there is the speed issue. At least here in India, we are deprived of Internet connections faster than 2-4 Mbps. Most of the time it is even less than that. Even if we considered backing up to an online cloud storage service, the bandwidth prevents us from efficiently using what already exists for free. When using a local cloud, the bandwidth is only throttled by your own equipment, and most of the time you can easily get ~40-60 Mbps, which is fine.

The last issue, depending on how you see it, is either the most or the least important: security. If the files are going to be random movies and music videos, you might not worry much about some hacker breaking into your cloud storage provider and downloading them. On the other hand, if the files contain any kind of sensitive, personally identifiable information, then you would. Having said that, I would always choose a secure storage solution over an insecure one if given the option, even if the data was not at all sensitive.

Things you’ll need

Now that we’ve discussed some merits and demerits, let’s talk about building the thing. The things you’ll need are:

  • Raspberry Pi (with all its setup accessories), with an Ethernet port
  • Hard disk, any capacity, with a SATA-to-USB converter
  • Wireless router
  • Ethernet cable or WiFi adapter
  • USB power hub (in some cases)

Setting up the hardware

  1. Connect the hard disk to the Raspberrypi
  2. Boot it up and login via ssh
  3. Run sudo fdisk -l and make sure the hard disk is shown. Note the device name (/dev/sdb or similar)
  4. If it isn’t showing, try powering the disk through the USB power hub
  5. If it is showing, we’ll have to make sure it mounts to the same location each time we boot up.
  6. Create a folder for the mount point. I’ll be using /var/www
  7. It is advisable to use a separate low-privileged user for this, since we will be changing the user’s home directory later on.
  8. Run sudo chmod 775 /var/www and sudo chown your_username /var/www to set the permissions for reading, writing and executing.

  9. Run sudo blkid and note the UUID of the external hard disk. Copy it.

  10. Now we need to make the mounting occur each time we boot the pi up. Open the fstab file by
    sudo nano /etc/fstab

    and add the following line

    UUID="3b28d90f-8805-4ec4-978d-c53ee397a924" /var/www ext4 defaults,errors=remount-ro 0 1

    by editing the UUID, mount location and file system to match yours and keeping everything else the same.

  11. Reboot the Pi, and /var/www should now point to the external hard disk. If so, you are done with this part of the tutorial; if not, check what you missed. Also make sure you are able to read and write files in that directory from your user account (a quick check follows after this list). If not, recheck the steps, Google for solutions or comment for help.
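
A quick sanity check I like to run after the reboot (not strictly part of the setup):

    $ df -h /var/www
    $ touch /var/www/test && rm /var/www/test

If the first shows the external disk mounted at /var/www and the second completes without errors, the mount and the permissions are both in order.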

Setting up the FTP server

  1. sudo apt-get install vsftpd

    to install the vsFTPd server.

  2. Open the vsftp configuration file by
    sudo nano /etc/vsftpd.conf
  3. There will be a lot of options. Just go through them and make sure the following lines are present and not commented out. If they aren’t, add them.

    	anonymous_enable=NO
    	local_enable=YES
    	write_enable=YES
    	chroot_local_user=YES
    	force_dot_files=YES
    	local_root=/var/www
    	allow_writable_chroot=YES
    
  4. After saving (Ctrl + x and then y) and exiting, restart vsftpd by
    sudo service vsftpd restart
  5. Lastly, change the user’s home directory to the FTP root, so that you land directly in the FTP server’s root when logging in with an FTP client.
    sudo usermod --home /var/www/ your_username
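
Before pointing any graphical clients at it, a quick test from another machine on the network confirms the server side works (the IP below is a placeholder for your Pi’s address):

$ ftp 192.168.1.42

Log in with your_username and you should land directly in /var/www.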

If all went well, we have a 100% working local cloud storage running off our Pi. Now, since not everyone likes logging in with a terminal every time they want to access the cloud, I made some customizations so that even my mom and dad can use it easily.

On the desktop, download and install filezilla.

sudo apt-get install filezilla

should do it on deb derivatives. Create a launcher icon that triggers the command

filezilla sftp://myUsername:myPassword@myIP:myPort/my/root

which in my case became

filezilla sftp://abhishek:[email protected]:22/var/www



On our mobile phones (we have Droids, three of us), I used the ‘add FTP server’ option in ES File Explorer and created a shortcut on the home screen with the widgets menu. Hence, accessing the cloud is no more troublesome than accessing a local folder on the phone.



Now I have my very own secure, high-speed cloud storage solution for all my devices and also for the family. It is really convenient, and after building a custom case for the thing, it looks pretty badass.

What do you think?

Fastboot Horror

This was the second day entirely wasted trying to get my external hard drive to work with the USB 3 ports on my laptop. It just refused to be detected. It worked fine on the USB 2 port, but just didn’t read on the 3s. I initially thought it was a Thunar issue on XFCE, but there simply wasn’t any drive in the output of fdisk -l. I read dmesg multiple times, and this line was there consistently:

[sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK

I googled it and read every thread on the first page of the search results, literally. I had started to doubt the USB 3 ports on the notebook. I am on Debian testing and thought something had broken at the kernel level. I immediately started downloading openSUSE to see if it really was a kernel bug, because I am not brave enough to switch kernels. Anyway, I thought, let me check whether the BIOS is reading the drive, and boy, what do I see: the BIOS is just not recognizing the drive. Now I began to panic. It really looked like a hardware issue.

In between my googling, I came across a page that provided some information. Some good soul had asked about the same problem on his Windows 8.1 laptop, and there was an accepted answer with these simple steps: go to the BIOS, find Fast Boot, disable it. Aha! I said. How did I not think of that myself? Did it, and the drive was working again, like it should have been all along. Fast Boot does save a second each time I turn on my notebook, but this time it cost me two full days. Lesson learnt. When the guys on elite forums say Fast Boot will prevent some hardware from being detected and tested at boot, they aren’t just putting a nominal warning on the door; that thing is real. “Want to make your PC boot faster? Enable Fast Boot.” No thanks.