self.li - note to self and share with others, blog by Peter Legierski

Peter Legierski

/pronounced as leg-year-ski/

Occasional workaholic, regular teaholic. Lead Developer of GatherContent. Fat, powdered nose, indoor climber!

Currently working on my personal projects Substance and phpconsole. Check them out!

Creating a post-installation script for Ubuntu

I want to share with you today how to create your own bash script that can be used to bootstrap a fresh installation of Ubuntu (and very likely any other Linux distro after small modifications), bringing it very close to a state where you can just open your favourite apps and start working.

Back in the days of Windows XP I’d create a perfect setup of my machine and use software like Norton Ghost to create an image of the main partition. It had several advantages over my current approach:

  • no access to the internet required
  • fully automated process
  • every tiny detail saved

Here’s the thing, though: with Windows I’d have to reinstall every 2-4 weeks due to my heavy usage, the constant installing and uninstalling of apps, and the general sluggishness of the system after a while. With Linux, on the other hand, I can go for months on the same install and I rarely run into problems that force me to reinstall the system. I can, however, put it on a different disk or partition, move it to a completely different machine, or even use it for a different user, tweaking the username and adding/deleting sections of the script to match the new environment. Another great thing is that the script and all the resources it uses are tiny compared to a disk image. There is also no need to update any apps (or the system itself) after running the script, as it already pulls in the newest versions available and runs a system update.

The script is separated roughly into 3 parts:

  • install apps
  • configure apps
  • change system settings
    • with gsettings
    • with dconf

I’ve developed this script using Ubuntu 13.10 in Virtualbox 4.3 - you can create a snapshot of the system right after the basic installation (+ initial update & upgrade commands) and revert to it every time you want to run your code.

Part 1: Install apps

Some apps will require additional repositories, which should be added to the very top of your script. After that you can run update & upgrade, which will bring your fresh install up to speed with the latest versions of everything installed by default.

The way I went about the apps was to go through my ~/.bash_history file and make a list of all apps that I’d like to have from the get go. I’ve added them all to one massive apt-get install.
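If you want a head start on that list, a rough one-liner like this (my sketch - the output will include stray flags like -y that you’ll want to prune by hand) digs the package names out of the history:

grep -oP 'apt-get install \K.*' ~/.bash_history | tr ' ' '\n' | sort -u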

There are two more apps I want to install that can’t be installed via apt-get, as they are essentially PHAR files: Composer and Laravel. These are kept in /usr/local/bin/. Don’t forget to change their permissions and owner.

At the very end I have ubuntu-restricted-extras, which requires interaction.

Part 2: Configure apps

There are generally several things to do, depending on the app and your personal preference:

  • replace existing conf files / dotfiles with the ones inside data folder
  • append settings to existing conf files
  • add user to groups
  • copy scripts / program files into appropriate folders

Note: Pay attention to files that require root access to edit them. You won’t be able to do the following:

sudo echo "alpha" > /etc/some/important/file
sudo echo "bravo" >> /etc/some/other/important/file

The “sudo” applies only to “echo” in the examples above. Here’s how to replace and append the contents of those files instead:

echo "alpha" | sudo tee /etc/some/important/file
echo "bravo" | sudo tee -a /etc/some/important/file

Note: Here’s how to copy dotfiles with a wildcard (*):

shopt -s dotglob
cp -ar ./data/dotfiles/* ~

Without the first line, the * wouldn’t match files starting with “.”.

Part 3: Change system settings

In this part we’ll focus on two different tools: gsettings and dconf. I was planning to use only gsettings, but it turns out that some things just can’t be changed with it.

Part 3.1: gsettings

My favourite way to make use of gsettings is to save all current settings from fresh install and diff them against my working machine.

On a fresh install within Virtualbox:

gsettings list-recursively > ~/original.txt

On my working machine:

gsettings list-recursively > ~/new.txt

It’s a good idea to sort settings and get rid of duplicates before diffing the two files. Sublime Text can do that for you as well as diff the files. This way you will be able to see which settings actually changed since the fresh installation.

Copy all settings that you want to preserve and prepend each line with gsettings set. Don’t forget to add double quotes around arrays like ['spotify.desktop'].
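If you’d rather do the sorting, deduplicating and diffing in the terminal, here’s a minimal sketch (assuming both dumps sit in your home directory):

# keep only the lines that appear in new.txt but not in original.txt
sort -u ~/original.txt > /tmp/original.sorted
sort -u ~/new.txt > /tmp/new.sorted
comm -13 /tmp/original.sorted /tmp/new.sorted | sed 's/^/gsettings set /' > ~/changed-settings.sh

You’ll still have to add the double quotes around arrays by hand.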

Part 3.2: dconf

I’ve used this tool to capture a few more settings that I couldn’t change with gsettings. This time, the easiest way to do it is to run dconf on a fresh install in monitor mode with:

dconf watch /

And make the desired changes manually. You should see paths popping up on the screen with their new values. Prepend dconf write to the lines and values you want to set on a fresh machine and add them to your script.
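For example, shrinking the launcher icons while dconf watch / is running prints something like this (your key and value will obviously differ):

/org/compiz/profiles/unity/plugins/unityshell/icon-size
  32

which becomes:

dconf write /org/compiz/profiles/unity/plugins/unityshell/icon-size 32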

One final note

sudo requires you to retype your user password after a few minutes of inactivity (the exact timeout varies), so I try to put all sudo commands before the rest of the commands, as much as possible.
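If reordering isn’t enough, a common trick (a sketch - I haven’t baked it into the script below) is to ask for the password once up front and refresh sudo’s cached credentials in the background for as long as the script runs:

# ask for the password once, then keep the sudo timestamp fresh
sudo -v
while true; do sudo -n true; sleep 60; kill -0 "$$" || exit; done 2>/dev/null &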

Here’s my current script:

#!/bin/bash


# add repos
sudo apt-add-repository -y "deb http://repository.spotify.com stable non-free"
sudo add-apt-repository -y "deb http://linux.dropbox.com/ubuntu $(lsb_release -sc) main"
sudo add-apt-repository -y "deb http://archive.canonical.com/ $(lsb_release -sc) partner"
sudo add-apt-repository -y "deb http://dl.google.com/linux/chrome/deb/ stable main"
sudo add-apt-repository -y "deb http://dl.google.com/linux/talkplugin/deb/ stable main"
sudo add-apt-repository -y ppa:webupd8team/sublime-text-3
sudo add-apt-repository -y ppa:tuxpoldo/btsync
sudo add-apt-repository -y ppa:freyja-dev/unity-tweak-tool-daily
sudo add-apt-repository -y ppa:stefansundin/truecrypt
sudo apt-key adv --keyserver pgp.mit.edu --recv-keys 5044912E
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 94558F59
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -


# basic update
sudo apt-get -y --force-yes update
sudo apt-get -y --force-yes upgrade


# install apps
sudo apt-get -y install \
    libxss1 spotify-client sublime-text-installer git gitk gitg \
    virtualbox virtualbox-guest-additions-iso filezilla dropbox \
    skype btsync-user gimp p7zip p7zip-full p7zip-rar unity-tweak-tool \
    indicator-multiload curl gparted dkms google-chrome-stable \
    ubuntu-wallpapers* php5-cli php5-common php5-mcrypt php5-sqlite \
    php5-curl php5-json phpunit mcrypt ssmtp mailutils mpack truecrypt \
    nautilus-open-terminal google-talkplugin linux-headers-generic \
    build-essential tp-smapi-dkms thinkfan moc


# install Composer
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
sudo chmod 755 /usr/local/bin/composer


# install Laravel
sudo wget http://laravel.com/laravel.phar
sudo mv laravel.phar /usr/local/bin/laravel
sudo chmod 755 /usr/local/bin/laravel


# Virtualbox
sudo adduser x vboxusers


# email
sudo cp ./data/etc/ssmtp.conf /etc/ssmtp/ssmtp.conf
sudo chmod 744 /etc/ssmtp/ssmtp.conf


# x200 fan settings
# http://hackmemory.wordpress.com/2012/07/19/lenovo-x200-tuning/
echo "tp_smapi" | sudo tee -a /etc/modules
echo "thinkpad_acpi" | sudo tee -a /etc/modules
echo "options thinkpad_acpi fan_control=1" | sudo tee /etc/modprobe.d/thinkpad_acpi.conf
sudo cp ./data/etc/default/thinkfan /etc/default/thinkfan
sudo cp ./data/etc/thinkfan.conf /etc/thinkfan.conf
sudo chmod 744 /etc/default/thinkfan
sudo chmod 744 /etc/thinkfan.conf


# usb wifi + disable built in wifi // https://github.com/pvaret/rtl8192cu-fixes
mkdir -p /tmp/bootstrap/usb-wifi-fix/
unzip -d /tmp/bootstrap/usb-wifi-fix/ ./data/usb-wifi-fix.zip
sudo dkms add /tmp/bootstrap/usb-wifi-fix/
sudo dkms install 8192cu/1.8
sudo depmod -a
sudo cp /tmp/bootstrap/usb-wifi-fix/blacklist-native-rtl8192.conf /etc/modprobe.d/


# swappiness
cat ./data/etc/sysctl-append | sudo tee -a /etc/sysctl.conf


# Sublime Text 3
mkdir ~/.config/sublime-text-3/
unzip -d ~/.config/sublime-text-3/ ./data/sublime-text-3.zip
cp -ar ./data/sublime-text-3/* ~/.config/sublime-text-3/


# fonts
mkdir ~/.fonts
cp -ar ./data/fonts/* ~/.fonts/


# scripts
mkdir ~/.scripts
cp -ar ./data/scripts/* ~/.scripts/
chmod +x ~/.scripts/*


# dotfiles
shopt -s dotglob
cp -a ./data/dotfiles/* ~


# autostart
cp -a ./data/autostart/* ~/.config/autostart/


# Filezilla servers
mkdir ~/.filezilla/
cp -a ./data/filezilla/sitemanager.xml ~/.filezilla/


# Terminal
cp -a ./data/gconf/%gconf.xml ~/.gconf/apps/gnome-terminal/profiles/Default/


# folders
rm -rf ~/Documents
rm -rf ~/Public
rm -rf ~/Templates
rm -rf ~/Videos
rm -rf ~/Music
rm ~/examples.desktop
mkdir ~/Development
mkdir ~/BTSync


# update system settings
gsettings set com.canonical.indicator.power show-percentage true
gsettings set com.canonical.indicator.sound interested-media-players "['spotify.desktop']"
gsettings set com.canonical.indicator.sound preferred-media-players "['spotify.desktop']"
gsettings set com.canonical.Unity form-factor 'Netbook'
gsettings set com.canonical.Unity.Launcher favorites "['application://google-chrome.desktop', 'application://sublime-text.desktop', 'application://spotify.desktop', 'application://nautilus.desktop', 'application://gnome-control-center.desktop', 'application://gitg.desktop', 'application://gnome-terminal.desktop', 'unity://running-apps', 'unity://expo-icon', 'unity://devices']"
gsettings set com.canonical.Unity.Lenses remote-content-search 'none'
gsettings set com.canonical.Unity.Runner history "['/home/x/.scripts/screen_colour_correction.sh']"
gsettings set com.ubuntu.update-notifier regular-auto-launch-interval 0
gsettings set de.mh21.indicator.multiload.general autostart true
gsettings set de.mh21.indicator.multiload.general speed 500
gsettings set de.mh21.indicator.multiload.general width 75
gsettings set de.mh21.indicator.multiload.graphs.cpu enabled true
gsettings set de.mh21.indicator.multiload.graphs.disk enabled true
gsettings set de.mh21.indicator.multiload.graphs.load enabled true
gsettings set de.mh21.indicator.multiload.graphs.mem enabled true
gsettings set de.mh21.indicator.multiload.graphs.net enabled true
gsettings set de.mh21.indicator.multiload.graphs.swap enabled false
gsettings set org.freedesktop.ibus.general engines-order "['xkb:us::eng']"
gsettings set org.freedesktop.ibus.general preload-engines "['xkb:us::eng']"
gsettings set org.gnome.DejaDup backend 'file'
gsettings set org.gnome.DejaDup delete-after 365
gsettings set org.gnome.DejaDup include-list "['/home/x/Development', '/home/x/Pictures', '/home/x/.scripts', '/home/x/Sync/Backitude', '/home/x/Dropbox/Food & Private log']"
gsettings set org.gnome.DejaDup periodic-period 1
gsettings set org.gnome.DejaDup welcomed true
gsettings set org.gnome.desktop.a11y.magnifier mag-factor 13.0
gsettings set org.gnome.desktop.background picture-uri 'file:///usr/share/backgrounds/163_by_e4v.jpg'
gsettings set org.gnome.desktop.default-applications.terminal exec 'gnome-terminal'
gsettings set org.gnome.desktop.input-sources sources "[('xkb', 'us')]"
gsettings set org.gnome.desktop.input-sources xkb-options "['lv3:ralt_switch', 'compose:rctrl']"
gsettings set org.gnome.desktop.media-handling autorun-never true
gsettings set org.gnome.desktop.privacy remember-recent-files false
gsettings set org.gnome.desktop.screensaver lock-enabled false
gsettings set org.gnome.desktop.screensaver ubuntu-lock-on-suspend false
gsettings set org.gnome.gitg.preferences.commit.message right-margin-at 72
gsettings set org.gnome.gitg.preferences.commit.message show-right-margin true
gsettings set org.gnome.gitg.preferences.diff external false
gsettings set org.gnome.gitg.preferences.hidden sign-tag true
gsettings set org.gnome.gitg.preferences.view.files blame-mode true
gsettings set org.gnome.gitg.preferences.view.history collapse-inactive-lanes 2
gsettings set org.gnome.gitg.preferences.view.history collapse-inactive-lanes-active true
gsettings set org.gnome.gitg.preferences.view.history search-filter false
gsettings set org.gnome.gitg.preferences.view.history show-virtual-staged true
gsettings set org.gnome.gitg.preferences.view.history show-virtual-stash true
gsettings set org.gnome.gitg.preferences.view.history show-virtual-unstaged true
gsettings set org.gnome.gitg.preferences.view.history topo-order false
gsettings set org.gnome.gitg.preferences.view.main layout-vertical 'vertical'
gsettings set org.gnome.nautilus.list-view default-zoom-level 'smaller'
gsettings set org.gnome.nautilus.preferences executable-text-activation 'ask'
gsettings set org.gnome.settings-daemon.plugins.media-keys terminal 'XF86Launch1'
gsettings set org.gnome.settings-daemon.plugins.power critical-battery-action 'shutdown'
gsettings set org.gnome.settings-daemon.plugins.power idle-dim false
gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing'
gsettings set org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing'


# update some more system settings
dconf write /org/compiz/profiles/unity/plugins/unityshell/icon-size 32
dconf write /org/compiz/profiles/unity/plugins/core/vsize 1
dconf write /org/compiz/profiles/unity/plugins/core/hsize 5
dconf write /org/compiz/profiles/unity/plugins/opengl/texture-filter 2
dconf write /org/compiz/profiles/unity/plugins/unityshell/alt-tab-bias-viewport false


# requires clicks
sudo apt-get install -y ubuntu-restricted-extras


# prompt for a reboot
clear
echo ""
echo "===================="
echo " TIME FOR A REBOOT! "
echo "===================="
echo ""
Posted 7 months ago

New Year’s resolution: Inbox Zen

It’s good to clean up your environment every now and then. Pretty much everyone uses email today and it seems like many of us don’t really take care of this bit of our cyber space. Every morning I’d wake up to 10-30 emails that I’d select and archive in bulk, without even skimming through their contents, all scattered across different labels, occasionally separated by a few emails that I did actually want to read. Twitter notifications, offers from Amazon, random newsletters that I always plan to read “later”, newsletters that I never explicitly signed up for. Many more would get filtered out of my inbox before I could even see them, archived automatically, sent to spam, deleted. Coming back from a holiday would mean digging through 100s of emails. Not good.

It’s worth mentioning that I’ve disabled both Gmail tabs (Social, Promotions, etc.) and Priority Inbox as soon as they became available. I believe it’s better to clean the mess instead of sweeping it under the rug.

Today I’ve spent a couple of hours tweaking and uncluttering my Gmail account. I never really had a problem getting down to inbox zero (or near zero), but over the years the amount of dirt has built up and the inbox needed a thorough cleaning. Here are the steps I took:

1. Unsubscribe from newsletters

I’ve unsubscribed from 20+ newsletters, most of which I’d archive as soon as I opened the email, without even reading the contents. The best way to go about it is to search for “unsubscribe” and go through the results one by one.

2. Unsubscribe from transactional emails

This one is still in progress, but I went through a number of emails and changed settings whenever I felt I didn’t really need the notification. Great examples are Twitter, Facebook and Google+ emails (I get notifications on my phone anyway). Another example would be forums that let you receive notifications in real time or in a daily/weekly digest. I opted for daily emails for threads that I’m particularly interested in right now, disabling notifications for the rest.

3. Reduce the number of custom labels

Many of the custom labels hadn’t been used in years and contained 1-5 emails that I didn’t need anymore. I got rid of most of them, leaving just 3: for travel-related emails, phpconsole and communication with people close to me. No emails were deleted in this step, so all I got rid of was a bit of unneeded structure, down from ~30 to 3 - not bad!

4. Hide most of the labels

The only labels that I have visible by default are “Inbox” and “Starred”. “Drafts” are visible only if there’s anything to show (empty 99% of the time). All other labels, Categories and Circles are always hidden and I can get to them by clicking “More” below the two visible default labels.

5. Reduce the number of filters

Many of the filters became redundant after performing steps 1-3, so I got rid of them. Most of the filters that were left did exactly the same thing: Make sure that emails coming from address xxxxxx@yyyyyy.com never get sent to spam. It’s really useful for automated emails from servers or my Raspberry Pi that often contain exactly the same copy and might be treated by Big G’s robots as spam. You wouldn’t want to miss emails saying that the server is down, would you? I might combine them all into one big filter in the future, but for now it all looks pretty good. Down from ~70 to 16.

6. Delete transactional emails

This step, along with step 7, allowed me to shrink my email archive by ~800MB. I searched for transactional emails, mainly from Twitter and Facebook along with a few more websites, and got rid of all of them. There is absolutely no value in keeping these emails.

7. Delete emails with large attachments

I started with the search “larger:25m”, which shows emails with attachments larger than 25MB, and kept lowering it by 5MB, deleting emails that were no longer useful - in many cases photos for old projects that just took up space and that I’d never need again.

8. Ongoing maintenance

I’m on the lookout for unwanted emails that still make it to my inbox, and I’m getting rid of them for good, one by one, instead of just archiving them.

That’s it, the job is (nearly) done. I’m looking forward to a bigger percentage of human-created emails in my inbox that were meant to reach me specifically.

Have tips on staying sane while working with email? Hit me up in the comments below.

Posted 8 months ago

Raspberry Pi [Part 3]: Amazon Glacier

Ok, it’s time for a real task for our Raspberry Pi. Today we’ll learn how to configure a command line client for Amazon Glacier and push GBs of data to the cloud. We’ll also configure our Gmail account to send us an email when our RPi is done uploading data. Last, but not least, we will learn how to limit the RPi’s upload rate so that other devices can still use the internet.

I’ve recently used my RPi for this very task, pushing 120GB of my photos and backups up to Glacier. It took quite a while on my not-so-good internet connection - I left it running for a couple of weeks. The great thing about the RPi is that it’s pretty much inaudible, even with an HDD spinning 24/7, which makes it a perfect little server that can run under your desk. Let’s get right to it!

1. Getting up to date

Let’s log in and update the system

ssh pi@raspberrypi.lan
sudo apt-get update && sudo apt-get upgrade

2. Install glacier-cmd

We will start by installing git and the required Python libraries, and then install glacier-cmd

sudo apt-get install python-setuptools git
git clone git://github.com/uskudnik/amazon-glacier-cmd-interface.git
cd amazon-glacier-cmd-interface
sudo python setup.py install

3. Configure it

Let’s create a config file for glacier-cmd and fill it in

nano ~/.glacier-cmd

Add the following, replacing your_access_key, your_secret_key and your_aws_region with correct values

[aws]
access_key=your_access_key
secret_key=your_secret_key

[glacier]
region=your_aws_region
logfile=~/.glacier-cmd.log
loglevel=INFO
output=print

Now you should be able to see your vaults by executing

glacier-cmd lsvaults

Success!

4. New vault

Let’s create a new vault for our photos

glacier-cmd mkvault "photos"

You should be able to see a new “photos” vault on the list

glacier-cmd lsvaults

5. Uploading a test file

Ok, now we can try to upload a file. I’m going to upload a file “ocr_pi.png” that I can see in my home directory

glacier-cmd upload --description "ocr_pi.png" photos "ocr_pi.png"

As you can see, I set the description to match the filename. By default, it would be set to the full path of the file, which is something we don’t want - hence the description parameter.

6. Uploading multiple files

Now we’re going to create a script that will take care of uploading a bunch of files. Navigate to a folder that holds the files that you want to upload. In this example I’m going to upload zip archives. I trust you can figure out how to prepare your files for the upload. I’ve tried to keep every zip file below 500MB, making it easier to upload and also download data in the future, in case I need to access part of it.

Let’s create a folder where we’ll move uploaded files

mkdir uploaded

and a new script in the same folder as your archives with

nano upload.sh

and paste the following

find . -maxdepth 1 -name "*.zip" | sort | while read file ; do
    echo "Uploading $(basename "$file") to Amazon Glacier."
    glacier-cmd upload --description "$(basename "$file")" photos "$file" && mv "$file" "uploaded"
done

Now we can execute the script with

bash upload.sh

It should upload all files one by one, showing progress and rate as it does its thing.

7. Installing screen

All good and well, but we still can’t leave it running on its own, uploading away all the files that we’ve prepared. We could, in theory, use a cron job for that, but I personally like to be able to see the progress in real time whenever I want.

We’re going to install screen, a little utility that lets us disconnect from an ssh session while it’s still running and connect back to it later, as if we never left.

sudo apt-get install screen

Now, let’s start a session called simply “pi” within screen with

screen -S pi

You might notice that not much has changed. In fact, everything looks exactly the same. But let’s see what screen will let us do. We will start top and disconnect, then we will try to reconnect and see if top is still running

top

Now press ctrl+a followed by d. After that, you should see information similar to

[detached from 3068.pi]

We can now exit our ssh session with

exit

If everything went well, top is still running on our RPi even though we’re disconnected. Let’s see

ssh pi@raspberrypi.lan
screen -r pi

Boom, top is still running! As you can see, from the system’s perspective we’ve never logged out. It’s going to be really useful for glacier-cmd. Just log into your “pi” screen session and execute our bash script as before

bash upload.sh

Now you can disconnect with ctrl+a followed by d and reconnect later to see how the script is doing. Neat, eh?

8. Email notification

I’d also like to be notified when the RPi is done uploading my files. It might take days (or weeks), depending on how much data you want to upload and how fast your internet connection is. The unfortunate truth is that upload speeds are almost always much worse than download speeds.

Let’s configure the mail command, so the RPi can email us about the finished upload using a Gmail account.

sudo apt-get install ssmtp mailutils mpack
sudo nano /etc/ssmtp/ssmtp.conf

And set (or add if they are not there) these options

mailhub=smtp.gmail.com:587
hostname=raspberrypi
AuthUser=myraspberrypilogin@gmail.com
AuthPass=myraspberrypipassword
useSTARTTLS=YES

Now we should be able to send a test message

echo "email body" | mail -s "email subject" your@email.com

If everything worked fine, we can add email notification to our upload script

nano upload.sh

The whole script should look like this

find . -maxdepth 1 -name "*.zip" | sort | while read file ; do
    echo "Uploading $(basename "$file") to Amazon Glacier."
    glacier-cmd upload --description "$(basename "$file")" photos "$file" && mv "$file" "uploaded"
done

echo "Selected files were uploaded successfully." | mail -s "Glacier uploads finished" your@email.com

Now, our script is going to email us when it’s finished.

9. Throttling upload

The last part of this tutorial is throttling upload speed, so that RPi doesn’t choke your internet connection

sudo apt-get install wondershaper
sudo wondershaper wlan0 100000 400

The limits that we’re setting are in kilobits per second, so the 400 above equals 50 kB/s. The first parameter is our network device, wlan0 for wifi, eth0 for a wired connection. The second parameter is download speed, which you probably don’t want to limit. The third parameter is upload speed.

Run the following to clear limits

sudo wondershaper clear wlan0

You might want to add these commands to cron, so that the RPi can use as much upload speed as possible at night and cut it back during the day. Here are the entries (in /etc/crontab format, hence the extra “root” user field) that limit the speed at 10am and remove the limitation at 1am, every day

0 10 * * * root sudo wondershaper wlan0 100000 400
0  1 * * * root sudo wondershaper clear wlan0

That’s it, our RPi is ready to push tons of data to the cloud.


Posted 10 months ago

Raspberry Pi [Part 2]: External drives, Samba and SFTP

In the second part of this series I want to show you how to connect external drives and configure SFTP and Samba for Raspberry Pi.

Note: You will need a powered USB hub if you plan to connect a 2.5” external HDD (it requires more power than the RPi can provide).

1. Update

Let’s start with updating our RPi

sudo apt-get update && sudo apt-get upgrade

2. Setting up HDD

Now, the safest way to plug in our external HDD is to shut the RPi down with

sudo poweroff

unplug the power source, connect the HDD and plug it back in. After it boots and you’re able to ssh in, execute

sudo blkid

to see the list of disks. Mine looks like this:

/dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="boot" UUID="2654-BFC0" TYPE="vfat"
/dev/mmcblk0p2: UUID="548da502-ebde-45c0-9ab2-de5e2431ee0b" TYPE="ext4"
/dev/sda1: LABEL="Data" UUID=3862A6DC65464A36 TYPE="ntfs"

The first two lines are Raspbian’s partitions on the SD card. The third line describes my HDD. As you can see, its UUID is “3862A6DC65464A36” and it’s an NTFS drive. We will need this information shortly.

Now we’re going to create a folder where all our drives (if you plan to use more than one) are going to be accessible and a folder that will represent our HDD

sudo mkdir /media/shares
sudo mkdir /media/shares/data

The next step is to open fstab file

sudo nano /etc/fstab

and add the following configuration

UUID=3862A6DC65464A36 /media/shares/data auto uid=pi,gid=pi,noatime 0 0

As you can see, the UUID matches my HDD. That’s important in case you plug in more drives and the path (e.g. /dev/sda1) changes.

Execute the following to mount the drive

sudo mount -a

Now you should be able to navigate to the drive using

cd /media/shares/data

And display its contents with

ls -lah

3. Handling NTFS

It’s quite possible that your external HDD will be formatted with NTFS. Your RPi will be able to see the folders/files and read them, but it won’t be able to make any changes. Let’s fix that

sudo apt-get install ntfs-3g
sudo mount -a

Reboot the RPi and ssh back in. Now you should be able to create and delete a test folder

cd /media/shares/data
mkdir test-folder
rmdir test-folder

4. Samba

Samba implements SMB, a cross-platform protocol that you can use to reach the HDD plugged into your RPi over wifi. We will have to set it up and open its ports in iptables. Let’s get right to it!

sudo apt-get install samba samba-common-bin

Now let’s set a samba password for user pi

sudo smbpasswd -a pi

Great, now it’s time to add locations that will be accessible via Samba

sudo nano /etc/samba/smb.conf

Uncomment the following line in the “Authentication” section

security = user

Now scroll to the very bottom and add the following

[shares]
  comment = Raspberry Pi shares
  path = /media/shares
  valid users = @users
  force group = users
  create mask = 0660
  directory mask = 0771
  read only = no

and restart Samba

sudo service samba restart

Now we need to add Samba’s ports to iptables and we should be able to connect to it from our computer over wifi! Let’s do it.

sudo nano /etc/network/iptables

And add lines for ports 137, 138, 139 and 445, so that it looks like this

*filter
:INPUT DROP [23:2584]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1161:105847]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i wlan0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i wlan0 -p tcp -m tcp --dport 137 -j ACCEPT
-A INPUT -i wlan0 -p tcp -m tcp --dport 138 -j ACCEPT
-A INPUT -i wlan0 -p tcp -m tcp --dport 139 -j ACCEPT
-A INPUT -i wlan0 -p tcp -m tcp --dport 445 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT

Let’s pull it in

sudo iptables-restore /etc/network/iptables
sudo iptables-save

And see if we can connect. Success!
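If you want to double-check from a Linux terminal instead (assuming smbclient is installed on your computer):

smbclient -L raspberrypi.lan -U pi
smbclient //raspberrypi.lan/shares -U pi

The first command lists the shares the RPi exposes, the second opens an interactive session.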

5. SFTP

Now we’re going to enable access to our data via SFTP. It piggybacks on the SSH connection, so we don’t have to open any additional ports.

sudo apt-get install vsftpd

(As pointed out in the comment below, there’s no need to install anything to access the device via SFTP protocol.)

That’s it! You should be able to connect using SFTP protocol and port 22, using username “pi” and your private SSH key as authentication method.
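A quick check from the command line should drop you straight into an interactive session:

sftp pi@raspberrypi.lan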

Note: You might have to convert your SSH key into a .ppk file if you use FileZilla.


Posted 10 months ago

Raspberry Pi [Part 1]: Basic setup without any cables (headless)

Today I want to show you how to set up a headless Raspberry Pi without any extra cables (HDMI or ethernet), screens, keyboards etc. You might have it all lying around, but you might as well be on the go with only your laptop and a usb cable powering your Raspberry Pi.

You can still follow this guide if you connect your RPi directly to the router - just skip step 3, where I set up the wifi card.

I’ll assume you already have:

  • Raspberry Pi
  • SD card (4GB+)
  • power source for it (charger for your mobile phone will usually do)
  • compatible usb wifi adapter

1. Getting Raspbian

The first step is to download the Raspbian image that we’ll be working with. You can get it from here (I’m using version 2013-09-25). Extract it; the image should be around 2.8GB

2. Writing it to SD card

Instead of trying to describe every possible way of writing the image on the SD card, I’m going to point you to an excellent resource on this topic - elinux.org article. Once you’re done with it, we can move to the next step.

3. Wifi settings

Don’t remove SD card from the reader on your computer. We’re going to set up the wifi interface, so that you can ssh into the box via wireless connection.

Open terminal and edit /etc/network/interfaces on the SD card (not on your machine)

Here’s how to open it with nano:

sudo nano /path/to/sd/card/etc/network/interfaces

and make it look like so:

auto lo

iface lo inet loopback
iface eth0 inet dhcp

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid "your-network-name"
wpa-psk "password-here"

You can save the file with “ctrl+x” followed by “y”.

4. Test ssh access

Now, we can ssh into it with:

ssh pi@raspberrypi.lan

Default password for user “pi” is “raspberry”.

5. raspi-config

Run

sudo raspi-config

to expand the filesystem, change the user password and set the timezone (in internationalisation options).

6. Password-less login

It’s time to secure it a bit. Log out by executing

exit

and copy your public ssh key onto the RPi with

ssh-copy-id pi@raspberrypi.lan

Now you should be able to ssh into the RPi without a password:

ssh pi@raspberrypi.lan

Don’t have an SSH key? No problem. Follow this guide from GitHub to create one.
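The short version, if you just want a key right now:

ssh-keygen -t rsa -C "your@email.com"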

7. sshd configuration

Now that we can ssh into the RPi without a password, it would be a good idea to disable password login.

sudo nano /etc/ssh/sshd_config

And change the following values

#change it to no
PermitRootLogin yes

#uncomment and change it to no
#PasswordAuthentication yes
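Restart the SSH daemon for the changes to take effect (keep your current session open in case something goes wrong):

sudo service ssh restart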

From now on you will be able to ssh into your RPi only with your private SSH key. Nice!

8. Update

Let’s update RPi:

sudo apt-get update && sudo apt-get upgrade

It might take a while.

9. Watchdog

Now we’re going to install watchdog. Its purpose is to automatically restart the RPi if it becomes unresponsive.

sudo apt-get install watchdog

sudo modprobe bcm2708_wdog

sudo nano /etc/modules

And at the bottom add:

bcm2708_wdog

Now let’s add watchdog to startup applications:

sudo update-rc.d watchdog defaults

and edit its config

sudo nano /etc/watchdog.conf

#uncomment the following:
max-load-1
watchdog-device

Start watchdog with

sudo service watchdog start

10. iptables

We’re going to use iptables to restrict access to our RPi

sudo nano /etc/network/interfaces

Add the following at the end of the file

pre-up iptables-restore < /etc/network/iptables

It’s going to pull in the iptables config that we’re about to create

sudo nano /etc/network/iptables

Add the following to the file

*filter
:INPUT DROP [23:2584]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1161:105847]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i wlan0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT

As you can see, we’re accepting incoming connections for ethernet (eth0) and wifi (wlan0) only on port 22. Now let’s pull this config into the system:

sudo iptables-restore < /etc/network/iptables

And check if it worked

sudo iptables-save

11. fail2ban

Now we’re going to install fail2ban, which will automatically ban IP addresses that fail to get into our RPi too many times

sudo apt-get install fail2ban

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Restart fail2ban

sudo service fail2ban restart

and check current bans with

sudo iptables -L

Done!

That’s it, our RPi is set up and much more secure.


Posted 11 months ago

Scaling down

Minimalism has found its way into my pockets. I’m watching it carefully, trying to figure out what will be its next step. A lot has changed recently and I can’t wait to see what’s next!

Wallet

Somewhere in 2011, I got my wallet scaled down from a sizable 1” brick into a nice, tiny pack that forces me to keep it tidy. There is no space for coins, excess of banknotes (1 max) or anything else for that matter. Every new item in my wallet replaces one of the existing items, so each thing has to be really useful to land in there. The wallet is sleek, barely visible in my pocket and so comfortable that once in a while I have to check if it’s still there (is it really a drawback?). Paper money is so yesterday.

Cameras

As a photo geek I used to own a DSLR, an SLR, a party-camera (HP 320 FTW!), a couple of lenses and all sorts of lighting equipment, but as the years went by, I lost more and more interest, slowly getting rid of my gear and considering replacing the bulky DSLR with something smaller. They say that the best camera is the one that is always with you, after all, so I decided to get myself a Canon G12. Nice and much smaller, almost pocketable. A few months later I finally acknowledged that I barely used it, because it was still a pain to carry around. Smaller doesn’t mean small. A separate charger and a pretty slow reaction time didn’t help. I decided it was time for a drastic change - I wanted something REALLY small.

Tablet + MiFi

I got myself a Nexus 7 back in the day when it still shipped with 8GB of memory. It was (and still is!) a very decent device. Prompt updates from Google are always nice and I consider a 7” device to be the best form factor of all the touch devices I’ve had a chance to get my hands on so far. Several of my blog posts were written on it and countless hours were spent browsing the internet / reading books / chatting with people. I’ve even used it as a sat nav for cycling thanks to a pretty chunky 4400mAh battery that lasts considerably longer than my phone’s. The lack of built-in GSM support was a PITA that forced me to buy a MiFi device to provide internet on the go for the tablet. Another device to carry around, charge, remember to take with me. Not good. Also, the size of the tablet was starting to be an issue for me - small enough to put into my jacket pocket while exploring a city (try that with an iPad), but not small enough to keep in my jeans pocket while in the pub, etc. Time for a drastic change.

Phone

For the past 2+ years I carried a ZTE Blade with a custom ROM and an overclocked processor from day 1. It is a really pleasant phone that can be bought for £60-£70 these days. A decent screen (480x800, 3.5”) and not-that-terrible specs, but nothing amazing, especially in 2013. It did the job, especially paired with the tablet. The battery lasted 3-4 days, mainly because I tried to use the tablet for all multimedia tasks. The tablet+MiFi+phone combo worked well most of the time and provided quite a pleasant experience (both tablet and phone running the newest Android 4.2.2), but you can’t always have all these devices with you. Also, the battery started acting weird after I upgraded from 2.3.7 to 4.2.2, so I had to use either an old SonyEricsson phone or the tablet as a backup alarm clock. Not good. Time for a drastic change.

Looking for a solution

I looked at all the issues described above and came to one conclusion: it was time to get rid of all that big, bulky, inefficient stuff and replace it with something much smaller, yet better quality. I shortlisted my requirements:

  • has to be a phone - I want to have one device that Does It All(tm)
  • very good camera - I’m getting rid of my Canon camera, so I need a replacement
  • good specs - the phone waits for me and not the opposite
  • good screen - I want to be able to read a book on it on the go, perhaps watch a film on a plane
  • very good battery life / extended battery option - what good is a powerful phone if you can’t use it?
  • good price - I don’t want to spend a fortune on it

I found only one device that ticked all the boxes, a phone that I customised (as with every electronic piece of equipment that I own), that runs beautifully and meets all my needs - Samsung Galaxy S3.

One device for all digital needs

I got it used from eBay for a decent price and bought a MASSIVE 7000mAh battery for it. I’ve replaced the stock software with nightly CyanogenMod 10.1 to get an experience as close to a Nexus device as possible. It’s a beast. Snappy, pretty, taking amazing photos (for a phone) and lasting forever on a single charge. Quoting a classic, "on a scale 1-10 it’s a definite win". I’m still getting used to the size (the change from 3.5” to 4.8” is not a small one), but I’m sure it’s not going to be a problem after a few weeks.

All in all, I retired my camera, tablet, MiFi and two phones and replaced it all with just a single device. WOW.

Other options?

The Samsung S4 has better specs and an available 7500mAh battery, but is much more expensive. The Samsung S4 mini or its successor could perhaps replace my current setup. I’d probably trade smaller overall size for a smaller screen (obviously) if the rest of the specs stayed decent. There’s a bunch of phones that match my criteria with the exception of battery life, which is a shame. I don’t understand phone manufacturers creating devices that barely last 1 day of moderate usage. I can’t be the only person who wants to be sure that no matter what I do during the day, I’m still going to have some battery juice left before going to bed!

The future is bright

Currently I’m waiting for Google Wallet to become available in the UK to test it and perhaps get rid of another thing to carry around. I doubt I’d stop carrying my wallet completely (it’s not only credit cards!), but who knows.

Another, more interesting application would be to replace my laptop with my phone, using this dock. It features HDMI port, 3 USB ports, AUX out and micro USB for power. I use external screen most of the time anyway and I’m sure that S3’s specs would be good enough to run tools that I use for web development. Canonical is working on an operating system that combines Android and Ubuntu Desktop. Can’t wait to see it happen!

Do you feel like your devices own you? Did you recently get rid of a bunch of electronics? Share your story in the comments below!

Posted 1 year ago

All I want for Christmas is you, battery

There is an alarming trend in the technology world, especially in the mobile phone department. In the past your phone would last for a week or more on a single charge. Phones were mobile. You could unplug one on Friday, go hiking for a weekend and plug it back in when you came home after an adventure.

That’s something unheard of for modern phones. Most of them will barely last 1 day. Technology went forward, batteries got better, but companies like HTC and Apple decided to slim phones down, cutting out as much fat (read: battery) as possible.

So many powerful devices, so little juice to keep them running.

We ended up with powerful devices that turn into bricks by 6pm. If you use your phone extensively - Facebook, Twitter, browsing the internet, maps, GPS, not to mention old school texts and calls - you will have a hard time getting through the entire day without plugging your device into the mains to charge it up.

There is something that drives me mad every time I think about it: in the past pretty much every laptop and every mobile phone had a user-replaceable battery. If I needed 8h of battery time on my laptop and knew that a standard battery would last only 4h, I could buy another one and swap it in when needed. If I went for a hike with my GPS-enabled phone and expected to use it a lot, I could take one or two extra batteries and make sure that I wouldn’t get lost.

Now, more and more laptop and phone manufacturers opt out of user-replaceable batteries in exchange for sleek, unibody designs. They do look good, but if you are out and about, you won’t be able to use them for too long.

There are exceptions

I was really pleased to see the new Macbook Air that offers 12h of battery life, but it’s still nowhere near the old Thinkpad X series (20+ hours). Another notable device is the Samsung S3, one of the very few modern, powerful phones that still has a replaceable battery. I’ve recently purchased a Zerolemon 7000mAh extended battery, which kept my phone running for 3.5 DAYS after the first charge. That’s with a fair bit of usage, including 12h of screen time.

Am I the only person who wants to have his gadgets running for days on a single charge?

Posted 1 year ago

CodeIgniter timing out after 300s ? Here’s a solution

Hey CodeIgniter developers, here’s another thing to look out for when developing with our favourite framework. We ran into this issue here at GatherContent a little while ago, trying to figure out why PHP scripts triggered from the CLI would always time out after 300s, even though every possible setting on the server was set to much higher values.

Turns out CodeIgniter overwrites the time limit that you set on your server with a “liberal” (according to the CI team) limit of 300s. Take a look:

(system/CodeIgniter.php, lines 100-108)
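For reference, the block in question (quoted here from CodeIgniter 2.x - check your own version) looks like this:

if (function_exists("set_time_limit") == TRUE AND @ini_get("safe_mode") == 0)
{
    @set_time_limit(300);
}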

The problem is that 300s is way too long for the front end parts of your app (no one’s going to wait that long for a page to load), and way too short for the back end parts (scripts generating massive PDFs, for example). It might be ok for most people most of the time, but it might bite you badly one day and you’ll waste your time trying to figure out why the heck your code times out.

The best fix is probably to comment out the entire section and let your server decide how long to run scripts for. No more unexpected behaviour.
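If you’d rather keep a limit for web requests, a gentler variation (my sketch, not an official CI fix) is to replace that block so it lifts the limit only when running from the CLI:

if (php_sapi_name() === 'cli')
{
    @set_time_limit(0); // 0 = no limit; web requests keep the server's setting
}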

And if you read that far, check out another unexpected CodeIgniter issue that I found a while ago.

Posted 1 year ago

Yellowish tint on your Nexus 7 / Samsung S3? Here’s how to fix it

I’ve been using my Nexus 7 for nearly a year now and never thought that there would be a simple solution to a problem that every Nexus 7 owner faces (literally) - the yellow tint on the screen. We all know it, we all saw it and there’s nothing we can do about it, right? Wrong.

I’ve found an app that does an amazing job of fixing (more like hacking) the colours on my precious. It’s called Screen Adjuster. The trick is very simple: the app displays a layer above everything else on the screen, which you can turn a little bit blue, which in turn offsets the yellow of the screen itself, making it much more “white”.

I’ve set blue to +13, left all other sliders alone and set the app to autorun on startup and hide its status bar icon - as a result I have a tablet that looks good and is not littered with any unnecessary notifications/icons/whatever.


Enjoy!

Posted 1 year ago

Production/development switch for your codebase

tl;dr: Add a “.production” file to the root folder of your codebase on production servers and a “.development” file to the root folder of your codebase on development servers (both files empty, only the name is important), ignore them globally in your git repo, ignore them locally in your FTP settings or whatever you’re using to push changes up to your servers (I’m using ST2 + an FTP plugin for the dev server) and add the following somewhere at the top of your index.php file:
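Something along these lines (a reconstruction based on the behaviour described below - the exact wording of the error message is up to you):

if (file_exists('.production'))
{
    define('ENVIRONMENT', 'production');
}
elseif (file_exists('.development'))
{
    define('ENVIRONMENT', 'development');
}
else
{
    exit('Environment file missing. Add an empty .production or .development file to the root folder of the codebase.');
}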

Now, you can put all settings for both production and development environments in your configuration files and choose the right one based on your ENVIRONMENT constant! Sweet, one less thing to worry about!

Long version

How do you deal with differences in your codebase between production environment and development environment?

In my case, I used to have my codebase in git set to production (error displaying disabled, live base_url, live database, all functionality enabled) and, using git’s “ignore locally” feature, kept several files changed on the dev server so that error displaying is enabled, base_url points to the dev server, database details point to the dev database and emails are redirected to phpconsole.

What’s wrong with that approach? Quite a few things, actually. What if a new developer sets up his account on the dev server without changing any config files and starts running his code against the live database? Well, we can change the repo to point to the dev db by default. But what will happen if someone deploys code to one of the production servers and forgets to change the values to point to the live db? Oopsie, some of our clients are hitting the dev database!

One day I was wondering how to make it all work automatically and remembered that Beanstalk uses a “.revision” file in the root folder of your codebase to track which revision is deployed to your servers.

I thought “brilliant!” and decided to use similar approach:

The snippet of code from the tl;dr above sets an ENVIRONMENT constant that can be used to change your application’s behaviour. If neither environment file is present, the page will exit with information about what happened. This way we’re mitigating the risk of running the wrong version of the code, while keeping the convenience of a single codebase.

Of course, you’re not limited to only 2 options, you can easily add another one, e.g. “.staging”.

Let me know what you think about this approach in the comments below.


Posted 1 year ago