Category Archives: Computer things

Rocking your own SASS Framework

The following post will mostly contain a number of useful links regarding practices for building a SASS framework.

WordPress is so versatile

WordPress is a truly lovely platform to work with. It has grown over the years into something that is both well documented and stable, and through extensive use within the community it gets more secure all the time.

I could go on forever just talking about all the amazing things that WordPress brings to the table, but I'll cut myself short and just make a short list of the most important things.

The list

In no particular order, the list follows below. These are the things I enjoy most about WordPress.

  • A robust and competent CMS
  • User management with user roles
  • An organized administration interface
  • Well documented APIs
  • Competent plugin system
  • Competent theme system
  • Ability to create custom pages
  • General extensibility

So these are some of the most interesting things I have found when using WordPress. I have been working with it on and off as both administrator and developer, and my experience is overall a positive one.

If you ever find yourself looking for a platform to base a simple server system on, WordPress could be for you. That is, if your system relies on fairly infrequent, larger data transfers.

The example

Just today I learned something new about this wonderful platform, namely how to create a custom page template. Just write the following code in a *.php file in your current theme.

<?php /* Template Name: Example Template */ ?>

Where “Example Template” should be replaced with the name you want your custom page template to show up as. Below this comment you include your own custom PHP code to generate the complete page. Using the well-documented APIs of WordPress makes the rest a piece of cake. Try it!
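
As an illustration, a fuller template file might look like the sketch below. Everything apart from the Template Name comment is an assumption on my part: it presumes a standard theme with the usual header and footer and simply renders the page through the normal loop.

<?php
/* Template Name: Example Template */
get_header(); // pull in the theme's header
?>
<main>
<?php
// The standard WordPress loop, rendering the page title and content
while ( have_posts() ) {
    the_post();
    echo '<h1>' . get_the_title() . '</h1>';
    the_content();
}
?>
</main>
<?php get_footer(); // pull in the theme's footer ?>

Once the file is saved in the theme folder, the template shows up under Page Attributes when editing a page.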


Playing with Debian and Node.js

I have been playing around with Debian and Node.js during the last week, and I think the experience is quite interesting. Getting started and choosing the right packages and build chains is not as simple as it first seems.

My current choice of tools has fallen on nvm, gulp, compass, babelify and browserify. I also use virtual machines running in VirtualBox in order to secure a stable development environment. Using VMs makes the environment portable and resistant to platform differences.
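
As a sketch of how babelify and browserify fit together in such a setup, a manual build step could look something like this; the file names and the ES2015 preset are assumptions on my part, not a recipe from this post:

# Install the bundler, the transform and an ES2015 preset for Babel
npm install --save-dev browserify babelify babel-preset-es2015
# Bundle src/main.js into dist/bundle.js, transpiling ES6 along the way
./node_modules/.bin/browserify src/main.js \
  -t [ babelify --presets [ es2015 ] ] \
  -o dist/bundle.js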

Working with VMs

Working with VMs is all it's cracked up to be. VirtualBox 5 is a very nice platform to run on and performs very well even without hardware virtualization enabled.

The networking and virtual disk management features are the primary tools that enable a great working environment.

Working with Debian

Debian is my choice of platform since I am already comfortable with the OS. Choosing Debian over Ubuntu was simple, since stability and reliability are important factors.

To stay productive as a developer, I choose to keep my development toolbox in the native environment of the computer I am working on. This means that I need a way to communicate with the file system of the VM. This is done using the SMB protocol, sharing a projects folder and connecting it as a network drive. This allows Node to run very fast while I can work natively and quickly. There are some occasional slowdowns when using solutions with a lot of file I/O, such as git, but it's all within workable limits.
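
If the host machine happens to be Linux, connecting the share could look like the sketch below; the address, share name and credentials are placeholders. On Windows the same thing is achieved by mapping a network drive.

# Mount the VM's shared projects folder over SMB (all names are placeholders)
sudo apt-get install cifs-utils
sudo mkdir -p /mnt/projects
sudo mount -t cifs //192.168.56.101/projects /mnt/projects \
  -o username=myuser,uid=$(id -u),gid=$(id -g)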

Setting up Debian is pretty straightforward, but in order to prepare for Node.js and future web development using SASS I need a few extra packages installed.

# Basic installs
apt-get install build-essential ruby ruby-dev curl
# Install compass
gem install compass

Working with Node.js

In order to make the most of the node installation we install it using nvm (Node Version Manager). This enables switching between Node versions on the fly for the current project.

# Installing nvm (Node Version Manager)
curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash
# Installing a specific version of node
nvm install v0.10.32
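
Once a couple of versions are installed, switching is a one-liner. A short sketch of day-to-day usage; the second version number is just an example:

# Activate a version in the current shell
nvm use v0.10.32
# Install another version and make it the default for new shells
nvm install v4.4.0
nvm alias default v4.4.0
# Pin a version per project; a plain 'nvm use' reads it from .nvmrc
echo "v0.10.32" > .nvmrc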

Overall

In closing, working with Debian and Node.js is a very nice experience once you've gotten into using the right tools. If you are on a *nix-like system you'll get the most out of it, but Windows is really catching up.

I've been using Debian, Node.js, Compass and React to build web apps in my spare time and for work, and they are all truly a joy to work with. ES6-4-life, or until I finally return to coding in C.

Backing up a remotely accessible MySQL database

This may seem like a very specific case, since it might seem like a bad idea to expose an SQL database to the world wide web. Even though IP restrictions may apply, it may still be accessible to non-authorised personnel. Of course there's a trade-off between security and simplicity, and thus something like this might come in handy.

In order to make a dump of the MySQL database you want a tool called mysqldump, which is located in the mysql-client package in the Debian repositories. The following command will install the package for you. Remember that you need to use sudo or log in as super user.

sudo apt-get install mysql-client

Following this you want to utilise the mysqldump tool. A command may look as follows, depending on your use and needs.

mysqldump --host hostaddress -u username -ppassword databasename > databasename.sql

The mysqldump tool will output the database dump to stdout, and thus we want to pipe it to a file, in this case named ‘databasename.sql’. Note that the password should follow directly after the ‘-p’ flag without any spacing.
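
Taking it one step further, the dump could be scheduled. A hypothetical crontab entry might look like the line below; the host, credentials and paths are placeholders, and note that the % character must be escaped in a crontab:

# Dump and compress the database every night at 02:00
0 2 * * * mysqldump --host hostaddress -u username -ppassword databasename | gzip > /backups/databasename-$(date +\%F).sql.gz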

Self-hosted software projects

I have a desire to host my own projects for easy access over the internet. Of course, GitHub and Bitbucket are both great contenders in making this an easy task, but hosting on an external system feels scary and restrictive. This leads me to look for other solutions, preferably open source and possible to run on a Linux host.

One such alternative is Fossil, which as of now feels like a nice option. A test setup needs to be done in order to evaluate its functionality properly. It has features such as a ticket system, a wiki, and browsable history, among other things.
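
A first test run might look something like the sketch below; the repository name is made up:

# Create a new Fossil repository and serve its web UI (tickets, wiki, timeline)
fossil init projects.fossil
fossil server projects.fossil --port 8080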

I’ll need to look into this further.

Backups revival

After further contemplation on the topic of backups, I realized that every machine I run is a possible target and thus a possible entry point that may compromise the backup storage.

To minimize the impact of a possible malicious actor manipulating my entire backup storage, I am thinking of adding a storage space and a user for each backup configuration. That would result in the backups being completely isolated from each other, making sure that a compromised machine can only contaminate its own backup.

In order to simplify access to the backups, one might also create a separate group for reading from the backups. A read-only interface would make for easy access whilst maintaining high security.
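
A sketch of what that could look like for a single storage point; the group and admin user names are made up, and the storage user is the one created in the example further down:

# A shared group that may read the backups but not write to them
groupadd backupreaders
usermod -aG backupreaders myadminuser
# Owner writes, group reads, everyone else is kept out
chown storagepointpc0:backupreaders /datastore/d0
chmod 750 /datastore/d0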

The defaults for useradd should be correct, but if one would like to check them, the following command can be used.

useradd -D

The previous command could also set the user defaults by including the preferred setting on the command line.

useradd -D -g customgroup

The following commands will add a user with a custom home folder, which should be located in the datastore in order to avoid creating unnecessary home folders.

useradd -d /datastore/d0/ -c "Storage Point PC0" storagepointpc0 # Will create the user

passwd storagepointpc0 # Will set the password and make the user active

smbpasswd -a storagepointpc0 # Sets password for samba user and enables user

Choosing appropriate user names and storage point names is another topic that may still need some thought.

Backups revisited

As I previously mentioned, I have been working on setting up and securing my backups. This is a task that has got me thinking about how the backups should really be managed.

At first I was thinking of using a software RAID under Debian in order to secure the data with some fault tolerance, but then I had a small discussion with my brother, who dissuaded me from going down that path. His reasoning was that my data changes too seldom to really benefit from a true RAID. Instead one could utilise a local execution of the command “rsync -ra” or something similar in order to copy the information from one drive to the other.

After some thinking I came to the conclusion that, due to the risk of data pollution when connecting the backup to a possibly infected computer, it is best to maintain two separate backup drives: one which is exposed through my samba connection and one which is periodically copied from the exposed drive. This may prevent a possible attacker from locking or destroying my data.
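
The periodic copy could be as simple as a cron job on the backup server itself; a sketch, with placeholder mount points:

# Copy the exposed drive (d0) to the isolated one (d1) every night at 03:00
0 3 * * * rsync -ra /datastore/d0/ /datastore/d1/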

With that said, today I set up my exposed drive for backups over samba. To set up a new samba-accessible folder, one first needs to make sure the owner/group of the shared folder is correct, and then one can take a stab at adding it to the shares by editing the file “/etc/samba/smb.conf”.

[datastore0] # The share name may be chosen as one sees fit
  comment = Exposed data store share
  path = /datastore/d0
  writable = yes
  valid users = myuser
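
After editing smb.conf it does not hurt to check the syntax and reload the service; on Debian that might look like this:

# Check smb.conf for syntax errors
testparm
# Restart the Samba daemon so the new share is picked up
sudo service smbd restart   # the service may be called samba on older systems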

And with this my share was accessible and all was good. At least, I hope so, but who knows.

Securing the backups

I have been looking for a way to securely store my backups for a long time, but I never actually wanted to take the time to set something up. Now, though, I have taken some time to set up my backups properly.

By properly managing backups I mean adding at least one level of redundancy, and I want to do this automatically using RAID 1. Since my backup server box does not support hardware RAID, I will be using Debian software RAID.

In my box, which is an HP ProLiant N54L, I will be using two WD Red 2TB disks which will be running in RAID 1 mirror mode.

Configuring the disks for optimal support

It took me a while to get to a point where I was satisfied, but I settled on the method described in “Setting up WD RED” for initialising my original disk, that is, a single one of the WD RED disks. I used the GPT partitioning scheme just for the sake of it. MBR is ooold 😛

Basic steps were as follows after installing the parted package:

# parted -a optimal /dev/sdb             # open the disk with optimal alignment
[parted] mklabel gpt                     # create a new GPT partition table
[parted] print                           # verify the label
[parted] mkpart primary ext4 0% 100%     # one partition spanning the whole disk
[parted] print                           # verify the partition
[parted] quit

# mkfs.ext4 /dev/sdb1                    # create the filesystem on the new partition

The final steps involved editing the “/etc/fstab” file, which is pretty straightforward once you check out the format.
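
For reference, an entry for the new partition could look like the line below; the UUID and mount point are placeholders, and blkid reports the real UUID:

# blkid /dev/sdb1    # prints the UUID to use below
# /etc/fstab format: device, mount point, fs type, options, dump, pass
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /datastore/d0  ext4  defaults  0  2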