Happy 2015!


New Year’s celebrations in Sydney have been amazing, starting with the now iconic fireworks on the bridge:

Wishing all my friends and fellow web professionals out there a very happy and successful year!

And more Radio Meuh for all please …


MWD0701: Log Management with ELK

elk_logo

In our series around modern web development, I’d like to touch on a vital component of the production pipeline, sitting in the area of debugging and monitoring (the MWD07 chapter): log management. Too often it is overlooked, even by seasoned developers and dev managers, and that’s a real shame, because at all stages of the application life cycle logs are a goldmine!

Obviously first and foremost for debugging purposes, at the development and testing stages. But also later on, once the application is in production, for performance monitoring, bug fixing and plain usage analytics. There are a lot of logs available in a web stack, not to mention those you will create and populate ad hoc for the verbose logging and overall auditability of your application: system logs, web server logs (access and errors), database logs, framework-level logs (such as those you get in Zend Framework or Symfony, in the PHP arena), Postfix and other mail logs, etc. All of these deserve proper handling, rotation, storage and data-mining.

In my past life in agency-land, I had the opportunity to play with a variety of web log analysers such as AWStats, Webtrends and the like. I also used the community version of Splunk with reasonable success; back then it seriously helped trace back a couple of server hacks, and also provided custom stats around web campaigns to hungry marketers.

Now that I am working on one main web application with my current employer, I have been looking for a robust and sustainable solution to manage logs. And while looking along the lines of Logstash, a tool I had used previously on a Java platform, I discovered the comprehensive solution now known as the ELK platform.

ELK stands for Elastic Search + Logstash + Kibana

Elastic Search has been around for a while, as a real-time search and analytics tool based on Lucene. Recently funded with a $70M C-round (press release), the company has undertaken the ambitious “Mission of Making it Simple for Businesses Worldwide to Obtain Meaningful Insights from Data”. Nothing less.

Logstash is this nice piece of software started 5 years ago, and maintained since then, by Jordan Sissel, a cheerful fellow developer also guilty of some other nifty little utilities, such as the handy FPM. Logstash helps you take logs and other event data from your systems and store them in a central place. It is now commercially supported by Elastic Search, and Jordan Sissel has joined the team.

And finally Kibana is a web frontend to visualise logs and time-stamped data. Produced by the vibrant Logstash community, with notable contributions from early committer Rashid Khan, it is now commercially supported by Elastic Search as well, as the preferred visualisation and dashboarding tool for Logstash and Elastic Search.

ELK_platform

So how does it work? Well the diagram above will give you the gist of it:

  • Logstash processes log files as inputs, applies codecs and filters to them (note the amazing Grok library, used as a middleware for regex patterns) and spits out outputs, including dedicated support for Elastic Search (see the sketch below).
  • Elastic Search consumes Logstash outputs and generates search indexes.
  • Kibana offers the user-friendly interface anyone expects to build business-enabling reports and dashboards.
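
To make this concrete, here is a minimal sketch of a Logstash pipeline configuration: it reads an Nginx access log, parses each line with a stock Grok pattern, and ships the result to Elastic Search. Treat it as an illustration only: the log path is an arbitrary example, and the exact option names of the elasticsearch output vary between Logstash versions.

# logstash.conf: a minimal, illustrative pipeline
input {
  file {
    path => "/var/log/nginx/access.log"   # example: any log file you want to ingest
    start_position => "beginning"
  }
}
filter {
  grok {
    # COMBINEDAPACHELOG is a stock Grok pattern matching Apache/Nginx combined logs
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { host => "localhost" }   # option names differ across Logstash versions
  stdout { codec => rubydebug }           # echo parsed events to the console while debugging
}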
Sample Dashboard in Kibana 3

To get the full picture of the solution, there’s probably no better preacher than the creator himself, Jordan Sissel, who has been a faithful presenter at PuppetConf for the last 3 years; check out these YouTube recordings:

Useful links:

MWD0201: Setting up a Mac for development (update)

A few months ago, I had a first crack at this topic: how to set up your Mac for modern web development. If you are curious enough, you’ll find the blog post here. Eight months on, I have taken a few things on board, and I believe the time has come for an update.

The full step-by-step document is available as a PDF attached (SettingupaMacforDevelopment_v1.1), but to summarise my take on this topic:

  • You need some basic utilities: OS enhancements, editors, network utilities.
  • You need Homebrew, the missing package manager for Mac OS X. Thanks to it, you will be able to install all the languages and tools you need (see the sketch below).
  • Finally you need the DevOps tools required for modern automation and deployment practices: VirtualBox, Vagrant and Docker.
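
To give a flavour of what this looks like in practice, here is a hedged sketch of the Homebrew session involved. Formula names are those in use at the time of writing (php55 in particular lived in a dedicated PHP tap), and the cask syntax has evolved across Homebrew versions:

$ brew update                            # refresh the formula catalogue
$ brew install git node mysql nginx      # core tools, database and web server
$ brew install php55                     # PHP 5.5, from the PHP tap of the day
$ brew cask install virtualbox vagrant   # DevOps tools, via the cask extension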

mac_setup

Once this is all done and dusted, you will be able to run a state-of-the-art web development environment on your Mac, on a day-to-day basis.

With handy shortcuts defined in your .bash_profile file, you will be able to start and stop services as you need them. A typical list of aliases would be:

# adding aliases

# PHP-FPM commands
alias php-fpm.start="launchctl load -w /usr/local/opt/php55/homebrew.mxcl.php55.plist"
alias php-fpm.stop="launchctl unload -w /usr/local/opt/php55/homebrew.mxcl.php55.plist"
alias php-fpm.restart='php-fpm.stop && php-fpm.start'

# MySQL commands
alias mysql.start="launchctl load -w /usr/local/opt/mysql/homebrew.mxcl.mysql.plist"
alias mysql.stop="launchctl unload -w /usr/local/opt/mysql/homebrew.mxcl.mysql.plist"
alias mysql.restart='mysql.stop && mysql.start'

# PostgreSQL commands
alias pg.start="launchctl load -w /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist"
alias pg.stop="launchctl unload -w /usr/local/opt/postgresql/homebrew.mxcl.postgresql.plist"
alias pg.restart='pg.stop && pg.start'

# NGINX commands
alias nginx.start='sudo nginx'
alias nginx.stop='sudo nginx -s quit'
alias nginx.reload='sudo nginx -s reload'
alias nginx.restart='nginx.stop && nginx.start'
alias nginx.logs.error='tail -250f /usr/local/etc/nginx/logs/error.log'
alias nginx.logs.access='tail -250f /usr/local/etc/nginx/logs/access.log'
alias nginx.logs.default.access='tail -250f /usr/local/etc/nginx/logs/default.access.log'
alias nginx.logs.default-ssl.access='tail -250f /usr/local/etc/nginx/logs/default-ssl.access.log'
alias nginx.logs.phpmyadmin.error='tail -250f /usr/local/etc/nginx/logs/phpmyadmin.error.log'
alias nginx.logs.phpmyadmin.access='tail -250f /usr/local/etc/nginx/logs/phpmyadmin.access.log'

# WebDEV shortcuts
alias webdev.start='php-fpm.start && mysql.start && nginx.start && mailcatcher'
alias webdev.stop='php-fpm.stop && mysql.stop && nginx.stop'

To conclude, the most important thing is to keep your webdev environment up to date on an ongoing basis.

Mac OS X updates

Visit the App Store to check for OS-level updates. Pay particular attention to Xcode updates.

Homebrew updates

All brew commands are here: https://github.com/Homebrew/homebrew/tree/master/share/doc/homebrew#readme

List the installed packages:

$ brew list

Update the formulas and see what needs a refresh:

$ brew update

Now upgrade your packages, individually (brew upgrade {formula}) or as a whole:

$ brew upgrade

If your paths and launch files are set properly, you should be fine even with an upgrade of PHP, MySQL, Nginx or NodeJS.

Pear updates

Simply run this to get a list of available upgrades:

$ sudo pear list-upgrades

And then to upgrade one of them:

$ pear upgrade {Package_Name}

Gem updates

All Gem commands are here: http://guides.rubygems.org/command-reference/

List the installed packages:

$ gem list

List those needing an update:

$ gem outdated

Then update gems individually or as a whole:

$ gem update

Node updates

Node itself should be updated with Homebrew on a Mac:

$ brew upgrade node

To update Node Package Manager itself, just run

$ sudo npm install npm -g

To list all packages installed globally:

$ npm list -g

Check for outdated global packages:

$ npm outdated -g --depth=0

Currently the global update command is buggy, so you can either update packages individually:

$ npm -g install {package}

Or run this script:

#!/bin/sh
set -e
set -x

# reinstall every outdated global package
for package in $(npm -g outdated --parseable --depth=0 | cut -d: -f2)
do
  npm -g install "$package"
done

Note that all global modules are stored here: /usr/local/lib/node_modules

Conclusion

Obviously this is a personal flavour, which characterises web development based on PHP, MySQL and NodeJS. For other destination ecosystems (Java, Ruby, Python), you can probably adapt the documentation above to fit your needs and specific constraints. But the main idea remains: use Homebrew, Ruby Gems, PHP Composer and Node NPM as much as you can to install additional libraries and manage dependencies, as recapped below.
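
As a purely illustrative recap (the package names below are arbitrary examples), all these managers follow the same one-liner philosophy:

$ brew install wget                  # system-level packages
$ gem install sass                   # Ruby gems
$ composer require monolog/monolog   # PHP dependencies, per project
$ npm install --save lodash          # Node packages, per project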

Other tools I could have covered are log management platforms (such as Splunk or ELK), error catching (such as Sentry), mobile application utilities (such as Cordova, Ionic, Meteor), or design utilities (such as OmniGraffle, Pixelmator, Sketch, MindMaple). Not to mention a variety of handy cloud services.

Please let me know what you guys out there think about this!

A star is born … well more exactly a Meteor! (v1.0.2 is out)

meteor_logo

Meteor was recently released in its official version 1.0, something long expected by its community of early adopters. If you don’t know what Meteor is, rush to the website https://www.meteor.com and see for yourselves.

In a nutshell, Meteor is a new, but very well-funded and production-ready, player on the scene, and one of the few frameworks that take a full-stack approach. Your app runs BOTH on the server and the client (in NodeJS on the server, and in your browser’s JavaScript engine on the client), and the two halves work together very holistically. It also comes bundled with MongoDB (although you can replace this with a bit of tinkering).

Everybody knows Meteor uses NodeJS behind the scenes. But does it use the NodeJS version in your PATH? Hmmm… no. Meteor is ultra portable and the developer does not need to know about NodeJS at all. So when you install Meteor, it downloads something called dev_bundle, which contains NodeJS and all the NPM modules needed by Meteor, all pre-compiled for your platform. That makes getting started with Meteor easier and quicker. Is there any problem with this approach? No. This is perfect; you just need to be aware of it, especially if you are planning to bundle several apps.
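
And to see it for yourself, the getting-started sequence is about as short as it gets (the app name below is an arbitrary example):

$ curl https://install.meteor.com/ | sh   # official installer, pulls the dev_bundle too
$ meteor create my-app                    # scaffold a new application
$ cd my-app
$ meteor                                  # run it at http://localhost:3000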

So why should you consider coding your next web app using Meteor?

  1. Your app will be a real-time one by default, thanks to the power of web sockets through NodeJS
  2. Just like in NodeJS, you can code the full stack with just one language: JavaScript
  3. You can save a lot of time with smart packages grabbed from the AtmosphereJS site
  4. The community is extremely supportive, and the company very well funded (read this)
  5. It’s optimised for developer happiness, and it’s friendly for beginner developers
  6. It inter-operates nicely with other JS libraries such as AngularJS, Famo.us, and more.
  7. It’s clearly ahead of the technical curve, and that reads through their mission statement: “… to build a new platform for cloud applications that will become as ubiquitous as previous platforms such as Unix, HTTP, and the relational database.”

Meteor 1.0

In conclusion, Meteor is extremely interesting and I think they do a lot of things very right: it’s a delight to work with. EVERYONE coding JavaScript should learn it, because it proposes the right way to go, full-stack. But it’s only an option if you’re in the position of replacing your entire stack, client and server (or working from scratch, of course). If you already have, say, a web API that you work against, or if you have an existing JavaScript frontend app that you just want to add some structure to, it won’t fit your needs. Then you would probably consider a more versatile approach, with ExpressJS as a NodeJS framework and Ionic as a mobile app packager (which I will cover in another post).

Useful links for Meteor resources

MWD03 – Provisioning a local development stack

In the previous post, we set up the Mac workstation and got it ready for modern web development.
In this chapter, we’ll discuss the next key step in setting ourselves up the right way to develop a web application, and this is about creating and provisioning a development environment.
Using Linux is not a crime!
VMs are fantastic
If you are planning to create your app using PHP, Java, Python and/or Ruby, then there is a 90% chance you will do that on a Unix/Linux-powered stack. Otherwise you would go for Windows, and things would not be very different anyway.
Before we throw money out the window renting a server in the cloud, let’s be practical and consider the most obvious option, which is to leverage your own local workstation to set up a virtual environment. Note again that I advise against using platform ports of xAMP (Apache-MySQL-PHP), and there are a few good reasons for that, along the lines of consistency:
  • Operating system discrepancies (starting with file systems)
  • Software versions
  • Files and folder permissions
  • Stability
This said, the best thing to do is to provision a virtual machine which replicates as closely as possible the target production environment. To this end, we use a virtualisation platform like VirtualBox, as proposed in the previous article, and can install with it any preferred OS stack. Let’s assume CentOS 6.5 64-bit for the example, but it could be anything else, including a custom, home-brewed VM.
Fortunately for us, instead of downloading the ISO at centos.org and going through the full install process, ready-made boxes are available on the web, and I can mention the following repositories:
My Vagrant is rich!
Vagrant is an amazing, accessible and free utility, and I hardly see how the modern web developer could ignore it. It allows you to create and configure lightweight, reproducible and portable development and staging environments, with the exact combination of services and utilities you need for your project. I will consistently use and refer to Vagrant hereafter, as I am now using it in both my hobbyist and professional lives.
The basics for Vagrant are very well explained on the official site, here: http://docs.vagrantup.com/v2/getting-started/index.html
To install Vagrant, just visit the page and download the right package for your OS: http://www.vagrantup.com/downloads.html
Done, we are ready to provision our Linux stack for development (and possibly staging) purposes:
As I am a RedHat/Fedora/CentOS enthusiast, I go for a CentOS 6.5 64-bit stack, which I pick from the shelves of Vagrant Cloud (but it could have come from anywhere else): https://vagrantcloud.com/mixpix3ls/centos65_64
This one has been set up with the VirtualBox Guest Additions (and I have a specific short article to help you out with upgrading your VB Guest Additions in case you update VirtualBox).
Let’s first create a working folder:
     $ mkdir -p ~/sandbox/my_project
     $ cd ~/sandbox/my_project
Now I initialise my local Linux stack:
     $ vagrant init mixpix3ls/centos65_64
=> This immediately creates a local Vagrantfile in your project folder, which you can freely edit and tweak to suit your needs, as we will see later.
One thing you might like to do immediately though is to organise proper web port forwarding, by inserting the following line in this Vagrantfile:
 
config.vm.network "forwarded_port", guest: 80, host: 8080
As you understand, this Vagrantfile should later on be part of your Git repository, as a way to share with your fellow developers what sort of server environment your app is supposed to run on.
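
For context, that forwarded_port line sits inside the configure block; a stripped-down Vagrantfile for this project would look something like the following minimal sketch, using the box chosen above:

# Vagrantfile: minimal sketch
Vagrant.configure("2") do |config|
  config.vm.box = "mixpix3ls/centos65_64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
end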
For now, let’s just switch the Linux machine ON:
     $ vagrant up
 
It will take some time to download the gig of data corresponding to the image; just watch the progress in your terminal window. But eventually your VM is up and running, and you can seamlessly SSH into it using the simple command:
     $ vagrant ssh
 
Three commands in total: isn’t that amazingly straightforward?
In case you wonder where all this magic happens, beyond the 5 KB Vagrantfile stored in the project folder: the downloaded boxes themselves live under ~/.vagrant.d/boxes in your home directory.
 
This is great, but still a little bit bare-bones, and we now have to continue provisioning the image with at least a web server and a database server.
Do you manual?
The old dog and control freak in me can’t help doing it the hard, manual way at least once, and this is what it looks like:
$ sudo yum update # Careful because a Kernel update may ruin the Virtualbox Guest Tools
$ sudo yum install ntp
$ sudo service ntpd start
$ sudo rm /etc/localtime
$ sudo ln -s /usr/share/zoneinfo/Australia/Sydney /etc/localtime #properly set server time to this part of the world we love
$ sudo yum install nano # I need my fancy text editor
$ sudo yum install httpd # This is Apache, and you could choose Nginx or Tomcat
$ sudo service httpd start
$ sudo chkconfig httpd on
$ sudo nano /etc/httpd/conf/httpd.conf # Change Admin email
$ cd /etc/httpd/conf.d
$ sudo mkdir vhosts
$ sudo yum install php php-mysql
$ sudo yum install php-* # all extensions, because I can’t exactly tell which ones I need for now
$ sudo nano /etc/php.ini # Change Memory parameters
$ cd /var/www/html
$ sudo nano /var/www/html/phpinfo.php # create a phpinfo() test page
$ # download phpMyAdmin-4.1.13-all-languages.tar.bz2 into /var/www/html first
$ sudo tar -jxf phpMy* # then unpack it
$ sudo rm phpMyAdmin-4.1.13-all-languages.tar.bz2
$ sudo mv phpMyAdmin-4.1.13-all-languages/ phpMyAdmin
$ cd phpMyAdmin
$ sudo find . -type f -exec chmod 644 {} \;
$ sudo find . -type d -exec chmod 755 {} \;
$ sudo mv config.sample.inc.php config.inc.php
$ sudo nano config.inc.php # Change blowfish secret
$ sudo rm -R setup
$ sudo yum install php-mcrypt # install MCrypt (may require the EPEL repository), to be able to deliver some reasonably serious work around cryptography
$ sudo yum install mod_ssl
$ sudo service httpd restart
$ sudo yum install mysql mysql-server
$ sudo service mysqld start
$ sudo mysqladmin -u root password ******
$ sudo chkconfig mysqld on
$ sudo yum install git
$ sudo nano /etc/httpd/conf.d/vhosts/my_website.conf # declare your virtual host
$ sudo service httpd reload
Now just hit
=> http://127.0.0.1:8080/phpMyAdmin/ to access MySQL via PHPMyAdmin
=> http://127.0.0.1:8080 in your local browser, and you should see the magic happening, with the default Apache page.
It does not seem much, yet you might easily spend 30+ minutes going through the above, if everything goes right and you do not inadvertently make any typo.
Plus, everything you’ve just done, your fellow developers in the team would have to do as well when they vagrant up their own local development environments: this is of course not acceptable.
One obvious and immediate countermeasure is to share the custom BOX we’ve just created with our peers via network or cloud storage. This is well covered by Vagrant here: http://docs.vagrantup.com/v2/boxes.html . Your peers would simply have to use the box add command:
$ vagrant box add my-box /path/to/the/new.box
$ vagrant init my-box
$ vagrant up
However, this is still a bit brute-force, and not fully flexible or future-proof: what if I realise I need to add a specific service or configuration? I would have to update my box and copy it over the network again for my peer developers, and moving around 1 GB of data is not the most elegant thing to do, is it?
Therefore we are looking for a more scripted and flexible way to provision our Linux stack on the fly.
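As a stepping stone, Vagrant’s built-in shell provisioner already gives a taste of this: point the Vagrantfile at a bootstrap script, and every newly created VM replays it. A minimal sketch, assuming a bootstrap.sh of your own making:

# in the Vagrantfile
config.vm.provision "shell", path: "bootstrap.sh"

# bootstrap.sh would then replay the yum commands from above, for example:
#   yum -y install httpd php php-mysql mysql-server
#   service httpd start && chkconfig httpd on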
In my next article, I will discuss a couple of simple enough yet professional solutions to provision your development environment in a robust and agile manner using either Chef or Puppet.
Next: MWD04 – Provisioning the stack with Chef

Link

Codecademy

Codecademy is an education company. But not one in the way you might think. We’re committed to building the best learning experience inside and out, making Codecademy the best place for our team to learn, teach, and create the online learning experience of the future.

Link

Lynda.com

lynda.com is an online learning company that helps anyone learn software, design, and business skills to achieve their personal and professional goals.