Tim Groeneveld

Random musings from the world of an Open Source geek

Installing NRPE for Nagios Monitoring


When you have more than three servers to monitor, automating the installation of NRPE is a must! At Digital Pacific, the configuration I have written is very versatile: there are about ten main lines, and adding or removing a server from one of those lines determines which services will be monitored and how.
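
The idea, in sketch form: services are attached to hostgroups on the Nagios server, so adding or removing a box from monitoring is just a matter of editing a members line. The path and hostnames below are made up for the example, not the actual Digital Pacific configuration:

cat >> /etc/nagios/objects/hostgroups.cfg <<'EOF'
define hostgroup{
        hostgroup_name  web-servers
        alias           Web servers checked over NRPE
        members         web01,web02,web03   ; add or remove a server here
        }
EOF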

I have built a set of RPMs for all the servers there, so installing NRPE is basically a two-step process and can be done completely hands off (if your hostname is set up correctly – which sometimes is not the case).

Step One

Install the Nagios repository into /etc/yum.repos.d/

wget http://software.digitalpacific.com.au/repos/nagios.repo -O /etc/yum.repos.d/nagios.repo

Install NRPE

yum install dp-nrpe dp-nagios-plugins

Step Two

Do a few basic configuration file edits!

# point NRPE at this host's primary IP (relies on the hostname resolving correctly)
BOB=`hostname -i`; sed -i \
     /etc/nagios/nrpe.cfg -e "s/^server_address=\(.*\)$/server_address=$BOB/"
# register the init script and (re)start the daemon
chkconfig --add nrpe
service nrpe start
service nrpe restart

If NRPE restarts you know you have done well!
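
As a final sanity check from the Nagios server itself, you can poke the new host with check_nrpe. The plugin path varies between distros, and the IP below is only an example:

/usr/lib64/nagios/plugins/check_nrpe -H 203.0.113.10
# a working NRPE daemon answers with its version string, e.g. NRPE v2.12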

Happy New Year!


To all my friends that I have never met in real life – Happy New Year!

This year is going to present some fun times. My code for autodeploying servers with predefined settings on them (eg, MySQL clusters, HTTP clusters) should be released some time soon. Also, ShareSource’s compile farm will go live. Another exciting project will be unleashed onto the world, but you will have to wait for that!

See you soon,
Tim

Updating my Arch system


This is the bad thing about running a distro that ships up-to-date software. Eventually you will get 276 package updates.

Clustering 101: Choosing the right web server


When I am faced with a decision about which web server to roll out with, I usually end up picking between two: Apache and Cherokee.

Apache is like your grandfather. He is 80 years old and has been around for a while. He is not always the guy you go to when you want something done quickly, but when you want it done reliably, you would not even think of going to anyone else.

Cherokee is like a kid fresh out of uni. Sure, he is still only a little baby, but he is packed full of the latest knowledge and has been taught how to do the job quickly. He is also much better than his granddaddy at doing the easier things, such as giving you the same document over and over and over again.

The way Apache is written, it does use up a lot of memory; however, there is also the benefit that if you want to do something, you can. There is literally any type of module you could ever want for Apache: more than 400 modules to download, compile, install and try.

Cherokee on the other hand has a much smaller selection of modules you can choose to run with, but don’t let this scare you! If you run pretty much stock standard Apache setups (such as I do for timg.ws and sharesource.org), then Cherokee will be able to come to the table with everything you need and more.

The really cool benefit you get from using Cherokee is that it does more out of the box with the modules it ships.

At the moment, I really like the development that is going into Cherokee. They look very hard at security, such as the new spawning mechanism introduced into Cherokee earlier this year.

But really, there is a lot of hype around the whole “lightweight” httpd thing. Sure, lighttpd and Cherokee are really fast at delivering static files, but are they really faster at anything else? In all honesty, not that much faster.
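
If you want numbers instead of hand-waving, ApacheBench is enough for a rough static-file comparison; point it at the same file served by each httpd in turn:

# 10,000 requests, 100 at a time, against whatever is listening on port 80
ab -n 10000 -c 100 http://localhost/index.html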

I hear about people doing a lot of really interesting things when it comes to web servers, like Apache+nginx+fastcgi, and then I wonder, couldn’t you just pick one product and stick with it? It’s not like the extra milliseconds are going to save you $200,000 a year.

I usually choose Cherokee for new installations now, simply because it does everything and I don't need to do much hunting around to make it ‘just work’. Not only that, but it has a nice web interface for administering it that would even make my Dad happy (yes, literally).
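
If you have never seen it, the admin interface is started with a single command; it binds to localhost and prints a one-time password to log in with (port 9090 is the default in the versions I have used):

cherokee-admin
# then browse to http://127.0.0.1:9090/ and use the password it printed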

Virtuozzo module for WHMCS


I have written a Virtuozzo module for WHMCS. If you have any Virtuozzo servers that you would love to have integrated with WHMCS, read the instructions I have posted on the WHMCS forum and download the module.

New books in my library


Today was 32°C. I decided to go to McDonald’s and buy a nice cold frozen Fanta.

A few days after I moved to Sydney I found this very awesome book store near Central Station called Basement Books. Seriously, WOW.

Basement Books, conveniently located in central Sydney, offers 8kms and over 10,000 titles of discounted books with savings of up to 90% of recommended retail prices.

Even though the book I was specifically looking for was not in the shop, in true Tim style I walked out with about 1.5kg worth of books. I suppose the only bad part is that it was only two books.

I was looking for a book on C, because my skills have deteriorated greatly after not really writing much C code for at least 24 months.

  • MySQL Developer’s Library, by Paul DuBois; and
  • ANSI C++, The Complete Language by Ivor Horton.

I have started reading the book by DuBois, and it is very well written and easy to understand. It also includes a huge section about writing C applications, so it's a win either way, because I imagine that if I did write a complete C application, it would use MySQL in some way.

If you're ever in Sydney, I would highly suggest going to the Basement Books store. It has amazing books… at amazing prices. Also, if you want to learn everything there is to know about MySQL, get this book. I only discovered partitioning for MySQL databases a few months ago when reading an article at work, and have loved the idea ever since. Having a nice section inside this book with practical examples for using partitioning has started to get my mind ticking.
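
For the curious, range partitioning looks roughly like the sketch below. The database and table names are invented for the example, and it needs MySQL 5.1 or later:

mysql test <<'EOF'
CREATE TABLE access_log (
    id        INT UNSIGNED NOT NULL,
    logged_at DATE NOT NULL
)
PARTITION BY RANGE (YEAR(logged_at)) (
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
EOF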

Installing NoMachine’s free Terminal Server on ArchLinux


If you have never used NoMachine before, it is a fantastic technology that allows you to have a Terminal Server for X on Linux, similar to XenApp for Windows. It is a very powerful application. I used to install FreeNX back in the day when it was released, but I have since learnt that it is just much easier to install the free version of NoMachine, especially for my own personal use.

It only took me about a minute, but I thought I would write down the quick and dirty hack I used to get NoMachine's free terminal server package (which allows two clients to connect at a time…) running on Linux.

# sudo su -
# mkdir -p /tmp/nx && cd /tmp/nx
# wget http://64.34.161.181/download/3.4.0/Linux/FE/nxserver-3.4.0-8.x86_64.tar.gz http://64.34.161.181/download/3.4.0/Linux/nxclient-3.4.0-5.x86_64.tar.gz http://64.34.161.181/download/3.4.0/Linux/nxnode-3.4.0-6.x86_64.tar.gz
# echo "the above URLs may no longer be correct by the time you read this; please check http://www.nomachine.com/download-package.php?Prod_Id=1351"
# cd /usr
# tar -xvf /tmp/nx/nxclient-3.4.0-5.x86_64.tar.gz
# tar -xvf /tmp/nx/nxnode-3.4.0-6.x86_64.tar.gz
# tar -xvf /tmp/nx/nxserver-3.4.0-8.x86_64.tar.gz
# ln -s /etc/rc.d /etc/init.d
# /usr/NX/scripts/setup/nxnode --install redhat
# /usr/NX/scripts/setup/nxserver --install redhat
# rm /etc/init.d

All the errors that the installer comes up with can be safely ignored.
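
To confirm the install actually worked (assuming the standard /usr/NX layout), ask the server for its status:

# /usr/NX/bin/nxserver --status

It should report that the NX server is up and running.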

NoMachine’s GUI for OS X requires Rosetta?


I was just booting up my Mac to take some nice pretty screenshots of my cluster install process, and I realised that NoMachine's client for OS X does not even properly support Intel CPUs -.-"

Anyways, unlike Adobe porting Creative Suite to Linux, at least NoMachine are actually working on it.

Cluster 101: Building your first dumb cluster


I would like to introduce you tonight to an x-part series (I don't know how many parts there will be; we will see as time progresses) entitled “How to build a www cluster in x days”.

In this series, we will be looking at a wide variety of techniques that can be used to build a cluster for hosting your next large project.

When I talk about clusters, I like to categorise them into two particular groups: dumb and intelligent clusters. A dumb cluster is a group of machines that essentially know very little of their surroundings. They do not take into account how many machines are currently running, where the machines are located or how the data is to be delivered to its destination. Put simply: you set it up, and it works. No complex heartbeat configurations (except when necessary) and nothing fancy.

Intelligent clusters, on the other hand, know a lot about where other machines on the network are. They know where the visitor to the website is coming from, and what the fastest way is to deliver content to that user.

Put simply: a “dumb cluster” is a group of machines that are set up to deliver content. If one machine goes down, the load balancer will detect this and take that node out of the server pool until it is reported back up again. An “intelligent cluster” knows where the user is visiting from. It can take into consideration the shortest internet route to its desired destination, or what route should be taken to get the best available speeds.
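
As a rough illustration, the health checking in a dumb cluster really is that simple; the hostnames below are placeholders, and in practice the load balancer we set up later in the series does this for you:

# naive health check: flag any node that stops answering on port 80
for node in web01 web02 web03; do
    if curl -s -o /dev/null --max-time 5 "http://$node/"; then
        echo "$node is up"
    else
        echo "$node is down - take it out of the pool"
    fi
done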

In this series, we will be looking more at the dumb clustering side of things. Dumb clusters are easy to set up, and well, if you really want an intelligent cluster, hire me :] So let's look at what we are going to discuss in this series:
  1. Setting up our first webserver
  2. Setting up our shared filesystem
  3. Setting up our http/https load balancer
  4. Setting up our mail server
  5. Setting up our backup system
  6. Setting up our mysql cluster
  7. Setting up our shell server
  8. Setting up our TCP/IP load balancer

For the purpose of writing this series, I will be using Xen installed on my machine, with CentOS 5.4 as the base operating system. At any one time, I will have up to five domains running on Xen. I will be testing loads and response times from my laptop.
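
If you want to follow along at home, seeing what Xen is running takes one command (run as root on the dom0):

xm list
# prints Name, ID, Mem(MiB), VCPUs, State and Time(s) for Domain-0 and each guest
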
Stay tuned, the first part of the series comes on Tuesday.

Is the United States becoming too paranoid?


I will admit that I do not usually discuss anything political; however, I think this is worth a post. A colleague at work was going through the process of installing Dell IT Assistant, and the US Government wanted immediate answers to a few questions. Here is the screenshot:

[Screenshot: nonuclear]