Posted by & filed under Clustering, Computers, Linux.

When I’m faced with the decision of which web server to roll out, I usually end up picking between two: Apache and Cherokee.

Apache is like your grandfather. He is 80 years old and has been around for a while. He is not always the guy you go to when you want something done quickly, but when you want it done reliably, you would not even think of going to anyone else.

Cherokee is like a kid fresh out of uni. Sure, he is still only a little baby, but he is packed full of the latest knowledge and has been taught how to do the job quickly. He is also much better than his granddaddy at the easier things, such as giving you the same document over and over and over again.

The way that Apache is written, it does use up a lot of memory. However, there is also the benefit with Apache that if you want to do something, you can. There is literally every type of module you could ever want: more than 400 of them to download, compile, install and try.
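
To give an idea of what “download, compile, install and try” looks like in practice, here is the usual apxs routine; mod_example.c is a made-up stand-in for whatever module you have downloaded:

# Compile (-c), install (-i) and activate (-a) a third-party module in one step.
apxs -cia mod_example.c
# Then reload Apache so the new module is picked up.
apachectl graceful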

Cherokee, on the other hand, has a much smaller selection of modules to choose from, but don’t let this scare you! If you run pretty much stock-standard Apache setups (as I do for timg.ws and sharesource.org), then Cherokee will be able to come to the table with everything you need and more.

The really cool benefit you get from using Cherokee is that it does more out of the box with the modules it ships with.

At the moment, I really like the development that is going into Cherokee. The developers look very hard at security; take, for example, the new spawning mechanism introduced into Cherokee earlier this year.

But really, there is a lot of hype over the whole “lightweight” httpd thing. Sure, lighttpd and Cherokee are really fast at delivering static files, but are they really faster at anything else? In all honesty, not that much faster.

I hear about people doing a lot of really interesting things when it comes to web servers, like Apache+nginx+fastcgi, and then I wonder: couldn’t you just pick one product and stick with it? It’s not like the extra milliseconds are going to save you $200,000 a year.

I usually choose Cherokee now for new installations, simply because it does everything and I don’t need to do much hunting around to make it ‘just work’. Not only that, but it has a nice web interface for administering it that would even make my Dad happy (yes, literally).
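
If you have never seen that interface, it is started from the command line, roughly like so (the default port and the one-time password behaviour are from my memory of Cherokee, so treat this as a sketch):

# Prints a one-time password and serves the admin UI on http://127.0.0.1:9090/
cherokee-admin
# Bind to all interfaces instead, if you need to reach it from another machine.
cherokee-admin -b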

Posted by & filed under Random Thoughts.

Today was 32°C. I decided to go to McDonald’s and buy a nice cold frozen Fanta.

A few days after I moved to Sydney I found this very awesome book store near Central Station called Basement Books. Seriously, WOW.

Basement Books, conveniently located in central Sydney, offers 8kms of discounted books across more than 10,000 titles, with savings of up to 90% off recommended retail prices.

Even though the book I was specifically looking for was not in the shop, in true Tim style I still walked out with about 1.5kg worth of books. I suppose the only bad part was that it was only two books.

I was looking for a book on C, because my skills have deteriorated greatly after not really writing much C code for at least 24 months. The two books I came home with were:

  • MySQL (Developer’s Library), by Paul DuBois; and
  • ANSI C++: The Complete Language, by Ivor Horton.

I have started reading the book by DuBois, and it is very well written and easy to understand. It also includes a huge section about writing C applications, so it’s a win either way, because I imagine that if I did write a complete C application, it would use MySQL in some way.

If you’re ever in Sydney, I would highly suggest going to the Basement Books store. It has amazing books… at amazing prices. Also, if you want to learn everything there is to know about MySQL, get this book. I only discovered partitioning for MySQL databases a few months ago when reading an article at work, and have loved the idea ever since. Having a nice section inside this book with practical examples of partitioning has started to get my mind ticking.
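
To give a flavour of why partitioning got my mind ticking, here is a minimal sketch of range partitioning; the database, table and column names are all made up for illustration:

# Split a log table into per-year partitions (MySQL 5.1+).
mysql -u root -p logs <<'SQL'
CREATE TABLE hits (
    id       INT NOT NULL,
    hit_date DATE NOT NULL
)
PARTITION BY RANGE (YEAR(hit_date)) (
    PARTITION p2007 VALUES LESS THAN (2008),
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
SQL

A query that filters on hit_date then only has to touch the partitions that match, which is where the speed comes from.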

Posted by & filed under Linux.

If you have never used NoMachine before, it is a fantastic technology that gives you a terminal server for X on Linux, similar to XenApp for Windows. It is a very powerful application. I used to install FreeNX back in the day when it was first released, but I have since learnt that it is just much easier to install the free version of NoMachine, especially for my own personal use.

It only took me about one minute, but I thought I might just write down the quick and dirty hack that I just did to get NoMachine’s free terminal server package (which allows two clients to connect at a time…) running on Linux.

# sudo su -
# mkdir -p /tmp/nx && cd /tmp/nx
# wget http://64.34.161.181/download/3.4.0/Linux/FE/nxserver-3.4.0-8.x86_64.tar.gz
# wget http://64.34.161.181/download/3.4.0/Linux/nxclient-3.4.0-5.x86_64.tar.gz
# wget http://64.34.161.181/download/3.4.0/Linux/nxnode-3.4.0-6.x86_64.tar.gz
# echo "the above URLs may no longer be correct at the time of you reading this, please check http://www.nomachine.com/download-package.php?Prod_Id=1351"
# cd /usr
# tar -xvf /tmp/nx/nxclient-3.4.0-5.x86_64.tar.gz
# tar -xvf /tmp/nx/nxnode-3.4.0-6.x86_64.tar.gz
# tar -xvf /tmp/nx/nxserver-3.4.0-8.x86_64.tar.gz
# ln -s /etc/rc.d /etc/init.d
# /usr/NX/scripts/setup/nxnode --install redhat
# /usr/NX/scripts/setup/nxserver --install redhat
# rm /etc/init.d

All the errors that the installer comes up with can be safely ignored.
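
To sanity-check the install afterwards, something like this should do; --status is how I remember the NX 3.x server reporting on itself, so double-check against your version:

# Should report that the NX server is installed and running.
/usr/NX/bin/nxserver --status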

Posted by & filed under Experimental, Linux.

I would like to introduce you tonight to an x-part series (I don’t know how many parts there are at the moment; we will see as time progresses) entitled “How to build a www cluster in x days”.

In this series, we will be looking at a wide variety of techniques that can be used to create a cluster for hosting your next large project on.

When I talk about clusters, I like to split them into two categories: dumb and intelligent clusters. A dumb cluster is a group of machines that essentially know very little about their surroundings. They do not take into account how many machines are currently running, where the machines are located or how the data is to be delivered to its destination. Put simply: you set it up, and it works. No complex heartbeat configurations (except when necessary) and nothing fancy.

Intelligent clusters, on the other hand, know a lot about where the other machines on the network are. They know where the visitor to the website is coming from, and what the fastest way is to deliver content to that user.

Put simply: a “dumb cluster” is a group of machines that are set up to deliver content. If one machine goes down, the load balancer will detect this and take that node out of the server pool until it is reported back up again. An “intelligent cluster” knows where the user is visiting from. It can take into consideration the shortest internet route to its desired destination, or which route should be taken to get the best available speeds.

In this series, we will be looking more at the dumb clustering side of things. Dumb clusters are easy to set up, and, well, if you really want an intelligent cluster, hire me :] So let’s look at what we are going to discuss in this series.
  1. Setting up our first webserver
  2. Setting up our shared filesystem
  3. Setting up our http/https load balancer
  4. Setting up our mail server
  5. Setting up our backup system
  6. Setting up our mysql cluster
  7. Setting up our shell server
  8. Setting up our TCP/IP load balancer
For the purpose of writing this series, I will be using Xen installed on my machine, with CentOS 5.4 as the base operating system. At any one time, I will have up to five domains running under Xen. I will be testing loads and response times using my laptop.
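
For the curious, the test rig will look roughly like this; the guest names are placeholders I have made up for now:

# Boot the guests that will make up the cluster (web1/web2 serve pages,
# fs1 holds the shared filesystem, db1 runs MySQL, lb1 load balances).
# xm is the Xen 3.x toolstack that ships with CentOS 5.
for vm in web1 web2 fs1 db1 lb1; do
    xm create /etc/xen/$vm.cfg
done
xm list    # confirm all five domains came up
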
Stay tuned, the first part of the series comes on Tuesday.

Posted by & filed under Humor.

I will admit that I do not usually discuss anything political; however, I think this is worth a post. A colleague at work was going through the process of installing Dell IT Assistant, and the US Government wanted immediate answers to a few questions. Here is the screenshot:

[Screenshot: “nonuclear”]

Posted by & filed under Computers, Linux.

What does a computer nerd do when he is at work, trying to use one keyboard and mouse across two machines, and it just does not work? Well, he fixes it, of course!

For the last few weeks, since switching my work desktop from KDE to Gnome, I have had this seriously pressing issue where, every now and then, when I moved my mouse over to the Linux machine, synergy would crash on that screen with a nice assertion error.

INFO: CScreen.cpp,99: entering screen synergyc: ../../src/xcb_io.c:243: process_responses: Assertion `(((long) (dpy->last_request_read) - (long) (dpy->request)) <= 0)' failed.

Well, I looked through the code for a few hours and finally tracked down the issue. The synergyc application was making calls to the X server from multiple threads, while Xlib did not know that it was being used from a multi-threaded application.

Basically, after hunting down where the issue was, fixing the problem was as simple as adding a call to the XInitThreads() function on line 100 of lib/platform/CXWindowsScreen.cpp.
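
If you want to apply the same band-aid yourself, it amounts to something like the following; the line number is the one from my copy of the source, and the build commands are the stock autotools steps rather than anything synergy-specific:

# Insert XInitThreads() at line 100, so Xlib knows to lock its internal
# structures when multiple threads talk to the X server.
sed -i '100i\    XInitThreads();' lib/platform/CXWindowsScreen.cpp
# Rebuild and reinstall the patched synergy.
./configure && make && sudo make install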

Rebuilt my package and boom, a perfect synergy.

Posted by & filed under ShareSource.

ShareSource has a new look, and it’s getting released in seven days. ShareSource 3. SS3. It looks good, it feels perfect, and it works excellently. There are not really any new features, and the database schema is mostly the same (except for ten new tables), but all the little things that were never quite there are now there.

Administrators of the website will now have even easier control over news features, and project administrators can now unlock files and modify their contents. Files are now stored on two machines in two different locations, as well!

It gets better! ShareSource will have the ability to create Xen containers, so you can choose virtually any operating system on the planet (except for Apple OS X) and compile, run and test applications on it.

Developers and users alike will enjoy the new look ShareSource has been christened with.

Posted by & filed under Random Thoughts.

Between playing with my new computer and busily working away on two personal projects, I have not really updated my blog. In fact, my blog has been getting no love at all. I thought I might just spend a few minutes at this terrible hour of the morning to update people on what is going on.

  • ShareSource (Open source code forge)
    Yes, ShareSource is still being maintained. It has not died. I have almost finished writing the last dabs of the Xen software for ShareSource, as well as rewriting the template engine so it is all consistent. (Did you know that ShareSource has three ways of rendering pages?). Digital Pacific (the crew that I work for) have donated a Xen server for a compile farm for ShareSource, which is very neat.
  • MyBanco (Open source banking software)
    Sitting in the Mercurial repository, I have a pile of code for managing loans for users who have an account with the bank. New loans can be requested by a user, and verified by the administrator(s). The money is either a) made out of reserves (i.e., pulled out of thin air… like real money) or b) pulled out of a reserve account (like a “perfect” monetary system).
  • Galium (Open source top level domain management software)
    In a few days, I will be pushing code that fixes a medium-severity issue with the adding of new records. In some cases, a user can enter particular inputs that crash BIND.

My new computer is going well. Running Compiz and Xen together allows me to have CentOS on one side of the cube, Arch Linux on the “main” side, and Windows on the other. Now there is love. My two new wide screen monitors are also pure joy.

My “top secret” project is coming along well, too. One part of the project is nearing release, which should give quite a good idea of the whole solution.