FreeBSD Server – Ports tree maintenance

On the new FreeBSD server I am dumping the old CVSup/Portupgrade combo and going for new and improved tools for ports management.

To keep the ports tree up to date I will use portsnap. Why? You can read about its advantages on the portsnap web page. It’s already in the base system, so there is nothing to install. Try:

# portsnap

and it should print the usual help options. So the first thing is to fetch and extract the whole ports tree. This will overwrite any custom files you have in the ports tree /usr/ports/, like patches. Distfiles will survive, and so will the config options (as they live in /var/db/ports/).

# portsnap fetch extract

Now we have a fully updated ports tree, plus portsnap’s own database in /var/db/portsnap/.

From now on simply update the ports tree with

# portsnap fetch update

Much simpler than the cvsup command and config file… all that is left is to put this on auto-pilot. For that there is the cron command, which is nothing more than a fetch preceded by a sleep of a random amount of time, so that all the clients don’t slashdot the portsnap servers. You can test it with:

# portsnap cron update

Both the Handbook and the man page explicitly warn against automatic ports tree updating, as one can be installing a port at the same time and “cause major problems” (meaning you could end up with corrupted or non-working software installed), so you should only update the index:

# portsnap -I cron update

Personally, I will live on the edge and take the risk of actually updating the ports tree, not just the index, as the risk of forgetting to update the ports tree before installing packages and thus installing deprecated/legacy software is much higher than that of installing something at the same time the tree update is running.

The Handbook uses cron for this, but I use periodic and make it part of the normal maintenance work (automatic email reports, logging and log rotation all for free).
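
For reference, the Handbook’s cron route boils down to a single /etc/crontab entry, something like this (the hour is arbitrary, and drop the -I if, like me, you want the full tree update):

0 3 * * * root /usr/sbin/portsnap -I cron update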

Create the local daily directory if it doesn’t exist:

# mkdir -p /usr/local/etc/periodic/daily/

then add a really simple sh script with full paths, save it and set execution permissions:

#!/bin/sh
#
# Updates the Ports tree
#
/usr/sbin/portsnap cron update
rc=0
exit $rc
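
Assuming the script was saved as /usr/local/etc/periodic/daily/100.portsnap (the name is up to you; periodic runs the scripts in alphabetical order):

# chmod +x /usr/local/etc/periodic/daily/100.portsnap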

For the report about old/obsolete packages I go again with periodic… but this time it’s all already there, done for us (the FreeBSD users); just add this to /etc/periodic.conf:

weekly_status_pkg_enable="YES"
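
Both the daily update and the weekly package report can be tested right away, without waiting for the scheduled runs:

# periodic daily
# periodic weekly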

The ports tree is now managed; the next post will be about the packages themselves.

Sources:
http://www.freebsd.org/cgi/man.cgi?query=periodic
http://www.freebsd.org/cgi/man.cgi?query=portsnap
http://www.freebsd.org/doc/handbook/portsnap.html

Making a donation

This is a post that I have been cooking in my head for quite some time, and it is a deviation from my own (Roman Catholic/Mediterranean/whatever) upbringing…

Long story short: since last year I have a personal commitment (there were several reasons in the decision-making process) to donate, each year, a portion of my own income to some institution devoted to a noble cause. So, last year I selected a well-known institution as the recipient of the donation, contacted them and wired 1000 euro. A couple of weeks later I received a receipt by snail mail, and that was it, nothing else. It left a bittersweet taste…

One may ask: what did you want? A full gospel band chanting “Thank you” and cheering you on? Well, no. But a template thank-you letter would be nice, and even nicer would be some kind of invitation to visit their work, not as an inspector (I believe that 99% of them get a bigger bang for the buck than the public institutions) but to increase the involvement with them and encourage further donations.

Another very relevant point that I want to focus on in this post is the hypocritical view about the disclosure (or indeed the non-disclosure) of donors in our culture. When you give you shouldn’t say so; when you receive you also shouldn’t say so. “Be kind and generous and expect nothing in return”, I heard in church. For starters, and making this “rule” even more hypocritical, it only applies to small donations: if you give big enough it’s okay to put your name and bust on some building, even if, proportionally, the donor of the smallest sum made the bigger effort. This stupid secrecy has another effect as well: a lack of peer pressure and accountability. Because no one has anything to do with anyone else’s actions, the individuals who don’t give have a shield and the ones who do give are not accountable for their actions… ultimately no one is (both the doers and the non-doers).

I couldn’t care less about these social conventions (or else I wouldn’t publish this post). This year I am selecting another institution, and when it’s selected and the donation made, I will inform the world in a follow-up to this post, hoping that there will be followers.

Linux – upgrade to Kubuntu 11.04 “Natty Narwhal”

Just finished a laptop upgrade of Kubuntu (the KDE flavor of Canonical’s Linux distribution) from 10.10 to the latest 11.04 “Natty Narwhal”. As usual my system was pretty messed up, and it took a couple of hours to get everything back into working order. Anyway, the pain pays off and I enjoy working with it more and more; some of the key benefits are just awesome:

  • It’s free
  • It’s fast
  • The install/uninstall/upgrade software system is perfect, with thousands of free apps just a click away
  • The really beautiful KDE user interface

    It’s funny and kind of sad to walk into a room with 20 people, 15 of whom are running the “Think Different” computers, and then boot up your laptop with Kubuntu…

  • The full freedom to customize and configure your OWN machine
  • The excellent FUSE “mount everything you can think of” subsystem

There are still some problems that make it hard for non-geek users to embrace Linux on their desktops and laptops, even with the user-friendly distributions (like Kubuntu). At least for me the major upgrades are nothing short of chaotic, some driver support is still flaky and buggy (3D on ATI being by far the worst for me), and every now and then there are regressions, with stuff unexpectedly breaking.

But the Linux community should be very proud, because the system has come such a long way. I still remember when just installing it was some kind of sorcery, not to speak of starting a graphical window system (xorg.conf test #383), or, more recently, the wifi card/network/authentication configuration nightmare. All of this is now gone; for instance, last week I was able to configure VPN access and get 3G Internet via a USB dongle using just the network manager GUI, no black screens, no bash, no googling… simply amazing… (take that, Windows) and then mount remote filesystems with only one command (take that, Apple).
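
That one-command remote mount is FUSE territory; one way to do it is sshfs (the host and paths here are made up for illustration):

$ sshfs user@remotehost:/home/user /mnt/remote

and unmounting is just as simple:

$ fusermount -u /mnt/remote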

So, is Linux perfect? Of course not.
Would I install it on my friends’ computers? Maybe.
Am I willing to pay for Windows or OSX with Linux as a free alternative? Of course not.

URL/File decoupling with Apache mod_rewrite and PHP

In the beginning God created the heavens and the earth, then there was the Apache HTTP Server, and afterwards PHP. Back in those ages I was split between CGIs and mod_php, so the usual URL was something like http://www.mydomain.com/cgi-bin/script.pl?do=this or http://www.mydomain.com/script.php?do=that. Each URL was simply a direct link to a file (script.pl or script.php) in the filesystem… and it was ugly as hell…

I was not happy with this at all… and besides, just to make my days more miserable, some websites had perfect URLs, like:

http://www.mydomain.com/product/my_product
http://www.mydomain.com/product/my_other_product

so… I put myself to work to emulate this. First I tried to use a directory scheme and take advantage of the directory index feature, where the index file is automatically served by the server. E.g.:

http://www.mydomain.com/product/my_product/index.html
http://www.mydomain.com/product/my_other_product/index.html

but you could actually access them the way I wanted:

http://www.mydomain.com/product/my_product
http://www.mydomain.com/product/my_other_product
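
(For completeness: this works thanks to Apache’s DirectoryIndex directive, which picks the file to serve when a directory itself is requested; index.html is part of the stock setting.)

DirectoryIndex index.html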

This was of course a nightmare to maintain, not even to speak of database-driven websites and all the related template problems… nightmare… So I moved along to PHP’s auto-prepend feature: the idea was to catch the user request with a PHP file (one that is prepended to each request), do all the parsing and display, and then kill the normal page processing. Better, but not quite there yet…
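
For the record, a minimal sketch of that approach, with a made-up file path: the directive goes in the vhost or .htaccess (assuming mod_php),

php_value auto_prepend_file /var/www/prepend.php

and the prepended file catches the request and stops the normal flow:

<?php
// prepend.php -- runs before every requested PHP page (path above is made up)
$uri = $_SERVER['REQUEST_URI'];
// ... parse $uri, render the page ...
exit; // kill the normal page processing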

Then I discovered Apache mod_rewrite, and everything made sense: all things, the universe, the meaning of life, even Flash programming (well, maybe not Flash programming). With some simple rules I was able to catch the user requests, filter the ones I wanted to a central file (which I call handler.php), parse the request there and send it to whatever file/module I want.

RewriteEngine on
RewriteCond %{REQUEST_URI} !\.(php|xml)$ [NC]
RewriteRule \.[a-z0-9]{1,}$ - [NC,L]
RewriteRule .* %{DOCUMENT_ROOT}/handler.php [L]

What we are doing here is quite simple but at the same time powerful. With these rules, all requests with a file extension (.gif, .png, .js, .css, etc.), usually static content, are served directly as normal (line 3), except for requests with a .php or .xml extension, which are excluded by the condition (line 2) and, like all extension-less requests, fall through to “handler.php” (line 4). If we want other extensions to go through dynamic parsing, e.g. a server-side generated image, we just add them to line 2.
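
For example, to also have .png requests answered dynamically (say, for server-side generated images), line 2 would become:

RewriteCond %{REQUEST_URI} !\.(php|xml|png)$ [NC]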

Then a stripped-down handler.php file looks something like this:

<?php
// configurations
require('config/vars.inc.php');
require('config/bd.inc.php');

// session start
session_name(SESSION_NAME);
session_start();

// output buffering
ob_start();
  
// get script parts
$uri = $_SERVER['REQUEST_URI'];
$tmp = explode ("?", $uri);
if (! isset($tmp[0])) $tmp[0] = '/';
$script_parts = explode ("/", $tmp[0]);

// clean empty keys
$tmp = array();
foreach($script_parts as $key=>$row)
  if ($row != '') $tmp[] = $row;
$script_parts = $tmp;

// default
if (! isset($script_parts[0])) 
  $script_parts[0] = 'hp';

// Send to execution
switch ($script_parts[0]) {
  case 'hp':
    require($_SERVER['DOCUMENT_ROOT'].'/homepage.php');
    break;
		
  case 'products':
    if (isset($script_parts[1])){
      require($_SERVER['DOCUMENT_ROOT'].'/modules/products/detail.php');
      break;
    }	
		
    require($_SERVER['DOCUMENT_ROOT'].'/modules/products/cat.php');
    break;
		
  case 'php':
    phpinfo();
    break;
		
  default:  // 404 error (not found)
    header("HTTP/1.0 404 Not Found");
    require($_SERVER['DOCUMENT_ROOT'].'/templates/error404.php');	
}

Simple: include all the global stuff (sessions, database links, etc.), get the URL request, parse it and send it to whatever file for processing. But as always you can and should build from here: change it to your needs and/or style, put in your secret ingredient, make it better for yourself.
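
To make the flow concrete: a request for http://www.mydomain.com/products/my_product leaves $script_parts as array('products', 'my_product') and is routed to modules/products/detail.php, which could be as simple as this (a sketch, since the actual module files are not shown in this post):

<?php
// modules/products/detail.php -- hypothetical minimal module.
// handler.php did the require, so $script_parts is already in scope:
$slug = $script_parts[1]; // 'my_product'
echo 'Product requested: ' . htmlspecialchars($slug);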

Some (many) years ago I would have kicked some ass to read this post and get this info on a silver platter.

Blue pill, red pill

After this, there is no turning back. You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland and I show you how deep the rabbit-hole goes.

So, let’s see how deep this goes.

I wanna wake up! Tech support! It’s a nightmare! Tech support! Tech support!