Posts filed under 'sysadmin'

Admin: Linux file server performance boost (ext4 version)

In the previous article, I showed how to improve the performance of an existing file server by tweaking ext3 and mount settings.
We can also take advantage of the availability of the now stable ext4 file system to further improve our file server performance.

Some distributions, in particular RedHat/CentOS 5, do not allow you to select ext4 as a formatting option during setup of the machine, so you will initially have to use ext3 as the file system (preferably on top of LVM for easy extensibility).

A small digression on partitioning

Remember to create separate partitions for your file data: do not mix OS files with data files, they should live on different partitions. In an enterprise environment, a minimal partition configuration for a file server could look like:

Hardware:

  • 2x 160GB HDD for the OS
  • 4x 2TB HDD for the data

The 160GB drives could be used as such:

  • 200MB RAID1 partition over the 2 drives for /boot
  • 2GB RAID1 partition over the 2 drives for swap
  • all remaining space as a RAID1 partition over the 2 drives for /
    Note though that it is generally recommended to create additional partitions to further contain /tmp and /var.

The 2TB drives could be used like this:

  • all space as RAID6 over all drives (gives us 4TB of usable space) for /data
  • alternatively, all space as RAID5 over all drives (gives us 6TB of usable space)

The point of using RAID6 is that it gives better redundancy than RAID5, so you can safely add more drives later without increasing the risk of failure of the whole array (which is not true of RAID5).
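
As an illustration, the RAID6 array could be assembled with mdadm and then handed over to LVM; this is only a sketch, assuming the four data drives appear as sdc to sdf (the device names and the volume group/logical volume names, which match the ones used further down, are assumptions):

    # mdadm --create /dev/md5 --level=6 --raid-devices=4 /dev/sd[cdef]1
    # pvcreate /dev/md5
    # vgcreate VolGroup01 /dev/md5
    # lvcreate -l 100%FREE -n LogVol00 VolGroup01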

Moving to ext4

If you are upgrading an existing system, backup first!

Let’s say that your /data partition is an LVM volume under /dev/VolGroup01/LogVol00. First, make sure we have the ext4 tools installed on our machine, then unmount the partition to upgrade:

    # yum -y install e4fsprogs
    # umount /dev/VolGroup01/LogVol00

For a new system, create a large partition on the disk, then format the volume (this will destroy all data on that volume!).

    # mkfs -t ext4 -E stride=32 -m 0 -O extents,uninit_bg,dir_index,filetype,has_journal,sparse_super /dev/VolGroup01/LogVol00
    # tune4fs -o journal_data_writeback /dev/VolGroup01/LogVol00

Note: on a RAID array, use the appropriate -E stride,stripe-width options. The stride is the RAID chunk size divided by the filesystem block size, and the stripe-width is the stride multiplied by the number of data-bearing disks. For instance, on a RAID5 array using 4 disks (3 of them carrying data), 64KB chunks and 4KB blocks, it could be: -E stride=16,stripe-width=48
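
Applying the same arithmetic to the 4-disk RAID6 /data array suggested earlier (2 data-bearing disks once the 2 parity disks are accounted for), and assuming the same 64KB chunks and 4KB blocks, the format command would look like this sketch:

    # stride = 64KB / 4KB = 16 ; stripe-width = 16 x 2 data disks = 32
    # mkfs -t ext4 -E stride=16,stripe-width=32 -m 0 -O extents,uninit_bg,dir_index,filetype,has_journal,sparse_super /dev/VolGroup01/LogVol00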

For an existing system, upgrading from ext3 to ext4 without damaging existing data is barely more complicated:

    # fsck.ext3 -pf  /dev/VolGroup01/LogVol00
    # tune2fs -O extents,uninit_bg,dir_index,filetype,has_journal,sparse_super /dev/VolGroup01/LogVol00
    # fsck.ext4 -yfD /dev/VolGroup01/LogVol00

We can optionally give our volume a new label to easily reference it later:

    # e4label /dev/VolGroup01/LogVol00 data

Then we need to persist the mount options in /etc/fstab:

    /dev/VolGroup01/LogVol00    /data    ext4    noatime,data=writeback,barrier=0,nobh,errors=remount-ro    0 0

And now we can remount our volume:

    # mount /data

If you upgraded an existing filesystem from ext3, you may want to run the following to ensure that existing files and directories are converted to the extent format:

    # find /data -xdev -type f -print0 | xargs -0 chattr +e
    # find /data -xdev -type d -print0 | xargs -0 chattr +e
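
To spot-check the conversion, lsattr should now list the e (extents) attribute on migrated files; the exact flag layout varies with the e2fsprogs version and the path here is only an example:

    # lsattr /data/somefile
    -------------e- /data/somefile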

Important notes

The mount options we use are somewhat risky if your system is not adequately protected by a UPS.
If your system crashes due to a power failure, you are more likely to lose data using these options than using the safer defaults.
At any rate, you must have a proper backup strategy in place to safeguard data, regardless of what could damage them (hardware failure or user error).

  • The barrier=0 option disables write barriers, which enforce proper on-disk ordering of journal commits.
  • The data=writeback and nobh options go hand in hand: they allow data blocks to be written without waiting for the corresponding journal commits, so data may reach the disk before or after its metadata.
  • The noatime option ensures that the access time is not updated when we’re reading data, as that is a big performance killer (this one is safe to use in any case).


1 comment October 3rd, 2010

Admin: Linux file server performance boost (ext3 version)

Using Linux for an office file server is a no-brainer: it’s cheap, you don’t have to worry about unmanageable license costs and it just works.

The default settings of most Linux distributions are however not optimal: they are meant to be as standards-compliant and as general as possible so that everything works well enough regardless of what you do.

For a file server hosting large numbers of files, these default settings can become a liability: everything slows down as the number of files creeps up, making your once-snappy file server as fast as a sleepy sloth.

There are a few things that we can do to ensure we get the most out of our server.

Checking our configuration

First, a couple of commands that will help us investigate the current state of our configuration.

  • df will give us a quick overview of the filesystem:

    df -T
    Filesystem    Type   1K-blocks      Used Available Use% Mounted on
    /dev/md2      ext3    19840804   4616780  14199888  25% /
    tmpfs        tmpfs      257580         0    257580   0% /dev/shm
    /dev/md0      ext3      194366     17718    166613  10% /boot
    /dev/md4      ext3     9920532   5409936   3998532  58% /var
    /dev/md3      ext3      194366      7514    176817   5% /tmp
    /dev/md5      ext3    46980272  31061676  13493592  70% /data
    
  • tune2fs will help us configure the options for each ext3 partition. If we want to check the current configuration of a given partition, say the current options for our /data mount:

    # tune2fs -l /dev/md5
    

    If I were using LVM as a volume manager, I would type something like:

    # tune2fs -l /dev/VolGroup00/LogVol02
    

    This would give lots of information about the partition:

    tune2fs 1.40.2 (12-Jul-2007)
    Filesystem volume name:   <none>
    Last mounted on:          <not available>
    Filesystem UUID:          d6850da8-af6f-4c76-98a5-caac2e10ba30
    Filesystem magic number:  0xEF53
    Filesystem revision #:    1 (dynamic)
    Filesystem features:      has_journal resize_inode dir_index filetype 
                              needs_recovery sparse_super large_file
    Filesystem flags:         signed directory hash
    Default mount options:    user_xattr acl
    Filesystem state:         clean
    Errors behavior:          Continue
    ....
    

    The interesting options are listed under Filesystem features and Default mount options. For instance, here we know that the partition is using a journal and that it is using the dir_index capability, already a performance booster.

  • cat /proc/mounts is useful to know the mount options for our filesystems (only some interesting ones are listed here):

    rootfs / rootfs rw 0 0
    /dev/root / ext3 rw,data=ordered 0 0
    /dev/md0 /boot ext3 rw,data=ordered 0 0
    /dev/md4 /var ext3 rw,data=ordered 0 0
    /dev/md3 /tmp ext3 rw,data=ordered 0 0
    /dev/md5 /data ext3 rw,data=ordered 0 0
    none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
    /dev/md4 /var/named/chroot/var/run/dbus ext3 rw,data=ordered 0 0
    

    The data=ordered mount parameter tells us the journaling configuration for the partition.

Journaling

So what is journaling?
It’s one of the great improvements of ext3: a journal is a special log on the disk that keeps track of changes about to be made. It ensures that, in case of failure, the filesystem can quickly recover without loss of information.

There are 3 settings for the journaling feature:

  • data=journal the most secure but also slowest option: all data and metadata are first written to the journal before being committed to their final location, and the whole operation needs to complete before any other can proceed. It’s sort of going to the bank for a deposit, filling in the paperwork and making sure the teller puts the money in the vault before you leave.
  • data=ordered is usually the default compromise: you fill in the paperwork and remind the teller to put the money in the vault asap.
  • data=writeback is the fastest, but you can’t be absolutely sure that things will be done in time to prevent any loss if a problem occurs soon after you’ve asked for the data to be written.

In normal circumstances all 3 end up the same way: data is eventually written to disk and everything is fine.
Now if there is a crash just as the data was written, only option journal would guarantee that everything is safe. Option ordered is fairly safe too, because the money should be in the vault soon after you left; most systems use this option by default.

If you want to boost your performance and use writeback you should make sure that:

  • you have a good power-supply backup to minimise the risk of power failure
  • you have a good data backup strategy
  • you’re ok with the risk of losing the data that was written right before the crash.

To change the journaling option you simply use tune2fs with the appropriate option:

    # tune2fs -o journal_data_writeback /dev/md5

Mount options

Now that we’ve changed the available options for our partition, we need to tell the system to use them.
Edit /etc/fstab and add data=writeback to the option columns:

    /dev/md5     /data    ext3    defaults,data=writeback   1 2

Next time our partition is mounted, it will use the new options. For that we can either reboot or remount the partition:

    # mount -o remount /data
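
A quick look at /proc/mounts (introduced earlier) should confirm that the new mode is active; the output would be along these lines:

    # grep /data /proc/mounts
    /dev/md5 /data ext3 rw,data=writeback 0 0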

noatime option

There is another option that can have a very dramatic effect on performance, probably even more than the journaling options above.

By default, whenever you read a file the kernel will update its last access time, meaning that we end up with a write operation for every read!
Since this is required for POSIX compliance, almost all Linux distributions leave this setting alone by default.
For a file server, this can have drastic consequences on performance.
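
The effect is easy to observe with stat on any file (the path below is just an example): reading the file updates its access time, and that update is a write to disk.

    # stat -c '%x' /data/somefile    # access time before the read
    # cat /data/somefile > /dev/null
    # stat -c '%x' /data/somefile    # access time updated by the read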

To disable this time-consuming and not useful feature (for a file server), simply add the noatime option to the fstab mount options:

    /dev/md5     /data    ext3    defaults,noatime,data=writeback   1 2

Note that some software relies on access times being updated, mail clients such as mutt for instance. If you properly keep your company data in a dedicated partition, you can enable these mount options only for that partition and keep the default options for the root filesystem.

Dealing with errors in fstab

After doing the above on one of the servers, I realized that I made a typo when editing /etc/fstab.
This resulted in the root filesystem being mounted read-only, making fstab impossible to edit…

To make matters worse, this machine was a few thousand miles away and could not be accessed physically….

Remounting the root filesystem resulted in errors:

    # mount -o remount,rw /
    mount: / not mounted already, or bad option

After much trial and rebooting, this worked (you need to specify all the mount options to avoid the wrong defaults being read from /etc/mtab):

    # mount  -o rw,remount,defaults /dev/md2 /

After that, I could edit /etc/fstab and correct the typo…

Conclusions

How much these options will improve performance really depends on how your data is used: the improvements should be perceptible if your directories are filled with large amounts of small files.
Deletion should also be faster.

1 comment July 11th, 2010

A story about exceptional service

Recently I found myself constrained by the puny 200GB of my Macbook Pro and decided to replace it with a 500GB Seagate drive (a fast 7200 rpm one).
The Macbook Pro has no easy access for the drive so you have to resort to dismantling the case to access it. This put me off replacing the drive because I would probably be voiding the warranty and was running the risk of damaging this expensive piece of equipment.

I’ve been filling the drive with pictures from my recent camera purchase and couldn’t put it off any longer, so I bought the new drive and went online to find a good tutorial on how to crack open the Macbook Pro case.

After a few searches, I noticed that many people were referring to the iFixit.com website. It was very easy to find the tutorial I was looking for, I didn’t have to register, and each step was made very clear and simple.
It took no time to open the case and replace the drive.
I was very happy with that find.

Now, that’s not the end of the story.

A couple of days before I replaced the drive the left fan of the laptop suddenly became noisy. This would happen a few times a day, at random, and would last 10-20 minutes.
My only solution to get this repaired was to get to the local Apple service shop. Even though I knew exactly which part number was to be replaced, they still wanted me to:

  • go across town to visit them so they could see for themselves what the problem was: annoying because the problem was intermittent, so I might have gone for nothing.
  • wait for the part to arrive a few days later.
  • go back to leave the laptop
  • go again to collect the repaired laptop the next day or so.

So all in all: about 6 hours spent travelling back and forth, no laptop for a couple of days, plus the risk that some indiscreet technician starts looking through my personal stuff.

Instead, I went back to the iFixit website:

  • identified my machine
  • found out the list of spare parts available from their store
  • added the fan to my cart
  • paid for it.
  • found a guide that showed how to replace the part.

That took me all of 10 minutes; I placed my order on Thursday and the next Monday I received the part … halfway across the globe!

I also got a survey request from iFixit and left some comments, from which I got back two nice detailed email follow-ups, one from the CEO saying they were implementing my remarks as part of their site improvement efforts.

Well, I thought I would share this story. It’s not that often that you get excited by an online vendor that not only does its job well but goes beyond expectations.

Add comment August 26th, 2009

Sysadmin: SQL server performance madness

I’ve just lost 2 days going completely bananas over a performance issue that I could not explain.

I’ve got this Dell R300 rack server that runs Windows Server 2008 that I dedicate to running IIS and SQL Server 2008, mostly for development purposes.

Dell PowerEdge R300 Rack servers

In my previous blog entry, I was trying some benchmark to compare the performance of Access and SQL Server using INT and GUID and getting some strange results.

Here are the results I was getting from inserting large amounts of data in SQL Server:

Machine      Operating System          Test without Transaction    Test with Transaction
MacbookPro   Windows Server 2008 x64   324 ms                      22 ms
Desktop      Windows XP                172 ms                      47 ms
Server       Windows Server 2008 x64   8635 ms!!                   27 ms

On the server, not using transactions makes the query run for more than 8 seconds, at least an order of magnitude slower than it should be!

I initially thought there was something wrong with my server setup but since I couldn’t find anything, I just spent the day re-installing the OS and SQL Server, applying all patches and updates. The server was basically brand new: nothing else on the box, no other services, all the power left for SQL Server…

Despair

When I saw the results for the first time after spending my Easter Sunday rebuilding the machine, I felt dread and despair.
The gods were being unfair: it had to be a hardware issue, related to either memory or the hard disk. I couldn’t really understand why, but these were the only things I could see having such an impact on performance.

I started to look in the hardware settings:

Device Manager

And then I noticed this in the Policies tab of the Disk Device Properties:

DISK Device Properties

Just for the lulz of it, I ticked the box and closed the properties:

Enable advanced performance

And then tried my query again:

Machine      Operating System          Test without Transaction    Test with Transaction
Server       Windows Server 2008 x64   254 ms!!                    27 ms

A nearly 35-fold increase in performance!

Moral of the story

If you are getting strange and inconsistent performance results from SQL Server, make sure you check that Enable advanced performance option.
Even if you’re not getting obviously strange results, you may simply not be aware of the issue: some operations may just be much slower than they should be.

Before taking your machine apart and re-installing everything on it, check your hardware settings, there may be options made available by the manufacturer or the OS that you’re not aware of…

Lesson learnt.

Add comment April 12th, 2009

Sysadmin: Multiple ISP firewall – The setup

After suffering broadband trouble for the past 9 months, including interruptions that lasted a few days, I decided to get an additional line installed by a different ISP.
I could have bought one of these multi-WAN devices but decided against it for a couple of reasons: I like a challenge and I wanted to achieve a particular setup that I wasn’t sure could be answered by off-the-shelf products (for a reasonable price that is).

This long article is fairly detailed but if your setup is similar it should be enough to get you going quickly.

The basic setup

Without further ado, this is the network configuration:

Network Diagram

Notable things

We have 2 broadband connections:

  • CYBR, a standard DSL line with a fixed IP 111.99.88.77 allocated through PPPoE.
  • HKBN, a standard 100Mbps line with a fixed IP 30.40.50.62.

The network is split into different zones:

  • the Internet zone, connected to our Firewall through interfaces eth0 (ppp0) and eth1.
  • a Firewall zone, delimited by the firewall system itself
  • a DMZ zone connected through interface eth2 for the servers we want to make visible from the Internet.
    The DMZ has its own private subnet delimited by 192.168.254.0/255.255.255.0.
  • a LAN zone connected through interface eth3 so local computers can access the Internet and be protected from it.
    The LAN has its own private subnet delimited by 192.168.0.0/255.255.255.0.

Objectives

What we want from our setup:

  1. our firewall protects our DMZ and LAN from unwanted access.
  2. our win server can host websites or other services.
  3. our linux server can handle receiving and sending email or other services.
  4. our firewall can handle incoming traffic from either ISP.
  5. our firewall can load-balance local outgoing traffic across both ISPs.
  6. If one line fails, incoming traffic switches to the other line.
  7. If one line fails, outgoing traffic switches to the other line.
  8. Eventually, we want both the linux and win servers to be able to host different websites and we want the firewall to send incoming requests to the right server.

In this first article, I’ll present the setup for items 1-5.
The remaining topics will be the subject of subsequent articles of their own.

Technologies

The firewall is our primary subject. What is being discussed here is pretty much distribution-independent and should work on all flavours of Linux.

OS on the firewall system

I chose CentOS for the firewall.
Since CentOS is an almost byte-for-byte identical copy of RedHat Enterprise Linux, all the configuration shown here will be identical on RedHat and its derivatives such as Fedora.

Firewall software, first try

When my firewall needs are simpler, I use the Stronger IP Firewall Ruleset from the Linux IP Masquerade HOWTO.
I started to modify the script to adapt it to my new multi-ISP setup but things got complicated once I needed to debug routing tables.
I got it 80% of the way, but tracing network connections and packet routing is complicated and time-consuming.
After a couple of days of peering into log files and wireshark capture screens, I gave up on manual configuration and decided to go with something else.

Firewall software, final

The product I chose in the end is shorewall: a very flexible firewall system that creates the necessary iptables rules and configures most of the routing needed to properly handle complex network setups.
Shorewall is Open Source, very stable, has been out for a long time, is actively maintained and has lots of excellent documentation and examples.

Things to know

Before we get into the meat of the article, you should brush up on the following topics:

  • You have some knowledge of Linux system administration: you know how to configure network connections, how to enable/disable/stop/start services, and how to edit config files.
  • Networking: you should know what a netmask is, what a gateway is, what a subnet is and have a passing understanding of IP classes, IP notation, what ports are for, what’s the difference between the tcp, udp, icmp protocols, what Dynamic Port Forwarding (DNAT) is, what Network Address Translation (NAT) is, what masquerading means.
  • Some basic understanding of DNS and local host name resolving (using host.conf and resolv.conf)
  • Some basic knowledge of what routing is for and how it works.
  • Some knowledge of how the linux kernel handles network packets (NetFilter, basics of iptables).

You don’t need to be a specialist in any of these areas but any knowledge helps.
I’m far from being well versed into Netfilter and routing, it’s not often that I have to deal directly with these topics, but brushing up on these topics helped.

Things to read

Shorewall has very extensive documentation. So much so that it can be a bit daunting, not knowing where to start.
I found the following documents helpful to get me started:

Installing shorewall

Go to the download site list [http://shorewall.net/download.htm#Sites] and download the most appropriate binary package for your distribution.

If you get RPMs for RedHat systems, you only need to install (rpm -ivh) the following packages:

    shorewall-4.X.X-X.noarch.rpm
    shorewall-perl-4.X.X-X.noarch.rpm 

If you install from source, only download, compile and install the common, doc and perl packages.

Preparing the system

For shorewall to properly handle both our firewall and packet routing needs, we need to make sure that the other parts of the system are not interfering with it.

Internet lines

Make sure that your multiple internet lines are properly working on their own!

Disable routing
  • Make sure that you don’t define a GATEWAY in the configuration of your network interfaces (in /etc/sysconfig/network-scripts/ifcfg-XXX).
  • If you use an (A)DSL connection, also set DEFROUTE=no in its ifcfg-XXX file (see the sketch after this list).
  • Remove the GATEWAY from the /etc/sysconfig/network file if there is one.
  • Edit your /etc/sysctl.conf file and set net.ipv4.conf.default.rp_filter = 0.
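
As an illustration, the interface configuration for the HKBN line could end up looking like this minimal sketch (the device name and address come from the diagram above; the /30 netmask and the exact set of directives are assumptions that depend on your installation):

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=30.40.50.62
    NETMASK=255.255.255.252
    ONBOOT=yes
    # note: no GATEWAY= line, the shorewall providers file handles routing
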
Disable firewall

Disable the current firewall, for instance using the system-config-securitylevel helper tool.
Be careful: if you’re directly connected to the Internet, you will be left without protection!
You can actually wait until shorewall is properly configured before disabling the existing firewall.

Shorewall configuration

Shorewall uses a set of simple configuration files, all located under /etc/shorewall/. For exact detail of each configuration files, have a look at the list of man pages.

Zones

zones is probably the simplest configuration file.
Details are in the zones man page. Here we just name the various zones we want our firewall to handle:

    ################################################################
    #ZONE   TYPE          OPTIONS       IN                  OUT
    #                                   OPTIONS             OPTIONS
    fw      firewall
    net     ipv4
    loc     ipv4
    dmz     ipv4

This just reflects our setup as highlighted in the diagram above.

Note that the fw zone is often referred to as the $FW variable instead in various configuration files.

Interfaces

Here we list all the network interfaces connected to our firewall and for which zone they apply.
Details in the interfaces man page.

    ################################################################
    #ZONE   INTERFACE       BROADCAST       OPTIONS
    net     ppp0            detect
    net     eth1            detect
    dmz     eth2            detect
    loc     eth3            detect

Note that for our net zone, we list the 2 interfaces connected to our ISPs.
If you’re using PPPoE to connect, don’t use the interface name eth0 but use ppp0 instead.

Policy

The policy file tells shorewall which default actions should be taken when traffic is moving from one zone to another.
These default actions are taken if no other special action was specified in other configuration files.
View the policy file as a list of default actions for the firewall.
Details about this configuration file are in its man page.

    ################################################################
    #SOURCE DEST    POLICY          LOG     LIMIT:      CONNLIMIT:
    #                               LEVEL   BURST       MASK
    net     net     DROP            info
    loc     net     ACCEPT
    dmz     net     ACCEPT
    loc     dmz     ACCEPT
    loc     $FW     ACCEPT
    dmz     $FW     ACCEPT
    $FW     net     ACCEPT
    dmz     loc     DROP            info
    net     all     DROP            info
    all     all     DROP            info

Traffic from one zone to another needs to be explicitly ACCEPTed, REJECTed or DROPped.
For instance, loc net ACCEPT means that we allow all traffic from our local LAN to the Internet, while net all DROP means we don’t allow incoming traffic from the internet to anyone (remember this is the default action, in most cases we will override this for specific types of traffic in the rules file).
When we set the default action to DROP, we can tell shorewall to keep a trace of the details in the /var/log/messages log.

Providers

The providers file is generally only used in a multi-ISP environment.
Here we define how we want to mark packets originating from one ISP with a unique ID so we can tell the kernel to route these packets to the right interface.
Without this, packets received on one interface could be routed out through the default gateway instead.
The details of this configuration file are explained in the providers man page.

    #############################################################################
    #NAME NUMBER MARK DUPLICATE INTERFACE GATEWAY      OPTIONS          COPY
    CYBR  1      0x1  main      ppp0      -            track,balance=1  eth2,eth3
    HKBN  2      0x2  main      eth1      30.40.50.61  track,balance=5  eth2,eth3

Note that the DUPLICATE column tells shorewall that it should make a copy of the main default routing table for this particular routing table (called CYBR or HKBN depending on which ISP we refer to).
Packets are marked with number 0x1 or 0x2 so we can distinguish them during their travel through the system.
For PPPoE connections, don’t specify a GATEWAY since it’s most likely that your ISP didn’t give you one.

The most interesting part of this file is the OPTIONS column: track means that we want the packets to be tracked as they travel through the system; balance tells the kernel that we want outgoing traffic to be spread over both interfaces.
Additionally, we want HKBN to receive roughly 5 times more traffic than CYBR (note that this has no effect on reply packets).

The COPY column ensures that the routing tables created for CYBR and HKBN are copied for each internal interface, so our eth2 and eth3 interfaces know how to route packets to the right ISP.
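
Once shorewall is up, you can verify that the per-provider routing tables exist and are populated using plain iproute2 commands (the table names match the NAME column above):

    # ip route show table CYBR
    # ip route show table HKBN
    # ip rule list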

Route Rules

For our purpose, the route_rules file only describes how traffic should be routed through one or the other ISP we set up in /etc/shorewall/providers.
Details are in the route_rules file man page.

    #####################################################################
    #SOURCE             DEST               PROVIDER        PRIORITY
    ppp0                -                  CYBR            1000
    eth1                -                  HKBN            1000

Here we simply say that all traffic originating from ppp0 should be routed through the CYBR table, and all traffic from eth1 through the HKBN table.
The PRIORITY is an ordering number that tells shorewall to consider this routing rule before it marks the packets. Since we know the packets originated from ppp0 or eth1, we don’t really need to mark them.

Masq

The masq file will contain the masquerading rules for our private interfaces: in essence, we want traffic from the local LAN and DMZ to be hidden behind our limited number of external IPs.
See the masq manpage for all the details.

    #####################################################################
    #INTERFACE              SOURCE           ADDRESS                 
    # Ensure that traffic originating on the firewall and redirected via 
    # packet marks always has the source IP address corresponding to the 
    # interface that it is routed out of.
    # See http://shorewall.net/MultiISP.html#Example1
    ppp0                    30.40.50.62      111.99.88.77
    eth1                    111.99.88.77     30.40.50.62
    ppp0                    eth2             111.99.88.77
    eth1                    eth2             30.40.50.62
    ppp0                    eth3             111.99.88.77
    eth1                    eth3             30.40.50.62

The first part ensures that traffic leaving one public interface but originating from the other is rewritten with the right source IP for that interface, so packets leaving eth1, for instance, don’t come out with the other interface’s address.
The second part ensures that packets from our LAN or DMZ leaving either public interface do so with the right IP address: traffic from my desktop going through ppp0, for instance, will have 111.99.88.77 as its source address.

Rules

This is the main file where we tell shorewall our basic configuration and how we want packets to be handled in the general case.
The /etc/shorewall/rules file contains the specific instructions on where to direct traffic that will override the default actions defined in the /etc/shorewall/policy file.

    #####################################################################
    #ACTION    SOURCE                DEST                   PROTO  
    #                                                                     
    SECTION NEW
    # Drop and log packets that come from the outside but pretend 
    # to have a local address
    DROP:info  net:192.168.0.0/24    all
    DROP:info  net:192.168.254.0/24  all

    # Redirect incoming traffic to the correct server for WWW and email
    DNAT       all                   dmz:192.168.254.20     tcp   www
    DNAT       all                   dmz:192.168.254.10     tcp   110
    DNAT       all                   dmz:192.168.254.10     tcp   143
    DNAT       all                   dmz:192.168.254.10     tcp   25

In its most basic form, what we’ve just defined here is that we want all traffic from anywhere destined for port 80 (www) to be sent to our win server.
All mail traffic, POP3 (port 110), IMAP (port 143) and SMTP (port 25) is to be redirected to our linux server in the DMZ.

There are a few more useful rules that we can include. For instance, I want to be able to access my servers through either ISP from home (IP 123.45.67.89) while disallowing everyone else.

    #####################################################################
    #ACTION    SOURCE                DEST                   PROTO  
    #                                                                     
    # Allow SSH to the firewall from the outside only from home

    ACCEPT     net:123.45.67.89      $FW                    tcp   ssh
    # Redirect input traffic to the correct server for RDP, VNC and SSH 
    DNAT       net:123.45.67.89      dmz:192.168.254.10:22  tcp   2222
    DNAT       net:123.45.67.89      dmz:192.168.254.10     tcp   5901
    DNAT       net:123.45.67.89      dmz:192.168.254.20     tcp   3389

When I SSH to 30.40.50.62 or 111.99.88.77 on the normal port 22, I will access the firewall.
Now if I SSH to the non-standard port 2222, I will instead access the linux server.
Port 5901 is for remoting into the linux machine through VNC, and port 3389 will be used for Remote Desktop connections to the win server.

To make sure my machines are up and running, I like to be able to ping them:

    #####################################################################
    #ACTION    SOURCE              DEST              PROTO  
    #                                                                     
    # Accept pings between zones
    ACCEPT     dmz                 loc               icmp  echo-request
    ACCEPT     loc                 dmz               icmp  echo-request

Note that ping will only work between the LAN and the DMZ and pinging my firewall from the Internet will result in the requests being silently dropped.
I usually prefer that configuration as it makes discovering the servers by random bots slightly less likely.

There are lots of other cool things we can do with forwarding but that will do for now.

shorewall.conf

The last file we’re going to look at is the main configuration file for shorewall.
See details about each option from the man page for shorewall.conf.

Most options are OK by default. The only ones that I have had to change are:

    STARTUP_ENABLED=Yes
    MARK_IN_FORWARD_CHAIN=Yes
    FASTACCEPT=Yes
    OPTIMIZE=1

The first option tells shorewall that we want it to start automatically when the system boots.
That’s not enough though, so make sure that the service will be started:

    # chkconfig shorewall --levels 235 on

Installing our firewall rules

Shorewall configuration files need to be compiled without error before the firewall is actually loaded by shorewall.
The command:

    # shorewall restart

will stop and recompile the current configuration.
If there are any errors, the current firewall rules will be unchanged.
There are lots of other commands that can be issued. Check the man page for a complete list.
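
When working on a remote machine, it is prudent to validate the configuration before restarting; shorewall provides a command that compiles the configuration and reports errors without touching the running firewall:

    # shorewall check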

If you use PPPoE, you will want the firewall to be restarted every time the line reconnects.
The simplest way is to create a file /etc/ppp/ip-up.local containing a single line:

    shorewall restart
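
Note that the ip-up script will typically only run this hook if the file is executable, so don’t forget:

    # chmod 755 /etc/ppp/ip-up.local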

DNS

There is one remaining issue with our firewall: if a user on the LAN attempts to access the web server by its name, the request will probably fail.
The same goes for our mail server: we can configure our desktop to connect to 192.168.254.10 to get and send emails, but on the laptop we would usually use something like pop.acme.com instead so we can read our emails from outside the office.

Similarly, trying to access www.acme.com hosted on the win server from the linux server will fail.

One solution is to route traffic through the firewall but that’s actually fairly complicated to setup properly.
The shorewall FAQ 2 discourages this and instead recommends the use of split-DNS: it’s very easy to setup and it works like a charm.

dnsmasq

Just install dnsmasq on the firewall. There are ready-made packages available for it and a simple yum install dnsmasq should suffice.

Dnsmasq provides a simple DNS forwarding and DHCP service. I had already configured dhcpd (which is itself fairly simple to configure) on my firewall, so I won’t need DHCP from dnsmasq, but you can easily set it up if you want.

On the DNS side, dnsmasq can be told to first try to resolve hostnames by looking at the standard /etc/hosts file and then query the DNS servers defined in /etc/resolv.conf if necessary.

This simple trick means that we can:

  • Keep our normal DNS service pointing to say 100.90.80.70 for www.acme.com so that people on the Internet will properly resolve their web requests to our win server.
  • Add an entry in the firewall’s hosts file to point local clients to 192.168.254.20 instead.

To achieve this, simply edit /etc/hosts and add entries matching all your services:

    # Acme's services. 
    # One line for each DNS entry accessible from the Internet
    192.168.254.20        acme.com
    192.168.254.20        www.acme.com
    192.168.254.10        pop.acme.com
    192.168.254.10        mail.acme.com

dnsmasq configuration

Edit /etc/dnsmasq.conf and uncomment or add the following lines:

    # Never forward plain names (without a dot or domain part)
    domain-needed
    # Never forward addresses in the non-routed address spaces.
    bogus-priv
    # listen on DMZ and LAN interfaces
    interface=eth2
    interface=eth3
    # don't want dnsmasq to provide dhcp
    no-dhcp-interface=eth2
    no-dhcp-interface=eth3

Then make sure that dnsmasq will start on boot:

    # chkconfig dnsmasq --levels 235 on
    # service dnsmasq restart
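
To confirm that split DNS is working, query the firewall directly from a machine on the LAN; with the acme.com example above, the internal address should come back:

    $ host www.acme.com 192.168.0.1
    www.acme.com has address 192.168.254.20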

DNS resolution

There may be one last issue with DNS: in your /etc/resolv.conf you will have listed the DNS servers of one or both of your ISPs.
The problem is that some ISPs don’t allow access to their name servers from a network different from theirs.

The result is that each time one of your systems issues a DNS request, it may fail and need to be sent to the next server instead, which may also fail, introducing delays in accessing named resources on the Internet.

One easy way out is to not use the ISPs DNS servers but instead only list the free OpenDNS name servers in your resolv.conf:

    search acme.com
    nameserver 208.67.222.222
    nameserver 208.67.220.220

Then make sure that you disable DNS in the /etc/sysconfig/network-scripts/ifcfg-XXX configuration file for your PPPoE connection:

    PEERDNS=no

Failure to do so will result in your /etc/resolv.conf file being rewritten with the DNS servers of one of your ISPs every time you reconnect to them.

DHCP configuration

If you use dhcpd for local users, then you will need to make sure that the DNS server it hands out is the firewall’s address:

    # DHCP Server Configuration file.
    ddns-update-style none;
    ignore client-updates;

    subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
        option domain-name              "acme.com";
        option domain-name-servers      192.168.0.1;
        range 192.168.0.200 192.168.0.250;
        default-lease-time 86400;
        max-lease-time 132000;
    }

On your local machines that use DHCP, make sure to renew your IP.
All other machines should be configured to use 192.168.0.1 as their only DNS server, and the machines in the DMZ should have their DNS set to 192.168.254.1.
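
On Windows clients, for instance, the lease can be renewed from the command line:

    C:\> ipconfig /release
    C:\> ipconfig /renew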

Unless you reboot, don’t forget to flush the local DNS cache of each machine:
On Windows, from the command line:

    C:\> ipconfig /flushdns

On Mac, from the terminal:

    bash-x.xxx$ dscacheutil -flushcache

Initial conclusions

I believe this type of firewall setup is fairly common and I hope that this (rather long) article helped you get your own setup in place.
In the (much shorter) follow-up articles, we’ll make our system as redundant as possible so our web and email services stay online even when one of the broadband connections fails.

In the meantime, don’t hesitate to leave your comments and corrections below.


20 comments February 4th, 2009

Sysadmin: file and folder synchronisation

Over the years I’ve struggled to keep my data folders synchronised between my various desktops and laptops.

Here I present the tools I’ve tried and what I’ve finally settled on as possibly the ultimate answer to the problem of synchronising files and folders across multiple computers:

Sync Files

rsync

I’ve tried rsync, which is a great Open Source tool to securely synchronise data either one-way or both ways.
It’s very efficient with bandwidth as it only transfers the blocks of data that have actually changed in a file instead of the whole file. It can tunnel traffic across SSH and I’ve got a few cronjobs set up between various servers to back up files daily.

Its only weaknesses are that:

  • Every time it runs, it needs to inspect all files on both sides to determine the changes, which is quite an expensive operation.
  • Setting up synchronisation between multiple copies of the data can be tricky: you need to sync your computers in pairs multiple times, which quickly becomes expensive and risky if you have the same copy across multiple computers.
  • It doesn’t necessarily detect that files are in use at the time of the sync, which could corrupt them.
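
For reference, a typical one-way mirror over SSH, of the kind I run from cron, looks like this (the paths and host are made up for illustration):

    $ rsync -az --delete -e ssh /home/user/documents/ backup.example.com:/backups/documents/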

unison

It’s a folder synchronisation tool whose specific purpose is to address some of the shortcomings of rsync when synchronising folders between computers. It’s also a cross-platform Open Source tool that works on Linux, OS/X, Windows, etc.

Unison uses the efficient file transfer capabilities of rsync but it is better at detecting conflicts and it will give you a chance to decide which copy you want when a conflict is detected.

The issue though is that, like rsync, it needs to inspect all files to detect changes which prevents it from detecting and propagating updates as they happen.
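
A minimal two-way sync between a local folder and a remote copy over SSH would look something like this (hypothetical paths and host):

    $ unison /home/user/documents ssh://desktop//home/user/documents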

The biggest issue with these synchronisation tools is that they tend to increase the risk of conflict because changes are only detected infrequently.

WinSCP

WinSCP is an Open Source Windows GUI FTP utility that also allows you to synchronise folders between a local copy and a remote one on the FTP server.

It has conflict resolution and allows you to decide which copy to keep.

It’s great for what it does and allows you to keep a repository of your data in sync with your local copies, but here again, WinSCP needs to go through each file to detect the differences and you need to manually sync each computer against the server, which is cumbersome and time-consuming.

General Backup tools

There are a lot more tools that fall into the category of backup utilities: they all keep a copy of your current data in an archive, on a separate disk or online. Some are great in that they allow you to access that data on the web (I use the excellent JungleDisk myself) but file synchronisation is not their purpose.

Now for some Captain Obvious recommendation: remember that file synchronisation is not a backup plan: you must have a separate process to keep read-only copies of your important data.
File synchronisation will update and delete files you modify across all your machines, clearly not what you want if you need to be able to recover them!

Revision Control Systems

Revision control systems like cvs, subversion or git are generally used to keep track of changes to source code files; however, they have also been used successfully to keep multiple copies of the same data in sync.
It’s actually exactly what I use for all my source code and associated files: I have a subversion server and I check out copies of my software project folders on various computers.

After making changes on one computer, I commit the changes back to the server and update these changes on all other computers manually.

While great at keeping track of each version of your files and ideally suited to pure text documents like source code, revision control systems have drawbacks that make them cumbersome for general data synchronisation:

  • you need to manually commit and update your local copies against the server.
  • not all of them are well suited to deal with binary files
  • when they work with binary files, they just copy the whole file when it changes, which is wasteful and inefficient.

Revision Control System are great for synchronising source code and configuration files but using them beyond that is rather cumbersome.

Complex setup

All of the above solutions also have a major drawback: getting them to work across the Internet requires complex setup involving firewall configuration, security logins, exchange of public encryption keys in some cases, etc.

All these are workable but don’t make for a friendly, peace-of-mind setup.

What we want from data synchronisation

I don’t know about you but what I’m looking for in a synchronisation tool is pretty straightforward:

  • Being able to point to a folder on one computer and make it synchronise across one or multiple computers.
  • Detect and update the changed files transparently in the background without my intervention, as the changes happen.
  • Be smart about conflict detection and only ask me to make a decision if the case isn’t obvious to resolve.

Live Mesh folders

Enter Microsoft Live Mesh Folders, now in beta and available to the public. Live Mesh is meant to be Microsoft’s answer to synchronising information (note, I’m not saying data here) across computers, devices and the Internet.
While Live Mesh wants to be something a lot bigger than just folder synchronisation, let’s concentrate on that aspect of it.

Installing Live Mesh is pretty easy: you will need a Windows Live account to log in, but once this is done, it’s a small download and a short installation.

Once you’ve added your computer to your “Mesh” and are logged in you are ready to use Live Mesh:

  • You decide how the data is synchronised for each computer participating in your Mesh:
    you’re in charge of what gets copied where, so it’s easy to pair large folders between, say, your laptop and work desktop but not your online Live Desktop (which has a 5GB limit) or your computer at home. You’re in control.
  • Files are automatically synchronised as they change across all computers that share the particular folder you’re working in.
    If the file is currently used, it won’t be synced before it is closed.
  • If the other computers are not available, the sync will automatically happen as they are up again.
  • There is no firewall setup: each computer knows how to contact the others and automatically uses the appropriate network: transfers are local if the computers are on the same LAN, or done across the Internet otherwise.
    All that without user intervention at all.
  • Whenever possible, data is exchanged in a P2P fashion where each device gets data from all the other devices it can see, making transfers quite efficient.
  • File transfers are encrypted so they should be pretty safe even when using unsafe public connections.
  • If you don’t want to allow sync, say you’re on a low-bandwidth dialup, you can work offline.
  • The Mesh Operating Environment (MOE) is pretty efficient at detecting changes to files. Unlike other systems, in most cases it doesn’t need to scan all files to find out which ones have been updated or deleted.

Some drawbacks

  • It’s not a final product, so there are some quirks and not all expected functionalities are there yet.
  • The Mesh Operating Environment (MOE) services can be pretty resource hungry, although, in fairness, it’s not too bad except that it slows down your computer’s responsiveness while it loads at boot time.
  • You can’t define patterns of files to exclude in your folder hierarchy.
    That can be a bit annoying if the software you use often creates large backup files automatically (like CorelDraw does) or if there are sub folders you don’t need to take everywhere.
  • The initial sync process can take a long time if you have lots of files.
    A solution if you have large folders to sync is to copy them first manually on each computer and then force Live Mesh to use these specific folders: the folders will be merged together and the initial sync process will be a lot faster as very little data needs to be exchanged between computers.

Bear in mind that Live Mesh is currently an early beta and that most of these drawbacks will surely be addressed in the coming months.

Conclusion

I currently have more than 18GB representing about 20,000 files synchronised between 3 computers (work desktop, laptop and home desktop) using Live Mesh.

While not 100% there, Live Mesh Folder synchronisation is really close to the real thing: it’s transparent, efficient, easy to use and it just works as you would expect.

Now that Microsoft has released the Sync Framework to developers, I’m sure that other products will come on the market to further enhance data synchronisation in a more capable way.
In the meantime, Live Mesh has answered my needs so far.


3 comments January 19th, 2009

Sysadmin: Recovering deleted Windows partitions

I made a mistake the other day: I wanted to delete the partition on an external drive and in my haste ended up deleting the partition of a local hard drive instead…

The good thing is that when you delete a partition using the Windows Disk Management console, it doesn’t actually delete your files, only the partition header.

Windows Disk Management Console

With NTFS file systems, there is a backup of the boot sector at the end of the partition. The problem is: how do you recover it?

I first looked at the instructions from Microsoft knowledge base article kb245725 and downloaded the low-level sector editor Dskprobe, but was getting nowhere with it.

Searching Google brings you the usual list of recovery software that you can’t be sure will actually do the job until you fork out $$ for it.
I’ve got nothing against paying for software but I’ve been bitten by false promises before.

My search ended with TestDisk, an Open Source utility to manipulate and recover partitions that works on almost all platforms.
The user interface is text-mode only, so it’s not pretty and not point-and-click friendly, but it has a fair amount of options and, after fiddling around with it for 10 minutes, I was able to simply recover the backup boot sector and tada! All my files were back!

TestDisk in action

So, some recommendations when recovering lost partitions:

  • Don’t panic! If you only deleted the partition (whichever type), chances are you’re likely to recover it or at least salvage the files.
  • Obviously, be careful not to write anything over them, like recreating partitions and a file system.
  • If you use a utility like TestDisk, don’t blindly follow the on-screen instructions. At first, it was telling me that I had 2 Linux partitions on the device (which used to be true) but it did not see the NTFS one. Then it thought I had only a FAT partition, until I switched to the advanced options and inspected the boot sector.
    Just know enough about file systems to know what you’re looking for.
  • Low-level tools are not for everyone, so if you’re not comfortable using them, don’t tempt your luck: try a paid-for recovery tool with an easier interface instead.

If you use TestDisk and you manage to recover your files, don’t forget to donate to encourage Christophe GRENIER, the author.


4 comments January 8th, 2009

Windows 2008 / Windows 7 x64: The ‘Microsoft.Jet.OLEDB.4.0’ provider is not registered on the local machine.

There are times when the coexistence of 64 and 32 bit code on the same machine can cause all sorts of seemingly strange issues.
One of them just happened to me while trying to run the ASPx demos from Developer Express, my main provider of .Net components (the best supplier I’ve ever been able to find).
I was getting the following error:

The ‘Microsoft.Jet.OLEDB.4.0’ provider is not registered on the local machine:

Server Error

It may look otherwise, but this error is generally due to either of two things:

  • you don’t have Office 2007/2010 Jet drivers installed
  • or you are running a 32 bit application in a default x64 environment.

The first issue is easy to solve: just download the Access 2010 Database Engine from Microsoft (it works with Access 2007 databases as well).

For the second one, the fix is also easy enough:

  • For Windows 2008: Navigate to Server Manager > Roles > Web Server (IIS) > Internet Information Services (IIS) Manager, then look under your machine name > Application Pool.
  • For Windows 7: Navigate to Programs > Administrative Tools > Internet Information Services (IIS) Manager, then look under your machine name > Application Pool.

Under there you can open the DefaultAppPool’s advanced settings and change Enable 32-Bit Applications to True:

Advanced Settings

You may have to restart the service for it to take effect but it should work.
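
If you prefer the command line, the same setting can be flipped with appcmd (the path assumes a default IIS 7 install; DefaultAppPool is the pool shown in the screenshot above):

    C:\> %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true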

Updates

  • 10DEC2011: Updated driver link to use the Access 2010 engine.
  • 03APR2010: Added instructions for Windows 7
  • 12FEB2009: Added reference to Scott’s article.
  • 28OCT2008: Original version

100 comments October 28th, 2008

Sysadmin: Macbook Pro, after the honeymoon

I’ve been using the MacBook Pro I introduced in my previous blog entry for a few weeks now.
Between love and frustration I hang…
Here is a review of our relationship so far.

The Great

Hardware delight

Whether running OS/X or Windows 2008, I’ve got no major complaints about the performance of the machine. It’s fast and stable (except that sometimes it doesn’t wake up from sleep, or it does but the screen remains black). The screen is nice and vibrant, and I just love the magnetic power connector and the small size of the power adapter. I have a few complaints though, see below.

OSX battery power usage

For such a large and powerful laptop, I’m pleasantly surprised by the battery life under OSX: I’ve been able to watch videos for 3 hours, full screen, without trouble or overheating (although I would lower the screen brightness to reduce consumption).
I haven’t had such luck under Windows 2008, where I’ve been struggling to find the right power settings balance, but remember that it’s a server OS and it’s not really meant to be run on a laptop.

The Ugly

The mouse

You wonder why Apple, with all its hardware expertise, could design the Mighty Mouse with a single big button that can still do right-clicking, but can’t give us the same thing with the enormous single button of the trackpad.
Now the new models -just released this week- have done away with the button entirely, which may be just as well although I’m curious about how well the drivers will work under Windows.

Mouse acceleration in OSX is pretty frustrating to me. When you’ve got a large screen, you’re endlessly shuffling the mouse to get the pointer in the right place. It feels slow and inaccurate, and is extremely irritating after a while. The problem is even worse when you’re running Windows under VMware Fusion in OSX: while it might still be usable under OSX, the difference is really severe and unnatural in Windows.
This does not happen under Boot Camp though, where mouse acceleration behaves as you would expect (for Windows).
I’ve tried a number of utilities (iMouse, SteerMouse and others) but none gave me what I needed.

The keyboard

The keyboard feels great to type on, with a nice spring and softness. There are a few issues though:
Why is the Return key so small compared to the right Shift?
The rule is that the more a key is used, the bigger it should be; yet the Return key is rather small, the same size as Caps Lock, which serves almost no useful purpose in comparison.
The arrow keys are also minuscule, another probable example of Apple choosing form over function. The lack of a delete key forces you to press fn + backspace instead, which is an unnecessary pain; it’s not as if the difference between delete and backspace were that confusing to people already using a machine as complex as a computer.

The lid

I love the way the hooks for the lid come out just when you close it. It makes for a neat screen without protruding bits of metal or plastic.
My main issue with the lid is with its limited opening angle: if you’re just a little tall and you place your laptop on your lap then there is no way to open the screen enough to have it properly face you.
This is also an issue if you place your Macbook Pro on a cooling pad or a riser that’ll put the laptop too vertical (for instance to free some space around it when using an external keyboard): you just can’t use these devices.

The sound

That one really makes me mad: the MacBook Pro has audio issues that you won’t even find in a US$15 MP3 player, and it’s totally unacceptable.
When idle, I can hear hissing sounds that vary in power and frequency as I slide the volume control; when playing music, there is a lot of noise and “cutting” sounds between songs.
These are not noticeable when using the integrated speakers, but they become really annoying once you use earphones.
I am sorely disappointed with the sound quality, to say the least.

Electronic noises

On top of the annoying sounds from the sound chipset, the LCD inverter also makes a hissing sound that increases in volume when I lower the LCD brightness…
Coupled with that, the processor also makes a hissing noise when it gets into its C4 power saving state…
The noise is probably not loud enough to get the laptop fixed, but it might be great as a mosquito repellent.

Boot ‘song’

Apple knows best and they know that your only aim in life is to become a poster boy for the brand.
When booting/rebooting your Mac, it likes to play its welcome song, which says: “hey, over here, I'm a Mac and I'm telling the world I'm booting up. Everyone listen to how awesome I am!”.
The perverse thing is that even if you have earphones plugged in (as in: you don't want to disturb the people around you), the boot song is still played on the speakers…
Of course, there is no option anywhere to disable it.
Apple knows best.
After much trial and error, I found that booting into OSX, lowering the volume to zero, then rebooting into Windows and setting the volume there does the trick: no more boot song (at least for now) and I can still change the volume in both OSX and Windows.
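There is also a firmware-level trick that is often suggested for this: the startup chime volume is kept in NVRAM, and setting its mute bit from an OSX terminal reportedly silences the chime. I haven't verified this on my machine, so treat it as an unconfirmed sketch rather than a supported setting:

    $ sudo nvram SystemAudioVolume=%80
    $ sudo nvram -d SystemAudioVolume    # delete the variable to restore the default behaviour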

Conclusion

Would I buy a Mac again knowing what I know now?
Mmmm, probably not.
I find the annoyances a bit too much relative to my expectations and my usage scenario. To be fair, it's not all bad, and most of the time I'm happy with my Mac, but I find myself trying to avoid the things that infuriate me, and that's not really what I want from a laptop; it's supposed to liberate me and give me the freedom I need to get things done, not get in the way.
Re-adapting to a strangely laid-out keyboard, having to deal with Apple's brand-awareness arrogance, battling with things you normally take for granted: all this is a bit too high a price to pay for what is, for me, essentially a Windows development machine.
I would rather spend my time actually getting things done than search forums for how to keep the Windows and OSX clocks in sync, or bring my machine in for repair because a CD is stuck in the drive.

So, let’s say that our relationship is more ambivalent than it needed to be.


2 comments October 19th, 2008

SysAdmin: Installing Windows Server 2008 x64 on a Macbook Pro

My trusty old gigantic Sony Vaio is about 4 years old. It has served me well and still works, but it's about to become my main development machine for the next couple of months and I can't afford to have it die on me during that time.
It was time to get something just as gigantic but more up-to-date in terms of technology.

I use VMware on my main desktop to keep multiple OS setups that match typical configurations of my customers' machines.
This allows me to test my software before deployment and make sure everything works as expected. It has saved me many times from strange bugs, and I consider these final tests a mandatory step before deployment.
My trusty old Vaio would be hard-pressed to run any of these without slowing to a crawl.

I looked at some possible replacements. Initially I checked Lenovo's offerings, but they didn't seem to have anything with a large screen (WUXGA 1920×1200) (note: it turns out they do, but nothing that suited me).
Ditto for Dell, not counting their humongous XPS M1730 luggable gaming machine, which was way over the top as a work computer, not to mention probably heavier than its volume in pure gold.

On a hint from a friend, I checked out Apple's online store and saw they had a nice MacBook Pro configuration. I went to see it in the retail store close to my office and they had that exact specification in stock, so, in what must have been the highest expense-to-thinking-time ratio of any decision I have ever made, well, I bought it…

The spec, some bragging rights:

  • Macbook Pro 17″
  • Core 2 Duo T9500 2.6GHz processor
  • nVidia 8600M GT 512MB graphics card
  • 200GB 7200rpm drive
  • Kingston 4GB DDR2 667MHz RAM
  • Hi Resolution 17″ 1920×1200 glossy screen

It’s a very nice machine; Apple knows how to make nice hardware, there is no question there.
OSX has some cool features, some of them still a bit foreign to me, and some minor annoyances are creeping up, like Thunderbird not picking up my system date and time settings and displaying the date in the wrong format (a pet peeve of mine); probably not Apple's fault, but annoying nonetheless.
So far so good and while I don’t mind using OSX for my browsing, email and creative stuff, that machine is meant to be running Windows Server 2008 x64 as a development platform.

Why Windows Server 2008 x64?

Well, it has some excellent features: a smaller footprint than Vista, all the Aero eye candy, it is apparently noticeably faster than Vista, and it has none of the nagging security prompts (you run as administrator though, so keeping safe is entirely up to you).
The 64-bit version can also address the full 4GB of RAM without limitation, and all server features are optionally installable.
By default, the installation is actually pretty minimal and you have to set up services and options to get Windows configured as a proper workstation; it is, after all, meant to be a server (see the sketch below for an example).
Oh, I almost forgot: there is also support for Hyper-V, although you must make sure you download the right version (if you list all available downloads in your MSDN subscription, you'll see some that are explicitly without that technology).
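
As a small, concrete example of that workstation reconfiguration, here is the kind of thing involved, run from an elevated command prompt. This is a sketch based on the stock Server 2008 tooling (ServerManagerCmd and the Desktop-Experience feature name); double-check the names on your build before running:

    :: Install the Desktop Experience feature (themes, Media Player, etc.)
    > servermanagercmd -install Desktop-Experience

    :: The Themes service is disabled on a server by default; enable and start it
    > sc config Themes start= auto
    > net start Themes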

Installing Windows Server 2008 x64 is remarkably easy.

  • Get your hands on the ISO from your MSDN subscription or an install DVD from somewhere else (like an MS event, or even as a free 240-day trial download from Microsoft).
  • You’ll need to repackage the ISO as it won't work properly as-is (something to do with non-standard file naming options).
    It's fairly easy if you follow the instructions from Jowie's website (cached version): you can get the ImgBurn software for free as well, which is a good find in itself. It shouldn't take more than 30 minutes to repackage the DVD (a Linux command-line alternative is sketched after this list).
  • In OSX, go to Applications > Utilities > Boot Camp Assistant and follow the instructions on screen.
    You will be able to resize the default partition by just moving the slider. I left 60GB for OSX and allocated the rest to Windows. The good thing is that OSX can read Windows (NTFS) partitions, even if only read-only, so you can always get at data stored there. Windows, however, can't read the HFS+ Mac file system, although there are some third-party tools that can do it [1] [2] [3].
  • Insert your repackaged DVD and Boot Camp will reboot the machine.
    After a few minutes of blank screen (with no HDD activity light to let you know something is happening), Windows Setup launches.
  • You will then be prompted to choose the partition to install to.
    Select the one named BOOTCAMP, then click the advanced options button and click Format. From there on, Windows will install everything, then reboot, then carry on installing, then reboot one last time.
  • Now, insert your OSX recovery CD 1. It should automatically launch the driver installation.
    Once done, you'll reboot to a nice, full-resolution Windows login prompt.
  • All drivers will have been installed correctly except the one for Bluetooth. To easily solve that issue, just go to Spencer Harbar’s website and read how to install the Bluetooth drivers. Takes 5 minutes tops.
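
For reference, here is the Linux command-line alternative to the ImgBurn repackaging step mentioned above. This is an untested sketch: the ISO name, mount point, staging directory and volume label are placeholders I made up, and boot/etfsboot.com is the standard El Torito boot image on Vista/2008-era media:

    # mount -o loop ws2008_x64.iso /mnt/iso
    # mkdir /tmp/ws2008
    # cp -a /mnt/iso/. /tmp/ws2008/
    # umount /mnt/iso
    # mkisofs -o ws2008_repacked.iso -udf -iso-level 4 -b boot/etfsboot.com -no-emul-boot -boot-load-size 8 -V WS2008_X64 /tmp/ws2008

The -b/-no-emul-boot/-boot-load-size options are what make the rebuilt DVD bootable; burn the resulting ISO with your favourite tool.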

The final touches

A few notes to quickly get things running as expected.

  • Get the most of your configuration by following the list of tweaks from Vijayshinva Karnure from Microsoft India.
  • There are more tweaks, and even more tweaks available as well (don't forget to enable Superfetch; see the command-line sketch after this list).
  • Microsoft has a whole KB entry on enabling user experience.
  • In Control Panel > System > Advanced System Settings > Advanced > Performance Settings > Advanced, set Processor scheduling to Programs instead of Background services.
  • Activate your copy of Windows using Control Panel > System.
    I was getting an error: code 0x8007232B, “DNS name does not exist”. To force activation, just click on the Change Product Key button and re-enter the same key you used during install.
    Windows will activate straight away.
  • When booting your MacBook, press the Option key and you will be presented with a list of boot choices.
  • You can check Apple's Bootcamp webpage for other information about how to use the trackpad, keyboard layouts, etc.
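
If you prefer the command line for some of the tweaks above, the sketch below shows what I believe are the equivalent commands, run from an elevated prompt. The names and values (SysMain for the Superfetch service, EnableSuperfetch=3, Win32PrioritySeparation=38 for “Programs”, slmgr.vbs for activation) are the commonly documented ones for Server 2008, but verify them on your build; the product key is a placeholder:

    :: Superfetch: allow it in the registry, then enable and start the service
    > reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnableSuperfetch /t REG_DWORD /d 3 /f
    > sc config SysMain start= auto
    > net start SysMain

    :: Processor scheduling: 38 (0x26) favours foreground programs, 24 (0x18) background services
    > reg add "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation /t REG_DWORD /d 38 /f

    :: Activation: re-enter the product key (placeholder) and activate online
    > cscript //nologo %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    > cscript //nologo %windir%\system32\slmgr.vbs /ato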

22 comments August 31st, 2008
