This is caused by an actual bug that appears to affect Windows XP Pro SP2 and Windows XP Pro SP3 machines only. It most definitely does not affect the Windows Vista family.
Under some circumstances the Core 2 processor may report as a Pentium III or as a Pentium III XEON.
This behaviour has no known ill side effects other than causing confusion for people who attempt to catalogue their machines via the WMI interface and find that their machines believe themselves to be quite old even when they are obviously very new.
A hotfix is available from Microsoft for this issue: KB953955
References:
http://news.softpedia.com/news/XP-SP3-Win32-Processor-Class-Labels-Intel-Core-2-Duo-CPUs-Incorectly-90201.shtml
http://wccftech.com/forum/computer-talk/20133-xp-sp3-win32-processor-class-labels-intel-core-2-duo-cpus-incorrectly.html
]]>
I like to start a Xen installation using the very handy virt-install command. Virt-install, available by default, makes creating a new virtual machine very simple. I will assume that you are familiar with this part of the process and already have Xen installed and working. If you are not sure if your environment is set up properly, I suggest that you start by paravirtualizing a very simple, bare-bones Red Hat Linux server using the virt-install process to test out your setup before challenging yourself with a much more lengthy Windows install that has many potential pitfalls.
The first potential problem that many users face is a lack of support for full virtualization. This is becoming less common of a problem as time goes on. Full virtualization must be supported at the hardware level in both the processor and in the BIOS/firmware. (I personally recommend the AMD Opteron platform for virtualization but be sure to get a processor revision, like Barcelona or later, that supports this.)
Using virt-install to kick off our install process is great but, most likely, you will do this and, if all goes well, the installer will begin to format the virtual hard drive and then the Xen guest will simply die, leaving you with nothing. Do not be concerned. This is a known issue that can be fixed with a simple tweak to the Xen configuration file.
CD Drive Configuration Issues
In some cases, you may have problems with your CD / DVD drive not being recognized correctly. This can be fixed by adding a phy: designation in the Xen configuration file pointing to the CD-ROM drive. This is only appropriate for people who are installing directly from CD or DVD. Most people prefer to install from an ISO image; using an ISO does not have this problem.
In Red Hat, your Xen configuration files should be stored, by default, in /etc/xen. Look in this directory and open the configuration file for the Windows Server 2003 virtual machine which you just created using virt-install. There should be a “disk =” configuration line. This line should contain, at a minimum, configuration details about your virtual hard drive and about the CD ROM device from which you will be installing.
The configuration for the CD ROM device should look something like:
disk = [ "file:/dev/to-w2k3-ww1,hda,w", ",hdc:cdrom,r" ]
You should change this file to add in a phy section for the cdrom device to point the system to the correct hardware device. On my machine the cdrom device is mapped to /dev/cdrom which makes this very simple.
disk = [ "tap:aio:/xen/to-w2k3-ww1,hda,w", "phy:/dev/cdrom,hdc:cdrom,r" ]
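For context, a complete HVM guest configuration built by virt-install looks roughly like the sketch below. The disk line follows the example above; every other value is illustrative, and your generated file will differ:

```
# /etc/xen/to-w2k3-ww1 -- illustrative sketch of a virt-install-generated
# HVM guest config; values other than the disk line are examples only.
name = "to-w2k3-ww1"
memory = 1024
builder = "hvm"
kernel = "/usr/lib/xen/boot/hvmloader"
device_model = "/usr/lib64/xen/bin/qemu-dm"
disk = [ "tap:aio:/xen/to-w2k3-ww1,hda,w", "phy:/dev/cdrom,hdc:cdrom,r" ]
boot = "d"        # boot from the CD device for the install, "c" afterwards
vif = [ "type=ioemu, bridge=xenbr0" ]
vnc = 1           # expose the graphical console over VNC
```

The boot = "d" setting is what makes the first boot come up from the install media; once Windows is installed you switch it back to booting from the virtual hard drive.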
Accessing the Xen Graphical Console Remotely via VNC
If you are like me you do not install anything unnecessary on your virtualization servers. I find it very inappropriate for there to be any additional libraries, tools, utilities, packages, etc. located on the virtualization platform. These are unnecessary and each one risks bloat and, worse yet, potential security holes. Since all of the guest machines running on the host are vulnerable to any security concerns on the host itself, it is very important that the host be kept as secure and lean as possible. To this end I have no graphical utilities of any kind available on the host (Dom0) environment. Windows installations, however, generally require a graphical console in order to proceed. This can cause any number of issues.
The simplest means of working around this problem is to use SSH forwarding to bring the remote frame buffer protocol (a.k.a. VNC or RFB) to your local workstation which, I will assume, has a graphical environment. This solution is practical for almost any situation, is very secure, rather simple and is a good way to access emergency graphical consoles for any maintenance emergency. Importantly, this solution works on Linux, Mac OSX, Windows or pretty much any operating system from which you may be working.
Before we begin attempting to open a connection we need to know on which port the VNC server is listening for connections on the Xen host (Dom0). You can discover this, if you don’t know already from your settings, by running:
netstat -a | grep LISTEN | grep tcp
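If the listing is noisy, you can narrow it down to the VNC range. The snippet below is a sketch that runs the filter against a canned sample of netstat output (the sample lines are hypothetical); on a real Dom0 you would pipe netstat itself into the awk command:

```shell
# Hypothetical sample of what `netstat -a -n | grep LISTEN | grep tcp`
# might print on a Dom0; replace this with the real netstat pipeline.
sample='tcp        0      0 0.0.0.0:22        0.0.0.0:*   LISTEN
tcp        0      0 127.0.0.1:5900    0.0.0.0:*   LISTEN'

# Keep only listening ports in the usual VNC range (5900-5999).
echo "$sample" | awk '{ split($4, a, ":"); p = a[length(a)] + 0;
                        if (p >= 5900 && p <= 5999) print p }'
```

Xen assigns one VNC display per guest, so with several guests running you may see 5901, 5902 and so on.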
On Linux, Mac OSX or any UNIX or UNIX-like environment with a command-line SSH client (OpenSSH under Cygwin and similar tools will also work on Windows in this way) we can easily establish a connection with a tunnel that brings the VNC connection to our local machine. Here is a sample command:
ssh -L 5900:localhost:5900 [email protected]
If you are a normal Windows desktop user you do not have a command-line SSH option already installed. I suggest PuTTY; it is the best SSH client for Windows. In PuTTY you simply enter the name or IP address of the server which is your Dom0 as usual. Then, before opening the connection, go into the PuTTY configuration menu and under Connection -> SSH -> Tunnels specify the Source Port (5900 by default for VNC, but check your particular machine) and the Destination (localhost:5900). Then just open your SSH connection, log in as root and we are ready to connect with TightVNC Viewer to our remote, graphical console session.
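If you prefer to script this on Windows, PuTTY's command-line companion plink accepts the same -L syntax as the OpenSSH example above (host name here is an example):

```shell
plink -L 5900:localhost:5900 [email protected]
```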
If you are connecting on a UNIX platform, such as Linux, and have vncviewer installed then you can easily connect to your session using:
vncviewer localhost::5900
Notice that there are two colons between localhost and the port number. If you only use one colon then vncviewer thinks that you are entering a display number rather than a port number.
If you are on Windows you can download the viewer from the TightVNC project, for free, without any need to install. Just unzip the download and run TightVNC Viewer. You will enter localhost::5900 and voila, you have remote, secure access to the graphical console of your Windows server running on Xen on Linux.
]]>In addition to using Kerberos for secure authentication, we are also switching from plain HTTP to HTTP over SSL as our transport. Be aware that after applying the Apache configuration file here you will need to access your Subversion directory with HTTPS rather than HTTP, and that, unless otherwise configured, you will need to open your firewalls, both local and remote, to allow port 443 traffic instead of (or in addition to) port 80 traffic.
Installing Necessary Components
As with anything else in the Red Hat world, most of the heavy lifting is done by our friends at Red Hat engineering and we just need to leverage what they have already done for us. We need to install the modules for SSL transport and for Kerberos authentication in Apache:
yum -y install mod_ssl mod_auth_kerb
This will automatically install the file /etc/httpd/conf.d/auth_kerb.conf which will take care of loading the Kerberos module into Apache and will provide a sample configuration if you want to learn more about Kerberos authentication in Apache.
Setting Up the Apache KeyTab File
Now we need to set up the keytab that Apache will use to authenticate to Kerberos. The Red Hat standard is for this file to be located at /etc/httpd/conf/keytab, although you control its location through your Apache configuration. We will not deviate from the standard here.
This file needs to contain the key for the web server's HTTP service principal, for example HTTP/[email protected]. Note that a keytab is a binary file, not plain text, so it cannot simply be echoed into place; it is normally generated on the Active Directory side (with ktpass) or with ktutil against another KDC and then copied to /etc/httpd/conf/keytab. Once the file is in place, make sure that Apache owns it and that nobody else can read it:
chown apache.apache /etc/httpd/conf/keytab
chmod 600 /etc/httpd/conf/keytab
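Once the keytab is in place you can sanity-check it with klist, which should list the HTTP principal for your realm, and confirm the ownership and permissions:

```shell
klist -k /etc/httpd/conf/keytab
ls -l /etc/httpd/conf/keytab
```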
Setting Access Control
The traditional examples will generally tell you to use the .htaccess file to manage your authentication mechanisms. For most cases it is better to avoid the use of the .htaccess file and to switch to configuring these details within your <Location> section in your Apache configuration files. This is better for performance reasons as well as for ease of security management. Now you only need to worry about specifying your security information in a single location and Apache need not traverse the entire directory structure seeking out .htaccess files for each access attempt.
I use the file /etc/httpd/conf.d/subversion.conf for the configuration of my Subversion repository. Here are its contents:
<Location /svn>
DAV svn
SVNPath /var/projects/svn/
AuthName "Active Directory Login"
AuthType Kerberos
Krb5Keytab /etc/httpd/conf/keytab
KrbAuthRealm EXAMPLE.COM
KrbMethodNegotiate Off
KrbSaveCredentials off
KrbVerifyKDC off
Require valid-user
SSLRequireSSL
</Location>
Configuration of Kerberos
Kerberos is configured in Red Hat Linux in the /etc/krb5.conf file. Obviously replace EXAMPLE.COM and ad.example.com with the name of your Domain and your KDC. This file should have been created for you using almost exactly these settings by the RPM installer so there is very little here that needs to be changed.
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = yes
[realms]
EXAMPLE.COM = {
kdc = ad.example.com:88
}
[domain_realm]
example.com = EXAMPLE.COM
.example.com = EXAMPLE.COM
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
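Before wiring Apache in, it is worth verifying that the host can reach the KDC at all. A quick way is to request a ticket for any domain account (the user name here is hypothetical):

```shell
kinit [email protected]
klist
```

If kinit returns without error and klist shows a ticket-granting ticket for EXAMPLE.COM, the krb5.conf settings are working.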
Enable HTTPS Access Through Firewall
Use the Red Hat management tool to enable HTTPS connection through your host firewall.
system-config-securitylevel-tui
Restart Apache
Now, all that we need to do is to restart the web server to have it pick up the changes that we have made and voila, Kerberos authentication to Active Directory should be working.
/etc/init.d/httpd restart
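If you want to guard against typos in the new configuration, you can check the syntax first and only restart when it is clean:

```shell
httpd -t && /etc/init.d/httpd restart
```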
Testing Your Connection
In order to test your connection you can use a web browser or use the Subversion command line client as below:
svn list https://localhost/svn/
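Since KrbMethodNegotiate is off in the configuration above, clients fall back to basic authentication with the password verified against the KDC, so curl can also exercise the whole path (the user name is hypothetical; -k skips certificate verification for a self-signed certificate):

```shell
curl -k -u [email protected] https://localhost/svn/
```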
Error Notes:
If you set KrbMethodNegotiate On then, in my experience, Firefox will work just fine but Internet Explorer (IE) and Chrome will fail with a 500 error. In the logs I discovered the following entry:
gss_acquire_cred() failed: Unspecified GSS failure. Minor code may provide more information (Unknown code krb5 213)
References:
Providing Active Directory Authentication via Kerberos Protocol in Apache by Alex Yu, MVP, Microsoft Support
]]>I must preface all of this, of course, by stating that you must make a complete backup of your virtual machine before doing something as invasive as this. While this process is reasonably safe there is always the potential for disaster. Take precautions.
The lvextend command is used to increase the size of the logical volume. You can view your current logical volumes with lvdisplay. I use the -L+ syntax as a safety measure to be sure that my drive is getting larger and not shrinking accidentally due to a typo. In this example I am expanding the /dev/VolGroup00/lvvmware logical volume by an additional 80GB.
lvextend -L+80G /dev/VolGroup00/lvvmware
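lvextend will refuse to grow the volume if the volume group lacks free extents, so it is worth checking the available space first (volume group name as in the example above):

```shell
vgdisplay VolGroup00 | grep -i free
```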
This first step can be completed while the virtual machine is still running. It will happily extend your available space in the background. Our next step, however, requires that you power down your virtual machine before continuing.
Now that we have created space on our logical volume we need to expand the Linux local filesystem before we can expand the virtual filesystem running on top of it. Assuming that we are using the current standard Ext3 this is very simple:
umount /dev/VolGroup00/lvvmware
e2fsck -f /dev/VolGroup00/lvvmware
resize2fs /dev/VolGroup00/lvvmware
mount /vmware/
Obviously for my purposes I use a /vmware directory structure for holding all of my disk images. You will need to adjust as needed for your own setup. /var/vmware is another common option.
Now we just enlarge the virtual disk itself. We will do this through the vmware-vdiskmanager command. You will need to execute this command on your .vmdk descriptor file and not on the -flat.vmdk data file, even though this seems counter-intuitive when looking at your directory structure.
vmware-vdiskmanager -x 22GB ph-w2k3-ad.vmdk
This concludes the easy part. Now you have plenty of logical disk space available for Windows but in order to expand the System drive of Windows you will need to use a third party tool. Windows Server 2003 is unable to make partition changes that affect the running system.
If you are like me, you will want to fire up your virtual machine just to make sure that everything is okay after the disk change, but you will need to turn it off again before we make changes to the partition table.
There are many tools that can be used for this task but I decided to use GParted, which is available as a live CD which you can download for free. For the version that I used, I just cd’d into /tmp and used this command to get my copy of GParted’s bootable CD ISO file.
wget 'http://downloads.sourceforge.net/gparted/gparted-live-0.3.9-4.iso?modtime=1222872844&big_mirror=0'
Using your VMWare Server Console (or through the command line) you will need to set your Windows Server image to boot from the GParted ISO which you just downloaded. Then go ahead and “start this virtual machine.”
You will likely need to hit “Esc” as soon as the virtual machine starts so that you can select to boot from CD. I keep my Virtual BIOS set to boot directly to the hard drive under normal circumstances because it is faster and I don’t want to accidentally boot to some CD media unless I really, really mean it.
Once GParted starts you will be given a boot menu. The default option works fine in most cases and worked fine for me. You will need to select your keyboard layout and then you will be taken to GParted’s graphical partition manager screen.
Once in the GParted Partition Manager you should see the current partition that you had before we started, in my case called /dev/sda1 and marked as being an NTFS file system. Mine also shows the “unallocated” partition space into which I will be expanding my /dev/sda1 partition.
Start by selecting the partition which you are seeking to resize (sda1 for me) and then select “Resize/Move”. This will open the Resize/Move window. Do not alter the first number – “Free Space Preceding” – as this is for “moving” your partition. You only want to alter the second number – “New Size.” If you are like me and have created some empty space specifically for this purpose then you will simply set this number to the “Maximum Size” displayed in the window. Then select “Resize/Move” to continue.
Once you have completed that step you can visually confirm that the disk now looks the way that you want it to look. If you look at the bottom of the window you will see that there is “1 operation pending.” If everything looks alright go ahead and click “Apply” to commit your changes and to resize your partition.
Once the resizing completes you are safe to reboot your virtual machine into Windows again. Double click the “Exit” button on the GParted desktop. Reboot should already be selected so just choose OK to continue.
When Windows starts it will detect the drive configuration change and force a disk consistency check. Allow it to run through this process and when it completes the system will restart automatically. Once Windows restarts you should see that your drive has been resized.
]]>(Warning, what is about to follow is anecdotal evidence as to the state of Vista from my own, limited first hand observations. But it could be worse, it could be second hand and out of context.)
My first attempt to work with Vista was on a dual-core AMD Turion X2 laptop. My hope was that with Vista it would finally make sense to run the operating system in 64-bit mode, as Windows XP Pro 64-bit had been a bit lackluster. Under Windows XP 64-bit, driver support had been extremely poor and I was unable to get much of anything to work, so all of my Windows XP machines ended up staying 32-bit while my Linux machines moved back and forth. On Linux almost everything worked great as 64-bit; only rarely would I get a driver issue or compatibility problem.
For the first week or so Windows Vista was incredibly slow. I decided that trying the 32-bit version of Vista (both had shipped with the laptop, thankfully) might be a good idea. So I performed a clean re-installation of Vista and started again.
Under Vista32 I noticed a significant increase in the overall speed and stability. The whole system seemed to hum right along now without the apparent slowness that I had had in 64-bit mode. Vista32 seems to work exceedingly well and starts and stops more reliably than my Windows XP machines have done in the past. The reliability of the shutdown process has been a major concern of mine from past Windows editions.
Because of the types of applications that I generally use on Windows (e.g. not video games, not entertainment applications, mostly serious business and management applications, only current versions, etc.) there were no compatibility issues in moving to Vista. Not a single application has failed to run and, I am told, that the only game that I actually would care about (Age of Empires 2 circa 1998) will run beautifully in Vista. I have a friend who has tested this on three separate Vista machines.
Few applications that are programmed “correctly” using Microsoft’s published standards and industry best practices have any issues moving to the Vista platform, in my experience. All of the complaints that I have heard about applications not working involve either video games – which seldom follow platform guidelines – ancient legacy applications, or small independent vendor applications that always fail to work between platforms because there are no updates, standards aren’t followed, etc. It happens. Every new operating system breaks a certain number of old applications but in many cases, most cases, this is simply a separating of the wheat from the chaff. It is good to shake up the market, point out the weaklings in the herd and thin it out a bit for everyone’s long term health. Think of it as software genetics in action.
For contextual reasons I should point out that I have been using client side “firewalls” – a term that I am loathe to use but has become somewhat of the norm – for a long time, first with Symantec and more recently with Microsoft’s Live OneCare – and am quite familiar and comfortable with the concept of unblocking ports for every new application that is installed or any changes that are made. I am also used to this through the use of AppArmor on SUSE Linux and SELinux on Red Hat Linux.
Already being used to this as a matter of course makes the transition to Vista’s security system almost transparent. I have heard numerous complaints about the barrage of security notifications popping up and asking if “this software should be allowed to install” or if “such and such a port should be allowed to open”, but if people had been diligent about using past operating systems this would be neither new nor a surprise. This type of checking is wonderful in the computer security nightmare world in which we live. Many people want this “feature” suppressed but these are often the same people asking for continuous help to fix their virus and Trojan horse riddled computers caused, not by malicious external attacks, but by bad computer management habits and behaviours.
Even as a technology professional who is constantly installing and uninstalling applications, doing testing, making changes, fiddling with the network, etc. the number of these security alerts is not quite annoying enough to push me past the point of appreciating the protection which it provides. A normal user, who should not be installing new software or making network changes on a daily basis, should see these messages mostly only during the initial setup of the workstation and then somewhat rarely when new software or updates are applied. If this security feature is becoming annoying due to its regularity one must carefully ask oneself if there isn’t a behavioural issue that should be addressed. It is true, some users need to do “dangerous” things on a regular basis to use their computer the way that they need to use it. But these people are extremely rare and can almost always manage these issues on their own (turning off the feature, for example.)
Some people have had issues with the speed of their Vista machines. All of the complaints that I have heard to date, however, come from people who have moved from Windows XP to Windows Vista on the same hardware. This is not a move that I would suggest. Yes, Vista is slower than XP and noticeably so, just as XP was somewhat slower than Windows 2000 (although not dramatically so; 2000 was slow enough that XP may not actually even be slower than it). Windows 2000 was dramatically slower than Windows NT 4 and required many times more system resources. The jump from the NT4 to the NT5 family was, by far, the biggest loss of performance that I have witnessed on these platforms. The move to Vista is minor.
The fact is that moving to newer, more feature rich operating systems almost necessitates that the new operating systems will be slower. Each new generation is larger than the generation before. Each new version is more graphics intense (not true with Windows 2008 Core – yay!) and has power-hungry “eye candy” that demands faster processors, more memory and now graphics offload engines. Users clamour for features and then complain when those features cause their operating systems to be larger and more bloated. You can’t have both. If you want a car with one hundred cubic feet of hauling capacity, the car absolutely must occupy more than one hundred cubic feet of space. Period. It’s math. End of discussion. This isn’t Doctor Who – the inside can’t be larger than the outside. And your operating system can’t have less code than the sum of its components.
If I have one major complaint about Windows Vista it is the extreme difficulty of locating the standard management tools within the operating system. Under previous editions of Windows one could go to the Control Panel and find the commonly used management tools in one convenient place. Now simply modifying a network setting – a fairly common task, and one that is impossible to research online at the moment one needs it most – is nearly impossible even for full time Windows desktop support professionals. The interface for this portion of the system is cryptic at best and nothing is named in such a manner as to denote what task could possibly be performed with it.
Altogether I am very pleased with Vista and the progress that has been made with it and I am looking forward to seeing the improvements that are expected to come with the first Service Pack that should be released very soon. Vista is a solid product and Microsoft should be proud of the work that they have done. The security has been much improved and I hope that Vista proliferates in the wild rapidly as this is likely to have a positive effect on the virus levels that we are currently seeing.
Caveat: More so than previous versions of Microsoft Windows, Vista is designed to be managed by a support professional and used by a “user”. Vista is somewhat less friendly, out of necessity, and the average user would be better served by simply allowing a knowledgeable professional to handle settings and changes. Vista pushes people towards a “managed home” environment more akin to a business environment.
This change, however, is not necessarily bad. As we have been seeing for many years, the security threats that come with regular access to the Internet are simply far too complex for the average computer user to understand, and with the number of computers in the hands of increasingly less sophisticated users the ability of viruses and other forms of malware to propagate has increased many fold. A computer user who does not properly protect himself or herself from threats is a threat not only to that user but to the entire Internet community.
In a business we do not expect non-technology professionals to regularly manage their own desktops and perhaps we should not expect this of home users. Computers are far more complex than a car, for example, and only advanced hobbyists or amateur mechanics would venture to do much more than change their own oil. Why then, when a computer can be managed and maintained completely remotely, would we not use the same model for our most complex of needs?
With some basic remote support to handle the occasional software install or configuration change, automated system updates and a pre-installed client side “firewall”, all that is truly needed is a good anti-virus package, and a normal home user could run their Windows Vista machine in a non-administrative mode for a long time with little outside help while enjoying an extreme level of protection. The loss of some flexibility would be minor compared to the great degree of safety and reliability that would be possible.
]]>In a recent New York Times article, “They Criticized Vista. And They Should Know”, author Randall Stross, professor at San Jose State University, uses skewed anecdotal evidence and out-of-context examples in a blatant attempt to bias the reader against Microsoft’s latest operating system, Windows Vista. Whether this has occurred simply because the author does not understand the material, because the New York Times has its own political agenda or because they have been paid to reverse-advertise by the competition I cannot say. But for some reason the “illustrious” New York Times is using its position as a media outlet to the detriment of honesty and to mislead readers who have been misled into paying for what proves to be little more than a tabloid.
Mr. Stross begins his article by presenting the issue of Vista’s slow adoption rate. He acts as though its adoption rate is unexpected or not appropriate for a new operating system. However, given Windows XP’s presence in business, longevity, stability and feature set it is not surprising or unexpected, in the least, that Vista – not having yet reached Service Pack 1 – would have a very slow adoption rate. Each new operating system generation has to contend with a lesser and lesser value proposition to people updating and it has been seven years since the last major round of Microsoft operating systems – almost an eternity in the IT industry.
Vista also has a new kernel architecture (the first of the Windows NT 6 family as opposed to the NT 5 family that we are used to with Windows 2000 – NT5, Windows XP – NT5.1 and Windows 2003 – NT5.2) and therefore has many hurdles to cross that have not been seen since the migration from Windows NT 4 to Windows 2000. Additionally, this is the first major NT to NT family kernel update to hit the consumer market. Earlier NT family updates happened almost entirely within businesses where these processes are better understood and preparations happen much, much earlier. This is the first major consumer level change since users were slowly migrated from the Windows 9x family (95, 98 and ME) to the NT family (2000, XP) which happened over a very long time period. Users should recall the large number of headaches that occurred during that transition as few applications were compatible across the chasm created by the new security paradigm.
Mr. Stross takes the approach that Microsoft needs to answer for the natural slow adoption of a new, somewhat disruptive technology, but this is ridiculous. Vista market penetration is expected to be slow within the industry and no one is wondering why it hasn’t appeared on everyone’s desktops or laptops yet. Vista technicians are still being trained, bugs are still being found, issues still being fixed, applications are still being tested and Service Pack 1 is still being readied. I have Vista at home but I am an early adopter. I don’t expect “normal” (read: non-IT professionals) to be seriously considering Vista updates themselves until later this year.
Our author then asks the question “Can someone tell me again, why is switching XP for Vista an ‘upgrade’?” Actually, Mr. Stross, in the IT world this is what is known as an “update”, not an “upgrade”. An update occurs when you move from an older version to a newer version of the same product. An upgrade occurs when you move to a higher level product.
Windows XP Home to Windows Vista Home Basic is an update. Windows Vista Home Basic to Windows Vista Home Premium is an upgrade. Windows XP Home to Windows Vista Home Premium is, in theory, both. Please do not mislead consumers by claiming that Windows Vista is an upgrade. It is not. Windows Vista is simply the latest Windows family product for consumer use.
If you have Windows XP and it is meeting your current needs why would you go the route of updating? I have no idea. I think that people need to answer that question before having unreasonable expectations of any new software product. Windows XP is still supported by Microsoft and will be for a very long time.
If I may make a quick comparison, moving from Windows XP Home to Windows Vista Home Basic is like moving from a 2002 BMW 325i to a 2007 BMW 325i. This is not an upgrade. It is simply an update. Just a newer version of the same thing. Sure, some things change between the versions but no one would consider this to be a higher class of car. If you want a higher class get yourself a 760i.
Mr. Stross goes on to regale us with horror stories of Vista updates gone wrong. In each of the cases what we see is a confused consumer who felt that, contrary to Microsoft’s recommendations and contrary to any industry practice, they could simply purchase any edition of Vista and expect any and every piece of software that they owned to work. This is not how Windows, or any other operating system, functions.
In the first example, Jon A. Shirley – former Chief Operating Officer and President and a current board member at Microsoft – updates two home computers and then discovers that the peripherals that he already owned did not yet have Vista drivers. Our author does not mention whether Mr. Shirley checked on the status of these drivers before purchasing Windows Vista, nor does he complain about the third party vendors who had not yet provided Vista drivers. It is implied in the article that it is Microsoft’s responsibility to provide third party drivers. It is not. Drivers are the responsibility of the hardware manufacturers. Hardware compatibility is the responsibility of the consumer. In neither case is Microsoft responsible for third party drivers. It may be in Microsoft’s best interest to encourage their development but they are not Microsoft’s responsibility.
In the next example we see Mike Nash – Vice President of Windows Product Management – who buys a Vista-capable laptop. This laptop would have been loaded with Windows XP but capable, as stated, of running at least Windows Vista Home Basic when it would become available. It is absolutely critical to keep in mind that Windows XP Home’s direct update (not upgrade) path is to Windows Vista Home Basic.
When Mr. Nash attempted to update his laptop to Vista we are told that he was only able to run a “hobbled” version. What does “hobbled” imply? We can only assume that it means that he can run Windows Vista Home Basic, exactly as we would expect. What has handily been done here is that one version of Windows Vista has been labelled “hobbled” and another “not-hobbled” even though consumers must pay for the features between the versions – an upgrade. Is a BMW 325i hobbled because the BMW 335i has a bigger engine but requires more fuel?
It is also mentioned that Mr. Nash is unable to run his favourite video editing software – Movie Maker. It is true that the edition of Movie Maker that ships with Windows Vista has rather high system requirements that may have kept Mr. Nash from running it. But Microsoft makes a freely downloadable version of Movie Maker for Windows Vista specifically for customers who run into this limitation, so this is not even a valid argument.
It is implied that Microsoft misled consumers by stating that the laptop was Vista-capable, yet we are never told that Windows Vista failed to install or to work properly. What is being done here is the application of unreasonable expectations on Microsoft. Microsoft stated extremely clearly, long before Windows Vista was released to the public, that there would be different editions and that many of the features had specific hardware requirements beyond the base requirements. The features in these higher-end editions were upgrade features not included in the basic Windows Vista distribution.
This raises the question: could Microsoft have done more to inform its customers of the Windows Vista requirements? Perhaps. But the answer is not as easy as it seems. As it was, these requirements were extremely well known and well publicized. The issue that we are dealing with is consumers, including some inside Microsoft, who did not check the well-publicized details and had unreasonable expectations in this situation – much like the often-heard story of someone purchasing a video game that requires an expensive high-end graphics card that the purchaser does not possess. That application has higher requirements than Windows Vista Home Basic, so why shouldn’t Windows Vista Ultimate Edition have higher requirements too?
It is unfortunate that so many consumers have difficulty understanding computers well enough to purchase them effectively. It is also unfortunate that many choose to ignore requirements that are clearly stated because checking them is too much effort. But in neither case can Microsoft be held to a higher standard than any other company in the same position. If a Linux-based desktop operating system were being purchased, the same problems would apply: some features would require a more powerful machine, and some are very complicated to set up.
A key issue here is that because these two pieces of anecdotal evidence come from high-ranking Microsoft insiders, we treat them as if they are more important than ordinary consumer issues. The fact is that these two Microsoft employees did not do the level of consumer diligence that I would expect of anyone buying something as expensive and complex as a new computer. Computers are complex, and wanting to “future proof” a purchase requires some careful forethought and planning.
We are also not seeing the whole picture. Perhaps Mr. Nash and Mr. Shirley were deliberately purchasing Vista without doing any research, precisely to see what problems the least diligent segment of customers would be likely to run into, and were using that information to let Microsoft attempt to fix those problems even though it was not Microsoft’s responsibility to do so. In that case Microsoft should be praised for being willing to put so much effort into fixing things that are not its problem simply because doing so makes for happier customers.
I am most unhappy with this article’s use of two pieces of out-of-context anecdotal evidence as the basis for the implication that Vista is not yet finished – calling it “supposedly finished” without any justification whatsoever. This is called “leading”. Clearly Windows Vista was finished, shipped and is used by many people, but the reader is now led to believe that it is not finished even though the author never actually states this. It is not the job of journalism to decide on a verdict and indicate to the reader the way in which they should think. While not strictly lying, the intent is to mislead, which makes the intent, in effect, to lie.
Even worse is the blatant falsehood that “PCs [were] mislabeled as being ready for Vista when they really were not”, which is completely and utterly untrue and reads as intentional defamation and libel. It is never shown that Windows Vista failed to run on any machine labeled as capable of running it. It is simply implied that some upgrades to higher editions of Windows Vista were not possible.
The article wraps up with a look at the timeline of the decision process behind labeling machines as Vista-capable. We can see that internally Microsoft was torn as to which direction to go but chose, in the end, to label all machines capable of running Windows Vista as being Vista-capable.
I understand that there are many reasons why Microsoft might have wanted to mislead consumers (for the consumers’ own good) into buying overpowered new hardware, feeding the coffers of its hardware partners, by labeling a machine Vista-capable only if it could run the high-end, expensive upgraded editions that would interest only more affluent or intensive users.
Nevertheless, Microsoft resisted misleading consumers. It labeled the computers honestly and accurately and did not use the Vista release as an opportunity to push hardware prices higher. Labeling them in any other way would actually have been misleading and of questionable intent.
At least less affluent consumers were not told to buy expensive computers only to find out that a much less expensive model would have sufficed to run Windows Vista! Microsoft would most definitely have been accused of misleading customers in that case. The customers for whom the price of a computer was hardest to manage were the ones protected the most.
The article ends by asking “where does Microsoft go to buy back its lost credibility?” But the real question is this: after so blatantly attacking Microsoft without merit, where do the New York Times and San Jose State University professor Randall Stross go to buy back their credibility?